Precise Communication Key to Adapting Technologies to Clinical Practice

Researchers evaluated the feasibility of using machine learning to automatically populate a review of systems with all symptoms discussed in a patient-physician conversation.

Clear and precise communication is necessary for correctly documenting symptoms when medical records are kept through speech recognition technology, according to a research letter published in JAMA Internal Medicine.

Researchers evaluated the feasibility of machine learning for automatically populating a review of systems (ROS) with all symptoms discussed in a clinical encounter. From a set of 90,000 human-transcribed, deidentified medical encounters, researchers randomly selected 2547 encounters from primary care and chosen medical specialties, which scribes labeled for 185 symptoms. The labeled transcripts were then randomly split into training (n=2091) and test (n=456) sets.
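As a rough illustration of the data preparation described above, the Python sketch below shuffles labeled transcripts and splits them into training and test sets of the reported sizes; the variable names, list structure, and seed are assumptions for illustration, not the authors' actual pipeline.

```python
import random

# Hypothetical list of 2547 labeled transcripts (placeholder objects).
labeled_transcripts = [f"transcript_{i}" for i in range(2547)]

# Shuffle, then split into the reported training (n=2091) and test (n=456) sets.
rng = random.Random(42)  # fixed seed so the sketch is reproducible
shuffled = labeled_transcripts[:]
rng.shuffle(shuffled)

train_set = shuffled[:2091]
test_set = shuffled[2091:]

assert len(train_set) == 2091 and len(test_set) == 456
```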

Each symptom mention was labeled for its relevance to the ROS and for whether the patient experienced the symptom. The input to the machine learning model was a sliding window of 5 conversation turns (a snippet), and the output was every symptom mentioned, its relevance, and whether the patient experienced it. Sensitivity and positive predictive value for identifying symptoms were assessed across the entire test set, and the accuracy of documentation was compared between clearly and unclearly mentioned symptoms.
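A minimal sketch of the sliding-window input described above, assuming the conversation is held as a list of speaker-tagged turns; the window size of 5 matches the study, but the data structure and function name are illustrative.

```python
from typing import List, Tuple

def sliding_windows(turns: List[Tuple[str, str]], size: int = 5):
    """Yield overlapping snippets of `size` consecutive conversation turns.

    Each turn is a (speaker, utterance) pair; each snippet would serve as
    one input to the symptom-extraction model.
    """
    for start in range(len(turns) - size + 1):
        yield turns[start:start + size]

# Example: a toy conversation of 7 turns yields 3 five-turn snippets.
conversation = [
    ("DR", "What brings you in today?"),
    ("PT", "I've had a headache for three days."),
    ("DR", "Any nausea with it?"),
    ("PT", "A little, yes."),
    ("DR", "Any vision changes?"),
    ("PT", "No, none."),
    ("DR", "Okay, let's take a look."),
]
for snippet in sliding_windows(conversation):
    print([utterance for _, utterance in snippet])
```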

Researchers selected 800 snippets from the test set, each containing at least 1 of 16 common symptoms, and asked 2 scribes to independently assess how likely they would be to include the initially labeled symptom in the ROS. Only mentions rated “extremely likely” to be reported were defined as “clearly mentioned” symptoms; all others were considered “unclear.”

The sensitivity of the model to identify symptoms across the full test set was 67.7% (5172/7637), and the positive predictive value of a predicted symptom was 80.6% (5172/6417). Only 48.4% of symptom mentions were clear (387/800), and for clearly mentioned symptoms, the sensitivity of the model was 92.2% (357/387). For unclear symptom mentions, the sensitivity was 67.8% (280/413).
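The reported percentages follow directly from the counts given in parentheses (sensitivity = correctly identified symptoms / all labeled symptoms; positive predictive value = correctly identified symptoms / all predicted symptoms). The short calculation below simply reproduces them.

```python
# Reproduce the reported metrics from the counts in the text.
sensitivity_overall = 5172 / 7637   # -> 67.7%
ppv_overall         = 5172 / 6417   # -> 80.6%
sensitivity_clear   = 357 / 387     # -> 92.2%
sensitivity_unclear = 280 / 413     # -> 67.8%

print(f"Overall sensitivity: {sensitivity_overall:.1%}")
print(f"Overall PPV:         {ppv_overall:.1%}")
print(f"Clear sensitivity:   {sensitivity_clear:.1%}")
print(f"Unclear sensitivity: {sensitivity_unclear:.1%}")
```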

Researchers reported that a key challenge was the substantial proportion of symptoms mentioned so vaguely that even human scribes did not agree on how to document them. The model performed well on clearly mentioned symptoms, but its performance declined markedly for unclearly mentioned ones. Further research is needed before machine learning can reliably assist clinicians with documenting the history of present illness.

Reference

Rajkomar A, Kannan A, Chen K, et al. Automatically charting symptoms from patient-physician conversations using machine learning [published online March 25, 2019]. JAMA Intern Med. doi:10.1001/jamainternmed.2018.8558

This article originally appeared on Medical Bag