Tuesday, October 13, 2015

Lit Bits: Oct 13, 2015

From the recent medical literature...

1. Evaluation of Patients with Suspected Acute PE: Best Practice Advice from the Clinical Guidelines Committee of the American College of Physicians

Raja AS, et al. Ann Intern Med. 2015 Sep 29 [Epub ahead of print]

Description: Pulmonary embolism (PE) can be a severe disease but is also difficult to diagnose, given its nonspecific signs and symptoms. Because of this, testing of patients with suspected acute PE has risen drastically. However, the overuse of some tests, particularly computed tomography (CT) and plasma d-dimer, may not improve care while potentially leading to patient harm and unnecessary expense.

Methods: The literature search encompassed studies indexed by MEDLINE (1966–2014; English-language only) and included all clinical trials and meta-analyses on diagnostic strategies, decision rules, laboratory tests, and imaging studies for the diagnosis of PE. This document is not based on a formal systematic review, but instead seeks to provide practical advice based on the best available evidence and recent guidelines. The target audience for this paper is all clinicians; the target patient population is all adults, both inpatient and outpatient, suspected of having acute PE.

Best Practice Advice 1: Clinicians should use validated clinical prediction rules to estimate pretest probability in patients in whom acute PE is being considered.

Best Practice Advice 2: Clinicians should not obtain d-dimer measurements or imaging studies in patients with a low pretest probability of PE who meet all Pulmonary Embolism Rule-Out Criteria.

Best Practice Advice 3: Clinicians should obtain a high-sensitivity d-dimer measurement as the initial diagnostic test in patients who have an intermediate pretest probability of PE or in patients with low pretest probability of PE who do not meet all Pulmonary Embolism Rule-Out Criteria. Clinicians should not use imaging studies as the initial test in patients who have a low or intermediate pretest probability of PE.

Best Practice Advice 4: Clinicians should use age-adjusted d-dimer thresholds (age × 10 ng/mL rather than a generic 500 ng/mL) in patients older than 50 years to determine whether imaging is warranted.

Best Practice Advice 5: Clinicians should not obtain any imaging studies in patients with a d-dimer level below the age-adjusted cutoff.

Best Practice Advice 6: Clinicians should obtain imaging with CT pulmonary angiography (CTPA) in patients with high pretest probability of PE. Clinicians should reserve ventilation–perfusion scans for patients who have a contraindication to CTPA or if CTPA is not available. Clinicians should not obtain a d-dimer measurement in patients with a high pretest probability of PE.


Vinson’s Simple-minded Summary: Risk stratify. Suggest Wells (though gestalt can work as well). A minimal code sketch of this logic follows the list below.
  • Low risk? PERC. If neg, stop. If pos, get age-adjusted DD. If neg, stop. If pos, image.
  • Moderate risk? Get age-adjusted DD. If neg, stop. If pos, image.
  • High risk? Skip DD. Image.
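For those who want the algorithm spelled out, here is a minimal Python sketch of this decision logic. It is illustrative only, not clinical software: the function names and inputs are our own, the risk tier and PERC result are assumed to come from validated rules (e.g., Wells and PERC) applied elsewhere, and D-dimer is assumed to be reported in ng/mL.

def d_dimer_cutoff_ng_ml(age_years):
    """Age-adjusted D-dimer cutoff (Advice 4): age x 10 ng/mL for patients
    older than 50, otherwise the generic 500 ng/mL threshold."""
    return age_years * 10.0 if age_years > 50 else 500.0

def next_step(risk, perc_negative=None, age_years=None, d_dimer_ng_ml=None):
    """Suggested next step in the suspected-PE workup (illustrative)."""
    if risk == "high":
        return "image with CTPA; skip D-dimer"            # Advice 6
    if risk == "low" and perc_negative:
        return "stop: no D-dimer, no imaging"             # Advice 2
    # Low-risk PERC-positive or moderate-risk patients: D-dimer first (Advice 3)
    if d_dimer_ng_ml is None:
        return "obtain high-sensitivity D-dimer"
    if d_dimer_ng_ml < d_dimer_cutoff_ng_ml(age_years):
        return "stop: no imaging"                         # Advice 5
    return "image with CTPA"

# Example: a 72-year-old at moderate risk with a D-dimer of 650 ng/mL falls
# below the age-adjusted cutoff of 720 ng/mL, so the workup stops here.
print(next_step("moderate", age_years=72, d_dimer_ng_ml=650.0))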

2. Intravascular Complications of Central Venous Catheterization by Insertion Site

Parienti J, et al, for the 3SITES Study Group. N Engl J Med 2015; 373:1220-1229

BACKGROUND: Three anatomical sites are commonly used to insert central venous catheters, but insertion at each site has the potential for major complications.

METHODS: In this multicenter trial, we randomly assigned nontunneled central venous catheterization in patients in the adult intensive care unit (ICU) to the subclavian, jugular, or femoral vein (in a 1:1:1 ratio if all three insertion sites were suitable [three-choice scheme] and in a 1:1 ratio if two sites were suitable [two-choice scheme]). The primary outcome measure was a composite of catheter-related bloodstream infection and symptomatic deep-vein thrombosis.

RESULTS: A total of 3471 catheters were inserted in 3027 patients. In the three-choice comparison, there were 8, 20, and 22 primary outcome events in the subclavian, jugular, and femoral groups, respectively (1.5, 3.6, and 4.6 per 1000 catheter-days; P=0.02). In pairwise comparisons, the risk of the primary outcome was significantly higher in the femoral group than in the subclavian group (hazard ratio, 3.5; 95% confidence interval [CI], 1.5 to 7.8; P=0.003) and in the jugular group than in the subclavian group (hazard ratio, 2.1; 95% CI, 1.0 to 4.3; P=0.04), whereas the risk in the femoral group was similar to that in the jugular group (hazard ratio, 1.3; 95% CI, 0.8 to 2.1; P=0.30). In the three-choice comparison, pneumothorax requiring chest-tube insertion occurred in association with 13 (1.5%) of the subclavian-vein insertions and 4 (0.5%) of the jugular-vein insertions.

CONCLUSIONS: In this trial, subclavian-vein catheterization was associated with a lower risk of bloodstream infection and symptomatic thrombosis and a higher risk of pneumothorax than jugular-vein or femoral-vein catheterization.

DRV note: Their rates of PTX at both the IJ and SCL sites are consistent with those seen in community practice: http://www.ncbi.nlm.nih.gov/pubmed/25455050

3. High-sensitivity cardiac troponin I at presentation in patients with suspected ACS: a cohort study

Shah A, et al, on behalf of the High-STEACS investigators. Lancet 2015 Oct 7 [Epub ahead of print]

Background: Suspected acute coronary syndrome is the commonest reason for emergency admission to hospital and is a large burden on health-care resources. Strategies to identify low-risk patients suitable for immediate discharge would have major benefits.

Methods: We did a prospective cohort study of 6304 consecutively enrolled patients with suspected acute coronary syndrome presenting to four secondary and tertiary care hospitals in Scotland. We measured plasma troponin concentrations at presentation using a high-sensitivity cardiac troponin I assay. In derivation and validation cohorts, we evaluated the negative predictive value of a range of troponin concentrations for the primary outcome of index myocardial infarction, or subsequent myocardial infarction or cardiac death at 30 days. This trial is registered with ClinicalTrials.gov (number NCT01852123).

Findings: 782 (16%) of 4870 patients in the derivation cohort had index myocardial infarction, with a further 32 (1%) re-presenting with myocardial infarction and 75 (2%) cardiac deaths at 30 days. In patients without myocardial infarction at presentation, troponin concentrations were less than 5 ng/L in 2311 (61%) of 3799 patients, with a negative predictive value of 99·6% (95% CI 99·3–99·8) for the primary outcome. The negative predictive value was consistent across groups stratified by age, sex, risk factors, and previous cardiovascular disease. In two independent validation cohorts, troponin concentrations were less than 5 ng/L in 594 (56%) of 1061 patients, with an overall negative predictive value of 99·4% (98·8–99·9). At 1 year, these patients had a lower risk of myocardial infarction and cardiac death than did those with a troponin concentration of 5 ng/L or more (0·6% vs 3·3%; adjusted hazard ratio 0·41, 95% CI 0·21–0·80; p less than 0·0001).

Interpretation: Low plasma troponin concentrations identify two-thirds of patients at very low risk of cardiac events who could be discharged from hospital. Implementation of this approach could substantially reduce hospital admissions and have major benefits for both patients and health-care providers.

Funding: British Heart Foundation and Chief Scientist Office (Scotland).
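The numeric heart of this strategy fits in a few lines. The Python sketch below is illustrative only and assumes an hs-cTnI result reported in ng/L; in the study the threshold was applied alongside full clinical assessment, not in isolation, and the names here are our own.

RULE_OUT_THRESHOLD_NG_L = 5.0  # presentation hs-cTnI threshold from the study

def very_low_risk(hs_ctni_ng_l):
    """True if the presentation troponin is below the study's rule-out
    threshold, which carried a ~99.6% negative predictive value for index
    MI or 30-day MI/cardiac death in the derivation cohort."""
    return hs_ctni_ng_l < RULE_OUT_THRESHOLD_NG_L

print(very_low_risk(3.0))   # True  -> potential early-discharge candidate
print(very_low_risk(16.0))  # False -> further observation and serial testing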


4. Derivation and Validation of Two Decision Instruments for Selective Chest CT in Blunt Trauma: A Multicenter Prospective Observational Study (NEXUS Chest CT)

Rodriguez RM, et al. PLoS Med. 2015 Oct 6;12(10):e1001883.

BACKGROUND: Unnecessary diagnostic imaging leads to higher costs, longer emergency department stays, and increased patient exposure to ionizing radiation. We sought to prospectively derive and validate two decision instruments (DIs) for selective chest computed tomography (CT) in adult blunt trauma patients.

METHODS AND FINDINGS: From September 2011 to May 2014, we prospectively enrolled blunt trauma patients over 14 y of age presenting to eight urban US level 1 trauma centers in this observational study. During the derivation phase, physicians recorded the presence or absence of 14 clinical criteria before viewing chest imaging results. We determined injury outcomes by CT radiology readings and categorized injuries as major or minor according to an expert-panel-derived clinical classification scheme. We then employed recursive partitioning to derive two DIs: Chest CT-All maximized sensitivity for all injuries, and Chest CT-Major maximized sensitivity for only major thoracic injuries (while increasing specificity). In the validation phase, we employed similar methodology to prospectively test the performance of both DIs.

We enrolled 11,477 patients: 6,002 patients in the derivation phase and 5,475 patients in the validation phase. The derived Chest CT-All DI consisted of (1) abnormal chest X-ray, (2) rapid deceleration mechanism, (3) distracting injury, (4) chest wall tenderness, (5) sternal tenderness, (6) thoracic spine tenderness, and (7) scapular tenderness. The Chest CT-Major DI had the same criteria without rapid deceleration mechanism. In the validation phase, Chest CT-All had a sensitivity of 99.2% (95% CI 95.4%-100%), a specificity of 20.8% (95% CI 19.2%-22.4%), and a negative predictive value (NPV) of 99.8% (95% CI 98.9%-100%) for major injury, and a sensitivity of 95.4% (95% CI 93.6%-96.9%), a specificity of 25.5% (95% CI 23.5%-27.5%), and an NPV of 93.9% (95% CI 91.5%-95.8%) for either major or minor injury. Chest CT-Major had a sensitivity of 99.2% (95% CI 95.4%-100%), a specificity of 31.7% (95% CI 29.9%-33.5%), and an NPV of 99.9% (95% CI 99.3%-100%) for major injury and a sensitivity of 90.7% (95% CI 88.3%-92.8%), a specificity of 37.9% (95% CI 35.8%-40.1%), and an NPV of 91.8% (95% CI 89.7%-93.6%) for either major or minor injury. Regarding the limitations of our work, some clinicians may disagree with our injury classification and sensitivity thresholds for injury detection.

CONCLUSIONS: We prospectively derived and validated two DIs (Chest CT-All and Chest CT-Major) that identify blunt trauma patients with clinically significant thoracic injuries with high sensitivity, allowing for a safe reduction of approximately 25%-37% of unnecessary chest CTs. Trauma evaluation protocols that incorporate these DIs may decrease unnecessary costs and radiation exposure in the disproportionately young trauma population.
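Because both instruments are one-way rules (CT may be deferred only when every criterion is absent), they reduce to an any() over boolean findings. The Python sketch below is illustrative; the criterion names are our shorthand, and real use would need the study's full criterion definitions.

# Criteria of the derived Chest CT-All decision instrument, per the abstract.
CHEST_CT_ALL = [
    "abnormal_chest_xray",
    "rapid_deceleration_mechanism",
    "distracting_injury",
    "chest_wall_tenderness",
    "sternal_tenderness",
    "thoracic_spine_tenderness",
    "scapular_tenderness",
]
# Chest CT-Major drops rapid deceleration mechanism but is otherwise identical.
CHEST_CT_MAJOR = [c for c in CHEST_CT_ALL if c != "rapid_deceleration_mechanism"]

def ct_indicated(findings, criteria):
    """An instrument is positive (CT cannot safely be deferred) if ANY
    criterion is present; CT may be deferred only when all are absent."""
    return any(findings.get(c, False) for c in criteria)

# Example: isolated sternal tenderness is positive under both instruments.
patient = {"sternal_tenderness": True}
print(ct_indicated(patient, CHEST_CT_ALL))    # True
print(ct_indicated(patient, CHEST_CT_MAJOR))  # True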


5. Irrigation of Cutaneous Abscesses Does Not Improve Treatment Success

Chinnock B, et al. Ann Emerg Med 2015 Sep 23 [Epub ahead of print]

Study objective: Irrigation of the cutaneous abscess cavity is often described as a standard part of incision and drainage despite no randomized, controlled studies showing benefit. Our goal is to determine whether irrigation of a cutaneous abscess during incision and drainage in the emergency department (ED) decreases the need for further intervention within 30 days compared with no irrigation.

Methods: We performed a single-center, prospective, randomized, nonblinded study of ED patients receiving an incision and drainage for cutaneous abscess, randomized to irrigation or no irrigation. Patient characteristics and postprocedure pain visual analog scale score were obtained. Thirty-day telephone follow-up was conducted with a standardized data form examining need for further intervention, which was defined as repeated incision and drainage, antibiotic change, or abscess-related hospital admission.

Results: Of 209 enrolled patients, 187 completed follow-up. The irrigation and no-irrigation groups were similar with respect to diabetes, immunocompromise, fever, abscess size, cellulitis, and abscess location, but the irrigation group was younger (mean age 36 versus 40 years) and more often treated with packing (89% versus 75%) and outpatient antibiotics (91% versus 73%). The need for further intervention was not different in the irrigation (15%) and no-irrigation (13%) groups (difference 2%; 95% confidence interval –8% to 12%). There was no difference in pain visual analog scale scores (5.6 versus 5.7; difference 0.1; 95% confidence interval –0.7 to 0.9).

Conclusion: Although there were baseline differences between groups, irrigation of the abscess cavity during incision and drainage did not decrease the need for further intervention.


6. Alignment of DNR Status with Patients’ Likelihood of Favorable Neurological Survival After In-Hospital Cardiac Arrest

Fendler TJ, et al. JAMA 2015; 314(12):1264-1271

Importance: After patients survive an in-hospital cardiac arrest, discussions should occur about prognosis and preferences for future resuscitative efforts.

Objective:  To assess whether patients’ decisions for do-not-resuscitate (DNR) orders after a successful resuscitation from in-hospital cardiac arrest are aligned with their expected prognosis.

Design, Setting, and Participants: Within Get With The Guidelines–Resuscitation, we identified 26 327 patients with return of spontaneous circulation (ROSC) after in-hospital cardiac arrest between April 2006 and September 2012 at 406 US hospitals. Using a previously validated prognostic tool, each patient’s likelihood of favorable neurological survival (ie, without severe neurological disability) was calculated. The proportion of patients with DNR orders within each prognosis score decile and the association between DNR status and actual favorable neurological survival were examined.

Exposures: Do-not-resuscitate orders within 12 hours of ROSC.

Main Outcomes and Measures: Likelihood of favorable neurological survival.

Results: Overall, 5944 (22.6% [95% CI, 22.1%-23.1%]) patients had DNR orders within 12 hours of ROSC. This group was older and had higher rates of comorbidities (all P less than .05) than patients without DNR orders. Among patients with the best prognosis (decile 1), 7.1% (95% CI, 6.1%-8.1%) had DNR orders even though their predicted rate of favorable neurological survival was 64.7% (95% CI, 62.8%-66.6%). Among patients with the worst expected prognosis (decile 10), 36.0% (95% CI, 34.2%-37.8%) had DNR orders even though their predicted rate for favorable neurological survival was 4.0% (95% CI, 3.3%-4.7%) (P for both trends less than .001). This pattern was similar when DNR orders were redefined as within 24 hours, 72 hours, and 5 days of ROSC. The actual rate of favorable neurological survival was higher for patients without DNR orders (30.5% [95% CI, 29.9%-31.1%]) than it was for those with DNR orders (1.8% [95% CI, 1.6%-2.0%]). This pattern of lower survival among patients with DNR orders was seen in every decile of expected prognosis.

Conclusions and Relevance: Although DNR orders after in-hospital cardiac arrest were generally aligned with patients’ likelihood of favorable neurological survival, only one-third of patients with the worst prognosis had DNR orders. Patients with DNR orders had lower survival than those without DNR orders, including those with the best prognosis.


7. External Validation of the STONE Score, a Clinical Prediction Rule for Ureteral Stone: An Observational Multi-institutional Study

Wang RC, et al. Ann Emerg Med. 2015 Oct 2 [Epub ahead of print]

Study objective: The STONE score is a clinical decision rule that classifies patients with suspected nephrolithiasis into low-, moderate-, and high-score groups, with corresponding probabilities of ureteral stone. We evaluate the STONE score in a multi-institutional cohort compared with physician gestalt and hypothesize that it has a sufficiently high specificity to allow clinicians to defer computed tomography (CT) scan in patients with suspected nephrolithiasis.

Methods: We assessed the STONE score with data from a randomized trial for participants with suspected nephrolithiasis who enrolled at 9 emergency departments between October 2011 and February 2013. In accordance with STONE predictors, we categorized participants into low-, moderate-, or high-score groups. We determined the performance of the STONE score and physician gestalt for ureteral stone.

Results: Eight hundred forty-five participants were included for analysis; 331 (39%) had a ureteral stone. The global performance of the STONE score was superior to physician gestalt (area under the receiver operating characteristic curve=0.78 [95% confidence interval {CI} 0.74 to 0.81] versus 0.68 [95% CI 0.64 to 0.71]). The prevalence of ureteral stone on CT scan ranged from 14% (95% CI 9% to 19%) to 73% (95% CI 67% to 78%) in the low-, moderate-, and high-score groups. The sensitivity and specificity of a high score were 53% (95% CI 48% to 59%) and 87% (95% CI 84% to 90%), respectively.

Conclusion: The STONE score can successfully aggregate patients into low-, medium-, and high-risk groups and predicts ureteral stone with a higher specificity than physician gestalt. However, in its present form, the STONE score lacks sufficient accuracy to allow clinicians to defer CT scan for suspected ureteral stone.


8. Nebulized Hypertonic Saline for Acute Bronchiolitis: A Systematic Review

Zhang L, et al. Pediatrics. 2015 Oct;136(4):687-701.

BACKGROUND AND OBJECTIVE: The mainstay of treatment for acute bronchiolitis remains supportive care. The objective of this study was to assess the efficacy and safety of nebulized hypertonic saline (HS) in infants with acute bronchiolitis.

METHODS: Data sources included PubMed and the Virtual Health Library of the Latin American and Caribbean Center on Health Sciences Information up to May 2015. Studies selected were randomized or quasi-randomized controlled trials comparing nebulized HS with 0.9% saline or standard treatment.

RESULTS: We included 24 trials involving 3209 patients, 1706 of whom received HS. Hospitalized patients treated with nebulized HS had a significantly shorter length of stay compared with those receiving 0.9% saline or standard care (15 trials involving 1956 patients; mean difference [MD] -0.45 days, 95% confidence interval [CI] -0.82 to -0.08). The HS group also had a significantly lower posttreatment clinical score in the first 3 days of admission (5 trials involving 404 inpatients; day 1: MD -0.99, 95% CI -1.48 to -0.50; day 2: MD -1.45, 95% CI -2.06 to -0.85; day 3: MD -1.44, 95% CI -1.78 to -1.11). Nebulized HS reduced the risk of hospitalization by 20% compared with 0.9% saline among outpatients (7 trials involving 951 patients; risk ratio 0.80, 95% CI 0.67-0.96). No significant adverse events related to HS inhalation were reported. The quality of evidence is moderate due to inconsistency in results between trials and study limitations (risk of bias).

CONCLUSIONS: Nebulized HS is a safe and potentially effective treatment of infants with acute bronchiolitis.

9. Does Management of DKA with Subcutaneous Rapid-acting Insulin Reduce the Need for ICU Admission?

Cohn BG, et al. J Emerg Med 2015;49:530-8.

BACKGROUND: In the last 20 years, rapid-acting insulin analogs have emerged on the market, including aspart and lispro, which may be efficacious in the management of diabetic ketoacidosis (DKA) when administered by non-intravenous (i.v.) routes.

CLINICAL QUESTION: In patients with mild-to-moderate DKA without another reason for intensive care unit (ICU) admission, is the administration of a subcutaneous (s.c.) rapid-acting insulin analog a safe and effective alternative to a continuous infusion of i.v. regular insulin, and would such a strategy eliminate the need for ICU admission?

EVIDENCE REVIEW: Five randomized controlled trials were identified and critically appraised.

RESULTS: The outcomes suggest that there is no difference in the duration of therapy required to resolve DKA with either strategy.

CONCLUSION: Current evidence supports DKA management with s.c. rapid-acting insulin analogs in a non-ICU setting in carefully selected patients.

10. Pigtail drainage for Hemothorax Works: In Pigs (of course; it’s a PIG-tail drain) and Humans

A. A pilot study of chest tube versus pigtail catheter drainage of acute hemothorax in swine.

Russo RM, et al. J Trauma Acute Care Surg. 2015 Aug 28. [Epub ahead of print]

BACKGROUND: Evacuation of traumatic hemothorax (HTx) is typically accomplished with large-bore (28-40 Fr) chest tubes, often resulting in patient discomfort. Management of HTx with smaller (14 Fr) pigtail catheters has not been widely adopted because of concerns about tube occlusion and blood evacuation rates. We compared pigtail catheters with chest tubes for the drainage of acute HTx in a swine model.

METHODS: Six Yorkshire cross-bred swine (44-54 kg) were anesthetized, instrumented, and mechanically ventilated. A 32 Fr chest tube was placed in one randomly assigned hemithorax; a 14 Fr pigtail catheter was placed in the other. Each was connected to a chest drainage system at -20 cm H2O suction and clamped. Over 15 minutes, 1,500 mL of arterial blood was withdrawn via femoral artery catheters. Seven hundred fifty milliliters of the withdrawn blood was instilled into each pleural space, and fluid resuscitation with colloid was initiated. The chest drains were then unclamped. Output from each drain was measured every minute for 5 minutes and then every 5 minutes for 40 minutes. The swine were euthanized, and thoracotomies were performed to quantify the volume of blood remaining in each pleural space and to examine the position of each tube.

RESULTS: Blood drainage was more rapid from the chest tube during the first 3 minutes compared with the pigtail catheter (348 ± 109 mL/min vs. 176 ± 53 mL/min), but this difference was not statistically significant (p = 0.19). Thereafter, the rates of drainage between the two tubes were not substantially different. The chest tube drained a higher total percentage of the blood from the chest (87.3% vs. 70.3%), but this difference did not reach statistical significance (p = 0.21).

CONCLUSION: We found no statistically significant difference in the volume of blood drained by a 14 Fr pigtail catheter compared with a 32 Fr chest tube.

B. 14 French pigtail catheters placed by surgeons to drain blood on trauma patients: is 14-Fr too small?

Kulvatunyou N, et al. J Trauma Acute Care Surg. 2012 Dec;73(6):1423-7.

BACKGROUND: Small 14F pigtail catheters (PCs) have been shown to drain air quite well in patients with traumatic pneumothorax (PTX). But their effectiveness in draining blood in patients with traumatic hemothorax (HTX) or hemopneumothorax (HPTX) is unknown. We hypothesized that 14F PCs can drain blood as well as large-bore 32F to 40F chest tubes. We herein report our early case series experience with PCs in the management of traumatic HTX and HPTX.

METHODS: We prospectively collected data on all bedside-inserted PCs in patients with traumatic HTX or HPTX during a 30-month period (July 2009 through December 2011) at our Level I trauma center. We then compared our PC prospective data with our trauma registry-derived retrospective chest tube data (January 2008 through December 2010) at our center. Our primary outcome of interest was the initial drainage output. Our secondary outcomes were tube duration, insertion-related complications, and failure rate. For our statistical analysis, we used the unpaired Student's t-test, χ² test, and Wilcoxon rank-sum test; we defined significance by a value of p less than 0.05.

RESULTS: A total of 36 patients received PCs, and 191 received chest tubes. Our PC group had a higher rate of blunt mechanism injuries than our chest tube group did (83 vs. 62%; p = 0.01). The mean initial output was similar between our PC group (560 ± 81 mL) and our chest tube group (426 ± 37 mL) (p = 0.13). In the PC group, the tube was inserted later (median, Day 1; interquartile range, Days 0-3) than the tube inserted in our chest tube group (median, Day 0; interquartile range, Days 0-0) (p less than 0.001). Tube duration, rate of insertion-related complications, and failure rate were all similar.

CONCLUSION: In our early experience, 14F PCs seemed to drain blood as well as large-bore chest tubes based on initial drainage output and other outcomes studied. In this early phase, we were being selective in inserting PCs in only stable blunt trauma patients, and PCs were inserted at a later day from the time of the initial evaluation. In the future, we will need a larger sample size and possibly a well-designed prospective study.

DRV note: These investigators have a RCT underway on this: https://clinicaltrials.gov/ct2/show/NCT02553434

11. Images in Clinical Practice

Oral Manifestation of Crohn’s Disease

Osler–Weber–Rendu Syndrome

An Acute Dystonic Reaction after Treatment with Metoclopramide

Pseudoaneurysm after Transradial Coronary Angiography

Oral Maxillary Exostosis

Gastric Ascaris Infection

12. Acetaminophen for Fever in Critically Ill Patients with Suspected Infection

Young P, et al. N Engl J Med 2015 Oct 5 [Epub ahead of print]

BACKGROUND
Acetaminophen is a common therapy for fever in patients in the intensive care unit (ICU) who have probable infection, but its effects are unknown.

METHODS
We randomly assigned 700 ICU patients with fever (body temperature, ≥38°C) and known or suspected infection to receive either 1 g of intravenous acetaminophen or placebo every 6 hours until ICU discharge, resolution of fever, cessation of antimicrobial therapy, or death. The primary outcome was ICU-free days (days alive and free from the need for intensive care) from randomization to day 28.

RESULTS
The number of ICU-free days to day 28 did not differ significantly between the acetaminophen group and the placebo group: 23 days (interquartile range, 13 to 25) among patients assigned to acetaminophen and 22 days (interquartile range, 12 to 25) among patients assigned to placebo (Hodges–Lehmann estimate of absolute difference, 0 days; 96.2% confidence interval [CI], 0 to 1; P=0.07). A total of 55 of 345 patients in the acetaminophen group (15.9%) and 57 of 344 patients in the placebo group (16.6%) had died by day 90 (relative risk, 0.96; 95% CI, 0.66 to 1.39; P=0.84).

CONCLUSIONS
Early administration of acetaminophen to treat fever due to probable infection did not affect the number of ICU-free days. (Funded by the Health Research Council of New Zealand and others; HEAT Australian New Zealand Clinical Trials Registry number …)


13. New Developments in Direct Oral Anticoagulant Safety and Indications

By Ryan Patrick Radecki, MD, MS, ACEP NOW. September 14, 2015

Much has changed in the realm of oral anticoagulation over the past few years. Beginning with the approval of dabigatran (Pradaxa), a direct thrombin inhibitor, patients received their first viable long-term alternative to warfarin. Subsequently, factor Xa inhibitors arrived on the market, initially featuring rivaroxaban (Xarelto) and apixaban (Eliquis). Now, edoxaban (Lixiana, Savaysa) has received FDA approval, and betrixaban is undergoing phase III trials. Experience with these classes of medications continues to increase, with several important new developments regarding safety and indications.

Dabigatran

The most striking recent developments involve dabigatran. Dabigatran was initially hailed as a tremendous advance over warfarin, with a simple dosing schedule and lacking the requirement for ongoing laboratory monitoring of anticoagulant effect. However, despite its initial appeal, it was soon discovered that many patients were knowingly placed at untoward risk.1 Legal proceedings against Boehringer Ingelheim in the United States resulted in a $650 million settlement to compensate patients harmed by dabigatran, and just as important, discovery revealed an array of documents describing the manufacturer’s internal analyses.

While dabigatran was marketed as not requiring ongoing monitoring, the documents indicate Boehringer was aware that plasma levels of the drug were, in fact, quite variable depending on individual physiologic and genetic features. The internal analyses described fivefold variability in plasma levels and suggested that dose adjustment guided by plasma-level monitoring could further reduce bleeding by 30 to 40 percent compared with warfarin. However, it was further determined that reporting the benefits of such testing “could result in a more complex message and a weaker value proposition.” Translation: maximizing profits trumps maximizing patient safety.

The importance of plasma level variability and monitoring is most evident when comparing the phase III trial populations with the target prescribing population. The pivotal RE-LY trial described the use of dabigatran in a population mostly younger than 75 years of age, but Boehringer’s marketing data indicate 30 percent of patients prescribed dabigatran are 80 years of age and older.2 The reduced renal excretion of the elderly results in supratherapeutic serum levels and unintended elevation of bleeding risk. Internal and regulatory approval documents reveal concerns regarding such risk that may have been specifically minimized by Boehringer representatives.

The bleeding risks associated with dabigatran have been of particular concern because, in contrast to warfarin or the factor Xa inhibitors, there has been no reliable pharmacologic reversal strategy; the only reported mechanism for reliable attenuation of its clinical effect has been hemodialysis. Now a reversal agent is on the horizon: an antibody fragment has reached phase III trials. Interim results from the RE-VERSE AD trial demonstrate rapid reduction of serum dabigatran levels following idarucizumab administration, although the pharmacologic effects seem durable only up to approximately 24 hours.3 Generally, this should be clinically adequate, however, as the half-life of dabigatran is typically 12 to 14 hours. The cost of such an antidote is likely to be quite high. Do you recall the more than $2,000 per vial price tag of CroFab? Frankly, the best strategy would simply be avoidance of dabigatran in the first place….


14. Modafinil does enhance cognition, review finds

Hawkes N. BMJ 2015;351:h4573

A drug developed to treat narcolepsy may be the world’s first safe “smart drug,” say researchers from Oxford and Harvard Universities who have carried out a systematic review of the evidence.1

Modafinil has had a long under-the-counter career among students and academics who claim that it can increase their mental sharpness. As long ago as 2008 the journal Nature invited its readers to own up if they had used cognition enhancing drugs, and one fifth said that they had. Modafinil had been used by 44% of these, who cited improving concentration as the most popular reason.2

The new review, published in European Neuropsychopharmacology,1 showed that they may have been right. Ruairidh Battleday and Anna-Katharine Brem found 24 randomised controlled trials published from 1990 to 2014 that measured the effects of modafinil on cognitive domains including planning and decision making, flexibility, learning and memory, and creativity. Previous studies have shown positive effects in sleep-deprived people, but the two scientists included only studies that looked at people with normal sleep patterns. They reported that the drug made no difference to working memory or flexibility of thought but that it did improve decision making and planning. No real effect on mood was found, and other side effects were slight, though a few people reported insomnia, headache, stomach ache, or nausea; patients on placebo reported the same side effects.

“So we ended up having two main conclusions,” said Brem. “First, that in the face of vanishingly few side effects in these controlled environments, modafinil can be considered a cognitive enhancer; and, second, that we need to figure out better ways of testing normal or even supra-normal cognition in a reliable manner. However, we would like to stress the point that, with any method used to enhance cognition, ethical considerations always have to be taken into account: this is an important avenue for future work to explore.”

Battleday commented, “This is the first overview of modafinil’s actions in non-sleep-deprived individuals since 2008, and it found that the type of test used to assess modafinil’s cognitive benefits has changed over the last few decades.

“In the past, people were using very basic tests of cognition, developed for neurologically impaired individuals. In contrast, more recent studies have, in general, used more complex tests: when these are used, it appears that modafinil more reliably enhances cognition: in particular, ‘higher’ brain functions that rely on contribution from multiple simple cognitive processes.”

The findings raised ethical issues, said Guy Goodwin, president of the European College of Neuropsychopharmacology, who was not involved in the study. “Previous ethical discussion of such agents has tended to assume extravagant effects before it was clear that there were any,” he said. “If correct, the present update means the ethical debate is real: how should we classify, condone or condemn a drug that improves human performance in the absence of pre-existing cognitive impairment?”

The remainder of the essay (subscription required): http://www.bmj.com/content/351/bmj.h4573


Medically Clear podcast: http://dballard30.podbean.com/

15. Defining safe criteria to diagnose miscarriage: prospective observational multicentre study

Preisler J, et al. BMJ. 2015 Sep 23;351:h4579.

OBJECTIVES: To validate recent guidance changes by establishing the performance of cut-off values for embryo crown-rump length and mean gestational sac diameter to diagnose miscarriage with high levels of certainty. Secondary aims were to examine the influence of gestational age on interpretation of mean gestational sac diameter and crown-rump length values, and to determine the optimal intervals between scans and the findings on repeat scans that definitively diagnose pregnancy failure.

DESIGN: Prospective multicentre observational trial.

SETTING: Seven hospital based early pregnancy assessment units in the United Kingdom.

PARTICIPANTS: 2845 women included if transvaginal ultrasonography showed an intrauterine pregnancy of uncertain viability. In three hospitals this was initially defined as an empty gestational sac less than 20 mm mean diameter with or without a visible yolk sac but no embryo, or an embryo with crown-rump length less than 6 mm with no heartbeat. Following amended guidance in December 2011 this definition changed to a gestational sac size less than 25 mm or embryo crown-rump length less than 7 mm. At one unit the definition was extended throughout to include a mean gestational sac diameter less than 30 mm or embryo crown-rump length less than 8 mm.

MAIN OUTCOME MEASURES: Mean gestational sac diameter, crown-rump length, and presence or absence of embryo heart activity at initial and repeat transvaginal ultrasonography around 7-14 days later. The final outcome was pregnancy viability at 11-14 weeks' gestation.

RESULTS: The following indicated a miscarriage at initial scan: mean gestational sac diameter ≥25 mm with an empty sac (364/364 specificity: 100%, 95% confidence interval 99.0% to 100%), embryo with crown-rump length ≥7 mm without visible embryo heart activity (110/110 specificity: 100%, 96.7% to 100%), mean gestational sac diameter ≥18 mm for gestational sacs without an embryo presenting after 70 days' gestation (907/907 specificity: 100%, 99.6% to 100%), embryo with crown-rump length ≥3 mm without visible heart activity presenting after 70 days' gestation (87/87 specificity: 100%, 95.8% to 100%). The following were indicative of miscarriage at a repeat scan: initial scan and repeat scan after seven days or more showing an embryo without visible heart activity (103/103 specificity: 100%, 96.5% to 100%), pregnancies without an embryo and mean gestational sac diameter less than 12 mm where the mean diameter has not doubled after 14 days or more (478/478 specificity: 100%, 99.2% to 100%), pregnancies without an embryo and mean gestational sac diameter ≥12 mm showing no embryo heartbeat after seven days or more (150/150 specificity: 100%, 97.6% to 100%).

CONCLUSIONS: Recently changed cut-off values of gestational sac and embryo size defining miscarriage are appropriate and not too conservative but do not take into account gestational age. Guidance on timing between scans and expected findings on repeat scans is still too liberal. Protocols for miscarriage diagnosis should be reviewed to account for this evidence to avoid misdiagnosis and the risk of terminating viable pregnancies.
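To make the thresholds concrete, here is a minimal Python sketch of the initial-scan criteria from the results above (the repeat-scan criteria are omitted). It is illustrative only, not clinical software; the parameter names and the encoding of an empty sac as crl_mm=None are our assumptions.

def miscarriage_on_initial_scan(msd_mm=None, crl_mm=None,
                                heart_activity=False, gestation_days=None):
    """True if any initial-scan criterion reported as 100% specific for
    miscarriage is met. msd_mm = mean gestational sac diameter;
    crl_mm = embryo crown-rump length (None if no embryo is visible)."""
    if crl_mm is None:  # no embryo visible
        if msd_mm is not None and msd_mm >= 25:
            return True  # empty sac with mean diameter >= 25 mm
        if (gestation_days is not None and gestation_days > 70
                and msd_mm is not None and msd_mm >= 18):
            return True  # empty sac >= 18 mm presenting after 70 days
        return False
    if heart_activity:
        return False
    if crl_mm >= 7:
        return True      # CRL >= 7 mm without visible heart activity
    if gestation_days is not None and gestation_days > 70 and crl_mm >= 3:
        return True      # CRL >= 3 mm, no heart activity, after 70 days
    return False

print(miscarriage_on_initial_scan(msd_mm=26))                    # True
print(miscarriage_on_initial_scan(crl_mm=5, gestation_days=60))  # False -> rescan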


16. Time to Get Off the Diagnosis Dime onto the 10th Revision of the International Classification of Diseases

Manaker S. Ann Intern Med 2015 Sep 8 [Epub ahead of print]

Physicians passionately care about making the right diagnosis for their patients. The right diagnosis leads to the proper advice and treatment. Diagnostic concordance among physicians allows for care coordination, patient aggregation into study groups for clinical trials, and assessments of population health. So development of a standardized nomenclature for diagnoses represents an essential step along the way. A standardized nomenclature would facilitate communication among treating physicians, improve homogeneity of clinical trial populations, and ease international assessment of health trends and disease burdens. First developed at the end of the 19th century and now maintained by the World Health Organization, the International Classification of Diseases (ICD) represented an important attempt to facilitate these goals (1).
 
The ICD was originally updated every decade or so until the past 40 years, when it was done less frequently. The current edition (ICD, 10th Revision, Clinical Modification [ICD-10]), completed in 1992, was subsequently implemented in various countries other than the United States (for example, Australia in 1998, Canada in 2001, and France in 2005), and development of the 11th edition is well under way (1). The ICD-10 has been successfully used around the globe, including in economically distressed, developing nations with poor infrastructure for delivering health care. Nations that adopted the ICD-10 had minimal disruption in diagnosis coding patterns, which are generally similar between the ICD-9 and ICD-10 (2).
 
The ICD-10 is an excellent nomenclature, containing almost 70 000 unique, well-organized, alphanumeric diagnosis codes and offering better specificity than the ninth revision, which was originally crafted in 1975 and contains fewer than 15 000 codes (1). Additional codes describe procedures, primarily used for hospital billing and reporting. With more options to accurately describe a patient's conditions, the ICD-10 easily accommodates expansion for new descriptions of diseases and disorders. In contrast, the ICD-9 is outdated—the National Center for Health Statistics last updated its code set in 2012 in anticipation of moving to the ICD-10.
 
Most (87%) of the individual ICD-10 codes correspond to a single ICD-9 code (3). Conversion choices from an ICD-9 diagnosis code to an ICD-10 one can be identified online or in a printed index, and generalized equivalency mappings allow easy conversion back and forth between both revisions (3). Many medical and other professional societies have provided resources for members to assist in the selection of ICD-10 diagnosis codes (4, 5), and most electronic health records allow clinicians simply to enter prose and respond by selecting from a list of diagnosis code choices. However, in the United States, proposals to move from the ninth to the 10th revision at the beginning of this century faltered.
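As a toy illustration of how those equivalency mappings work, the Python stub below converts a couple of ICD-9 codes to ICD-10 candidates. The two entries are well-known one-to-one examples; anything real would load the full CMS general equivalency mapping files, which also handle one-to-many conversions, rather than a hand-made table like this.

# Toy stub, not the CMS GEM files: a forward mapping from ICD-9-CM codes
# to candidate ICD-10-CM codes.
ICD9_TO_ICD10 = {
    "250.00": ["E11.9"],  # type 2 diabetes mellitus without complications
    "486":    ["J18.9"],  # pneumonia, unspecified organism
}

def convert_icd9(code):
    """Return candidate ICD-10 codes for an ICD-9 code, or an empty list
    when this stub table has no entry."""
    return ICD9_TO_ICD10.get(code, [])

print(convert_icd9("486"))     # ['J18.9']
print(convert_icd9("999.99"))  # [] -> not in this stub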
 
Legislative mandates for implementation every 2 to 3 years were cyclically married to subsequent congressional delays. The most recent delay will expire on 1 October 2015, and the ICD-10 will then become the legal standard for diagnosis coding in the United States (6). However, the subsequent delays over the preceding 15 years did not simply reflect mere obstinacy or the intellectual challenge of going from fewer than 15 000 codes to almost 70 000. The delays reflected recognition and fear of the implementation costs (7), because the United States is the only country that uses diagnosis coding for physician payment policies.
 
Physicians care about their patients and making the right diagnoses and should care about putting the right diagnoses on their claim forms for reimbursement. Although the 1988 Medicare Catastrophic Coverage Act required physicians to submit ICD-9 diagnosis codes for reimbursement beginning in 1989, virtually any sign, symptom, disease, or disorder still suffices for payment of a routine office, hospital, or even home visit. However, over time, a correct diagnosis on the claim form is becoming increasingly important for physician reimbursement.
 
Diagnosis codes for risk adjustments are no longer limited to research investigations attempting to ensure balance between treatment groups. Coverage policies increasingly restrict physician payment for diagnostic tests and therapeutic procedures to limited, clinically appropriate diagnoses. In anticipation of the change to the ICD-10, many policies already list diagnoses from both revisions (8). In addition, Medicare began applying a value-based modifier to the Physician Fee Schedule payments to large provider groups in 2015. The value-based modifier will be applied to every physician by 2017 and will use the diagnoses historically reported on claims in the year preceding creation of the fee schedule (9).
 
The per capita cost-measures component of the value-based modifier now being applied in 2015 to traditional fee-for-service payments to large provider groups is risk-adjusted, based on incorporating the diagnoses historically reported on claims from 2013 into the risk-adjustment methodology for hierarchical condition categories from the Centers for Medicare & Medicaid Services (9). This methodology also applies to federal payments to Medicare Advantage plans. Because those payments fund their physician fee schedules, Medicare Advantage plans often require participating physicians to report or attest to clinically appropriate, supplemental diagnoses that increase the plan payments or reward physicians for doing so.
 
In contrast to physicians, hospitals have long understood the importance of getting the diagnoses right from a payment perspective…

The remainder of the essay (subscription required): http://annals.org/article.aspx?articleid=2434622

17. Inpatient Treatment after Multi-Dose Racemic Epinephrine for Croup in the ED

Rudinsky SL, et al. J Emerg Med. 2015;49:408-14.

BACKGROUND: Emergency department (ED) discharge is safe when croup-related stridor has resolved after corticosteroids and a single dose of racemic epinephrine (RE). Little evidence supports the traditional practice of hospital admission after ≥ 2 doses of RE.

OBJECTIVE: Our aim was to describe the frequency and timing of clinically important inpatient interventions after ≥ 2 ED RE doses.

METHODS: We identified patients hospitalized for croup after ED treatment with corticosteroids and ≥2 doses of RE. We compared asymptomatic (admitted solely on the number of RE doses) and symptomatic (admitted due to disease severity) groups with regard to inpatient RE administration, supplemental oxygen, helium-oxygen mixture (heliox) therapy, intubation, or transfer to a higher level of care, time to hospital discharge, and revisit and readmission rates within 48 h of discharge.

RESULTS: Of 200 subjects admitted after ≥ 2 ED RE doses, 72 (36%) received clinically important inpatient interventions: RE (n = 68 [34%]), heliox (n = 9 [5%]), and supplemental oxygen (n = 4 [2%]). Of patients who received inpatient RE, 53% received only 1 dose. No patients underwent intubation or transfer to higher level of care. The 112 asymptomatic patients had fewer interventions (14% vs. 63%; p less than 0.001) and shorter hospital durations (14.5 vs. 22 h; p less than 0.001). Only 14% of the asymptomatic group received RE, with 75% receiving a single dose. There were no differences in revisit and readmission rates.

CONCLUSIONS: Inpatient interventions after ≥ 2 ED doses of RE for croup were infrequent, most commonly RE administration. Most patients asymptomatic upon admission require 0-1 inpatient RE doses and may be candidates for outpatient management.

18. Pledging to Eliminate Low-Volume Surgery

Urbach DR. N Engl J Med 2015; 373:1388-1390

On May 18, 2015, leaders at three hospital systems — Dartmouth–Hitchcock Medical Center, the Johns Hopkins Hospital and Health System, and the University of Michigan Health System — publicly announced a “Take the Volume Pledge” campaign to prevent certain surgical procedures from being performed by their surgeons who perform relatively few of them or at their hospitals where relatively few such procedures are performed. The Pledge, promoted by long-time advocates of quality improvement such as John Birkmeyer and Peter Pronovost, challenges other large health systems to join them in restricting the performance of 10 surgical procedures — including gastrointestinal, cardiovascular, and joint-replacement surgeries — to hospitals and surgeons who perform more than a minimum number. The annual volume thresholds range from 10 per hospital and 5 per surgeon for carotid stenting to 50 per hospital and 25 per surgeon for hip and knee replacement.

The reaction of surgeons to the Pledge, which was widely promoted in U.S. News & World Report as part of its Best Hospitals for Common Care rankings, was predictably hostile — and completely out of proportion to the modest ambition of the Pledge. Of all the possible approaches to restricting surgical care to high-volume hospitals, perhaps the least controversial ought to be a decision by a large metropolitan academic hospital system that its most complex elective surgery should be performed by the providers and hospitals that do the most of a given procedure. (The Pledge is silent on the question of performing complex surgery in independent small and rural hospitals.) If volume-based distribution of surgery cannot be accomplished in this context, then it's probably not going to happen anywhere.

Proponents of the Pledge have presumably calculated that starting with such an easily achievable policy could be the thin end of the wedge for broader efforts to centralize complex surgery. Nevertheless, a discussion board of the American College of Surgeons quickly filled with dozens of postings, almost all bristling at the idea of external organizations imposing volume standards on surgery, arguing instead for quality-based standards, and taking particular offense at the portrayal of low-volume surgeons as hobbyists who are motivated by professional autonomy and pride to continue performing rare procedures despite the clinical and economic consequences.1

For anyone wondering why we are still discussing surgical volume 36 years after Harold Luft pointed out the relationship between higher surgical volume and lower postoperative mortality,2 and why even limited initiatives such as the Pledge elicit such controversy, it is useful to reflect on how we got here and what it means for the prospects of quality improvement in surgery.

There is no doubt that the outcomes of elective surgery — as measured by postoperative mortality, complications, or a wide array of other measures — are better when an operation is done by a surgeon or in a hospital with a high procedure volume. The volume–outcome effect is a remarkably consistent finding in studies of surgical services, and it applies not only to surgery, but also to many types of nonsurgical hospital-based care, such as treatment of congestive heart failure and chronic obstructive pulmonary disease, obstetrical care, trauma care, and intensive care. The mechanism underlying the volume–outcome effect has considerable importance for developing health policy to improve surgical care. If selective referral of patients to providers with excellent outcomes explains the effect, then it is important to identify and promote the best providers to help consumers choose where to go in the health care marketplace. If, on the other hand, outcomes improve because hospitals and surgeons gain expertise with incremental experience through a “practice makes perfect” mechanism, then the focus should be on dissemination of best practices and quality improvement…

The remainder of the essay (free): http://www.nejm.org/doi/full/10.1056/NEJMp1508472

19. Recognition and Management of Sepsis in Children: Practice Patterns in the ED

Thompson GC, et al. J Emerg Med 2015;49:391-9.

Background
Pediatric sepsis remains a leading cause of morbidity and mortality. Understanding current practice patterns and challenges is essential to inform future research and education strategies.

Objective
Our aim was to describe the practice patterns of pediatric emergency physicians (PEPs) in the recognition and management of sepsis in children and to identify perceived priorities for future research and education.

Methods
We conducted a cross-sectional, internet-based survey of members of the American Academy of Pediatrics, Section on Emergency Medicine and Pediatric Emergency Research Canada. The survey was internally derived, externally validated, and distributed using a modified Dillman methodology. Rank scores (RS) were calculated for responses using Likert-assigned frequency values.

Results
Tachycardia, mental-status changes, and abnormal temperature (RS = 83.7, 80.6, and 79.6) were the highest ranked clinical measures for diagnosing sepsis; white blood cell count, lactate, and band count (RS = 73.5, 70.9, and 69.1) were the highest ranked laboratory investigations. The resuscitation fluid of choice (85.5%) was normal saline. Dopamine was the first-line vasoactive medication (VAM) for cold (57.1%) and warm (42.2%) shock with epinephrine (18.5%) and norepinephrine (25.1%) as second-line VAMs (cold and warm, respectively). Steroid administration increased with complexity of presentation (all-comers 3.8%, VAM-resistant shock 54.5%, chronic steroid users 72.0%). Local ED-specific clinical pathways, national emergency department (ED)-specific guidelines, and identification of clinical biomarkers were described as future priorities.

Conclusions
While practice variability exists among clinicians, PEPs continue to rely heavily on clinical metrics for recognizing sepsis. Improved recognition through clinical biomarkers and standardization of care were perceived as priorities. Our results provide a strong framework to guide future research and education strategies in pediatric sepsis.

20. ED Visits and Overdose Deaths from Combined Use of Opioids and Benzodiazepines.

Jones CM, et al. Am J Prev Med. 2015 Oct;49(4):493-501.

INTRODUCTION: Opioid analgesics and benzodiazepines are the prescription drugs most commonly associated with drug overdose deaths. This study was conducted to assess trends in nonmedical use-related emergency department (ED) visits and drug overdose deaths that involved both opioid analgesics and benzodiazepines in the U.S. from 2004 to 2011.

METHODS: Opioid analgesic and benzodiazepine nonmedical use-related ED visits from the Drug Abuse Warning Network and drug overdose deaths from the National Vital Statistics System were analyzed for 2004-2011 to determine trends and demographic-specific rates. Data were analyzed from March 2014 to June 2014.

RESULTS: From 2004 to 2011, the rate of nonmedical use-related ED visits involving both opioid analgesics and benzodiazepines increased from 11.0 to 34.2 per 100,000 population (p-trend less than 0.0001). During the same period, drug overdose deaths involving both drugs increased from 0.6 to 1.7 per 100,000 (p-trend less than 0.0001). Statistically significant increases in ED visits occurred among males and females, non-Hispanic whites, non-Hispanic blacks, and Hispanics, and all age groups except 12- to 17-year-olds. For overdose deaths, statistically significant increases were seen in males and females, all three race/ethnicity groups, and all age groups except 12- to 17-year-olds. Benzodiazepine involvement in opioid analgesic overdose deaths increased each year, increasing from 18% of opioid analgesic overdose deaths in 2004 to 31% in 2011 (p-trend less than 0.0001).

CONCLUSIONS: ED visits and drug overdose deaths involving both opioid analgesics and benzodiazepines increased significantly between 2004 and 2011. Interventions to improve the appropriate prescribing and use of these medications are needed.


21. Outcomes of Basic Versus Advanced Life Support for Out-of-Hospital Medical Emergencies

Sanghavi P, et al. Ann Intern Med 2015 Oct 13 [Epub ahead of print]

Background: Most Medicare patients seeking emergency medical transport are treated by ambulance providers trained in advanced life support (ALS). Evidence supporting the superiority of ALS over basic life support (BLS) is limited, but some studies suggest ALS may harm patients.

Objective: To compare outcomes after ALS and BLS in out-of-hospital medical emergencies.

Design: Observational study with adjustment for propensity score weights and instrumental variable analyses based on county-level variations in ALS use.

Setting: Traditional Medicare.

Patients: 20% random sample of Medicare beneficiaries from nonrural counties between 2006 and 2011 with major trauma, stroke, acute myocardial infarction (AMI), or respiratory failure.

Measurements: Neurologic functioning and survival to 30 days, 90 days, 1 year, and 2 years.

Results: Except in cases of AMI, patients showed superior unadjusted outcomes with BLS despite being older and having more comorbidities. In propensity score analyses, survival to 90 days among patients with trauma, stroke, and respiratory failure was higher with BLS than ALS (6.1 percentage points [95% CI, 5.4 to 6.8 percentage points] for trauma; 7.0 percentage points [CI, 6.2 to 7.7 percentage points] for stroke; and 3.7 percentage points [CI, 2.5 to 4.8 percentage points] for respiratory failure). Patients with AMI did not exhibit differences in survival at 30 days but had better survival at 90 days with ALS (1.0 percentage point [CI, 0.1 to 1.9 percentage points]). Neurologic functioning favored BLS for all diagnoses. Results from instrumental variable analyses were broadly consistent with propensity score analyses for trauma and stroke, showed no survival differences between BLS and ALS for respiratory failure, and showed better survival at all time points with BLS than ALS for patients with AMI.

Limitation: Only Medicare beneficiaries from nonrural counties were studied.

Conclusion: Advanced life support is associated with substantially higher mortality than basic life support for several acute medical emergencies.

Primary Funding Source: National Science Foundation, Agency for Healthcare Research and Quality, and National Institutes of Health.

See associated editorial: Is Prehospital Advanced Life Support Harmful? http://annals.org/article.aspx?articleid=2456126 (subscription required)

22. On The Prevalence of Diagnostic Error

A. Urgent change needed to improve diagnosis in health care or diagnostic errors will likely worsen

Most people will experience at least one diagnostic error (an inaccurate or delayed diagnosis) in their lifetime, sometimes with devastating consequences, says a new report from the Institute of Medicine of the National Academies of Sciences, Engineering, and Medicine. The committee that conducted the study and wrote the report found that although getting the right diagnosis is a key aspect of health care, efforts to improve diagnosis and reduce diagnostic errors have been quite limited.

Improving diagnosis is a complex challenge, partly because making a diagnosis is a collaborative and inherently inexact process that may unfold over time and across different health care settings. To improve diagnosis and reduce errors, the committee called for more effective teamwork among health care professionals, patients, and families; enhanced training for health care professionals; more emphasis on identifying and learning from diagnostic errors and near misses in clinical practice; a payment and care delivery environment that supports the diagnostic process; and a dedicated focus on new research.

This report is a continuation of the Institute of Medicine's Quality Chasm Series, which includes reports such as To Err Is Human: Building a Safer Health System, Crossing the Quality Chasm: A New Health System for the 21st Century, and Preventing Medication Errors.

"These landmark IOM reports reverberated throughout the health care community and were the impetus for system-wide improvements in patient safety and quality care," said Victor J. Dzau, president of the National Academy of Medicine. "But this latest report is a serious wake-up call that we still have a long way to go. Diagnostic errors are a significant contributor to patient harm that has received far too little attention until now. I am confident that Improving Diagnosis in Health Care, like the earlier reports in the IOM series, will have a profound effect not only on the way our health care system operates but also on the lives of patients."

Data on diagnostic errors are sparse, few reliable measures exist, and errors are often found in retrospect, the committee found. However, from the available evidence, the committee determined that diagnostic errors stem from a wide variety of causes that include inadequate collaboration and communication among clinicians, patients, and their families; a health care work system ill-designed to support the diagnostic process; limited feedback to clinicians about the accuracy of diagnoses; and a culture that discourages transparency and disclosure of diagnostic errors, which impedes attempts to learn and improve. Errors will likely worsen as the delivery of health care and the diagnostic process continue to increase in complexity, the committee concluded. To improve diagnosis, a significant re-envisioning of the diagnostic process and a widespread commitment to change from a variety of stakeholders will be required…



B. Reducing Diagnostic Errors — Why Now?

Khullar D, et al. N Engl J Med 2015 September 23 [Epub ahead of print]

Diagnostic errors are thought to be a substantial source of avoidable illness and death in the United States. Although diagnosis has always been central to the practice of medicine and diagnostic errors have always been prevalent, systematic efforts to measure these errors and analyze their underpinnings have been limited, as compared with other quality- and safety-improvement efforts.1,2 Several reasons have been suggested for this relative lack of attention, including a lack of understanding of decision-making biases, cultural attitudes discouraging discussion of misdiagnosis, the difficulty of defining and identifying such errors, assumptions about the impracticality of potential process or outcome measures of diagnostic quality, and the belief that diagnostic errors are less amenable than other types of medical errors to systems-level solutions.2

But we would argue that diagnostic errors are clinically and financially more costly today than ever before and that they therefore require greater attention and more dedicated resources. In the past, the health care system had less capacity — and perhaps less need — to address this problem. More limited treatment options for many conditions meant less likelihood of iatrogenic harm from inappropriate interventions and less potential for lost clinical benefit from appropriate ones. The tools available for tracking and preventing diagnostic errors, such as health information technology (HIT), were less sophisticated. And there was minimal external pressure from payers to study and tackle the issue.

As treatment options have become more effective and costly, the clinical and financial costs of misdiagnosing a readily treatable condition are substantially greater. Advances in HIT and big data offer new instruments for measuring and reducing diagnostic errors. And pay-for-performance metrics and risk-based contracts have created an economic environment in which accurate, timely diagnosis can be rewarded. In short, there is now more we can do to reduce diagnostic errors, and the clinical and financial value of doing so is greater. It thus makes sense to place greater emphasis on reducing these errors — as organizations such as the Institute of Medicine (now the National Academy of Medicine), which has just released a report on the topic (http://nas.edu/improvingdiagnosis), are beginning to do. (Drs. Jha and Jena served on the Institute of Medicine committee.)

With health care costing more than ever before, and missed or delayed diagnoses often resulting in higher downstream costs for treating more advanced disease, the financial implications of misdiagnosis can be substantial…

Advances in the management of acute myocardial infarction illustrate this point. Before the introduction of coronary care units, inpatient mortality among patients with acute myocardial infarction exceeded 30%. The development of coronary care units and sequential advances in fibrinolysis, percutaneous coronary interventions, and dual antiplatelet therapy have reduced inpatient mortality to nearly 5%, and patients who survive have greater cardiac reserve and higher quality of life than those who survived in the past.3 Thus, a failure to quickly and accurately diagnose acute myocardial infarction today has far greater implications for a patient's immediate and long-term health. Similarly, failing to accurately diagnose pulmonary embolism and stroke — two commonly misdiagnosed conditions — has greater health consequences for patients today simply because better treatments exist…


23. Bits of Wisdom from the School of Life

A. Finding a Mission

Missions are things we tend to associate with astronauts.

If someone at a party asked you what you did and you said you were on a mission of some kind, people would look at you sort of strangely. But in truth, we’d all benefit from focusing on the purposes of our lives so that we could refer to them as missions of one kind or another.

When the entrepreneur Elon Musk was at university, he asked himself very explicitly what his mission in life would be. He began by wondering what the world most urgently needed; he then looked into himself to see where his talents lay. That led him to a list of four possible missions: space exploration, electric transportation, artificial intelligence, and rewriting the human genome. In the end, Musk chose the first two.

Few of us will ever settle on missions as mighty as these, but the notion of having a mission, rather than a mere job or hobby, remains widely applicable. How, then, can we learn to adopt the mission mindset?...


B. How to Serve

Understanding how to serve customers well is a major factor in the success of corporations, and service has a big role outside work too. It’s one of the many ways in which getting better in business overlaps with getting better at life in general. Service means helping others to thrive, a goal as old as humanity itself.

We’re all keen on being served well. But getting good at serving is tricky. It’s a problem that deserves a great deal of sympathy. We’re one of the first generations in history to be grappling with the delivery of effective service on a mass scale. There are so many ways service can go wrong.

One: The fear that it’s humiliating to serve…

Two: The belief that the customer is awful…

Three: The failure to recognise customer anxiety…

Four: Introversion and gregarious service…

Five: Over-commitment to ‘the rules’…

Six: People haven’t served you enough…

Seven: Expectations are violated…

Eight: Trouble with details…

Nine: The art of apology is neglected…

Ten: We don’t understand what’s good about good service…


24. Micro Bits

A. Interactive Health Tracking May Help in Hypertension: a 'gamified' system led to clinically relevant reductions for many participants

WASHINGTON -- Routine use of an interactive website or app that tracked health data and incentivized regular exercise and other healthy behaviors -- structured as a game -- was associated with significantly lower blood pressure among hypertensive participants in a 3-year study.


B. JAMA Study Finds Half of U.S. Adults Have Diabetes or Prediabetes

Sept. 18, 2015 — A recent study in JAMA: The Journal of the American Medical Association found that in 2012, about half of American adults had diabetes or prediabetes. In addition, more than one-third of those who met the study's criteria for diabetes were unaware they had the disease.


C. Content, Consistency, and Quality of Black Box Warnings: Time for a Change


D. Study: Many family physicians work in urban hospital ERs

A report in American Family Physician found that family physicians play a bigger-than-expected role in urban emergency care, submitting almost 12% of the 15 million urban emergency department claims in 2012. Researchers from the Robert Graham Center for Policy Studies in Family Medicine and Primary Care found that, in addition to treating ER patients with relatively simple problems, family physicians in the ER also care for patients with strokes, heart attacks, and fractures.