Monday, October 31, 2016

Lit Bits: Oct 31, 2016

From the recent medical literature...

1. High Prevalence of PE among Patients Hospitalized for Syncope: Really?

Prandoni P, et al. for the PESIT Investigators. N Engl J Med 2016; 375:1524-1531

The prevalence of pulmonary embolism among patients hospitalized for syncope is not well documented, and current guidelines pay little attention to a diagnostic workup for pulmonary embolism in these patients.

We performed a systematic workup for pulmonary embolism in patients admitted to 11 hospitals in Italy for a first episode of syncope, regardless of whether there were alternative explanations for the syncope. The diagnosis of pulmonary embolism was ruled out in patients who had a low pretest clinical probability, which was defined according to the Wells score, in combination with a negative d-dimer assay. In all other patients, computed tomographic pulmonary angiography or ventilation–perfusion lung scanning was performed.

A total of 560 patients (mean age, 76 years) were included in the study. A diagnosis of pulmonary embolism was ruled out in 330 of the 560 patients (58.9%) on the basis of the combination of a low pretest clinical probability of pulmonary embolism and negative d-dimer assay. Among the remaining 230 patients, pulmonary embolism was identified in 97 (42.2%). In the entire cohort, the prevalence of pulmonary embolism was 17.3% (95% confidence interval, 14.2 to 20.5). Evidence of an embolus in a main pulmonary or lobar artery or evidence of perfusion defects larger than 25% of the total area of both lungs was found in 61 patients. Pulmonary embolism was identified in 45 of the 355 patients (12.7%) who had an alternative explanation for syncope and in 52 of the 205 patients (25.4%) who did not.
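As a quick arithmetic check, the reported 95% confidence interval for the overall prevalence can be reproduced from the abstract's own counts (97 of 560). This is a sketch using the simple normal approximation, not necessarily the authors' exact method:

```python
import math

# Cohort numbers reported in the PESIT abstract
pe_cases = 97   # patients with confirmed pulmonary embolism
cohort = 560    # all patients hospitalized for a first episode of syncope

p = pe_cases / cohort                    # point prevalence
se = math.sqrt(p * (1 - p) / cohort)     # standard error (normal approximation)
lo, hi = p - 1.96 * se, p + 1.96 * se    # 95% confidence interval

print(f"prevalence {p:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
# prevalence 17.3%, 95% CI 14.2% to 20.5%
```

The result matches the interval quoted in the abstract (14.2 to 20.5), which suggests the authors used a standard Wald-type interval.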

Pulmonary embolism was identified in nearly one of every six patients hospitalized for a first episode of syncope.

For a critical review, see Ryan Radecki’s post here:

2. New Guidelines: RBC Transfusion Thresholds and Storage

Carson JL, et al. JAMA. 2016 Oct 12 [Epub ahead of print].

Importance  More than 100 million units of blood are collected worldwide each year, yet the indication for red blood cell (RBC) transfusion and the optimal length of RBC storage prior to transfusion are uncertain.

Objective  To provide recommendations for the target hemoglobin level for RBC transfusion among hospitalized adult patients who are hemodynamically stable and the length of time RBCs should be stored prior to transfusion.

Evidence Review  Reference librarians conducted a literature search for randomized clinical trials (RCTs) evaluating hemoglobin thresholds for RBC transfusion (1950-May 2016) and RBC storage duration (1948-May 2016) without language restrictions. The results were summarized using the Grading of Recommendations Assessment, Development and Evaluation method. For RBC transfusion thresholds, 31 RCTs included 12 587 participants and compared restrictive thresholds (transfusion not indicated until the hemoglobin level is 7-8 g/dL) with liberal thresholds (transfusion not indicated until the hemoglobin level is 9-10 g/dL). The summary estimates across trials demonstrated that restrictive RBC transfusion thresholds were not associated with higher rates of adverse clinical outcomes, including 30-day mortality, myocardial infarction, cerebrovascular accident, rebleeding, pneumonia, or thromboembolism. For RBC storage duration, 13 RCTs included 5515 participants randomly allocated to receive fresher blood or standard-issue blood. These RCTs demonstrated that fresher blood did not improve clinical outcomes.

Findings  It is good practice to consider the hemoglobin level, the overall clinical context, patient preferences, and alternative therapies when making transfusion decisions regarding an individual patient. Recommendation 1: a restrictive RBC transfusion threshold in which the transfusion is not indicated until the hemoglobin level is 7 g/dL is recommended for hospitalized adult patients who are hemodynamically stable, including critically ill patients, rather than when the hemoglobin level is 10 g/dL (strong recommendation, moderate quality evidence). A restrictive RBC transfusion threshold of 8 g/dL is recommended for patients undergoing orthopedic surgery, cardiac surgery, and those with preexisting cardiovascular disease (strong recommendation, moderate quality evidence). The restrictive transfusion threshold of 7 g/dL is likely comparable with 8 g/dL, but RCT evidence is not available for all patient categories. These recommendations do not apply to patients with acute coronary syndrome, severe thrombocytopenia (patients treated for hematological or oncological reasons who are at risk of bleeding), and chronic transfusion–dependent anemia (not recommended due to insufficient evidence). Recommendation 2: patients, including neonates, should receive RBC units selected at any point within their licensed dating period (standard issue) rather than limiting patients to transfusion of only fresh (storage length: less than 10 days) RBC units (strong recommendation, moderate quality evidence).

Conclusions and Relevance  Research in RBC transfusion medicine has significantly advanced the science in recent years and provides high-quality evidence to inform guidelines. A restrictive transfusion threshold is safe in most clinical settings and the current blood banking practices of using standard-issue blood should be continued.

3. Your Surgeon Is Probably a Republican, Your Psychiatrist Probably a Democrat

Margot Sanger-Katz. New York Times, Oct. 6, 2016

We know that Americans are increasingly sorting themselves by political affiliation into friendships, even into neighborhoods. Something similar seems to be happening with doctors and their various specialties.

New data show that, in certain medical fields, large majorities of physicians tend to share the political leanings of their colleagues, and a study suggests ideology could affect some treatment recommendations. In surgery, anesthesiology and urology, for example, around two-thirds of doctors who have registered a political affiliation are Republicans. In infectious disease medicine, psychiatry and pediatrics, more than two-thirds are Democrats.

The conclusions are drawn from data compiled by researchers at Yale. They joined two large public data sets, one listing every doctor in the United States and another containing the party registration of every voter in 29 states.

Eitan Hersh, an assistant professor of political science, and Dr. Matthew Goldenberg, an assistant professor of psychiatry (guess his party!), shared their data with The Upshot. Using their numbers, we found that more than half of all doctors with party registration identify as Democrats. But the partisanship of physicians is not evenly distributed throughout the fields of medical practice.

The new research is the first to directly measure the political leanings of a large sample of all doctors. Earlier research — using surveys of physicians and medical students, and looking at doctors’ campaign contributions — has reached somewhat similar conclusions. What we found is that though doctors, over all, are roughly split between the parties, some specialties have developed distinct political preferences.

It’s possible that the experience of being, say, an infectious disease physician, who treats a lot of drug addicts with hepatitis C, might make a young physician more likely to align herself with Democratic candidates who support a social safety net. But it’s also possible that the differences resulted from some initial sorting by medical students as they were choosing their fields….

4. Will This Hemodynamically Unstable Patient Respond to a Bolus of IV Fluids?

Bentzer P, et al. JAMA. 2016;316(12):1298-1309.

Importance  Fluid overload occurring as a consequence of overly aggressive fluid resuscitation may adversely affect outcome in hemodynamically unstable critically ill patients. Therefore, following the initial fluid resuscitation, it is important to identify which patients will benefit from further fluid administration.

Objective  To identify predictors of fluid responsiveness in hemodynamically unstable patients with signs of inadequate organ perfusion.

Data Sources and Study Selection  Search of MEDLINE and EMBASE (1966 to June 2016) and reference lists from retrieved articles, previous reviews, and physical examination textbooks for studies that evaluated the diagnostic accuracy of tests to predict fluid responsiveness in hemodynamically unstable adult patients who were defined as having refractory hypotension, signs of organ hypoperfusion, or both. Fluid responsiveness was defined as an increase in cardiac output following intravenous fluid administration.

Data Extraction  Two authors independently abstracted data (sensitivity, specificity, and likelihood ratios [LRs]) and assessed methodological quality. A bivariate mixed-effects binary regression model was used to pool the sensitivities, specificities, and LRs across studies.

Results  A total of 50 studies (N = 2260 patients) were analyzed. In all studies, indices were measured before assessment of fluid responsiveness. The mean prevalence of fluid responsiveness was 50% (95% CI, 42%-56%). Findings on physical examination were not predictive of fluid responsiveness with LRs and 95% CIs for each finding crossing 1.0. A low central venous pressure (CVP) (mean threshold less than 8 mm Hg) was associated with fluid responsiveness (positive LR, 2.6 [95% CI, 1.4-4.6]; pooled specificity, 76%), but a CVP greater than the threshold made fluid responsiveness less likely (negative LR, 0.50 [95% CI, 0.39-0.65]; pooled sensitivity, 62%). Respiratory variation in vena cava diameter measured by ultrasound (distensibility index over 15%) predicted fluid responsiveness in a subgroup of patients without spontaneous respiratory efforts (positive LR, 5.3 [95% CI, 1.1-27]; pooled specificity, 85%). Patients with less vena cava distensibility were not as likely to be fluid responsive (negative LR, 0.27 [95% CI, 0.08-0.87]; pooled sensitivity, 77%). Augmentation of cardiac output or related parameters following passive leg raising predicted fluid responsiveness (positive LR, 11 [95% CI, 7.6-17]; pooled specificity, 92%). Conversely, the lack of an increase in cardiac output with passive leg raising identified patients unlikely to be fluid responsive (negative LR, 0.13 [95% CI, 0.07-0.22]; pooled sensitivity, 88%).
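The likelihood ratios above are most useful when converted into post-test probabilities through odds. As a hedged sketch of that standard Bayes arithmetic (not code from the study), using the review's reported 50% mean prevalence of fluid responsiveness as the pretest probability:

```python
def post_test_prob(pretest_prob, lr):
    """Convert a pretest probability and a likelihood ratio to a post-test probability."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pretest_odds * lr
    return post_odds / (1 + post_odds)

pretest = 0.50  # mean prevalence of fluid responsiveness reported in the review

# Passive leg raising, using the pooled LRs from the abstract (LR+ 11, LR- 0.13)
print(f"positive PLR test: {post_test_prob(pretest, 11):.1%}")   # 91.7%
print(f"negative PLR test: {post_test_prob(pretest, 0.13):.1%}") # 11.5%
```

In other words, at this pretest probability a positive passive-leg-raise response moves the probability of fluid responsiveness to roughly 92%, and a negative response drops it to roughly 12% — which is why the authors single it out as the most useful test.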

Conclusions and Relevance  Passive leg raising followed by measurement of cardiac output or related parameters may be the most useful test for predicting fluid responsiveness in hemodynamically unstable adults. The usefulness of respiratory variation in the vena cava requires confirmatory studies.

5. More Bleeds with Rivaroxaban than Dabigatran in Real-World Medicare Data

by Larry Husten. CardioBrief October 05, 2016

Patients who take rivaroxaban (Xarelto) may be more likely to have serious bleeding events than patients who take dabigatran (Pradaxa), an observational study found.

In a paper published in JAMA Internal Medicine, David Graham, MD, MPH, and colleagues at the FDA performed a retrospective analysis of 118,000 Medicare patients with atrial fibrillation who started therapy with either rivaroxaban or dabigatran. They found that rivaroxaban was associated with:
Statistically significant increases in intracranial hemorrhage, resulting in an excess of 2.3 cases per 1,000 person-years after an adjustment for baseline differences between groups

Significant excesses in major extracranial bleeding (an excess of 13 cases per 1,000 person-years), including an increase in major GI bleeds (9.4 cases per 1,000 person-years)

A non-significant reduction in thromboembolic stroke associated with rivaroxaban, resulting in 1.8 fewer cases per 1,000 person-years

While there was no significant difference in overall mortality, the authors noted significantly higher mortality with rivaroxaban than dabigatran in the subgroup of patients 75 or older with a CHADS2 score over 2.

The authors noted that, in 2014, rivaroxaban was used two to three times more often than dabigatran by U.S. patients with atrial fibrillation. They speculated that this might be due to "prescriber misperceptions about bleeding risks with dabigatran, arising from U.S. Food and Drug Administration receipt of a large number of postmarketing case reports following its approval. Ironically, we found substantially higher bleeding risks with use of rivaroxaban than dabigatran."

Both non-vitamin K oral anticoagulants (NOACs) have been the subject of controversy and criticism: Dabigatran has been plagued by reports of bleeding complications since shortly after its approval. More recently, there have been reports critical of the pivotal rivaroxaban ROCKET AF trial because of the use of a defective monitoring device in the trial.

The authors acknowledged that a "randomized direct comparison between dabigatran and rivaroxaban would be optimal," but that "such a study is unlikely to be undertaken by the manufacturer of either product."

Outside experts expressed concern about conclusions derived from observational studies based on Medicare claims. Comparative effectiveness studies based on claims do not provide "very strong evidence," commented Harlan Krumholz, MD, of Yale. Sanjay Kaul, MD, MPH, of Cedars-Sinai Medical Center in Los Angeles, warned that "drawing strong inferences from a claims database is a slippery slope."

In an editor's note, on the other hand, Rita Redberg, MD, and Anna Parks, MD, both of the University of California San Francisco, offered strong support for the study's real world application: "The additional information should lead us to prescribe dabigatran over rivaroxaban for patients with atrial fibrillation. Also, it can help inform further investigation of NOAC monitoring and tailored dosing, as others have previously recommended for dabigatran. In addition, knowing that a patient prescribed rivaroxaban is at comparatively increased risk of bleeding might encourage clinicians to more vigilantly identify and mitigate modifiable risk factors. Finally, it offers important guidance to consumers on relative efficacy and safety profiles that drive NOAC selection, particularly for those at greatest risk of hemorrhage."

I asked Redberg if she had any concerns about basing recommendations on observational studies. "Clearly, I would prefer randomized controlled trial data," she responded. "But currently there are multiple NOACs on the market, there are no randomized controlled trials that compare them, and I know of none in the works. Thus, the best data we will get right now to help inform patient care is from high-quality observational studies."

Source: Graham DJ, et al. Stroke, Bleeding, and Mortality Risks in Elderly Medicare Beneficiaries Treated With Dabigatran or Rivaroxaban for Nonvalvular Atrial Fibrillation. JAMA Intern Med. 2016 Oct 3 [Epub ahead of print].

6. Physicians beat symptom checkers for getting the correct diagnosis

A study comparing 23 symptom-checker websites or apps with physicians found doctors got the correct diagnosis 72% of the time, compared with 34% for the digital platforms, researchers reported in JAMA Internal Medicine. Researchers said 84% of physicians listed the correct diagnosis in the top three options, compared with 51% of the symptom checker programs.

Semigran HL, et al. Comparison of Physician and Computer Diagnostic Accuracy. JAMA Intern Med. 2016 Oct 10 [Epub ahead of print].

The Institute of Medicine recently highlighted that physician diagnostic error is common and information technology may be part of the solution.1 Given advancements in computer science, computers may be able to independently make accurate clinical diagnoses.2 While studies have compared computer vs physician performance for reading electrocardiograms,3 the diagnostic accuracy of computers vs physicians remains unknown. To fill this gap in knowledge, we compared the diagnostic accuracy of physicians with computer algorithms called symptom checkers.

Symptom checkers are websites and apps that help patients with self-diagnosis. After answering a series of questions, the user is given a list of rank-ordered potential diagnoses generated by a computer algorithm. Previously, we evaluated the diagnostic accuracy of 23 symptom checkers using 45 clinical vignettes.4 The vignettes included the patient’s medical history and had no physical examination or test findings. In this study we compared the diagnostic performance of physicians with symptom checkers for those same vignettes using a unique online platform called Human Dx.

Human Dx is a web- and app-based platform on which physicians generate differential diagnoses for clinical vignettes. Since 2015, Human Dx has been used by over 2700 physicians and trainees from 40 countries who have addressed over 100 000 vignettes.

The 45 vignettes, previously developed for the systematic assessment of online symptom checkers,4 were disseminated by Human Dx between December 2015 and May 2016 to internal medicine, family practice, or pediatrics physicians who did not know which vignettes were part of the research study. There were 15 high, 15 medium, and 15 low-acuity condition vignettes and 26 common and 19 uncommon condition vignettes.4 Physicians submitted free text ranked differential diagnoses for each case. Each vignette was solved by at least 20 physicians.

Given that physicians provided free text responses, 2 physicians (S.N. and D.M.L.) hand-reviewed the submitted diagnoses and independently decided whether the participant listed the correct diagnosis first or in the top 3 diagnoses. Interrater agreement was high (Cohen κ, 96%), and a third study physician (A.M.) resolved discrepancies (n = 60).

We used χ2 tests of significance to compare physicians’ performance. Physician diagnostic accuracy was compared with previously reported symptom checker accuracy for these same vignettes using 2-sample tests of proportion.4 The study was exempt from Harvard’s institutional review board, and participants were not compensated.

Of the 234 physicians who solved at least 1 vignette, 211 (90%) were trained in internal medicine and 121 (52%) were fellows or residents (Table 1).

Physicians listed the correct diagnosis first more often across all vignettes compared with symptom checkers (72.1% vs 34.0%, P < .001) as well as in the top 3 diagnoses listed (84.3% vs 51.2%, P < .001) (Table 2).
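The 2-sample test of proportion behind those P values is straightforward to sketch. The paper reports the percentages (72.1% vs 34.0%) but not the exact denominators it used, so the counts below are purely illustrative:

```python
import math

def two_sample_prop_z(x1, n1, x2, n2):
    """Two-sample z-test for a difference in proportions (pooled variance)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical denominators of 1000 per arm, chosen only to match the
# reported percentages -- not the study's actual sample sizes.
z = two_sample_prop_z(721, 1000, 340, 1000)
print(f"z = {z:.1f}")  # a |z| this large corresponds to P < .001
```

Even with far smaller denominators, a 38-percentage-point gap yields a z-statistic well past the conventional P < .001 threshold (|z| > 3.29), consistent with the paper's reported significance.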

Physicians were more likely to list the correct diagnosis first for high-acuity vignettes (vs low-acuity vignettes) and for uncommon vignettes (vs common vignettes). In contrast, symptom checkers were more likely to list the correct diagnosis first for low-acuity vignettes and common vignettes (Table 2).

In what we believe to be the first direct comparison of diagnostic accuracy, physicians vastly outperformed computer algorithms in diagnostic accuracy (84.3% vs 51.2% correct diagnosis in the top 3 listed).4 Despite physicians’ superior performance, they provided the incorrect diagnosis in about 15% of cases, similar to prior estimates (10%-15%) for physician diagnostic error.5 While in this project we compared diagnostic performance, future work should test whether computer algorithms can augment physician diagnostic accuracy.6

Key limitations included our use of clinical vignettes, which likely do not reflect the complexity of real-world patients and did not include physical examination or test results. Physicians who chose to use Human Dx may not be a representative sample of US physicians and therefore may differ in diagnostic accuracy. Symptom checkers are only 1 form of computer diagnostic tools, and other tools may have superior performance.

7. Medicaid Coverage Increases ED Use over Two Years — Further Evidence from Oregon’s Experiment

Finkelstein AN, et al. N Engl J Med 2016; 375:1505-1507.

The effect of Medicaid coverage on health and the use of health care services is of first-order policy importance, particularly as policymakers consider expansions of public health insurance. Estimating the effects of expanding Medicaid is challenging, however, because Medicaid enrollees and the uninsured differ in many ways that may also affect outcomes of interest. Oregon’s 2008 expansion of Medicaid through random-lottery selection of potential enrollees from a waiting list offers the opportunity to assess Medicaid’s effects with a randomized evaluation that is not contaminated by such confounding factors. In a previous examination of the Oregon Health Insurance Experiment, we found that Medicaid coverage increased health care use across a range of settings, improved financial security, and reduced rates of depression among enrollees, but it produced no detectable changes in several measures of physical health, employment rates, or earnings.1-4

A key finding was that Medicaid increased emergency department (ED) visits by 40% in the first 15 months after people won the lottery.3 This finding was greeted with considerable attention and surprise, given the widespread belief that expanding Medicaid coverage to more uninsured people would encourage the use of primary care and thereby reduce ED use. Many observers speculated that the increase in ED use would abate over time as the newly insured found alternative sites of care or as their health needs were addressed and their health improved. One commentator, for example, raised the question, “But why did these patients go to the ED and not to a primary care office?” He hypothesized that “Despite the earlier finding that coverage increased outpatient use, many of these newly insured patients probably had not yet established relationships with primary care physicians. If so, the excess ED use will attenuate with time.”5

We have now analyzed additional data in order to address these questions: Does the increase in ED use caused by Medicaid coverage represent a short-term effect that is likely to dissipate over time? And does Medicaid coverage encourage the newly insured to substitute physician office visits for ED visits? We used the lottery to implement a randomized, controlled evaluation of the causal effect of Medicaid coverage on health care use, applying a standard instrumental variables approach. More detail on the lottery, data, and methods is available elsewhere1-3 as well as in the Supplementary Appendix, which also provides additional results.

Extending our ED administrative data by a year to span the 2007–2010 period, we analyzed the pattern of the effect of Medicaid coverage on ED use over a 2-year period after the 2008 lottery. The graphs show the effect of Medicaid coverage over time — both in terms of the mean number of ED visits per person (Panel A) and whether a person had any ED visits (Panel B) — measured separately for the four 6-month periods after lottery notification. There is no statistical or substantive evidence of any time pattern in the effect on ED use for either measure. Medicaid coverage increased the mean number of ED visits per person by 0.17 (standard error, 0.04) over the first 6 months, or about 65% relative to the mean in the control group of individuals not selected in the lottery; over the subsequent three 6-month periods, the point estimates are similar and, for the most part, statistically indistinguishable from each other. For example, we cannot reject (P=0.80) the hypothesis that the 0.17 increase in ED visits attributable to Medicaid coverage in the first 6 months is the same as the 0.15 increase in visits in months 18 to 24. Thus, using another year of ED data, we found no evidence that the increase in ED use due to Medicaid coverage is driven by pent-up demand that dissipates over time; the effect on ED use appears to persist over the first 2 years of coverage. We repeated a similar analysis for hospital admissions and once again found no evidence of any time patterns in the effects of Medicaid coverage over the first 2 years (see the Supplementary Appendix for details)…

One possible reason for this finding is that the type of people who use more care when they gain Medicaid coverage are likely to increase use across multiple settings, including both the ED and the physician’s office. Another possible reason is that by increasing the use of primary care, Medicaid coverage may end up driving greater use of emergency care. For example, primary care providers may sometimes encourage patients to seek such care. One study participant we interviewed told us, “I went to the doctor’s office one time and they said, no, you need to go to the ER because your blood sugar is way too high. It’s going to take us hours to get it down. So you need to go to the ER.”

For policymakers deliberating about Medicaid expansions, our results, which draw on the strength of a randomized, controlled design, suggest that newly insured people will most likely use more health care across settings — including the ED and the hospital — for at least 2 years and that expanded coverage is unlikely to drive substantial substitution of office visits for ED use.

8. Propofol or Ketofol for Procedural Sedation and Analgesia in Emergency Medicine-The POKER Study: A Randomized Double-Blind Clinical Trial.

Ferguson I, et al. Ann Emerg Med. 2016;68(5):574–582.e1.

STUDY OBJECTIVE: We determine whether emergency physician-provided deep sedation with 1:1 ketofol versus propofol results in fewer adverse respiratory events requiring physician intervention when used for procedural sedation and analgesia.

METHODS: Consenting patients requiring deep sedation were randomized to receive either ketofol or propofol in a double-blind fashion according to a weight-based dosing schedule. The primary outcome was the occurrence of a respiratory adverse event (desaturation, apnea, or hypoventilation) requiring an intervention by the sedating physician. Secondary outcomes included hypotension and patient satisfaction.

RESULTS: Five hundred seventy-three patients were enrolled and randomized, 292 in the propofol group and 281 in the ketofol group. Five percent in the propofol group and 3% in the ketofol group met the primary outcome, an absolute difference of 2% (95% confidence interval [CI] -2% to 5%). Patients receiving propofol were more likely to become hypotensive (8% versus 1%; difference 7%; 95% CI 4% to 10%). Patient satisfaction was very high in both groups (median 10/10; interquartile range 10 to 10), and although the ketofol group was more likely to experience severe emergence delirium (5% versus 2%; difference 3%; 95% CI 0.4% to 6%), they had lower pain scores at 30 minutes postprocedure. Other secondary outcomes were similar between groups.

CONCLUSION: Ketofol and propofol resulted in a similar incidence of adverse respiratory events requiring the intervention of the sedating physician. Although propofol resulted in more hypotension, the clinical relevance of this is questionable, and both agents are associated with high levels of patient satisfaction.

9. If This Public Figure Wrote a Research Paper…

Below are the opening paragraphs of a fictitious research article crafted in the style of a well-known public figure. After reading the opener and before linking to the full text, guess which figure is being caricatured.
a. Oprah Winfrey
b. Jimmy Fallon
c. Donald Trump
d. Dr. Oz

A title for a really great piece of research, just the best, really

The current research, and it is really great research, it really is. It relies on the theory — and I have the best theories, you know, I use the best theories in my research. It really is quite amazing just how great the theory is, but I’m not really, in fact — it is a theory. A really good one and I’ve talked to people and, lots of people actually, and they all think what I said. It has a lot of appeal. It’s really just all there and what it is. If people, you know, losers and whatever, if they don’t get it, then what are you going to do? It’s not like the idea isn’t there and that, you know, it’s what it is. I have to shake my head. Everyone is just shaking their heads. It really is. Along with the theory, there’s other work. Existing data — and again, I have the best data.

You would really, if you had the same great data, be completely happy and the data are there. And they are really, you know, data and we have all kinds. The best kinds. And that is what we base the current work, which is great work, that I did and it’s great. If other people want to be walked through like babies or something, then I don’t know what their problem is. The data just are there so get off your lazy butts and stop looking for handouts…

Full-text (free):

10. Images in Clinical Practice

Right ventricular myocardial infarction

Hemorrhagic Bullae in a Primary Varicella Zoster Virus Infection

Worms in the Eye

Young Man With Severe Abdominal Pain

Male With Pain in His Neck

Child With Sore Throat

Adolescent Athlete With Sudden Groin Pain

Elderly Male With Mass on Right Thumb

Young Man With Epigastric Pain

Male With Hypertension

Neonate With a Swollen Thigh

Emphysematous Pyelonephritis

Cutaneous Lupus — “The Pimple That Never Went Away”

First Branchial Cleft Cyst

Congenital Rubella

Reversible Acute Mesenteric Ischemia

11. Letter to the Doctors and Nurses Who Cared for My Wife

By Peter DeMarco, New York Times. Oct. 6, 2016

After his 34-year-old wife suffered a devastating asthma attack and later died, the Boston writer Peter DeMarco wrote the following letter to the intensive care unit staff of CHA Cambridge Hospital who cared for her and helped him cope.

As I begin to tell my friends and family about the seven days you treated my wife, Laura Levis, in what turned out to be the last days of her young life, they stop me at about the 15th name that I recall. The list includes the doctors, nurses, respiratory specialists, social workers, even cleaning staff members who cared for her.

“How do you remember any of their names?” they ask.

How could I not, I respond.

Every single one of you treated Laura with such professionalism, and kindness, and dignity as she lay unconscious. When she needed shots, you apologized that it was going to hurt a little, whether or not she could hear. When you listened to her heart and lungs through your stethoscopes, and her gown began to slip, you pulled it up to respectfully cover her. You spread a blanket, not only when her body temperature needed regulating, but also when the room was just a little cold, and you thought she’d sleep more comfortably that way.

You cared so greatly for her parents, helping them climb into the room’s awkward recliner, fetching them fresh water almost by the hour, and answering every one of their medical questions with incredible patience. My father-in-law, a doctor himself as you learned, felt he was involved in her care. I can’t tell you how important that was to him.

Then, there was how you treated me. How would I have found the strength to have made it through that week without you?

How many times did you walk into the room to find me sobbing, my head down, resting on her hand, and quietly go about your task, as if willing yourselves invisible? How many times did you help me set up the recliner as close as possible to her bedside, crawling into the mess of wires and tubes around her bed in order to swing her forward just a few feet?

How many times did you check in on me to see whether I needed anything, from food to drink, fresh clothes to a hot shower, or to see whether I needed a better explanation of a medical procedure, or just someone to talk to?

How many times did you hug me and console me when I fell to pieces, or ask about Laura’s life and the person she was, taking the time to look at her photos or read the things I’d written about her? How many times did you deliver bad news with compassionate words, and sadness in your eyes?

When I needed to use a computer for an emergency email, you made it happen. When I smuggled in a very special visitor, our tuxedo cat, Cola, for one final lick of Laura’s face, you “didn’t see a thing.”…

12. Time to Treatment with Endovascular Thrombectomy and Outcomes From Ischemic Stroke: A Meta-analysis

Saver JL, et al. JAMA. 2016;316(12):1279-1288.

Importance  Endovascular thrombectomy with second-generation devices is beneficial for patients with ischemic stroke due to intracranial large-vessel occlusions. Delineation of the association of treatment time with outcomes would help to guide implementation.

Objective  To characterize the period in which endovascular thrombectomy is associated with benefit, and the extent to which treatment delay is related to functional outcomes, mortality, and symptomatic intracranial hemorrhage.

Design, Setting, and Patients  Demographic, clinical, and brain imaging data as well as functional and radiologic outcomes were pooled from randomized phase 3 trials involving stent retrievers or other second-generation devices in a peer-reviewed publication (by July 1, 2016). The identified 5 trials enrolled patients at 89 international sites.

Exposures  Endovascular thrombectomy plus medical therapy vs medical therapy alone; time to treatment.

Main Outcomes and Measures  The primary outcome was degree of disability (modified Rankin Scale [mRS] range, 0-6; lower scores indicating less disability) at 3 months, analyzed with the common odds ratio (cOR) to detect ordinal shift in the distribution of disability over the range of the mRS; secondary outcomes included functional independence at 3 months, mortality by 3 months, and symptomatic hemorrhagic transformation.

Results  Among all 1287 patients (endovascular thrombectomy + medical therapy [n = 634]; medical therapy alone [n = 653]) enrolled in the 5 trials (mean age, 66.5 years [SD, 13.1]; women, 47.0%), time from symptom onset to randomization was 196 minutes (IQR, 142 to 267). Among the endovascular group, symptom onset to arterial puncture was 238 minutes (IQR, 180 to 302) and symptom onset to reperfusion was 286 minutes (IQR, 215 to 363). At 90 days, the mean mRS score was 2.9 (95% CI, 2.7 to 3.1) in the endovascular group and 3.6 (95% CI, 3.5 to 3.8) in the medical therapy group. The odds of better disability outcomes at 90 days (mRS scale distribution) with the endovascular group declined with longer time from symptom onset to arterial puncture: cOR at 3 hours, 2.79 (95% CI, 1.96 to 3.98), absolute risk difference (ARD) for lower disability scores, 39.2%; cOR at 6 hours, 1.98 (95% CI, 1.30 to 3.00), ARD, 30.2%; cOR at 8 hours, 1.57 (95% CI, 0.86 to 2.88), ARD, 15.7%; retaining statistical significance through 7 hours and 18 minutes. Among 390 patients who achieved substantial reperfusion with endovascular thrombectomy, each 1-hour delay to reperfusion was associated with a less favorable degree of disability (cOR, 0.84 [95% CI, 0.76 to 0.93]; ARD, −6.7%) and less functional independence (OR, 0.81 [95% CI, 0.71 to 0.92], ARD, −5.2% [95% CI, −8.3% to −2.1%]), but no change in mortality (OR, 1.12 [95% CI, 0.93 to 1.34]; ARD, 1.5% [95% CI, −0.9% to 4.2%]).

Conclusions and Relevance  In this individual patient data meta-analysis of patients with large-vessel ischemic stroke, earlier treatment with endovascular thrombectomy + medical therapy compared with medical therapy alone was associated with lower degrees of disability at 3 months. Benefit became nonsignificant after 7.3 hours.

13. Do Peripheral Thermometers Accurately Correlate to Core Body Temperature?

Hernandez JM, et al. Ann Emerg Med 2016;68(5):562-3.

Take-Home Message
Peripheral thermometers lack accuracy compared with central thermometers; therefore, use central thermometers when accurate temperature measurements are needed to guide diagnosis and patient management.

Changes in body temperature are important for clinical decisions in the acute care setting and may point the clinician toward certain diagnoses3,4 or predict mortality in certain patient populations.5,6 As such, accurate assessment of temperature is essential, especially in the setting of possible fever (over 38°C [100.4°F]) or hypothermia (less than 36°C [96.8°F]). Although temperature measurement from a central site (pulmonary artery catheter, urinary bladder, esophageal, or rectal) has shown accuracy and reliability,7,8 it is currently commonplace in the emergency department (ED) to measure temperature from a peripheral site (eg, tympanic membrane, oral). Approximately 1 in 5 febrile patients is initially missed by peripheral temperature measurements in the ED.8

The results of this meta-analysis demonstrate that temperature measurements with peripheral thermometers do not adequately estimate body temperature compared with central thermometers. This was especially evident in clinical situations that may affect diagnosis, clinical management, and outcomes (ie, fever and hypothermia). Although the specificity of peripheral thermometers is adequate to confirm fever when detected, the low sensitivity cannot exclude fever; peripheral thermometers gave readings as much as 1°C to 2°C (1.80°F to 3.60°F) lower than those of central thermometers. If an apparently normal temperature is actually an artifact of inaccurate measurement, potentially harmful omissions or delays in time-dependent management of hypo- or hyperthermia (eg, sepsis, bacteremia) may occur.
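The "high specificity, low sensitivity" pattern described above follows directly from 2x2-table arithmetic. A minimal sketch, using hypothetical counts (not data from the meta-analysis) chosen only to illustrate the pattern:

```python
# Hypothetical 2x2 table comparing a peripheral thermometer with a central
# (reference-standard) thermometer. Counts are illustrative only.
true_pos = 64   # febrile by central measurement, detected by peripheral
false_neg = 36  # febrile by central measurement, missed by peripheral
true_neg = 95   # afebrile by central measurement, peripheral agrees
false_pos = 5   # afebrile by central measurement, peripheral reads fever

sensitivity = true_pos / (true_pos + false_neg)  # ability to detect fever
specificity = true_neg / (true_neg + false_pos)  # ability to rule fever in when read

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

With counts like these, a peripheral reading of fever is believable (high specificity), but a normal peripheral reading cannot exclude fever (low sensitivity), which is why roughly 1 in 5 febrile patients can be missed.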

In conclusion, clinicians should use central thermometers when accurate temperature measurement will guide diagnosis and management in high-risk patient populations. Rectal thermometers can generally be used for most patients in this circumstance.

14. A Pragmatic Randomized Evaluation of a Nurse-Initiated Protocol to Improve Timeliness of Care in an Urban Emergency Department.

Douma MJ, et al. Ann Emerg Med. 2016;68(5):546–552.

STUDY OBJECTIVE: Emergency department (ED) crowding is a common and complicated problem challenging EDs worldwide. Nurse-initiated protocols, diagnostics, or treatments implemented by nurses before patients are treated by a physician or nurse practitioner have been suggested as a potential strategy to improve patient flow.

METHODS: This is a computer-randomized, pragmatic, controlled evaluation of 6 nurse-initiated protocols in a busy, crowded, inner-city ED. The primary outcomes, depending on the protocol, were time to diagnostic test, time to treatment, time to consultation, or ED length of stay.

RESULTS: Protocols decreased the median time to acetaminophen for patients presenting with pain or fever by 186 minutes (95% confidence interval [CI] 76 to 296 minutes) and the median time to troponin for patients presenting with suspected ischemic chest pain by 79 minutes (95% CI 21 to 179 minutes). Median ED length of stay was reduced by 224 minutes (95% CI -19 to 467 minutes) by implementing a suspected fractured hip protocol. A vaginal bleeding during pregnancy protocol reduced median ED length of stay by 232 minutes (95% CI 26 to 438 minutes).

CONCLUSION: Targeting specific patient groups with carefully written protocols can result in improved time to test or medication and, in some cases, reduce ED length of stay. A cooperative and collaborative interdisciplinary group is essential to success.

15. Comparison of acyclovir and famciclovir for the treatment of Bell's palsy.

Kim HJ, et al. Eur Arch Otorhinolaryngol. 2016 Oct;273(10):3083-90.

The relative effectiveness of acyclovir and famciclovir in the treatment of Bell's palsy is unclear. This study therefore compared recovery outcomes in patients with Bell's palsy treated with acyclovir and famciclovir. The study cohort consisted of patients with facial palsy who visited the outpatient clinic between January 2006 and January 2014. Patients were treated with prednisolone plus either acyclovir (n = 457) or famciclovir (n = 245). Patient outcomes were measured using the House-Brackmann scale according to initial severity of disease and underlying disease. The overall recovery rate tended to be higher in the famciclovir than in the acyclovir group. The rate of recovery in patients with initially severe facial palsy (grades V and VI) was significantly higher in the famciclovir than in the acyclovir group (p = 0.01), whereas the rates of recovery in patients with initially moderate palsy (grades III and IV) were similar in the two groups. The overall recovery rates in patients without hypertension or diabetes mellitus were higher in the famciclovir than in the acyclovir group, but the difference was not statistically significant. Treatment with steroid plus famciclovir was more effective than treatment with steroid plus acyclovir in patients with severe facial palsy. Famciclovir may be the antiviral agent of choice in the treatment of patients with severe facial palsy.

16. Midazolam-Droperidol better than either Droperidol or Olanzapine for Acute Agitation: A RCT

Taylor DM, et al.

Study objective
We aim to determine the most efficacious of 3 common medication regimens for the sedation of acutely agitated emergency department (ED) patients.

We undertook a randomized, controlled, double-blind, triple-dummy, clinical trial in 2 metropolitan EDs between October 2014 and August 2015. Patients aged 18 to 65 years and requiring intravenous medication sedation for acute agitation were enrolled and randomized to an intravenous bolus of midazolam 5 mg–droperidol 5 mg, droperidol 10 mg, or olanzapine 10 mg. Two additional doses were administered, if required: midazolam 5 mg, droperidol 5 mg, or olanzapine 5 mg. The primary outcome was the proportion of patients adequately sedated at 10 minutes.

Three hundred forty-nine patients were randomized to the 3 groups. Baseline characteristics were similar across the groups. Ten minutes after the first dose, significantly more patients in the midazolam-droperidol group were adequately sedated compared with the droperidol and olanzapine groups: differences in proportions 25.0% (95% confidence interval [CI] 12.0% to 38.1%) and 25.4% (95% CI 12.7% to 38.3%), respectively. For times to sedation, the differences in medians between the midazolam-droperidol group and the droperidol and olanzapine groups were 6 (95% CI 3 to 8) and 6 (95% CI 3 to 7) minutes, respectively. Patients in the midazolam-droperidol group required fewer additional doses or alternative drugs to achieve adequate sedation. The 3 groups’ adverse event rates and lengths of stay did not differ.

Midazolam-droperidol combination therapy is superior, in the doses studied, to either droperidol or olanzapine monotherapy for intravenous sedation of the acutely agitated ED patient.

17. Probiotics and the Prevention of Antibiotic-Associated Diarrhea in Infants and Children

Johnston BC, et al. JAMA. 2016;316(14):1484-1485.

Clinical Question:  In children prescribed an antibiotic, is the co-administration of a probiotic associated with lower rates of antibiotic-associated diarrhea without an increase in clinically important adverse events?

Bottom Line:  Moderate-quality evidence suggests that probiotics are associated with lower rates of antibiotic-associated diarrhea in children (aged 1 month to 18 years) without an increase in adverse events.

Among 23 studies comparing probiotics with control for the prevention of antibiotic-associated diarrhea, probiotics were associated with lower rates of diarrhea and were not associated with higher rates of adverse events. No trials reported serious adverse events attributable to probiotics.

… The findings are based on an aggregate data meta-analysis; therefore, it was impossible to explore patient- and intervention-level variables that may be associated with antibiotic-associated diarrhea. For example, the largest prospective cohort of children (n = 650) who were followed up for antibiotic-associated diarrhea risk suggests that younger children (aged less than 2 years) and those exposed to amoxicillin/clavulanate are at the highest risk for antibiotic-associated diarrhea (18% and 23%, respectively).3

Comparison of Findings With Current Practice Guidelines
Our findings are consistent with the European Society for Pediatric Gastroenterology, Hepatology and Nutrition recommendations, the only published guideline based on systematic reviews addressing antibiotic-associated diarrhea, which suggests that L rhamnosus or S boulardii at 5 to 40 billion colony-forming units/d may be reasonable to consider among otherwise healthy children receiving antibiotics.6

18. Study outlines strategies to reduce physician burnout

A study in The Lancet found "clinically meaningful reductions" in physician burnout may be achieved through the use of strategies such as mindfulness, stress management training and small group discussions. Researchers said studies found no single intervention to be superior, and that both individual and structural or organizational interventions are probably needed.

Interventions to prevent and reduce physician burnout: a systematic review and meta-analysis

West CP, et al. Lancet 2016 September 28 [Epub ahead of print]

Physician burnout has reached epidemic levels, as documented in national studies of both physicians in training and practising physicians. The consequences are negative effects on patient care, professionalism, physicians' own care and safety, and the viability of health-care systems. A more complete understanding than at present of the quality and outcomes of the literature on approaches to prevent and reduce burnout is necessary.

In this systematic review and meta-analysis, we searched MEDLINE, Embase, PsycINFO, Scopus, Web of Science, and the Education Resources Information Center from inception to Jan 15, 2016, for studies of interventions to prevent and reduce physician burnout, including single-arm pre-post comparison studies. We required studies to provide physician-specific burnout data using burnout measures with validity support from commonly accepted sources of evidence. We excluded studies of medical students and non-physician health-care providers. We considered potential eligibility of the abstracts and extracted data from eligible studies using a standardised form. Outcomes were changes in overall burnout, emotional exhaustion score (and high emotional exhaustion), and depersonalisation score (and high depersonalisation). We used random-effects models to calculate pooled mean difference estimates for changes in each outcome.
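The pooling step described above (random-effects models for mean differences) is conventionally done with DerSimonian-Laird estimation. A minimal sketch of that general technique, on hypothetical per-study effects rather than the review's actual data:

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-study effect sizes.
    effects: per-study mean differences; variances: their within-study variances.
    Returns (pooled_effect, pooled_standard_error)."""
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se

# Hypothetical per-study reductions in emotional exhaustion score (illustrative only)
effects = [2.1, 3.4, 1.8, 2.9]
variances = [0.25, 0.40, 0.30, 0.50]
pooled, se = random_effects_pool(effects, variances)
print(f"pooled difference {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f})")
```

The between-study variance term tau-squared is what widens the confidence interval when studies disagree, which matters here given the high heterogeneity (I2=82%) reported for emotional exhaustion.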

We identified 2617 articles, of which 15 randomised trials including 716 physicians and 37 cohort studies including 2914 physicians met inclusion criteria. Overall burnout decreased from 54% to 44% (difference 10% [95% CI 5–14]; p less than 0·0001; I2=15%; 14 studies), emotional exhaustion score decreased from 23·82 points to 21·17 points (2·65 points [1·67–3·64]; p less than 0·0001; I2=82%; 40 studies), and depersonalisation score decreased from 9·05 to 8·41 (0·64 points [0·15–1·14]; p=0·01; I2=58%; 36 studies). High emotional exhaustion decreased from 38% to 24% (14% [11–18]; p less than 0·0001; I2=0%; 21 studies) and high depersonalisation decreased from 38% to 34% (4% [0–8]; p=0·04; I2=0%; 16 studies).

The literature indicates that both individual-focused and structural or organisational strategies can result in clinically meaningful reductions in burnout among physicians. Further research is needed to establish which interventions are most effective in specific populations, as well as how individual and organisational solutions might be combined to deliver even greater improvements in physician wellbeing than those achieved with individual solutions.

19. Therapeutic Hypothermia Worsened Survival after In-Hospital Cardiac Arrest.

Chan PS, et al; American Heart Association’s Get With the Guidelines–Resuscitation Investigators. JAMA. 2016 Oct 4;316(13):1375-1382.

Importance: Therapeutic hypothermia is used for patients following both out-of-hospital and in-hospital cardiac arrest. However, randomized trials on its efficacy for the in-hospital setting do not exist, and comparative effectiveness data are limited.

Objective: To evaluate the association between therapeutic hypothermia and survival after in-hospital cardiac arrest.

Design, Setting, and Patients: In this cohort study, within the national Get With the Guidelines-Resuscitation registry, 26 183 patients successfully resuscitated from an in-hospital cardiac arrest between March 1, 2002, and December 31, 2014, and either treated or not treated with hypothermia at 355 US hospitals were identified. Follow-up ended February 4, 2015.

Exposure: Induction of therapeutic hypothermia.

Main Outcomes and Measures: The primary outcome was survival to hospital discharge. The secondary outcome was favorable neurological survival, defined as a Cerebral Performance Category score of 1 or 2 (ie, without severe neurological disability). Comparisons were performed using a matched propensity score analysis and examined for all cardiac arrests and separately for nonshockable (asystole and pulseless electrical activity) and shockable (ventricular fibrillation and pulseless ventricular tachycardia) cardiac arrests.
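The matched propensity score analysis mentioned above can be sketched in miniature. This is a toy illustration of the general technique (a logistic propensity model followed by 1:1 greedy nearest-neighbor matching without replacement), not the registry investigators' actual methodology, and the covariates and cohort are hypothetical:

```python
import math
import random

def fit_propensity(X, treated, epochs=2000, lr=0.1):
    """Plain logistic regression by gradient descent, estimating
    P(treated | covariates). X: list of covariate tuples; treated: 0/1 flags."""
    n, k = len(X), len(X[0])
    w, b = [0.0] * k, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * k, 0.0
        for xi, ti in zip(X, treated):
            p = 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            err = p - ti
            for j in range(k):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return [1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            for xi in X]

def greedy_match(scores, treated):
    """1:1 nearest-neighbor matching on propensity score, without replacement."""
    controls = [i for i, t in enumerate(treated) if t == 0]
    pairs = []
    for i in (i for i, t in enumerate(treated) if t == 1):
        if not controls:
            break
        j = min(controls, key=lambda c: abs(scores[c] - scores[i]))
        pairs.append((i, j))
        controls.remove(j)
    return pairs

# Toy cohort: covariates (age/100, shockable rhythm indicator); values hypothetical.
random.seed(0)
X = [(random.uniform(0.4, 0.9), random.choice([0.0, 1.0])) for _ in range(60)]
treated = [1 if i < 15 else 0 for i in range(60)]
scores = fit_propensity(X, treated)
pairs = greedy_match(scores, treated)
print(f"matched {len(pairs)} treated patients to distinct controls")
```

Matching each hypothermia-treated patient to controls with similar predicted probability of treatment is what allows the outcome comparison to approximate a like-with-like contrast in observational data, though, as the authors note, only a randomized trial can settle efficacy.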

Results: Overall, 1568 of 26 183 patients with in-hospital cardiac arrest (6.0%) were treated with therapeutic hypothermia; 1524 of these patients (mean [SD] age, 61.6 [16.2] years; 58.5% male) were matched by propensity score to 3714 non-hypothermia-treated patients (mean [SD] age, 62.2 [17.5] years; 57.1% male). After adjustment, therapeutic hypothermia was associated with lower in-hospital survival (27.4% vs 29.2%; relative risk [RR], 0.88 [95% CI, 0.80 to 0.97]; risk difference, -3.6% [95% CI, -6.3% to -0.9%]; P = .01), and this association was similar (interaction P = .74) for nonshockable cardiac arrest rhythms (22.2% vs 24.5%; RR, 0.87 [95% CI, 0.76 to 0.99]; risk difference, -3.2% [95% CI, -6.2% to -0.3%]) and shockable cardiac arrest rhythms (41.3% vs 44.1%; RR, 0.90 [95% CI, 0.77 to 1.05]; risk difference, -4.6% [95% CI, -10.9% to 1.7%]). Therapeutic hypothermia was also associated with lower rates of favorable neurological survival for the overall cohort (hypothermia-treated group, 17.0% [246 of 1443 patients]; non-hypothermia-treated group, 20.5% [725 of 3529 patients]; RR, 0.79 [95% CI, 0.69 to 0.90]; risk difference, -4.4% [95% CI, -6.8% to -2.0%]; P less than .001) and for both rhythm types (interaction P = .88).

Conclusions and Relevance: Among patients with in-hospital cardiac arrest, use of therapeutic hypothermia compared with usual care was associated with a lower likelihood of survival to hospital discharge and a lower likelihood of favorable neurological survival. These observational findings warrant a randomized clinical trial to assess efficacy of therapeutic hypothermia for in-hospital cardiac arrest.

20. Evaluation of CT in patients with atypical angina or CP clinically referred for invasive coronary angiography: RCT

BMJ 2016;355:i5441

Objective To evaluate whether invasive coronary angiography or computed tomography (CT) should be performed in patients clinically referred for coronary angiography with an intermediate probability of coronary artery disease.

Design Prospective randomised single centre trial.

Setting University hospital in Germany.

Participants 340 patients with suspected coronary artery disease and a clinical indication for coronary angiography on the basis of atypical angina or chest pain.

Interventions 168 patients were randomised to CT and 172 to coronary angiography. After randomisation one patient declined CT and 10 patients declined coronary angiography, leaving 167 patients (88 women) and 162 patients (78 women) for analysis. Allocation could not be blinded, but blinded independent investigators assessed outcomes.

Main outcome measure The primary outcome measure was major procedural complications within 48 hours of the last procedure related to CT or angiography.

Results Cardiac CT reduced the need for coronary angiography from 100% to 14% (95% confidence interval 9% to 20%, P less than 0.001) and was associated with a significantly greater diagnostic yield from coronary angiography: 75% (53% to 90%) v 15% (10% to 22%), P less than 0.001. Major procedural complications were uncommon (0.3%) and similar across groups. Minor procedural complications were less common in the CT group than in the coronary angiography group: 3.6% (1% to 8%) v 10.5% (6% to 16%), P=0.014. CT shortened the median length of stay from 52.9 hours in the angiography group (interquartile range 49.5-76.4 hours) to 30.0 hours in the CT group (3.5-77.3 hours; P less than 0.001). Overall median exposure to radiation was similar between the CT and angiography groups: 5.0 mSv (interquartile range 4.2-8.7 mSv) v 6.4 mSv (3.4-10.7 mSv), P=0.45. After a median follow-up of 3.3 years, major adverse cardiovascular events had occurred in seven of 167 patients in the CT group (4.2%) and six of 162 (3.7%) in the coronary angiography group (adjusted hazard ratio 0.90, 95% confidence interval 0.30 to 2.69, P=0.86). 79% of patients stated that they would prefer CT for subsequent testing. The study was conducted at a university hospital in Germany and thus the performance of CT may be different in routine clinical practice. The prevalence of coronary artery disease was lower than expected, resulting in an underpowered study for the predefined primary outcome.

Conclusions CT increased the diagnostic yield and was a safe gatekeeper for coronary angiography with no increase in long term events. The length of stay was shortened by 22.9 hours with CT, and patients preferred non-invasive testing.

21. Micro Bits

A. The Supplement Paradox: Negligible Benefits, Robust Consumption

Cohen PA. JAMA. 2016;316(14):1453-1454.

Dietary supplements encompass a wide variety of products from vitamins, minerals, and botanicals to probiotics, protein powders, and fish oils.1 During the past 2 decades, a steady stream of high-quality studies evaluating dietary supplements has yielded predominantly disappointing results about potential health benefits, whereas evidence of harm has continued to accumulate. How consumers have responded to these scientific developments is not known. In this issue of JAMA,2 the report by Kantor and colleagues sheds light on this important question.

To place the results of the study by Kantor et al in context, it is helpful to understand the regulatory changes predating their study. In the late 1980s, 36% of men and 48% of women used vitamins, minerals, and other supplements.3 These high consumption levels increased further after the passage of the Dietary Supplement Health and Education Act of 1994 (DSHEA), a law that continues to define supplement policy to this day. Under DSHEA, all supplements are assumed to be safe until the US Food and Drug Administration (FDA) detects evidence of harm,1 usually only after consumers have been extensively exposed to the product. The lax DSHEA requirements for proof of product safety led to a rapid increase in the number of supplements in the marketplace, from an estimated 4000 in 1994 to 55 000 in 2012.4 Dietary supplements had grown into a greater than $32 billion industry by 2012.5

The period Kantor and colleagues studied, 1999 through 2012, was an era of intense investigation into the health effects of supplements. The National Institutes of Health (NIH) invested an estimated $250 million to $300 million per year in dietary supplement research.4,5 Many major clinical studies were published, but the results generally failed to demonstrate beneficial effects on health. According to a recent summary of this extensive investment: “most of the larger NIH-supported clinical trials of [dietary supplements] failed to demonstrate a significant benefit compared to control groups.”4 Prominent examples of high-quality studies published during this era and showing no benefits of supplements include an evaluation of multivitamins to prevent cancer and heart disease,6 echinacea to treat the common cold,7 St John’s wort to treat major depression,8 and vitamin E to prevent prostate cancer.9

At the same time, the health risks of supplements were also becoming better understood. In the late 1990s and early 2000s, supplements containing ephedra were linked to many serious adverse events including myocardial infarctions, seizures, strokes, and sudden deaths.10 By 2002, national poison centers were receiving more than 10 000 calls related to ephedra poisonings per year.10 Long-term risks began to be recognized as well, for example, beta-carotene supplements were found to actually increase the risk of lung cancer among smokers.11…

The remainder of the editorial (free):

B. Higher CV Risk Seen With Calcium Supplements

But observational study suggests benefit from dietary sources

Calcium supplements might increase cardiovascular risk, whereas dietary calcium was associated with a protective effect, a new observational study found.

Many people -- in particular, older women -- take calcium supplements to prevent or treat osteoporosis, though the supporting evidence for this use is quite thin. In recent years, several studies have raised concerns that calcium supplements might be linked to increased cardiovascular risk. But the precise relationship between calcium -- both from dietary sources and from supplements -- and atherosclerosis has not been carefully studied in randomized trials.

In a paper published in the Journal of the American Heart Association, Erin Michos, MD, MHS, of Johns Hopkins, and colleagues analyzed data from more than 2,700 participants in the MESA (Multi-Ethnic Study of Atherosclerosis) study who did not have diagnosed cardiovascular disease. Among the 1,567 participants who had no coronary artery calcium at baseline, the group in the highest quintile of calcium intake (as assessed by a food questionnaire and medication inventory) had a significant 27% lower likelihood of development of coronary artery calcium (CAC) at 10 years, after adjustment for differences in baseline risk.

But the investigators found a potentially important difference in effect based on the calcium source. Calcium from supplements was associated with a 22% higher risk of CAC. Any adverse effect of calcium supplements could have broad importance, because 43% of U.S. adults take calcium supplements.

C. Kaiser Permanente embraces telemedicine

California-based Kaiser Permanente conducted 52% of patient transactions using virtual visits, online portals or its telehealth apps last year, company CEO Bernard Tyson said, noting that as health care professionals assume risk under value-based care models such as bundled payments, interest in telehealth has increased. Kaiser also saw 4 million appointments scheduled online, 37 million tests viewed online and 20 million health care professional e-mail collaborations in 2014-2015.

D. Zika: Worse Than Thalidomide?

Jeff Lyon. JAMA. 2016;316(12):1246.

A moment of truth is at hand for health experts tracking Zika virus in Latin America and the Caribbean. Thousands of pregnant women who were infected in the past year by Zika, just as it was unmasked as a devastating threat to fetuses, are at the point of giving birth.

Alarmed by the scientific consensus that the Zika virus was behind a 20-fold spike in microcephaly cases reported last year in Brazil, investigators are anxious to see what befalls this new wave of mothers and infants.

Research suggests that nearly a third of deliveries in mothers infected with Zika will involve severe birth complications, including microcephaly, fetal cerebral calcification, and central nervous system alterations (Brasil P et al. NEJM. doi:10.1056/NEJMoa1602412 [published online March 4, 2016]). But as evidence mounts that the virus’ strong affinity for neural stem cells may also cause subtler central nervous system damage, the medical community fears that the current tragedy may give way to an equally horrific second act that will play out over years as exposed children who seemed unscathed at birth exhibit serious neurological ills as they age. Expectations range from auditory and visual problems to cognitive delays and seizure disorders.

Accordingly, the World Health Organization recently called for broadening the definition of Zika-related pathology beyond microcephaly, noting “Zika virus is an intensely neurotropic virus that particularly targets neural progenitor cells, but also—to a lesser extent—neuronal cells in all stages of maturity. … [I]t is possible that many thousands of infants will incur moderate to severe neurological disabilities.”

The outlook is bleak enough that some authorities speak of the Zika epidemic in the same breath as the thalidomide and rubella disasters of the 1960s. Citing these earlier crises, Hal C. Lawrence III, MD, chief executive officer of the American College of Obstetricians and Gynecologists, predicted that the Zika virus’ full toll may not be known “for years downstream.”

E. Explosion Injuries from E-Cigarettes

N Engl J Med 2016; 375:1400-1402

Electronic nicotine-delivery systems (ENDS) include electronic cigarettes (e-cigarettes) and personal vaporizers. The prevalence of ENDS use is increasing among current, former, and never smokers. E-cigarettes share a basic design; common components include an aerosol generator, a flow sensor, a battery, and a solution storage area.1 Many users do not understand the risk of “thermal runaway,” whereby internal battery overheating causes a battery fire or explosion.

At our center, from October 2015 through June 2016, we treated 15 patients with injuries from e-cigarette explosions due to the lithium-ion battery component. Such explosions were initially thought to be rare, but there have been reports, primarily in the media, of 25 separate incidents of e-cigarette explosions from 2009 through 2014 across the United States.2 More recently, there have been case reports in the medical literature.3

The e-cigarette explosion injuries seen among our patients included flame burns (80% of patients), chemical burns (33%), and blast injuries (27%) (Figure 1: Injuries of the Face, Hands, and Thighs Caused by E-Cigarette Explosions). Patients have presented with injuries to the face (20%), hands (33%), and thigh or groin (53%) — injuries that have substantial implications for cosmetic and functional outcomes. Blast injuries have led to tooth loss, traumatic tattooing, and extensive loss of soft tissue, requiring operative débridement and closure of tissue defects. The flame-burn injuries have required extensive wound care and skin grafting, and exposure to the alkali chemicals released from the battery explosion has caused chemical skin burns requiring wound care…

The remainder of the article (free):

F. Study shows physicians spend hours on administrative tasks

Each hour a physician spent on face-to-face clinical time with patients meant almost two additional hours of time during clinic hours spent on documentation tasks, electronic health records and paperwork, according to a study in the Annals of Internal Medicine. Researchers said the study, which included physicians in family medicine, internal medicine, cardiology and orthopedics, showed many physicians also spend an hour or two per night on work at home, and they noted that decreased face time with patients contributes to the growing problem of physician burnout in the U.S.

G. For Acute MI, Better Hospital May Mean Longer Life

More than a year of life expectancy difference seen in Medicare study

In this study, patients admitted to high-performing hospitals after acute myocardial infarction had longer life expectancies than patients treated in low-performing hospitals. This survival benefit occurred in the first 30 days and persisted over the long term.

H. Your devices are probably ruining your productivity. Here’s why

BY Lesley McClurg, KQED Science   October 17, 2016

I’ll admit it. I even take my phone with me to fire off a few texts when I go to the restroom. Or I’ll scroll through my email when I leave the office for lunch. My eyes are often glued to my phone from the moment I wake up, but I often reach the end of my days wondering what I’ve accomplished.

How the Digital Age Zaps Productivity

My productivity mystery was solved after reading “The Distracted Mind: Ancient Brains in a High Tech World,” by University of California, San Francisco neuroscientist Adam Gazzaley. He explains why the brain can’t multitask, and why my near-obsessive efforts to keep up on emails are likely lowering my creative output.

I visited Gazzaley in his UCSF laboratory, Neuroscape, to learn more about the science of distraction. Gazzaley pulled up, on a TV screen, a 3-D image of a brain created from an MRI scan. He pointed to different sections to explain what’s going on when our attention flits between tasks.

The habit of multitasking could lower your score on an IQ test.

“The prefrontal cortex is the area most challenged,” Gazzaley says. “And then visual areas, auditory areas, and the hippocampus — these networks are really what’s challenged when we are constantly switching between multiple tasks that our technological world might throw at us.”

When you engage in one task at a time, the prefrontal cortex works in harmony with other parts of the brain, but when you toss in another task it forces the left and right sides of the brain to work independently. The process of splitting our attention usually leads to mistakes.

In other words, each time our eyes glance away from our computer monitor to sneak a peek at a text message, the brain takes in new information, which reduces our primary focus. We think the mind can juggle two or three activities successfully at once, but Gazzaley says we woefully overestimate our ability to multitask.

“An example is when you attempt to check your email while on a conference call,” says Gazzaley. “The act of doing that makes it so incredibly obvious how you can’t really parallel process two attention-demanding tasks. You either have to catch up and ask what happened in the conversation, or you have to read over the email before you send it — if you’re wise!”

Responding to an Email Could Cost You 28 Minutes

Gazzaley stresses that our tendency to respond immediately to emails and texts hinders high-level thinking. If you’re working on a project and you stop to answer an email, the research shows, it will take you nearly a half-hour to get back on task…

I. A Randomized Trial of Long-Term Oxygen for COPD with Moderate Desaturation

In patients with stable COPD and resting or exercise-induced moderate desaturation, the prescription of long-term supplemental oxygen did not result in a longer time to death or first hospitalization than no long-term supplemental oxygen, nor did it provide sustained benefit with regard to any of the other measured outcomes.

K. Diagnosis of Acute Gout: A Clinical Practice Guideline From the American College of Physicians

L. Life expectancies getting shorter for Americans, study finds

The long-standing trend of Americans' life expectancies becoming longer appears to be reversing, based on a Society of Actuaries study. The shortening of life expectancy applies to men and women and every age group from 25 to 85.

M. Researchers examine efficacy of migraine drugs in youths

Fifty-two percent of children who received the migraine drug amitriptyline and 55% of those who received topiramate had a 50% reduction in the number of headache days, compared with 61% of those who received a placebo pill, according to a study in the New England Journal of Medicine. The findings, based on data from a 24-week randomized clinical trial involving 328 youths with migraines ages 8 to 17, also showed significantly higher rates of side effects such as dry mouth, fatigue, mood changes, and tingling in the hands, arms, legs or feet among those who received the prescription medicines.

N. Empathy predicts an experimental pain reduction during touch

Studies have provided evidence for pain-alleviating effects of tactile stimulation, yet the effects of social touch remain largely unexplored. This study examined the analgesic effects of social touch and tested the moderating role of the toucher's empathy. Tonic heat stimuli were administered to female participants; their partners either watched or touched their hands, a stranger touched their hands, or no one interacted with them. The results revealed diminished levels of pain during partners' touch compared with all other control conditions. The authors note that pain perception models should be extended to take into account psychological characteristics of observers.

O. No survival difference found based on blood storage time

A randomized study in the New England Journal of Medicine involving about 31,500 patients at six hospitals in the US, Australia, Israel and Canada found no significant difference in survival rates based on the freshness of transfused blood, with an in-hospital mortality rate of 9.1% for those who received the newest blood in the study compared with 8.7% for those who received the oldest. "Our study provides strong evidence that transfusion of fresh blood does not improve patient outcomes, and this should reassure clinicians that fresher is not better," said researcher Nancy Heddle of the McMaster Centre for Transfusion Research in Canada.