From the
recent medical literature…
1. No SIRS; Quick SOFA Instead: SCCM Redefines Sepsis without EM
Faust JS. Ann
Emerg Med. 2016;67(5):A15–A19.
It’s barely
noon and already 5 patients have triggered a sepsis alert in triage. Once that
happens, patients are expedited into the emergency department (ED), assessed,
and treated urgently. Laboratory tests are ordered; fluids are given. Two
systemic inflammatory response syndrome criteria plus suspected infection are
all it takes to sound the alarm, but this may be about to change.
Until
February, the Society of Critical Care Medicine’s (SCCM’s) definitions for
sepsis had remained largely unchanged since 1991, although some revisions were
incorporated into a 2003 update. To reflect changes in the understanding of sepsis
pathophysiology and to respond to the need for more precise definitions, a task
force composed of intensivists and infectious disease, surgical, and pulmonary
specialists (emergency medicine was notably absent) was convened from
representatives of the SCCM and the European Society of Intensive Care Medicine
(ESICM) in January 2014. The task force was given the mandate to determine the
Third International Consensus Definitions for Sepsis and Septic Shock, which
were unveiled at the 45th annual SCCM Critical Care Congress in Orlando, FL, on
February 22, 2016, and published as a trio of articles in the Journal of the
American Medical Association the same day.
According to
Clifford S. Deutschman, MS, MD, of the Feinstein Institute for Medical
Research, who, along with Mervyn Singer, MBBS, of University College London,
cochaired the task force, the new definition states that sepsis is
life-threatening organ dysfunction resulting from a dysregulated host response
to infection (Figure 1). Septic shock is now defined as a subset of sepsis
patients in whom “circulatory and cellular/metabolic abnormalities are profound
enough to substantially increase mortality.”1
In the
creation of these definitions, the term “severe sepsis” was determined to be
redundant with the new definition of sepsis and thus was eliminated from
official nomenclature.
Dr.
Deutschman highlighted an important departure from previous official statements
on sepsis. “We didn’t just do definitions,” he said by telephone in February.
“We did both definitions and clinical criteria.” The purpose was to create the
most scientifically valid description of sepsis possible while providing a
definition that would be clinically useful. Furthermore, the task force sought
to distinguish patients with sepsis—in whom organ damage increased the risk of
mortality to greater than 10%—from patients with septic shock, whose mortality
exceeded 40%. The criteria, Dr. Deutschman said, are intended to help
physicians identify and stratify septic patients by using “proxies for a likely
sepsis-related outcome,” which was defined as death from sepsis or a course of
3 or more days in an ICU.2
The problem
of quickly identifying patients who may have sepsis was also addressed.
Systemic inflammatory response syndrome (SIRS) criteria—in use since 1991—were
deemed “unhelpful” by the task force. In work led by Christopher W. Seymour,
MD, MSc, of the University of Pittsburgh, SIRS was found to be reasonably
sensitive, but extremely nonspecific.2
As an
alternative, the use of the Sepsis-related/Sequential Organ Failure Assessment
(SOFA) was discussed (Figure 2). However, because a SOFA score cannot readily
be calculated in out-of-hospital, triage, and some emergency settings, a new
measure relying on 3 easily obtainable clinical features, termed quick SOFA,
was derived and validated retrospectively (Figure 3)…
The rest of
the essay (full-text free): http://www.annemergmed.com/article/S0196-0644(16)00216-X/fulltext
DRV Note: q(uick)SOFA score (1 point each; score range 0-3):
- New or worsened AMS (GCS ≤13)
- RR ≥22/min
- SBP ≤100 mmHg
If the sum is 0-1 points, the patient is not high risk by qSOFA. If sepsis is
still suspected, continue to monitor, evaluate, and initiate treatment as
appropriate, including serial qSOFA assessments.
If the sum is 2-3 points, the patient is high risk by qSOFA. Assess for
evidence of organ dysfunction with blood testing, including serum lactate, and
calculate the full SOFA score. Patients meeting these qSOFA criteria should
prompt consideration of infection even if infection was not previously
suspected.
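The three-item rule above is simple enough to express directly. A minimal
sketch in Python, using the thresholds listed in the note (function name and
argument names are mine, not from the source):

```python
def qsofa(gcs: int, rr: float, sbp: float) -> int:
    """Quick SOFA: 1 point each for altered mentation (GCS <= 13),
    respiratory rate >= 22/min, and systolic BP <= 100 mmHg."""
    score = 0
    if gcs <= 13:   # new or worsened altered mental status
        score += 1
    if rr >= 22:    # respiratory rate, breaths/min
        score += 1
    if sbp <= 100:  # systolic blood pressure, mmHg
        score += 1
    return score

# A sum of 2-3 flags the patient as high risk by qSOFA.
print(qsofa(gcs=14, rr=24, sbp=95))  # 2 -> high risk
```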
“The qSOFA
was tested in 4 external data sets comprising 706,399 patient encounters at 165
hospitals in out-of-hospital (n = 6508), non-ICU (n = 619,137), and ICU (n = 80,595)
settings...” Seymour CW, et al. Assessment of Clinical Criteria for Sepsis: For
the Third International Consensus Definitions for Sepsis and Septic Shock
(Sepsis-3). JAMA. 2016 Feb 23;315(8):762-74. Abstract: http://www.ncbi.nlm.nih.gov/pubmed/26903335
2. New Stroke Research
A.
One-Year Risk of Stroke after Transient Ischemic Attack or Minor Stroke
Amarenco P,
et al. N Engl J Med. 2016;374:1533-1542.
Background
Previous
studies conducted between 1997 and 2003 estimated that the risk of stroke or an
acute coronary syndrome was 12 to 20% during the first 3 months after a
transient ischemic attack (TIA) or minor stroke. The TIAregistry.org project
was designed to describe the contemporary profile, etiologic factors, and
outcomes in patients with a TIA or minor ischemic stroke who receive care in
health systems that now offer urgent evaluation by stroke specialists.
Methods
We recruited
patients who had had a TIA or minor stroke within the previous 7 days. Sites
were selected if they had systems dedicated to urgent evaluation of patients
with TIA. We estimated the 1-year risk of stroke and of the composite outcome
of stroke, an acute coronary syndrome, or death from cardiovascular causes. We
also examined the association of the ABCD2 score for the risk of stroke (range,
0 [lowest risk] to 7 [highest risk]), findings on brain imaging, and cause of
TIA or minor stroke with the risk of recurrent stroke over a period of 1 year.
Results
From 2009
through 2011, we enrolled 4789 patients at 61 sites in 21 countries. A total of
78.4% of the patients were evaluated by stroke specialists within 24 hours
after symptom onset. A total of 33.4% of the patients had an acute brain
infarction, 23.2% had at least one extracranial or intracranial stenosis of 50%
or more, and 10.4% had atrial fibrillation. The Kaplan–Meier estimate of the
1-year event rate of the composite cardiovascular outcome was 6.2% (95%
confidence interval, 5.5 to 7.0). Kaplan–Meier estimates of the stroke rate at
days 2, 7, 30, 90, and 365 were 1.5%, 2.1%, 2.8%, 3.7%, and 5.1%, respectively.
In multivariable analyses, multiple infarctions on brain imaging, large-artery
atherosclerosis, and an ABCD2 score of 6 or 7 were each associated with more
than a doubling of the risk of stroke.
Conclusions
We observed a
lower risk of cardiovascular events after TIA than previously reported. The
ABCD2 score, findings on brain imaging, and status with respect to large-artery
atherosclerosis helped stratify the risk of recurrent stroke within 1 year
after a TIA or minor stroke. (Funded by Sanofi and Bristol-Myers Squibb.)
B.
The Value of Urgent Specialized Care for TIA and Minor Stroke
Sacco RL, et
al. N Engl J Med 2016; 374:1577-1579.
Patients with
minor stroke or transient ischemic attack (TIA), characterized as a brief
episode of neurologic dysfunction caused by focal cerebral ischemia without
infarction, have the least amount of disability and the most to lose should
they have a stroke. Patients with vanishing symptoms may slip through our
systems for detecting acute stroke owing either to patients’ delays in seeking
medical attention or clinicians’ assessments that urgent treatment is not
needed. This lost opportunity is even more worrisome given the tremendous
improvements in the quality of primary and secondary stroke prevention that
include evidence-based treatments with antiplatelet and oral anticoagulation
therapy; better control of hypertension, dyslipidemia, and diabetes; more
accurate and sensitive neuroimaging; increased use of thrombolytics and
interventional treatments for acute stroke; and organized systems of stroke
care that are designed for the rapid evaluation of symptomatic patients. Just
as the rapid diagnosis and treatment of acute stroke has improved outcomes, the
urgent evaluation of patients with a TIA or minor stroke and the use of preventive
treatments can markedly reduce the risk of stroke.1 Demonstrations of these
dramatic results in clinical trials, however, are not enough and require
implementation and dissemination of successful community programs.
In this issue
of the Journal, Amarenco and colleagues2 report a very low risk of stroke after
a TIA or minor stroke in specialized TIA units in which urgent evidence-based
care is delivered by stroke specialists. Using data from a large, international
registry (the TIAregistry.org project), the authors report a risk of stroke of
only 1.5% at 2 days, 2.1% at 7 days, and 3.7% at 90 days after symptom onset.
Although this was not a randomized trial and there was no comparison group to
assess whether specialized units performed better than nonspecialized units,
these risks are substantially lower than expected. Outcomes in this study were
at least 50% lower than those reported in previous studies in which the risk of
stroke was between 8% and 20% at 30 and 90 days and as high as 10% within the
first 2 days after symptom onset.3-5
The
TIAregistry.org study is not the first to show that rates of stroke after TIA
and minor stroke have declined…
C.
Endovascular thrombectomy after large-vessel ischaemic stroke: a meta-analysis
of individual patient data from five randomised trials.
Goyal M et
al. Lancet 2016 Feb 18 [Epub ahead of print]
BACKGROUND: In
2015, five randomised trials showed efficacy of endovascular thrombectomy over
standard medical care in patients with acute ischaemic stroke caused by
occlusion of arteries of the proximal anterior circulation. In this
meta-analysis we, the trial investigators, aimed to pool individual patient
data from these trials to address remaining questions about whether the therapy
is efficacious across the diverse populations included.
METHODS: We
formed the HERMES collaboration to pool patient-level data from five trials (MR
CLEAN, ESCAPE, REVASCAT, SWIFT PRIME, and EXTEND IA) done between December,
2010, and December, 2014. In these trials, patients with acute ischaemic stroke
caused by occlusion of the proximal anterior artery circulation were randomly
assigned to receive either endovascular thrombectomy within 12 h of symptom
onset or standard care (control), with a primary outcome of reduced disability
on the modified Rankin Scale (mRS) at 90 days. By direct access to the study
databases, we extracted individual patient data that we used to assess the
primary outcome of reduced disability on mRS at 90 days in the pooled
population and examine heterogeneity of this treatment effect across
prespecified subgroups. To account for between-trial variance we used mixed-effects
modelling with random effects for parameters of interest. We then used
mixed-effects ordinal logistic regression models to calculate common odds
ratios (cOR) for the primary outcome in the whole population (shift analysis)
and in subgroups after adjustment for age, sex, baseline stroke severity
(National Institutes of Health Stroke Scale score), site of occlusion (internal
carotid artery vs M1 segment of middle cerebral artery vs M2 segment of middle
cerebral artery), intravenous alteplase (yes vs no), baseline Alberta Stroke
Program Early CT score, and time from stroke onset to randomisation.
FINDINGS: We
analysed individual data for 1287 patients (634 assigned to endovascular
thrombectomy, 653 assigned to control). Endovascular thrombectomy led to
significantly reduced disability at 90 days compared with control (adjusted cOR
2·49, 95% CI 1·76-3·53; p<0·0001). The number needed to treat with
endovascular thrombectomy to reduce disability by at least one level on mRS for
one patient was 2·6. Subgroup analysis of the primary endpoint showed no
heterogeneity of treatment effect across prespecified subgroups for reduced
disability (p for interaction=0·43). Effect sizes favouring endovascular
thrombectomy over control were present in several strata of special interest,
including in patients aged 80 years or older (cOR 3·68, 95% CI 1·95-6·92),
those randomised more than 300 min after symptom onset (1·76, 1·05-2·97), and
those not eligible for intravenous alteplase (2·43, 1·30-4·55). Mortality at 90
days and risk of parenchymal haematoma and symptomatic intracranial haemorrhage
did not differ between populations.
INTERPRETATION:
Endovascular thrombectomy is of benefit to most patients with acute ischaemic
stroke caused by occlusion of the proximal anterior circulation, irrespective
of patient characteristics or geographical location. These findings will have
global implications on structuring systems of care to provide timely treatment
to patients with acute ischaemic stroke due to large vessel occlusion.
D.
Analysis of Workflow and Time to Treatment and the Effects on Outcome in
Endovascular Treatment of Acute Ischemic Stroke: Results from the SWIFT PRIME RCT
[Time Matters!]
Goyal M, et
al. Radiology. 2016 Apr 19 [Epub ahead of print]
Purpose To study
the relationship between functional independence and time to reperfusion in the
Solitaire with the Intention for Thrombectomy as Primary Endovascular Treatment
for Acute Ischemic Stroke (SWIFT PRIME) trial in patients with disabling acute
ischemic stroke who underwent endovascular therapy plus intravenous tissue
plasminogen activator (tPA) administration versus tPA administration alone and
to investigate variables that affect time spent during discrete steps.
Materials and
Methods Data were analyzed from the SWIFT PRIME trial, a global, multicenter,
prospective study in which outcomes were compared in patients treated with
intravenous tPA alone or in combination with the Solitaire device (Covidien,
Irvine, Calif). Between December 2012 and November 2014, 196 patients were
enrolled. The relation between time from (a) symptom onset to reperfusion and
(b) imaging to reperfusion and clinical outcome was analyzed, along with
patient and health system characteristics that affect discrete steps in patient
workflow. Multivariable logistic regression was used to assess relationships
between time and outcome; negative binomial regression was used to evaluate
effects on workflow. The institutional review board at each site approved the
trial. Patients provided written informed consent, or, at select sites, there
was an exception from having to acquire explicit informed consent in emergency
circumstances.
Results In
the stent retriever arm of the study, symptom onset to reperfusion time of 150
minutes led to 91% estimated probability of functional independence, which
decreased by 10% over the next hour and by 20% with every subsequent hour of
delay. Time from arrival at the emergency department to arterial access was 90
minutes (interquartile range, 69-120 minutes), and time to reperfusion was 129
minutes (interquartile range, 108-169 minutes). Patients who initially arrived
at a referring facility had longer symptom onset to groin puncture times
compared with patients who presented directly to the endovascular-capable
center (275 vs 179.5 minutes, P<.001).
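The reported time-benefit relationship can be sketched as a piecewise-linear
curve: 91% probability of functional independence at 150 minutes, falling 10
points over the next hour and 20 points per hour thereafter. This is a rough
illustration of the abstract's numbers only; the trial's actual model was a
multivariable logistic regression, and the function name is mine:

```python
def est_prob_independence(minutes: float) -> float:
    """Illustrative piecewise-linear reading of the SWIFT PRIME result:
    91% at 150 min, minus 10 points over the next hour (to 210 min),
    minus 20 points per hour after that. Not the trial's fitted model."""
    if minutes <= 150:
        return 91.0
    if minutes <= 210:
        return 91.0 - 10.0 * (minutes - 150) / 60.0
    return max(0.0, 81.0 - 20.0 * (minutes - 210) / 60.0)
```

Even on this crude reading, a one-hour delay past the 210-minute mark costs
roughly 20 percentage points of estimated functional independence.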
Conclusion
Fast reperfusion leads to improved functional outcome among patients with acute
stroke treated with stent retrievers. Detailed attention to workflow with
iterative feedback and aggressive time goals may have contributed to efficient
workflow environments. © RSNA, 2016 Online supplemental material is available
for this article.
3. Zika Virus: A NEJM Review
Petersen LR,
et al. N Engl J Med 2016; 374:1552-1563.
In 1947, a
study of yellow fever yielded the first isolation of a new virus, from the
blood of a sentinel rhesus macaque that had been placed in the Zika Forest of
Uganda.1 Zika virus remained in relative obscurity for nearly 70 years; then,
within the span of just 1 year, Zika virus was introduced into Brazil from the
Pacific Islands and spread rapidly throughout the Americas.2 It became the
first major infectious disease linked to human birth defects to be discovered
in more than half a century and created such global alarm that the World Health
Organization (WHO) would declare a Public Health Emergency of International
Concern.3 This review describes the current understanding of the epidemiology,
transmission, clinical characteristics, and diagnosis of Zika virus infection,
as well as the future outlook with regard to this disease…
4. Ketamine as Rescue Treatment for Difficult-to-Sedate Severe
Acute Behavioral Disturbance in the ED
Isbister GK,
et al. Ann Emerg Med. 2016;67(5):581–587.e1.
STUDY
OBJECTIVE: We investigate the effectiveness and safety of ketamine to sedate
patients with severe acute behavioral disturbance who have failed previous
attempts at sedation.
METHODS: This
was a prospective study of patients given ketamine for sedation who had failed
previous sedation attempts. Patients with severe acute behavioral disturbance
requiring parenteral sedation were treated with a standardized sedation
protocol including droperidol. Demographics, drug dose, observations, and
adverse effects were recorded. The primary outcome was the number of patients
who failed to sedate within 120 minutes of ketamine administration or who
required further sedation within 1 hour.
RESULTS:
Forty-nine patients from 2 hospitals were administered rescue ketamine during
27 months; median age was 37 years (range 20-82 years); 28 were men. Police
were involved with 20 patients. Previous sedation included droperidol (10 mg; 1),
droperidol (10+10 mg; 33), droperidol (10+10+5 mg; 1), droperidol (10+10+10 mg;
11), and combinations of droperidol and benzodiazepines (2) and midazolam alone
(1). The median dose of ketamine was 300 mg (range 50 to 500 mg). Five patients
(10%; 95% confidence interval 4% to 23%) were not sedated within 120 minutes or
required additional sedation within 1 hour. Four of 5 patients received 200 mg
or less. Median time to sedation postketamine was 20 minutes (interquartile
range 10 to 30 minutes; full range 2 to 500 minutes). Three patients (6%) had adverse
effects, 2 had vomiting, and a third had a transient oxygen desaturation to 90%
after ketamine that responded to oxygen.
CONCLUSION:
Ketamine appeared effective and did not cause obvious harm in this small sample
and is a potential option for patients who have failed previous attempts at
sedation. A dose of 4 to 5 mg/kg is suggested, and doses less than 200 mg are
associated with treatment failure.
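The suggested dosing is a simple weight-based calculation. For illustration
only (not clinical guidance; function name and default are assumptions, the
4-5 mg/kg range and the 200 mg failure threshold are from the abstract):

```python
def ketamine_rescue_dose_mg(weight_kg: float, mg_per_kg: float = 4.0) -> float:
    """Weight-based rescue ketamine dose per the authors' suggestion of
    4-5 mg/kg; doses under 200 mg were associated with treatment failure."""
    return weight_kg * mg_per_kg

# e.g., an 80 kg patient at 4 mg/kg -> 320 mg, comfortably above the
# 200 mg threshold below which 4 of the 5 failures occurred.
```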
5. New Cardiac Arrest Research
A.
Amiodarone, Lidocaine, or Placebo in Out-of-Hospital Cardiac Arrest
Kudenchuk PJ,
et al. Resuscitation Outcomes Consortium Investigators. N Engl J Med. 2016 Apr
4 [Epub ahead of print].
Background
Antiarrhythmic drugs are used commonly in out-of-hospital cardiac arrest for
shock-refractory ventricular fibrillation or pulseless ventricular tachycardia,
but without proven survival benefit.
Methods In
this randomized, double-blind trial, we compared parenteral amiodarone,
lidocaine, and saline placebo, along with standard care, in adults who had
nontraumatic out-of-hospital cardiac arrest, shock-refractory ventricular fibrillation
or pulseless ventricular tachycardia after at least one shock, and vascular
access. Paramedics enrolled patients at 10 North American sites. The primary
outcome was survival to hospital discharge; the secondary outcome was favorable
neurologic function at discharge. The per-protocol (primary analysis)
population included all randomly assigned participants who met eligibility
criteria and received any dose of a trial drug and whose initial cardiac-arrest
rhythm of ventricular fibrillation or pulseless ventricular tachycardia was
refractory to shock. Results In the per-protocol population, 3026 patients were
randomly assigned to amiodarone (974), lidocaine (993), or placebo (1059); of
those, 24.4%, 23.7%, and 21.0%, respectively, survived to hospital discharge.
The difference in survival rate for amiodarone versus placebo was 3.2
percentage points (95% confidence interval [CI], -0.4 to 7.0; P=0.08); for
lidocaine versus placebo, 2.6 percentage points (95% CI, -1.0 to 6.3; P=0.16);
and for amiodarone versus lidocaine, 0.7 percentage points (95% CI, -3.2 to
4.7; P=0.70). Neurologic outcome at discharge was similar in the three groups.
There was heterogeneity of treatment effect with respect to whether the arrest
was witnessed (P=0.05); active drugs were associated with a survival rate that
was significantly higher than the rate with placebo among patients with
bystander-witnessed arrest but not among those with unwitnessed arrest. More
amiodarone recipients required temporary cardiac pacing than did recipients of
lidocaine or placebo.
Conclusions
Overall, neither amiodarone nor lidocaine resulted in a significantly higher
rate of survival or favorable neurologic outcome than the rate with placebo
among patients with out-of-hospital cardiac arrest due to initial
shock-refractory ventricular fibrillation or pulseless ventricular tachycardia.
B.
Early administration of epinephrine (adrenaline) in patients with cardiac
arrest with initial shockable rhythm in hospital: propensity score
matched analysis.
Andersen LW,
et al. American Heart Association’s Get With The Guidelines-Resuscitation
Investigators. BMJ. 2016 Apr 6;353:i1577.
OBJECTIVES: To evaluate whether patients who experience
cardiac arrest in hospital receive epinephrine (adrenaline) within two minutes
after the first defibrillation (contrary to American Heart Association
guidelines) and to evaluate the association between early administration of
epinephrine and outcomes in this population.
DESIGN:
Prospective observational cohort study.
SETTING:
Analysis of data from the Get With The Guidelines-Resuscitation registry, which
includes data from more than 300 hospitals in the United States.
PARTICIPANTS:
Adults in hospital who experienced cardiac arrest with an initial shockable
rhythm, including patients who had a first defibrillation within two minutes of
the cardiac arrest and who remained in a shockable rhythm after defibrillation.
INTERVENTION:
Epinephrine given within two minutes after the first defibrillation.
MAIN OUTCOME
MEASURES: Survival to hospital discharge. Secondary outcomes included return of
spontaneous circulation and survival to hospital discharge with a good
functional outcome. A propensity score was calculated for the receipt of
epinephrine within two minutes after the first defibrillation, based on
multiple characteristics of patients, events, and hospitals. Patients who
received epinephrine at either zero, one, or two minutes after the first
defibrillation were then matched on the propensity score with patients who were
"at risk" of receiving epinephrine within the same minute but who did
not receive it.
RESULTS: 2978
patients were matched on the propensity score, and the groups were well
balanced. 1510 (51%) patients received epinephrine within two minutes after the
first defibrillation, which is contrary to current American Heart Association
guidelines. Epinephrine given within the first two minutes after the first
defibrillation was associated with decreased odds of survival in the propensity
score matched analysis (odds ratio 0.70, 95% confidence interval 0.59 to 0.82;
P<0.001). Early epinephrine administration was also associated with
decreased odds of return of spontaneous circulation (0.71, 0.60 to 0.83;
P<0.001) and good functional outcome (0.69, 0.58 to 0.83; P<0.001).
CONCLUSION:
Half of patients with a persistent shockable rhythm received epinephrine within
two minutes after the first defibrillation, contrary to current American Heart
Association guidelines. The receipt of epinephrine within two minutes after the
first defibrillation was associated with decreased odds of survival to hospital
discharge as well as decreased odds of return of spontaneous circulation and
survival to hospital discharge with a good functional outcome.
C.
Defibrillation time intervals and outcomes of cardiac arrest in hospital:
retrospective cohort study from Get With The Guidelines-Resuscitation registry.
Bradley SM,
et al. American Heart Association’s Get With The Guidelines-Resuscitation
Investigators. BMJ. 2016 Apr 6;353:i1653.
OBJECTIVE: To
describe temporal trends in the time interval between first and second attempts
at defibrillation and the association between this time interval and outcomes
in patients with persistent ventricular tachycardia or ventricular fibrillation
(VT/VF) arrest in hospital.
DESIGN:
Retrospective cohort study.
SETTING: 172
hospitals in the United States participating in the Get With The
Guidelines-Resuscitation registry, 2004-12.
PARTICIPANTS:
Adults who received a second defibrillation attempt for persistent VT/VF arrest
within three minutes of a first attempt.
INTERVENTIONS:
Second defibrillation attempts categorized as early (time interval of up to and
including one minute between first and second defibrillation attempts) or
deferred (time interval of more than one minute between first and second
defibrillation attempts).
MAIN OUTCOME
MEASURE: Survival to hospital discharge.
RESULTS:
Among 2733 patients with persistent VT/VF after the first defibrillation
attempt, 1121 (41%) received a deferred second attempt. Deferred second
defibrillation for persistent VT/VF increased from 26% in 2004 to 57% in 2012
(P<0.001 for trend). Compared with early second defibrillation,
unadjusted patient outcomes were significantly worse with deferred second
defibrillation (57.4% v 62.5% for return of spontaneous circulation, 38.4% v
43.6% for survival to 24 hours, and 24.7% v 30.8% for survival to hospital
discharge; P<0.01 for all comparisons). After risk adjustment, deferred second
defibrillation was not associated with survival to hospital discharge
(propensity weighting adjusted risk ratio 0.89, 95% confidence interval 0.78 to
1.01; P=0.08; hierarchical regression adjusted 0.92, 0.83 to 1.02; P=0.1).
CONCLUSIONS:
Since 2004, the use of deferred second defibrillation for persistent VT/VF in
hospital has doubled. Deferred second defibrillation was not associated with
improved survival.
6. How to read a scientific paper
By Adam Ruben.
Science. Jan. 20, 2016, 3:15 PM
Nothing makes
you feel stupid quite like reading a scientific journal article.
I remember my
first experience with these ultra-congested and aggressively bland manuscripts,
so dense that scientists are sometimes caught eating them to stay regular. I
was in college taking a seminar course in which we had to read and discuss a
new paper each week. And something just wasn’t working for me.
Every week I
would sit with the article, read every single sentence, and then discover that
I hadn’t learned a single thing. I’d attend class armed with exactly one piece
of knowledge: I knew I had read the paper. The instructor would ask a question;
I’d have no idea what she was asking. She’d ask a simpler question—still no
idea. But I’d read the damn paper!
It reminded
me of kindergarten, when I would feel proud after reading a book above my grade
level. But if you had asked me a simple question about the book’s contents—What
kind of animal is Wilbur? How did Encyclopedia Brown know that Bugs Meany
wasn’t really birdwatching?—I couldn’t have answered it.
A few weeks
into the seminar, I decided enough was enough. I wasn’t going to read another
paper without understanding it. So I took that week’s journal article to the
library. Not just the regular library, but the obscure little biology library,
one of those dusty academic hidey-holes only populated by the most wretched
forms of life, which are, of course, insects and postdocs.
I placed the
paper on a large empty desk. I eliminated all other distractions. To avoid
interruptions from friends encouraging alcohol consumption, as friends do in
college, I sat in an obscure anteroom with no foot traffic. To avoid
interruptions from cellphone calls, I made sure it was 1999.
Most
importantly, if I didn’t understand a word in a sentence, I forbade myself from
proceeding to the next sentence until I looked it up in a textbook and then
reread the sentence until it made sense.
I
specifically remember this happening with the word “exogenous.” Somehow I had
always glossed over this word, as though it was probably unimportant to its
sentence. Wrong.
It took me
more than 2 hours to read a three-page paper. But this time, I actually
understood it.
And I
thought, “Wow. I get it. I really get it.”
And I
thought, “Oh crap. I’m going to have to do this again, aren’t I?”
If you’re at
the beginning of your career in science, you may be struggling with the same problem.
It may help you to familiarize yourself with the 10 Stages of Reading a
Scientific Paper:
1. Optimism. “This can’t be too difficult,” you
tell yourself with a smile—in the same way you tell yourself, “It’s not
damaging to drink eight cups of coffee a day” or “There are plenty of
tenure-track jobs.” After all, you’ve been reading words for decades. And
that’s all a scientific paper is, right? Words?
2. Fear. This is the stage when you realize,
“Uh … I don’t think all of these are words.” So you slow down a little. Sound
out the syllables, parse the jargon, look up the acronyms, and review your work
several times. Congratulations: You have now read the title.
3. Regret. You begin to realize that you should
have budgeted much more time for this whole undertaking. Why, oh why, did you
think you could read the article in a single bus ride? If only you had more
time. If only you had one of those buzzer buttons from workplaces in the 1960s,
and you could just press it and say, “Phoebe, cancel my January.” If only there
was a compact version of the same article, something on the order of 250 or
fewer words, printed in bold at the beginning of the paper…
4. Corner-cutting. Why, what’s this? An abstract, all
for me? Blessed be the editors of scientific journals who knew that no article
is comprehensible, so they asked their writers to provide, à la Spaceballs,
“the short, short version.” Okay. Let’s do this.
5. Bafflement. What the hell? Was that abstract
supposed to explain something? Why was the average sentence 40 words long? Why
were there so many acronyms? Why did the authors use the word “characterize”
five times?
6. Distraction. What if there was, like, a
smartphone for ducks? How would that work? What would they use it for? And what
was that Paul Simon lyric, the one from “You Can Call Me Al,” that’s been in
your head all day? How would your life change if you owned a bread maker? You’d
have to buy yeast. Is yeast expensive? You could make your own bread every few
days, but then it might go stale. It’s not the same as store-bought bread; it’s
just not. Oh, right! “Don’t want to end up a cartoon in a cartoon graveyard.”
Is Paul Simon still alive? You should check Wikipedia. Sometimes you confuse
him with Paul McCartney or Paul Shaffer. Shame about David Bowie. Can you put
coffee in a humidifier?
7. Realization that 15 minutes have
gone by and you haven’t progressed to the next sentence.
8. Determination. All righty. Really gonna read this
time. Really gonna do it. Yup, yuppers, yup-a-roo, readin’ words is what you
do. Let’s just point those pupils at the dried ink on the page, and …
9. Rage. HOW COULD ANY HUMAN BRAIN PRODUCE
SUCH SENTENCES?
10. Genuine contemplation of a career
in the humanities. Academic
papers written on nonscientific subjects are easy to understand, right? Right?
What a
strange document a scientific journal article is. We work on them for months or
even years. We write them in a highly specialized vernacular that even most
other scientists don’t share. We place them behind a paywall and charge
something ridiculous, like $34.95, for the privilege of reading them. We so
readily accept their inaccessibility that we have to start “journal clubs” in
the hopes that our friends might understand them and summarize them for us.
Can you imagine
if mainstream magazine articles were like science papers? Picture a Time cover
story with 48 authors. Or a piece in The Economist that required, after every
object described, a parenthetical listing of the company that produced the
object and the city where that company is based. Or a People editorial about
Jimmy Kimmel that could only be published following a rigorous review process
by experts in the field of Jimmy Kimmel.
Do you know
what you’d call a magazine article that required intellectual scrutiny and
uninterrupted neural commitment to figure out what it’s even trying to say?
You’d call it a badly written article.
So for those
new to reading journals, welcome. Good luck. And we’re sorry. We’re trying to
write articles comprehensibly, but sometimes our subdiscipline is so
hyperspecific that we need a million acronyms. And sometimes we’re attempting
to sound like good scientists by copying the tone of every article we’ve read.
And sometimes we’re just writing badly.
Quackberry.
That’s what you’d call the smartphone for ducks.
7. Ultrasound in Emergency Medicine
A.
Mistakes and Pitfalls Associated with Two-Point Compression Ultrasound for DVT
Zitek T, et
al. West J Emerg Med. 2016;17(2):201-208.
Introduction:
Two-point compression ultrasound is purportedly a simple and accurate means to
diagnose proximal lower extremity deep vein thrombosis (DVT), but the pitfalls
of this technique have not been fully elucidated. The objective of this study
is to determine the accuracy of emergency medicine resident-performed two-point
compression ultrasound, and to determine what technical errors are commonly
made by novice ultrasonographers using this technique.
Methods: This
was a prospective diagnostic test assessment of a convenience sample of adult
emergency department (ED) patients suspected of having a lower extremity DVT.
After brief training on the technique, residents performed two-point
compression ultrasounds on enrolled patients. Subsequently a radiology
department ultrasound was performed and used as the gold standard. Residents
were instructed to save videos of their ultrasounds for technical analysis.
Results:
Overall, 288 two-point compression ultrasound studies were performed. There
were 28 cases that were deemed to be positive for DVT by radiology ultrasound.
Among these 28, 16 were identified by the residents with two-point compression.
Among the 260 cases deemed to be negative for DVT by radiology ultrasound, 10
were thought to be positive by the residents using two-point compression. This
led to a sensitivity of 57.1% (95% CI [38.8-75.5]) and a specificity of 96.1%
(95% CI [93.8-98.5]) for resident-performed two-point compression ultrasound.
This corresponds to a positive predictive value of 61.5% (95% CI [42.8-80.2])
and a negative predictive value of 95.4% (95% CI [92.9-98.0]). The positive likelihood ratio is 14.9 (95% CI
[7.5-29.5]) and the negative likelihood ratio is 0.45 (95% CI [0.29-0.68]).
Video analysis revealed that in four cases the resident did not identify a DVT
because the thrombus was isolated to the superficial femoral vein (SFV), which is
not evaluated by two-point compression. Moreover, the video analysis revealed
that the most common mistake made by the residents was inadequate visualization
of the popliteal vein.
Conclusion:
Two-point compression ultrasound does not identify isolated SFV thrombi, which
reduces its sensitivity. Moreover, this technique may be more difficult than
previously reported, in part because novice ultrasonographers have difficulty
properly assessing the popliteal vein.
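For readers who want to sanity-check the reported test characteristics, they follow directly from the 2×2 table implied by the abstract (16 true positives, 12 false negatives, 10 false positives, 250 true negatives). A minimal Python sketch, for illustration only; the paper's confidence intervals come from its own methods:

```python
# 2x2 table implied by the Zitek et al. abstract
tp = 16          # DVTs correctly identified by residents (of 28 radiology-positive)
fn = 28 - tp     # radiology-positive cases missed by residents
fp = 10          # resident-positive scans among 260 radiology-negative
tn = 260 - fp

sensitivity = tp / (tp + fn)              # 16/28  -> 57.1%
specificity = tn / (tn + fp)              # 250/260 -> ~96.2% (paper rounds to 96.1%)
ppv = tp / (tp + fp)                      # 16/26  -> 61.5%
npv = tn / (tn + fn)                      # 250/262 -> 95.4%
lr_pos = sensitivity / (1 - specificity)  # -> 14.9
lr_neg = (1 - sensitivity) / specificity  # -> 0.45

print(f"sens {100*sensitivity:.1f}%  spec {100*specificity:.1f}%")
print(f"PPV {100*ppv:.1f}%  NPV {100*npv:.1f}%")
print(f"LR+ {lr_pos:.1f}  LR- {lr_neg:.2f}")
```

Note that the low sensitivity drives the modest negative likelihood ratio: a negative resident-performed scan only cuts the odds of DVT roughly in half.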
B.
US-Guided Cannulation: Time to Bring Subclavian Central Lines Back
Despite
multiple advantages, subclavian vein (SCV) cannulation via the traditional
landmark approach has fallen out of favor relative to ultrasound (US)-guided
internal jugular catheterization because of its higher rate of mechanical
complications. A growing body of evidence indicates that SCV catheterization
with real-time US guidance can be accomplished safely and efficiently. While
several cannulation approaches with real-time US guidance have been described,
the available literature suggests that the infraclavicular, longitudinal “in-plane”
technique may be preferred. This approach allows direct visualization of
needle advancement, which reduces the risk of complications and improves
placement success. Infraclavicular SCV cannulation requires simultaneous use of US
during needle advancement, yet inexperienced operators learn it more easily
than the traditional landmark approach. In this article, we
review the evidence supporting the use of US guidance for SCV catheterization
and discuss technical aspects of the procedure itself.
Related
article: Kim EH, et al. Influence of caudal traction of ipsilateral arm on
ultrasound image for supraclavicular central venous catheterization. Am J Emerg
Med. 2016 Feb 12 [Epub ahead of print] http://www.ncbi.nlm.nih.gov/pubmed/26947376
C.
Routine CXR Is Not Necessary After US-Guided Right IJ Vein Catheterization.
Hourmozdi JJ,
et al. Crit Care Med. 2016 Mar 31 [Epub ahead of print]
OBJECTIVES:
Central venous catheter placement is a common procedure performed on critically
ill patients. Routine postprocedure chest radiographs are considered standard
practice. We hypothesize that the rate of clinically relevant complications
detected on chest radiographs following ultrasound-guided right internal
jugular vein catheterization is exceedingly low.
DESIGN:
Retrospective chart review.
SETTING:
Adult ICUs, emergency departments, and general practice units at an academic
tertiary care hospital system.
PATIENTS: All
1,322 ultrasound-guided right internal jugular vein central venous catheter
attempts at an academic tertiary care hospital system over a 1-year period.
INTERVENTIONS:
None.
MEASUREMENTS
AND MAIN RESULTS: Data from standardized procedure notes and postprocedure
chest radiographs were extracted and individually reviewed to verify the
presence of pneumothorax or misplacement, and any intervention performed for
either complication. The overall success rate of ultrasound-guided right
internal jugular vein central venous catheter placement was 96.9% with an
average of 1.3 attempts. There was only one pneumothorax (0.1% [95% CI,
0-0.4%]), and the rate of catheter misplacement requiring repositioning or
replacement was 1.0% (95% CI, 0.6-1.7%). There were no arterial placements
found on chest radiographs. Multivariate regression analysis showed no
correlation between high-risk patient characteristics and composite complication
rate.
CONCLUSIONS:
In a large teaching hospital system, the overall rate of clinically relevant
complications detected on chest radiographs following ultrasound-guided right
internal jugular vein catheterization is exceedingly low. Routine chest radiograph
after this common procedure is an unnecessary use of resources and may delay
resuscitation of critically ill patients.
8. Pediatric Corner
A.
Lung US may be alternative for pediatric pneumonia diagnosis
Feasibility
and Safety of Substituting Lung Ultrasound for Chest X-ray When Diagnosing
Pneumonia in Children: An RCT
Jones BP, et
al. Chest. 2016 Feb 25 [Epub ahead of print].
BACKGROUND: Chest
x-ray (CXR) is the test of choice for diagnosing pneumonia. Lung ultrasound
(LUS) has been shown to be accurate for diagnosing pneumonia in children and
may be an alternative to CXR. Our objective was to determine the feasibility
and safety of substituting LUS for CXR when evaluating children with suspected
pneumonia.
METHODS: We
conducted a randomized controlled trial comparing LUS to CXR in 191 children from
birth to 21 years of age with suspected pneumonia in an emergency department.
Patients in the investigational arm received a LUS. If there was clinical
uncertainty after ultrasound, clinicians had the option to obtain CXR. Patients
in the control arm underwent sequential imaging with CXR followed by LUS.
Primary outcome was the rate of CXR reduction; secondary outcomes were missed
pneumonia, subsequent unscheduled healthcare visits, and adverse events between
investigational and control arms.
RESULTS: There
was a 38.8% reduction (95% CI, 30.0 to 48.9%) in CXR among investigational
subjects compared to no reduction (95% CI, 0.0 to 3.6%) in the control group.
Novice and experienced clinician-sonologists achieved 30.0% and 60.6% reduction
in CXR use, respectively. There were no cases of missed pneumonia among all
study participants (investigational arm 0%; 95% CI: 0.0-2.9%; control arm 0%;
95% CI 0.0-3%) or differences in adverse events, or subsequent unscheduled
healthcare visits between arms.
CONCLUSIONS: It
may be feasible and safe to substitute LUS for CXR when evaluating children
with suspected pneumonia with no missed cases of pneumonia or increase in rates
of adverse events.
B.
Implementation of Electronic Clinical Decision Support for Pediatric
Appendicitis
Kharbanda AB,
et al. Pediatrics. 2016 Apr [Epub ahead of print]
BACKGROUND
AND OBJECTIVE: Computed tomography (CT) and ultrasound (US) are commonly used
in patients with acute abdominal pain. We sought to standardize care and reduce
CT use while maintaining patient safety through implementation of a
multicomponent electronic clinical decision support tool for pediatric patients
with possible appendicitis.
METHODS: We
conducted a quasi-experimental study of children 3 to 18 years old who
presented with possible appendicitis to the pediatric emergency department (ED)
between January 2011 and December 2013. Outcomes were use of CT and US.
Balancing measures included missed appendicitis, ED revisits within 30 days,
appendiceal perforation, and ED length of stay.
RESULTS: Of
2803 patients with acute abdominal pain over the 3-year study period, 794 (28%)
had appendicitis and 207 (26.1% of those with appendicitis) had a perforation.
CT use during the 10-month preimplementation period was 38.8% and declined to
17.7% by the end of the study (54% relative decrease). For CT, segmented
regression analysis revealed that there was a significant change in trend from
the preimplementation period to implementation (monthly decrease –3.5%; 95%
confidence interval: –5.9% to –0.9%; P = .007). US use was 45.7%
preimplementation and 59.7% during implementation. However, there was no
significant change in US or total imaging trends. There were also no
statistically significant differences in rates of missed appendicitis, ED
revisits within 30 days, appendiceal perforation, or ED length of stay between
time periods.
CONCLUSIONS:
Our electronic clinical decision support tool was associated with a decrease in
CT use while maintaining safety and high quality care for patients with
possible appendicitis.
C.
Observation for isolated traumatic skull fractures in the pediatric population:
unnecessary and costly
Blackwood BP,
et al. J Pediatr Surg. 2015 Sep 24. [Epub ahead of print]
BACKGROUND:
Blunt head trauma accounts for a majority of pediatric trauma admissions. There
is a growing subset of these patients with isolated skull fractures, but little
evidence guiding their management. We hypothesized that inpatient neurological
observation for pediatric patients with isolated skull fractures and normal
neurological examinations is unnecessary and costly.
METHODS: We
performed a single-center, 10-year retrospective review of all head traumas with
isolated traumatic skull fractures and normal neurological examination.
Exclusion criteria included: penetrating head trauma, depressed fractures,
intracranial hemorrhage, skull base fracture, pneumocephalus, and poly-trauma.
In each patient, we analyzed: age, fracture location, loss of consciousness,
injury mechanism, Emergency Department (ED) disposition, need for repeat imaging,
hospital costs, intracranial hemorrhage, and surgical intervention.
RESULTS:
Seventy-one patients presented to our ED with acute isolated skull fractures;
56% were male and 44% were female. Their ages ranged from 1 week to 12.4 years
old. The minority (22.5%) of patients were discharged from the ED following
evaluation, whereas 77.5% were admitted for neurological observation. None of
the patients required neurosurgical intervention. Age was not associated with
repeat imaging or inpatient observation (p=0.7474, p=0.9670). No patients
underwent repeat head imaging during their index admission. Repeat imaging was
obtained in three previously admitted patients who returned to the ED. Cost
analysis revealed a significant difference in total hospital costs between the
groups, with an average increase in charges of $4,291.50 for admitted patients
(p < 0.0001).
CONCLUSION:
Pediatric isolated skull fractures are low risk conditions with a low
likelihood of complications. Further studies are necessary to change clinical
practice, but our research indicates that these patients can be discharged
safely from the ED without inpatient observation. This change in practice,
additionally, would allow for huge health care dollar savings.
D.
Techniques and Trends, Success Rates, and Adverse Events in ED Pediatric
Intubations: A Report from the National Emergency Airway Registry.
Pallin DJ, et
al. Ann Emerg Med. 2016;67(5):610–615.e1.
STUDY
OBJECTIVE: We describe emergency department (ED) intubation practices for children
younger than 16 years through multicenter prospective surveillance.
METHODS:
Academic and community EDs in the United States, Canada, and Australia recorded
data electronically, from 2002 to 2012, with verified greater than or equal to
90% reporting.
RESULTS: Ten
of 18 participating centers provided qualifying data, reporting 1,053
encounters. Emergency physicians initiated 85% of intubations. Trainees
initiated 83% (95% confidence interval [CI] 81% to 85%). Premedication became
uncommon, reaching less than 30% by the last year. Etomidate was used in 78% of
rapid sequence intubations. Rocuronium use increased during the period of
study, whereas succinylcholine use declined. Video laryngoscopy increased,
whereas direct laryngoscopy declined. The first attempt was successful in 83%
of patients (95% CI 81% to 85%) overall. The risk of first-attempt failure was
highest for infants (relative risk versus all others 2.3; 95% CI 1.8 to 3.0).
Odds of first-attempt success for girls relative to boys were 0.57. The odds
were 3.4 times greater for rapid sequence intubation than sedation without
paralysis. The ultimate success rate was 99.5%.
CONCLUSION:
Because we sampled only 10 centers and most of the intubations were by
trainees, our results may not be generalizable to the typical ED setting. We
found that premedication is now uncommon, etomidate is the predominant
induction agent, and rocuronium and video laryngoscopy are used increasingly.
First-attempt failure is most common in infants.
E.
Clinical Policy for Well-Appearing Infants and Children less than 2 Years of
Age Presenting to the ED with Fever
Mace SE, et
al. Ann Emerg Med. 2016;67(5):625–639.e13.
This clinical
policy from the American College of Emergency Physicians addresses key issues
for well-appearing infants and children younger than 2 years presenting to the
emergency department with fever. A writing subcommittee conducted a systematic
review of the literature to derive evidence-based recommendations to answer the
following clinical questions:
(1) For
well-appearing immunocompetent infants and children aged 2 months to 2 years
presenting with fever (≥38.0°C [100.4°F]), are there clinical predictors that
identify patients at risk for urinary tract infection?
(2) For
well-appearing febrile infants and children aged 2 months to 2 years undergoing
urine testing, which laboratory testing method(s) should be used to diagnose a
urinary tract infection?
(3) For
well-appearing immunocompetent infants and children aged 2 months to 2 years
presenting with fever (≥38.0°C [100.4°F]), are there clinical predictors that
identify patients at risk for pneumonia for whom a chest radiograph should be
obtained?
(4) For
well-appearing immunocompetent full-term infants aged 1 month to 3 months (29
days to 90 days) presenting with fever (≥38.0°C [100.4°F]), are there
predictors that identify patients at risk for meningitis from whom
cerebrospinal fluid should be obtained? Evidence was graded and recommendations
were made based on the strength of the available data.
9. Ondansetron good second-line treatment for nausea in
pregnancy
Ondansetron
may be a good option for relieving nausea and vomiting in pregnant women when
other treatments do not work, researchers reported in Obstetrics &
Gynecology. Study data showed using the drug in the first trimester was tied to
a low risk of birth defects and a small potential increase in cardiac
malformations in newborns.
Carstairs SD.
Ondansetron Use in Pregnancy and Birth Defects: A Systematic Review. Obstet
Gynecol. 2016 May;127(5):878-883.
OBJECTIVE: To
examine the risk of birth defects in children born to women who used
ondansetron early in pregnancy for nausea and vomiting of pregnancy or
hyperemesis gravidarum.
DATA SOURCES:
PubMed, EMBASE, Cochrane, Scopus, Web of Science, Journals@Ovid Fulltext,
ClinicalTrials.gov, and Google Scholar databases.
METHODS OF
STUDY SELECTION: Studies were included for review if they were written in
English, included a comparison population of patients not exposed to
ondansetron, and reported human data, original research, exposure to
ondansetron during the first trimester, and structural birth defects as an
outcome.
TABULATION,
INTEGRATION, AND RESULTS: A total of 423 records were identified. After
accounting for duplicate records and including only relevant articles, a total
of eight records met criteria for review. Data from the various studies were
conflicting: whereas the three largest studies showed no increased risk of
birth defects as a whole (36 malformations, 1,233 exposed compared with 141
malformations and 4,932 unexposed; 58/1,248 exposed compared with 31,357/895,770
unexposed; and 38/1,349 exposed compared with 43,620/1,500,085 unexposed; with
odds ratios [ORs] of 1.12 (95% confidence interval [CI] 0.69–1.82), 1.3 [95% CI
1.0–1.7], and 0.95 [95% CI 0.72–1.26], respectively), two of these studies
demonstrated a slightly increased risk of cardiac defects specifically (OR 2.0
[95% CI 1.3–3.1] and 1.62 [95% CI 1.04–2.14]), a finding that was not
replicated in other studies. The most consistent association (if any) appears
to be a small increase in the incidence of cardiac abnormalities, the bulk of
which are septal defects.
CONCLUSION:
The overall risk of birth defects associated with ondansetron exposure appears
to be low. There may be a small increase in the incidence of cardiac
abnormalities in ondansetron-exposed neonates. Therefore, ondansetron use for
nausea and vomiting of pregnancy should be reserved for those women whose
symptoms have not been adequately controlled by other methods.
10. What can the ED do to address the rising rates of opioid
addiction?
A.
ED motivational interview might lower opioid misuse risk
A pilot randomized clinical trial of
an intervention to reduce overdose risk behaviors among ED patients at risk for
prescription opioid overdose.
Bohnert AS,
et al. Drug Alcohol Depend. 2016 Mar 26 [Epub ahead of print]
BACKGROUND
AND AIMS: Prescription opioid overdose is a significant public health problem.
Interventions to prevent overdose risk behaviors among high-risk patients are
lacking. This study examined the impact of a motivational intervention to
reduce opioid misuse and overdose risk behaviors.
METHODS: This
study was a pilot randomized controlled trial set in a single emergency
department (ED), in which 204 adult, English-speaking patients seeking care who
reported prescription opioid misuse during the prior 3 months were recruited.
Patients were randomized to either the intervention, a 30-minute motivational
interviewing-based session delivered by a therapist plus educational enhanced
usual care (EUC), or EUC alone. Participants completed self-reported surveys at
baseline and 6 months post-baseline (87% retention rate) to measure the primary
outcomes of overdose risk behaviors and the secondary outcome of non-medical
opioid use.
FINDINGS: Participants
in the intervention condition reported significantly lower levels of overdose
risk behaviors (incidence rate ratio [IRR]=0.72, 95% CI: 0.59-0.87; 40.5%
reduction in mean vs. 14.7%) and lower levels of non-medical opioid use
(IRR=0.81, 95% CI: 0.70-0.92; 50.0% reduction in mean vs. 39.5%) at follow-up
compared to the EUC condition.
CONCLUSIONS: This
study represents the first clinical trial of a behavioral intervention to
reduce overdose risk. Results indicate that this single motivational
enhancement session reduced prescription opioid overdose risk behaviors,
including opioid misuse, among adult patients in the ED.
Essay in UPI:
http://www.upi.com/Health_News/2016/04/20/Short-counseling-session-in-ER-can-reduce-opioid-abuse-trial-shows/6521461164779/
B.
RCT of Electronic Care Plan Alerts and Resource Utilization by High Frequency
ED Users with Opioid Use Disorder
Rathlev N, et
al. West J Emerg Med. 2016;17(1):28-34.
INTRODUCTION:
There is a paucity of literature supporting the use of electronic alerts for
patients with high frequency emergency department (ED) use. We sought to
measure changes in opioid prescribing and administration practices, total charges
and other resource utilization using electronic alerts to notify providers of
an opioid-use care plan for high frequency ED patients.
METHODS: This
was a randomized, non-blinded, two-group parallel design study of patients who
had 1) opioid use disorder and 2) high frequency ED use. Three affiliated
hospitals with identical electronic health records participated. Patients were
randomized into "Care Plan" versus "Usual Care" groups.
Between the years before and after randomization, we compared as primary
outcomes the following: 1) opioids (morphine mg equivalents) prescribed to
patients upon discharge and administered to ED and inpatients; 2) total medical
charges; and the numbers of 3) ED visits, 4) ED visits with advanced
radiologic imaging (computed tomography [CT] or magnetic resonance imaging
[MRI]) studies, and 5) inpatient admissions.
RESULTS: A
total of 40 patients were enrolled. For ED and inpatients in the "Usual
Care" group, the proportion of morphine mg equivalents received in the
post-period compared with the pre-period was 15.7%, while in the "Care
Plan" group the proportion received in the post-period compared with the
pre-period was 4.5% (ratio=0.29, 95% CI [0.07-1.12]; p=0.07). For discharged
patients in the "Usual Care" group, the proportion of morphine mg
equivalents prescribed in the post-period compared with the pre-period was
25.7% while in the "Care Plan" group, the proportion prescribed in
the post-period compared to the pre-period was 2.9%. The "Care Plan"
group showed an 89% greater proportional change over the periods compared with
the "Usual Care" group (ratio=0.11, 95% CI [0.01-0.92]; p=0.04).
Care plans did not change total charges or the numbers of ED visits,
ED visits with CT or MRI, or inpatient admissions.
CONCLUSION:
Electronic care plans were associated with an incremental decrease in opioids
(in morphine mg equivalents) prescribed to patients with opioid use disorder
and high frequency ED use.
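As a back-of-the-envelope check (not the authors' regression model), each reported ratio is simply the Care Plan group's post/pre proportion divided by the Usual Care group's. A short Python sketch using the percentages quoted in the abstract:

```python
# Post-period morphine-mg-equivalent proportions (relative to pre-period)
# quoted in the Rathlev et al. abstract; ratios are Care Plan / Usual Care.
usual_inpatient, careplan_inpatient = 0.157, 0.045   # ED and inpatient MME
usual_discharge, careplan_discharge = 0.257, 0.029   # discharge prescriptions

ratio_inpatient = careplan_inpatient / usual_inpatient
ratio_discharge = careplan_discharge / usual_discharge

print(round(ratio_inpatient, 2))   # 0.29, as reported (p=0.07)
print(round(ratio_discharge, 2))   # 0.11, i.e. an 89% greater decrease (p=0.04)
```

The 0.11 ratio is where the "89% greater proportional change" in the abstract comes from: 1 − 0.11 ≈ 0.89.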
11. Present and future of cardiac troponin in clinical practice
Sandoval Y,
et al. Am J Med. 2016 Apr;129(4):354-65.
Despite its
wide use and central role in the evaluation of patients with potential ischemic
symptoms, misconceptions and confusion about cardiac troponin (cTn) prevail.
The implementation of high-sensitivity (hs) cTn assays in clinical practice has
multiple potential advantages provided there is an education process tied to
the introduction of these assays that emphasizes appropriate use.
Several
diagnostic strategies have been explored with hs-cTn assays, including the use
of undetectable values, accelerated serial hs-cTn sampling, hs-cTn measurements
in combination with a clinical-risk score, and a single hs-cTn measurement with
a concentration threshold tailored to meet a clinical need. The authors discuss
basic concepts to facilitate integration of hs-cTn assays into clinical care.
12. Images in Clinical Practice
Man With
Sharp Periumbilical Pain
Elderly Man
With Dyspnea
Man With
Right Hand Bullae
Discussion:
Phytophotodermatitis: The Other “Lime” Disease
Young Woman
With Syncope
Adult Female
With Abdominal Pain
Elderly Woman
With Pain and Numbness in Left Hand
Electronic
Vapor Cigarette Battery Explosion Causing Shotgun-like Superficial Wounds and
Contusion
Traumatic
Arthrotomy with Pneumarthrosis on Plain Radiograph of the Knee
Young Woman
with a Fever and Chest Pain
Bullosis
Diabeticorum
Turbid
Peritoneal Fluid
Diffuse
Melanosis Cutis
Tracheobronchomegaly
Paget’s
Disease of Bone
Carotid–Cavernous
Sinus Fistula
Reversible
Loss of Vision in Malignant Hypertension
Creeping
Eruption — Cutaneous Larva Migrans
Metronidazole-Associated
Encephalopathy
“Frog Sign” in
Atrioventricular Nodal Reentrant Tachycardia
Resolution of
Lumbar Disk Herniation without Surgery
Young Woman
With Diffuse Pustular Rash
Young Girl
With Shoulder and Chest Pain
Discussion:
Penetrating Neck Injury: What’s In and What’s Out?
Ulcerations
on the Neck and Forearm
Child With
Swollen Face
Man With
Thrill in Neck
Female With
Severe Shock
Video
on Managing Procedural Anxiety in Children
For those
with access to NEJM, you’ll appreciate this.
Krauss BS, et
al. N Engl J Med 2016; 374:e19.
When children
need medical care, the situation can be stressful for both the parents and the
child. This video describes the signs of acute anxiety in children and
demonstrates approaches to interacting with children that minimize anxiety and
maximize cooperation.
13. Community Paramedicine — Addressing Questions as Programs
Expand
Iezzoni LI,
et al. N Engl J Med 2016; 374:1107-1109.
Growing
increasingly short of breath late one night, Ms. E. called her health care
provider’s urgent care line, anticipating that the on-call nurse practitioner
would have her transported to the emergency department (ED). Over the past 6
months, Ms. E. had made many ED visits. She is 83 years old and poor, lives
alone, and has multiple health problems, including heart failure, advanced
kidney disease, hepatitis C with liver cirrhosis, diabetes, and hypertension.
In the ED, she generally endures long waits, must repeatedly recite her lengthy
medical history, and feels vulnerable and helpless. She was therefore relieved
when, instead of dialing 911, the nurse practitioner dispatched a specially
trained and equipped paramedic to her home. As part of a pilot program overseen
by the Massachusetts Department of Public Health, the paramedic retrieved Ms.
E.’s electronic health record, performed a physical examination, and conducted
blood tests while communicating with her provider’s on-call physician. As
instructed, the paramedic administered intravenous diuretics and ensured that
Ms. E. was clinically stable before leaving her home, where her primary care
team followed up with her the next morning.
The
Massachusetts acute community care program is one of numerous new initiatives
in the United States using emergency medical services (EMS) personnel. These
mobile integrated health care and community paramedicine programs aim to
address critical problems in local delivery systems, such as insufficient
primary and chronic care resources, overburdened EDs, and costly, fragmented
emergency and urgent care networks.1 Despite growing enthusiasm for these
programs,2 however, their performance has rarely been rigorously evaluated, and
they raise important questions about training, oversight, care coordination,
and value.
EMS systems
were established in the United States in the 1950s and expanded, using federal
funding, in the 1970s to create 911 response networks nationwide. Operating EMS
systems around the clock requires trained workers with diverse skills. In 1975,
the American Medical Association recognized emergency medical technicians
(EMTs), paramedics, and other EMS staff as allied health workers. The federal
government specifies educational standards for the various EMS occupations. As
entry-level EMS providers, for example, EMTs undergo about 6 months of training
and must pass state certification exams. In contrast, paramedics must have
substantial prior EMT experience and then complete at least 2 years of didactic
and field training before passing rigorous state licensing exams assessing
knowledge and psychomotor skills.
Since the
1980s, reduced federal funding has contributed to EMS fragmentation. Local fire
departments provide roughly half of today’s emergency medical services. Almost
all 911 calls result in transportation to an ED because of state regulations
and payment policies: insurers, including Medicare, typically reimburse EMS
providers only for transporting patients. At the receiving end, many EDs face
escalating demand and soaring costs, as more people seek attention for
nonurgent acute and chronic conditions — in part because they lack regular
sources of primary and chronic disease care. One estimate suggests that about 15%
of persons transported by ambulance to EDs could safely receive care in
non–urgent care settings, potentially saving the system hundreds of millions of
dollars each year.2
Other
countries have faced similar health care delivery challenges, and some have
enlisted EMS personnel as part of their solutions. For example, in Australia
and Canada, specially trained paramedics provide preventive and nonurgent
primary care in rural regions, which benefits both patients and the paramedics,
who can use their clinical skills to maximum advantage in regions with low
emergency call volumes. In England, Wales, Canada, Australia, and New Zealand,
EMS personnel provide urgent care on scene, averting unnecessary trips to the
ED. The United Kingdom spent more than £4 million ($5.7 million) investigating
new approaches that would allow EMS personnel to safely care for people who
called 999 — the U.K. equivalent of 911 — in their homes or communities.3 It
implemented the successful approaches to substantially change how EMS providers
respond to 999 calls, reducing ED transport rates from 90% in 2000 to 58% in
2012.3 These changes have not affected patient safety.
Community
paramedicine has come to the United States only recently, but initiatives are
already under way in nearly 20 states. These programs vary widely…
14. Dual-targeted therapy with trastuzumab and lapatinib in
treatment-refractory, KRAS codon 12/13 wild-type, HER2-positive metastatic
colorectal cancer (HERACLES): a proof-of-concept, multicentre, open-label,
phase 2 trial
Sartore-Bianchi
A, et al. Lancet Oncology 2016 April 20 [Epub ahead of print].
Whoa! Where did that come from?
I thought it was one cool title. And
you gotta love the acronym.
15. Ineffectiveness of glucagon in relieving esophageal foreign
body impaction: a multicenter study
Bodkin RP, et
al. Am J Emerg Med. 2016 Mar 9 [Epub ahead of print].
PURPOSE:
Glucagon is thought to decrease lower esophageal sphincter tone and is used as
an alternative to invasive endoscopy for esophageal foreign body impaction
(EFBI). The purpose of this study was to evaluate efficacy and safety of
glucagon and identify characteristics associated with success.
METHODS: A
multicenter, retrospective study of patients receiving glucagon for EFBI at 2
academic emergency departments was conducted between 2006 and 2010. A control
group of patients that did not receive glucagon was evaluated. Data collection
included demographics, type of foreign body, glucagon dose, resolution of
impaction, incidence of vomiting, additional medication, and endoscopy
required. Descriptive and univariate analysis was performed as appropriate.
RESULTS: A
total of 133 doses of glucagon were administered in 127 patients.
Glucagon-related resolution of EFBI occurred in 18 patients (14.2%) and
vomiting in 16 patients (12.6%). No statistical differences between successful
and unsuccessful groups were seen with the exception of concomitant medication
administration (benzodiazepine or nitroglycerin) being associated with less
glucagon success, 33.3% vs 59.6%, respectively (P = .04). Eighty-four percent
of patients in the unsuccessful group underwent endoscopy. Comparing those who
received glucagon (n = 127) and the control group (n = 29), there was no
significant difference in resolution of EFBI, 14.2% vs 10.3%, respectively (P =
.586).
CONCLUSIONS:
Glucagon-related resolution occurred in 14.2% of patients, not significantly
different from the rate in patients who did not receive glucagon (10.3%).
Concomitant medication administration was associated with lower success.
Overall, glucagon had a low success rate, was associated with adverse effects,
and offers no advantage for treatment.
16. Quick Reviews from Ann Emerg Med
A.
Are New Oral Anticoagulants Safer Than Vitamin K Antagonists in the Treatment
of Venous Thromboembolism?
Take-home: Patients
with acute venous thromboembolism treated with new oral anticoagulants have
similar rates of recurrent venous thromboembolism and death as those treated
with warfarin. Although new oral anticoagulants are associated with a lower
incidence of major bleeding, there may be an increased incidence of myocardial infarction.
B.
Does Albumin Infusion Reduce Renal Impairment and Mortality in Patients With
Spontaneous Bacterial Peritonitis?
Take-home: Albumin
administration within 6 hours of diagnosis has been shown to decrease the risk
of both renal impairment and mortality in patients with spontaneous bacterial
peritonitis.
American
Association for the Study of Liver Diseases Guidelines: http://onlinelibrary.wiley.com/doi/10.1002/hep.22853/full
C.
Do Nonopioid Medications Provide Effective Analgesia in Adult Patients With
Acute Renal Colic?
Take-home: Nonsteroidal
anti-inflammatory drugs are effective in the treatment of acute renal colic and
may be more effective than other nonopioid medication.
D.
Do Corticosteroids Provide Benefit to Patients With Community-Acquired
Pneumonia?
Take-home: For
adult patients with community-acquired pneumonia requiring hospitalization,
data suggest that corticosteroid therapy may reduce mortality, need for
mechanical ventilation, and hospital length of stay.
E.
What Is the Prognostic Value of Intermediate Lactate Level Elevations in ED
Patients With Suspected Infection?
Take-home: Patients
with a suspected infection in the emergency department (ED) who have
intermediate lactate level elevations (2.0 to 3.9 mmol/L) carry a moderate to
high risk of mortality, even without hypotension.
F.
Should Patients Who Receive a Diagnosis of Acute PE and Have Evidence of Right
Ventricular Strain Be Treated With Thrombolytic Therapy?
Take-home: There
is evidence to suggest a small mortality benefit with the administration of
thrombolytics for hemodynamically stable pulmonary embolism patients with right
ventricular strain, but this benefit must be weighed against the significantly
increased risk of major bleeding.
G.
Does Using a Low Threshold for Venous Thromboembolism Testing in Pregnant
Patients Lead to Low Diagnostic Yield?
Take-home: In
the emergency department (ED) setting, clinicians have a low threshold for
testing pregnant patients for pulmonary embolism, which leads to lower rates of
positive diagnostic test results for venous thromboembolism than in nonpregnant
patients.
H.
What Is the Most Effective Treatment of Superficial Thrombophlebitis?
Take-home: Although
there is insufficient evidence to support or refute the utility of systemic or
topical therapies for superficial thrombophlebitis, the use of nonsteroidal
anti-inflammatory drugs, heat, and anticoagulants (for low-risk thromboses) is
reasonable.
17. The effect of nebulized magnesium sulfate in the treatment
of moderate to severe asthma attacks: a RCT
Hossein S, et
al. Amer J Emerg Med. 2016 Jan 16 [Epub ahead of print]
Objective
Thirty
percent of people with asthma do not respond to standard treatment, and
complementary therapies are needed. The objective of this study was to
investigate the impact of inhaled magnesium sulfate on the treatment response
in emergency department (ED) patients with moderate to severe attacks of
asthma.
Methods
This study is
a randomized controlled trial, enrolling patients with moderate to severe
asthma in the ED. Subjects allocated to the study group were treated with the
standard protocol plus 3 mL of a 260 mmol/L solution of magnesium sulfate every 20 to
60 minutes. The control group was treated with nebulized saline as a placebo in
addition to standard protocol. The study results included admission rate and
changes in peak expiratory flow rate (PEFR) (primary outcomes) as well as
dyspnea severity score, respiratory rate and peripheral oxygen saturation.
Results
A total of 50
patients were enrolled (25 allocated to the study group and 25 to the control
group). The study group, as compared with the control group, showed
significantly more improvement in the intensity of dyspnea, PEFR, and SpO2 at
20, 40, and 60 minutes after the intervention. In the study group, 11 patients
(44%) required admission, as compared with 18 (72%) in the control group (P = .02).
Conclusion
Adding
nebulized magnesium sulfate to standard therapy in patients with moderate to
severe asthma attacks leads to greater and faster improvement in PEFR,
respiratory rate, and oxygen saturation. It also reduces hospitalization
rates in this patient population.
18. Eye of the Beholder: N Engl J Med Interactive Case
A 47-year-old
man presented to an urgent care ambulatory clinic with a 3-day history of
swelling around his left eye and a sensation of tightness in his throat. It had
become difficult for him to swallow solids, and he felt as if food was sticking
in his throat. The patient had no shortness of breath or wheezing, but 1 day
before presentation he felt his voice was becoming hoarse. He . . .
19. Outcomes in a Warfarin-Treated Population with Atrial
Fibrillation
Björck F, et
al. JAMA Cardiol. 2016 April 20 [Epub ahead of print]
Importance Vitamin K antagonist (eg, warfarin) use is
nowadays challenged by the non–vitamin K antagonist oral anticoagulants (NOACs)
for stroke prevention in atrial fibrillation (AF). NOAC studies were based on
comparisons with warfarin arms with times in therapeutic range (TTRs) of 55.2%
to 64.9%, making the results less credible in health care systems with higher
TTRs.
Objectives To evaluate the efficacy and safety of
well-managed warfarin therapy in patients with nonvalvular AF, the risk of
complications, especially intracranial bleeding, in patients with concomitant
use of aspirin, and the impact of international normalized ratio (INR) control.
Design,
Setting, and Participants A
retrospective, multicenter cohort study based on Swedish registries, especially
AuriculA, a quality register for AF and oral anticoagulation, was conducted.
The register contains nationwide data, including that from specialized
anticoagulation clinics and primary health care centers. A total of 40 449
patients starting warfarin therapy owing to nonvalvular AF during the study
period were monitored until treatment cessation, death, or the end of the
study. The study was conducted from January 1, 2006, to December 31, 2011, and
data were analyzed between February 1 and November 15, 2015. Associating
complications with risk factors and individual INR control, we evaluated the
efficacy and safety of warfarin treatment in patients with concomitant aspirin
therapy and those with no additional antiplatelet medications.
Exposures Use of warfarin with and without concomitant
therapy with aspirin.
Main Outcomes
and Measures Annual incidence of
complications in association with individual TTR (iTTR), INR variability, and
aspirin use and identification of factors indicating the probability of
intracranial bleeding.
Results Of the 40 449 patients included in the study,
16 201 (40.0%) were women; mean (SD) age of the cohort was 72.5 (10.1) years,
and the mean CHA2DS2-VASc (cardiac failure or dysfunction, hypertension, age ≥75
years [doubled], diabetes mellitus, stroke [doubled], vascular disease, age
65-74 years, and sex category [female]) score was 3.3 at baseline. The annual
incidence, reported as percentage (95% CI) of all-cause mortality was 2.19%
(2.07-2.31) and, for intracranial bleeding, 0.44% (0.39-0.49). Patients
receiving concomitant aspirin had annual rates of any major bleeding of 3.07%
(2.70-3.44) and thromboembolism of 4.90% (4.43-5.37), and those with renal
failure were at higher risk of intracranial bleeding (hazard ratio, 2.25; 95%
CI, 1.32-3.82). Annual rates of any major bleeding and any thromboembolism in
iTTR less than 70% were 3.81% (3.51-4.11) and 4.41% (4.09-4.73), respectively,
and, in high INR variability, were 3.04% (2.85-3.24) and 3.48% (3.27-3.69), respectively.
For patients with iTTR 70% or greater, the level of INR variability did not
alter event rates.
Conclusions
and Relevance Well-managed warfarin
therapy is associated with a low risk of complications and is still a valid
alternative for prophylaxis of AF-associated stroke. Therapy should be closely
monitored for patients with renal failure, concomitant aspirin use, and poor
INR control.
20. College Admissions Shocker!
Frank Bruni,
New York Times, March 30, 2016
PALO ALTO,
California — Cementing its standing as the most selective institution of higher
education in the country, Stanford University announced this week that it had
once again received a record-setting number of applications and that its
acceptance rate — which had dropped to a previously uncharted low of 5 percent
last year — plummeted all the way to its inevitable conclusion of 0 percent.
With no one
admitted to the class of 2020, Stanford is assured that no other school can
match its desirability in the near future.
“We had
exceptional applicants, yes, but not a single student we couldn’t live
without,” said a Stanford administrator who requested anonymity. “In the stack
of applications that I reviewed, I didn’t see any gold medalists from the last
Olympics — Summer or Winter Games — and while there was a 17-year-old who’d
performed surgery, it wasn’t open-heart or a transplant or anything like that.
She’ll thrive at Yale.”
News of
Stanford’s unprecedented selectiveness sent shock waves through the Ivy League,
along with Amherst, Northwestern and at least a dozen other elite schools
where, as a consequence, there could be substantial turnover among
underperforming deans of admission.
Administrators
at several of these institutions, mortified by acceptance rates still north of
6 percent, chided themselves for insufficient international outreach. Carnegie
Mellon vowed that over the next five years, it would quadruple the number of
applicants from Greenland. The University of Chicago announced plans to host a
college fair in Ulan Bator.
Officials at
the University of Pennsylvania, meanwhile, realized that sweatshirts, T-shirts
and glossy print and web catalogs weren’t doing nearly enough to advertise its
charms, and that the university wasn’t fully leveraging the mystique of its
world-renowned business school. So early next fall, every high school senior in
America who scored in the top 4 percent nationally on the SAT will receive, in
the mail, a complimentary spray bottle of Wharton: The Fragrance, which has a
top note of sandalwood and a bottom note of crisp, freshly minted $100 bills.
Seniors who
scored in the top 2 percent will get the scented shower gel and reed diffuser
set as well.
On campuses
from coast to coast, there was soul searching about ways in which colleges
might be unintentionally deterring prospective applicants.
Were the
applications themselves too laborious? Brown may give next year’s aspirants the
option of submitting, in lieu of several essays, one haiku and one original
recipe using organic kale…
The remainder
of the essay: http://www.nytimes.com/2016/03/30/opinion/college-admissions-shocker.html
21. A Clinical Decision Rule to Identify ED Patients at Low Risk
for ACS who Do Not Need Objective CAD Testing: The No Objective Testing Rule.
Greenslade JH,
et al. Ann Emerg Med. 2016;67(4): 478–489.e2.
STUDY
OBJECTIVE: We derive a clinical decision rule for ongoing investigation of
patients who present to the emergency department (ED) with chest pain. The rule
identifies patients who are at low risk of acute coronary syndrome and could be
discharged without further cardiac testing.
METHODS: This
was a prospective observational study of 2,396 patients who presented to 2 EDs
with chest pain suggestive of acute coronary syndrome and had normal troponin
and ECG results 2 hours after presentation. Research nurses collected clinical
data on presentation, and the primary endpoint was diagnosis of acute coronary
syndrome within 30 days of presentation to the ED. Logistic regression analyses
were conducted on 50 bootstrapped samples to identify predictors of acute
coronary syndrome. A rule was derived and diagnostic accuracy statistics were
computed.
RESULTS:
Acute coronary syndrome was diagnosed in 126 (5.3%) patients. Regression
analyses identified the following predictors of acute coronary syndrome:
cardiac risk factors, age, sex, previous myocardial infarction, or coronary
artery disease and nitrate use. A rule was derived that identified 753 low-risk
patients (31.4%), with sensitivity 97.6% (95% confidence interval [CI] 93.2% to
99.5%), negative predictive value 99.6% (95% CI 98.8% to 99.9%), specificity
33.0% (95% CI 31.1% to 35.0%), and positive predictive value 7.5% (95% CI 6.3%
to 8.9%) for acute coronary syndrome. This was referred to as the no objective
testing rule.
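The reported accuracy statistics hang together arithmetically. As an illustrative check only, the 2x2 cell counts below are back-calculated from the published totals and percentages (they are not taken directly from the paper):

```python
# Diagnostic 2x2 table for the "no objective testing" rule,
# back-calculated from the reported totals:
# 2,396 patients, 126 with ACS, 753 classified low risk.
total, acs, low_risk = 2396, 126, 753

fn = 3                    # low-risk patients who nonetheless had ACS
tp = acs - fn             # rule-positive (not low risk) patients with ACS
tn = low_risk - fn        # low-risk patients without ACS
fp = (total - acs) - tn   # rule-positive patients without ACS

sensitivity = tp / acs                 # TP / all ACS
specificity = tn / (total - acs)       # TN / all non-ACS
npv = tn / low_risk                    # TN / all rule-negative
ppv = tp / (total - low_risk)          # TP / all rule-positive

print(f"sens {sensitivity:.1%}, spec {specificity:.1%}, "
      f"NPV {npv:.1%}, PPV {ppv:.1%}")
# matches the reported 97.6% / 33.0% / 99.6% / 7.5%
```

Note that the very high negative predictive value partly reflects the low (5.3%) prevalence of acute coronary syndrome in this cohort, which is why sensitivity and NPV are reported together.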
CONCLUSION:
We have derived a clinical decision rule for chest pain patients with negative
early cardiac biomarker and ECG testing results that identifies 31% at low risk
and who may not require objective testing for coronary artery disease. A prospective
trial is required to confirm these findings.
22. Health Wearables
A.
Counting Steps: Fit or Fad?
Ballard DB.
Emerg Med News 2016;38(3B).
My brother
recently gave me the gift of friendly competition. Sibling rivalry is nothing
new with just an 18-month age difference between us. Not that long ago we
competed on the basketball court, but these days, due to the ravages of
middle-aged arthritis, we're better suited to more pedestrian exploits such as
counting steps!
He introduced
me about a month ago to the 20-million-strong Fitbit nation. I've been rocking
a Fitbit Zip since then, and have earned buzzy 10,000 steps-in-a-day “badges”
and one “Penguin March” award for my first 70 miles. I have tried my Fitbit in my
pocket, on my shoe, in my gym bag, and even in the dryer. I have read up on
Fitbit and other step counters in the press and scientific literature. I have
even committed the Fitbit jingle to memory.
Should You
Count?
Is counting
steps a worthwhile endeavor? If you are an uber-active athlete, the type who
does yoga in the morning, jogs at midday, and rides a bike in the evening (that
sounds fantastic, by the way), then I suspect that counting steps is probably
not worth your time. You can measure your activity level in many other ways, and
you're not really missing out. In fact, some evidence suggests that it is not
just the volume of steps that improves health but the combination of volume and
intensity.
Pillay, et
al., compared several metrics of physical health (waist circumference, body fat
percentage, and VO2max) among study participants with low and high numbers of
steps as well as “aerobic” steps (greater than 60 steps/minute for at least a
minute). (J Phys Act Health 2014;11[1]:10.) They found that those in the
high-high group (those active at a high volume of steps and aerobic steps) had
significantly better metrics of physical health than those in other groups.
Counting steps alone may not be of maximal benefit if you are young and active,
unless, of course, you think knowing your steps (and perhaps bragging about
them) is fun, in which case, go for it.
On the other
hand, if you have fewer athletic outlets or higher intensity exercise isn't
practical, then counting steps could have a significant impact on your health,
particularly if you tend to gravitate to couches, elevators, and valet parking.
You may even find that you really enjoy that little dopamine squirt the
buzz-buzz of your phone gives you when you hit your daily step goal.
How Should
You Count?
Is the Fitbit
the best choice? The answer depends on how much you value accuracy. The
traditional gold standard is an old-fashioned pedometer, like the Yamax SW-200,
which uses a coil spring mechanism to keep count. The downside to these devices
is that they lack the technical and social features of step-counters that link
to or work on your smartphone. Smartphone step-counting applications use GPS
data, with or without an accelerometer. A recent study in BMC Research Notes
compared the accuracy of these technologies across five different scenarios and
found widely divergent values.
On the other
hand, a recent Journal of the American Medical Association study noted that the
apps on your smartphone are perhaps even better than the Fitbit, at least in
measuring steps and calories. (2015;313[6]:625.) University of Pennsylvania
researchers compared 10 of the top-selling smartphone fitness applications and
pedometers with wearable devices, tracking 14 healthy adults as they walked on
the treadmill, and found that wearable devices had significantly more variation
(22%) than smartphones (6%).
My own
informal experiments with two Fitbits (mine and my wife's) supported these
findings. A Fitbit in my pocket during a 12-hour ED shift counted 800 fewer
steps than one on my shoe. The next day, the Fitbit on my wife's shoe
registered 2,000 more steps than the one in her pocket! And I registered 883
steps purely by storing my Fitbit in my gym bag in the back of the car and
6,000 or so by giving it a prolonged spin in the dryer. Bottom line: Buyer
beware if you are a stickler for accuracy.
You can
mitigate discrepancies with a Fitbit by being consistent with how and where you
apply your device: Use the same physical location each day, and recognize that
your pocket may not be as accurate as your wrist or waist.
What About a
Daily Goal?
The most
widely published goal by far is 10,000 steps a day (the pre-set goal for Fitbit
customers). Interestingly, this step target, admittedly a nice round number,
does not have its origin in evidence. Apparently, the 10,000-step mantra
originated around the time of the 1964 Tokyo Olympics with a pedometer named
the Manpo Meter and a popular Japanese “Let's walk 10,000 steps a day” tagline.
Hitting
10,000 steps with the Fitbit prompts a properly configured phone to give you a
buzz-buzz of approval, but is there any magic in this number? New evidence
suggests that there might be, and in a surprising way.
Vallance and
colleagues examined the association in people 55 or older between steps per day
in three categories (less than 7,000, 7,000-10,000, and more than 10,000) and
scores on a phone-administered quality of life survey. (J Aging Health 2015;
Oct 20 [ePub Ahead of Print].) Participants in the high-step group had
significantly better scores on mental, physical, and global health scores
independent of obesity markers (such as waist circumference and body mass
index) than those in the low-step group. This was a rather skewed sample of
wealthy Caucasians, and it is difficult to know if the steps themselves rather
than the ability to take steps (e.g., freedom from severe joint pain, time, and
wherewithal to walk) contributed to the improvement in quality of life.
Nonetheless,
it would seem that the makers of Fitbit and other step-trackers may be onto
something. Research shows that people over 55 who take more steps may be
happier and healthier than those who do not. There may be other equally valid
means of achieving these benefits, but steps are so easy to count and to take!
Want more of them? Don't circle three times to find the closest parking spot,
just park and amble; march up the stairs; promenade the dog rather than letting
it promenade itself; don't send the tech to grab a warm blanket for Trauma A,
grab it yourself! Game on!
B.
EKG on a Wristband: Novel mobile health device could simplify frequent rhythm
checks
by Satish
Misra MD. iMedicalApps. March 17, 2016
AliveCor this
week introduced the Kardia Band for the Apple Watch, which will let users
capture a single-lead EKG by just touching the watch's band.
AliveCor is
best known for its smartphone EKG device, the AliveCor Heart Monitor, which the
company has now rebranded Kardia Mobile. That device uses a small smartphone
attachment with two embedded electrodes to let users capture a single-lead EKG
by just touching both of the electrodes with their hands or to their chest.
Perhaps the most interesting feature of that device is how the data is
transmitted to the smartphone, using an inaudible ultrasonic signal that's
picked up by the smartphone's microphone and converted back to an EKG signal.
The Kardia
Band is basically a band for the Apple Watch that has one electrode facing
inwards and the other facing outwards. By touching the outer electrode, users
can capture a single-lead EKG. Using previously validated and FDA-cleared
algorithms, the companion Apple Watch app tells users whether the tracing was
normal. It can also tell users if atrial fibrillation was identified.
Devices like
Kardia Mobile and Kardia Band are generally most useful for patients with
symptomatic paroxysmal arrhythmias. In addition, there has been some promising
data on using the Kardia Mobile for atrial fibrillation (AF) screening. A large
Australian study using the Kardia Mobile device for opportunistic, one-time
screening of elderly patients picking up medications at community pharmacies
showed an AF detection rate of around 2%. Devices like Kardia Band may offer a
strategy for monitoring that
makes wearers more amenable to frequent rhythm checks that could improve
screening sensitivity.
The Kardia
Band could also have interesting applications for research, letting users
capture frequent EKG tracings along with activity, heart rate, and other
metrics already captured by the watch. AliveCor also recently announced
integration with HealthKit, which could help integrate these devices into
ResearchKit-based studies.
What is more
interesting, though, is where AliveCor takes things from here. When I spoke to
AliveCor founder and chief medical officer Dave Albert, MD, last fall, it was
clear that his vision was to make these devices as accessible as possible. He
was particularly proud of the fact that the Kardia Mobile had dropped in price
by more than 50% within 2 years of release, and was redesigned to basically
work with any smartphone. I'd guess the same vision will follow here with
Kardia Band, and that over time we'll see it become available for a broader
range of smartwatches, including low-cost devices, and at a lower price point.
The Kardia
Band is anticipated to receive FDA clearance later this spring and become
available for purchase soon after. This post originally appeared on iMedicalApps.
23. The Science of Choosing Wisely — Overcoming the Therapeutic
Illusion
Casarett D. N
Engl J Med 2016;374:1203-1205.
In recent
years, the United States has seen increasing efforts to reduce inappropriate
use of medical treatments and tests. Perhaps the most visible has been the
Choosing Wisely campaign, in which medical societies have identified many
tests, medications, and treatments that are used inappropriately. The result is
recommendations advising against using these interventions or suggesting that
they be considered more carefully and discussed with patients.
The success
of such efforts, however, may be limited by the tendency of human beings to
overestimate the effects of their actions. Psychologists call this phenomenon,
which is based on our tendency to infer causality where none exists, the
“illusion of control.”1 In medicine, it may be called the “therapeutic
illusion” (a label first applied in 1978 to “the unjustified enthusiasm for
treatment on the part of both patients and doctors”2). When physicians believe
that their actions or tools are more effective than they actually are, the
results can be unnecessary and costly care. Therefore, I think that efforts to
promote more rational decision making will need to address this illusion
directly.
The best
illustration of the illusion of control comes from studies in which volunteers
were asked to figure out how to press a button in order to cause a panel to
light up.3 The volunteers searched enthusiastically for strategies and were
generally confident that they’d succeeded. They didn’t know, however, that
their success was determined entirely by chance.
The
phenomenon has since been described in widely varied settings. Gamblers, for
example, consistently overestimate the control they have over outcomes, both in
gambling and in everyday life. Their belief leads them to engage in seemingly
bizarre or ritualistic behaviors such as throwing dice in a certain way or
wearing specific colors. But the illusion of control is widespread, and its
effects may be enhanced when people are placed in positions of authority or
subjected to time pressure or competition.
The decisions
that physicians make at the bedside are both more complicated and more
evidence-based than the choices of volunteers in a laboratory. Nevertheless,
physicians also overestimate the benefits of everything from interventions for
back pain to cancer chemotherapy.4,5 And their therapeutic illusion facilitates
continued use of inappropriate tests and treatments.
The outcome
of virtually all medical decisions is at least partly outside the physician’s
control, and random chance can encourage physicians to embrace mistaken beliefs
about causality. For instance, joint lavage is overused for relief of
osteoarthritis-related knee pain, despite a recommendation against it from the
American Academy of Orthopaedic Surgeons. Knee pain tends to wax and wane, so
many patients report improvement in symptoms after lavage, and it’s natural to
conclude that the intervention was effective.
Moreover, the
therapeutic illusion is reinforced by a tendency to look selectively for
evidence of impact — one manifestation of the “confirmation bias” that leads us
to seek only evidence that supports what we already believe to be true.
Physicians may be particularly susceptible to that bias when caring for a
patient with a complex illness. When a patient has multiple medical problems,
it’s often possible to find some evidence of improvement after an intervention,
particularly if the patient is being intensively monitored. For instance, the
Critical Care Collaborative advises against administering total parenteral
nutrition during a patient’s first 7 days in an intensive care unit. However,
if it is used, available tests are likely to provide at least a few indications
of improvement in the patient’s electrolytes, volume status, or prealbumin
level.
The illusion
of control is deeply ingrained in human psychology in the form of a heuristic,
or rule of thumb, that we rely on to interpret events and make decisions. Many
heuristics are subconscious and therefore difficult to avoid or eradicate, but
we can ameliorate their effects by using countervailing, conscious heuristics.
Indeed, physicians already use this strategy to improve diagnostic decisions.
For instance, we tend to ignore the pretest probability of an illness when
making a diagnosis, which can cause us to overdiagnose rare conditions and
underdiagnose common ones; medical students are therefore taught the rule “When
you hear hoofbeats, look for horses, not zebras.” ….
24. Academic Life in Emerg Med (ALiEM) Resources
25. Micro Bits
A. Suicide screening in the emergency
department
The Emergency
Department Safety Assessment and Follow-up Evaluation Screening Outcome
Evaluation examined whether universal suicide risk screening is feasible and
effective in the emergency department. A three-phase interrupted time series
design was used: Treatment as Usual (phase 1), Universal Screening (phase 2),
and Universal Screening + Intervention (phase 3). Across the three phases,
documented screenings rose from 26% (phase 1) to 84% (phase 3). Detection rose
from 2.9% to 5.7%, with the majority of detected intentional self-harm
confirmed as recent suicidal ideation or behavior by patient interview.
B. 67% of US seniors take 5 or more
meds or supplements, study finds
A study
published in JAMA Internal Medicine found that 67% of people ages 62 to 85 took
at least five medications or supplements in 2010-2011, compared with 53% five
years earlier. As polypharmacy rose over the study period, risk of major drug
interactions increased from 8% to 15%. Gerontologist Michael Steinman, who
wrote an editorial published with the study, said rising rates of polypharmacy
are a sign more seniors are getting treatment they need, but he said the trend
underscores the need for care coordination and clear physician-patient
communication.
C. CDC reports on increase in US
suicide rate
The rate of
suicide in the US rose 24% from 1999 to 2014, resulting in 42,773 deaths in
2014, according to a report from the CDC. Researchers found major increases in
suicide rates for white women and Native Americans.
LA Times: http://www.latimes.com/science/sciencenow/la-sci-sn-suicides-soared-since-1999-20160421-story.html
D. The Effects of Azithromycin in
Treatment-Resistant Cough: A Randomized, Double-Blind, Placebo-Controlled Trial
Conclusion: Treatment
with low-dose azithromycin for 8 weeks did not significantly improve Leicester Cough Questionnaire (LCQ) score
compared with placebo. The use of macrolides for treatment-resistant cough
cannot be recommended from this study.
E. Task force recommends low-dose
aspirin therapy for some adults
The US
Preventive Services Task Force reviewed the evidence on low-dose aspirin for
primary prevention of cardiovascular disease and colorectal cancer and
recommended the therapy for adults ages 50 to 59 with a 10% or higher risk of
CVD and no increased risk for bleeding. The task force said use of the drug among
patients ages 60 to 69 should be considered on an individual basis. Similar
recommendations were released by the AAFP.
F. Direct oral anticoagulants not
recommended for obese patients
Patients
whose body mass index is greater than 40 kg/m² or whose weight is more than 120
kg are not advised to take direct oral anticoagulants to prevent and treat
venous thromboembolism or to prevent systemic arterial embolism and ischemic
stroke in nonvalvular atrial fibrillation, according to a guideline published
in the Journal of Thrombosis and Haemostasis. The authors noted that the
available evidence shows lower drug exposure, decreased peak concentrations,
and shorter half-lives as weight increases.
G. Too little sleep might raise risk
of colds, study finds
Nineteen
percent of US adults who slept for five or fewer hours each night had
experienced a head or chest cold in the past month, compared with 16% of those
who got six hours of sleep and 15% of those who slept for seven hours or more,
according to a study in JAMA Internal Medicine that included more than 22,000
adults. Those with a diagnosed sleep disorder were also at increased risk of
such illness.
H. Antibiotics by age 3 may increase
prediabetes risk in adolescents, study finds
Youths with
prediabetes were 8.5 times as likely to have received antibiotics by age 3,
compared with those who were healthy, according to a Greek study presented at
the Endocrine Society meeting. The findings, based on stool samples of 24
youths with and without prediabetes, ages 12 to 17, also showed that those with
prediabetes had fewer colonies of Ruminococcus, which feeds beneficial gut
bacteria, compared with those without the condition.
I. Study suggests women have twice the
risk of death from a heart attack
A study
presented at a meeting of the American College of Cardiology found that women
have up to twice the likelihood of in-hospital death from ST-segment elevation
myocardial infarction than men. Researchers attribute the disparity to an
average delay of 5.3 minutes in time to appropriate treatment in women. The
findings, based on analyses of over 700,000 STEMI patients across 29 countries,
also show that women have a 70% higher mortality risk than men within a month,
six months and one year of suffering a heart attack.
J. Most Kidney Transplant Recipients
Visit the Emergency Department After Discharge
More than
half of kidney transplant recipients will visit an emergency department in the
first 2 years after transplantation, according to a study appearing in an
upcoming issue of the Clinical Journal of the American Society of Nephrology.
The findings indicate that efforts are needed to coordinate care for this
vulnerable patient population.