From the recent medical literature...
1. Endobronchial intubation detected by insertion depth of endotracheal tube, bilateral auscultation, or observation of chest movements: randomised trial
Sitzwohl C, et al. BMJ 2010; 341:c5943.
Objective: To determine which bedside method of detecting inadvertent endobronchial intubation in adults has the highest sensitivity and specificity.
Design: Prospective randomised blinded study.
Setting: Department of anaesthesia in tertiary academic hospital.
Participants: 160 consecutive patients (American Society of Anesthesiologists category I or II) aged 19-75 scheduled for elective gynaecological or urological surgery.
Interventions: Patients were randomly assigned to eight study groups. In four groups, an endotracheal tube was fibreoptically positioned 2.5-4.0 cm above the carina, whereas in the other four groups the tube was positioned in the right mainstem bronchus. The four groups differed in the bedside test used to verify the position of the endotracheal tube. To determine whether the tube was properly positioned in the trachea, in each patient first year residents and experienced anaesthetists were randomly assigned to independently perform bilateral auscultation of the chest (auscultation); observation and palpation of symmetrical chest movements (observation); estimation of the position of the tube by the insertion depth (tube depth); or a combination of all three (all three).
Main outcome measures: Correct and incorrect judgments of endotracheal tube position.
Results: 160 patients underwent 320 observations by experienced and inexperienced anaesthetists. First year residents missed endobronchial intubation by auscultation in 55% of cases and performed significantly worse than experienced anaesthetists with this bedside test (odds ratio 10.0, 95% confidence interval 1.4 to 434). With a sensitivity of 88% (95% confidence interval 75% to 100%) and 100%, respectively, tube depth and the three tests combined were significantly more sensitive for detecting endobronchial intubation than auscultation (65%, 49% to 81%) or observation (43%, 25% to 60%) (P less than 0.001). The four tested methods had the same specificity for ruling out endobronchial intubation (that is, confirming correct tracheal intubation). The average correct tube insertion depth was 21 cm in women and 23 cm in men. By inserting the tube to these distances, however, the distal tip of the tube was less than 2.5 cm away from the carina (the recommended safety distance, to prevent inadvertent endobronchial intubation with changes in the position of the head in intubated patients) in 20% (24/118) of women and 18% (7/42) of men. Therefore optimal tube insertion depth was considered to be 20 cm in women and 22 cm in men.
Conclusion: Less experienced clinicians should rely more on tube insertion depth than on auscultation to detect inadvertent endobronchial intubation. But even experienced physicians will benefit from inserting tubes to 20-21 cm in women and 22-23 cm in men, especially when high ambient noise precludes accurate auscultation (such as in emergency situations or helicopter transport). The highest sensitivity and specificity for ruling out endobronchial intubation, however, is achieved by combining tube depth, auscultation of the lungs, and observation of symmetrical chest movements.
Full-text (free): http://www.bmj.com/content/341/bmj.c5943.full
2. Safety of Assessment of Patients with Potential Ischemic Chest Pain in an ED Waiting Room: A Prospective Comparative Cohort Study
Scheuermeyer FX, et al. Ann Emerg Med. 2010;56:455-462.
Emergency department (ED) crowding has been associated with a variety of adverse outcomes. Current guidelines suggest that patients with potentially ischemic chest pain should undergo rapid assessment and treatment in a monitored setting to optimize the diagnosis of acute coronary syndrome. These patients may be at high risk of incorrect diagnosis and adverse events when their evaluation is delayed because of crowding. To mitigate crowding-related delays, we developed processes that enabled emergency physicians to evaluate potentially sick patients in the waiting room when all nurse-staffed stretchers are occupied. The objective of this study was to investigate the safety of waiting room chest pain evaluation.
Methods
This prospective comparative cohort study was conducted in a busy urban, tertiary care ED. Explicit triage and waiting room evaluation processes were introduced. One thousand one hundred seven patients with chest pain of potential cardiac origin were triaged either to a monitored bed or a waiting room chair, depending on bed availability and triage judgment. After diagnostic evaluation, patients were followed for 30 days to identify the proportion of missed cases of acute coronary syndrome (primary outcome) and other prespecified adverse events. Analysis was based on intention to treat.
Results
Eight hundred four patients were triaged to a monitored bed and 303 to waiting room evaluation. Initial vital signs were similar, but the waiting room group was younger and had lower rates of some cardiovascular risk factors. The rate of acute coronary syndrome, defined as acute myocardial infarction or objective unstable angina, was 11.7% in the monitored bed group and 7.6% in waiting room patients. There were no missed acute coronary syndrome cases in either the monitored bed group (0%; 95% confidence interval [CI] 0% to 0.4%) or the waiting room group (0%; 95% CI 0% to 1.0%). There were 32 adverse events in the monitored bed group (4.0%; 95% CI 2.6% to 5.3%) and 2 in the waiting room group (0.7%; 95% CI 0% to 1.6%).
Conclusion
Our organized approach to triage and waiting room evaluation for stable chest pain patients was safe and efficient. Although waiting room evaluation is not ideal, it may be a feasible contingency strategy for periods when ED crowding compromises access to monitored, nurse-staffed ED beds.
Full-text free: http://www.annemergmed.com/article/S0196-0644(10)00350-1/fulltext
3. Use of Intranasal Fentanyl for the Relief of Pediatric Orthopedic Trauma Pain
Saunders S, et al. Acad Emerg Med. 2010;17:1155-1161.
Objectives: The objective was to evaluate the use of a single 2 μg/kg dose of intranasal fentanyl as analgesia for painful orthopedic injuries in children presenting to a pediatric emergency department (ED).
Methods: This was a prospective, nonblinded interventional trial, in a convenience sample of patients 3 to 18 years of age seen in a tertiary care pediatric ED. All had clinically suspected fractures and were treated between July and November 2006. Eligible patients had moderate to severe pain based on initial pain scores using the Wong Baker Faces Scale (WBS) for patients aged 3–8 years or the Visual Analog Scale (VAS) for patients aged 9–18 years. All enrolled patients received fentanyl via intranasal atomization. Pain scores were obtained at baseline and at 10, 20, and 30 minutes after intranasal fentanyl administration. Satisfaction scores were obtained using a 100-mm VAS. Vital signs and adverse events were recorded.
Results: Eighty-one patients were enrolled, 28 in the VAS group and 53 in the WBS group. The mean patient age was 8 years. Fracture locations included forearm, 38 (47%); supracondylar, 16 (20%); clavicle, 7 (9%); tibia/fibula, 5 (6%); and other, 15 (18%). In the WBS group, the median pain scores decreased from five faces (interquartile range [IQR] = 4–6) at baseline to three faces (IQR = 2–5) at 10 minutes, two faces (IQR = 1–4) at 20 minutes, and two faces (IQR = 1–3) at 30 minutes. The mean pain score in the VAS group at baseline was 70 mm (95% confidence interval [CI] = 63 to 77 mm). In this group, the pain scores decreased by a mean of 21 mm (95% CI = 14 to 28 mm) at 10 minutes, 25 mm (95% CI = 15 to 34 mm) at 20 minutes, and 27 mm (95% CI = 16 to 37 mm) at 30 minutes. Mean satisfaction scores were 79 mm for providers, 74 mm for parents, and 62 mm for patients. No adverse events were recorded.
Conclusions: Intranasal fentanyl at a dose of 2 μg/kg provides effective analgesia for pediatric ED patients with painful orthopedic trauma within 10 minutes of administration.
4. The Crashing Atrial Fibrillation Patient--EMCrit Podcast 20
Your patient is pale and diaphoretic. Blood pressure is 70/50. Heart rate is 178. EKG shows atrial fibrillation… What are you going to do???
Yeah, yeah, the Pavlovian ACLS response: you cardiovert. Wonderful, except it didn’t change a thing. Now what?
9 minute podcast here: http://blog.emcrit.org/podcasts/crashing-a-fib/
Speaking of AF…
5. Vernakalant HCl: A Novel Atrial-selective Agent for the Cardioversion of Recent-onset Atrial Fibrillation in the ED
Stiell IG, et al. Acad Emerg Med 2010;17:1175-1182.
Objectives: Vernakalant is a relatively atrial-selective antiarrhythmic agent that has been shown to successfully convert atrial fibrillation (AF) to normal sinus rhythm for some patients whose onset of dysrhythmia occurred less than 7 days previously. This study sought to evaluate the efficacy and safety of vernakalant for patients with recent-onset AF.
Methods: This was a post hoc analysis of patients with recent-onset AF (between 3 and 48 hours) enrolled in the double-blind, placebo-controlled Atrial arrhythmia Conversion Trial (ACT) I and the open-label ACT IV trials. The studies enrolled adults presenting with AF to 78 emergency departments (ED) and cardiac clinics in six countries. Patients received a 10-minute intravenous infusion of vernakalant or placebo, followed by an additional infusion if necessary. Efficacy assessments included conversion to sinus rhythm within 90 minutes and median time to conversion. Safety evaluations included telemetry, Holter monitoring, and adverse events (AEs).
Results: Of the 290 patients, 229 received vernakalant, 61 received placebo, and the overall mean age was 59 years. The vernakalant and placebo groups were similar. Of all patients given vernakalant, 136 (59.4%) converted to sinus rhythm within 90 minutes, compared with three (4.9%) placebo patients. The median time to conversion with vernakalant was 12 minutes (interquartile range = 7–24.5 minutes). Clinically significant bradycardia and hypotension were uncommon, and no cases of torsade de pointes or ventricular fibrillation occurred.
Conclusions: Vernakalant rapidly converted recent-onset AF to sinus rhythm in over half of patients, was well tolerated, and has the potential to offer an important therapeutic option for rhythm control of recent-onset AF in the ED.
More on the mechanisms and outcomes of vernakalant: http://www.medscape.com/viewarticle/725628
6. Brain and Carotid Imaging Findings Improve ABCD2 Assessment of Stroke Risk after TIA
November 2, 2010 — The ABCD3-I score, which adds brain and carotid imaging findings to the standard ABCD2 clinical prediction score, can markedly improve risk stratification after transient ischemic attack (TIA), according to the results of a multinational observational study.
The ABCD3-I score, with a range of 0 to 13 points, "has shown external validity, good calibration, and risk reclassification," first author Aine Merwick, MB, MRCPI, from Mater Misericordiae University Hospital, Dublin, Ireland, noted in an email to Medscape Medical News.
"This study demonstrates that carotid stenosis, earlier TIA preceding the TIA prompting medical attention, and diffusion-weighted imaging (DWI) lesions are independent predictors of early stroke risk in TIA patients," Dr. Merwick added.
TIA is associated with high risk for early stroke. It is now recommended that DWI and carotid imaging be performed "urgently as part of the standard assessment of patients with TIA," the study team notes in their report. "We expect that, as these guidelines are increasingly adopted, the ABCD3-I score will be of substantial practical use."
Stroke Prediction Tools
The validated and widely used ABCD2 score incorporates 5 clinical characteristics at the time of presentation to risk-stratify TIA patients for early stroke: age 60 years or older (1 point), blood pressure 140/90 mm Hg or higher (1 point), unilateral weakness (2 points) or speech impairment without weakness (1 point), duration of symptoms of 60 minutes or longer (2 points) or 10 to 59 minutes (1 point), and diabetes (1 point).
Dr. Merwick and colleagues developed 2 new versions of this score (ABCD3 and ABCD3-I). The ABCD3 score adds 2 points for a previous TIA within the preceding 7 days, and the ABCD3-I score further adds 2 points for a new magnetic resonance imaging lesion with DWI positivity and 2 points for internal carotid stenosis of at least 50%.
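For readers who want to see how the point assignments above combine, here is a minimal sketch in Python. The point values are those listed in the two paragraphs above; the function name and argument names are illustrative, not from the study, and this is not a validated clinical calculator.

```python
# Illustrative sketch only; point values as summarized above.

def abcd_scores(age_ge_60, bp_ge_140_90, unilateral_weakness, speech_impairment,
                duration_min, diabetes, dual_tia_within_7d, dwi_lesion,
                carotid_stenosis_ge_50):
    """Return (ABCD2, ABCD3, ABCD3-I) scores for a TIA patient."""
    abcd2 = 0
    abcd2 += 1 if age_ge_60 else 0
    abcd2 += 1 if bp_ge_140_90 else 0
    if unilateral_weakness:
        abcd2 += 2                      # weakness scores 2 points
    elif speech_impairment:
        abcd2 += 1                      # speech impairment without weakness scores 1 point
    if duration_min >= 60:
        abcd2 += 2
    elif duration_min >= 10:
        abcd2 += 1
    abcd2 += 1 if diabetes else 0       # ABCD2 range 0-7

    abcd3 = abcd2 + (2 if dual_tia_within_7d else 0)                                   # range 0-9
    abcd3_i = abcd3 + (2 if dwi_lesion else 0) + (2 if carotid_stenosis_ge_50 else 0)  # range 0-13
    return abcd2, abcd3, abcd3_i

# Example: a 72-year-old with unilateral weakness lasting 45 minutes, a prior TIA
# this week, a DWI-positive lesion, and no carotid stenosis -> (4, 6, 8)
print(abcd_scores(True, False, True, False, 45, False, True, True, False))
```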
The team tested the scores in 3886 patients with clinically defined TIA who were seen in secondary care. There were 2654 patients in the derivation sample and 1232 in the validation sample.
In the derivation sample, the C statistic (values above 0.5 indicate discrimination better than chance) was higher for the ABCD3 score than for the ABCD2 score for 7-day stroke prediction, with a further increase for the ABCD3-I score, the authors report.
In the validation sample, both scores predicted early stroke at 7, 28, and 90 days, and the ABCD3-I score improved discrimination of patients at high stroke risk at 28 and 90 days.
However, in the validation sample, discrimination and net reclassification of patients with early stroke were similar with the ABCD3 and ABCD2. Therefore, "use of the ABCD3 cannot be recommended without further validation," the authors say.
Remainder of this article: http://www.medscape.com/viewarticle/731831
Lancet abstract: http://www.ncbi.nlm.nih.gov/pubmed/20934388
7. Early lactate-guided therapy in ICU patients: a multicenter, open-label, randomized controlled trial
Jansen TC, et al. Am J Respir Crit Care Med. 2010 Sep 15;182(6):752-61.
AT A GLANCE COMMENTARY
Scientific Knowledge on the Subject: Increased blood lactate levels have been associated with significant morbidity and mortality. Nevertheless, it is unknown whether monitoring of lactate aimed to decrease levels during initial treatment in critically ill patients improves outcome.
What This Study Adds to the Field: In patients with hyperlactatemia on ICU admission, lactate monitoring followed by targeted treatment significantly reduced ICU length of stay. In addition, ICU and hospital mortality were reduced when adjusting for predefined risk factors. This study suggests that initial treatment aimed at reducing lactate levels has clinical benefit.
RATIONALE: It is unknown whether lactate monitoring aimed to decrease levels during initial treatment in critically ill patients improves outcome.
OBJECTIVES: To assess the effect of lactate monitoring and resuscitation directed at decreasing lactate levels in intensive care unit (ICU) patients admitted with a lactate level of greater than or equal to 3.0 mEq/L.
METHODS: Patients were randomly allocated to two groups. In the lactate group, treatment was guided by lactate levels with the objective to decrease lactate by 20% or more per 2 hours for the initial 8 hours of ICU stay. In the control group, the treatment team had no knowledge of lactate levels (except for the admission value) during this period. The primary outcome measure was hospital mortality.
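As a rough illustration of the lactate-group target described above (a fall of at least 20% per 2-hour interval over the first 8 hours of ICU stay), here is a small Python sketch; the function and variable names are invented for illustration and are not part of the trial protocol.

```python
# Illustration only: checks whether serial 2-hourly lactate values meet the
# study's lactate-group goal of a >=20% decrease per 2 hours.

def meets_lactate_target(lactate_meq_l):
    """lactate_meq_l: values measured every 2 hours, starting at ICU admission."""
    for earlier, later in zip(lactate_meq_l, lactate_meq_l[1:]):
        if later > 0.8 * earlier:   # less than a 20% drop over this 2-hour interval
            return False
    return True

# Example: 4.0 -> 3.0 -> 2.2 -> 1.7 -> 1.3 meets the 20%-per-2-hours goal.
print(meets_lactate_target([4.0, 3.0, 2.2, 1.7, 1.3]))  # True
```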
MEASUREMENTS AND MAIN RESULTS: The lactate group received more fluids and vasodilators. However, there were no significant differences in lactate levels between the groups. In the intention-to-treat population (348 patients), hospital mortality in the control group was 43.5% (77/177) compared with 33.9% (58/171) in the lactate group (P = 0.067). When adjusted for predefined risk factors, hospital mortality was lower in the lactate group (hazard ratio, 0.61; 95% confidence interval, 0.43-0.87; P = 0.006). In the lactate group, Sequential Organ Failure Assessment scores were lower between 9 and 72 hours, inotropes could be stopped earlier, and patients could be weaned from mechanical ventilation and discharged from the ICU earlier.
CONCLUSIONS: In patients with hyperlactatemia on ICU admission, lactate-guided therapy significantly reduced hospital mortality when adjusting for predefined risk factors. As this was consistent with important secondary endpoints, this study suggests that initial lactate monitoring has clinical benefit. Clinical trial registered with www.clinicaltrials.gov (NCT00270673).
8. Triaging Herpes Zoster Ophthalmicus Patients in the ED: Do All Patients Require Referral?
Adam RS, et al. Acad Emerg Med 2010;17:1183–1188.
Objectives: The objective was to assess the predictive value of clinical signs and symptoms of herpes zoster ophthalmicus (HZO) for development of moderate to severe eye disease.
Methods: This was a prospective cohort multicenter study of 54 patients referred to the ophthalmology service after presenting to the emergency department (ED) or primary care clinic with a zosteriform rash of less than 10 days’ duration. Upon referral to ophthalmology, easily assessable clinical signs and symptoms were documented. A complete ocular exam was then performed. Patients were followed for 2 months.
Results: Twenty-three patients (43%) developed moderate to severe disease as defined by corneal or intraocular involvement. Eye redness and rash in the supratrochlear nerve distribution had a statistically significant association with clinically relevant eye disease. All 23 patients who developed moderate to severe eye disease presented with a red eye. Hutchinson’s sign (nasociliary nerve involvement) was not predictive of clinically relevant eye disease.
Conclusions: Eye redness was 100% sensitive for predicting moderate to severe eye disease in this sample of patients and should necessitate immediate referral for ophthalmologic assessment. Patients lacking eye redness, even with a positive Hutchinson’s sign, may not require immediate specialist consultation. All patients not being referred require careful instructions to seek further care should they develop any concerning eye symptoms such as redness, pain, photophobia, or visual disturbance.
9. Evidence-Based Emergency Medicine from Annals of Emergency Medicine
a. Immediate β-Blockade in Patients with Myocardial Infarctions: Is There Evidence of Benefit?
Conclusion: Evidence from a single randomized trial failed to demonstrate a reduction in mortality or reinfarction with administration of a β-blocker within the first 24 hours after STEMI.
Full-text (free): http://www.annemergmed.com/article/S0196-0644(10)00341-0/fulltext
b. Use of Platelet Glycoprotein IIb/IIIa Inhibitors in Patients With Unstable Angina and Non-ST-Segment Elevation Myocardial Infarction
Take home: In patients with non-ST-segment elevation acute coronary syndromes who do not undergo early percutaneous coronary intervention, administration of platelet glycoprotein IIb/IIIa (GPIIb/IIIa) inhibitors, given in addition to aspirin and unfractionated heparin, does not reduce 30-day or 6-month mortality. For the composite endpoint of myocardial infarction or death, there was modest benefit at 30 days and 6 months; however, there was an increased risk of major hemorrhage among those receiving GPIIb/IIIa inhibitors.
Full-text (free): http://www.annemergmed.com/article/S0196-0644(10)00481-6/fulltext
10. Sufentanil Is Not Superior to Morphine for the Treatment of Acute Traumatic Pain in an Emergency Setting: A Randomized, Double-Blind, Out-of-Hospital Trial
[Ed. Note: Attend to the loading dose of morphine and the subsequent interval doses.]
Bounes V, et al. Ann Emerg Med. 2010;56:509-516.
Study objective
We determine the best intravenous opioid titration protocol by comparing morphine and sufentanil for adult patients with severe traumatic acute pain in an out-of-hospital setting, with a physician providing care.
Methods
In this double-blind randomized clinical trial, patients were eligible for inclusion if they were aged 18 years or older and had acute severe pain (defined as a numeric rating scale score ≥6/10) caused by trauma. They were assigned to receive either intravenous sufentanil 0.15 μg/kg followed by 0.075 μg/kg every 3 minutes, or intravenous morphine 0.15 mg/kg followed by 0.075 mg/kg every 3 minutes. The primary endpoint of the study was pain relief at 15 minutes, defined as a numeric rating scale score of 3 or less out of 10. Secondary endpoints were time to analgesia, adverse events, and duration of analgesia during the first 6 hours.
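To make the weight-based doses concrete (per the editor's note above about the loading and interval doses), here is a small Python sketch of the arithmetic for a given patient weight. It simply multiplies out the per-kilogram figures quoted in the abstract; it is an illustration, not a clinical dosing tool.

```python
# Illustration only, not a clinical dosing tool. Per-kilogram doses are those
# quoted in the study abstract above.

def titration_doses(weight_kg):
    """Loading and subsequent 3-minute interval doses for the two study arms."""
    return {
        "sufentanil_ug": {"loading": 0.15 * weight_kg,     # 0.15 ug/kg IV
                          "interval": 0.075 * weight_kg},  # 0.075 ug/kg every 3 min
        "morphine_mg":   {"loading": 0.15 * weight_kg,     # 0.15 mg/kg IV
                          "interval": 0.075 * weight_kg},  # 0.075 mg/kg every 3 min
    }

# Example: a 70 kg patient -> sufentanil 10.5 ug load then 5.25 ug q3min;
# morphine 10.5 mg load then 5.25 mg q3min.
print(titration_doses(70))
```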
Results
A total of 108 patients were included, 54 in each group. At 15 minutes, 74% of the patients in the sufentanil group had a numeric rating scale score of 3 or lower versus 70% of those in the morphine group (Δ 4%; 95% confidence interval –13% to 21%). At 9 minutes, 65% of the patients in the sufentanil group had experienced pain relief versus 46% of those in the morphine group (Δ 18%; 95% confidence interval 0.1% to 35%). The duration of analgesia favored the morphine group. In both groups, 19% of patients experienced an adverse event, all mild to moderate.
Conclusion
Intravenous morphine titration using a loading dose of morphine followed by strictly administered lower doses at regular intervals remains the criterion standard. Moreover, this study supports the idea that the doses studied should be considered for routine administration in severe pain protocols.
11. EMS Crashes, Lack of Evidence Bring the ‘Golden Hour’ Concept under New Scrutiny
Berger E. Ann Emerg Med 2010;56:A17-A19.
On a steamy Harlem morning last July, an Acura blasted its radio while approaching the intersection of 125th Street and Seventh Avenue. As a result, the car's driver did not hear the sirens of an oncoming ambulance and broadsided the emergency vehicle, which was responding to a call in haste. Reacting to the Acura, the ambulance swerved and crashed into a city bus and Ford Expedition. Nineteen people were injured, requiring the services of several more ambulances.
A week later I telephoned Nadine Levick, MD, MPH, who told me she lives about 10 blocks from the intersection where the crash occurred. An emergency physician who researches emergency medical services (EMS) transport, Dr. Levick has become a leading crusader for increasing information about the prevalence of ambulance crashes, increased oversight of EMS and slowing down ambulances on most emergency calls.
“The accident certainly highlights the issue,” she said.
Dr. Levick's cause—a reevaluation of speed at all costs in delivering patients to the hospital—has gained ground in recent years as scientists have stepped up their investigations into the notion of a “golden hour,” the time-honored idea that patients have the best chance of surviving a traumatic injury if they receive medical care within 60 minutes. In several studies, scientists have accumulated a growing amount of evidence that time does not always matter when it comes to traumatic injuries.
The golden hour, it seems, stands on a foundation of less than rigorous scientific evidence.
The concept is most widely credited to famed trauma surgeon R. Adams Cowley, a pioneer in emergency medicine who spearheaded the creation of the nation's first statewide EMS system in Maryland.
Cowley originated the idea on a cocktail napkin in a Baltimore bar, said Bryan Bledsoe, DO, a professor of emergency medicine at the University of Nevada School of Medicine. A review of Cowley's writings, published in the July 2001 issue of Academic Emergency Medicine, found no scientific articles backing up his assertion about the significance of a single hour in the mortality of a patient.
Remainder of the essay: http://www.annemergmed.com/article/S0196-0644(10)01550-7/fulltext
12. Prehospital Hypertonic Fluid Fails to Improve Outcomes in Patients with Blunt Head Trauma
In the largest randomized controlled trial to date, prehospital hypertonic fluid therapy did not improve neurological outcomes in patients with severe blunt head trauma without hypovolemic shock.
Hypertonic fluid therapy diminishes cerebral edema and enhances systemic perfusion pressure in patients with severe blunt head injury, but its effect on neurological outcome is unknown. In a multicenter, double-blind, randomized, placebo-controlled trial, researchers evaluated the effect of hypertonic fluid in patients older than 15 years who had sustained severe blunt closed head injury (prehospital Glasgow Coma Scale score of 8 or less) and did not have hypovolemic shock (defined as systolic blood pressure ≤70 mm Hg, or 71–90 mm Hg with a pulse ≥108 beats per minute). Patients were randomized to receive an initial fluid bolus of 250 mL of 7.5% saline, 7.5% saline/6% dextran 70, or 0.9% saline within 4 hours of the dispatch call.
Six-month outcome data were available for 1087 of 1282 patients (85%) who were enrolled from 2006 to 2009. At 6 months, there were no significant differences between the hypertonic-fluid groups and the normal-saline group in neurological outcome (as measured by the Extended Glasgow Outcome Scale and Disability Rating Scale), survival at 28 days, survival at hospital discharge, development of organ failure, or length of stay in an intensive care unit or hospital. No increase in progression of intracranial hemorrhage was noted in the hypertonic-fluid groups.
Comment: Although this study is the largest of its kind, the authors did not control for postintervention neurosurgical management or fluid administration (including additional hypertonic saline or mannitol), and 15% of patients were lost to follow-up. Currently, hypertonic saline is not recommended for prehospital treatment of patients with severe head injury. This trial should not lead to a change in practice.
— John A. Marx, MD, FAAEM. Published in Journal Watch Emergency Medicine, November 5, 2010. Citation: Bulger EM, et al. Out-of-hospital hypertonic resuscitation following severe traumatic brain injury: A randomized controlled trial. JAMA 2010 Oct 6; 304:1455.
13. New York’s Non-invasive Protocol for Septic ED Patients
Reserves the Central Line for only those patients who need pressors.
Protocol: http://blog.emcrit.org/wp-content/uploads/non-invasive.pdf
Video on using IVC US to estimate fluid responsiveness in spontaneously breathing patients: http://blog.emcrit.org/procedures/ivc-ultrasound/
Is this aggressive sepsis management? Yes. Is this adhering to the Rivers Protocol? No. Apparently, there are other ways to skin the cat. The evidence for these types of “selective central line” protocols will be forthcoming in the next few years.
Example: The ProCESS trial: http://clinicaltrials.gov/ct2/show/NCT00510835
14. Effect of Bedside Ultrasound on Management of Pediatric Soft-Tissue Infection
Sivitz AB, et al. J Emerg Med. 2010;39:637-643.
Background: Superficial soft-tissue infections (SSTI) are frequently managed in the emergency department (ED). Soft-tissue bedside ultrasound (BUS) for SSTI has not been specifically studied in the pediatric ED setting. Objective: To evaluate the effect of a soft-tissue BUS evaluation on the clinical diagnosis and management of pediatric superficial soft-tissue infection.
Methods: We conducted a prospective observational study in two urban academic pediatric EDs. Eligible patients were aged less than 18 years presenting with suspected SSTI. Before BUS, treating physicians were asked to assess the likelihood of subcutaneous fluid collection and whether further treatment would require medical management or invasive management. A trained emergency physician then performed a BUS of the lesion(s). A post-test questionnaire assessed whether the physician changed the initial management plan based on the results of the BUS.
Results: BUS changed management in 11 of 50 cases. After initial clinical assessment, 20 patients were designated to receive invasive management and the remaining 30 to receive medical management. BUS changed management for 6 of 20 patients in the invasive group and for 5 of 30 in the medical group. BUS had a sensitivity of 90% (95% confidence interval [CI] 77–100%) and specificity of 83% (95% CI 70–97%), whereas clinical suspicion had a sensitivity of 75% (95% CI 56–94%) and specificity of 80% (95% CI 66–94%) for detecting fluid collections requiring drainage.
Conclusions: BUS evaluation of pediatric SSTI may be a useful clinical adjunct for the emergency physician. It changed management in 22% of cases by detecting subclinical abscesses or avoiding unnecessary invasive procedures.
15. Images in Emergency Medicine
Woman with Leg Swelling
http://www.annemergmed.com/article/S0196-0644(09)01717-X/fulltext
Man With Back Pain and Rash
http://www.annemergmed.com/article/S0196-0644(09)01710-7/fulltext
16. Coffee Boosts Stroke Risk, Study Finds
Allison Gandey. November 5, 2010 — A cup of coffee can heighten the risk for ischemic stroke, particularly among infrequent drinkers, report researchers. Their study provides new information that may be useful in stroke prevention and is in line with what is already known about the physiologic effects of coffee.
Investigators led by Elizabeth Mostofsky, MPH, from Harvard Medical School in Boston, Massachusetts, found a 2-fold increased stroke risk in the hour after drinking a cup of coffee. The increased risk returned to baseline within a 2-hour window, which investigators say strengthens the possibility of a causal relationship.
Previous studies evaluating the effect of coffee on cardiovascular diseases have provided conflicting results. Case-control studies have shown an increase in risk, but prospective cohort studies have suggested no harmful effects. Coffee has been shown to have a negative effect on cardiovascular biomarkers, increasing serum cholesterol, insulin resistance, and plasma homocysteine. However, several studies have shown that coffee intake decreases the risk for type 2 diabetes.
In this new multicenter crossover study, researchers interviewed 390 people with ischemic stroke. They compared each person's coffee intake the hour before stroke symptoms to his or her usual consumption.
Most patients (78%) reported drinking coffee in the prior year. More than half of these had a cup of coffee within 24 hours of stroke. Close to 9% of patients had coffee within 1 hour of stroke onset.
Although an increase in stroke risk was seen with coffee, there was no apparent increase in risk in the hour after a cup of caffeinated tea or cola.
The association between coffee consumption and ischemic stroke in the following hour was apparent only among those consuming 1 cup or less per day, not among those who drank coffee more regularly (P for trend = .002). Relative risks remained similar when researchers restricted the sample to those who were not simultaneously exposed to other potential triggers, and the results remained significant after stratifying by time of day.
"Elegant Study Design"
In an accompanying editorial, Giancarlo Logroscino, MD, from University of Bari in Italy, and Tobias Kurth, MD, from the Institut National de la Santé et de la Recherche Médicale in Paris, France, say "the authors used an elegant study design" and they call this "an important addition to the 'coffee paradox'."
But they add clinicians will need further evidence to properly advise people about coffee intake, especially when other risk factors for stroke are present.
"Even if coffee contains other substances that may be responsible for the observed effect, caffeine is the most likely candidate for pulling the trigger," they note.
Caffeine usually reaches its peak plasma concentration in less than 2 hours and has several systemic effects, including rapidly increasing epinephrine release, blood pressure, and insulin sensitivity, they explain. "Caffeine has both systemic and cerebral vasoconstrictive effects."
Mostofsky E, et al. Neurology. 2010;75:1583-1588.
Abstract: http://www.ncbi.nlm.nih.gov/pubmed/20881275
17. Getting through the Shift
by Ed Leap.
What gets you through the shift, friends? I'm not being elevated and philosophical. I'm simply asking, in the press of life and death, rules and regulations, fatigue and soul-weariness, what makes you feel better when you go off to your emergency department for eight or 12 or 24 hours?
Let me illustrate. I have a ritual. I don't believe it has any supernatural powers, and I sure don't believe it makes for a good shift if I do it or a bad shift if I don't. It only makes me happy. My ritual is this: I stop by Jack-in-the-Box for what is arguably one of the largest styrofoam drink cups in the fast-food world. (Although not as big as the Route 44 cup from Sonic, which is 44 ounces of sheer soft-drink delight. And in which a small, careless primate could likely drown.)
When I go in, I obtain one large Diet Coke with ice, and fill it almost to the top, at which point I add an ounce or so of regular Coke for flavor. (I mean, for sugar.) Then I take a second cup, and fill it with sweetened tea sans ice. The first half of the eight hours, I drink the Diet Coke. Then, I fill the cup with ice a second time, and transfer the tea over. I know; it sounds ridiculous. But it gives me something to look forward to, and serves as a kind of mile-marker for my work. I could say, quite accurately, that my “tea time” is the halfway marker to home.
Remainder of the essay: http://journals.lww.com/em-news/Fulltext/2010/11000/Second_Opinion__Getting_Through_the_Shift.6.aspx
18. Midazolam Reduces Agitation Associated with Ketamine Sedation in Adults
Bob Saunders. NEW YORK (Reuters Health) Nov 01 - Adding midazolam to ketamine for emergency department sedation in adults reduces the occurrence of agitation during recovery, investigators report in the Annals of Emergency Medicine online October 25.
"The main message is that ketamine can and should be used for painful procedures in adults," lead author Dr. Carl H. Schultz commented by email. "The addition of midazolam will substantially improve the sedation process and make ketamine very useful in adults."
Dr. Schultz, at UC Irvine School of Medicine, Orange, California, and colleagues in Turkey point out in their report that while ketamine has been used widely for procedural sedation and analgesia in children, its acceptance for adult emergency department patients "may be limited by physician apprehension about dreaming and hallucinations during recovery, and unpleasant reactions and nightmares, collectively referred to as recovery agitation."
To see if midazolam reduces recovery agitation after ketamine administration in adult emergency patients, and whether the route of administration of ketamine made any difference, the researchers conducted a prospective study involving 182 such patients. They were randomized in a 2x2 factorial design to receive either midazolam 0.03 mg/kg or placebo intravenously, with ketamine at a dose of either 1.5 mg/kg IV or 4 mg/kg IM.
"We defined recovery agitation as any moaning, screaming, cursing, unpleasant dreams, or unpleasant hallucinations, regardless of severity," the authors state. "Pleasant hallucinations were not counted as adverse events."
Agitation occurred in only 8% of patients given midazolam compared to 25% of those given placebo, according to the report. The rates weren't significantly different when comparing IV versus IM ketamine administration -- 13% versus 17%.
Sedation time was longer with ketamine given IM than when administered intravenously, but midazolam didn't prolong those times significantly. "People have said that if you use midazolam, it will prolong recovery," noted Dr. Schultz. "We showed that is not true."
After the procedures, nurses, doctors and patients in the study were asked to rate their overall satisfaction. Staff registered no differences in satisfaction scores, whereas 69% of patients in the midazolam group expressed satisfaction compared with 48% in the placebo group.
Dr. Schultz said that ketamine "is incredibly safe," and that generally there are no adverse effects from midazolam co-administration at the doses used. While midazolam is not specifically approved by the US Food and Drug Administration for administration with ketamine, it is approved for sedation "so this is really not an issue," he noted.
Given the results, should the strategy be used more widely? "Absolutely!" said Dr. Schultz. "I have been using ketamine in adults with midazolam for over 10 years but have had some difficulty convincing my colleagues that it is effective. Now I think we should be doing this routinely."
Ann Emerg Med. Posted October 25, 2010. Abstract: http://www.ncbi.nlm.nih.gov/pubmed/20970888
19. Emergency Medicine Clinical Reviews
a. Paronychia Management
A Review by James Roberts, Emerg Med News 2010;32:12-15.
http://journals.lww.com/em-news/Fulltext/2010/11000/InFocus__Fingertip_Problems__Acute_Paronychia.5.aspx
b. Umbilical Vein Catheterization
Courtesy of EM-RAP TV Episode 97
http://www.emrap.tv/index.php?option=com_content&view=article&id=2281:EMRAPTV97_Umbilical-vein-cath
20. Infection and Natural History of Emergency Department–Placed Central Venous Catheters
LeMaster CH, et al. Ann Emerg Med. 2010;56:492-497.
STUDY OBJECTIVES: Central line-associated bloodstream infection (CLABSI, hereafter referred to in this paper as "bloodstream infection") is a leading cause of hospital-acquired infection. To our knowledge, there are no previously published studies designed to determine the rate of bloodstream infection among central venous catheters placed in the emergency department (ED). We design a retrospective chart review methodology to determine bloodstream infection and duration of catheterization for central venous catheters placed in the ED.
METHODS: Using hospital infection control, administrative, and ED billing databases, we identified patients with central venous catheters placed in the ED between January 1, 2007, and December 31, 2008, at one academic, urban ED with an annual census of 57,000. We performed a structured, explicit chart review to determine duration of catheterization and confirm bloodstream infection.
RESULTS: We screened 4,251 charts and identified 656 patients with central venous catheters inserted in the ED, 3,622 catheter-days, and 7 bloodstream infections. The rate of bloodstream infection associated with central venous catheters placed in the ED was 1.93 per 1,000 catheter-days (95% confidence interval 0.50 to 3.36). The mean duration of catheterization was 5.5 days (median 4; range 1 to 29 days). Among infected central venous catheters, the mean duration of catheterization was 8.6 days (median 7; range 2 to 19 days). A total of 667 central venous catheters were placed in the internal jugular (392; 59%), subclavian (145; 22%), and femoral (130; 19%) veins. The sensitivity of using ED procedural billing code for identifying ED-placed central venous catheters among patients subsequently admitted to any ICU was 74.9% (95% confidence interval 71.4% to 78.3%).
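The reported infection rate follows directly from the counts above (7 bloodstream infections over 3,622 catheter-days); a quick Python check of the arithmetic, for illustration only (the 95% confidence interval quoted in the abstract is the authors' own calculation and is not reproduced here):

```python
# Rate per 1,000 catheter-days, using the counts reported in the abstract above.
infections = 7
catheter_days = 3622

rate_per_1000_catheter_days = infections / catheter_days * 1000
print(round(rate_per_1000_catheter_days, 2))  # 1.93, matching the reported rate
```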
CONCLUSION: The rate of ED bloodstream infection at our institution is similar to current rates in ICUs. Central venous catheters placed in the ED remain in admitted patients for a substantial period.
21. Brain Death from Cardiac Arrest Is OK for Organ Donation
Will Boggs MD. NEW YORK (Reuters Health) Oct 27 - Organs from brain dead cardiac arrest victims seem as good as those from donors with brain death from other causes, say the authors of a systematic review published online October 6th in Resuscitation.
"Cardiac arrest as a cause of brain death should not prevent a heart-beating, brain-dead patient to be considered for organ donation," lead author Dr. Claudio Sandroni from Catholic University School of Medicine, Rome, Italy, told Reuters Health in an email.
The point of the study, he said, was to determine whether recent resuscitation from a cardiac arrest affects the quality of a donor's organs.
The goal was not to show that the pool of organ donors could be expanded by including these patients, he added. "They are already routinely included in the pool of possible organ donors, just like those who develop brain death due to a primary cerebral injury."
But Dr. Sandroni and colleagues found only three studies of sufficient quality to be included in the final analysis - and only one of the three was prospective. Overall, they involved 741 brain dead donors, including 69 who died after cardiac arrest. The studies reported on different organs, so the authors could not pool the outcome data.
The largest study compared 566 heart grafts from non-cardiac arrest donors with 38 from donors who did have cardiac arrest. There was no difference in 30-day, one-year, or five-year patient survival, nor in early postoperative outcomes (i.e., rates of renal failure, respiratory failure, balloon pump support, or need for ventricular assist devices).
A second report compared outcomes of cardiac arrest and non-cardiac arrest donor kidneys (29 and 79, respectively), livers (14 and 43), hearts (7 and 9), and lungs (2 and 4). When numbers were large enough to allow statistical analysis, there were no differences between the groups.
Finally, a third transplant team compared outcomes in recipients of intestinal, liver, and multivisceral grafts (12 from cardiac arrest donors and 55 from non-arrest donors). Here too there were no statistically significant differences in postoperative outcomes (including time to independence from total parenteral nutrition) or in patient survival.
Clearly, as the authors conclude, larger and prospective studies are needed to confirm their conclusions.
Abstract: http://www.ncbi.nlm.nih.gov/pubmed/20932627