From the recent medical literature...
1. Stress Cardiac MRI with Observation Unit Care Reduces Cost for Patients With Emergent Chest Pain: A Randomized Trial
Chadwick DM, et al. Ann Emerg Med. 2010;56:209-219.e2
Introduction
Background
Current guidelines from the American College of Cardiology and the American Heart Association suggest that patients being evaluated for acute coronary syndrome without diagnostic ECGs or biomarkers may be further evaluated in an observation unit. In practice, patients at intermediate or high probability of experiencing acute coronary syndrome, such as patients with previous myocardial infarction, diabetes, or advanced age, are commonly admitted to the hospital. After admission, these patients frequently receive highly variable and aggressive care, with cardiac catheterization rates as high as 37% to 56%.
Cardiac magnetic resonance imaging (MRI) is an established stress-testing modality, with sensitivity and specificity superior to that of stress echocardiography and without the radiation exposure or radioisotope-related time delays associated with nuclear testing. It has proven to be highly accurate for identifying acute coronary syndrome in patients presenting with emergent chest pain in a research setting. Cardiac MRI testing is appealing because it can detect recent infarction before biomarker increase, identify inducible myocardial ischemia, and differentiate between new and old myocardial infarctions. The comprehensive information provided by stress cardiac MRI makes it appropriate for most non-low-risk patients with emergent chest pain, including those with known coronary artery disease. These attributes also make it well suited for protocol-driven rapid care units.
Importance
Improving the efficiency of chest pain evaluations through the use of observation units could reduce the risk of such evaluations. The strengths of cardiac MRI testing and its high accuracy suggest that cardiac MRI is an ideal test for integration into an observation unit care pathway. However, implementing a cardiac MRI testing program requires equipment purchases and personnel training; for these reasons, it is associated with “up-front” expenses. For this to be a worthwhile investment, a cardiac MRI strategy would have to represent an improvement over existing models of care delivery. Therefore, the cost associated with a cardiac MRI care strategy should be examined.
Goals of This Investigation
The purpose of this trial was to integrate cardiac MRI testing with observation unit care among patients with emergent non-low-risk chest pain and compare direct medical cost of this care pathway with that of conventional inpatient care.
Abstract
Study objective
We determine whether stress cardiac magnetic resonance imaging (MRI) in an observation unit would reduce medical costs among patients with emergent non-low-risk chest pain who would otherwise be managed with an inpatient care strategy.
Methods
Emergency department patients (n=110) at intermediate or high probability for acute coronary syndrome without electrocardiographic or biomarker evidence of a myocardial infarction provided consent and were randomized to stress cardiac MRI in an observation unit versus standard inpatient care. The primary outcome was direct hospital cost calculated as the sum of hospital and provider costs. Estimated median cost differences (Hodges-Lehmann) and distribution-free 95% confidence intervals (Moses) were used to compare groups.
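The Hodges-Lehmann estimate used here is simply the median of all pairwise differences between the two groups' costs. As a minimal illustrative sketch (the cost figures below are invented for demonstration, not the trial's data):

```python
from itertools import product
from statistics import median

def hodges_lehmann(x, y):
    """Hodges-Lehmann estimate of the location shift between two
    samples: the median of all pairwise differences x_i - y_j."""
    return median(xi - yj for xi, yj in product(x, y))

# Hypothetical per-patient costs in dollars -- illustrative only.
inpatient = [2900, 3400, 4100]
cmr_observation = [2500, 2700]

shift = hodges_lehmann(inpatient, cmr_observation)  # 800
```

The distribution-free (Moses) confidence interval reported in the abstract is built from the ordered pairwise differences as well; statistical packages compute it alongside the point estimate.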
Results
There were 110 participants with 53 randomized to cardiac MRI and 57 to inpatient care; 8 of 110 (7%) experienced acute coronary syndrome. In the MRI pathway, 49 of 53 underwent stress cardiac MRI, 11 of 53 were admitted, 1 left against medical advice, 41 were discharged, and 2 had acute coronary syndrome. In the inpatient care pathway, 39 of 57 patients initially received stress testing, 54 of 57 were admitted, 3 left against medical advice, and 6 had acute coronary syndrome. At 30 days, no subjects in either group experienced acute coronary syndrome after discharge. The cardiac MRI group had a reduced median hospitalization cost (Hodges-Lehmann estimate $588; 95% confidence interval $336 to $811); 79% were managed without hospital admission.
Conclusion
Compared with inpatient care, an observation unit strategy involving stress cardiac MRI reduced incident cost without any cases of missed acute coronary syndrome in patients with emergent chest pain.
Full-text (free): http://www.annemergmed.com/article/S0196-0644(10)00375-6/fulltext
2. A Clinical Decision Rule to Identify Infants with Apparent Life-threatening Event Who Can Be Safely Discharged from the ED
Sun G, et al. Acad Emerg Med. 2010;17(5) Supplement 1;S70 (abstract 205)
Introduction: Recent studies have questioned the need to hospitalize all infants with an apparent life-threatening event (ALTE), but the ability to identify those who can be safely discharged from the emergency department (ED) is lacking.
Objective: To formulate a clinical decision rule (CDR) to identify infants with ALTE at low risk of adverse outcome who can be safely discharged from the ED.
Methods: Prospective cohort study of infants with an ED diagnosis of ALTE. Detailed information about the visit and 4-week follow-up were collected. Admission was considered warranted if the infant required any significant intervention (SI) during hospital stay (e.g., parenteral antibiotics for proven infection, airway management, admission to ICU) or had a recurrence of ALTE leading to a repeat ED visit within 72 hours. Logistic regression and recursive partitioning analysis were used to develop and validate a CDR identifying patients at low risk of SI and thus suitable for discharge from the ED.
Results: Three hundred infants were enrolled; 228 (76%) were admitted; 37 (12%) required SI. None died during hospital stay or within 72 hours of discharge, or was diagnosed with serious bacterial infection (bacterial meningitis, bacteremia, or UTI). Of the 286 subjects with complete case data who underwent further analysis, 35 (12%) met criteria for SI. Logistic regression identified prematurity (OR 4.5, 95% CI 2-9.9), abnormal physical exam in the ED (3.4, 1.6-7.5), color change to blue (3.2, 1.4-7.2), absence of reported cough or runny nose (2.9, 1.1-7.9), and absence of history of choking (2.3, 1.1-5.1) as predictors for SI. These variables were used to create a CDR. Based on this model (terminal nodes 1, 2, and 5 indicating subjects at low risk), 184 (64%) infants could be discharged home safely from the ED. The model has a negative predictive value of 96.2% (92-98.3).
Conclusion: Only 12% of infants presenting to the ED with an ALTE had a significant intervention warranting hospital admission. We created a CDR that would have safely decreased the admission rate from 76% (observed) to 36%.
Analyzing the data, the researchers discovered that almost any infant in the study could have been safely discharged if they met one of the following criteria:
• Born at term and whose color did not change to blue
• Born at term, color change to blue, with a history of choking and normal exam in ED
• Born preterm and with a history of URI symptoms in the prior 24 hours
Work is now underway to validate the rule in other centers.
Article in ACEP News: http://www.acep.org/acepnews.aspx?id=49207
3. The ROSE (risk stratification of syncope in the emergency department) study
Reed MJ, et al. J Am Coll Cardiol. 2010;55:713-21.
OBJECTIVES: The aim of this study was to develop and validate a clinical decision rule (CDR) to predict 1-month serious outcome and all-cause death in patients presenting with syncope to the emergency department.
BACKGROUND: Syncope is a common, potentially serious condition accounting for many hospital admissions.
METHODS: This was a single center, prospective, observational study of adults presenting to the emergency department with syncope. A CDR was devised from 550 patients in a derivation cohort and tested in a validation cohort of a further 550 patients.
RESULTS: One-month serious outcome or all-cause death occurred in 40 (7.3%) patients in the derivation cohort. Independent predictors were brain natriuretic peptide concentration more than 300 pg/ml (odds ratio [OR]: 7.3), positive fecal occult blood (OR: 13.2), hemoglobin less than 90 g/l (OR: 6.7), oxygen saturation 94% or lower (OR: 3.0), and Q-wave on the presenting electrocardiogram (OR: 2.8). One-month serious outcome or all-cause death occurred in 39 (7.1%) patients in the validation cohort. The ROSE (Risk stratification Of Syncope in the Emergency department) rule had a sensitivity and specificity of 87.2% and 65.5%, respectively, and a negative predictive value of 98.5%. An elevated B-type natriuretic peptide (BNP) concentration alone was a major predictor of serious cardiovascular outcomes (8 of 22 events, 36%) and all-cause deaths (8 of 9 deaths, 89%).
CONCLUSIONS: The ROSE rule has excellent sensitivity and negative predictive value in the identification of high-risk patients with syncope. As a component, BNP seems to be a major predictor of serious cardiovascular outcomes and all-cause death. The ROSE rule and BNP measurement might be valuable risk stratification tools in patients with emergency presentations of syncope and should now be subjected to external validation.
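The reported negative predictive value follows directly from the rule's sensitivity and specificity at the validation cohort's event prevalence. A quick check of that arithmetic, using only the figures given in the abstract:

```python
def npv(sensitivity, specificity, prevalence):
    """Negative predictive value: true negatives divided by all
    negative test results, at a given event prevalence."""
    tn = specificity * (1 - prevalence)          # rule-negative, no event
    fn = (1 - sensitivity) * prevalence          # rule-negative, event occurred
    return tn / (tn + fn)

# Validation cohort: sensitivity 87.2%, specificity 65.5%, prevalence 7.1%.
value = npv(sensitivity=0.872, specificity=0.655, prevalence=0.071)
# value is approximately 0.985, matching the reported NPV of 98.5%.
```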
Comment by Eric C. Bruno, MD
Syncope remains an elusive beast. Patients are routinely at their baseline upon arrival in the emergency department, making the evaluation somewhat limited. Clinicians were given a clinical decision rule, the San Francisco Syncope Rule, but it could not subsequently be validated; clinical judgment was the best predictor in that failed validation study. The authors of this prospective observational study took a stab at creating a clinical decision rule, selecting 32 predetermined variables, to predict a serious event within 30 days of presentation. Considering the funding, the true intent of the study was to determine whether the provided Biosite® B-type natriuretic peptide (BNP) assay was an effective predictor of serious events. The authors seemingly attempted to shoehorn a decision rule around the BNP test.
The results did, in fact, show that a BNP greater than 300 pg/ml anticipates serious outcomes, as do six other factors that predicted the clinically significant events, including myocardial infarction, arrhythmia, and intracranial hemorrhage. The authors created a double mnemonic (the ROSE study yielding the BRACES clinical rule), which will ultimately lead to some level of confusion. Fortunately, the researchers validated their own rule in this paper.
4. Alteplase Is Effective Up to 4.5 Hours after Onset of Ischemic Stroke
But earlier is better.
On the basis of reports published in September 2008 from two large international studies, professional stroke organizations extended the recommended time between symptom onset and administration of alteplase from 3 to 4.5 hours (JW Emerg Med Sep 24 2008 and JW Emerg Med Sep 15 2008). To assess implementation of the wider treatment window and its effects, investigators analyzed data for nearly 24,000 patients who were included in the study's stroke registry from 2002 to 2010.
Overall, 2376 patients received alteplase between 3 and 4.5 hours after symptom onset; the proportion of patients who were treated within this window was three times higher in the last quarter of 2009 than in the first quarter of 2008. Rates of poor outcomes were low: 7.1% of patients treated within 3 hours and 7.4% of those treated at 3 to 4.5 hours had symptomatic intracerebral hemorrhage and 12.3% and 12.0%, respectively, died within 3 months. However, in analyses adjusted for confounding variables, patients treated at 3 to 4.5 hours had significantly higher rates of symptomatic intracerebral hemorrhage (1 extra hemorrhage for every 200 patients) and 3-month mortality (1 extra death for every 333 patients), as well as significantly worse functional outcomes (odds ratio for functional independence, 0.84). Median time from admission to treatment was 65 minutes before and after the reports. The authors conclude that the extended treatment window was implemented rapidly with no overall increase in admission-to-treatment time and that although risk from alteplase was greater when administered at 3 to 4.5 hours, treatment was still beneficial.
Comment: Although the U.S. FDA has not yet approved use of alteplase beyond 3 and up to 4.5 hours after onset of ischemic stroke symptoms, this evidence supports a wider treatment window and professional organizations recommend it. Nevertheless, time is brain, and eligible patients should be treated as soon as possible.
— Kristi L. Koenig, MD, FACEP. Published in Journal Watch Emergency Medicine August 27, 2010. Citation: Ahmed N et al. Lancet Neurol 2010 Sep; 9:866.
Abstract: http://www.ncbi.nlm.nih.gov/pubmed/20667790
5. Randomized Controlled Trial of Trimethoprim-Sulfamethoxazole for Uncomplicated Skin Abscesses in Patients at Risk for Community-Associated MRSA Infection
Schmitz GR, et al. Ann Emerg Med. 2010;56:283-287.
Abstract
STUDY OBJECTIVE: Community-associated methicillin-resistant Staphylococcus aureus is now the leading cause of uncomplicated skin abscesses in the United States, and the role of antibiotics is controversial. We evaluate whether trimethoprim-sulfamethoxazole reduces the rate of treatment failures during the 7 days after incision and drainage and whether it reduces new lesion formation within 30 days.
METHODS: In this multicenter, double-blind, randomized, placebo-controlled trial, we randomized adults to oral trimethoprim-sulfamethoxazole or placebo after uncomplicated abscess incision and drainage. Using emergency department rechecks at 2 and 7 days and telephone follow-up, we assessed treatment failure within 7 days, and using clinical follow-up, telephone follow-up, and medical record review, we recorded the development of new lesions within 30 days.
RESULTS: We randomized 212 patients, and 190 (90%) were available for 7-day follow-up. We observed a statistically similar incidence of treatment failure in patients receiving trimethoprim-sulfamethoxazole (15/88; 17%) versus placebo (27/102; 26%), difference 9%, 95% confidence interval -2% to 21%; P=.12. On 30-day follow-up (successful in 69% of patients), we observed fewer new lesions in the antibiotic (4/46; 9%) versus placebo (14/50; 28%) groups, difference 19%, 95% confidence interval 4% to 34%, P=.02.
CONCLUSION: After the incision and drainage of uncomplicated abscesses in adults, treatment with trimethoprim-sulfamethoxazole does not reduce treatment failure but may decrease the formation of subsequent lesions.
6. Serotonin Syndrome: Case Report and Review
Serotonin syndrome is a potentially life-threatening adverse drug reaction that results from therapeutic drug use (usually of selective serotonin reuptake inhibitors [SSRIs]), intentional overdose, or interactions between drugs. It is not an idiopathic reaction; it occurs because of excess serotonin activity in the brain and periphery. Serotonin syndrome was first described in 1959 in a patient with tuberculosis who received meperidine and died. The syndrome received its current name in 1982. Its significance was brought to attention in 1984 when Libby Zion, an 18-year-old college student, presented to a New York hospital with fever, agitation, and confusion. She was taking phenelzine, a monoamine-oxidase inhibitor (MAOI) antidepressant, and was given meperidine in the hospital for agitation. She became increasingly agitated and hyperpyrexic, reaching a temperature of 41.6°C, and died 6 hours later.
Epidemiology
Serotonin syndrome has been reported in all age groups. Its reported incidence has increased with the growing use of proserotonergic agents over the last few years. In 2004, the Toxic Exposure Surveillance System (TESS) reported 48,204 SSRI exposures that resulted in moderate or major outcomes in 8,187 patients, including 103 deaths. Serotonin syndrome has been reported to occur in approximately 15% of persons who overdose on SSRIs. The true incidence is difficult to assess because of lack of awareness of this clinical entity among physicians; indeed, one survey found that 85% of physicians were unaware of the syndrome.
The remainder of the text (free): http://www.springerlink.com/content/q382111247766427/fulltext.html
7. The Likelihood of Acute Pulmonary Embolism in ED Patients Is Not Lower Among Patients Taking Warfarin
Nordenholz K, et al. Acad Emerg Med. 2010;17(5) Supplement 1;S2 (abstract 4)
Background: Patients commonly require testing for pulmonary embolism (PE) while on warfarin for previous PE, deep venous thrombosis, or atrial fibrillation.
Objective: To determine whether, among emergency department (ED) patients evaluated for PE, the "rule-in" rates vary according to the presence or absence of warfarin and the international normalized ratio (INR) level.
Methods: A secondary analysis of a prospective, observational study of 7940 ED patients from 13 centers who had a diagnostic test for PE and for whom 45-day PE outcomes were known by record review and phone follow-up. Data were collected about warfarin use, INR, presence or absence of cancer, and diagnosis of acute PE. INR was defined as therapeutic (TR) (1.5-2.5), high (above 2.5), low (below 1.5), or not measured/not recorded (NM).
Results: The table shows the rule-in rate of PE among all patients and in strata defined by warfarin use and by INR level. The PE rate was not lower in patients taking warfarin, regardless of INR. The presence of active, metastatic, or inactive cancer was not an effect modifier and did not alter these findings. The data set is limited by testing bias and INRs NM.
Conclusions: Among ED patients with suspected PE who are taking warfarin, half do not have an INR recorded. However, a therapeutic or high INR is not associated with a lower likelihood of acute PE diagnosis and should not be used alone to withhold PE testing. Further investigation is warranted to discriminate this particular PE population.
8. Dilute proparacaine for the outpatient management of acute corneal injuries in the ED
Ball IM, et al. CJEM 2010;12(5):389-396
Objective: Dogma discourages the provision of topical anesthetics to patients with corneal injuries discharged from the emergency department because of the toxicity of concentrated solutions. We compared the analgesic efficacy of dilute topical proparacaine with placebo in emergency department patients with acute corneal injuries.
Methods: We conducted a prospective randomized controlled trial of adults with corneal injuries presenting to one of 2 tertiary care emergency departments in London, Ont. Patients were randomly assigned to groups receiving either 0.05% proparacaine or placebo drops as outpatients and were followed up to healing by a single ophthalmologist. Our primary outcome was pain reduction as measured on a 10-cm visual analog scale.
Results: Fifteen participants from the proparacaine group and 18 participants from the placebo group completed the study. The mean age of the patients was 38.7 (standard deviation 12.3) years and the majority were male (85%). Pain reduction was significantly better in the proparacaine group than in the placebo group, with a median improvement of 3.9 (interquartile range [IQR] 1.5–5.1) cm on the visual analog scale versus a median improvement of 0.6 (IQR 0.2–2.0) cm (p = 0.007). The proparacaine group was more satisfied (median level of satisfaction 8.0 [IQR 6.0–9.0] cm on a 10-cm visual analog scale v. 2.6 [IQR 1.0–8.0] cm, p = 0.027). There were no ocular complications or signs of delayed wound healing in either group.
Conclusion: Dilute topical proparacaine is an efficacious analgesic for acute corneal injuries. Although no adverse events were observed in our study population, larger studies are required to evaluate safety.
9. Bronchiolitis Illness Duration and Caregiver Burden: An Inconvenient Truth
The median duration of illness in infants with first-time bronchiolitis was 15 days.
Petruzella FD, Gorelick MH. Pediatrics. 2010;126:285-90.
OBJECTIVES: To describe the duration of illness in infants with first-time bronchiolitis who present to an emergency department (ED) and assess the burden of the illness on caregivers and families.
METHODS: This was a prospective cohort study of infants younger than 12 months who presented to a tertiary care children's hospital ED with a first episode of bronchiolitis. Subjects were excluded if they had a history of bronchodilator use or immunocompromise. Demographic and clinical data were collected in the ED. Outcomes data were collected by weekly telephone interviews for 4 weeks or until the subject was free of cough for 24 hours.
RESULTS: Ninety-five infants were enrolled from November 2007 to March 2008. Median duration of symptoms was 15 days; 25% of the infants remained symptomatic after 21 days. Subjects with a history of eczema trended toward a longer median duration of symptoms when compared with those who did not (18 days [interquartile range (IQR): 15.5-24] and 15 days [IQR: 11-19], respectively; P = .055). Duration of symptoms did not significantly vary with regards to respiratory syncytial virus status or secondhand smoke exposure. Subjects missed a median of 2.5 days (IQR: 0.5-5.5) of day care, and caregivers missed a median of 2 days (IQR: 1-4) of work. Of these infants, 37.1% (95% confidence interval: 24.3-44.1) had a subsequent unscheduled medical visit.
CONCLUSIONS: Infants seen in the ED for bronchiolitis have a prolonged disease course, with substantial burden to the family. Symptom duration may be influenced by a propensity toward atopy. Clinicians may use this information for counseling families.
10. Biphasic DC shocks for paediatric atrial dysrhythmias: Which dose is best?
Tibballs J, et al. Resuscitation 2010;81:1101-1104.
Objective
To determine cardioversion doses of biphasic DC shock for paediatric atrial dysrhythmias.
Design
Prospective recording of energy, pre-shock and post-shock rhythms.
Setting
Paediatric hospital.
Patients
Shockable atrial dysrhythmias.
Main results
Forty episodes of atrial dysrhythmias among 25 children (mean age 6.8±7.1 years, mean weight 28.2±28.5 kg) were treated with external shock. The first shock converted the dysrhythmia to sinus rhythm in 25 episodes. Cardioversion occurred in 2 of 8 (25%) episodes with a dose of less than 0.5 J/kg, 14 of 16 (88%) with a dose of 0.5–1.0 J/kg, and 9 of 16 (56%) with a dose above 1.0 J/kg (p=0.01, Fisher's exact test). Ten of 15 initially non-responsive episodes were cardioverted with additional shocks at 1.1±0.6 J/kg (range 0.5–2.1 J/kg). Of the remaining 5 unresponsive episodes, 2 of ventricular fibrillation (induced by unsynchronized shock) were successfully defibrillated, and 3 were managed with cardiopulmonary bypass. Among 11 additional children (mean age 4.3±6.8 years, mean weight 18.1±22.0 kg), 18 episodes of atrial dysrhythmias were treated with internal shock, which successfully cardioverted all episodes with one or more shocks at 0.4±0.2 J/kg.
Conclusions
In rounded doses, the recommended initial external cardioversion dose is 0.5–1.0 J/kg, with subsequent doses up to 2 J/kg; the recommended internal cardioversion dose is 0.5 J/kg.
11. Balancing Work, Family and Friends, and Lifestyle
Alpert JS. Amer J Med. 2010;123:775-776.
Discussing the balance in a person's life resembles conversations about managing stressful times. As I write this, my daughter, Eva, an accountant and tax attorney, is in the final phase of her stressful and very hectic “busy season” as the April 15th tax deadline draws near. Some people think of those stressful times as periods in which their work/life/family–friends balance is unbalanced, and personal and professional fulfillment seems very far away. The stresses in our lives often make us feel uncomfortable, nervous, and anxious. At these times we wish that our daily activities could meld better so that the stress-induced discomfort would fade away, and we would feel more balanced. Each individual has a different sense of what these entities entail, and each of us undoubtedly feels strongly about what causes stress in our life or how to balance work, play, and human relationships.
Recently, I was asked to deliver a lecture to a group of early career cardiologists attending the annual scientific sessions of the American Heart Association. It was unclear to me at the time why I was chosen for this task, and I was not convinced that my life represented the ideal of balanced work and lifestyle. Nevertheless, I searched the Internet for clues concerning advice about balancing one's life. In this essay, I will discuss some of the recommendations derived from this exercise in browsing, combined with some personal opinion based on experience. There is no doubt in my mind that others will disagree with my priorities, but, as I have already stated, each of us will have different concepts and beliefs on this topic.
…Professional healthcare workers have very demanding educational and work schedules. Family and close friends often fail to understand the work pressures involved in our field. Despite the great physical and psychological requirements in our daily routine, most of us hope to be seen as human beings who can simultaneously involve ourselves in our families, our jobs, and our community. Given these demands, how can we achieve balance in our lives without making major sacrifices that might lead to psychological distress, indeed, a form of posttraumatic stress syndrome? I cannot claim to have the perfect solution to this conundrum. However, I will outline some of the ways that have been suggested by others and approaches that I have found to be effective….
The remainder of the essay (free): http://www.amjmed.com/article/S0002-9343(10)00356-6/fulltext
12. Nasal foreign bodies in children: kissing works half the time
Taylor C, et al. Emerg Med J 2010;27:712-713.
Objective: To evaluate the use, success rate and time in the paediatric emergency department when employing the kissing technique to remove nasal foreign bodies from children.
Methods: The present work was a retrospective case note review for children attending with a nasal foreign body over a 15-month period.
Results: In all, 116 children had a confirmed nasal foreign body and 84 were treated by the kissing technique with a success rate of 48.8%. This group had lower rates of instrumentation (20.2% vs 53.1%) and general anaesthesia (11.9% vs 18.8%). The average time saved per patient who had the kissing technique attempted in the paediatric emergency department was 30.6 min.
Conclusion: The kissing technique should be employed as a preferred technique to remove nasal foreign bodies in children.
13. Urinalysis Is an Inadequate Screen for Rhabdomyolysis
Ruegner R, et al. Acad Emerg Med. 2010;17(5) Supplement 1;S45 (abstract 131)
Objectives: The classic screening test for rhabdomyolysis is a positive urine dip for blood, but negative microscopy. Our objective was to determine the sensitivity of the urinalysis (UA) in the diagnosis of rhabdomyolysis.
Methods: Retrospective electronic chart review of all patients with a primary or secondary discharge diagnosis of rhabdomyolysis admitted from 2003-2009 at a university-affiliated Level I trauma center. A creatine kinase (CK) over 1000 U/L and UA within 24 hours of presentation were also required for inclusion. Using a standardized data form, we collected demographic information, laboratory values (initial, peak, final), treatment, and disposition. For study purposes we defined a "positive" UA as dip positive for blood (trace or more) but negative microscopy (3 RBC or fewer). We calculated the sensitivity and 95% confidence interval (CI) of the UA in the detection of rhabdomyolysis.
Results: Of 1,796 records reviewed, 228 patients met all inclusion criteria. The mean age was 46 years (range 15-102) and 79% were males. Twenty-nine percent of the cases were associated with illicit drug use, 12% seizure, 11% sepsis, 4% trauma, 1% heat illness, and the remainder were unknown. The mean initial CK was 17,371 (range 56-278,000), and mean peak CK was 27,509 (range 1,166-404,700). One hundred and ninety-five (86%) had a urine dip positive for blood, but only 94 had the combination of a positive dip but negative microscopy, yielding a sensitivity of 41% (95% CI, 35%-47%) for detecting rhabdomyolysis. Broadening the definition of negative microscopy to less than 10 RBC would increase sensitivity to 79% (95% CI, 73%-83%). In the subset of 66 patients with more severe rhabdomyolysis and an initial CK greater than 10,000 (mean 53,365), the sensitivity of the UA was 55% (95% CI, 43%-67%). Overall mortality was 18%.
Conclusions: The combination of a positive urine dip for blood and negative microscopy is an insensitive screening test for rhabdomyolysis and should not be used alone to exclude the diagnosis.
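The reported sensitivity is simply the positive-UA count over all rhabdomyolysis cases, with a binomial confidence interval around that proportion. A sketch reproducing the calculation (the Wilson score interval is one common choice; the abstract does not state which method the authors used):

```python
from math import sqrt

def sensitivity_ci(tp, n, z=1.96):
    """Sensitivity (tp / n) with a Wilson score 95% confidence interval."""
    p = tp / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, center - half, center + half

# 94 of 228 rhabdomyolysis cases had a positive dip with negative microscopy.
p, low, high = sensitivity_ci(94, 228)
# p is about 0.41, and the interval is roughly 0.35 to 0.48,
# in line with the reported 41% (95% CI, 35%-47%).
```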
14. Delayed Sequence Intubation (DSI)
By Scott Weingart, MD. ACEP News, July 2010
The Case
You have a 50-year-old male with bad bilateral pneumonia; BP 108/70, HR 96, RR 28. He is delirious, agitated, and looks sick, sick, sick! Saturation is 70% on a nasal cannula; when you try to place the patient on a non-rebreather (NRB), he just swats your hand away and rips off the mask.
It is obvious to everyone in the room that this patient needs intubation -- but the question is, how are you going to do it?
Your first impulse may be to perform rapid sequence intubation (RSI), maybe with some bagging during the paralysis period. This is essentially a gamble.
If you have first-pass success with RSI, you (and your patient) may just luck out, allowing you to get the tube in and start ventilation before critical desaturation and the resultant hemodynamic instability.
However, the odds are against you: Bagging during RSI predisposes to aspiration; conventional bag valve mask (BVM) without a positive end expiratory pressure (PEEP) valve is unlikely to raise the saturation in this shunted patient; and if there is any difficulty in first-pass tube placement, your patient will be in a very bad place.
A Better Way
Sometimes patients such as this one who desperately require preoxygenation will impede its provision.
Hypoxia and hypercapnia can lead to a state of delirium, causing these patients to rip off their NRB or noninvasive ventilation (NIV) masks.
This delirium, combined with the falling oxygen saturation on the monitor, often leads to precipitous attempts at intubation without adequate preoxygenation.
Standard RSI consists of the simultaneous administration of a sedative and a paralytic agent and the provision of no ventilations until after endotracheal intubation. This sequence can be broken to allow for adequate preoxygenation without risking gastric insufflation or aspiration; we call this method "delayed sequence intubation" (DSI).
DSI consists of the administration of specific sedative agents, which do not blunt spontaneous ventilations or airway reflexes, followed by a period of preoxygenation before the administration of a paralytic agent.
Another way of thinking about the DSI method is to view it as a procedural sedation--with the procedure in this case being effective preoxygenation.
After the completion of this procedure, the patient can be paralyzed and intubated. Just as in a procedural sedation, we want patients to be calm but still spontaneously breathing and protecting their airway.
The rest of the essay: http://www.acep.org/acepnews.aspx?id=48971
15. Images in Emergency Medicine
Woman with Neck Pain and Swelling
http://www.annemergmed.com/article/S0196-0644(09)01546-7/fulltext
Asterixis
http://www.nejm.org/doi/full/10.1056/NEJMicm0911157
Miliary Tuberculosis
http://www.nejm.org/doi/full/10.1056/NEJMicm1001480
Acute maxillary sinusitis detected by bedside ED ultrasonography
http://www.springerlink.com/content/m4147580675138vh/fulltext.html
Pneumomediastinum from nasal insufflation of cocaine
http://www.springerlink.com/content/33077g51t88gk488/fulltext.html
Diffuse ST segment depression from hypothermia
http://www.springerlink.com/content/q488tq1168m0t773/fulltext.html
Isolated uvular angioedema: Quincke’s disease
http://www.springerlink.com/content/y137466n62k8k080/fulltext.html
A potentially fatal cause of rash
http://www.springerlink.com/content/y37017062041k375/fulltext.html
16. Managing anaemia in non-bleeding critically ill adults: state of the evidence
Walsh TS, et al. BMJ 2010; 341:c4408.
Introduction
Anaemia (haemoglobin below 12.0 g/dl for women, and below 13.0 g/dl for men) is common in acutely unwell patients. Maintaining sufficient oxygen transport to the tissues is fundamental to survival and recovery from acute illness, and in the United Kingdom 8-10% of the blood supply is used to treat patients in intensive care. Red blood cells transport more than 97% of the oxygen content of blood—about 200 ml/l—and anaemia greatly reduces oxygen delivery, especially if patients also have cardiovascular and respiratory compromise.
Transfusion of donor (allogeneic) red blood cells is the standard method for rapidly correcting anaemia in acutely unwell patients, but the risk-benefit balance of this intervention is a subject of continuing debate, controversy, and concern. We highlight uncertainties in the management of anaemia in critically ill patients, especially in relation to the use of red cells, and summarise current evidence from observational studies and randomised trials. We focus on the management of anaemia in critically ill patients without active bleeding, such as those who are in adult medical and surgical intensive care units, high dependency units, and other acute units. We do not discuss the management of patients with major haemorrhage, for which recent evidence is available elsewhere.
Summary points
Acute anaemia is common in critically ill patients
Several factors, including blood sampling and reduced red cell production associated with systemic inflammation, can contribute to anaemia
The risk-benefit profile for red cell transfusions to treat anaemia in non-bleeding critically ill adults is uncertain, but they may contribute to adverse patient outcomes in some situations
Best evidence suggests that using single unit red cell transfusions when haemoglobin is close to 7.0 g/dl and aiming for a haemoglobin of 7.0-9.0 g/dl is not harmful in most patients
Aiming for a haemoglobin nearer to 9.0-10.0 g/dl might be better for patients with acute cardiac disease and the early stages of severe sepsis
What haemoglobin concentration should trigger blood transfusion in critically ill patients?
The best evidence on what haemoglobin concentration should trigger transfusions in critically ill patients comes from a well performed non-blinded multicentre Canadian trial published in 1999 (the TRICC trial; summarised in box 3), which is widely considered the most important trial in transfusion medicine. Patients with a haemoglobin of 9.0 g/dl or less were randomised to either a relatively high haemoglobin transfusion trigger of less than 10.0 g/dl with a target of 10.0-12.0 g/dl (the “liberal” group) or a lower haemoglobin transfusion trigger of less than 7.0 g/dl with a target of 7.0-9.0 g/dl (the “restrictive” group). The findings strongly support using red cells only to maintain a haemoglobin concentration of 7.0-9.0 g/dl, especially in younger or less severely ill patients. The generalisability of these findings is unclear, however, and this might explain why clinical practice still varies. This trial has never been replicated in adult critical care, and a recent Cochrane systematic review noted the need for further trials.
Summary of key findings of the TRICC trial
Population
Non-bleeding critically ill patients whose haemoglobin value was 9.0 g/dl or less during the first three days in the intensive care unit
Intervention and comparator
The trial compared a restrictive strategy (haemoglobin transfusion trigger less than 7.0 g/dl; target value 7.0-9.0 g/dl) with a liberal strategy (haemoglobin transfusion trigger less than 10.0 g/dl; target value 10.0-12.0 g/dl) for managing anaemia with blood transfusions during the intensive care unit stay
Outcomes
The restrictive group received 54% fewer units of blood, and 33% of its patients received no blood transfusions in the intensive care unit, whereas all patients in the liberal group were transfused
The restrictive group showed a non-significant trend towards lower mortality (18.7% v 23.3%; P=0.11)
The restrictive group had lower rates of cardiac complications (13.2% v 21.0%) and new organ failures (difference in multiple organ dysfunction score of 1 between the groups)
The liberal group showed a trend towards higher rates of acute respiratory distress syndrome (11.4% v 7.7%)
Predefined subgroup analyses
Younger patients (below 55 years) and patients with lower illness severity during the first 24 hours in the intensive care unit (APACHE II score less than 20) had significantly better outcomes when they were in the restrictive group (these patients were more anaemic and received fewer blood transfusions)
Subgroup analyses that were not predefined (post hoc analyses)
There was a trend for patients with known ischaemic heart disease to have better outcomes in the liberal group
No differences in outcomes were seen in the subgroup of patients who were mechanically ventilated.
Uncertainties about the generalisability of the findings
The blood used was not leuco-depleted. Transfused leucocytes may have adverse effects in critically ill patients and most countries now routinely leuco-deplete blood before storage.
The storage age of the blood was unknown. Longer storage times might affect patient outcomes, especially if the blood was not leuco-depleted.
The study could not prove that the restrictive approach was safe for all patient subgroups, especially those with heart disease and sicker older patients.
Improvements in critical care and blood processing over the past decade might mean the findings would be different if the trial were repeated now.
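The mortality figures in the box lend themselves to a quick back-of-envelope calculation of effect size. The arithmetic below is ours, not the trial's, and the trial itself reported the mortality difference as non-significant, so treat these numbers as illustrative only:

```python
# Back-of-envelope effect sizes from the TRICC mortality figures above
# (restrictive 18.7% v liberal 23.3%). The trial reported this difference
# as non-significant (P=0.11), so these are illustrative, not proven effects.
restrictive, liberal = 0.187, 0.233

arr = liberal - restrictive   # absolute risk reduction, ~4.6 percentage points
rrr = arr / liberal           # relative risk reduction, ~20%
nnt = 1 / arr                 # number needed to treat, ~22

print(round(arr, 3), round(rrr, 2), round(nnt))
```

On these figures, roughly 22 patients would need to be managed restrictively to avoid one death, if the observed difference were real.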
Link: http://www.bmj.com/content/341/bmj.c4408
17. Which Oral Rehydration Solution Tastes Best?
Children preferred Pedialyte and Pediatric Electrolyte to Enfalyte.
Freedman SB, et al. Arch Pediatr Adolesc Med. 2010;164:696-702.
OBJECTIVE: To compare the palatability of 3 oral rehydration solutions.
DESIGN: Prospective, blinded, randomized, 3-period, 3-treatment crossover trial.
SETTING: Emergency department of a tertiary care pediatric hospital.
PARTICIPANTS: Sixty-six children aged 5 to 10 years with concerns unrelated to the gastrointestinal tract.
INTERVENTION: Each participant consumed as much of each solution as desired during a 15-minute period.
MAIN OUTCOME MEASURES: The primary outcome was each child's rating of taste as measured on a 100-mm visual analog scale (worst taste, 0 mm; best taste, 100 mm). Secondary outcome measures were volume consumed, willingness to consume each liquid again, and the most favored liquid.
RESULTS: All enrolled patients completed all 3 study periods. A significant carryover effect was detected for taste scores (P=.03), which were significantly different with and without adjustment for the carryover effect (P<.001). Unadjusted values were 65 mm for Pedialyte, 58 mm for Pediatric Electrolyte, and 23 mm for Enfalyte. Differences in mean volume consumed were not significant (Enfalyte, 15 mL; Pediatric Electrolyte, 17 mL; and Pedialyte, 22 mL [P=.44]). The proportion of children who would drink each solution in the future varied significantly between Enfalyte and Pediatric Electrolyte (odds ratio, 0.22; 95% confidence interval, 0.11-0.46) and between Enfalyte and Pedialyte (0.38; 0.25-0.57). There were differences in the identification of the best-tasting solution, with Pedialyte selected by 35 of 66 children (53%), Pediatric Electrolyte by 26 of 66 children (39%), and Enfalyte by 5 of 66 children (8%) (P<.001).
CONCLUSION: Sucralose-sweetened oral rehydration solutions (Pedialyte and Pediatric Electrolyte) were significantly more palatable than was a comparable rice-based solution (Enfalyte).
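As a quick sanity check (our arithmetic, not the authors'), the "best-tasting" percentages in the abstract can be reproduced from the raw counts:

```python
# Reproduce the "best-tasting" percentages from the raw counts reported
# in the abstract: 35, 26, and 5 of 66 children.
counts = {"Pedialyte": 35, "Pediatric Electrolyte": 26, "Enfalyte": 5}
n = 66

pct = {name: round(100 * votes / n) for name, votes in counts.items()}
print(pct)  # matches the abstract's 53%, 39%, and 8%
```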
18. ED Delirium Associated with Elevated 6-month Mortality
Han JH, et al. Ann Emerg Med. 2010;56:244-252.e1.
Background
Delirium occurs in 8% to 10% of older emergency department (ED) patients. This form of brain dysfunction is characterized by an acute change in cognition that cannot be better accounted for by a preexisting or evolving dementia. Approximately 1.5 million older patients with delirium will be evaluated in the ED each year in the United States, which is similar to the number of annual acute coronary syndromes, a disease with comparable mortality and morbidity. Despite the magnitude of this problem, emergency physicians miss delirium in up to 75% of cases.
Importance
The lack of delirium recognition may be a result of a dearth of ED outcomes data. In the predominantly in-hospital literature, delirium has been shown to be a marker for long-term death in both noncritically ill and critically ill inpatients. Unfortunately, hospital-based studies have limited generalizability to the ED population because they exclude patients discharged from the ED and enrollment typically occurs 24 to 48 hours after admission. In addition, many of these studies include patients who developed delirium during their hospital course, and a significant proportion of inpatients who were classified as having delirium may not have been delirious in the ED.
Even less is known about delirium's effect on mortality in nursing home patients treated in the ED, despite these patients being 7 times more likely to be delirious than community-dwelling elders. Conclusions derived from general geriatric data sets have limited external validity to nursing home patients because they have higher rates of dementia, poorer functional status, and higher comorbidity burden. There are approximately 1.5 million nursing home residents in the United States, and 43% of community-dwelling elders will enter a nursing home during their lifetime. Because up to 25% of all nursing home patients will visit the ED for an acute illness, understanding how delirium affects nursing home patients is crucial.
Goals of This Investigation
Because there is a paucity of data about the relationship between delirium in the ED and long-term mortality, and little is known about its effect on nursing home patients, we sought to determine whether delirium is an independent predictor of 6-month mortality in older ED patients, regardless of their admission status, and to assess whether this relationship is modified by nursing home residence.
What question this study addressed: This 628-patient prospective cohort study examined the association between delirium and 6-month mortality.
What this study adds to our knowledge: Mortality at 6 months was much higher among older adults with delirium (37% versus 14%).
How this might change clinical practice: Further research is required to determine whether earlier detection and treatment of delirium alters outcome.
Full-text (free): http://www.annemergmed.com/article/S0196-0644(10)00220-9/fulltext
19. Dantrolene in the treatment of MDMA-related hyperpyrexia: a systematic review
Grunau BE, et al. CJEM 2010;12(5):435-442
Objective: The use of dantrolene in the treatment of hyperpyrexia related to MDMA (3,4-methylenedioxymethamphetamine) is controversial, with little data available to guide clinical decision-making. Although the treatment is recommended by several poison control centres, published data are primarily in the form of case reports and animal and in vitro experiments. We conducted a systematic review to investigate the published evidence regarding the safety and benefits of dantrolene for MDMA-related hyperpyrexia in humans.
Data sources: A systematic search of Embase and MEDLINE was conducted from the earliest possible date to November 2008.
Study selection: All human trials and case reports of MDMA-related hyperpyrexia were considered.
Data extraction: Data were abstracted systematically and characteristics including use of dantrolene, adverse reactions attributed to dantrolene, peak temperature, complications from MDMA-related hyperpyrexia and survival were recorded.
Data synthesis: Our search yielded 668 articles of which 53, reporting 71 cases of MDMA-induced hyperpyrexia, met our inclusion criteria. No clinical trials, randomized controlled trials, observational studies or meta-analyses were identified. Dantrolene was used in 26 cases. Patient characteristics were similar in the dantrolene and no dantrolene groups. The proportion of survivors was higher in the dantrolene group (21/26) than in the no dantrolene group (25/45). This difference was especially pronounced in those with extreme (≥ 42°C) and severe (≥ 40°C) fever, with a survival rate of 8 of 13 and 10 of 10, respectively, in the dantrolene group compared with 0 of 4 and 15 of 27 in the no dantrolene group. There were no reports of adverse events attributable to dantrolene with the exception of a possible association with an episode of transient hypoglycemia.
Conclusion: Our systematic review suggests that dantrolene is safe for patients with MDMA-related hyperpyrexia. Dantrolene may also be associated with improved survival and reduced complications, especially in patients with extreme (≥ 42°C) or severe (≥ 40°C) hyperpyrexia, although this conclusion must be interpreted with caution given the risk of reporting or publication bias.
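The crude survival comparison can be reproduced from the counts in the abstract; the unadjusted odds ratio below is our own arithmetic, not an analysis from the paper:

```python
# Crude 2x2 survival comparison from the counts in the abstract:
# 21/26 survivors with dantrolene v 25/45 without.
surv_d, n_d = 21, 26      # dantrolene group
surv_nd, n_nd = 25, 45    # no-dantrolene group

p_d = surv_d / n_d        # ~81% survival with dantrolene
p_nd = surv_nd / n_nd     # ~56% survival without

# Unadjusted odds ratio for survival (dantrolene v no dantrolene)
odds_ratio = (surv_d * (n_nd - surv_nd)) / ((n_d - surv_d) * surv_nd)

print(round(p_d, 2), round(p_nd, 2), round(odds_ratio, 2))
```

An unadjusted odds ratio above 3 looks impressive, but, as the authors themselves stress, case-report data of this kind are highly prone to reporting and publication bias.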
See their related case report below.
Dantrolene for the treatment of MDMA toxicity
Grunau BE, et al. CJEM 2010;12(5):457-459
MDMA (3,4-methylenedioxymethamphetamine), popularly known as “Ecstasy,” was first introduced and patented by Merck & Co., Inc., in 1914 as an appetite suppressant. Currently, its primary role is as an illegal stimulant used to produce a euphoric effect during parties. This case report describes a 31-year-old man who, after taking 3 tablets of Ecstasy, presented to an emergency department with a decreased level of consciousness and became progressively hyperthermic and rigid. During the course of his acute illness, his temperature reached 42.2°C rectally. He was given mechanical ventilation. He was aggressively cooled and dantrolene was initiated. Soon after the administration of dantrolene his temperature decreased and his rigidity began to resolve. The only complication was rhabdomyolysis with a creatine kinase level increasing to over 150 µkat/L. This did not progress to acute renal failure. The patient made a full recovery and was discharged to psychiatry for assessment.
20. Delayed versus immediate defibrillation for out-of-hospital cardiac arrest due to ventricular fibrillation: A systematic review and meta-analysis of randomised controlled trials
Simpson PM, et al. Resuscitation 2010;81:925-931.
Background
Human studies over the last decade have indicated that delaying initial defibrillation to allow a short period of cardiopulmonary resuscitation (CPR) may produce a myocardial state more likely to respond to defibrillation, resulting in increased rates of restoration of spontaneous circulation (ROSC) and/or survival. Out-of-hospital studies have produced conflicting results regarding the benefits of CPR prior to defibrillation in relation to survival to hospital discharge. The aim of this study was to conduct a systematic review and meta-analysis of randomised controlled trials comparing the effect of delayed defibrillation preceded by CPR with immediate defibrillation on survival to hospital discharge.
Methods
A systematic literature search of key electronic databases including Medline, EMBASE, and the Cochrane Library was conducted independently by two reviewers. Randomised controlled trials meeting the eligibility criteria were critically appraised according to the Cochrane Group recommended methodology. Meta-analyses were conducted for the outcomes of survival to hospital discharge overall and according to response time of emergency medical services.
Results
Three randomised controlled trials were identified which addressed the question of interest. All included studies were methodologically appropriate to include in a meta-analysis. Pooled results from the three studies demonstrated no benefit from providing CPR prior to defibrillation compared to immediate defibrillation for survival to hospital discharge (OR 0.94; 95% CI 0.46–1.94). Meta-analysis of results according to ambulance response time (less than 5 min or more than 5 min) also showed no difference in survival rates.
Conclusion
Delaying initial defibrillation to allow a short period of CPR in out-of-hospital cardiac arrest due to VF demonstrated no benefit over immediate defibrillation for survival to hospital discharge irrespective of response time. There is no evidence that CPR before defibrillation is harmful. Based on the existing evidence, EMS jurisdictions are justified in continuing with current practice using either defibrillation strategy.
21. Meaningful Use of EHRs Will Become Part of Specialty Board Recertification
August 16, 2010 — The federal government wants physicians to become "meaningful users" of electronic health records (EHR) — so much so that it will start giving them cash bonuses in 2011 to go digital.
Becoming a meaningful EHR user also will pay another dividend in the future — helping physicians get recertified by their specialty boards.
Earlier this month, the American Board of Medical Specialties (ABMS) announced it would develop tools to promote meaningful EHR use and incorporate them into its Maintenance of Certification (MOC) program, which its 24 member boards use to push physicians toward lifelong learning and self-assessment. The initiative, at least in its preliminary form, does not baldly make EHR use a condition of recertification. Rather, it takes its cue from the federal government's bonus program and encourages EHR adoption with carrots rather than sticks, said Sheldon Horowitz, MD, special advisor to ABMS President Kevin Weiss, MD, MPH.
"We're trying to make our program attractive," Dr. Horowitz told Medscape Medical News. "We want to help physicians do well with meaningful use and patient care."
Dr. Horowitz said that the objectives of meaningful EHR use — which include improving the quality, safety, and coordination of patient care — overlap with the 6 core professional competencies that the MOC program tries to measure. Among these competencies are a physician's ability to scientifically assess and improve his or her own practice of medicine, as well as operate within the entire healthcare system (ie, coordination of care).
Last month, the Centers for Medicare and Medicaid Services (CMS) published its final version of meaningful-use requirements that physicians must meet to earn incentive money — up to $44,000 under Medicare, or almost $64,000 under Medicaid. Fifteen of the 25 requirements, such as prescribing electronically and using drug interaction and allergy alerts, are mandatory. Of the 10 other requirements, physicians may choose 5 to meet.
Simulated EHR Exercises Are Coming
In its effort to take board recertification into the EHR era, the ABMS will initially work with the American Board of Family Medicine, the American Board of Internal Medicine, and the American Board of Pediatrics — 3 groups that have certified 369,000 physicians. As one of their first projects, the boards will develop 2 self-assessment modules on healthcare information technology, said Dr. Horowitz.
One module will test a physician's knowledge of how to use the technology to manage and analyze patient data, practice evidence-based medicine, and make the right calls on diagnosis and treatment with "decision support." The other module will focus on promoting patient safety through such EHR functions as electronic prescribing and computerized physician order entry.
Dr. Horowitz said the 2 modules will be optional, not required. However, by completing these particular modules, physicians will receive more points toward recertification than they would with other modules.
The remainder of the article: http://www.medscape.com/viewarticle/726967
Monday, September 13, 2010