From the recent medical literature...
1. Some Guidance Questioned on Managing Cardiac Arrest in Pregnancy
NEW YORK (Reuters Health) May 23 - A recommended maneuver in resuscitating pregnant women in cardiac arrest may be counterproductive, according to the authors of a systematic review of the topic.
Furthermore, they found that perimortem Cesarean section as a last resort is not being utilized optimally in this situation.
Writing in Resuscitation online May 9, Dr. Laurie J. Morrison at the University of Toronto in Ontario, Canada, and colleagues note that cardiac arrest occurs in about 1 in 20,000 pregnancies. Although rare, the incidence is 10 times higher than the rate of cardiac arrest in young athletes.
There are published recommendations for managing cardiac arrest in pregnancy, the authors note, but these are based on "very little science." One recommendation is to place the woman in a left lateral tilt during chest compressions to relieve aortocaval compression. Another guideline indicates that perimortem Cesarean section should be performed within 5 minutes of maternal cardiac arrest.
In conducting a review of the literature on these issues, the team identified three studies dealing with resuscitation technique and two with perimortem C-section.
These data indicated that it is feasible to apply chest compressions with the patient tilted to the left, but the compression force is only about 80% of that delivered in the supine position. Given that, and considering the time spent positioning the patient, the team believes that a better strategy is to displace the uterus leftward manually while the patient is supine during chest compressions.
As for defibrillation, the evidence supports usual energy settings, since transthoracic impedance isn't changed in pregnancy, according to the report.
Regarding the performance of perimortem Cesareans, a series of 38 cases showed that it was undertaken within 5 minutes in only eight instances. Nonetheless, 17 infants survived without sequelae.
Furthermore, in 22 cases with sufficient information, "twelve women had sudden and often dramatic improvement in their clinical status immediately after the uterus was emptied including a return of the pulse and blood pressure," Dr. Morrison and colleagues report.
Summing up, they conclude, "Perimortem cesarean section is rarely done within 5 min from cardiac arrest. Maternal and neonatal survival has been documented with the use of perimortem cesarean section; however, there is not enough information about its optimal use."
They continue, "Chest compressions in a left lateral tilt from the horizontal are feasible but less forceful compared to the supine position, and there are good theoretical arguments to use left lateral uterine displacement rather than lateral tilt from the horizontal during maternal resuscitation."
The authors suggest that an international registry would help frame future recommendations for managing cardiac arrest in pregnancy.
Abstract in Resuscitation: http://dx.doi.org/10.1016/j.resuscitation.2011.01.028
2. Hypotensive Resuscitation in Trauma Patients Lessens Transfusion Needs
. . . and reduces incidence of postoperative coagulopathy and associated death.
These authors report an interim analysis of the first prospective randomized trial of intraoperative hypotensive resuscitation in patients. At a single level I trauma center, 90 patients with at least one episode of in-hospital systolic blood pressure of 90 mm Hg or lower who were undergoing laparotomy or thoracotomy for blunt (6 patients) or penetrating (84) trauma were randomized at entry to the operating room to have their mean arterial pressure (MAP) maintained at a target minimum of 50 mm Hg (low MAP) or 65 mm Hg (high MAP). Methods of achieving target levels were at the discretion of the anesthesiologist. MAPs that rose above the target were not lowered.
The low-MAP group received a significantly smaller amount of blood products (packed red blood cells, fresh frozen plasma, platelets) than the high-MAP group (1594 mL vs. 2898 mL) and had significantly lower mortality within 24 hours of admission to the intensive care unit (2.3% vs. 17.4%) and significantly lower mortality due to coagulopathy-associated postoperative hemorrhage (0 of 6 vs. 7 of 10). Mortality at 30 days did not differ significantly between the two groups (23% and 28%, respectively).
Comment: This interim analysis suggests that maintaining a low MAP during intraoperative resuscitation in seriously ill trauma patients is safe, reduces use of blood products, and decreases the incidence of postoperative coagulopathy and the related consequence of death. If these promising findings hold in the final analysis, similar approaches should be undertaken in the field and the emergency department.
— John A. Marx, MD, FAAEM. Published in Journal Watch Emergency Medicine May 6, 2011. Citation: Morrison CA et al. Hypotensive resuscitation strategy reduces transfusion requirements and severe postoperative coagulopathy in trauma patients with hemorrhagic shock: Preliminary results of a randomized controlled trial. J Trauma 2011 Mar; 70:652.
3. Absolute Lymphocyte Count Below 950 in the ED Predicts a Low CD4 Count in Admitted HIV-positive Patients
Napoli AM, et al. Acad Emerg Med. 2011;18:385–389.
Objectives: This study sought to determine if the automated absolute lymphocyte count (ALC) predicts a “low” (below 200 × 10⁶ cells/μL) CD4 count in patients with known human immunodeficiency virus (HIV+) who are admitted to the hospital from the emergency department (ED).
Methods: This retrospective cohort study over an 8-year period was performed in a single, urban academic tertiary care hospital with over 85,000 annual ED visits. Included were patients who were known to be HIV+ and admitted from the ED, who had an ALC measured in the ED and a CD4 count measured within 24 hours of admission. Back-transformed means and confidence intervals (CIs) were used to describe CD4 and ALC levels. The primary outcome was to determine the utility of an ALC threshold for predicting a CD4 count of below 200 × 10⁶ cells/μL by assessing the strength of association between log-transformed ALC and CD4 counts using a Pearson correlation coefficient. In addition, area under the receiver operating characteristic curve (AUC) and a decision plot analysis were used to calculate the sensitivity, specificity, and the positive and negative likelihood ratios to identify prespecified optimal clinical thresholds of a likelihood ratio below 0.1 and above 10.
Results: A total of 866 patients (mean age 42 years, 40% female) met inclusion criteria. The transformed means (95% CIs) for CD4 and ALC were 34 (31–38) and 654 (618–691), respectively. There was a significant relationship between the two measures, r = 0.74 (95% CI = 0.71 to 0.77, p less than 0.01). The AUC was 0.92 (95% CI = 0.90 to 0.94, p less than 0.001). An ALC below 1700 × 10⁶ cells/μL had a sensitivity of 95% (95% CI = 93% to 96%), specificity of 52% (95% CI = 43% to 62%), and negative likelihood ratio of 0.09 (95% CI = 0.05 to 0.2) for a CD4 count below 200 × 10⁶ cells/μL. An ALC less than 950 × 10⁶ cells/μL had a sensitivity of 76% (95% CI = 73% to 79%), specificity of 93% (95% CI = 87% to 96%), and positive likelihood ratio of 10.1 (95% CI = 8.2 to 14) for a CD4 count below 200 × 10⁶ cells/μL.
Conclusions: Absolute lymphocyte count was predictive of a CD4 count below 200 × 10⁶ cells/μL in HIV+ patients presenting to the ED who require hospital admission. A CD4 count below 200 × 10⁶ cells/μL is very likely if the ED ALC is below 950 × 10⁶ cells/μL and less likely if the ALC is 1,700 × 10⁶ cells/μL or above. Depending on pretest probability, clinical use of this relationship may help emergency physicians predict the likelihood of susceptibility to opportunistic infections and may help identify patients who should receive definitive CD4 testing.
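As an illustration of how these likelihood ratios might be applied at the bedside, here is a minimal worked sketch in Python; the 40% pretest probability is hypothetical and chosen only for the example, while the likelihood ratios are those reported above.

def posttest_probability(pretest_prob, likelihood_ratio):
    # Convert probability to odds, apply the likelihood ratio, convert back.
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# Hypothetical admitted HIV+ patient with a 40% pretest probability of a CD4 count below 200:
print(posttest_probability(0.40, 10.1))  # ALC below 950 (LR+ 10.1): posttest probability ~0.87
print(posttest_probability(0.40, 0.09))  # ALC 1,700 or above (LR- 0.09): posttest probability ~0.06

In this hypothetical patient, a low ALC raises the probability of a CD4 count below 200 from 40% to roughly 87%, while a higher ALC lowers it to roughly 6%.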
4. Needle and guidewire visualization in ultrasound-guided IJ vein cannulation
Moak JH, et al. Amer J Emerg Med. 2011;29:432-436.
Study objective: Reimbursement for ultrasound-guided central lines requires documenting the needle entering the vessel lumen. We hypothesized that physicians often successfully perform ultrasound-guided internal jugular (IJ) cannulation without visualizing the needle in the lumen and that guidewire visualization occurs more frequently.
Methods: This prospective, observational study enrolled emergency physicians performing ultrasound-guided IJ cannulations over an 8-month period. Physicians reported sonographic visualization of the needle or guidewire and recorded DVD images for subsequent review. Outcome measures were the proportion of successful procedures in which the operator reported seeing the needle or guidewire in the vessel lumen and the proportion of successful, recorded procedures in which a reviewer noted the same findings. Procedures were deemed successful when functioning central venous catheters were placed. Fisher exact test was used for comparisons.
Results: Of 41 attempted catheterizations, 35 (85.4%) were successful. Eighteen of these were recorded on DVD for review. The operator reported visualizing the needle within the vessel lumen in 23 (65.7%) of 35 successful cannulations (95% confidence interval [CI], 47.7%-80.3%). In 27 cases, the operator attempted to view the guidewire and reported doing so in 24 cases (88.9%; 95% CI, 69.7%-97.1%). On expert review, the needle was seen penetrating the vessel lumen in 1 (5.6%) of 18 cases (95% CI, 0.3%-29.4%). Among recorded procedures in which the operator also attempted wire visualization, the reviewer could identify the wire within the vessel lumen in 12 (75.0%) of 16 cases (95% CI, 47.4%-91.7%).
Conclusions: During successful ultrasound-guided IJ cannulation, physicians can visualize the guidewire more readily than the needle.
5. Physicians Often Can't Predict Who Will Misuse Pain Meds
Allison Gandey. May 27, 2011 (Austin, Texas) — Prescribers trying to determine who will abuse pain medications are wrong about half the time, a new study shows.
Presenting here at the American Pain Society 30th Annual Scientific Meeting, researchers found that the ability of physicians to correctly predict at-risk patients was only slightly better than chance.
Investigators in this industry-funded study looked at 549 patients from 50 practices. Clinicians were asked to identify which patients they thought were at risk for medication misuse and those who weren't, using standard risk assessment methods.
Researchers then compared these results to urine drug tests. Physicians' best guesses were most accurate for patients they believed to be misusing their medications. However, in the group thought to be compliant, clinicians missed 60% of patients who went on to have an abnormal urine drug test.
Misuse was confirmed if illicit drugs were present or if prescribed medications were absent in the urine sample.
Table 1. Risk Assessments of Patients Receiving Long-Term Opioid Therapy
Physician Evaluation                          | Predicted Correctly (%)
Patients thought to be misusing (n = 173)     | 72
Patients believed to be not at risk (n = 204) | 40
Investigators included a third random group in which no physician risk assessments were performed (n = 172). Most of these patients, 61%, had a normal test result; however, 30% were misusing drugs.
The team led by Harry Leider, MD, chief medical officer at Ameritox, a company that provides pain medication monitoring, concludes that all patients receiving long-term opioid therapy should have urine drug testing to identify those who may be misusing their medications.
The current 2009 American Pain Society guidelines call for urine analysis, but not in all cases.
In the US Food and Drug Administration's long-awaited opioid plan unveiled last month, regulators opted to focus on new education programs for prescribers. The opioid Risk Evaluation and Mitigation Strategy (REMS) will require drug makers to provide and pay for the plan, although the training is not mandatory for prescribers.
American Pain Society (APS) 30th Annual Scientific Meeting: Posters 111 and 119. Presented May 19, 2011.
6. CT Evaluation for Blunt Trauma Exposes Children to Dangerous Radiation Levels
Radiation doses to the thyroid were at levels associated with increased cancer risk in 71% of patients. Mueller DL et al. J Trauma 2011;70:724
Background: Increased utilization of computed tomography (CT) scans for evaluation of blunt trauma patients has resulted in increased doses of radiation to patients. Radiation dose is relatively amplified in children secondary to body size, and children are more susceptible to long-term carcinogenic effects of radiation. Our aim was to measure radiation dose received in pediatric blunt trauma patients during initial CT evaluation and to determine whether doses exceed doses historically correlated with an increased risk of thyroid cancer.
Methods: A prospective cohort study of patients aged 0 years to 17 years was conducted over 6 months. Dosimeters were placed on the neck, chest, and groin before CT scanning to measure surface radiation. Patient measurements and scanning parameters were collected prospectively along with diagnostic findings on CT imaging. Cumulative effective whole body dose and organ doses were calculated.
Results: The mean number of scans per patient was 3.1 ± 1.3. Mean whole body effective dose was 17.43 mSv. Mean organ doses were thyroid 32.18 mGy, breast 10.89 mGy, and gonads 13.15 mGy. Patients with selective CT scanning defined as ≤2 scans had a statistically significant decrease in radiation dose compared with patients with more than 2 scans.
Conclusions: Thyroid doses in 71% of study patients fell within the dose range historically correlated with an increased risk of thyroid cancer and whole body effective doses fell within the range of historical doses correlated with an increased risk of all solid cancers and leukemia. Selective scanning of body areas as compared with whole body scanning results in a statistically significant decrease in all doses.
7. Is CTA a Resource Sparing Strategy in the Risk Stratification and Evaluation of Acute Chest Pain? Results of a Randomized Controlled Trial
Miller AH, et al. Acad Emerg Med. 2011;18:458–467.
Objectives: Annually, almost 6 million U.S. citizens are evaluated for acute chest pain syndromes (ACPSs), and billions of dollars in resources are utilized. A large part of the resource utilization results from precautionary hospitalizations that occur because care providers are unable to exclude the presence of coronary artery disease (CAD) as the underlying cause of ACPSs. The purpose of this study was to examine whether the addition of coronary computerized tomography angiography (CCTA) to the concurrent standard care (SC) during an index emergency department (ED) visit could lower resource utilization when evaluating for the presence of CAD.
Methods: Sixty participants were assigned randomly to SC or SC + CCTA groups. Participants were interviewed at the index ED visit and at 90 days. Data collected included demographics, perceptions of the value of accessing health care, and clinical outcomes. Resource utilization included services received from both the primary in-network and the primary out-of-network providers. The prospectively defined primary endpoint was the total amount of resources utilized over a 90-day follow-up period when adding CCTA to the SC risk stratification in ACPSs.
Results: The mean (± standard deviation [SD]) for total resources utilized at 90 days for in-network plus out-of-network services was less for the participants in the SC + CCTA group ($10,134; SD ±$14,239) versus the SC-only group ($16,579; SD ±$19,148; p = 0.144), as was the median for the SC + CCTA ($4,288) versus SC only ($12,148; p = 0.652; median difference = –$1,291; 95% confidence interval [CI] = –$12,219 to $1,100; p = 0.652). Among the 60 total study patients, only 19 had an established diagnosis of CAD at 90 days. However, 18 (95%) of these diagnosed participants were in the SC + CCTA group. In addition, there were fewer hospital readmissions in the SC + CCTA group (6 of 30 [20%] vs. 16 of 30 [53%]; difference in proportions = –33%; 95% CI = –56% to –10%; p = 0.007).
Conclusions: Adding CCTA to the current ED risk stratification of ACPSs resulted in no difference in the quantity of resources utilized, but an increased diagnosis of CAD, and significantly less recidivism and rehospitalization over a 90-day follow-up period.
8. Diagnostic Tests: Another Frontier for Less Is More
Or Why Talking to Your Patient Is a Safe and Effective Method of Reassurance
Redberg R, et al. Arch Intern Med. 2011;171:619.
During the past several months, the Archives has published a number of articles demonstrating that more treatment is not necessarily better. Proton pump inhibitors for persons with nonulcer dyspepsia, opioid medications for persons with chronic nonmalignant pain, and statin medications for persons without coronary artery disease are all examples of the widespread use of medications with known adverse effects despite the absence of data for patient benefit for these indications. Epitomizing the theory of less is more, one study found that 58% of medications could be discontinued in elderly patients and that quality of life improved with drug discontinuation.1
Diagnostic testing represents another important example of less is more. Often, diagnostic tests are ordered without questioning how the result will or should change patient treatment. Instead, tests are ordered to "reassure," "just to be sure," "just in case," or "just to know." Among the problems, beyond the waste of resources, is that the likelihood ratios of many commonly ordered tests are not high enough to rule diagnoses in or out accurately in real world settings. The result is that abnormal findings on one unneeded test often require another more invasive test or treatment.2 This cascade can result in a patient sustaining serious adverse effects from a simple blood test or imaging procedure that may have been thought to carry no risk at all. For example, the risk of harm caused by performing a D-dimer assay in a patient with a low probability of venous thrombosis may be thought to be limited to the minute risk of venipuncture, unless, of course, an elevated D-dimer level results in a computed tomographic (CT) scan to rule out a pulmonary embolism (radiation), and a false-positive reading of the CT scan results in unnecessary anticoagulation, and anticoagulation results in gastrointestinal bleeding, and so on.
The case described by Becker et al3 in this issue of the Archives illustrates dramatically and tragically why we must reevaluate how we look at diagnostic tests. In her physicians' good-faith attempts to "reassure" this middle-aged woman that she did not have heart disease, they ordered a cardiac CT scan. This test, with known adverse effects, was unnecessary because the woman's prior probability of having serious cardiac disease was too low for a positive result to change her clinical treatment. Yet, sadly, the test was given more credit than it deserved, with the result that a healthy woman with a normal heart required a heart transplant, hardly the reassuring outcome that was intended.
Hindsight is, of course, 20/20. But applying the "Less is More" principles prospectively could have avoided this unfortunate case. Less is more means that if a test is not sufficiently accurate to change clinical management in a particular setting, it should not be done. Many patients now receive a battery of tests in an emergency department or a screening facility even before they are seen by a clinician. Given that these tests are ordered with no assessment of the likelihood that the patient has or does not have any particular disease, test results are unlikely to be helpful, and can be harmful. Proper ordering and sequencing of tests and imaging require assessing the likelihood that a patient has specific conditions before a test is ordered as well as understanding the accuracy of the test and how it will change clinical management (Bayesian thinking).
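To make the Bayesian point concrete, the short Python sketch below uses hypothetical numbers (not drawn from the case described): even a reasonably accurate test, ordered at a very low pretest probability, produces mostly false-positive results.

def positive_predictive_value(pretest_prob, sensitivity, specificity):
    # Probability that a positive result is a true positive (Bayes' theorem).
    true_positives = sensitivity * pretest_prob
    false_positives = (1.0 - specificity) * (1.0 - pretest_prob)
    return true_positives / (true_positives + false_positives)

# Hypothetical test that is 90% sensitive and 85% specific, ordered "just to be sure"
# in a patient with a 2% pretest probability of disease:
print(positive_predictive_value(0.02, 0.90, 0.85))  # ~0.11

In this hypothetical setting, roughly nine of ten "abnormal" results would be false alarms, each one inviting exactly the downstream cascade the authors describe.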
Perhaps the most important point to be learned from the case described by Becker and colleagues is that there are safer ways to reassure patients. Physicians are (still) highly respected professionals, and patients value our advice. Talking with our patients should be our first choice for reassurance; tests should be reserved for cases in which the benefits can be reasonably expected to outweigh the risks. This case reminds us that no test (not even a noninvasive one) is benign, and often less is more.
9. Lidocaine gel as an anesthetic protocol for nasogastric tube insertion in the ED
Uri O, et al. Amer J Emerg Med. 2011;29:386-390.
Objective
The aim of the study was to evaluate the efficacy of topical 2% lidocaine gel in reducing pain and discomfort associated with nasogastric tube insertion (NGTI) and to compare lidocaine with ordinary lubricant gel with respect to ease of carrying out the procedure.
Methods
This prospective, randomized, double-blind, placebo-controlled, convenience sample trial was conducted in the emergency department of our tertiary care university-affiliated hospital. Five milliliters of 2% lidocaine gel or placebo lubricant gel were administered nasally to alert, hemodynamically stable adult patients 5 minutes before undergoing a required NGTI. The main outcome measures were overall pain, nasal pain, discomfort (eg, choking, gagging, nausea, vomiting), and difficulty in performing the procedure. Standard comparative statistical analyses were used.
Results
The study cohort included 62 patients (65% males). Thirty-one patients were randomized to each group (lidocaine or placebo). Patients who received lidocaine reported significantly less intense overall pain associated with NGTI compared to those who received placebo (37 ± 28 mm vs 51 ± 26 mm on 100-mm visual analog scale; P less than .05). The patients receiving lidocaine also had significantly reduced nasal pain (33 ± 29 mm vs 48 ± 27 mm; P less than .05) and significantly reduced sensation of gagging (25 ± 30 mm vs 39 ± 24 mm; P less than .05). However, conducting the procedure was significantly more difficult in the lidocaine group (2.1 ± 0.9 vs 1.4 ± 0.7 on 5-point Likert scale; P less than .05).
Conclusion
Lidocaine gel administered nasally 5 minutes before NGTI significantly reduces pain and gagging sensations associated with the procedure but is associated with more difficult tube insertion compared to the use of lubricant gel.
10. Optic Nerve Ultrasound Predicts Elevated Intracranial Pressure
In a small meta-analysis, ultrasound measurement of optic nerve sheath diameter had a sensitivity of 90% for predicting elevated ICP.
Bedside emergency department ocular ultrasound is increasingly used to detect retinal detachment, but does it also have other uses? Researchers performed a meta-analysis of six prospective studies (231 patients) that compared intracranial pressure (ICP) monitoring with ultrasound measurement of optic nerve sheath diameter (ONSD) in consecutive adult patients with suspected elevated ICP. ONSD was measured 3 mm behind the globe; ICP and ONSD measurements were performed within 1 hour of each other.
The pooled sensitivity for ONSD detection of elevated ICP was 90% and the pooled specificity was 85%. The pooled diagnostic odds ratio was 51, meaning that the odds of a positive ONSD test were 51 times higher in patients with elevated ICP than in those without elevated ICP.
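As a back-of-the-envelope check (a sketch using only the pooled figures quoted above, not additional data from the meta-analysis), the diagnostic odds ratio can be reconstructed from the pooled sensitivity and specificity:

# Diagnostic odds ratio = odds of a positive test with disease / odds of a positive test without disease.
sensitivity, specificity = 0.90, 0.85
diagnostic_odds_ratio = (sensitivity / (1.0 - sensitivity)) / ((1.0 - specificity) / specificity)
print(round(diagnostic_odds_ratio))  # 51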
Comment: With a 90% sensitivity for ruling out elevated intracranial pressure, bedside ultrasound measurement of optic nerve sheath diameter shows promise as a new tool to guide decision making, including prioritizing patients for diagnostic studies and determining whether computed tomography is needed before an unstable polytrauma patient is taken to the operating room.
— Kristi L. Koenig, MD, FACEP. Published in Journal Watch Emergency Medicine May 20, 2011
Citation: Dubourg J et al. Ultrasonography of optic nerve sheath diameter for detection of raised intracranial pressure: A systematic review and meta-analysis. Intensive Care Med 2011 Apr 20; [e-pub ahead of print].
11. ED Septic Shock Protocol and Guideline for Children May Improve Care
Laurie Barclay, MD. May 24, 2011 — Implementing an emergency department (ED) septic shock protocol and care guideline for children improves compliance with fluid resuscitation and early antibiotic and oxygen administration, according to the results of a study reported online May 16 that will also appear in the June print issue of Pediatrics.
"Unrecognized and undertreated septic shock increases morbidity and mortality," write Gitte Y. Larsen, MD, MPH, from Pediatric Critical Care, Department of Pediatrics, Primary Children's Medical Center in Salt Lake City, Utah, and colleagues. "Septic shock in children is defined as sepsis and cardiovascular organ dysfunction, not necessarily with hypotension."
The hypothesis tested by this study was that a septic shock protocol and care guideline would facilitate early identification of septic shock, improve compliance with recommended treatment, and improve patient outcomes.
The investigators reviewed cases of unrecognized and undertreated septic shock in the ED of their institution, highlighting increased recognition at triage and more aggressive therapy once septic shock was identified.
From January 2005 to December 2009, the investigators studied the effect of an ED septic shock protocol and care guideline, which they had developed to improve recognition beginning at triage, in all eligible ED patients. Of 345 pediatric ED patients identified, 49% were boys, and 297 (86.1%) met septic shock criteria at triage. Median age was 5.6 years; 196 patients (56.8%) had at least 1 chronic complex condition.
The most frequently observed signs of septic shock were skin color changes in 269 patients (78%) and tachycardia in 251 patients (73%); 120 patients (34%) had hypotension. During the study period, the median hospital length of stay decreased from 181 to 140 hours (P less than .05). However, mortality rate did not change during the study period (average 6.3%; 22/345). The most dramatic improvements in care were more complete recording of triage vital signs, timely fluid resuscitation and initiation of antibiotic treatment, and serum lactate measurements.
"Implementation of an ED septic shock protocol and care guideline improved compliance in delivery of rapid, aggressive fluid resuscitation and early antibiotic and oxygen administration and was associated with decreased length of stay," the study authors write.
Limitations of this study include failure to assess shock reversal for individual patients, inability to determine causality, lack of randomization, unrecognized factors that may have affected hospital length of stay or mortality, and inability to separately evaluate the effect of individual components of the care guidelines.
"Consistent successful treatment of septic shock cannot begin in the ICU [intensive care unit] for patients who present to the ED in shock; it must begin at the time of triage in the ED," the study authors conclude. "Early recognition and treatment of septic shock benefits all ED patients, because the effort to recognize early shock leads to a more meticulous patient assessment from the initial encounter. We developed a septic shock protocol and care guideline that led to improved compliance in delivery of rapid, aggressive fluid resuscitation, early antibiotic and oxygen administration, and decreased hospital LOS [length of stay]."
The study authors have disclosed no relevant financial relationships.
Pediatrics. Published online May 16, 2011.
12. ED Abnormal Vital Sign “Triggers” Program Improves Time to Therapy
McGillicuddy DC, et al. Acad Emerg Med. 2011;18:483-487.
Background: Implementation of rapid response systems to identify deteriorating patients in the inpatient setting has demonstrated improved patient outcomes. A “trigger” system using vital sign abnormalities to initiate evaluation by physician was recently described as an effective rapid response method.
Objectives: The objective was to evaluate the effect of a triage-based trigger system on the primary outcome of time to physician evaluation and the secondary outcomes of therapeutic intervention, antibiotics, and disposition in emergency department (ED) patients.
Methods: A separate-samples pre- and postintervention study was conducted using retrospective chart review of outcomes in ED patients for three arbitrarily selected 5-day periods in 2007 (pretriggers) and 2008 (posttriggers). There were 2,165 and 2,212 charts in the pre- and posttriggers chart review, with 71 and 79 patients meeting trigger criteria. Trigger criteria used to identify patients at triage were: heart rate below 40 or above 130 beats/min, respiratory rate below 8 or above 30 breaths/min, systolic blood pressure below 90 mm Hg, and oxygen saturation less than 90% on room air. Median times (in minutes) were compared between pre- and posttrigger groups with interquartile ranges (IQRs 25–75), with the Wilcoxon rank sum test used to determine statistical significance.
Results: Overall median times were decreased in the posttriggers group. Reductions in median times to physician evaluation (21 minutes [IQR = 13–41 minutes] vs. 11 minutes [IQR = 5–21 minutes]; p less than 0.001), first intervention (58 minutes [IQR = 20–139 minutes] vs. 26 minutes [IQR = 11–71 minutes]; p less than 0.01), and antibiotics (110 minutes [IQR = 74–171 minutes] vs. 69 minutes [IQR = 23–130 minutes]; p less than 0.01) were statistically significant; the difference in median time to disposition (177 minutes [IQR = 121–303 minutes] vs. 162 minutes [IQR = 114–230 minutes]; p = 0.18) was not.
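For readers unfamiliar with the statistic, here is a minimal sketch of the comparison approach described in the methods, using hypothetical door-to-physician times in minutes (not the study data); SciPy's mannwhitneyu implements the Wilcoxon rank-sum test.

import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical pre- and post-intervention times (minutes), for illustration only.
pretriggers = np.array([13, 15, 18, 21, 41, 55, 62])
posttriggers = np.array([5, 7, 9, 11, 16, 21, 33])

print(np.median(pretriggers), np.median(posttriggers))  # 21.0 11.0
stat, p_value = mannwhitneyu(pretriggers, posttriggers, alternative="two-sided")
print(p_value)  # two-sided p-value for the difference in distributions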
Conclusions: Implementation of an ED triggers program allows for more rapid physician evaluation, therapeutic intervention, and antibiotic administration.
13. Financial Factors Cause ED Closures
Emma Hitt, PhD. May 17, 2011 — From 1990 to 2009, the number of hospital emergency departments (EDs) in nonrural areas decreased by 27%. Factors associated with increased risk for ED closure included for-profit ownership, location in a competitive market, "safety-net" status, and low profit margin, a new study has found.
The analysis, led by Renee Y. Hsia, MD, from the Department of Emergency Medicine at the University of California–San Francisco, and colleagues was reported in the May 18 issue of the Journal of the American Medical Association.
According to the researchers, EDs are the "safety net of the safety net," being the only medical facilities in the United States that serve all patients. Although federal law requires EDs to treat all patients, "no federal law ensures the availability of hospital EDs," the authors note. The current study sought to evaluate hospital, community and market factors that may be associated with ED closings throughout the United States. The most recent data on EDs from 1990 through 2009 were acquired from American Hospital Association Annual Surveys and were combined with financial information collected through 2007 and derived from Medicare hospital cost reports.
From 1990 to 2009, the number of ED hospitals in nonrural areas declined from 2446 to 1779, including 1041 ED closings and 374 ED openings.
A higher percentage of for-profit hospitals, as well as hospitals with low profit margins, closed between 1990 and 2007 compared with EDs in each of those categories that stayed open (26% vs 16%; hazard ratio [HR], 1.8; 95% confidence interval [CI], 1.5 - 2.1; and 36% vs 18%; HR, 1.9; 95% CI, 1.6 - 2.3 respectively). In addition, hospitals in more competitive markets had an increased risk of closing their EDs (34% vs 17%; HR, 1.3; 95% CI, 1.1 - 1.6), as did safety-net hospitals (10% vs 6%; HR, 1.4; 95% CI, 1.1 - 1.7), and those serving a disproportionally poor population (37% vs 31%; HR, 1.4; 95% CI, 1.1 - 1.7).
Study limitations include assessment of only quantifiable factors; political or other community pressures to close or stay open, philanthropic efforts, the hospital’s ability to fill beds with non-ED admissions, or other factors that may contribute to ED closure decisions were not considered. In addition, federal hospitals, such as those operated by the Veterans Administration, were not evaluated.
"Our findings underscore that market-based approaches to health care do not ensure that care will be equitably distributed," Dr. Hsai and colleagues conclude. "In fact, the opposite may be true."
According to the researchers, as long as "tens of millions of Americans are uninsured, and tens of millions more pay well below their cost of care, the push for 'results-driven competition' will not correct system-level disparities that markets cannot — and should not — be expected to resolve."
The study was funded by the National Institutes of Health and the Robert Wood Johnson Foundation. The authors have disclosed no relevant financial relationships.
JAMA. 2011;305(19):1978-1985.
14. Cost-ineffectiveness of Point-of-care Biomarker Assessment for Suspected AMI
Fitzgerald P, et al. Acad Emerg Med. 2011;18:488–495.
Objectives: Chest pain due to suspected myocardial infarction (MI) is responsible for many hospital admissions and consumes substantial health care resources. The Randomized Assessment of Treatment using Panel Assay of Cardiac markers (RATPAC) trial showed that diagnostic assessment using a point-of-care (POC) cardiac biomarker panel consisting of CK-MB, myoglobin, and troponin increased the proportion of patients successfully discharged after emergency department (ED) assessment. In this economic analysis, the authors aimed to determine whether POC biomarker panel assessment reduced health care costs and was likely to be cost-effective.
Methods: The RATPAC trial was a multicenter individual patient randomized controlled trial comparing diagnostic assessment using a POC biomarker panel (CK-MB, myoglobin, and troponin, measured at baseline and 90 minutes) to standard care without the POC panel in patients attending six EDs with acute chest pain due to suspected MI (n = 2,243). Individual patient resource use data were collected from all participants up to 3 months after hospital attendance using self-completed questionnaires at 1 and 3 months and case note review. ED staff and POC testing costs were estimated through a microcosting study of 246 participants. Resource use was valued using national unit costs. Health utility was measured using the EQ-5D self-completed questionnaire, mailed at 1 and 3 months. Quality-adjusted life-years (QALYs) were calculated by the trapezium rule using the EQ-5D tariff values at all follow-up points. Mean costs per patient were compared between the two treatment groups. Cost-effectiveness was estimated in terms of probability of dominance and incremental cost per QALY.
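As a concrete illustration of the QALY calculation described above, here is a minimal sketch using the trapezium rule; the EQ-5D utility values and the baseline/1-month/3-month time grid are hypothetical, not taken from the trial.

import numpy as np

# Hypothetical utilities at baseline, 1 month, and 3 months (EQ-5D tariff scale).
time_in_years = np.array([0.0, 1.0 / 12.0, 3.0 / 12.0])
utilities = np.array([0.70, 0.80, 0.85])

# Trapezium rule: the area under the utility-time curve gives QALYs over follow-up.
qalys = np.trapz(utilities, time_in_years)
print(round(qalys, 3))  # 0.2 QALYs over 3 months (the maximum possible over 3 months is 0.25)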
Results: Point-of-care panel assessment was associated with higher ED costs, coronary care costs, and cardiac intervention costs, but lower general inpatient costs. Mean costs per patient were £1,217.14 (standard deviation [SD] ±£3,164.93), or $1,987.14 (SD ±$4,939.25), with POC versus £1,005.91 (SD ±£1,907.55), or $1,568.64 (SD ±$2,975.78), with standard care (p = 0.056). Mean QALYs were 0.158 (SD ± 0.052) versus 0.161 (SD ± 0.056; p = 0.250). The probability of standard care being dominant (i.e., cheaper and more effective) was 0.888, while the probability of the POC panel being dominant was 0.004. These probabilities were not markedly altered by sensitivity analysis varying the costs of the POC panel and excluding intensive care costs.
Conclusions: Point-of-care panel assessment does not reduce costs despite reducing admissions and may even increase costs. It is unlikely to be considered a cost-effective use of health care resources.
15. More Hospital Deaths if Admitted on Weekends: Better Wait ‘Til Monday!
By Alison McCook. NEW YORK (Reuters Health) May 20 - People admitted to the hospital on the weekend are 10% more likely to die than those who checked in during the week, according to a new analysis of nearly 30 million people.
When applied to the entire U.S. population, that translates into tens of thousands of additional deaths each year, study author Dr. Rocco Ricciardi of Tufts University Medical School told Reuters Health.
"In other words, an extra 20 to 25 thousand people die each year in the United States because of admission on a weekend," he said.
This is not the first study to uncover a "weekend effect," in which patients are likely to fare worse during the weekends. Previous research has shown a "weekend effect" for patients admitted to the hospital for heart attack, a blood clot in a lung, a ruptured abdominal artery and strokes of all kinds. Still, the data are not always consistent: earlier this year, a survey of Pennsylvania hospitals found that people with injuries fare slightly better on weekends.
The current study is based on an analysis of a national sample of close to 30 million people who were admitted to hospitals in 35 states over a 5-year period. All were admitted for "non-elective" reasons, which represents most admissions, Dr. Ricciardi noted.
Reporting in the Archives of Surgery, he and his colleagues found that 2.7% of the people admitted during the weekend died while in the hospital, which happened to only 2.3% of those admitted on a weekday.
It's not entirely clear why people might fare worse when they come in during the weekend, Dr. Ricciardi said in an email. Looking specifically at traumas, he and his colleagues found no differences in death rates between weekend and weekday arrivals, which helps eliminate the possibility that people experience more life-threatening accidents on weekends.
But it's possible that care is different on weekends, he said - perhaps there is less nursing, fewer well-trained doctors, or less access to imaging and other necessary tools.
"Either (1) the patients coming to the hospital on weekends are sicker or else (2) the hospital is doing a worse job of treating them," said Dr. Raman Khanna at the University of California at San Francisco, who was not involved in the study.
Since the researchers found no differences in trauma rates, and also took into account whether weekend arrivers had other illnesses that could make them sicker, "the authors can make the case that number 2 is more likely," he told Reuters Health in an email. Dr. Ricciardi and his team also looked at death rates by admission day for different diagnoses, and saw that not all fared worse on the weekends.
The categories that did fare worst on weekends included problems with pregnancy and the female reproductive system, blood cell and bone marrow disorders, and circulatory and nervous system problems. The findings suggest that hospitals should focus their efforts on those specific conditions, Dr. Khanna said, "rather than a blanket increase in nursing ratios for everyone at every hospital over the weekend, since a more tailored solution may be just as effective while far less expensive."
Abstract: http://archsurg.ama-assn.org/cgi/content/short/146/5/545
16. PECARN: ED Observation of Children with Minor Head Injury Reduces Use of CT
But does not impair identification of clinically important traumatic brain injuries
In a secondary analysis of data from the Pediatric Emergency Care Applied Research Network (PECARN) Lancet publication, researchers evaluated whether observing children (age, below 18 years) with minor head injury before deciding whether to obtain a head computed tomography (CT) scan affects use of CT and diagnosis of traumatic brain injury (TBI). Data on duration of observation were not collected.
Of 40,113 patients (median age, 5.6 years), 5433 (14%) were observed. Observed patients were significantly less likely to undergo CT than patients who were not observed (31% vs. 35%). After adjustment for clinical covariates, the likelihood of CT scanning remained lower for patients who were observed (adjusted odds ratio, 0.53). Rates of clinically important TBI (defined as intracranial injury resulting in death, neurosurgical intervention, intubation for longer than 24 hours, or hospital admission for 2 nights) were similar between groups (0.75% and 0.87%, respectively).
The authors conclude that observing intermediate-risk patients would result in approximately 39 fewer CT scans per 1000 children who present to the emergency department with blunt head trauma; intermediate-risk children were defined as those with normal mental status and no evidence of skull fracture and at least one of the following: loss of consciousness, severe mechanism of injury, vomiting, not acting normally per parents (children younger than 2 years), or severe headache (children 2 years and older).
Comment: The lack of data on duration of observation makes practical application of these findings difficult. However, neurologically normal children with a history of loss of consciousness, transient vomiting, or headache can be observed before deciding about CT. Children with persistent symptoms or any sign of clinical deterioration should undergo immediate CT.
— Katherine Bakes, MD. Published in Journal Watch Emergency Medicine May 27, 2011. Citation: Nigrovic LE et al. The effect of observation on cranial computed tomography utilization for children after blunt head trauma. Pediatrics 2011 Jun; 127:1067.
17. BUN in the Early Assessment of Acute Pancreatitis: An International Validation Study
Wu BU, et al. Arch Intern Med. 2011;171:669-676.
Background: Objective assessment of acute pancreatitis (AP) is critical to help guide resuscitation efforts. Herein we (1) validate serial blood urea nitrogen (BUN) measurement for early prediction of mortality and (2) develop an objective BUN-based approach to early assessment in AP.
Methods: We performed a secondary analysis of 3 prospective AP cohort studies: Brigham and Women's Hospital (BWH), June 2005 through May 2009; the Dutch Pancreatitis Study Group (DPSG), March 2004 through March 2007; and the University of Pittsburgh Medical Center (UPMC), June 2003 through September 2007. Meta-analysis and stratified multivariate logistic regression adjusted for age, sex, and creatinine levels were calculated to determine risk of mortality associated with elevated BUN level at admission and rise in BUN level at 24 hours. The accuracy of the BUN measurements was determined by area under the receiver operating characteristic curve (AUC) analysis compared with serum creatinine measurement and APACHE II score. A BUN-based assessment algorithm was derived on BWH data and validated on the DPSG and UPMC cohorts.
Results: A total of 1043 AP cases were included in analysis. In pooled analysis, a BUN level of 20 mg/dL or higher was associated with an odds ratio (OR) of 4.6 (95% confidence interval [CI], 2.5-8.3) for mortality. Any rise in BUN level at 24 hours was associated with an OR of 4.3 (95% CI, 2.3-7.9) for death. Accuracy of serial BUN measurement (AUC, 0.82-0.91) was comparable to that of the APACHE II score (AUC, 0.72-0.92) in each of the cohorts. A BUN-based assessment algorithm identified patients at increased risk for mortality during the initial 24 hours of hospitalization.
Conclusions: We have confirmed the accuracy of BUN measurement for early prediction of mortality in AP and developed an algorithm that may assist physicians in their early resuscitation efforts.
18. A prospective, randomized, trial of phenobarbital versus benzodiazepines for acute alcohol withdrawal
Hendey GW, et al. Amer J Emerg Med. 2011;29:382-385.
Objective
The aim of this study was to compare phenobarbital (PB) versus lorazepam (LZ) in the treatment of alcohol withdrawal in the emergency department (ED) and at 48 hours.
Methods
Consenting patients were prospectively randomized, assessed using a modified Clinical Institute Withdrawal Assessment (CIWA) score, and given intravenous PB (mean, 509 mg) or LZ (mean, 4.2 mg). At discharge, LZ patients received chlordiazepoxide (Librium), and PB patients received placebo.
Results
Of 44 patients, 25 received PB, and 19 received LZ. Both PB and LZ reduced CIWA scores from baseline to discharge (from 15.0 to 5.4 and from 16.8 to 4.2, respectively; P less than .0001). There were no differences between PB and LZ in baseline CIWA scores (P = .3), discharge scores (P = .4), ED length of stay (267 versus 256 minutes, P = .8), admissions (12% versus 16%, P = .8), or 48-hour follow-up CIWA scores (5.8 versus 7.2, P = .6).
Conclusion
Phenobarbital and LZ were similarly effective in the treatment of mild/moderate alcohol withdrawal in the ED and at 48 hours.
19. 72-Hr Returns May Not be a Good Indicator of Safety in the ED: A National Study
Pham JC, et al. Acad Emerg Med. 2011;18:390–397.
Objectives: The objective was to measure the association between returns to an emergency department (ED) within 72 hours and resource utilization, severity of illness, mortality, and admission rate.
Methods: This was a retrospective, cross-sectional analysis of ED visits using data from the National Hospital Ambulatory Medical Care Survey (NHAMCS) from 1998 to 2006. Cohorts were patients who had been seen in the ED within the past 72 hours versus those without the prior visit. A multivariate model was created to predict adjusted-resource utilization and mortality or admission rate.
Results: During the study period, there were 218,179 ED patient visits and a 3.2% 72-hour return rate. Patients with Medicare (3.5%) and without insurance (3.5%) were more likely to return within 72 hours. Visits associated with alcohol (4.1%), low triage acuity (4.0%), or dermatologic conditions (5.9%) were more likely to return. Seventy-two-hour return visits used fewer resources (5.0 [±0.1] vs. 5.5 [±0.1] tests, medications, procedures), were less likely to be Level I triage acuity (17% vs. 20%), and had a similar admission rate (13% vs. 13%) as those not seen within 72 hours. The sample size was too small to evaluate mortality.
Conclusions: Patients who return to the ED within 72 hours do not use more resources, are not more severely ill, and do not have a higher hospital admission rate than those who had not been previously seen. These findings do not support the use of 72-hour returns as a quality or safety indicator. A more refined variation, such as 72-hour returns resulting in admission, may have more value.
20. Even Short-Term NSAID Use Risky in Cardiac Patients
Megan Brooks. May 9, 2011 — In patients with prior myocardial infarction (MI), most nonsteroidal anti-inflammatory drugs (NSAIDs), even when taken for as little as 1 week, are associated with an increased risk for death and recurrent MI, new observational data indicate.
Use of NSAIDs was associated with a 45% increased risk for death or recurrent MI in the first 7 days of treatment and a 55% increased risk if treatment continued to 3 months. The findings were published online May 9 in Circulation.
"We found that short-term treatment with most NSAIDs was associated with increased and instantaneous cardiovascular risk," first author Anne-Marie Schjerning Olsen, MB, from Copenhagen University in Hellerup, Denmark, told Medscape Medical News.
"Our results indicate that there is no apparent safe therapeutic window for NSAIDs in patients with prior MI and challenge the current recommendations of low-dose and short-term use of NSAIDs as being safe," she said.
Results 'Completely Consistent' With 2007 AHA Advisory
In a 2007 scientific statement, the American Heart Association (AHA) advised clinicians about the risks of NSAID use among patients with known cardiovascular disease or those at risk for ischemic heart disease and provided a stepped-care approach for use of these agents in this patient population.
Asked to comment on the new study, Elliott Antman, MD, from Brigham and Women's Hospital and Harvard Medical School in Boston, Massachusetts, and lead author of the 2007 advisory, said, "Essentially, what this paper shows is that there is a gradient of risk among NSAIDs; some are associated with more risk than others; none appear to be completely safe, and the researchers could not identify a period that appeared to be safe, no matter how short."
"This is completely consistent with the advice that we put forward in 2007, which is to use the safest drug, in the lowest dose required to control musculoskeletal symptoms, for the shortest period of time," Dr. Antman told Medscape Medical News.
"We do have to be practical here," Dr. Antman said, "because despite very best efforts with physical therapy and nonpharmacologic treatments, there are individuals who have severe, debilitating arthritis, lupus, or rheumatoid arthritis and we do have to have a treatment plan for those patients." The AHA's stepped-care approach provides such a plan, Dr. Antman noted.
First Time-to-Event Analysis
Using the Danish National Patient Registry, Dr. Olsen and colleagues identified 83,675 patients who were admitted to a hospital with a first MI between 1997 and 2006 and were discharged alive. Their average age was 68 years, and 63% were men.
At least 1 prescription claim for NSAID treatment after discharge was identified for 35,405 patients (42.3%), most commonly ibuprofen (23%) and diclofenac (13.4%). The most commonly prescribed selective cyclooxygenase (COX)-2 inhibitors were rofecoxib (4.7%) and celecoxib (4.8%).
During the observation period, 35,257 deaths or recurrent MIs (42.1%) were registered in the database. According to the investigators, the risk for death or recurrent MI was elevated at the beginning of NSAID treatment (hazard ratio [HR], 1.45; 95% confidence interval [CI], 1.29 - 1.62) and the risk persisted throughout treatment (HR after 90 days, 1.55; 95% CI, 1.46 - 1.64).
All NSAIDs, except naproxen, were associated with an increased risk for death or recurrent MI, with diclofenac having the highest risk (HR in the first week of treatment, 3.26; 95% CI, 2.57 - 3.86).
"Particularly worrying," Dr. Olsen told Medscape Medical News, "was the fact that the widely used nonselective NSAID diclofenac was associated with early and higher cardiovascular risk than the selective COX-2 inhibitor rofecoxib, which was withdrawn from the market in 2004 due to its unfavorable cardiovascular risk profile."
"The accumulating evidence suggests that we must limit NSAID use to the absolute minimum in patients with established cardiovascular disease," she added.
"If NSAID therapy is necessary for patients with known cardiovascular disease, the doctors should choose a more selective COX-1 inhibitor in minimum dose (eg, naproxen ≤ 500 mg daily or ibuprofen ≤ 1200 mg daily) for the shortest period of time," she added.
Dr. Antman said this paper provides a "good reminder" for clinicians and patients about the risks of NSAIDs in this patient population.
"Many of the drugs that we are talking about," he noted, "can be obtained over-the-counter, and it is the presumption of many patients that if it is a drug that they can get over-the-counter it must be 'safer.' Very often they don't report those medications to their physician when they go for an office visit."
The authors and Dr. Antman have reported no relevant financial relationships.
Circulation. Published online May 9, 2011
21. Hospital-reported Data on the Pneumonia Quality Measure “Time to First Antibiotic Dose” Are Not Associated with Inpatient Mortality: Results of a Nationwide Cross-sectional Analysis
Quattromani E, et al. Acad Emerg Med. 2011;18:496–503.
Objectives: Significant controversy exists regarding the Centers for Medicare & Medicaid Services (CMS) “time to first antibiotic dose” (TFAD) quality measure. The objective of this study was to determine whether hospital performance on the TFAD measure for patients admitted from the emergency department (ED) for pneumonia is associated with decreased mortality.
Methods: This was a cross-sectional analysis of 95,704 adult ED admissions with a principal diagnosis of pneumonia from 530 hospitals in the 2007 Nationwide Inpatient Sample. The sample was merged with 2007 CMS Hospital Compare data, and hospitals were categorized into TFAD performance quartiles. Univariate association of TFAD performance with inpatient mortality was evaluated by chi-square test. A population-averaged logistic regression model was created with an exchangeable working correlation matrix of inpatient mortality adjusted for age, sex, comorbid conditions, weekend admission, payer status, income level, hospital size, hospital location, teaching status, and TFAD performance.
Results: Patients had a mean age of 69.3 years. In the adjusted analysis, increasing age was associated with increased mortality with odds ratios (ORs) above 2.3. Unadjusted inpatient mortality was 4.1% (95% confidence interval [CI] = 3.9% to 4.2%). Median time to death was 5 days (25th–75th interquartile range = 2–11). Mean TFAD quality performance was 77.7% across all hospitals (95% CI = 77.6% to 77.8%). The risk-adjusted OR of mortality was 0.89 (95% CI = 0.77 to 1.02) in the highest performing TFAD quartile, compared to the lowest performing TFAD quartile. The second highest performing quartile OR was 0.94 (95% CI = 0.82 to 1.08), and third highest performing quartile was 0.91 (95% CI = 0.79 to 1.05).
Conclusions: In this nationwide heterogeneous 2007 sample, there was no association between the publicly reported TFAD quality measure performance and pneumonia inpatient mortality.
1. Some Guidance Questioned on Managing Cardiac Arrest in Pregnancy
NEW YORK (Reuters Health) May 23 - A recommended maneuver in resuscitating pregnant women in cardiac arrest may be counterproductive, according to the authors of a systematic review of the topic.
Furthermore, they found that perimortem Cesarean section as a last resort is not being utilized optimally in this situation.
Writing in Resuscitation online May 9, Dr. Laurie J. Morrison at the University of Toronto in Ontario, Canada, and colleagues note that cardiac arrest occurs in about 1 in 20,000 pregnancies. Although rare, the incidence is 10 times higher than the rate of cardiac arrest in young athletes.
There are published recommendations for managing cardiac arrest in pregnancy, the authors note, but these are based on "very little science." One recommendation is to place the women in a left lateral tilt during chest compressions to relieve aortocaval compression. Another guideline indicates that perimortem Cesarean section should be performed within 5 minutes of maternal cardiac arrest.
In conducting a review of the literature on these issues, the team identified three studies dealing with resuscitation technique and two with perimortem C-section.
These data indicated that it is feasible to apply chest compressions with the patient tilted to the left, but the compression force is only about 80% of that delivered in the supine position. Given that, and considering the time spent positioning the patient, the team believes that a better strategy is to displace the uterus leftward manually while the patient is supine during chest compressions.
As for defibrillation, the evidence supports usual energy settings, since transthoracic impedance isn't changed in pregnancy, according to the report.
Regarding the performance of perimortem Cesareans, a series of 38 cases showed that it was undertaken within 5 minutes in only eight instances. Nonetheless, 17 infants survived without sequelae.
Furthermore, in 22 cases with sufficient information, "twelve women had sudden and often dramatic improvement in their clinical status immediately after the uterus was emptied including a return of the pulse and blood pressure," Dr. Morrison and colleagues report.
Summing up, they conclude, "Perimortem cesarean section is rarely done within 5 min from cardiac arrest. Maternal and neonatal survival has been documented with the use of perimortem cesarean section; however, there is not enough information about its optimal use."
They continue, "Chest compressions in a left lateral tilt from the horizontal are feasible but less forceful compared to the supine position, and there are good theoretical arguments to use left lateral uterine displacement rather than lateral tilt from the horizontal during maternal resuscitation."
The authors suggest that an international registry would help frame future recommendations for managing cardiac arrest in pregnancy.
Abstract in Resuscitation: http://dx.doi.org/10.1016/j.resuscitation.2011.01.028
2. Hypotensive Resuscitation in Trauma Patients Lessens Transfusion Needs
. . . and reduces incidence of postoperative coagulopathy and associated death.
These authors report an interim analysis of the first prospective randomized trial of intraoperative hypotensive resuscitation in patients. At a single level I trauma center, 90 patients with at least one episode of in-hospital systolic blood pressure 90 mm Hg who were undergoing laparotomy or thoracotomy for blunt (6 patients) or penetrating (84) trauma were randomized at entry to the operating room to have their mean arterial pressure (MAP) maintained at a target minimum of 50 mm Hg (low MAP) or 65 mm Hg (high MAP). Methods of achieving target levels were at the discretion of the anesthesiologist. MAPs that rose above the target were not lowered.
The low-MAP group received a significantly smaller amount of blood products (packed red blood cells, fresh frozen plasma, platelets) than the high-MAP group (1594 mL vs. 2898 mL) and had significantly lower mortality within 24 hours of admission to the intensive care unit (2.3% vs. 17.4%) and significantly lower mortality due to coagulopathy-associated postoperative hemorrhage (0 of 6 vs. 7 of 10). Mortality at 30 days did not differ significantly between the two groups (23% and 28%, respectively).
Comment: This interim analysis suggests that maintaining a low MAP during intraoperative resuscitation in seriously ill trauma patients is safe, reduces use of blood products, and decreases the incidence of postoperative coagulopathy and the related consequence of death. If these promising findings hold in the final analysis, similar approaches should be undertaken in the field and the emergency department.
— John A. Marx, MD, FAAEM. Published in Journal Watch Emergency Medicine May 6, 2011. Citation: Morrison CA et al. Hypotensive resuscitation strategy reduces transfusion requirements and severe postoperative coagulopathy in trauma patients with hemorrhagic shock: Preliminary results of a randomized controlled trial. J Trauma 2011 Mar; 70:652.
3. Absolute Lymphocyte Count Below 950 in the ED Predicts a Low CD4 Count in Admitted HIV-positive Patients
Napoli AM, et al. Acad Emerg Med. 2011;18:385–389.
Objectives: This study sought to determine if the automated absolute lymphocyte count (ALC) predicts a “low” (below 200 × 10^6 cells/μL) CD4 count in patients with known human immunodeficiency virus (HIV+) who are admitted to the hospital from the emergency department (ED).
Methods: This retrospective cohort study over an 8-year period was performed in a single, urban academic tertiary care hospital with over 85,000 annual ED visits. Included were patients who were known to be HIV+ and admitted from the ED, who had an ALC measured in the ED and a CD4 count measured within 24 hours of admission. Back-transformed means and confidence intervals (CIs) were used to describe CD4 and ALC levels. The primary outcome was to determine the utility of an ALC threshold for predicting a CD4 count of below 200 × 10^6 cells/μL by assessing the strength of association between log-transformed ALC and CD4 counts using a Pearson correlation coefficient. In addition, area under the receiver operating characteristic curve (AUC) and a decision plot analysis were used to calculate the sensitivity, specificity, and the positive and negative likelihood ratios to identify prespecified optimal clinical thresholds of a likelihood ratio below 0.1 and above 10.
Results: A total of 866 patients (mean age 42 years, 40% female) met inclusion criteria. The back-transformed means (95% CIs) for CD4 and ALC were 34 (31–38) and 654 (618–691), respectively. There was a significant relationship between the two measures, r = 0.74 (95% CI = 0.71 to 0.77, p less than 0.01). The AUC was 0.92 (95% CI = 0.90 to 0.94, p less than 0.001). An ALC below 1,700 × 10^6 cells/μL had a sensitivity of 95% (95% CI = 93% to 96%), specificity of 52% (95% CI = 43% to 62%), and negative likelihood ratio of 0.09 (95% CI = 0.05 to 0.2) for a CD4 count below 200 × 10^6 cells/μL. An ALC below 950 × 10^6 cells/μL had a sensitivity of 76% (95% CI = 73% to 79%), specificity of 93% (95% CI = 87% to 96%), and positive likelihood ratio of 10.1 (95% CI = 8.2 to 14) for a CD4 count below 200 × 10^6 cells/μL.
Conclusions: Absolute lymphocyte count was predictive of a CD4 count below 200 × 10^6 cells/μL in HIV+ patients presenting to the ED who require hospital admission. A CD4 count below 200 × 10^6 cells/μL is very likely if the ED ALC is below 950 × 10^6 cells/μL and less likely if the ALC is above 1,700 × 10^6 cells/μL. Depending on pretest probability, clinical use of this relationship may help emergency physicians predict the likelihood of susceptibility to opportunistic infections and may help identify patients who should receive definitive CD4 testing.
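As an illustration of how the reported operating characteristics translate into likelihood ratios, the short Python sketch below recomputes LR+ and LR- from the rounded sensitivity and specificity quoted above. The function and the rounded inputs are ours, so the outputs will not exactly match the published values, which were calculated from the raw counts.

    # Illustrative only: likelihood ratios recomputed from the rounded
    # sensitivity/specificity reported above, not from the raw study data.
    def likelihood_ratios(sensitivity, specificity):
        lr_positive = sensitivity / (1 - specificity)
        lr_negative = (1 - sensitivity) / specificity
        return lr_positive, lr_negative

    # ALC below 1,700 x 10^6 cells/uL as the screen for CD4 below 200
    print(likelihood_ratios(0.95, 0.52))  # ~(2.0, 0.10); published LR- = 0.09

    # ALC below 950 x 10^6 cells/uL as the screen for CD4 below 200
    print(likelihood_ratios(0.76, 0.93))  # ~(10.9, 0.26); published LR+ = 10.1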
4. Needle and guidewire visualization in ultrasound-guided IJ vein cannulation
Moak JH, et al. Amer J Emerg Med. 2011;29:432-436.
Study objective: Reimbursement for ultrasound-guided central lines requires documenting the needle entering the vessel lumen. We hypothesized that physicians often successfully perform ultrasound-guided internal jugular (IJ) cannulation without visualizing the needle in the lumen and that guidewire visualization occurs more frequently.
Methods: This prospective, observational study enrolled emergency physicians performing ultrasound-guided IJ cannulations over an 8-month period. Physicians reported sonographic visualization of the needle or guidewire and recorded DVD images for subsequent review. Outcome measures were the proportion of successful procedures in which the operator reported seeing the needle or guidewire in the vessel lumen and the proportion of successful, recorded procedures, in which a reviewer noted the same findings. Procedures were deemed successful when functioning central venous catheters were placed. Fisher exact test was used for comparisons.
Results: Of 41 attempted catheterizations, 35 (85.4%) were successful. Eighteen of these were recorded on DVD for review. The operator reported visualizing the needle within the vessel lumen in 23 (65.7%) of 35 successful cannulations (95% confidence interval [CI], 47.7%-80.3%). In 27 cases, the operator attempted to view the guidewire and reported doing so in 24 cases (88.9%; 95% CI, 69.7%-97.1%). On expert review, the needle was seen penetrating the vessel lumen in 1 (5.6%) of 18 cases (95% CI, 0.3%-29.4%). Among recorded procedures in which the operator also attempted wire visualization, the reviewer could identify the wire within the vessel lumen in 12 (75.0%) of 16 cases (95% CI, 47.4%-91.7%).
Conclusions: During successful ultrasound-guided IJ cannulation, physicians can visualize the guidewire more readily than the needle.
5. Physicians Often Can't Predict Who Will Misuse Pain Meds
Allison Gandey. May 27, 2011 (Austin, Texas) — Prescribers trying to determine who will abuse pain medications are wrong about half the time, a new study shows.
Presenting here at the American Pain Society 30th Annual Scientific Meeting, researchers found that the ability of physicians to correctly predict at-risk patients was only slightly better than chance.
Investigators in this industry-funded study looked at 549 patients from 50 practices. Clinicians were asked to identify which patients they thought were at risk for medication misuse and those who weren't, using standard risk assessment methods.
Researchers then compared these results to urine drug tests. Physicians' best guesses were most accurate for patients they believed to be misusing their medications. However, in the group thought to be compliant, clinicians missed 60% of patients who went on to have an abnormal urine drug test.
Misuse was confirmed if illicit drugs were present or if prescribed medications were absent in the urine sample.
Table 1. Risk Assessments of Patients Receiving Long-Term Opioid Therapy
Physician Evaluation                               Predicted Correctly (%)
Patients thought to be misusing (n = 173)          72
Patients believed to be not at risk (n = 204)      40
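A rough back-calculation from the rounded figures in Table 1 shows why the investigators describe physician performance as only slightly better than chance (illustrative Python arithmetic; the counts and percentages are those reported above, and rounding makes the result approximate):

    # Back-of-the-envelope overall accuracy from Table 1 (rounded inputs).
    correct_misusing = 0.72 * 173      # ~125 correctly flagged as misusing
    correct_not_at_risk = 0.40 * 204   # ~82 correctly judged not at risk
    overall_accuracy = (correct_misusing + correct_not_at_risk) / (173 + 204)
    print(round(overall_accuracy, 2))  # ~0.55, i.e., little better than a coin flip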
Investigators included a third random group in which no physician risk assessments were performed (n = 172). Most of these patients, 61%, had a normal test result; however, 30% were misusing drugs.
The team led by Harry Leider, MD, chief medical officer at Ameritox, a company that provides pain medication monitoring, concludes that all patients receiving long-term opioid therapy should have urine drug testing to identify those who may be misusing their medications.
The current 2009 American Pain Society guidelines call for urine analysis, but not in all cases.
In the US Food and Drug Administration's long-awaited opioid plan unveiled last month, regulators opted to focus on new education programs for prescribers. The opioid Risk Evaluation and Mitigation Strategy (REMS) will require drug makers to provide and pay for the plan, although the training is not mandatory for prescribers.
American Pain Society (APS) 30th Annual Scientific Meeting: Posters 111 and 119. Presented May 19, 2011.
6. CT Evaluation for Blunt Trauma Exposes Children to Dangerous Radiation Levels
Radiation doses to the thyroid were at levels associated with increased cancer risk in 71% of patients. Mueller DL et al. J Trauma 2011;70:724
Background: Increased utilization of computed tomography (CT) scans for evaluation of blunt trauma patients has resulted in increased doses of radiation to patients. Radiation dose is relatively amplified in children secondary to body size, and children are more susceptible to long-term carcinogenic effects of radiation. Our aim was to measure radiation dose received in pediatric blunt trauma patients during initial CT evaluation and to determine whether doses exceed doses historically correlated with an increased risk of thyroid cancer.
Methods: A prospective cohort study of patients aged 0 years to 17 years was conducted over 6 months. Dosimeters were placed on the neck, chest, and groin before CT scanning to measure surface radiation. Patient measurements and scanning parameters were collected prospectively along with diagnostic findings on CT imaging. Cumulative effective whole body dose and organ doses were calculated.
Results: The mean number of scans per patient was 3.1 ± 1.3. Mean whole body effective dose was 17.43 mSv. Mean organ doses were thyroid 32.18 mGy, breast 10.89 mGy, and gonads 13.15 mGy. Patients with selective CT scanning defined as ≤2 scans had a statistically significant decrease in radiation dose compared with patients with more than 2 scans.
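For readers unfamiliar with how organ doses relate to effective dose, the Python sketch below applies a generic ICRP-style weighted sum (effective dose = sum of tissue weighting factor times organ equivalent dose) to the three organ doses reported above. The tissue weighting factors shown are the ICRP Publication 103 values for those organs, but the calculation is purely illustrative: it is not the dosimetry method used by the study authors, and a full effective-dose calculation sums over all tissues.

    # Illustrative ICRP-style weighted sum over the three reported organ doses.
    # NOT the study's dosimetry method; a real effective dose sums all tissues.
    # For x-rays, 1 mGy absorbed dose corresponds to ~1 mSv equivalent dose.
    tissue_weights = {"thyroid": 0.04, "breast": 0.12, "gonads": 0.08}      # ICRP 103
    organ_dose_mGy = {"thyroid": 32.18, "breast": 10.89, "gonads": 13.15}   # reported above

    partial_effective_dose_mSv = sum(
        tissue_weights[t] * organ_dose_mGy[t] for t in organ_dose_mGy
    )
    print(round(partial_effective_dose_mSv, 2))  # ~3.65 mSv from these three tissues alone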
Conclusions: Thyroid doses in 71% of study patients fell within the dose range historically correlated with an increased risk of thyroid cancer and whole body effective doses fell within the range of historical doses correlated with an increased risk of all solid cancers and leukemia. Selective scanning of body areas as compared with whole body scanning results in a statistically significant decrease in all doses.
7. Is CTA a Resource Sparing Strategy in the Risk Stratification and Evaluation of Acute Chest Pain? Results of a Randomized Controlled Trial
Miller AH, et al. Acad Emerg Med. 2011;18:458–467.
Objectives: Annually, almost 6 million U.S. citizens are evaluated for acute chest pain syndromes (ACPSs), and billions of dollars in resources are utilized. A large part of the resource utilization results from precautionary hospitalizations that occur because care providers are unable to exclude the presence of coronary artery disease (CAD) as the underlying cause of ACPSs. The purpose of this study was to examine whether the addition of coronary computerized tomography angiography (CCTA) to the concurrent standard care (SC) during an index emergency department (ED) visit could lower resource utilization when evaluating for the presence of CAD.
Methods: Sixty participants were assigned randomly to SC or SC + CCTA groups. Participants were interviewed at the index ED visit and at 90 days. Data collected included demographics, perceptions of the value of accessing health care, and clinical outcomes. Resource utilization included services received from both the primary in-network and the primary out-of-network providers. The prospectively defined primary endpoint was the total amount of resources utilized over a 90-day follow-up period when adding CCTA to the SC risk stratification in ACPSs.
Results: The mean (± standard deviation [SD]) for total resources utilized at 90 days for in-network plus out-of-network services was less for the participants in the SC + CCTA group ($10,134; SD ±$14,239) versus the SC-only group ($16,579; SD ±$19,148; p = 0.144), as was the median for the SC + CCTA ($4,288) versus SC only ($12,148; p = 0.652; median difference = –$1,291; 95% confidence interval [CI] = –$12,219 to $1,100; p = 0.652). Among the 60 total study patients, only 19 had an established diagnosis of CAD at 90 days. However, 18 (95%) of these diagnosed participants were in the SC + CCTA group. In addition, there were fewer hospital readmissions in the SC + CCTA group (6 of 30 [20%] vs. 16 of 30 [53%]; difference in proportions = –33%; 95% CI = –56% to –10%; p = 0.007).
Conclusions: Adding CCTA to the current ED risk stratification of ACPSs resulted in no difference in the quantity of resources utilized, but an increased diagnosis of CAD, and significantly less recidivism and rehospitalization over a 90-day follow-up period.
8. Diagnostic Tests: Another Frontier for Less Is More
Or Why Talking to Your Patient Is a Safe and Effective Method of Reassurance
Redberg R, et al. Arch Intern Med. 2011;171:619.
During the past several months, the Archives has published a number of articles demonstrating that more treatment is not necessarily better. Proton pump inhibitors for persons with nonulcer dyspepsia, opioid medications for persons with chronic nonmalignant pain, and statin medications for persons without coronary artery disease are all examples of the widespread use of medications with known adverse effects despite the absence of data for patient benefit for these indications. Epitomizing the theory of less is more, one study found that 58% of medications could be discontinued in elderly patients and that quality of life improved with drug discontinuation.1
Diagnostic testing represents another important example of less is more. Often, diagnostic tests are ordered without questioning how the result will or should change patient treatment. Instead, tests are ordered to "reassure," "just to be sure," "just in case," or "just to know." Among the problems, beyond the waste of resources, is that the likelihood ratios of many commonly ordered tests are not high enough to rule diagnoses in or out accurately in real world settings. The result is that abnormal findings on one unneeded test often require another more invasive test or treatment.2 This cascade can result in a patient sustaining serious adverse effects from a simple blood test or imaging procedure that may have been thought to carry no risk at all. For example, the risk of harm caused by performing a D-dimer assay in a patient with a low probability of venous thrombosis may be thought to be limited to the minute risk of venipuncture, unless, of course, an elevated D-dimer level results in a computed tomographic (CT) scan to rule out a pulmonary embolism (radiation), and a false-positive reading of the CT scan results in unnecessary anticoagulation, and anticoagulation results in gastrointestinal bleeding, and so on.
The case described by Becker et al3 in this issue of the Archives illustrates dramatically and tragically why we must reevaluate how we look at diagnostic tests. In her physicians' good-faith attempts to "reassure" this middle-aged woman that she did not have heart disease, they ordered a cardiac CT scan. This test, with known adverse effects, was unnecessary because the woman's prior probability of having serious cardiac disease was too low for a positive result to change her clinical treatment. Yet, sadly, the test was given more credit than it deserved, with the result that a healthy woman with a normal heart required a heart transplant, hardly the reassuring outcome that was intended.
Hindsight is, of course, 20/20. But applying the "Less is More" principles prospectively could have avoided this unfortunate case. Less is more means that if a test is not sufficiently accurate to change clinical management in a particular setting, it should not be done. Many patients now receive a battery of tests in an emergency department or a screening facility even before they are seen by a clinician. Given that these tests are ordered with no assessment of the likelihood that the patient has or does not have any particular disease, test results are unlikely to be helpful, and can be harmful. Proper ordering and sequencing of tests and imaging require assessing the likelihood that a patient has specific conditions before a test is ordered as well as understanding the accuracy of the test and how it will change clinical management (Bayesian thinking).
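The Bayesian point above can be made concrete with a small sketch: combining a pretest probability with a test's likelihood ratio yields the post-test probability, which is what should determine whether the test is worth ordering at all. The numbers below are hypothetical, chosen only for illustration; they are not taken from the editorial or from any particular study.

    # Post-test probability from pretest probability and a likelihood ratio.
    # All inputs are hypothetical, chosen only to illustrate the reasoning.
    def post_test_probability(pretest_probability, likelihood_ratio):
        pretest_odds = pretest_probability / (1 - pretest_probability)
        post_test_odds = pretest_odds * likelihood_ratio
        return post_test_odds / (1 + post_test_odds)

    # In a very-low-risk patient, even a positive result leaves disease unlikely,
    # so the test cannot reasonably change management.
    print(post_test_probability(0.01, 5))   # ~0.05
    # At an intermediate pretest probability, the same result is informative.
    print(post_test_probability(0.30, 5))   # ~0.68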
Perhaps the most important point to be learned from the case described by Becker and colleagues is that there are safer ways to reassure patients. Physicians are (still) highly respected professionals, and patients value our advice. Talking with our patients should be our first choice for reassurance; tests should be reserved for cases in which the benefits can be reasonably expected to outweigh the risks. This case reminds us that no test (not even a noninvasive one) is benign, and often less is more.
9. Lidocaine gel as an anesthetic protocol for nasogastric tube insertion in the ED
Uri O, et al. Amer J Emerg Med. 2011;29:386-390.
Objective
The aim of the study was to evaluate the efficacy of topical 2% lidocaine gel in reducing pain and discomfort associated with nasogastric tube insertion (NGTI) and to compare lidocaine with ordinary lubricant gel with respect to the ease of carrying out the procedure.
Methods
This prospective, randomized, double-blind, placebo-controlled, convenience sample trial was conducted in the emergency department of our tertiary care university-affiliated hospital. Five milliliters of 2% lidocaine gel or placebo lubricant gel were administered nasally to alert hemodynamically stable adult patients 5 minutes before undergoing a required NGTI. The main outcome measures were overall pain, nasal pain, discomfort (eg, choking, gagging, nausea, vomiting), and difficulty in performing the procedure. Standard comparative statistical analyses were used.
Results
The study cohort included 62 patients (65% males). Thirty-one patients were randomized to each of the lidocaine and placebo groups. Patients who received lidocaine reported significantly less intense overall pain associated with NGTI compared to those who received placebo (37 ± 28 mm vs 51 ± 26 mm on a 100-mm visual analog scale; P less than .05). The patients receiving lidocaine also had significantly reduced nasal pain (33 ± 29 mm vs 48 ± 27 mm; P less than .05) and significantly reduced sensation of gagging (25 ± 30 mm vs 39 ± 24 mm; P less than .05). However, conducting the procedure was significantly more difficult in the lidocaine group (2.1 ± 0.9 vs 1.4 ± 0.7 on a 5-point Likert scale; P less than .05).
Conclusion
Lidocaine gel administered nasally 5 minutes before NGTI significantly reduces pain and gagging sensations associated with the procedure but is associated with more difficult tube insertion compared to the use of lubricant gel.
10. Optic Nerve Ultrasound Predicts Elevated Intracranial Pressure
In a small meta-analysis, ultrasound measurement of optic nerve sheath diameter had a sensitivity of 90% for predicting elevated ICP.
Bedside emergency department ocular ultrasound is increasingly used to detect retinal detachment, but does it also have other uses? Researchers performed a meta-analysis of six prospective studies (231 patients) in which researchers compared intracranial pressure (ICP) monitoring and ultrasound measurement of optic nerve sheath diameter (ONSD) in consecutive adult patients with suspected elevated ICP. ONSD was measured 3 mm behind the globe; ICP and ONSD measurements were performed within 1 hour of each other.
The pooled sensitivity for ONSD detection of elevated ICP was 90% and the pooled specificity was 85%. The pooled diagnostic odds ratio was 51, meaning that the odds of a positive ONSD test were 51 times higher in patients with elevated ICP than in those without.
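As a back-of-the-envelope check, the pooled diagnostic odds ratio follows directly from the pooled sensitivity and specificity; the short Python recomputation below is ours and is purely illustrative.

    # Diagnostic odds ratio recomputed from the pooled estimates above.
    sensitivity, specificity = 0.90, 0.85
    lr_positive = sensitivity / (1 - specificity)   # 6.0
    lr_negative = (1 - sensitivity) / specificity   # ~0.12
    print(round(lr_positive / lr_negative))         # 51, matching the pooled DOR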
Comment: With a 90% sensitivity for ruling out elevated intracranial pressure, bedside ultrasound measurement of optic nerve sheath diameter shows promise as a new tool to guide decision making, including prioritizing patients for diagnostic studies and determining whether computed tomography is needed before an unstable polytrauma patient is taken to the operating room.
— Kristi L. Koenig, MD, FACEP. Published in Journal Watch Emergency Medicine May 20, 2011
Citation: Dubourg J et al. Ultrasonography of optic nerve sheath diameter for detection of raised intracranial pressure: A systematic review and meta-analysis. Intensive Care Med 2011 Apr 20; [e-pub ahead of print].
11. ED Septic Shock Protocol and Guideline for Children May Improve Care
Laurie Barclay, MD. May 24, 2011 — Implementing an emergency department (ED) septic shock protocol and care guideline for children improves compliance with fluid resuscitation and early antibiotic and oxygen administration, according to the results of a study reported online May 16 that will also appear in the June print issue of Pediatrics.
"Unrecognized and undertreated septic shock increases morbidity and mortality," write Gitte Y. Larsen, MD, MPH, from Pediatric Critical Care, Department of Pediatrics, Primary Children's Medical Center in Salt Lake City, Utah, and colleagues. "Septic shock in children is defined as sepsis and cardiovascular organ dysfunction, not necessarily with hypotension."
The hypothesis tested by this study was that a septic shock protocol and care guideline would facilitate early identification of septic shock, improve compliance with recommended treatment, and improve patient outcomes.
The investigators reviewed cases of unrecognized and undertreated septic shock in the ED of their institution, highlighting the need for increased recognition at triage and more aggressive therapy once septic shock was identified.
From January 2005 to December 2009, the investigators studied the effect of an ED septic shock protocol and care guideline, which they had developed to improve recognition beginning at triage, in all eligible ED patients. Of 345 pediatric ED patients identified, 49% were boys, and 297 (86.1%) met septic shock criteria at triage. Median age was 5.6 years; 196 patients (56.8%) had at least 1 chronic complex condition.
The most frequently observed signs of septic shock were skin color changes in 269 patients (78%) and tachycardia in 251 patients (73%); 120 patients (34%) had hypotension. During the study period, the median hospital length of stay decreased from 181 to 140 hours (P less than .05). However, mortality rate did not change during the study period (average 6.3%; 22/345). The most dramatic improvements in care were more complete recording of triage vital signs, timely fluid resuscitation and initiation of antibiotic treatment, and serum lactate measurements.
"Implementation of an ED septic shock protocol and care guideline improved compliance in delivery of rapid, aggressive fluid resuscitation and early antibiotic and oxygen administration and was associated with decreased length of stay," the study authors write.
Limitations of this study include failure to assess shock reversal for individual patients, inability to determine causality, lack of randomization, unrecognized factors that may have affected hospital length of stay or mortality, and inability to separately evaluate the effect of individual components of the care guidelines.
"Consistent successful treatment of septic shock cannot begin in the ICU [intensive care unit] for patients who present to the ED in shock; it must begin at the time of triage in the ED," the study authors conclude. "Early recognition and treatment of septic shock benefits all ED patients, because the effort to recognize early shock leads to a more meticulous patient assessment from the initial encounter. We developed a septic shock protocol and care guideline that led to improved compliance in delivery of rapid, aggressive fluid resuscitation, early antibiotic and oxygen administration, and decreased hospital LOS [length of stay]."
The study authors have disclosed no relevant financial relationships.
Pediatrics. Published online May 16, 2011.
12. ED Abnormal Vital Sign “Triggers” Program Improves Time to Therapy
McGillicuddy DC, et al. Acad Emerg Med. 2011;18:483-487.
Background: Implementation of rapid response systems to identify deteriorating patients in the inpatient setting has demonstrated improved patient outcomes. A “trigger” system using vital sign abnormalities to initiate evaluation by a physician was recently described as an effective rapid response method.
Objectives: The objective was to evaluate the effect of a triage-based trigger system on the primary outcome of time to physician evaluation and the secondary outcomes of therapeutic intervention, antibiotics, and disposition in emergency department (ED) patients.
Methods: A separate-samples pre- and postintervention study was conducted using retrospective chart review of outcomes in ED patients for three arbitrarily selected 5-day periods in 2007 (pretriggers) and 2008 (posttriggers). There were 2,165 and 2,212 charts in the pre- and posttriggers chart review, with 71 and 79 patients meeting trigger criteria, respectively. Trigger criteria used to identify patients at triage were: heart rate below 40 or above 130 beats/min, respiratory rate below 8 or above 30 breaths/min, systolic blood pressure below 90 mm Hg, and oxygen saturation less than 90% on room air. Median times (in minutes) were compared between pre- and posttrigger groups with interquartile ranges (IQRs 25–75), with the Wilcoxon rank sum test used to determine statistical significance.
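A minimal Python sketch of the triage trigger criteria listed above; the parameter names are ours rather than the study's data dictionary, and the thresholds are those stated in the Methods.

    # Triage "trigger" check using the vital-sign thresholds listed above.
    # Parameter names are illustrative, not taken from the study.
    def meets_trigger_criteria(hr, rr, sbp, spo2_room_air):
        return (
            hr < 40 or hr > 130            # heart rate, beats/min
            or rr < 8 or rr > 30           # respiratory rate, breaths/min
            or sbp < 90                    # systolic blood pressure, mm Hg
            or spo2_room_air < 90          # oxygen saturation on room air, %
        )

    print(meets_trigger_criteria(hr=118, rr=24, sbp=84, spo2_room_air=95))  # True (SBP below 90)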
Results: Overall, median times were shorter in the posttriggers group. The reductions in median time to physician evaluation (21 minutes [IQR = 13–41 minutes] vs. 11 minutes [IQR = 5–21 minutes]; p less than 0.001), first intervention (58 minutes [IQR = 20–139 minutes] vs. 26 minutes [IQR = 11–71 minutes]; p less than 0.01), and antibiotics (110 minutes [IQR = 74–171 minutes] vs. 69 minutes [IQR = 23–130 minutes]; p less than 0.01) were statistically significant. The reduction in median time to disposition (177 minutes [IQR = 121–303 minutes] vs. 162 minutes [IQR = 114–230 minutes]; p = 0.18) was not.
Conclusions: Implementation of an ED triggers program allows for more rapid time to physician evaluation, therapeutic intervention, and antibiotics.
13. Financial Factors Cause ED Closures
Emma Hitt, PhD. May 17, 2011 — From 1990 to 2009, the number of hospital emergency departments (EDs) in nonrural areas decreased by 27%. Factors associated with increased risk for ED closure included for-profit ownership, location in a competitive market, "safety-net" status, and low profit margin, a new study has found.
The analysis, led by Renee Y. Hsia, MD, from the Department of Emergency Medicine at the University of California–San Francisco, and colleagues was reported in the May 18 issue of the Journal of the American Medical Association.
According to the researchers, EDs are the "safety net of the safety net," being the only medical facilities in the United States that serve all patients. Although federal law requires EDs to treat all patients, "no federal law ensures the availability of hospital EDs," the authors note. The current study sought to evaluate hospital, community and market factors that may be associated with ED closings throughout the United States. The most recent data on EDs from 1990 through 2009 were acquired from American Hospital Association Annual Surveys and were combined with financial information collected through 2007 and derived from Medicare hospital cost reports.
From 1990 to 2009, the number of ED hospitals in nonrural areas declined from 2446 to 1779, including 1041 ED closings and 374 ED openings.
A higher percentage of for-profit hospitals, as well as hospitals with low profit margins, closed between 1990 and 2007 compared with EDs in each of those categories that stayed open (26% vs 16%; hazard ratio [HR], 1.8; 95% confidence interval [CI], 1.5 - 2.1; and 36% vs 18%; HR, 1.9; 95% CI, 1.6 - 2.3 respectively). In addition, hospitals in more competitive markets had an increased risk of closing their EDs (34% vs 17%; HR, 1.3; 95% CI, 1.1 - 1.6), as did safety-net hospitals (10% vs 6%; HR, 1.4; 95% CI, 1.1 - 1.7), and those serving a disproportionally poor population (37% vs 31%; HR, 1.4; 95% CI, 1.1 - 1.7).
Study limitations include assessment of only quantifiable factors; political or other community pressures to close or stay open, philanthropic efforts, the hospital’s ability to fill beds with non-ED admissions, and other factors that may contribute to ED closure decisions were not considered. In addition, federal hospitals, such as those operated by the Veterans Administration, were not evaluated.
"Our findings underscore that market-based approaches to health care do not ensure that care will be equitably distributed," Dr. Hsai and colleagues conclude. "In fact, the opposite may be true."
According to the researchers, as long as "tens of millions of Americans are uninsured, and tens of millions more pay well below their cost of care, the push for 'results-driven competition' will not correct system-level disparities that markets cannot — and should not — be expected to resolve."
The study was funded by the National Institutes of Health and the Robert Wood Johnson Foundation. The authors have disclosed no relevant financial relationships.
JAMA. 2011;305(19):1978-1985.
14. Cost-ineffectiveness of Point-of-care Biomarker Assessment for Suspected AMI
Fitzgerald P, et al. Acad Emerg Med. 2011;18:488–495.
Objectives: Chest pain due to suspected myocardial infarction (MI) is responsible for many hospital admissions and consumes substantial health care resources. The Randomized Assessment of Treatment using Panel Assay of Cardiac markers (RATPAC) trial showed that diagnostic assessment using a point-of-care (POC) cardiac biomarker panel consisting of CK-MB, myoglobin, and troponin increased the proportion of patients successfully discharged after emergency department (ED) assessment. In this economic analysis, the authors aimed to determine whether POC biomarker panel assessment reduced health care costs and was likely to be cost-effective.
Methods: The RATPAC trial was a multicenter individual patient randomized controlled trial comparing diagnostic assessment using a POC biomarker panel (CK-MB, myoglobin, and troponin, measured at baseline and 90 minutes) to standard care without the POC panel in patients attending six EDs with acute chest pain due to suspected MI (n = 2,243). Individual patient resource use data were collected from all participants up to 3 months after hospital attendance using self-completed questionnaires at 1 and 3 months and case note review. ED staff and POC testing costs were estimated through a microcosting study of 246 participants. Resource use was valued using national unit costs. Health utility was measured using the EQ-5D self-completed questionnaire, mailed at 1 and 3 months. Quality-adjusted life-years (QALYs) were calculated by the trapezium rule using the EQ-5D tariff values at all follow-up points. Mean costs per patient were compared between the two treatment groups. Cost-effectiveness was estimated in terms of probability of dominance and incremental cost per QALY.
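For readers unfamiliar with the trapezium (trapezoidal) rule applied to utilities, the Python sketch below shows the generic calculation over a 3-month follow-up. The utility values and assessment times are hypothetical and are not RATPAC data.

    # Generic trapezium-rule QALY calculation over a follow-up period.
    # Utilities and time points are hypothetical, for illustration only.
    def qalys(times_in_years, utilities):
        total = 0.0
        for i in range(1, len(times_in_years)):
            width = times_in_years[i] - times_in_years[i - 1]
            total += width * (utilities[i] + utilities[i - 1]) / 2.0  # trapezium area
        return total

    # EQ-5D utilities at baseline, 1 month, and 3 months (0.25 years)
    print(round(qalys([0.0, 1 / 12, 0.25], [0.70, 0.80, 0.85]), 3))  # 0.2 QALYs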
Results: Point-of-care panel assessment was associated with higher ED costs, coronary care costs, and cardiac intervention costs, but lower general inpatient costs. Mean costs per patient were £1,217.14 (standard deviation [SD] ±£3,164.93), or $1,987.14 (SD ±$4,939.25), with POC versus £1,005.91 (SD ±£1,907.55), or $1,568.64 (SD ±$2,975.78), with standard care (p = 0.056). Mean QALYs were 0.158 (SD ± 0.052) versus 0.161 (SD ± 0.056; p = 0.250). The probability of standard care being dominant (i.e., cheaper and more effective) was 0.888, while the probability of the POC panel being dominant was 0.004. These probabilities were not markedly altered by sensitivity analysis varying the costs of the POC panel and excluding intensive care costs.
Conclusions: Point-of-care panel assessment does not reduce costs despite reducing admissions and may even increase costs. It is unlikely to be considered a cost-effective use of health care resources.
15. More Hospital Deaths if Admitted on Weekends: Better Wait ‘Til Monday!
By Alison McCook. NEW YORK (Reuters Health) May 20 - People admitted to the hospital on the weekend are 10% more likely to die than those who checked in during the week, according to a new analysis of nearly 30 million people.
When applied to the entire U.S. population, that translates into tens of thousands of additional deaths each year, study author Dr. Rocco Ricciardi of Tufts University Medical School told Reuters Health.
"In other words, an extra 20 to 25 thousand people die each year in the United States because of admission on a weekend," he said.
This is not the first study to uncover a "weekend effect," in which patients are likely to fare worse during the weekends. Previous research has shown a "weekend effect" for patients admitted to the hospital for heart attack, a blood clot in a lung, a ruptured abdominal artery and strokes of all kinds. Still, the data are not always consistent: earlier this year, a survey of Pennsylvania hospitals found that people with injuries fare slightly better on weekends.
The current study is based on an analysis of a national sample of close to 30 million people who were admitted to hospitals in 35 states over a 5-year period. All were admitted for "non-elective" reasons, which represents most admissions, Dr. Ricciardi noted.
Reporting in the Archives of Surgery, he and his colleagues found that 2.7% of the people admitted during the weekend died while in the hospital, which happened to only 2.3% of those admitted on a weekday.
It's not entirely clear why people might fare worse when they come in during the weekend, Dr. Ricciardi said in an email. Looking specifically at traumas, he and his colleagues found no differences in death rates between weekend and weekday arrivals, which helps eliminate the possibility that people experience more life-threatening accidents on weekends.
But it's possible that care is different on weekends, he said - perhaps there is less nursing, fewer well-trained doctors, or less access to imaging and other necessary tools.
"Either (1) the patients coming to the hospital on weekends are sicker or else (2) the hospital is doing a worse job of treating them," said Dr. Raman Khanna at the University of California at San Francisco, who was not involved in the study.
Since the researchers found no differences in trauma rates, and also took into account whether weekend arrivers had other illnesses that could make them sicker, "the authors can make the case that number 2 is more likely," he told Reuters Health in an email. Dr. Ricciardi and his team also looked at death rates by admission day for different diagnoses, and saw that not all fared worse on the weekends.
The categories that did fare worst on weekends included problems with pregnancy and the female reproductive system, blood cell and bone marrow disorders, and circulatory and nervous system problems. The findings suggest that hospitals should focus their efforts on those specific conditions, Dr. Khanna said, "rather than a blanket increase in nursing ratios for everyone at every hospital over the weekend, since a more tailored solution may be just as effective while far less expensive."
Abstract: http://archsurg.ama-assn.org/cgi/content/short/146/5/545
16. PECARN: ED Observation of Children with Minor Head Injury Reduces Use of CT
But does not impair identification of clinically important traumatic brain injuries
In a secondary analysis of data from the Pediatric Emergency Care Applied Research Network (PECARN) Lancet publication, researchers evaluated whether observing children (age, below 18 years) with minor head injury before deciding whether to obtain a head computed tomography (CT) scan affects use of CT and diagnosis of traumatic brain injury (TBI). Data on duration of observation were not collected.
Of 40,113 patients (median age, 5.6 years), 5433 (14%) were observed. Observed patients were significantly less likely to undergo CT than patients who were not observed (31% vs. 35%). After adjustment for clinical covariates, the likelihood of CT scanning remained lower for patients who were observed (adjusted odds ratio, 0.53). Rates of clinically important TBI (defined as intracranial injury resulting in death, neurosurgical intervention, intubation for longer than 24 hours, or hospital admission for 2 nights) were similar between groups (0.75% and 0.87%, respectively).
The authors conclude that observing intermediate-risk patients would result in approximately 39 fewer CT scans per 1000 children who present to the emergency department with blunt head trauma; intermediate-risk children were defined as those with normal mental status and no evidence of skull fracture and at least one of the following: loss of consciousness, severe mechanism of injury, vomiting, not acting normally per parents (children younger than 2 years), or severe headache (children 2 years and older).
Comment: The lack of data on duration of observation makes practical application of these findings difficult. However, neurologically normal children with a history of loss of consciousness, transient vomiting, or headache can be observed before deciding about CT. Children with persistent symptoms or any sign of clinical deterioration should undergo immediate CT.
— Katherine Bakes, MD. Published in Journal Watch Emergency Medicine May 27, 2011. Citation: Nigrovic LE et al. The effect of observation on cranial computed tomography utilization for children after blunt head trauma. Pediatrics 2011 Jun; 127:1067.
17. BUN in the Early Assessment of Acute Pancreatitis: An International Validation Study
Wu BU, et al. Arch Intern Med. 2011;171:669-676.
Background: Objective assessment of acute pancreatitis (AP) is critical to help guide resuscitation efforts. Herein we (1) validate serial blood urea nitrogen (BUN) measurement for early prediction of mortality and (2) develop an objective BUN-based approach to early assessment in AP.
Methods: We performed a secondary analysis of 3 prospective AP cohort studies: Brigham and Women's Hospital (BWH), June 2005 through May 2009; the Dutch Pancreatitis Study Group (DPSG), March 2004 through March 2007; and the University of Pittsburgh Medical Center (UPMC), June 2003 through September 2007. Meta-analysis and stratified multivariate logistic regression adjusted for age, sex, and creatinine levels were used to determine the risk of mortality associated with an elevated BUN level at admission and a rise in BUN level at 24 hours. The accuracy of the BUN measurements was determined by area under the receiver operating characteristic curve (AUC) analysis compared with serum creatinine measurement and APACHE II score. A BUN-based assessment algorithm was derived on BWH data and validated on the DPSG and UPMC cohorts.
Results: A total of 1043 AP cases were included in analysis. In pooled analysis, a BUN level of 20 mg/dL or higher was associated with an odds ratio (OR) of 4.6 (95% confidence interval [CI], 2.5-8.3) for mortality. Any rise in BUN level at 24 hours was associated with an OR of 4.3 (95% CI, 2.3-7.9) for death. Accuracy of serial BUN measurement (AUC, 0.82-0.91) was comparable to that of the APACHE II score (AUC, 0.72-0.92) in each of the cohorts. A BUN-based assessment algorithm identified patients at increased risk for mortality during the initial 24 hours of hospitalization.
Conclusions: We have confirmed the accuracy of BUN measurement for early prediction of mortality in AP and developed an algorithm that may assist physicians in their early resuscitation efforts.
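A schematic reading of the two predictors reported in the Results, written as a simple Python flag; this is our sketch, not the authors' published algorithm, which contains additional steps.

    # Schematic flag built from the two reported predictors; NOT the authors'
    # full published algorithm.
    def elevated_early_mortality_risk(bun_admission_mg_dl, bun_24h_mg_dl):
        high_at_admission = bun_admission_mg_dl >= 20              # pooled OR ~4.6
        rising_at_24_hours = bun_24h_mg_dl > bun_admission_mg_dl   # any rise: OR ~4.3
        return high_at_admission or rising_at_24_hours

    print(elevated_early_mortality_risk(bun_admission_mg_dl=24, bun_24h_mg_dl=22))  # True
    print(elevated_early_mortality_risk(bun_admission_mg_dl=14, bun_24h_mg_dl=12))  # False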
18. A prospective, randomized trial of phenobarbital versus benzodiazepines for acute alcohol withdrawal
Hendey GW, et al. Amer J Emerg Med. 2011;29:382-385.
Objective
The aim of this study was to compare phenobarbital (PB) versus lorazepam (LZ) in the treatment of alcohol withdrawal in the emergency department (ED) and at 48 hours.
Methods
Consenting patients were prospectively randomized, assessed using a modified Clinical Institute Withdrawal Assessment (CIWA) score, and given intravenous PB (mean dose, 509 mg) or LZ (mean dose, 4.2 mg). At discharge, LZ patients received chlordiazepoxide (Librium), and PB patients received placebo.
Results
Of 44 patients, 25 received PB, and 19 received LZ. Both PB and LZ reduced CIWA scores from baseline to discharge (from 15.0 to 5.4 and from 16.8 to 4.2, respectively; P less than .0001). There were no differences between PB and LZ in baseline CIWA scores (P = .3), discharge scores (P = .4), ED length of stay (267 versus 256 minutes, P = .8), admissions (12% versus 16%, P = .8), or 48-hour follow-up CIWA scores (5.8 versus 7.2, P = .6).
Conclusion
Phenobarbital and LZ were similarly effective in the treatment of mild/moderate alcohol withdrawal in the ED and at 48 hours.
19. 72-Hr Returns May Not be a Good Indicator of Safety in the ED: A National Study
Pham JC, et al. Acad Emerg Med. 2011;18:390–397.
Objectives: The objective was to measure the association between returns to an emergency department (ED) within 72 hours and resource utilization, severity of illness, mortality, and admission rate.
Methods: This was a retrospective, cross-sectional analysis of ED visits using data from the National Hospital Ambulatory Medical Care Survey (NHAMCS) from 1998 to 2006. Cohorts were patients who had been seen in the ED within the past 72 hours versus those without the prior visit. A multivariate model was created to predict adjusted-resource utilization and mortality or admission rate.
Results: During the study period, there were 218,179 ED patient visits and a 3.2% 72-hour return rate. Patients with Medicare (3.5%) and without insurance (3.5%) were more likely to return within 72 hours. Visits associated with alcohol (4.1%), low triage acuity (4.0%), or dermatologic conditions (5.9%) were more likely to return. Seventy-two-hour return visits used fewer resources (5.0 [±0.1] vs. 5.5 [±0.1] tests, medications, procedures), were less likely to be Level I triage acuity (17% vs. 20%), and had a similar admission rate (13% vs. 13%) as those not seen within 72 hours. The sample size was too small to evaluate mortality.
Conclusions: Patients who return to the ED within 72 hours do not use more resources, are not more severely ill, and do not have a higher hospital admission rate than those who had not been previously seen. These findings do not support the use of 72-hour returns as a quality or safety indicator. A more refined variation, such as 72-hour returns resulting in admission, may have more value.
20. Even Short-Term NSAID Use Risky in Cardiac Patients
Megan Brooks. May 9, 2011 — In patients with prior myocardial infarction (MI), most nonsteroidal anti-inflammatory drugs (NSAIDs), even when taken for as little as 1 week, are associated with an increased risk for death and recurrent MI, new observational data indicate.
Use of NSAIDs was associated with a 45% increased risk for death or recurrent MI in the first 7 days of treatment and a 55% increased risk if treatment continued to 3 months. The findings were published online May 9 in Circulation.
"We found that short-term treatment with most NSAIDs was associated with increased and instantaneous cardiovascular risk," first author Anne-Marie Schjerning Olsen, MB, from Copenhagen University in Hellerup, Denmark, told Medscape Medical News.
"Our results indicate that there is no apparent safe therapeutic window for NSAIDs in patients with prior MI and challenge the current recommendations of low-dose and short-term use of NSAIDs as being safe," she said.
Results 'Completely Consistent' With 2007 AHA Advisory
In a 2007 scientific statement, the American Heart Association (AHA) advised clinicians about the risks of NSAID use among patients with known cardiovascular disease or those at risk for ischemic heart disease and provided a stepped-care approach for use of these agents in this patient population.
Asked to comment on the new study, Elliott Antman, MD, from Brigham and Women's Hospital and Harvard Medical School in Boston, Massachusetts, and lead author of the 2007 advisory, said, "Essentially, what this paper shows is that there is a gradient of risk among NSAIDs; some are associated with more risk than others; none appear to be completely safe, and the researchers could not identify a period that appeared to be safe, no matter how short."
"This is completely consistent with the advice that we put forward in 2007, which is to use the safest drug, in the lowest dose required to control musculoskeletal symptoms, for the shortest period of time," Dr. Antman told Medscape Medical News.
"We do have to be practical here," Dr. Antman said, "because despite very best efforts with physical therapy and nonpharmacologic treatments, there are individuals who have severe, debilitating arthritis, lupus, or rheumatoid arthritis and we do have to have a treatment plan for those patients." The AHA's stepped-care approach provides such a plan, Dr. Antman noted.
First Time-to-Event Analysis
Using the Danish National Patient Registry, Dr. Olsen and colleagues identified 83,675 patients who were admitted to a hospital with a first MI between 1997 and 2006 and were discharged alive. Their average age was 68 years, and 63% were men.
At least 1 prescription claim for NSAID treatment after discharge was identified for 35,405 patients (42.3%), most commonly ibuprofen (23%) and diclofenac (13.4%). The most commonly prescribed selective cyclooxygenase (COX)-2 inhibitors were rofecoxib (4.7%) and celecoxib (4.8%).
During the observation period, 35,257 deaths or recurrent MIs (42.1%) were registered in the database. According to the investigators, the risk for death or recurrent MI was elevated at the beginning of NSAID treatment (hazard ratio [HR], 1.45; 95% confidence interval [CI], 1.29 - 1.62) and the risk persisted throughout treatment (HR after 90 days, 1.55; 95% CI, 1.46 - 1.64).
All NSAIDs, except naproxen, were associated with an increased risk for death or recurrent MI, with diclofenac having the highest risk (HR in the first week of treatment, 3.26; 95% CI, 2.57 - 3.86).
"Particularly worrying," Dr. Olsen told Medscape Medical News, "was the fact that the widely used nonselective NSAID diclofenac was associated with early and higher cardiovascular risk than the selective COX-2 inhibitor rofecoxib, which was withdrawn from the market in 2004 due to its unfavorable cardiovascular risk profile."
"The accumulating evidence suggests that we must limit NSAID use to the absolute minimum in patients with established cardiovascular disease," she added.
"If NSAID therapy is necessary for patients with known cardiovascular disease, the doctors should choose a more selective COX-1 inhibitor in minimum dose (eg, naproxen ≤ 500 mg daily or ibuprofen ≤ 1200 mg daily) for the shortest period of time," she added.
Dr. Antman said this paper provides a "good reminder" for clinicians and patients about the risks of NSAIDs in this patient population.
"Many of the drugs that we are talking about," he noted, "can be obtained over-the-counter, and it is the presumption of many patients that if it is a drug that they can get over-the-counter it must be 'safer.' Very often they don't report those medications to their physician when they go for an office visit."
The authors and Dr. Antman have reported no relevant financial relationships.
Circulation. Published online May 9, 2011
21. Hospital-reported Data on the Pneumonia Quality Measure “Time to First Antibiotic Dose” Are Not Associated with Inpatient Mortality: Results of a Nationwide Cross-sectional Analysis
Quattromani E, et al. Acad Emerg Med. 2011;18:496–503.
Objectives: Significant controversy exists regarding the Centers for Medicare & Medicaid Services (CMS) “time to first antibiotic dose” (TFAD) quality measure. The objective of this study was to determine whether hospital performance on the TFAD measure for patients admitted from the emergency department (ED) for pneumonia is associated with decreased mortality.
Methods: This was a cross-sectional analysis of 95,704 adult ED admissions with a principal diagnosis of pneumonia from 530 hospitals in the 2007 Nationwide Inpatient Sample. The sample was merged with 2007 CMS Hospital Compare data, and hospitals were categorized into TFAD performance quartiles. Univariate association of TFAD performance with inpatient mortality was evaluated by chi-square test. A population-averaged logistic regression model was created with an exchangeable working correlation matrix of inpatient mortality adjusted for age, sex, comorbid conditions, weekend admission, payer status, income level, hospital size, hospital location, teaching status, and TFAD performance.
Results: Patients had a mean age of 69.3 years. In the adjusted analysis, increasing age was associated with increased mortality with odds ratios (ORs) above 2.3. Unadjusted inpatient mortality was 4.1% (95% confidence interval [CI] = 3.9% to 4.2%). Median time to death was 5 days (25th–75th interquartile range = 2–11). Mean TFAD quality performance was 77.7% across all hospitals (95% CI = 77.6% to 77.8%). The risk-adjusted OR of mortality was 0.89 (95% CI = 0.77 to 1.02) in the highest performing TFAD quartile, compared to the lowest performing TFAD quartile. The second highest performing quartile OR was 0.94 (95% CI = 0.82 to 1.08), and third highest performing quartile was 0.91 (95% CI = 0.79 to 1.05).
Conclusions: In this nationwide heterogeneous 2007 sample, there was no association between the publicly reported TFAD quality measure performance and pneumonia inpatient mortality.