Early resuscitation protocols have been shown to reduce sepsis mortality. However, these studies have been conducted primarily in high-income, high-resource countries, with little research in low- and middle-income countries. In this randomized controlled trial, 212 adults with sepsis (suspected infection plus ≥2 systemic inflammatory response syndrome criteria) and hypotension (systolic blood pressure ≤90 mm Hg or mean arterial pressure ≤65 mm Hg) presenting to the emergency department (ED) of a large referral hospital in Zambia were randomized to either an early resuscitation protocol for sepsis or usual care to determine whether an early resuscitation protocol decreases mortality. The early resuscitation protocol included intravenous (IV) fluid bolus administration with monitoring of jugular venous pressure, respiratory rate, and arterial oxygen saturation, as well as treatment with vasopressors targeting a mean arterial pressure ≥65 mm Hg and blood transfusion for patients with a hemoglobin level <7 g/dL, while the treating physician determined hemodynamic management for patients randomized to usual care. Notably, 89.5% of patients were HIV positive. Researchers found that in-hospital mortality occurred in 51 of 106 patients (48.1%) in the sepsis protocol group compared to 34 of 103 patients (33.0%) in the usual care group (between-group difference 15.1%, 95% CI 2.0% to 28.3%; RR 1.46, 95% CI 1.04 to 2.05, p=0.03). Researchers also found that within 6 hours of initial ED presentation, patients in the intervention group received a median of 3.5 L of IV fluids versus 2.0 L in the usual care group, and that 14.2% of patients in the sepsis protocol group received vasopressors versus 1.9% in the usual care group. This study therefore shows that an early resuscitation sepsis protocol increased, rather than decreased, in-hospital mortality, along with greater use of IV fluids and vasopressors.
As such, there is an ongoing need for further critical care research in low-resource settings.
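The headline effect sizes above follow directly from the raw event counts. A minimal sketch in Python (illustrative only, not the study's analysis code; it omits the confidence intervals, which require the trial's full statistical model):

```python
# Sanity-check of the reported effect sizes from the raw counts in the summary:
# 51/106 deaths in the sepsis-protocol arm vs 34/103 deaths with usual care.
protocol_deaths, protocol_n = 51, 106
usual_deaths, usual_n = 34, 103

risk_protocol = protocol_deaths / protocol_n   # in-hospital mortality, protocol arm
risk_usual = usual_deaths / usual_n            # in-hospital mortality, usual care

risk_difference = risk_protocol - risk_usual   # absolute between-group difference
relative_risk = risk_protocol / risk_usual     # relative risk

print(f"Mortality: {risk_protocol:.1%} vs {risk_usual:.1%}")
print(f"Risk difference: {risk_difference:.1%}")  # matches the reported 15.1%
print(f"Relative risk: {relative_risk:.2f}")      # matches the reported 1.46
```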
Acute heart failure (AHF) is a frequent cause of hospitalization, with 90% of affected patients seeking care in the emergency department (ED). AHF patients pose a challenge to ED physicians, in part due to the lack of a dependable clinical decision tool to guide management. In this prospective cohort study, the Epidemiology of Acute Heart Failure in Emergency Departments (EAHFE) registry collected information on patients with a final AHF diagnosis at 34 Spanish EDs, and sought to predict mortality based on information available at ED admission. In the derivation cohort of 4867 AHF patients admitted from the ED between 2009 and 2011, researchers created a logistic regression model and identified 13 independent predictors of mortality: Barthel index score at admission, systolic blood pressure, age, N-terminal pro–B-type natriuretic peptide level, potassium level, positive troponin level, New York Heart Association class IV disease at admission, respiratory rate, low-output symptoms, oxygen saturation, episode associated with acute coronary syndrome, hypertrophy on electrocardiography, and creatinine level. The validation cohort included 3229 AHF patients admitted from the ED during 2014. The resulting MEESSI-AHF score predicted 30-day mortality with good discrimination and calibration, especially in patients at very high risk (10% of patients, 45% risk of death at 30 days) or low risk (40% of patients, <2% risk). While this study has certain limitations, including missing data and the subjectivity of some risk factor measurements, the resulting tool may be valuable in guiding treatment and management of AHF patients in the ED.
Colorectal cancer (CRC) remains the second leading cause of cancer-related deaths in the United States. Screening can help decrease CRC incidence and mortality; however, the benefit of surveillance by colonoscopy is unclear. In this population-based retrospective cohort study, researchers conducted a microsimulation using the Adenoma and Serrated pathway to Colorectal Cancer (ASCCA) model to compare fecal immunochemical testing (FIT) screening alone with FIT screening plus colonoscopy surveillance in terms of cost-effectiveness. Study participants were chosen from the Dutch CRC screening program and consisted of asymptomatic individuals aged 55 to 75 years without a previous CRC diagnosis. The authors also examined the effect of extending colonoscopy surveillance intervals. They found that FIT screening alone reduced CRC mortality by 50.4%. While adding colonoscopy surveillance increased this reduction to 52.1%, the lifetime demand for colonoscopy rose substantially, from 335 to 543 colonoscopies per 100 persons (a 62% increase). The added colonoscopies came at an additional cost of €68 000 for a gain of 0.9 life-years. Based on an incremental analysis, the incremental cost-effectiveness ratio (ICER) for screening plus surveillance exceeded the Dutch willingness-to-pay threshold of €36 602 per life-year gained. However, the authors found that extending the colonoscopy interval to 5 years would decrease demand by 41.7% while still providing a beneficial CRC mortality reduction of 51.8%. Overall, the results of this study suggest that adding surveillance to FIT screening is not cost-effective by the Dutch ICER threshold and is associated with substantial increases in colonoscopy demand. However, extending surveillance intervals to 5 years may decrease colonoscopy demand without compromising overall effectiveness.
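The cost-effectiveness conclusion follows from simple ICER arithmetic on the figures reported above. A back-of-envelope check in Python (an illustrative sketch, not the ASCCA microsimulation itself):

```python
# ICER = incremental cost / incremental effectiveness, using the summary's figures.
extra_cost_eur = 68_000      # additional lifetime cost of adding surveillance
extra_life_years = 0.9       # incremental life-years gained per 100 persons
wtp_threshold_eur = 36_602   # Dutch willingness-to-pay per life-year gained

icer = extra_cost_eur / extra_life_years  # ≈ €75,600 per life-year gained

print(f"ICER: €{icer:,.0f} per life-year gained")
print("Cost-effective" if icer <= wtp_threshold_eur else "Exceeds WTP threshold")
```

Since roughly €75,600 per life-year is about double the €36,602 threshold, the surveillance strategy is not cost-effective under the Dutch criterion, consistent with the authors' conclusion.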
In patients with non-valvular atrial fibrillation (AFib), the use of non-vitamin K oral anticoagulants (NOACs) has become more common. NOACs share metabolic pathways with many other commonly prescribed medications, which may increase the risk of bleeding. In this retrospective cohort study, researchers followed 91,330 patients with nonvalvular AFib from the Taiwan National Health Insurance database who received at least one NOAC prescription (dabigatran, rivaroxaban, or apixaban) to assess the association between NOAC use, with and without concurrent medications, and the risk of major bleeding. Exposures of interest included NOAC use with or without concurrent atorvastatin, digoxin, verapamil, diltiazem, amiodarone, fluconazole, ketoconazole, itraconazole, voriconazole, posaconazole, cyclosporine, erythromycin, clarithromycin, dronedarone, rifampin, or phenytoin. Researchers found that 4770 major bleeding events occurred during the 447,037 person-quarters with NOAC prescriptions. The medications most commonly co-prescribed with NOACs were atorvastatin (27.6%), diltiazem (22.7%), digoxin (22.5%), and amiodarone (21.1%). A significant increase in the adjusted incidence rate of major bleeding per 1000 person-years was observed with concurrent use of amiodarone (difference 13.94, 99% CI 9.76 to 18.13), fluconazole (difference 138.46, 99% CI 80.96 to 195.97), rifampin (difference 36.90, 99% CI 1.59 to 72.22), or phenytoin (difference 52.31, 99% CI 32.18 to 72.44) compared to NOACs alone. Compared with NOAC use alone, the adjusted incidence rate of major bleeding was significantly lower with concurrent use of atorvastatin, digoxin, and erythromycin or clarithromycin, but not significantly different with concurrent use of verapamil; diltiazem; cyclosporine; ketoconazole, itraconazole, voriconazole, or posaconazole; and dronedarone.
Limitations include the absence of edoxaban, which was not approved in Taiwan until after 2016, and the lack of renal and liver function data, as these factors may affect drug-drug interactions, bleeding risk, and medication dosing. Overall, physicians should use caution when prescribing NOACs concurrently with amiodarone, fluconazole, rifampin, or phenytoin, as these combinations are associated with increased major bleeding risk.
In Oregon, a state known for high rates of vaccine hesitancy, concerns arose when the Advisory Committee on Immunization Practices withdrew its recommendation encouraging use of the live attenuated influenza vaccine (LAIV). Researchers feared that needle-averse and vaccine-hesitant families who preferred the intranasal LAIV would forgo influenza vaccination altogether. In this retrospective cohort study, researchers used data from Oregon’s statewide immunization registry, the ALERT Immunization Information System (IIS), which includes more than 55 million immunization records, to review vaccination rates in children ages 2-17 over the last 5 influenza seasons. They then completed a matched-cohort study comparing children who received the LAIV with those who received the inactivated influenza vaccine (IIV) in the 2015-2016 season, analyzing rates of IIV administration in each group during the 2016-2017 season, after withdrawal of the preferential LAIV recommendation. Over the five seasons, researchers observed a stable vaccination rate, ranging from 32.6% to 35.2%, with the 2015-2016 and 2016-2017 seasons equal at 32.6%. The matched-cohort portion of the study revealed that 56.4% of children who received the IIV in the 2015-2016 season received the IIV again during the 2016-2017 season, while 53.1% of those who received the LAIV in 2015-2016 received the IIV in 2016-2017, making children with previous IIV receipt 1.05 times as likely to receive the vaccine again (95% CI 1.04 to 1.06). When 15 individual age groups were assessed within the larger cohort, 6 showed no significant difference in 2016-2017 IIV administration between children with previous IIV receipt and those who had previously received the LAIV. This study's generalizability is limited to settings with high rates of vaccine hesitancy, but it suggests that the form of vaccine administration is less critical to vaccination rates than providers may have previously assumed.
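The small size of the revaccination gap can be seen from the crude ratio of the two proportions. An illustrative Python check (note this unadjusted ratio comes out near, but not identical to, the study's matched estimate of 1.05, which accounted for the cohort matching):

```python
# Crude ratio of revaccination proportions reported in the summary (illustrative;
# the study's matched analysis reported RR 1.05, 95% CI 1.04 to 1.06).
revacc_prior_iiv = 0.564   # prior-IIV children who received IIV again in 2016-2017
revacc_prior_laiv = 0.531  # prior-LAIV children who received IIV in 2016-2017

crude_ratio = revacc_prior_iiv / revacc_prior_laiv
print(f"Crude ratio: {crude_ratio:.2f}")  # ≈ 1.06, close to the matched RR of 1.05
```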
©2017 2 Minute Medicine, Inc. All rights reserved. No works may be reproduced without expressed written consent from 2 Minute Medicine, Inc. Inquire about licensing here. No article should be construed as medical advice and is not intended as such by the authors or by 2 Minute Medicine, Inc.