Critical Care - Research Publications
Functional evaluation and practice survey to guide purchasing of intravenous cannulae
(BIOMED CENTRAL LTD, 2013-12-24)
BACKGROUND: There are wide variations in the physical designs and attributes of different brands of intravenous cannulae, which makes product selection and purchasing difficult. In a systematic assessment to guide purchasing, we assessed two cannulae - Cannula P and I. We proposed that the results of in-vitro performance testing of the cannulae would be associated with preference after clinical comparison. METHODS: We designed an observer-blinded randomised head-to-head trial between the 18, 20 and 22 gauge versions of Cannula P and I. Our primary end-point was pressure (mmHg) generated during various flow rates and our secondary end-point was the force (Newton) required to slide the catheter away from the needle. This was followed by a prospective electronic survey following a two-week clinical trial period. RESULTS: The mean difference in resistance between Cannula P and I was: 307 mmHg.L-1.hr-1 (95% CI: 289-325, p < 0.001) for 22G; 135 mmHg.L-1.hr-1 (95% CI: 125-144, p < 0.001) for 20G; and 27 mmHg.L-1.hr-1 (95% CI: 26-28, p < 0.001) for 18G. The mean difference in the force needed to displace the catheter away from its needle was: 1.41 N (95% CI: 1.09-1.73, p < 0.001) for 22G; 0.19 N (95% CI: -0.04 to 0.41, p = 0.12) for 20G; and 1.96 N (95% CI: 1.40-2.52, p < 0.001) for 18G. After the trial period, all 16 anaesthetists who had used both cannulae preferred Cannula I to P. CONCLUSIONS: The evaluation process described here could help hospitals make more efficient product selection and purchasing decisions for intravenous cannulae.
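The resistance figures above (mmHg.L-1.hr-1) express pressure generated per unit of flow. A minimal sketch of how such a resistance could be estimated from paired pressure-flow measurements, assuming a linear relationship through the origin; the function name and the measurement values are purely illustrative:

```python
# Sketch: estimate cannula resistance (mmHg per L/hr) as the
# least-squares slope of pressure against flow, constrained through
# the origin. All values below are hypothetical, not study data.
def resistance(flows_l_per_hr, pressures_mmhg):
    """Least-squares slope of pressure vs. flow, through the origin."""
    num = sum(f * p for f, p in zip(flows_l_per_hr, pressures_mmhg))
    den = sum(f * f for f in flows_l_per_hr)
    return num / den  # mmHg.L-1.hr-1

flows = [1.0, 2.0, 3.0]          # L/hr (hypothetical)
pressures = [30.0, 61.0, 89.0]   # mmHg (hypothetical)
print(resistance(flows, pressures))
```

Comparing this slope between two cannulae of the same gauge gives a mean difference of the kind reported in the RESULTS above.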
The microbiological and clinical outcome of guide wire exchanged versus newly inserted antimicrobial surface treated central venous catheters
(BIOMED CENTRAL LTD, 2013-01-01)
INTRODUCTION: The management of suspected central venous catheter (CVC)-related sepsis by guide wire exchange (GWX) is not recommended. However, GWX for new antimicrobial surface treated (AST) triple lumen CVCs has never been studied. We aimed to compare the microbiological outcome of triple lumen AST CVCs inserted by GWX (GWX-CVCs) with newly inserted triple lumen AST CVCs (NI-CVCs). METHODS: We studied a cohort of 145 consecutive patients with GWX-CVCs and a contemporaneous, site-matched control cohort of 163 patients with NI-CVCs in a tertiary intensive care unit (ICU). RESULTS: GWX-CVC and NI-CVC patients were similar for mean age (58.7 vs. 62.2 years), gender (88 (60.7%) vs. 98 (60.5%) male) and illness severity on admission (mean Acute Physiology and Chronic Health Evaluation (APACHE) III: 71.3 vs. 72.2). However, GWX patients had longer median ICU lengths of stay (12.2 vs. 4.4 days; P < 0.001) and median hospital lengths of stay (30.7 vs. 18.0 days; P < 0.001). There was no significant difference with regard to the number of CVC tips with bacterial or fungal pathogen colonization among GWX-CVCs vs. NI-CVCs (5 (2.5%) vs. 6 (7.4%); P = 0.90). Catheter-associated blood stream infection (CA-BSI) occurred in 2 (1.4%) GWX patients compared with 3 (1.8%) NI-CVC patients (P = 0.75). There was no significant difference in hospital mortality (35 (24.1%) vs. 48 (29.4%); P = 0.29). CONCLUSIONS: GWX-CVCs and NI-CVCs had similar rates of tip colonization at removal, CA-BSI and mortality. If the CVC removed by GWX is colonized, a new CVC must then be inserted at another site. In selected ICU patients at higher central vein puncture risk who receive AST CVCs, GWX may be an acceptable initial approach to line insertion.
Glycaemic control in Australia and New Zealand before and after the NICE-SUGAR trial: a translational study
(BIOMED CENTRAL LTD, 2013-01-01)
INTRODUCTION: There is no information on the uptake of Intensive Insulin Therapy (IIT) before the Normoglycemia in Intensive Care Evaluation and Surviving Using Glucose Algorithm Regulation (NICE-SUGAR) trial in Australia and New Zealand (ANZ) and on the bi-national response to the trial, yet such data would provide important information on the evolution of ANZ practice in this field. We aimed to study ANZ glycaemic control before and after the publication of the results of the NICE-SUGAR trial. METHODS: We analysed glucose control in critically ill patients across Australia and New Zealand during a two-year period before and after the publication of the NICE-SUGAR study. We used the mean first day glucose (Glu1) (a validated surrogate of ICU glucose control) to define practice. The implementation of an IIT protocol was presumed if the median of Glu1 measurements was <6.44 mmol/L for a given ICU. Hypoglycaemia was categorised as severe (glucose ≤2.2 mmol/L) or moderate (glucose ≤3.9 mmol/L). RESULTS: We studied 49 ICUs and 176,505 patients. No ICU practised IIT before or after NICE-SUGAR. Overall, Glu1 increased from 7.96 (2.95) mmol/L to 8.03 (2.92) mmol/L (P < 0.0001) after NICE-SUGAR. Similar increases were noted in all patient subgroups studied (surgical, medical, insulin dependent diabetes mellitus, ICU stay >48/<48 hours). The rates of severe and moderate hypoglycaemia before and after the NICE-SUGAR study were 0.59% vs. 0.55% (P = 0.33) and 6.62% vs. 5.68% (P < 0.0001), respectively. Both crude and adjusted mortalities declined over the study period. CONCLUSIONS: IIT had not been adopted in ANZ before the NICE-SUGAR study, and glycaemic control corresponded to that delivered in the control arm of the NICE-SUGAR trial. There were only minor changes in practice after the trial toward looser glycaemic control. Rates of moderate hypoglycaemia and mortality decreased alongside these changes.
Age of red blood cells and transfusion in critically ill patients
(SPRINGER HEIDELBERG, 2013-01-15)
Red blood cell (RBC) storage facilitates the supply of RBC to meet the clinical demand for transfusion and to avoid wastage. However, RBC storage is associated with adverse changes in erythrocytes and their preservation medium. These changes are responsible for functional alterations and for the accumulation of potentially injurious bioreactive substances. They also may have clinically harmful effects, especially in critically ill patients. The clinical consequences of storage lesions, however, remain a matter of persistent controversy. Multiple retrospective, observational, and single-center studies have reported heterogeneous and conflicting findings about the effect of blood storage duration on morbidity and/or mortality in trauma, cardiac surgery, and intensive care unit patients. Describing the details of this controversy, this review summarizes the current literature and highlights the equipoise that currently exists with regard to the use of short versus current standard (extended) storage duration red cells in critically ill patients. It also supports the need for large, randomized, controlled trials evaluating the clinical impact of transfusing fresh (short duration of storage) versus older (extended duration of storage) red cells in critically ill patients.
Diabetic status and the relation of the three domains of glycemic control to mortality in critically ill patients: an international multicenter cohort study
INTRODUCTION: Hyperglycemia, hypoglycemia, and increased glycemic variability have each been independently associated with increased risk of mortality in critically ill patients. The role of diabetic status on modulating the relation of these three domains of glycemic control with mortality remains uncertain. The purpose of this investigation was to determine how diabetic status affects the relation of hyperglycemia, hypoglycemia, and increased glycemic variability with the risk of mortality in critically ill patients. METHODS: This is a retrospective analysis of prospectively collected data involving 44,964 patients admitted to 23 intensive care units (ICUs) from nine countries, between February 2001 and May 2012. We analyzed mean blood glucose concentration (BG), coefficient of variation (CV), and minimal BG and created multivariable models to analyze their independent association with mortality. Patients were stratified according to the diagnosis of diabetes. RESULTS: Among patients without diabetes, mean BG bands between 80 and 140 mg/dl were independently associated with decreased risk of mortality, and mean BG bands ≥140 mg/dl with increased risk of mortality. Among patients with diabetes, mean BG from 80 to 110 mg/dl was associated with increased risk of mortality and mean BG from 110 to 180 mg/dl with decreased risk of mortality. An effect of center was noted on the relation between mean BG and mortality. Hypoglycemia, defined as minimum BG <70 mg/dl, was independently associated with increased risk of mortality among patients with and without diabetes, and increased glycemic variability, defined as CV ≥20%, was independently associated with increased risk of mortality only among patients without diabetes. Derangements of more than one domain of glycemic control had a cumulative association with mortality, especially for patients without diabetes.
CONCLUSIONS: Although hyperglycemia, hypoglycemia, and increased glycemic variability are each independently associated with mortality in critically ill patients, diabetic status modulates these relations in clinically important ways. Our findings suggest that patients with diabetes may benefit from higher glucose target ranges than those without diabetes. Additionally, hypoglycemia is independently associated with increased risk of mortality regardless of the patient's diabetic status, and increased glycemic variability is independently associated with increased risk of mortality among patients without diabetes.
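The three domains of glycemic control analyzed above (mean BG, coefficient of variation, and minimum BG) can each be computed directly from a patient's glucose readings. A minimal sketch, using the abstract's thresholds for hypoglycemia (minimum BG <70 mg/dl) and increased variability (CV ≥20%); the function name and the readings are illustrative only:

```python
# Sketch: derive the three glycemic-control domains from a series of
# blood-glucose readings (mg/dl). Thresholds follow the abstract;
# the sample readings are hypothetical.
from statistics import mean, pstdev

def glycemic_domains(bg_readings):
    """Return mean BG, coefficient of variation (%), and minimum BG."""
    mean_bg = mean(bg_readings)
    cv = 100 * pstdev(bg_readings) / mean_bg  # glycemic variability
    return mean_bg, cv, min(bg_readings)

readings = [95, 160, 210, 75, 130, 180]      # mg/dl (hypothetical)
mean_bg, cv, min_bg = glycemic_domains(readings)
hypoglycemia = min_bg < 70        # domain: hypoglycemia
high_variability = cv >= 20       # domain: increased variability
```

Note that the abstract's finding of a cumulative association refers to derangement in more than one of these domains simultaneously.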
Continuous beta-lactam infusion in critically ill patients: the clinical evidence
There is controversy over whether traditional intermittent bolus dosing or continuous infusion of beta-lactam antibiotics is preferable in critically ill patients. No significant difference between these two dosing strategies in terms of patient outcomes has been shown yet. This is despite compelling in vitro and in vivo pharmacokinetic/pharmacodynamic (PK/PD) data. A lack of significance in clinical outcome studies may be due to several methodological flaws potentially masking the benefits of continuous infusion observed in preclinical studies. In this review, we explore the methodological shortcomings of the published clinical studies and describe the criteria that should be considered for performing a definitive clinical trial. We found that most trials utilized inconsistent antibiotic doses and recruited only small numbers of heterogeneous patient groups. The results of these trials suggest that continuous infusion of beta-lactam antibiotics may have variable efficacy in different patient groups. Patients who may benefit from continuous infusion are critically ill patients with a high level of illness severity. Thus, future trials should test the potential clinical advantages of continuous infusion in this patient population. To further ascertain whether benefits of continuous infusion in critically ill patients do exist, a large-scale, prospective, multinational trial with a robust design is required.
Urine hepcidin has additive value in ruling out cardiopulmonary bypass-associated acute kidney injury: an observational cohort study
(BIOMED CENTRAL LTD, 2011-01-01)
INTRODUCTION: Conventional markers of acute kidney injury (AKI) lack diagnostic accuracy and are expressed only late after cardiac surgery with cardiopulmonary bypass (CPB). Recently, interest has focused on hepcidin, a regulator of iron homeostasis, as a unique renal biomarker. METHODS: We studied 100 adult patients in the control arm of a randomized, controlled trial http://www.clinicaltrials.gov/NCT00672334 who were identified as being at increased risk of AKI after cardiac surgery with CPB. AKI was defined according to the Risk, Injury, Failure, Loss, End-stage renal disease (RIFLE) classification. Samples of plasma and urine were obtained simultaneously (1) before CPB, (2) six hours after the start of CPB and (3) twenty-four hours after CPB. Plasma and urine hepcidin 25-isoforms were quantified by competitive enzyme-linked immunoassay. RESULTS: In AKI-free patients (N = 91), urine hepcidin concentrations had increased markedly at six and twenty-four hours after CPB, and they were three to seven times higher compared to patients with subsequent AKI (N = 9), in whom postoperative urine hepcidin remained at preoperative levels (P = 0.004, P = 0.002). Furthermore, higher urine hepcidin and, even more so, urine hepcidin adjusted to urine creatinine at six hours after CPB discriminated patients who did not develop AKI (area under the curve (AUC) receiver operating characteristic curve 0.80 [95% confidence interval (95% CI) 0.71 to 0.87] and 0.88 [95% CI 0.78 to 0.97]) or did not need renal replacement therapy initiation (AUC 0.81 [95% CI 0.72 to 0.88] and 0.88 [95% CI 0.70 to 0.99]) from those who did. At six hours, urine hepcidin adjusted to urine creatinine was an independent predictor of ruling out AKI (P = 0.011). Plasma hepcidin did not predict the absence of AKI. The study findings remained essentially unchanged after excluding patients with preoperative chronic kidney disease.
CONCLUSIONS: Our findings suggest that urine hepcidin is an early predictive biomarker of ruling out AKI after CPB, thereby contributing to early patient risk stratification.
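The AUC values quoted above measure how well a biomarker separates two outcome groups: an AUC of 0.88 means a randomly chosen AKI-free patient has higher adjusted urine hepcidin than a randomly chosen AKI patient 88% of the time. A minimal rank-based (Mann-Whitney) sketch of that computation; the marker values below are purely illustrative, not study data:

```python
# Sketch: rank-based AUC — the probability that a randomly chosen
# member of the "positive" group scores above a randomly chosen
# member of the "negative" group (ties count half).
def auc(positives, negatives):
    """Mann-Whitney estimate of the ROC area under the curve."""
    wins = 0.0
    for p in positives:
        for n in negatives:
            if p > n:
                wins += 1
            elif p == n:
                wins += 0.5
    return wins / (len(positives) * len(negatives))

no_aki = [9.1, 7.4, 8.8, 6.9, 7.7]   # hypothetical marker values, AKI-free
aki = [2.1, 3.4, 7.5]                # hypothetical marker values, AKI
print(auc(no_aki, aki))
```

An AUC of 0.5 would indicate no discrimination; values approaching 1.0 indicate increasingly reliable separation of the groups.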
Near infrared spectroscopy (NIRS) of the thenar eminence in anesthesia and intensive care
(SPRINGER HEIDELBERG, 2012-01-01)
Near infrared spectroscopy of the thenar eminence (NIRSth) is a noninvasive bedside method for assessing tissue oxygenation. The NIRS probe emits light with several wavelengths in the 700- to 850-nm interval and measures the reflected light mainly from a predefined depth. Complex physical models then allow the measurement of the relative concentrations of oxy- and deoxyhemoglobin, and thus tissue saturation (StO2), as well as an approximation of the tissue hemoglobin, given as tissue hemoglobin index. Here we review current knowledge of the application of NIRSth in anesthesia and intensive care. We performed an analytical and descriptive review of the literature using the terms "near-infrared spectroscopy" combined with "anesthesia," "anesthesiology," "intensive care," "critical care," "sepsis," "bleeding," "hemorrhage," "surgery," and "trauma" with particular focus on all NIRS studies involving measurement at the thenar eminence. We found that NIRSth has been applied as a clinical research tool to perform both static and dynamic assessment of StO2. Specifically, a vascular occlusion test (VOT) with a pressure cuff can be used to provide a dynamic assessment of the tissue oxygenation response to ischemia. StO2 changes during such induced ischemia-reperfusion yield information on oxygen consumption and microvascular reactivity. Some evidence suggests that StO2 during VOT can detect fluid responsiveness during surgery. In hypovolemic shock, StO2 can help to predict outcome, but not in septic shock. In contrast, NIRS parameters during VOT increase the diagnostic and prognostic accuracy in both hypovolemic and septic shock. Minimal data are available on static or dynamic StO2 used to guide therapy. Although the available data are promising, further studies are necessary before NIRSth can become part of routine clinical practice.
Acquired bloodstream infection in the intensive care unit: incidence and attributable mortality
INTRODUCTION: To estimate the incidence of intensive care unit (ICU)-acquired bloodstream infection (BSI) and its independent effect on hospital mortality. METHODS: We retrospectively studied acquisition of BSI during admissions of >72 hours to adult ICUs from two university-affiliated hospitals. We obtained demographics, illness severity and co-morbidity data from ICU databases and microbiological diagnoses from departmental electronic records. We assessed survival at hospital discharge or at 90 days if still hospitalized. RESULTS: We identified 6339 ICU admissions, 330 of which were complicated by BSI (5.2%). Median time to first positive culture was 7 days (IQR 5-12). Overall mortality was 23.5%: 41.2% in patients with BSI and 22.5% in those without. Patients who developed BSI had higher illness severity at ICU admission (median APACHE III score: 79 vs. 68, P < 0.001). After controlling for illness severity and baseline demographics by Cox proportional-hazard model, BSI remained independently associated with risk of death (hazard ratio from diagnosis 2.89; 95% confidence interval 2.41-3.46; P < 0.001). However, only 5% of the deaths in this model could be attributed to acquired BSI, equivalent to an absolute decrease in survival of 1% of the total population. When analyzed by microbiological classification, Candida, Staphylococcus aureus and gram-negative bacilli infections were independently associated with increased risk of death. In a sub-group analysis, intravascular catheter-associated BSI remained associated with significant risk of death (hazard ratio 2.64; 95% confidence interval 1.44-4.83; P = 0.002). CONCLUSIONS: ICU-acquired BSI is associated with greater in-hospital mortality, but complicates only 5% of ICU admissions and its absolute effect on population mortality is limited. These findings have implications for the design and interpretation of clinical trials.
Resuscitation fluid use in critically ill adults: an international cross-sectional study in 391 intensive care units
(BIOMED CENTRAL LTD, 2010-01-01)
INTRODUCTION: Recent evidence suggests that choice of fluid used for resuscitation may influence mortality in critically ill patients. METHODS: We conducted a cross-sectional study in 391 intensive care units across 25 countries to describe the types of fluids administered during resuscitation episodes. We used generalized estimating equations to examine the association between patient, prescriber and geographic factors and the type of fluid administered (classified as crystalloid, colloid or blood products). RESULTS: During the 24-hour study period, 1,955 of 5,274 (37.1%) patients received resuscitation fluid during 4,488 resuscitation episodes. The main indications for administering crystalloid or colloid were impaired perfusion (1,526/3,419 (44.6%) of episodes), or to correct abnormal vital signs (1,189/3,419 (34.8%)). Overall, colloid was administered to more patients (1,234 (23.4%) versus 782 (14.8%)) and during more episodes (2,173 (48.4%) versus 1,468 (32.7%)) than crystalloid. After adjusting for patient and prescriber characteristics, practice varied significantly between countries with country being a strong independent determinant of the type of fluid prescribed. Compared to Canada where crystalloid, colloid and blood products were administered in 35.5%, 40.6% and 28.3% of resuscitation episodes respectively, odds ratios for the prescription of crystalloid in China, Great Britain and New Zealand were 0.46 (95% confidence interval (CI) 0.30 to 0.69), 0.18 (0.10 to 0.32) and 3.43 (1.71 to 6.84) respectively; odds ratios for the prescription of colloid in China, Great Britain and New Zealand were 1.72 (1.20 to 2.47), 4.72 (2.99 to 7.44) and 0.39 (0.21 to 0.74) respectively. In contrast, choice of fluid was not influenced by measures of illness severity (for example, Acute Physiology and Chronic Health Evaluation (APACHE) II score). 
CONCLUSIONS: Administration of resuscitation fluid is a common intervention in intensive care units and choice of fluid varies markedly between countries. Although colloid solutions are more expensive and may possibly be harmful in some patients, they were administered to more patients and during more resuscitation episodes than crystalloids were.
Hepatorenal syndrome: the 8th international consensus conference of the Acute Dialysis Quality Initiative (ADQI) Group
INTRODUCTION: Renal dysfunction is a common complication in patients with end-stage cirrhosis. Since the original publication of the definition and diagnostic criteria for the hepatorenal syndrome (HRS), there have been major advances in our understanding of its pathogenesis. The prognosis of patients with cirrhosis who develop HRS remains poor, with a median survival without liver transplantation of less than six months. However, a number of pharmacological and other therapeutic strategies have now become available which offer the ability to prevent or treat renal dysfunction more effectively in this setting. Accordingly, we sought to review the available evidence, make recommendations and delineate key questions for future studies. METHODS: We undertook a systematic review of the literature using Medline, PubMed and Web of Science, data provided by the Scientific Registry of Transplant Recipients and the bibliographies of key reviews. We determined a list of key questions and convened a two-day consensus conference to develop summary statements via a series of alternating breakout and plenary sessions. In these sessions, we identified supporting evidence and generated recommendations and/or directions for future research. RESULTS: Of the 30 questions considered, we found inadequate evidence for the majority of questions and our recommendations were mainly based on expert opinion. There was insufficient evidence to grade three questions, but we were able to develop a consensus definition for acute kidney injury in patients with cirrhosis and provide consensus recommendations for future investigations to address key areas of uncertainty. CONCLUSIONS: Despite a paucity of sufficiently powered prospectively randomized trials, we were able to establish an evidence-based appraisal of this field and develop a set of consensus recommendations to standardize care and direct further research for patients with cirrhosis and renal dysfunction.
Erythropoietin (EPO) in acute kidney injury
(SPRINGER HEIDELBERG, 2011-01-01)
Erythropoietin (EPO) is a 30.4 kDa glycoprotein produced by the kidney, and is mostly well-known for its physiological function in regulating red blood cell production in the bone marrow. Accumulating evidence, however, suggests that EPO has additional organ protective effects, which may be useful in the prevention or treatment of acute kidney injury. These protective mechanisms are multifactorial in nature and include inhibition of apoptotic cell death, stimulation of cellular regeneration, inhibition of deleterious pathways, and promotion of recovery. In this article, we review the physiology of EPO, assess previous work that supports the role of EPO as a general tissue protective agent, and explain the mechanisms by which it may achieve this tissue protective effect. We then focus on experimental and clinical data that suggest that EPO has a kidney protective effect.