Saturday, April 30, 2016

Non cardiac surgery in patients with aortic stenosis


Advances in perioperative management and the availability of transcutaneous techniques have improved the outlook for patients but have made decision making more complex. The topic is reviewed in this recent paper. Here is a key passage:

Emergency noncardiac surgery (NCS) obviously needs to be performed without consideration of the AS; these patients are at the highest risk of perioperative morbidity and mortality. Aortic balloon valvuloplasty (ABV) can be considered in patients needing urgent noncardiac surgery; transcatheter aortic valve replacement (TAVR) is an alternative, but the necessary assessment of vascular access and LVOT sizing cannot usually be performed in due time. Asymptomatic patients can in general proceed with elective noncardiac surgery; however, surgical aortic valve replacement (SAVR) or TAVR should be considered before high-risk surgical interventions, or in patients with revised cardiac risk index (RCRI) greater than or equal to 2. Symptomatic patients should in general undergo TAVR or SAVR before noncardiac surgery, unless the need for antithrombotic therapy required after TAVR or SAVR unduly delays or increases the risk of noncardiac surgery, or when the noncardiac surgery could decrease the risk of anticipated SAVR or TAVR for severe symptomatic aortic stenosis. Concomitant SAVR and noncardiac surgery can also be considered in selected patients (see the text for details).
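As a concrete aside, the revised cardiac risk index invoked in the passage is simply a count of six dichotomous items (high-risk surgery, ischemic heart disease, heart failure, cerebrovascular disease, insulin-treated diabetes, and creatinine above 2 mg/dL). A minimal sketch; the helper function and its boolean inputs are my own illustration, not from the paper:

```python
# Revised Cardiac Risk Index (Lee criteria): one point per risk factor present.
# Hypothetical helper for illustration; not from the paper under discussion.
def rcri(high_risk_surgery, ischemic_heart_disease, heart_failure,
         cerebrovascular_disease, insulin_treated_diabetes,
         creatinine_over_2_mg_dl):
    return sum([high_risk_surgery, ischemic_heart_disease, heart_failure,
                cerebrovascular_disease, insulin_treated_diabetes,
                creatinine_over_2_mg_dl])

# Per the passage, a score of 2 or more is one of the triggers for
# considering SAVR or TAVR before elective noncardiac surgery.
score = rcri(True, True, False, False, False, False)
print(score, score >= 2)  # 2 True
```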

The full text of the article is recommended.


Friday, April 29, 2016

Arrhythmic versus asphyxial cardiac arrest


We are comfortable thinking of cardiac arrest as one entity. That thinking is simplistic and flawed. A recent review article highlights the differences between two major categories of arrest. First, some definitions. Arrhythmic cardiac arrest is primary cardiac arrest. It is caused by structural, electrical (channelopathy) or metabolic (eg electrolyte disturbance) disorders, and the presenting rhythm is usually (though not always) VF or pulseless VT. Asphyxial arrest is the “respiratory code,” which occurs as a result of respiratory failure and consequent hypoxemia or hypercapnia. VF may occur but it is almost never the presenting rhythm. These are the two main causes of arrest. A third category, cardiac arrest as the end result of progressive circulatory shock, was not covered in the review.

The following sections from the body of the paper highlight key points:

Asphyxial CA is characterized by a prolonged time course and an important prearrest period where hypoxia (defined as critical reduction in arterial oxygen saturation or arterial oxygen tension), and hypercapnia (defined as increases in arterial carbon dioxide tension), progressively advance along with maintained but gradually deteriorating cardiopulmonary function...

Contrary to asphyxial, dysrhythmic CA leads to sudden and complete cessation of blood flow...

Although VF is a lethal tachyarrhythmia often associated with underlying cardiac disturbances and considered to be the immediate cause of CA, it can also occur during the asphyxial process. Ventricular fibrillation in this setting is uncommon, but not rare [15] . Asphyxia-induced or secondary VF has different underlying pathophysiologic mechanisms with regard to myocardial bioenergetics and electrophysiology...

The conversion of PEA and nonshockable rhythms to shockable during asphyxia is an interesting phenomenon and it seems that outcomes after asphyxial CA with asystole/PEA with subsequent VF are worse than after asystole/PEA without subsequent VF [20] . This is probably attributed to the fact that subsequent VF might be a marker of more severe myocardial dysfunction...

At the cellular level, sudden CA of cardiac origin causes an immediate no-flow state with global ischemia, where high-energy phosphates are depleted rapidly. Especially in the brain, adenosine triphosphate (ATP) depletion is thought to occur within a few minutes [23]. On the contrary, asphyxial CA is characterized by progressive and global hypoxia with incomplete ischemia, and results in ATP and phosphocreatine reduction that progresses gradually with the length of asphyxia. If ATP is depleted during hypoxia, necrosis occurs because of mitochondrial transmembrane potential disruption, leading to cell swelling and ultimately to apoptosis and necrosis [24,25]. Depletion of cellular energy initiates biochemical cascades that lead to cell damage and death prior to the no-flow state...

Finally, maintained cardiovascular function during asphyxia prior to cardiac standstill results in CO2 tissue production and accumulation in the alveoli, as there is no alveolar gas exchange. There are at least 5 laboratory studies that showed different patterns of end-tidal carbon dioxide (etCO2) levels during cardiopulmonary resuscitation (CPR) between the two types of arrest, which may have a pathophysiologic role. In particular, organ perfusion with hypoxemic blood during asphyxia prior to complete circulatory collapse may contribute to a different degree of reperfusion injury after ROSC compared with sudden dysrhythmic CA, affecting overall prognosis...

Although both asphyxial and dysrhythmic CAs lead to brain damage through global ischemia, it seems that significant histopathologic differences exist between the 2 conditions...

In summary, all available data support the assumption that the ischemic degree and final brain damage are greater and more severe after asphyxial CA than after dysrhythmic CA...

Myocardial dysfunction after resuscitated CA is a well-recognized and described component of the post-CA syndrome...

As for treatment implications based on the type of cardiac arrest, the authors suggest a traditional guideline-based approach for asphyxial arrest versus cardiocerebral resuscitation, as originally promulgated by the Arizona investigators, for arrhythmic arrest. Post-arrest hypothermia is recommended for both forms of arrest, although it is more firmly established for arrhythmic arrest.

Tuesday, April 26, 2016

Advances in the treatment of acute liver failure


From a review:

Recent findings: As the treatment of ALF has evolved, there is an increasing recognition regarding the risk of intracranial hypertension related to advanced hepatic encephalopathy. Therefore, there is an enhanced emphasis on neuromonitoring and therapies targeting intracranial hypertension. Also, new evidence implicates systemic proinflammatory cytokines as an etiology for the development of multiorgan system dysfunction in ALF; the recent finding of a survival benefit in ALF with high-volume plasmapheresis further supports this theory.

Summary: Advances in the critical care management of ALF have translated to a substantial decrease in mortality related to this disease process. The extrapolation of therapies from general neurocritical care to the treatment of ALF-induced intracranial hypertension has resulted in improved neurologic outcomes. In addition, recognition of the systemic inflammatory response and multiorgan dysfunction in ALF has guided current treatment recommendations, and will provide avenues for future research endeavors. With respect to extracorporeal liver support systems, further randomized studies are required to assess their efficacy in ALF, with attention to nonsurvival end points such as bridging to liver transplantation.

Monday, April 25, 2016

Spironolactone superior as an add-on drug for resistant hypertension



Background

Optimal drug treatment for patients with resistant hypertension is undefined. We aimed to test the hypotheses that resistant hypertension is most often caused by excessive sodium retention, and that spironolactone would therefore be superior to non-diuretic add-on drugs at lowering blood pressure.

Methods

In this double-blind, placebo-controlled, crossover trial, we enrolled patients aged 18–79 years with seated clinic systolic blood pressure 140 mm Hg or greater (or greater than or equal to 135 mm Hg for patients with diabetes) and home systolic blood pressure (18 readings over 4 days) 130 mm Hg or greater, despite treatment for at least 3 months with maximally tolerated doses of three drugs, from 12 secondary and two primary care sites in the UK. Patients rotated, in a preassigned, randomised order, through 12 weeks of once daily treatment with each of spironolactone (25–50 mg), bisoprolol (5–10 mg), doxazosin modified release (4–8 mg), and placebo, in addition to their baseline blood pressure drugs. Random assignment was done via a central computer system. Investigators and patients were masked to the identity of drugs, and to their sequence allocation. The dose was doubled after 6 weeks of each cycle. The hierarchical primary endpoints were the difference in averaged home systolic blood pressure between spironolactone and placebo, followed (if significant) by the difference in home systolic blood pressure between spironolactone and the average of the other two active drugs, followed by the difference in home systolic blood pressure between spironolactone and each of the other two drugs. Analysis was by intention to treat. The trial is registered with EudraCT number 2008-007149-30 and ClinicalTrials.gov number NCT02369081.

Findings

Between May 15, 2009, and July 8, 2014, we screened 436 patients, of whom 335 were randomly assigned. After 21 were excluded, 285 patients received spironolactone, 282 doxazosin, 285 bisoprolol, and 274 placebo; 230 patients completed all treatment cycles. The average reduction in home systolic blood pressure by spironolactone was superior to placebo (–8·70 mm Hg [95% CI −9·72 to −7·69]; p less than 0·0001), superior to the mean of the other two active treatments (doxazosin and bisoprolol; −4·26 [–5·13 to −3·38]; p less than 0·0001), and superior when compared with the individual treatments: versus doxazosin (–4·03 [–5·04 to −3·02]; p less than 0·0001) and versus bisoprolol (p less than 0·0001). Spironolactone was the most effective blood pressure-lowering treatment throughout the distribution of baseline plasma renin, but its margin of superiority and likelihood of being the best drug for the individual patient were many-fold greater in the lower than the higher ends of the distribution. All treatments were well tolerated. In six of the 285 patients who received spironolactone, serum potassium exceeded 6·0 mmol/L on one occasion.

Interpretation

Spironolactone was the most effective add-on drug for the treatment of resistant hypertension. The superiority of spironolactone supports a primary role of sodium retention in this condition.


Sunday, April 24, 2016

Fitness level and a fib risk: finding the sweet spot


From a recent study:

Methods

CRF, as assessed by maximal oxygen uptake (VO2max) during exercise testing, was measured at baseline in 1950 middle-aged men (mean age 52.6 years, SD 5.1) from the Kuopio Ischaemic Heart Disease (KIHD) study.

Results

During average follow-up of 19.5 years, there were 305 incident AF cases (annual AF rate of 65.1/1000 person-years, 95% confidence interval [CI] 58.2–72.8). Overall, a nonlinear association was observed between CRF and incident AF. The rate of incident AF varied from 11.5 (95% CI 9.4–14.0) for the first quartile of CRF, to 9.1 (95% CI 7.4–11.2) for the second quartile, 5.7 (95% CI 4.4–7.4) for the third quartile, and 6.3 (95% CI 5.0–8.0) for the fourth quartile. Age-adjusted hazard ratio comparing top vs bottom fourth of usual CRF levels was 0.67 (95% CI 0.48–0.95), attenuated to 0.98 (95% CI 0.66–1.43) upon further adjustment for risk factors. These findings were comparable across age, body mass index, history of smoking, diabetes, and cardiovascular disease status at baseline.

Conclusion

Improved fitness as indicated by higher levels of CRF is protective against AF within a certain range, beyond which the risk of AF rises again. These findings warrant further replication.


Saturday, April 23, 2016

Augmented renal clearance of antibiotics in critically ill patients


From a recent review:

Highlights



Augmented renal clearance (ARC) is a prevalent condition in the critically ill.


ARC may result in sub-therapeutic exposure of renally eliminated antibiotics.


Beta-lactams are particularly affected due to their pharmacokinetic and pharmacodynamic characteristics.


Dose optimization is necessary to circumvent the influence of ARC.


Therapeutic drug monitoring may be necessary to guide dose optimization.


Dose optimization might consist, in the case of beta-lactam antibiotics and their congeners, of extended or continuous infusion dosing. Although ARC is defined by a creatinine clearance greater than 130 mL/min/1.73 m2, routine clinical estimates may not be reliable. Moreover, enhanced tubular secretion or diminished reabsorption may account for ARC of some antibiotics. Less severely ill patients tend to be at greater risk for ARC.
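As a back-of-the-envelope illustration (my own, not from the review) of why young, hyperdynamic patients can clear the ARC threshold, here is the familiar Cockcroft-Gault estimate. Note that it is not indexed to 1.73 m2 of body surface area, one reason bedside estimates and the indexed threshold do not line up cleanly:

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female):
    """Estimated creatinine clearance (mL/min) by Cockcroft-Gault."""
    crcl = (140 - age_years) * weight_kg / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# A young patient with a low serum creatinine easily exceeds the
# ARC threshold of about 130 mL/min.
print(round(cockcroft_gault(25, 80, 0.6, False)))  # 213
```

A measured urinary creatinine clearance, where feasible, is generally considered more reliable than any estimate in this population.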



Friday, April 22, 2016

Ca-125 levels may indicate perforation in acute appendicitis


From a recent report:

Results

Sixty patients with acute appendicitis were recruited prospectively in this study between May 2014 and March 2015. Blood samples were obtained to measure CA-125 levels before appendectomy. Of the 57 patients, 10 had perforated or gangrenous appendicitis intraoperatively. The CA-125 levels were significantly higher in patients with perforated or gangrenous appendicitis than in those with uncomplicated appendicitis (49.9 vs 10.5 U/mL, P less than .001).

Conclusions

Cancer antigen 125 levels in patients with highly suspected or confirmed appendicitis could help clinicians determine the severity of the disease.

Thursday, April 21, 2016

Should you screen your patient for alpha 1?


A recent paper, linked here, outlines the indications for screening and the reasons why. This has been a bit controversial, since gene product replacement has been supported only by clinical data on surrogate endpoints. However, the article makes a compelling case for screening for several reasons. First, the surrogate-based evidence is mounting. Second, there are multiple reasons beyond product replacement to justify screening.

Wednesday, April 20, 2016

Joseph Alpert on graduate medical education


Dr. Alpert, Editor-in-Chief of the American Journal of Medicine, has written two articles [1] [2] about books he feels are must-reads for anyone involved in graduate medical education. Relatively late in my career I am becoming involved in graduate medical education. I am eager to learn all I can about it, and these titles naturally caught my eye. Alpert references a series of books on the subject by Kenneth M. Ludmerer. Drawing from these books, he points to a degradation in the quality of graduate medical education over the past few decades. The end result of this decline is that trainees now have insufficient time to evaluate, read or think about individual patients and their disease processes. Dr. Alpert suggests two external pressures responsible for this: 1) regulations on resident work hours and 2) shorter and shorter lengths of stay for hospitalized patients.

Dr. Alpert suggests that the 80-hour work limit might be reasonable if programs had more flexibility in scheduling. It should be pointed out, though, that while the 80-hour limit makes life easier for residents, there has been no evidence that it has improved patient safety, which is what it was designed to do. Like a lot of so-called systems improvements, the work hour regulations were not based on evidence at all. In fact, they were based on a single dramatic anecdote (a case of missed serotonin syndrome back in an era when that entity was not well defined).

Reduced hospital lengths of stay have been driven by economics. Gone are the days when patients would be kept on service a few extra days so they could be presented at a weekly subspecialty conference. Inefficient though that practice was, it had its educational appeal.

A related development, not mentioned specifically by Dr. Alpert, is the pressure to make a diagnosis with too much specificity too early. This pressure comes from the coding world. It runs counter to appropriate clinical reasoning and has the potential to lead to diagnostic error.

Alpert's essays provide great food for thought but I partially disagree with one point. In advocating for the teaching of a systematic and logical approach to diagnostic evaluation he says:

The second element required is sufficient time for trainees to question and examine each patient carefully, followed by time for intellectual reflection on precisely which tests should be ordered. Trainees should be told that the current “shot gun” approach to testing should be carefully avoided.

That may be sound as a general clinical rule but what about situations in critically ill patients where multiple diagnostic possibilities exist each of which needs to be diagnosed and treated within mere minutes? Sometimes time is limited because time is muscle (in the case of acute coronary syndrome), brain (in the case of stroke) or mortality (in the case of, for example, sepsis or aortic dissection). In such cases, less well known in the days of my early training, a deliberate focused approach is impractical and may have to give way to a shotgun approach.

Tuesday, April 19, 2016

A protocol for expedited rhythm control of atrial fibrillation in the ER


From a recent paper:

Methods

We enrolled consecutive patients presenting to our community hospital with recent onset AF into a protocol, which called for rhythm control with procainamide and if unsuccessful electrical cardioversion and discharge home. We compared this prospective cohort with matched historical controls. Primary outcome was admission rate. We also compared ED conversion rates and lengths of stay (LOS). We reported 30-day data on the study group including ED recidivism, recurrent AF, outpatient follow-up, and any important adverse events.

Results

Fifty-four patients were enrolled in the study group with 4 being admitted compared with 30 of 50 in the historical control group. Ninety-four percent of the study group converted compared with 28% in the historical control. Both hospital and ED LOS were significantly shorter for the study group. Six patients had recurrent AF, and 4 of those returned to the ED.

The patient had to have had onset of atrial fibrillation within 48 hours. In patients without cardiac awareness this would be difficult to determine. If there was any doubt on the part of the treating physician the patient was excluded.

This makes it look so nice and easy, but I think anticoagulation should enter the discussion. There are two aspects of anticoagulation to consider: prior to conversion (either immediately pre-procedure or for a month or so prior) and after conversion, for a month or more, due to the thromboembolic risk resulting from atrial stunning post conversion, as well as the possibility that AF will recur. For people who have been in AF for less than 48 hours the widespread belief (and the tacit assumption in this article) is that you don't need to worry about it. The patients in this study, apparently, were not anticoagulated, given that the body of the paper states that no new medications were prescribed at ER discharge.

However, the question deserves more nuance than it is usually given. There is actually some controversy around the decision not to anticoagulate in cases of AF of less than 48 hours duration, and the evidence to guide clinicians is not the greatest. The guideline statement is very vague (one can consider anticoagulation, before and/or after, or not) and is in the form of a IIb recommendation, conditioned on the patient being deemed low risk. UpToDate expresses concern about this no-anticoagulation approach and suggests it only for patients at very high bleeding risk or those with a CHA2DS2-VASc score of zero. Some of their experts recommend anticoagulation before and/or after (for a month) even for those low risk patients. This is an under-discussed elephant in the room.
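For concreteness, the stroke risk score mentioned above is a simple weighted sum, easy to compute at the bedside. A sketch of CHA2DS2-VASc scoring (the function is my own illustration; it encodes the standard point assignments, not any anticoagulation decision rule):

```python
def cha2ds2_vasc(chf, hypertension, age, diabetes, stroke_or_tia,
                 vascular_disease, female):
    """CHA2DS2-VASc: 1 point each for CHF, hypertension, diabetes,
    vascular disease, and female sex; 2 points for prior stroke/TIA;
    2 points for age 75+, 1 point for age 65-74."""
    score = chf + hypertension + diabetes + vascular_disease + female
    score += 2 if stroke_or_tia else 0
    score += 2 if age >= 75 else (1 if 65 <= age < 75 else 0)
    return score

# The no-anticoagulation option discussed above is reserved by some
# for patients scoring zero:
print(cha2ds2_vasc(False, False, 50, False, False, False, False))  # 0
```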


Monday, April 18, 2016

Morphine in acute pulmonary edema: time for some healthy skepticism


Morphine has long been considered part of the standard treatment for pulmonary edema. However, it has never been supported by high level evidence and there has been recent concern about harm. From a recent review:


Morphine has, for a long time, been used in patients with acute pulmonary oedema due to its anticipated anxiolytic and vasodilatory properties; however, a discussion about the benefits and risks has been raised recently. A literature search in Medline and Embase using the keywords “pulmonary oedema” OR “lung oedema” OR “acute heart failure” AND “morphine” was performed. A certain vasodilation has been described after morphine administration, but the evidence for this mechanism is relatively poor, and morphine-induced anxiolysis may possibly be the most important effect of morphine in pulmonary oedema; therefore some authors have suggested benzodiazepines as an alternative treatment. Respiratory depression seems to be a less relevant clinical problem according to the literature, whereas vomiting is common, which may cause aspiration. In the largest outcome study, based on the ADHERE registry, morphine given in acute decompensated heart failure was an independent predictor of increased hospital mortality, with an odds ratio of 4.8 (95% CI: 4.52–5.18, p less than 0.001). Other, smaller studies have shown a significant association between morphine administration and mortality, which was lost after adjusting for confounding factors.


Morphine is still used for pulmonary oedema in spite of poor scientific background data. A randomised, controlled study is necessary in order to determine the effect – and especially the risk – when using morphine for pulmonary oedema. Since the positive effects are not sufficiently documented, and since the risk for increased mortality cannot be ruled out, one can advocate that the use should be avoided.


Saturday, April 09, 2016

High flow nasal cannula in patients with hypercapnia


From a recent study:



Introduction

A high-flow nasal cannula (HFNC) has been used to treat patients with dyspnea. We identified changes in arterial blood gas (ABG) of patients visiting the emergency department (ED) with hypercapnic and nonhypercapnic respiratory failure after use of an HFNC.

Methods

This study was a retrospective chart review of patients with respiratory failure who visited the hospital and used an HFNC in the ED. The study period was July 1, 2011, to December 31, 2013. Patients with Paco2 greater than 45 mm Hg before the HFNC ABG analyses were included in the hypercapnia group; others comprised the nonhypercapnia group...

Results

A total of 173 patients were included after exclusion of 92 according to exclusion criteria. Eighty-one patients (hypercapnia group, 46, and nonhypercapnia group, 35) were included. Paco2 significantly decreased among all patients after use of HFNC (from 54.7 ± 26.4 mm Hg to 51.3 ± 25.8 mm Hg; P = .02), but the reduction was significant only in the hypercapnia group (from 73.2 ± 20.0 to 67.2 ± 23.4; P = .02). Progression to noninvasive or invasive ventilation and mortality rates were similar between the groups.

Conclusions

Use of an HFNC in patients with hypercapnia could show a significant trend of decrease in Paco2. Progression to noninvasive or invasive ventilation and mortality rates were similar in patients with and without hypercapnia.

Friday, April 08, 2016

ECMO: evidence based?


Strictly speaking, no, although the large body of accumulated experience since the 2009 flu pandemic suggests life-saving potential for an increasing array of indications as a rescue modality.

Thursday, April 07, 2016

Just what is sepsis?

It's what those in authority say it is. Though appeal to authority is sometimes considered a logical fallacy, it can be legitimate if the authority has appropriate expertise and is true to the evidence. While eschewed by EBM purists, it can be useful when consensus is needed. Such is the case with sepsis. Though we know in essence what it is, it is incompletely understood and there is some wiggle room concerning the particulars of definition and classification. A joint task force of the Society of Critical Care Medicine and the European Society of Intensive Care Medicine was convened to examine the 2001 definitions and found them lacking. The paper outlining the new definitions (Sepsis-3) was recently published in JAMA as free full text.

From the abstract:

Sepsis should be defined as life-threatening organ dysfunction caused by a dysregulated host response to infection.

That's the definition in concept.  The notion of a dysregulated host response is new and means that the response is inappropriate in some way.  That is why inflammation, specifically SIRS, which represents an appropriate (and beneficial to the host) response, is de-emphasized.  That is not to say inflammation is not important, but it is not the entire picture and is not enough to produce sepsis.  This is in recognition of increasing evidence of injurious non-inflammatory pathways that are activated, and of studies like this one showing that SIRS criteria will miss a considerable number of infected patients in need of critical care.

The move beyond SIRS criteria is not really new.   Remember that the 2012 Surviving Sepsis guidelines had already broadened the definition by replacing SIRS criteria with the mere requirement that there be “some” of a long list of signs and symptoms including but not limited to SIRS.

Again, from the abstract, moving on to the specific criteria:

For clinical operationalization, organ dysfunction can be represented by an increase in the Sequential [Sepsis-related] Organ Failure Assessment (SOFA) score of 2 points or more, which is associated with an in-hospital mortality greater than 10%. Septic shock should be defined as a subset of sepsis in which particularly profound circulatory, cellular, and metabolic abnormalities are associated with a greater risk of mortality than with sepsis alone. Patients with septic shock can be clinically identified by a vasopressor requirement to maintain a mean arterial pressure of 65 mm Hg or greater and serum lactate level greater than 2 mmol/L (greater than 18 mg/dL) in the absence of hypovolemia.

There's a lot to unpack here. What is meant by the absence of hypovolemia? That's not clear at all, for multiple reasons. First, is this term relative to what would be considered euvolemia for the patient's size, or is it relative to whether the patient might be volume responsive, which might be something very different in a septic patient? Worse, there is ongoing debate on how this can even be assessed. Most clinicians, for practical purposes, will assume this means persistent hypotension requiring pressors after the initial fluid resuscitation.

Note that post fluid resuscitation the shock criteria in terms of BP and lactate are now both/and (low MAP and lactate elevation) rather than either/or as was the case in the old definition, although the lactate cutoff has been lowered from 4 to 2. 
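The both/and logic can be stated precisely. A minimal sketch of the new septic shock criteria as I read them (the function and its inputs are my own illustration, not from the JAMA paper):

```python
def septic_shock(on_vasopressors_for_map_65, lactate_mmol_l,
                 adequately_fluid_resuscitated):
    """Sepsis-3 septic shock: vasopressor requirement to maintain
    MAP >= 65 mm Hg AND lactate > 2 mmol/L, despite adequate fluid
    resuscitation -- both conditions, not either/or."""
    return (adequately_fluid_resuscitated
            and on_vasopressors_for_map_65
            and lactate_mmol_l > 2)

print(septic_shock(True, 3.0, True))   # True
print(septic_shock(False, 3.0, True))  # False: lactate elevation alone
                                       # no longer suffices
```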

It is not made clear in the new definition whether infection need merely be suspected (as in the old definition) or must be confirmed.  Given the rapidity with which sepsis needs to be recognized and treated, “suspected” would be appropriate at the bedside.  Whether that is enough for reporting purposes is not made clear.

Although the qSOFA score, a rapid bedside determination based on clinical criteria, is suggested as an assessment tool, the full SOFA score appears to be necessary to satisfy the definition.  That means you have to get an ABG, a full set of labs and do a GCS (not that you wouldn't do those things anyway).
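For reference, the qSOFA screen itself requires nothing beyond the bedside: one point each for a respiratory rate of 22/min or greater, a systolic pressure of 100 mm Hg or less, and altered mentation, with a score of 2 or more flagging the patient for the full work-up. A sketch (my own illustration):

```python
def qsofa(respiratory_rate, systolic_bp, altered_mentation):
    """qSOFA: RR >= 22/min, SBP <= 100 mm Hg, altered mentation;
    1 point each."""
    return ((respiratory_rate >= 22) + (systolic_bp <= 100)
            + int(altered_mentation))

print(qsofa(24, 95, False) >= 2)  # True: proceed to the full SOFA score
```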

Note that the category “severe sepsis” has been eliminated.  Now there is just “plain Jane sepsis” and septic shock.  This may pose a problem for the “clinical documentation” coders who want you to code “severe sepsis.”  It is just another example of the fact that coding terminology is generally years, sometimes decades, behind current clinical definitions.  That is nothing new, and is something we will continue to have to navigate around. 

It would appear that the new definitions have sacrificed clarity and simplicity of use for a scheme that better reflects the current understanding of the pathophysiology of sepsis.

Let doctors be doctors


Back when the EMR dogma was being shoved down our throats we were intimidated against speaking out (it sort of reminds me of the fifth vital sign movement). That is changing.

Wednesday, April 06, 2016

Biomarker guidance for antibiotic therapy in critical care


Here is the latest from a review on this topic:

Several randomized controlled trials (RCTs) have shown the safety and efficacy of procalcitonin to discontinue antibiotic therapy in patients with severe sepsis or septic shock. In contrast, there is limited utility of procalcitonin for treatment initiation or withholding therapy initially. In addition, an algorithm using procalcitonin for treatment escalation has been ineffective and is probably associated with poorer outcomes. Little data from interventional studies are available for other biomarkers for antibiotic stewardship, except for C-reactive protein (CRP), which was recently found to be similarly effective and safe as procalcitonin in a randomized controlled trial. We finally briefly discuss biomarker-unrelated approaches to reduce antibiotic duration in the ICU, which have shown that even without biomarker guidance, most patients with sepsis can be treated with relatively short antibiotic courses of approximately 7 days.

Tuesday, April 05, 2016

Has EBM been hijacked? Yes but not in the way John Ioannidis says it has.

Dr. David Gorski is a surgical oncologist who writes for Science Based Medicine. I am an enthusiastic follower of his posts there and on his other blog. I also have a passionate interest in evidence based medicine (EBM). So when Dr. Gorski posted recently about the hijacking of EBM my interest was piqued. Though after a careful read of this long post and some of its many links I feel I mostly agree with him, there are some points there that concern me. Before addressing them I should provide some background. John Ioannidis, whom Gorski cites heavily and who was the inspiration for the post, is famous for his 2005 paper titled Why Most Published Research Findings Are False. More recently he wrote the article Evidence-based medicine has been hijacked: a report to David Sackett. In that paper, which is in the form of an open letter to David Sackett (now deceased), one of the founders of EBM, Ioannidis opines that there are widespread problems with the medical research agenda, consisting of corruption of clinical trials and investigators asking the wrong questions due to various conflicting interests.

Dr. Gorski in response is mainly favorable to the article but is concerned about a missing piece:

In his “report” to David Sackett, Ioannidis does touch on a number of pertinent and interesting points regarding the adoption of EBM but, as you will see, pretty much ignores the one huge elephant in the room.

The elephant is in reference to a popular distortion of EBM that has the effect of, well, I know of no better way to put it, enabling quackery. The distortion in question is a tendency to devalue basic science when considering various forms of evidence. There's a long story as to how it came about, but suffice it to say here that it was not the original intent of EBM's founders. While that is the main point of Dr. Gorski's post there is much more, including some points that concern me (and another elephant he made only indirect reference to), which I will address below.

As for Ioannidis's statement that most published research findings are false, to me it just added shock value to something we had known for decades but discussed in less loaded terms. While he did unpack some of the reasons in a way that had not been done before, we always knew that research findings are tentative and that modification of prior research by new research is the usual case. This is something that has been acknowledged and accepted in medicine for a long time. (I considered aspects of this phenomenon at some length in a post on medical reversal.) This is not to say we shouldn't be concerned about the quality of research.

At the risk of sounding like a stickler for correctness of terminology, while Ioannidis describes what might be called a hijacking of the research agenda it is not a hijacking of EBM.  EBM, at least as it was originally defined, is focused on how the individual clinician uses expertise to integrate the best available evidence with the needs of the individual patient at the point of care, not the research agenda.  The design and implementation of clinical trials is something separate from EBM.  Sackett himself in effect acknowledged this when in 2000 he retired from the field of EBM and migrated to the other field of clinical trials:

Dr Sackett eventually returned to Canada and, leaving the EBM field for others, devoted himself to researching and writing about randomized clinical trials in a wooden cabin on Irish Lake in Ontario. There he canoed and snowshoed with family and friends.

So while Ioannidis points to some very important concerns, what he describes is not the hijacking of EBM.  However, as Gorski points out, EBM has indeed been hijacked.  I would take the discussion a step beyond what Gorski said, which brings me to the other elephant in the room.  Take a look at this from his post, in reference to the discussion by Ioannidis about the profession's and industry's resistance to AHRQ:

So, yes, there is resistance to the AHRQ. However, these days it is far more business interests, such as drug and device manufacturers, than physicians groups who want to abolish both the AHRQ and the PCORI, mainly because the AHRQ and PCORI’s research threatens these companies’ bottom lines by showing which treatments work better in the “real world” and influencing the Centers for Medicare & Medicaid Services (CMS) regarding which new drugs and devices will be paid for. In fact, I’d argue that, while Ioannidis is correct that drug and device manufacturers want to kill AHRQ and PCORI, he’s missed a sea change in attitude among physicians towards such government agencies whose purpose is to evaluate and compare treatments for effectiveness after they’ve been approved. It might have been true that EBM was not popular 15 or 20 years ago, but as new generations of medical students have been inculcated with its principles and importance, EBM has been “baked in” to physician education, with a resultant change in attitude towards efforts to promote EBM. That’s not to say that physician groups don’t protect their turf. Just look at how radiologists, for example, react to new guidelines that increase the recommended age at which to start mammography or how primary care physicians react to legislation expanding the scope of practice of advanced practice nurses. However, extreme hostility to comparative effectiveness research and EBM-based guidelines has mostly retreated to fringe physician groups like the American Association of Physicians and Surgeons. Unfortunately, resistance to EBM as a constraint on physician autonomy is still fairly common, particularly among older physicians.

It gets tricky here because this paragraph is a little confusing, packing a lot into a small space and making what seem to me to be questionable assumptions.  The last sentence of the paragraph above has an embedded link to one of Gorski's old posts correctly pointing out that the notion of EBM as something that threatens physician autonomy is a straw man.  However, by equating attitudes toward government agencies with attitudes toward EBM he seems to imply that EBM promotes a top down approach to medicine.  (It's not clear to me whether that is his intended meaning.)  EBM, in fact, seeks quite the opposite.  In a recent post on this very subject I quoted Sackett and some of the other founders, from one of their early articles in BMJ:

 Here's what some members of the EBM working group had to say in their seminal article in BMJ some years ago:

    Evidence based medicine is not 'cookbook' medicine...External clinical evidence can inform, but can never replace, individual clinical expertise…

    Clinicians who fear top down cookbooks will find the advocates of evidence based medicine joining them at the barricades.

The paper I quoted from is here.   In that paper we see the founders of EBM willing to fight for individual clinical judgment.  Who knew?

But getting back to another of Dr. Gorski's points, have physicians become more accepting of top down medicine in the last 10-15 years?  He seems to think they have, but I don't know what he bases that on.  This question isn't well informed by data, but if anything the recent widespread physician outrage about ABIM, the ABIM Foundation and its Choosing Wisely campaign suggests otherwise.  (That is nicely chronicled at the blog of Westby G. Fisher, MD, FACC, where most of the posts from the past two years are devoted to the topic.)

Top down medicine in its various forms, offered as a substitute for EBM, is the other elephant in the room: EBM is being hijacked by the proponents of top down medicine.  Although folks in public policy circles don't use the term EBM very often, it seems to be a widespread assumption in many of the policy discussions that these initiatives will make, even force, doctors to be more evidence based.  As policy wonk and futurist Bob Wachter once said in a discussion on top down initiatives that would stem from comparative effectiveness research:

We simply must find ways to drive the system to produce the highest quality, safest care at the lowest cost, and we need to drag the self-interested laggards along, kicking and screaming if need be.

The top down agenda is moving forward, but fortunately at a slow creep, and is nowhere near in place at the level Wachter wishes it to be.  Many of the new payment models under Obamacare (eg the ACO) are pilot projects; they are not yet mandatory and may never be.  The AHRQ never grew the teeth that policy makers hoped it would have back when it was AHCPR.  Not all of the top down initiatives come from big government; many of the care pathways and performance measures offered as someone's version of EBM are locally driven.  I can only hope that doctors and medical educators will read and re-read that seminal 1996 BMJ paper and help cultivate and spread a true understanding of what EBM really is.

Perioperative anticoagulant bridging in patients with atrial fibrillation undergoing procedures


Here are some findings from a recent study in Circulation:

Methods and Results—The ORBIT-AF registry is a prospective, observational registry study of US outpatients with AF. We recorded incident temporary interruptions of OAC for a procedure, including use and type of bridging therapy. Outcomes included multivariable-adjusted rates of myocardial infarction (MI), stroke or systemic embolism (SSE), major bleeding, cause-specific hospitalization, and death within 30 days...

Bleeding events were more common in bridged patients than non-bridged (5.0% vs. 1.3%, adjusted OR 3.84, p less than 0.0001). Incidence of MI, SSE, major bleeding, hospitalization, or death within 30 days was also significantly more common in patients receiving bridging (13% vs. 6.3%, adjusted OR 1.94, p=0.0001).

Conclusions—Bridging anticoagulation is used in one-quarter of anticoagulation interruptions and is associated with higher risk for bleeding and adverse events. These data do not support the use of routine bridging and additional data are needed to identify best practices around anticoagulation interruptions.

In their discussion the authors felt that patients in this study tended to be bridged more often than guidelines call for. Not all patients need bridging and the guidelines recommend deciding based on the level of risk. From my earlier post on the ACCP guidelines the bridging recommendations can be summarized as follows:

Bridging indications: whether the warfarin indication is a mechanical heart valve, atrial fibrillation or VTE, the decision to bridge or not depends on risk assessment: yes for high risk, no for low, and individualized decision making for moderate. How is that risk determined? For mechanical valve prostheses, a mitral position, a tilting disc or caged ball valve, or a cerebrovascular event in the past 6 months constitutes high risk; a bileaflet aortic valve with atrial fibrillation, prior cerebrovascular event, hypertension, diabetes, CHF or age over 75 constitutes moderate; a bileaflet aortic valve absent the above constitutes low. For atrial fibrillation, a CHADS2 score of 5 or above, a cerebrovascular event within 3 months or rheumatic valvular disease constitutes high risk; CHADS2 of 3 or 4 constitutes moderate; CHADS2 of 2 or less with no cerebrovascular history constitutes low. For VTE, an event within 3 months or severe thrombophilia (meaning protein C, S or antithrombin deficiency, or APLS) constitutes high risk; non-severe thrombophilia, a history of multiple VTEs, an event within 12 months or active cancer constitutes moderate; an event greater than 12 months out absent the above constitutes low.
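The atrial fibrillation arm of that risk scheme can be sketched in a few lines. This is purely a teaching toy to make the decision rule explicit; the function and parameter names are my own, and nothing here is clinical decision software.

```python
# Illustrative sketch of the atrial fibrillation arm of the ACCP bridging
# risk scheme summarized above (high -> bridge, low -> don't bridge,
# moderate -> individualize). Names are my own; not clinical software.

def af_bridging_risk(chads2: int,
                     cva_within_3_months: bool = False,
                     rheumatic_disease: bool = False,
                     any_cva_history: bool = False) -> str:
    """Classify periprocedural thromboembolic risk for a warfarin-treated
    atrial fibrillation patient per the summary above."""
    if chads2 >= 5 or cva_within_3_months or rheumatic_disease:
        return "high"
    if chads2 in (3, 4) or any_cva_history:
        return "moderate"
    return "low"  # CHADS2 of 2 or less with no cerebrovascular history

print(af_bridging_risk(6))                            # high
print(af_bridging_risk(3))                            # moderate
print(af_bridging_risk(1))                            # low
print(af_bridging_risk(2, cva_within_3_months=True))  # high
```

Writing it out this way makes the point of the ORBIT-AF commentary concrete: only one branch of the rule returns "high" and mandates bridging, yet a quarter of interruptions were bridged.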

According to commentary in the ACP Hospitalist Weekly, bridging should not be the default strategy but rather be applied selectively, based on the guidelines.


Monday, April 04, 2016

Are pre-4 hour APAP levels helpful in acute overdose?


They may be of some help, and allow for certain predictions, but do not supplant the need for a 4 hour post-ingestion level and application of the nomogram. This topic is nicely reviewed at Academic Life in Emergency Medicine.

Some cases are not easily applicable to the nomogram (e.g. unknown ingestion time, gradual ingestion over time, transaminases already elevated). UpToDate has a nice section on these situations.

Saturday, April 02, 2016

Mediterranean diet may help prevent age related cognitive decline


---according to this study.

Ischemic preconditioning and AKI following cardiac surgery


A recent article in JAMA reports a huge benefit from a simple intervention:

Design, Setting, and Participants In this multicenter trial, we enrolled 240 patients at high risk for acute kidney injury, as identified by a Cleveland Clinic Foundation score of 6 or higher, between August 2013 and June 2014 at 4 hospitals in Germany. We randomized them to receive remote ischemic preconditioning or sham remote ischemic preconditioning (control). All patients completed follow-up 30 days after surgery and were analyzed according to the intention-to-treat principle.

Interventions Patients received either remote ischemic preconditioning (3 cycles of 5-minute ischemia and 5-minute reperfusion in one upper arm after induction of anesthesia) or sham remote ischemic preconditioning (control), both via blood pressure cuff inflation.

Main Outcomes and Measures The primary end point was the rate of acute kidney injury defined by Kidney Disease: Improving Global Outcomes criteria within the first 72 hours after cardiac surgery. Secondary end points included use of renal replacement therapy, duration of intensive care unit stay, occurrence of myocardial infarction and stroke, in-hospital and 30-day mortality, and change in acute kidney injury biomarkers.

Results Acute kidney injury was significantly reduced with remote ischemic preconditioning (45 of 120 patients [37.5%]) compared with control (63 of 120 patients [52.5%]; absolute risk reduction, 15%; 95% CI, 2.56%-27.44%; P = .02). Fewer patients receiving remote ischemic preconditioning received renal replacement therapy (7 [5.8%] vs 19 [15.8%]; absolute risk reduction, 10%; 95% CI, 2.25%-17.75%; P = .01), and remote ischemic preconditioning reduced intensive care unit stay (3 days [interquartile range, 2-5]) vs 4 days (interquartile range, 2-7) (P = .04). There was no significant effect of remote ischemic preconditioning on myocardial infarction, stroke, or mortality. Remote ischemic preconditioning significantly attenuated the release of urinary insulinlike growth factor–binding protein 7 and tissue inhibitor of metalloproteinases 2 after surgery (remote ischemic preconditioning, 0.36 vs control, 0.97 ng/mL2/1000; difference, 0.61; 95% CI, 0.27-0.86; P less than .001). No adverse events were reported with remote ischemic preconditioning.

Conclusions and Relevance Among high-risk patients undergoing cardiac surgery, remote ischemic preconditioning compared with no ischemic preconditioning significantly reduced the rate of acute kidney injury and use of renal replacement therapy. The observed reduction in the rate of acute kidney injury and the need for renal replacement warrants further investigation.
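The absolute risk reductions quoted above translate into strikingly favorable numbers needed to treat. A quick back-of-envelope check (the event counts are from the quoted results; the NNT figures are my own arithmetic, not from the paper):

```python
# Back-of-envelope check of the reported absolute risk reductions (ARR)
# and the implied number needed to treat (NNT). Event counts come from
# the quoted results; the NNT values are my own arithmetic.

def arr_and_nnt(events_control, n_control, events_treated, n_treated):
    """Return (absolute risk reduction, number needed to treat)."""
    arr = events_control / n_control - events_treated / n_treated
    return arr, 1 / arr

# Primary end point: AKI in 63/120 controls vs 45/120 preconditioned
arr_aki, nnt_aki = arr_and_nnt(63, 120, 45, 120)
print(f"AKI: ARR = {arr_aki:.1%}, NNT = {nnt_aki:.1f}")   # ARR = 15.0%, NNT = 6.7

# Renal replacement therapy: 19/120 controls vs 7/120 preconditioned
arr_rrt, nnt_rrt = arr_and_nnt(19, 120, 7, 120)
print(f"RRT: ARR = {arr_rrt:.1%}, NNT = {nnt_rrt:.1f}")   # ARR = 10.0%, NNT = 10.0
```

An NNT of about 7 to prevent one episode of AKI, from a few cycles of blood pressure cuff inflation, is part of what makes this result so striking (and why it warrants the further investigation the authors call for).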

It was kind of the investigators to wait until after induction of anesthesia before applying the intervention. It would be torture on an awake patient.


Friday, April 01, 2016

Leadership must address hospitalist burnout


What makes a strong hospitalist program? The most important determinant (my subjective impression, mind you) is a team of happy, professionally satisfied doctors. This results in lower turnover, less reliance on locum help and less schedule chaos. But leadership has not paid enough attention here.

I've been watching the hospitalist movement for years as it has tried to mature. I've listened closely to the conversations about leadership along the way. The recurring theme has been a hard line approach to metrics. The metrics are artificial and the approach mainly negative. If you don't perform in a certain way you are talked to. One too many times and consequences ensue. It becomes professionally deflating after a while. I've heard more than one speaker at hospitalist meetings imply that if you don't like that type of environment you might want to find other work. Well, surprise, surprise, that's just what a lot of hospitalists seem to be doing.

A recent article in Today's Hospitalist addresses this concern and cites a study of physicians at Mayo Clinic showing a strong correlation between burnout and how they rate their bosses. Numerous leadership attributes are listed which, according to the study findings, are strong deterrents to burnout. These have been missing from the conversation up to now.

Should evidence based medicine be declared obsolete?


Not in my opinion, but there are some policy makers out there who think it should, though they won't admit it. Some, by implication, even falsely invoke the idea of EBM to support their agenda. These are folks who favor top down control of medicine in order to diminish the decision making power of individual doctors and patients, as Retired Doc pointed out here.

They believe that variation is the enemy of health care (remember the Dartmouth Atlas?) and that such variation is driven by the autonomy of clinicians. An example of such thinking is this quote (via the Retired Doc post) from the book “New Rules” by Drs. Donald Berwick and Troyen Brennan:

"Today, this isolated relationship [they are speaking of the physician-patient relationship] is no longer tenable or possible… Traditional medical ethics, based on the doctor-patient dyad must be reformulated to fit the new mold of the delivery of health care...Regulation must evolve. Regulating for improved medical care involves designing appropriate rules with authority...Health care is being rationalized through critical pathways and guidelines. The primary function of regulation in health care, especially as it affects the quality of medical care, is to constrain decentralized individualized decision making."

EBM, some may be surprised to learn, is opposed to this type of approach.

Here's what some members of the EBM working group had to say in their seminal article in BMJ some years ago:

Evidence based medicine is not 'cookbook' medicine...External clinical evidence can inform, but can never replace, individual clinical expertise…

Clinicians who fear top down cookbooks will find the advocates of evidence based medicine joining them at the barricades.

Clearly this is in opposition to what the policy leaders are saying. It would lend clarity to the debate if they would just be honest and say they are opposed to EBM.

Type 2 diabetes drugs, new and old


This post centers around a recent review of the topic in American Family Physician and contains a sprinkling of some of my own thoughts on this subject. Here are some key points made in the article:


Metformin monotherapy (along with lifestyle modifications) is the first intervention.

The article goes on to say that metformin is the only oral diabetes drug proven to reduce mortality and complications. That is not true as simply stated and reflects a misinterpretation of the findings of the UK Prospective Diabetes Study, which showed a reduction in microvascular complications attributable to treatment with sulfonylureas and insulin. This review is not the first place I've read such a statement. It seems to be a popular myth, and I have no idea how it got started unless it grew out of an old controversy regarding possible macrovascular harm attributed to sulfonylureas, as originally reported decades ago in the University Group Diabetes Program study.

There are nuances to this controversy which I have blogged about previously in multiple posts. At the risk of oversimplification, the evidence seems to show that reduction of blood sugar to a certain target, no matter by what means, reduces microvascular complications, but that a number of agents, including insulin, have the potential to cause macrovascular harm. It is beyond the scope of this post to fully explore that controversy but I'll have a bit more to say about it below.

All that being said, the evidence points to metformin as having the best efficacy in reducing complications, and it is generally accepted as the front line drug.


Initial oral monotherapy can be expected to lower the Hgb A1C by about 1%.

Furthermore, each oral drug added on should lower it by an additional 1%. If a patient's initial A1C is very high, e.g. in the double digits, one might consider combination drug therapy or insulin right off the bat. However, a guideline based algorithm shown in the article does not recommend that approach. Instead, metformin monotherapy is started and drugs are then added sequentially, one at a time, with insulin entering the mix as early as step 2.
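The roughly-1-percentage-point-per-agent rule of thumb lends itself to a quick estimate of how many sequential add-on steps a given starting A1C implies. This is my own illustration of the arithmetic, not a treatment algorithm; real titration is patient specific and the 1% figure is only an average.

```python
import math

# Rough illustration of the ~1-percentage-point A1C drop per added agent
# described above. A teaching sketch of the arithmetic only; the function
# name and defaults are my own, and real titration is patient specific.

def estimated_steps_to_goal(initial_a1c, goal_a1c=7.0, drop_per_agent=1.0):
    """Estimate how many sequential agents the ~1%-per-drug rule implies."""
    gap = max(0.0, initial_a1c - goal_a1c)
    return math.ceil(gap / drop_per_agent)

print(estimated_steps_to_goal(8.5))   # 2 agents
print(estimated_steps_to_goal(10.5))  # 4 agents -- the double-digit case where
                                      # up-front combination therapy or insulin
                                      # might be considered instead
```

The double-digit example makes the article's point numerically: four sequential 1% steps is a long road, which is why combination therapy or insulin up front gets considered in that situation.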


Pills are added sequentially in no particular order

---or, as the article puts it, “in a patient centered fashion.” This reflects the lack of high level evidence to guide clinicians in this area. The sequence continues until the A1C goal is reached. Although the algorithm shows basal insulin as an option to add on as early as step 2, complex (multiple injection) regimens are reserved for later steps.


The appropriate A1C goal is controversial.

The UKPDS target was 7. Other studies have pointed to macrovascular harm with more aggressive targets. The review suggests higher targets for patients who are older, have macrovascular disease or have multiple risk factors for it.


Newer agents (that is, those other than sulfonylureas and metformin), approved on the basis of safety and their ability to lower blood sugar, have not been shown to improve clinical outcomes.

That doesn't negate the possibility of outcome improvement that may surface years later, as discussed below.


A pearl in hypoglycemia management.

If you have to treat a hypoglycemic episode in a patient whose regimen includes an alpha glucosidase inhibitor (acarbose or miglitol), ordinary oral feeding may not work. Oral dextrose (glucose) or IV therapy will be necessary: due to the inhibition of alpha glucosidase, the patient cannot break sucrose down into monosaccharides.


The paradox of macrovascular disease
The concern that diabetes drugs might drive macrovascular disease is controversial. Here's my perspective. The principal macrovascular risk factor associated with DM 2 is the metabolic syndrome. Mere reduction of blood glucose does not alleviate that condition. Macrovascular benefit attributable to diabetes drugs is the exception, confined to metformin and possibly pioglitazone, likely representing pleiotropic effects.

How might glucose lowering drugs cause macrovascular harm? In the exceptional case of rosiglitazone it is probably a pleiotropic effect. But in the case of multiple other drugs, such as insulin and drugs which enhance insulin secretion such as sulfonylureas, it is probably because they promote weight gain and worsen insulin resistance, known drivers of macrovascular disease. More recently it has been found that hypoglycemia, a known consequence of intensive glycemic control, impairs endothelial function and induces hypercoagulability.

Initial concerns about macrovascular harm came from the UGDP study cited above. Serious questions have been raised about the validity of that study. Nevertheless, it resulted in a black box warning for sulfonylureas. Adding fuel to the controversy is newer evidence suggesting that if patients whose glucose is lowered intensively with insulin or sulfonylureas are followed long enough, macrovascular benefit might eventually be seen. This was illustrated in the 10 year UKPDS follow up. In that study the benefit appeared years after differences in glycemic control had disappeared, suggesting a delayed secondary effect rather than any direct beneficial effect of glucose lowering. The secondary effect (speculation on my part) might stem from prevention of diabetic nephropathy, as CKD is known to be a powerful driver of atherosclerotic complications.