Syncope and Clinical Decision Rules

Clinical Scenario and PICO question

Fresh from med school, you nervously approach your 1st ED shift.  Your 1st patient is Mr. Jones, a very active 80 y/o.  He enjoys tennis, golf, biking, boating, cards & spending time with his buddies, wife, siblings, children & grandchildren.  His wife provides the following eyewitness account.  Mr. Jones was preparing to hang a picture.  His wife heard a metal clank & turned to see his tape measure falling from his open hand & bouncing on the floor.  Mr. Jones was simply collapsing/falling backwards & Mrs. Jones couldn't catch him in time.  He fell & hit the back of his head on the wooden floor.  Mrs. Jones ran over & found him unresponsive (no Sz activity), with a little blood coming from the back of his head.  She went to the phone & called 911.  Less than 60 secs later, she was back @ his side.  He was sitting up holding his head.  He had no idea what had happened & says he's been "fine" since.

Mr. Jones' ED exam was normal, including VS's, P.Ox, no cardiac murmurs, no signs of CHF & heme (-) stool on rectal exam.  PMHx is "healthy as a horse" with only HTN.  Meds = Lopressor-HCT (25/50 HCTZ-metoprolol) + 81 mg ASA/day.

CXR, plain CT head/neck, CBC, chemistries, & cardiac markers are (-).  BNP is 250.  EKG is identical to EKG from 2 years ago → NSR @ 72, with normal PR, QRS, QTc intervals & LVH.

Mr. Jones receives 1000 mg acetaminophen, Td, & 5 staples to his scalp.  4 hrs after his fall, he has had no dysrhythmias, & he wants to go home.  The ED faculty & his wife prefer admission.  When contacted, his PMD says, "I don't like that story.  Get him admitted to a tele-bed.  My admits are covered by Dr. Hospitalist, so call him."

Dr. Hospitalist declines the admit and launches into a prolonged explanation that includes, "My last 50 syncope admits got tele for 1-2 days, nothing ever happened & they all went home."  "I find it amusing the ED doesn't even know your own literature."  "If you apply the SF Syncope Rules to Mr. Jones, he can be safely D/C from the ED."  "If there are any additional concerns, his PMD can arrange an outpatient Holter monitor."

When apprised of the conversation, your ED faculty parries, "Hey, I know the EM literature enough to know that the SF Syncope rules don't work; they miss 10% of bad outcomes.  Plus isn't there that new ROSE Syncope Rule that says to admit for elevated BNP?  Call the Hospitalist back."  You find the ROSE (Risk Stratification of Syncope in the E.D.) study online & discover Mr. Jones would need a BNP of ≥300 to meet the rule's admission criterion.  Your EM faculty solves the dilemma by suggesting, "Order a tilt test, a 2nd Troponin & 2nd BNP.  We'll sign Mr. Jones out to the next team."

Unfortunately, Harwood is part of the oncoming team & declines your sign-out plan.  "This guy needed admission 8 hours ago.  We're not going to wait for more bogus testing."  He suggests the EM faculty simply call Dr. Hospitalist & get Mr. Jones admitted.  "Mr. Jones meets ACEP & ESC (European Society of Cardiology) syncope admit criteria.  If you need literature, give Dr. Hospitalist the STePS (Short-Term Prognosis of Syncope) study or OESIL (Osservatorio Epidemiologico della Sincope nel Lazio)."

Your 1st shift is almost over & you haven’t even gotten your 1st Pt admitted.  You realize this residency thing isn’t as easy as the EM-3’s make it look.  Before you switch to a career in Pathology, you decide to read the Journal Club articles.  Still, you wonder:

1.   Is there a difference between a “Decision Rule”, a “Prediction Instrument”, a “Clinical Prognostic Model” and a “Guideline”?

2.   Are any of these decision tools worth using?

3.   If so, how do you tell the good “Decision Tools” from the bad ones?

P:   80 y/o male with syncope.

I:    ED discharge (based on a decision tool).

C:   Hospital admission.

O:  "Bad" outcomes within 7 &/or 30 days

 

Synopsis

Thanks to Harwood and Michelle for a bucolic mid-summer's evening.  Also, excellent synopses and discussions by JoEllen, Gromis, Abbi, Nelson, Yenter and Ben.  The topic for Journal Club this month was decision rules; these are actually better described as decision instruments or tools, as they exist to augment, but never to replace, physician judgment.

The development of a decision tool should answer the following 6 questions:  Is there a need for the tool?  Was the tool derived according to sound methodologic standards?  Has the tool been prospectively validated and refined?  Has the tool been successfully implemented into clinical practice?  Would use of the tool be cost effective?  How will the tool be disseminated and implemented?

As an example of a prevalent presenting complaint that is often harmless but occasionally associated with significant morbidity or death (“low-risk, high-stakes”), we examined 2 decision instruments for syncope. 

The San Francisco Syncope Rule derivation set results were published in 2004 in the Annals of Emergency Medicine, and at the time the rule was considered a possible game-changer for syncope.   The goal of the SF Syncope Rule is to identify ED patients with syncope or near syncope who are at low risk for short-term (7 day) serious outcomes, allowing clinicians to potentially send home low-risk patients safely.  Sensitivity of the rule in the derivation cohort was 96% (95% CI 92%-100%) and specificity was 62% (95% CI 58%-66%). 

Our first article was:


 1. Quinn JV, McDermott DA, Stiell IG, et al.  Prospective validation of the San Francisco Syncope Rule to predict patients with serious outcomes.  Ann Emerg Med. 2006;47:448–53.

In this study, the authors prospectively validated the SF Syncope Rule, evaluating 791 consecutive adult visits for syncope, excluding patients with clear drug-, trauma-, alcohol-, or seizure-associated syncope, or with altered mental status or new neurologic deficits.  Patients were predicted to be at high risk for an adverse short-term outcome if they met ANY of the following criteria: history of CHF, hematocrit <30%, abnormal ECG (non-sinus rhythm or any new changes), shortness of breath, or triage SBP <90.  The rule is often remembered by its mnemonic, CHESS.  In this validation cohort, sensitivity and specificity were based on serious outcomes that were undiagnosed during the ED visit.  Short-term serious outcomes included death, MI, arrhythmia, PE, CVA, SAH, significant hemorrhage/anemia, procedural interventions, and re-hospitalization.  Sensitivity was 98% (95% CI 89%-100%) and specificity was 56% (95% CI 52%-60%).
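To make the CHESS criteria concrete, here is a minimal sketch in Python of how the rule classifies a patient as high vs. low risk; the function and field names (and Mr. Jones' plugged-in values) are my own illustration of the criteria summarized above, not an implementation from the paper.

```python
# Minimal sketch of the San Francisco Syncope Rule (CHESS) as summarized above.
# Field names and example values are illustrative assumptions, not from the study.

def sf_syncope_high_risk(history_of_chf: bool,
                         hematocrit_pct: float,
                         abnormal_ecg: bool,        # non-sinus rhythm or any new changes
                         shortness_of_breath: bool,
                         triage_sbp_mmhg: int) -> bool:
    """Return True if the patient meets ANY CHESS criterion (i.e., not low risk)."""
    return (history_of_chf
            or hematocrit_pct < 30
            or abnormal_ecg
            or shortness_of_breath
            or triage_sbp_mmhg < 90)

# Mr. Jones from the vignette (normal exam, labs, and vitals; exact values assumed):
print(sf_syncope_high_risk(False, 42.0, False, False, 150))  # False -> "low risk" by the rule
```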

 

So what's the problem?  Well, from a standpoint of decision rule development, one concern is that the same group both derived and validated the rule in the same (single) institution, raising concerns about external validity: how will it perform in another patient population?  From an internal validity standpoint, there is no "fishbone" diagram: no accounting of how many patients were eligible for the study, how many were actually enrolled, why some weren't enrolled, etc.  Gromis made the interesting point that patients meeting one of these criteria for high risk are already a high-risk population; they could come in with an ankle sprain and still have a risk for a serious cardiac or neuro outcome in the following 30 days.  Does this really help differentiate low- from high-risk patients, or are the factors in the rule just obvious common sense?  Second point from Gromis: there is no sensitivity analysis in the article.  Sensitivity analyses are key methodologic features of a paper wherein the authors re-analyze their data with different assumptions: what if there was one additional bad outcome that was missed?  What happens if all the patients lost to follow-up had bad outcomes…or good outcomes?  Small changes in missing or incomplete data can change the results dramatically, and this should always be discussed in the manuscript.  Misclassifying one bad outcome in this study easily drops the sensitivity of the rule to the low 90s, with a lower CI limit in the 80s.
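To see why one reclassified outcome matters so much, here is a rough sketch of that kind of sensitivity analysis; the outcome counts below are assumptions chosen only to be of the same order of magnitude as the reported results, not numbers taken from the paper.

```python
# Rough sketch of the sensitivity analysis the paper omits.  Outcome counts are
# hypothetical, used only to show how much one or two reclassified "misses" move
# the point estimate and its confidence interval.
from math import sqrt

def sensitivity_with_wilson_ci(true_pos: int, total_outcomes: int, z: float = 1.96):
    """Sensitivity = TP / (TP + FN), with a Wilson 95% confidence interval."""
    p = true_pos / total_outcomes
    denom = 1 + z**2 / total_outcomes
    center = (p + z**2 / (2 * total_outcomes)) / denom
    half = z * sqrt(p * (1 - p) / total_outcomes + z**2 / (4 * total_outcomes**2)) / denom
    return p, center - half, center + half

for detected in (52, 51, 50):          # assume ~53 serious outcomes (illustrative only)
    sens, lo, hi = sensitivity_with_wilson_ci(detected, 53)
    print(f"{detected}/53 detected: sensitivity {sens:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

With these assumed counts, one or two additional missed outcomes pull the point estimate into the mid-90s and the lower confidence limit into the 80s, which is the point being made above.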

 

Also, although ED physicians in this study made their own decisions about admission/discharge/management of the study patients, the physicians were filling out data forms and were very aware of the rule and the study.  Significant bias was likely introduced into management decisions because of this foreknowledge.  There was also probably a Hawthorne effect, with the patients' overall care and possibly outcomes improving simply from everyone knowing about the study and making small, even unrecognized, changes in care.

 

There was also no "Table 1" in this paper: the initial table documenting the demographic information about the patients in the study, which is key to comparing patient populations within and between studies.  Harwood asked the provocative question of when audience members are comfortable (or at least don't feel physically ill) when they hear that one of their ED patients has died.  Is it at 7 days, as in the derivation set?  Thirty days, as in this validation set?  Longer?  Changing the outcome follow-up time changes one's perspective.  As Sean said, maybe it's just most important that the patient gets the appropriate workup; if that is facilitated within a few days and the patient then has a bad outcome, you can at least feel that everything appropriate was done.  Ultimately, for syncope (and for other symptoms associated with possible badness), the importance of a rule is not in defining who needs to be admitted, but in defining who needs an appropriate and timely workup.

 

2. Birnbaum A, Esses D, Bijur P, et al.  Failure to validate the San Francisco Syncope Rule in an independent emergency department population.  Ann Emerg Med. 2008;52:151–9.

Several studies have since been published questioning the high sensitivity initially reported for the SF Syncope Rule.  Birnbaum's study in 2008 tested the SF Syncope Rule in 713 prospectively enrolled patients with syncope or near syncope.  They used the same inclusion, exclusion, and serious outcome definitions as the original derivation trial, as well as the original 7-day follow-up time.  It only included adults, whereas the original derivation/validation studies included children (changing expected sensitivity).  This study did provide a (nearly complete) fishbone diagram, as well as a table including demographic specifics on the patients.  Physicians again were aware of the study and responsible for data collection, likely introducing bias.  A sensitivity analysis was performed, and making assumptions about missing data that would maximize the performance of the rule made no significant difference in the rule's sensitivity.  In this study, the sensitivity of the SF Syncope Rule in predicting 7-day serious outcomes was 74% (95% CI 61%-84%) with a specificity of 57% (95% CI 53%-61%).  This analysis was for serious outcomes, whether recognized in the ED or in the following 7 days.  Harwood made the point that what we care about are decision instruments that identify bad outcomes that are not obvious in the ED.  If someone has a GI bleed and happens to pass out, the primary admission diagnosis is GI bleed, not syncope; we don't need help identifying those patients.  The authors of this paper performed a post hoc analysis of serious outcomes not identified in the ED, and the SF Syncope Rule performed even more poorly, with a sensitivity of 68%.  Looking at the usefulness of the rule another way, as Dan Nelson pointed out, the rule's negative likelihood ratio of about 0.5 shifts your pre-test probability of a serious outcome only modestly.  Interestingly, the majority of serious outcomes missed by the rule were arrhythmias.
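Dan's likelihood-ratio point can be made concrete with a quick Bayes calculation; the 5% pre-test probability below is an assumed figure, in line with the 4-6% serious-outcome rate discussed later, not a number from this paper.

```python
# Effect of a negative likelihood ratio of ~0.5 on post-test probability.
# The 5% pre-test probability is an assumption for illustration only.

def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

pre = 0.05                                  # assumed ~5% risk of a serious outcome
print(post_test_probability(pre, 0.5))      # ~0.026: a "negative" rule only drops the risk from 5% to ~2.6%
```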

 

3.  Reed MJ, Newby DE, et al.  The ROSE (Risk Stratification of Syncope in the Emergency Department) Study.  J Am Coll Cardiol. 2010;55:713–21.

Finally, a brand-new syncope decision instrument, published in 2010.  What's new and different about this tool?  First, it reads like an excellent 9-page advertisement for BNP (my bias, although Biosite provided the test strips and the point-of-care machine, and paid for the author to travel to Spain to present the results).  The authors studied about 550 patients in a derivation cohort and about 550 patients in a validation cohort (results of both reported in the same study).  In each case, this was a little more than half of the potentially eligible patients; they missed a bunch of eligible patients, and the death rate was slightly higher in the non-enrolled patients.  Their tool, with the mnemonic BRACES, recommends admission if a patient has any of the following: BNP ≥300, bradycardia with HR <50, rectal exam heme positive, anemia with hemoglobin <9 g/dL, chest pain, ECG with Q waves, or saturation ≤94% on room air.  The authors reported an "excellent" sensitivity of 87.2% in the validation cohort to predict 30-day serious outcomes (specificity 65.5%).  No confidence intervals are reported anywhere, so who knows how much higher your risk is than simply missing 1 in 10 bad outcomes.  No demographic information on the patients, no sensitivity analysis.  As often happens, the sensitivity dropped from 92.5% in the derivation set to 87% in the validation set…what happens when it's externally validated down the road?  Likely a further reduction in sensitivity.  The authors use a lot of ink to discuss how great BNP was at predicting badness, although used alone it only picked up 41% of serious outcomes (an "excellent" predictor per the authors).  BNP increases with age, so could it be that BNP is just a surrogate for increasing risk in the elderly (order a BNP, or as Erik does, just ask the patient how old they are)?  One small pro-BNP point: in this study they didn't see the large number of missed arrhythmias (they missed other things instead).  Maybe there's some utility in ordering the BNP in selected patients as an additional screen for higher risk, but this study doesn't answer that question.
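For comparison with CHESS, here is a minimal sketch of the BRACES criteria as summarized above; the function, argument names, and Mr. Jones' plugged-in values are illustrative assumptions, so check the original paper before relying on any threshold.

```python
# Sketch of the ROSE ("BRACES") admission criteria as summarized in this write-up.
# Variable names and the example values are illustrative, not from the study itself.

def rose_recommends_admission(bnp_pg_ml: float, heart_rate: int, rectal_heme_positive: bool,
                              hemoglobin_g_dl: float, chest_pain: bool,
                              ecg_q_waves: bool, room_air_sat_pct: int) -> bool:
    return (bnp_pg_ml >= 300
            or heart_rate < 50
            or rectal_heme_positive
            or hemoglobin_g_dl < 9
            or chest_pain
            or ecg_q_waves
            or room_air_sat_pct <= 94)

# Mr. Jones: BNP 250, HR 72, heme-negative stool, normal hemoglobin, no chest pain,
# no Q waves, normal saturation -> the rule would not flag him for admission.
print(rose_recommends_admission(250, 72, False, 14.0, False, False, 98))  # False
```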

As Vijay very reasonably asked at the end of the evening: so now what?  Neither of the reviewed syncope decision tools works well, and we still have 1% of our ED patients presenting with syncope, approximately 4-6% of whom will have serious short-term outcomes not identified in the ED.  For the residents, first, it's a reminder that medicine's not easy.  It's not all algorithms and checklists, but that's also some of the beauty and joy of clinical practice (JoEllen said it better).  Channeling Harwood: if you use one of the syncope tools, especially SF, and it's positive, you have a slam-dunk admission.  If the tool is negative, talk it over with your attending.  EBM is ultimately a joining of the published evidence, clinical expertise, and patient values, and having a few years of experience helps.  Andrea added an important point: remember that prior workup matters, and a prior history of benign syncope in an individual matters.  Also, the value of decision tools lies not only in their rote application, but in recognizing that the components of the tool are individually high-risk factors and can be used to help develop your own clinical judgment.  Being familiar with the Ottawa ankle rules reminds me which parts of the ankle exam to really focus on.  As several in the audience pointed out, syncope is a complex presenting complaint and therefore may not lend itself to the easy development of a decision instrument.  However, new rules for a variety of complaints are being rolled out every month, and understanding how to critically appraise articles describing new decision tools is crucial to helping you separate the Leatherman Wave from the Bassomatic.

TIAs

Timing of the Evaluation of TIAs

Over the past 10 years, a number of studies have established a one-week risk for CVA of up to 10% after a TIA, with up to half of those CVAs occurring in the first 2 days.  Given this high risk of completed CVA, we sought to examine 3 questions:  is there an accurate and simple clinical scoring system which helps identify patients at high risk of early CVA after TIA?  Does urgent evaluation and treatment of TIA improve clinical outcomes and decrease the risk of CVA after TIA?  Is an ED observation unit accelerated diagnostic TIA protocol feasible, efficient, and cost-effective?
1.  Asimos AW et al. A Multicenter Evaluation of the ABCD2 Score’s Accuracy for Predicting Early Ischemic Stroke in Admitted Patients With Transient Ischemic Attack. Annals of Emergency Medicine. 2010;55:201-210.
This large multicenter study of 1667 patients evaluated the ability of the ABCD2 score (age, BP, clinical features, duration, diabetes) to predict the 7-day risk of CVA in patients admitted to the hospital within 24 hours after TIA.  This was a convenience sample, and although billed as a prospective study, it actually retrospectively examined the patients' charts; as a result, ABCD2 scores were unavailable for almost 35% of patients.  Although the authors attempted to impute (estimate and fill in) missing data based on other observed variables, this still limits the study's internal validity.  Twenty-three percent of patients were diagnosed with CVA within 7 days, in part reflecting that the study only included admitted patients (many minor TIAs were sent home).  Most CVAs occurred in the first 2 days.  The c statistic (the area under the ROC curve) was 0.59 for the risk of any ischemic CVA within 7 days, and 0.71 for disabling CVA within 7 days.  In other words, the ABCD2 score poorly predicted the risk of ischemic CVA, and was better, but still not great, at predicting disabling CVA.  Using a score of ≤3 to define low risk, the score still missed early CVAs: sensitivity was only 87% for any CVA within 7 days (96% for disabling CVA, 95% CI 88%-99%).
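For reference, here is a sketch of the usual ABCD2 point assignments; the code and its argument names are my own illustration of the commonly published scoring, not something taken from this study.

```python
# Sketch of ABCD2 scoring (age, blood pressure, clinical features, duration, diabetes),
# following the commonly published point assignments.  Illustrative only.

def abcd2_score(age: int, sbp: int, dbp: int, unilateral_weakness: bool,
                speech_disturbance: bool, duration_min: int, diabetes: bool) -> int:
    score = 0
    score += 1 if age >= 60 else 0
    score += 1 if (sbp >= 140 or dbp >= 90) else 0
    if unilateral_weakness:
        score += 2
    elif speech_disturbance:
        score += 1
    if duration_min >= 60:
        score += 2
    elif duration_min >= 10:
        score += 1
    score += 1 if diabetes else 0
    return score                      # 0-7; <=3 was treated as "low risk" in this study

# Example: 72 y/o, BP 150/85, isolated speech disturbance lasting 30 min, no diabetes
print(abcd2_score(72, 150, 85, False, True, 30, False))   # 1 + 1 + 1 + 1 = 4
```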
Their definition of "disabling CVA" was a Modified Rankin Scale score of greater than 2, which implies a degree of dependence.  The point was made that an MRS score of 2, although "mild," still means that a patient is unable to do everything they were capable of before the CVA; e.g., maybe an EP couldn't practice emergency medicine.  For us, that would be a pretty devastating CVA, so most people in the room wanted a score that is really good at predicting all CVAs, not just "disabling" ones.  Other points of conversation: there was tremendous variability in the admission rates for TIA among the participating hospitals (35-100%) and no standardized work-up or treatment plans, making this a really heterogeneous study.  They chose patient-oriented outcomes (clinical TIA/CVA) rather than disease-oriented outcomes (MRI findings), which is laudable, but there is an inherent difficulty in defining TIA vs. CVA early in the presentation (are some early TIAs really CVAs…probably yes).  We also don't know when patients received MRIs, and this is a problem since the natural history of MRI changes very early in TIA/CVA has yet to be well defined.
Bottom line:  the ABCD2 score was not sufficiently accurate in this study to use it to predict the short-term risk of CVA.  It is also not sensitive enough to use a low ABCD2 score to identify patients who can be sent home (that would have missed about 10% of early CVAs in this study).  However, it is definitely true that as the ABCD2 score goes up, the patient's risk of CVA increases, so calculating the ABCD2 score is useful to more accurately counsel high-scoring patients about their risk of CVA and encourage lifestyle modification, and these patients should be relatively easy to admit after discussion with the PMD.
2.  Rothwell PM et al. Effect of urgent treatment of transient ischaemic attack and minor stroke on early recurrent stroke (EXPRESS study): a prospective population-based sequential comparison. Lancet. 2007;370:1432-1442.
In this prospective before-and-after English study of 1278 patients presenting with TIA or minor stroke, the authors compared their local practice of having PMDs refer patients to a daily, appointment-based TIA clinic with no treatment immediately started (Phase I), to a TIA clinic where no appointment was necessary and treatment was immediately initiated if the diagnosis of TIA or minor stroke was confirmed (Phase II).  All patients received neuroimaging and carotid US; some patients received an echo.  Patients were followed for 24 months (100% follow-up), and the primary outcome was CVA within 90 days.  Risk of CVA at 90 days fell from 10.3% in Phase I to 2.1% in Phase II (huge).  Early treatment did not increase the risk of ICH or other bleeding.  Although this is an extremely positive study, it's important to look at the access-to-care timing.  Median delay to assessment in the TIA clinic fell from 3 days in Phase I to less than 1 day in Phase II.  Also, median delay to first prescription of treatment (ASA, clopidogrel, BP meds) fell from 20 days to 1 day.  So it's difficult to extrapolate this to the United States, where patients are seen, have a CT, and are started on ASA or other meds at the time of their presentation to the ED (not 20 days later).  Maybe some of the great outcomes in this study were because their Phase I was so slow.  The authors acknowledge that starting meds early was likely the largest factor in their positive results, although patients in Phase II requiring CEA also received surgery earlier than in Phase I.
Also, there is the possibility of the Hawthorne effect coming into play.  As an example, patients in Phase II were more likely to be on statins at the time of initial presentation than patients in Phase I, and it’s possible that other minor subtle changes in care were taking place during the second phase which influenced the overall outcomes of these patients and added to the positive results. 
Bottom line:  Early diagnosis and treatment of TIA is associated with improved clinical outcome, but it's unknown how much of their huge benefit was a reflection of the inefficiencies of their system at baseline.
3.  Ross MA et al. An Emergency Department Diagnostic Protocol for Patients With Transient Ischemic Attack: A Randomized Controlled Trial. Annals of Emergency Medicine. 2007;50:109-119.
Finally, this study examines the efficiency and potential cost savings of an ED observation unit-based accelerated diagnostic protocol (ADP) for TIA.  The study randomized 149 patients with TIA either to an ADP, which included cardiac monitoring, carotid dopplers, echo, neuro checks, and a neuro consult, or to hospital admission.  The same order set was used for ADP patients and admitted patients (although some admitted patients never received all of the tests).  The primary outcome was the index visit length of stay.  Secondary outcomes were 90-day costs and 90-day clinical outcomes (which included CVA).  Ninety-day follow-up occurred with all patients.  There were a large number of inclusion/exclusion criteria, resulting in many patients with TIA not being enrolled in the study.  However, when comparing the 2 groups, patients in the ADP had a significantly shorter length of stay than admitted patients (30 hours shorter, basically saving a day), and 90-day costs were $890 versus $1,547.  Approximately 85% of ADP patients were discharged.  Clinical outcomes were similar, with comparable rates of return visits, CVA, and major clinical events.  All ADP admissions were for clinical events detected on serial clinical exams; no admissions were primarily because of carotid stenosis, arrhythmia, or echo findings.  That being said, it's accepted that echo/carotid US findings that lead to emergent interventions are unusual (e.g., only 3% of echos show a cardioembolic source of CVA/TIA in the absence of clinical suspicion of a cardiac etiology, per the 2009 AHA Stroke Guidelines), and this study enrolled only 149 relatively low-risk patients, so it was not really powered to find significant differences in clinical outcomes.  If significant carotid stenosis is identified, early surgical intervention is associated with improved outcomes.  Carotid US and echo were performed more frequently and more quickly in the ADP group; a larger study would be needed to see whether this would make a clinical outcome difference.
A couple of other philosophical points.  Assigning hospital resources to a TIA protocol and prioritizing these patients to receive rapid diagnostic evaluations and neurology consults may mean diverting resources from other patients in the ED/hospital.  On the other hand, rapid throughput/discharge of ADP patients allows backfill of open beds upstairs with patients needing different or more specialized resources.  There is a potential for overuse of this protocol: enrolling such atypical patients that it becomes a "no-risk" rather than "low-risk" protocol.  The point was made that there are fewer TIA presentations than chest pain presentations to the ED, so there is a decent chance this wouldn't be a problem.
Bottom line:  an ED accelerated diagnostic protocol appears to be feasible and when enrolling low risk patients saves a day of hospital admission and significant money, with similar clinical outcomes.

CTA for low risk chest pain

Use of coronary CT Angiography in the evaluation of low-risk CP 


For starters, it's important to remember that the discussion is restricted to low-risk CP patients (our typical CPEP patients).  Tests will always have different performance characteristics in different patient populations.  Also, as discussed by CK, our goal was to emphasize the prognostic/clinical strength of the test (how will these patients do once they are discharged from the ED? can we pick up the 3-5% "missed ACS" cases?) rather than simply the diagnostic efficacy of the test (does the number of 50% blocked lesions match the number of lesions seen on invasive angiography?).  This is important because, as EK mentioned in passing, there is a whole other discussion out there about whether or not lesions seen on invasive angiography should be stented.  The COURAGE trial (April 12, 2007 NEJM) took patients with "stable" CP and documented ≥70% blockages on angiography or abnormal stress tests, and showed that mortality/MI rates were the same with maximal medical management or stenting.  So, our articles:

1.  Goldstein J, et al:  A Randomized Controlled Trial of Multi-Slice Coronary Computed Tomography for Evaluation of Acute Chest Pain.  JACC 2007;49(8):863-871.

This trial enrolled 197 low-risk patients and really compared 2 protocols: either 0/4-hour ECG/CIP then CTA, or 0/4/8-hour ECG/CIP then nuclear medicine (SPECT) stress testing.  There were no test complications in the CT group, and no major adverse cardiac events at 6 months in any of the patients sent home from either group.  Ultimately, accuracy was equivalent for the two approaches.  Twenty-four percent of the CTA group had intermediate disease on CTA or a nondiagnostic CTA; these patients all required a second test (SPECT).  There were also 11% false-positive CTAs.  The article emphasized the shorter ED length of stay for the CTA patients, but this was largely because of the additional time built into the SPECT protocol (a shorter rule-out time would have cut out much of the difference).  There was a several-hundred-dollar difference in "cost of care," but as SA and CM pointed out, "cost of care" determinations are pretty much hand-waving.  Also, there was only a 4% rate of disease in the whole group; in this small study of only about 200 patients, safety conclusions will have wide confidence intervals.

2.  Hollander J et al:  Coronary Computed Tomographic Angiography for Rapid Discharge of Low-Risk Patients With Potential Acute Coronary Syndromes.  Annals of Emergency Medicine, In Press.  

This study evaluated 568 patients with low TIMI scores using coronary CTA, either receiving CTA without serial CIP (some received one set) or CTA after an observation period (if they came to the ED at night).  Everybody did great (except for the guy who died in a car crash).  There were no major adverse cardiac events at 30 days (0%, 95% CI 0% to 0.8%).  Again, this was a very low-risk population (6 patients out of 568 received stents).  The conclusion was that CTA can be used to safely send home low-risk patients (<1% risk of MI/death at 30 days).  One large issue with the study: patients were enrolled in part because emergency physicians had decided to order a coronary CTA on them, introducing a significant selection bias.
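That "95% CI 0% to 0.8%" for zero events is a good place to recall the "rule of three": with 0 events in n patients, an approximate upper 95% bound on the event rate is about 3/n. The sketch below is a generic illustration of that shortcut; the paper's exact CI method may give a slightly different bound.

```python
# "Rule of three": when 0 events are observed in n patients, an approximate upper
# 95% confidence bound on the true event rate is roughly 3/n.  Illustration only.

def rule_of_three_upper_bound(n_patients: int) -> float:
    return 3 / n_patients

print(f"0 events in 568 patients -> upper bound ~{rule_of_three_upper_bound(568):.2%}")
# ~0.53%; with only ~200 patients (as in studies 1 and 3), the same calculation
# gives an upper bound well above 1%, which is why those CIs are so wide.
```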

 3. Takakuwa K,  Halpern E:  Evaluation of a “Triple Rule-Out” Coronary CT Angiography Protocol:  Use of 64-Section CT in Low-to-Moderate Risk ED Patients Suspected of Having Acute Coronary Syndrome.  Radiology 2008;248(2):438-446. 

This study had the same primary outcome of adverse clinical outcomes at 30 days and 197 low-risk patients, but used the "triple rule-out" protocol, which involves higher radiation but evaluates the rest of the thorax.  The negative predictive value for CTA was 99.4%, but this was a small study in a low-risk population, so the CI was 96.9%-100%.  They did find other stuff: PEs, dissections, and pancreatic and pulmonary masses, among others.  Unfortunately, no clinical information was reported about the patients, so it's impossible to say whether clinicians were already worried about these other diseases or not (serendipitous finds vs. clinically suspected).  AN made the excellent point that in his case, an MRI (as a CT would have) diagnosed his constrictive pericarditis and gave him a new lease on life.  As a counterpoint, CK related how a CT with a ?tumor finding led to her unnecessary surgery.  Always a balance.


Other things to remember about coronary CTA:  

-Static rather than Functional (stress test) study.

-For now, the patient needs to be in normal sinus rhythm, and you usually need beta-blockers/NTG to slow the HR and maximally open the vessels to get good pictures.  Stents and high calcium scores muck up the pictures.
  
-Think about the potential complications/patient exclusions.  The radiation dose is substantial (10-20 mSv), which is estimated to increase overall cancer risk by 1 in 200 to 1 in several thousand (see the rough back-of-envelope sketch after this list).  Doesn't mean not to do it, but easily ordered technologies tend to be overused; just something to think about.  Along the same line, what happens when the patient returns the next year with similar pain?  Another CT and more radiation?  How long are they "good for"?  Unknown.

-In these studies, there were no renal issues from the dye load, but they (and all studies so far) have been small: no more than several hundred patients.
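As a rough illustration of where cancer-risk figures like those above come from, the back-of-envelope sketch below applies a population-averaged linear no-threshold coefficient of about 5% per sievert; organ-, age-, and sex-specific models (which drive the 1-in-a-few-hundred estimates for younger patients) will give different numbers, and none of these studies used this particular calculation.

```python
# Back-of-envelope lifetime cancer-risk estimate for a 10-20 mSv coronary CTA,
# using a population-averaged linear no-threshold coefficient of roughly 5% per Sv.
# Illustrative approximation only; not the estimate used by any of these studies.

RISK_PER_SV = 0.05                       # ~5% lifetime cancer risk per sievert (population average)

for dose_msv in (10, 20):
    risk = (dose_msv / 1000) * RISK_PER_SV
    print(f"{dose_msv} mSv -> ~{risk:.2%} (about 1 in {round(1 / risk)})")
# ~0.05% (1 in 2000) to ~0.1% (1 in 1000); age- and sex-specific models give higher figures.
```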

Can I wrap it up already?  The room was pretty evenly split at the end of the night on whether they would advocate for this test in the vignette patient.  I think the potential speed of the test (at least compared to our current CPEP) was appealing to some.  To others, the potential to find other diseases/explanations for the pain is an important selling point ("triple rule-out").  Remember, in these low-risk patients there is such a small chance of a poor outcome that you could just send them all home without any testing and be right 90-95% of the time, so we really need much larger studies in this low-risk group to be happy about safety (the CIs for adverse cardiac outcomes are just too wide in studies 1 and 3; study 2 found a <1% risk of adverse events at 30 days but had significant selection bias).  For now, based on available data, coronary CTA is probably safe in low-risk CP patients (similar performance to stress echo or nuclear stress/SPECT), and if you are trying to get more "bang for your buck" (thinking cardiac vs. PE, or cardiac vs. dissection), this might be the way to go.  SA also brought up the excellent point that, depending on where you practice, if it's a small hospital this test can be sent out by tele-radiology for interpretation even if you don't have a CTA radiologist on-site, and you might not have a cardiologist available to do stress echoes.  So it comes down to patient selection (is CTA safe for your patient, and how clear is their clinical presentation?) and what resources/alternative strategies are available at your institution.