Neurology India
Open access journal indexed with Index Medicus
COMMENTARY
Year : 2017  |  Volume : 65  |  Issue : 6  |  Page : 1262-1263

Predicting and explaining outcome after stroke


Department of Neurology, All India Institute of Medical Sciences, New Delhi, India

Date of Web Publication: 10-Nov-2017

Correspondence Address:
Dr. Kameshwar Prasad
Department of Neurology, All India Institute of Medical Sciences, New Delhi
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/0028-3886.217998




How to cite this article:
Prasad K, Singh N. Predicting and explaining outcome after stroke. Neurol India 2017;65:1262-3

How to cite this URL:
Prasad K, Singh N. Predicting and explaining outcome after stroke. Neurol India [serial online] 2017 [cited 2017 Dec 13];65:1262-3. Available from: http://www.neurologyindia.com/text.asp?2017/65/6/1262/217998


Prediction in clinical settings commonly involves one or more of three tasks: predicting the risk of occurrence of a disease, its diagnosis, or its prognosis. Predicting the risk of a disease means estimating the probability that a person without the disease will develop it during follow-up over a certain period. Predicting diagnosis involves determining the probability that a patient not previously known to have the disease has it at this instant in time. Prediction of outcome requires predicting a future disease-related outcome in a patient who already has the disease. Prediction of outcome in patients with stroke serves many purposes. Clinically, it can be used to individualize treatment and can form the basis for shared decision-making about interventions such as thrombolysis, hemicraniectomy, or mechanical thrombectomy. A reasonably accurate prediction provides more reliable information to patients and/or caregivers than a rough guess. In research, predictors can help in stratifying patients entering randomized controlled trials, in selecting a random allocation method (such as minimization) to balance prognostic factors at baseline, and in pre-specifying subgroups for analyzing the results of randomized trials or meta-analyses according to baseline predictors.

Several kinds of variables can be independent predictors of outcome: demographics such as age and sex, medical history and physical examination findings, co-morbidities, pre-morbid state, disease severity (assessed clinically or through biomarkers), test results, and others. A recent paper compared several scoring methods based on different sets of variables for predicting 90-day functional outcome after stroke.[1] Stroke severity as measured by the National Institutes of Health Stroke Scale (NIHSS) has been an established strong predictor of outcome since 1999,[2],[3] and it is also a major determinant of outcome at 30 days, 90 days, or one year. A paper by Bhaskar et al., in the current issue of the Journal, illustrates this point.[4] The authors used logistic regression to identify predictors/determinants of outcome. Regression methods have become common with the easy availability of computers and statistical software, but they require a thoughtful approach and careful interpretation of the results. For example, judging the relative importance of a variable as a predictor or determinant solely on the basis of the associated P value (as in Bhaskar et al.'s paper) is often misleading and not recommended. It is the magnitude of the coefficients (or the derived odds ratios) that helps in judging the importance of predictors/determinants. While interpreting the coefficients, researchers need to keep in mind the purpose of a given regression analysis: is it 'prediction' or 'explanation'? Pedhazur states, “Explanatory research may serve as the most powerful means for prediction. Yet the importance of distinguishing between the two types of research activities cannot be overemphasized. The distinction between predictive and explanatory research is particularly germane to the valid use of regression analysis and to the interpretation of results.”[5]
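The point about reading importance from coefficient magnitudes rather than P values alone can be illustrated with a minimal sketch on synthetic data. The variable names (NIHSS, age) and the coefficients used to simulate outcomes are illustrative assumptions, not values from any of the cited papers; note also that odds ratios are per-unit effects, so predictor scale matters when comparing them.

```python
# Fit a logistic regression and read predictor effects as odds ratios
# (exponentiated coefficients). Synthetic data; coefficients are assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
nihss = rng.uniform(0, 25, n)          # simulated stroke severity score
age = rng.uniform(40, 90, n)           # simulated age in years
# Assumed true model: severity dominates the risk of poor outcome (y = 1)
logit = -4.0 + 0.25 * nihss + 0.02 * age
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([nihss, age])
# C=1e9 makes the fit effectively unpenalized (plain maximum likelihood)
model = LogisticRegression(C=1e9, max_iter=5000).fit(X, y)
odds_ratios = np.exp(model.coef_[0])   # per-unit odds ratios
print(dict(zip(["NIHSS", "age"], odds_ratios.round(2))))
```

With a large enough sample, both predictors can have tiny P values, yet the odds ratios make clear that a one-point change in NIHSS shifts the odds far more than a one-year change in age.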

For explanatory purposes, it is important that relevant variables are not omitted. For determining functional outcome (measured using the modified Rankin Scale [mRS]), an important variable is pre-stroke disability. Omission of such variables (technically called 'misspecification of the model') results in biased estimates of the coefficients. Another important consideration in an explanatory model is error in the measurement of independent variables, such as the NIHSS for stroke severity. The errors may be systematic (for example, consistently low scores by untrained observers) or random (trained but tired observers). Such errors bias the coefficients downward. These and other considerations, such as collinearity and the underlying biology, are important but often ignored by authors using regression analysis to explain outcome.
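The downward bias (attenuation) caused by random measurement error can be demonstrated in a few lines of simulation. This is a purely synthetic sketch: the severity scale, the true coefficient of 0.25, and the size of the measurement error are all assumptions chosen to make the effect visible.

```python
# Attenuation sketch: adding random noise to a predictor pulls its fitted
# logistic-regression coefficient toward zero. Entirely synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
severity_true = rng.normal(10, 4, n)               # true severity
p = 1 / (1 + np.exp(-(-2.5 + 0.25 * severity_true)))
y = rng.binomial(1, p)                             # outcome from true severity

def fitted_coef(x):
    # unpenalized logistic fit of outcome on a single predictor
    return LogisticRegression(C=1e9, max_iter=5000).fit(
        x.reshape(-1, 1), y).coef_[0, 0]

coef_clean = fitted_coef(severity_true)                        # near 0.25
coef_noisy = fitted_coef(severity_true + rng.normal(0, 4, n))  # attenuated
print(f"clean: {coef_clean:.2f}  noisy: {coef_noisy:.2f}")
```

Here the measurement error has the same standard deviation as the true severity, and the estimated coefficient drops to roughly half its true value, exactly the downward bias described above.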

For prediction, Pedhazur[5] states in his book, “assuming one has a valid and reliable measure of the criterion, predictor variables are selected, preferably based on theoretical considerations and previous research evidence.” Before statistical modelling, clinical reasoning is important for selecting predictors. Predictors that are unreliably measured, or whose measurement carries a relatively high financial cost or burden, may be excluded. Further selection can be based on the strength of their unadjusted (univariable) association with the outcome, or by applying an automated variable selection method (forward, backward, or stepwise) in the regression modelling. A few interaction terms with a prior biologic rationale may be examined, but they rarely add to the predictive ability of the model.
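One of the automated strategies mentioned above, backward elimination, can be sketched with scikit-learn's recursive feature elimination, which repeatedly drops the weakest predictor from a fitted model. The three candidate predictors here (two informative, one pure noise) are illustrative assumptions on synthetic data.

```python
# Backward elimination via recursive feature elimination (RFE):
# repeatedly refit and drop the predictor with the smallest coefficient.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 3000
X = rng.normal(size=(n, 3))   # columns: [severity_z, age_z, noise]
# Assumed true model uses only the first two (standardized) predictors
p = 1 / (1 + np.exp(-(0.9 * X[:, 0] + 0.4 * X[:, 1])))
y = rng.binomial(1, p)

selector = RFE(LogisticRegression(C=1e9, max_iter=5000),
               n_features_to_select=2).fit(X, y)
print(selector.support_)      # which predictors survive elimination
```

Automated selection like this is convenient, but as the text notes, it is no substitute for choosing candidates on clinical reasoning and prior evidence first.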

All prediction models should undergo internal and external validation. Internal validation on the development data set is done using cross-validation, data splitting, or bootstrapping. External validation requires assessing the performance of the prediction model on a new data set using measures such as discrimination, calibration, sensitivity/specificity, decision curve analysis, or net reclassification improvement (NRI). The area under the receiver-operating characteristic curve (AUC), as reported by Bhaskar et al.,[4] is an index of discrimination, which indicates the ability of the prediction model to differentiate between those who do and do not experience the outcome event (e.g., between those with an mRS score >2 and those with a score of 2 or less). Both the AUC and its more general form, the C-index (concordance index, also called the c-statistic), estimate the extent to which a model predicts a higher probability of an event among patients who will have the event than among those who will not. An AUC or C-index of 0.75 or above is generally considered excellent, but it is not an index of predictive accuracy. Good discrimination is necessary but not sufficient for assessing a model's predictive capability. A second necessary property of a good prediction model is 'calibration', the extent of agreement between the outcomes predicted by the model and the observed outcomes (for example, mRS >2). A poorly calibrated model will underestimate or overestimate the risk of the outcome. Calibration is usually illustrated with a graphical plot (a 'calibration plot'), in which the x-axis represents the predicted risk and the y-axis the observed risk. The extent to which the observed risk differs from the predicted risk is a measure of miscalibration, but a question then arises as to how much miscalibration renders a model useless. A statistical test of significance, the Hosmer-Lemeshow test, can be applied to calibration, but its value is highly questionable, and numerical estimates of calibration are of similarly questionable worth. Moreover, calibration is a joint property of the model and the particular cohort of patients on which it is developed; a model showing good calibration in one cohort may therefore show poor calibration in another.
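The two properties discussed above, discrimination and calibration, can both be computed in a short sketch. The data, the train/test split (a crude form of internal validation by data splitting), and the model are illustrative assumptions; `calibration_curve` returns the binned observed and predicted risks that a calibration plot would display.

```python
# Discrimination (AUC) and calibration (binned observed vs predicted risk)
# for a logistic model evaluated on held-out synthetic data.
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 4000
x = rng.normal(size=(n, 2))
p = 1 / (1 + np.exp(-(1.5 * x[:, 0] + 0.8 * x[:, 1])))  # assumed true risks
y = rng.binomial(1, p)

# internal validation by simple data splitting: fit on half, assess on half
model = LogisticRegression(C=1e9, max_iter=5000).fit(x[:2000], y[:2000])
pred = model.predict_proba(x[2000:])[:, 1]

auc = roc_auc_score(y[2000:], pred)                      # discrimination
prob_true, prob_pred = calibration_curve(y[2000:], pred, n_bins=5)
print(f"AUC = {auc:.2f}")                                # calibration: plot
# prob_pred (x-axis) against prob_true (y-axis); the diagonal is perfect
```

Because the fitted model matches the data-generating process here, the binned observed risks sit close to the predicted risks; a miscalibrated model would show systematic departures from the diagonal.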

The classification table approach has been used to express the results of prediction models in clinical terms, but it is useful only for comparing two models and cannot be used to determine whether an individual model should be used in clinical practice. A clinically useful approach to assessing a prediction model is 'decision curve analysis'. Decision curve analysis requires choosing a predicted probability threshold and relating it to the relative value of false-positive and false-negative results, to obtain the net benefit of using the model at that threshold. The Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD) statement[6] provides references to further details of this analysis.
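The net-benefit calculation behind decision curve analysis is compact enough to sketch directly: at threshold pt, net benefit = TP/n − (FP/n) × pt/(1 − pt), where the odds pt/(1 − pt) encodes the relative value of false positives against true positives. The outcomes and predicted risks below are synthetic and illustrative.

```python
# Net benefit of acting on a prediction model at a probability threshold,
# compared with the "treat all" default. Synthetic illustration.
import numpy as np

def net_benefit(y, pred, pt):
    """Net benefit of treating every patient with predicted risk >= pt."""
    n = len(y)
    treat = pred >= pt
    tp = np.sum(treat & (y == 1))          # treated patients with the event
    fp = np.sum(treat & (y == 0))          # treated patients without it
    return tp / n - (fp / n) * pt / (1 - pt)

rng = np.random.default_rng(4)
y = rng.binomial(1, 0.3, 1000)             # outcomes, ~30% prevalence
# crude but informative risk scores: higher on average for true events
pred = np.clip(0.3 * y + rng.uniform(0, 0.6, 1000), 0, 1)

for pt in (0.2, 0.4):
    treat_all = y.mean() - (1 - y.mean()) * pt / (1 - pt)
    print(f"pt={pt}: model={net_benefit(y, pred, pt):.3f}  "
          f"treat-all={treat_all:.3f}")
```

A full decision curve simply repeats this comparison over a range of thresholds; the model is clinically useful at thresholds where its net benefit exceeds both "treat all" and "treat none" (net benefit zero).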

In summary, stroke investigators frequently use regression analysis, but as Pedhazur put it, “unfortunately, all too often one encounters almost mindless interpretations of regression analysis in non-experimental research.” A thoughtful approach that distinguishes prediction from explanatory research, together with adherence to the TRIPOD guidelines for reporting prediction models, will go a long way toward optimizing the use of regression models in stroke research and practice.

 
References

1. Quinn TJ, Singh S, Lees KR, Bath PM, Myint PK (on behalf of the VISTA collaborators). Validating and comparing stroke prognosis scales. Neurology 2017;89:997-1002.
2. Adams HP, Davis PH, Leira EC, Chang KC, Bendixen BH, Clarke WR, et al. Baseline NIH Stroke Scale score strongly predicts outcome after stroke: A report of the Trial of Org 10172 in Acute Stroke Treatment (TOAST). Neurology 1999;53:126-31.
3. Prasad K, Dash D, Kumar A. Validation of the Hindi version of the National Institute of Health Stroke Scale. Neurol India 2012;60:40-4.
4. Bhaskar S, Stanwell P, Bivard A, Spratt N, Walker R, Kissos GH, et al. The influence of initial stroke severity on mortality, overall functional outcome and in-hospital placement at 90 days following acute ischemic stroke: A tertiary hospital stroke register study. Neurol India 2017;65:1252-9.
5. Pedhazur EJ. Multiple Regression in Behavioral Research: Explanation and Prediction. 3rd ed. USA; 1999.
6. Moons KG, Altman DG, Reitsma JB, Ioannidis JP, Macaskill P, Steyerberg EW, et al. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): Explanation and elaboration. Ann Intern Med 2015;162:W1-73.




 

Published by Wolters Kluwer - Medknow