Year : 2015  |  Volume : 63  |  Issue : 5  |  Page : 808--809

Author's reply

Venkatesh S Madhugiri 
 Department of Neurosurgery, Stanford University Medical Center, Stanford, CA-94305, USA

Correspondence Address:
Venkatesh S Madhugiri
Department of Neurosurgery, Stanford University Medical Center, Stanford, CA-94305

How to cite this article:
Madhugiri VS. Author's reply.Neurol India 2015;63:808-809




I thank Vilanilam et al. for their insightful comments on the paper. In their analysis, they have raised several germane and important issues. I would like to respond by pointing out certain facts:

The aim of this analysis was not to provide a ranking of clinical neuroscience institutes in India based on arbitrary criteria, akin to what the print media does each year. The title explicitly states that the paper analyzes publication performance and research output. I have stated in several places that exposure to clinical/basic research and paper writing is only one aspect of residency training – albeit the most easily quantifiable aspect.[1] Therefore, the rankings listed in this paper pertain only to clinical and basic research output. This paper is not (nor does it claim to be) an overall or general ranking of clinical training programs.

I agree that ours is a resource-challenged setting. However, I would like to point out that I have very specifically looked at the institutes that train neurology and neurosurgery residents, which are likely to have a greater concentration of resources. While it is not necessary (although it may be desirable) for all hospitals providing clinical services to engage in research, I cannot agree that any teaching institute, no matter how burdened the faculty may be by the clinical workload, is exempt from the necessity of performing research and training residents to do so. The fact that the dissertation is a mandatory part of clinical residency training automatically implies that training in research methodology needs to be embedded in the curriculum. The goal is to train fresh specialists in the practice of evidence-informed medicine and research methodology, thereby improving the quality of neuroscience research and practice in India.[2]

I entirely concur with Vilanilam et al. about the shortcomings of citation-based scientometry, all of which are obviously applicable to this paper. These shortcomings are, I believe, widely known and have not been dwelt upon in the paper in the interests of brevity.
I can only hope that the authors have not diagnosed me as a bad case of “Impactitis!” Why, then, did I opt to use simple citation-based metrics? The answer has two parts. First, I have not ranked the institutes based only on citation metrics; I have also provided a ranking based on the number and type of papers published. Second, these metrics are the only measurable indices for which the data are reliably and easily available. I am fully aware of the value of the v-index.[3] Indeed, I had also attempted to normalize publication performance based on the level of funding available to each institute and the seniority of the faculty employed therein. Whereas it would be relatively easy to obtain data about the funds granted to each institute, the proportion of funds designated for research and actually used for it may be impossible to compute. Indeed, I was not able to compute this figure with any reasonable degree of certainty even for my own institute. Nor was it possible to readily obtain data about faculty seniority.

These are the reasons why, despite being fully aware of the shortcomings of citation-based metrics, I opted to use them.

I agree with Vilanilam et al. that, in teaching institutes at least, it is imperative that time and funds be protected, allotted, and actually utilized for research. My suggestion, therefore, would be to permit two categories of clinicians in teaching institutes – pure clinicians and clinician-scientists. The pure clinicians would confine themselves largely to clinical work (and clinical research, if they were so inclined). They would be evaluated almost solely on the basis of their clinical output, contributions to resident teaching, and administrative duties. The clinician-scientists could be expected to shoulder less of the clinical burden and would have protected time for research. Obviously, publication performance would form an important part of their evaluation.

Finally, I would like to draw the attention of Vilanilam et al. to the most disturbing finding of this study – that a significant proportion of teaching institutes have not published a single paper over the past 5 years, despite every resident presumably submitting a dissertation. If this study draws attention to this and the other elephants in the room – the poor neuroscience research output from India, the rise in the number of meaningless papers, the unfortunate “publish to get promoted” milieu, and the absence of any research or publication output from several training institutes – the purpose of the analysis will have been served.

Financial support and sponsorship


Conflicts of interest

There are no conflicts of interest.


1. Madhugiri VS. Publication performance and research output of neurology and neurosurgery training institutes in India: A 5-year analysis. Neurol India 2015;63:338-46.
2. Bala A, Gupta BM. Mapping of Indian neuroscience research: A scientometric analysis of research output during 1999-2008. Neurol India 2010;58:35-41.
3. Vaidya JS. V-index: A fairer index to quantify an individual's research output capacity. Reforming research in the NHS. Br Med J 2005;331:1339c-1340c.