CORRESPONDENCE
Year : 2016  |  Volume : 64  |  Issue : 6  |  Page : 1396--1398

Cited heavily, taken lightly, matters hardly

George C Vilanilam1, MS Gopalakrishnan2, Satyajeet Misra3, Nilay Chatterjee4
1 Department of Neurosurgery, Sree Chitra Tirunal Institute for Medical Sciences and Technology, Trivandrum, Kerala, India
2 Department of Neurosurgery, Jawaharlal Institute of Post Graduate Medical Education and Research, Puducherry, India
3 Department of Anaesthesiology, All India Institute of Medical Sciences, Bhubaneswar, Odisha, India
4 Department of Anaesthesia, ICU and Pain Management, Khoula Hospital, Muscat, Sultanate of Oman

Correspondence Address:
George C Vilanilam
Department of Neurosurgery, Sree Chitra Tirunal Institute for Medical Sciences and Technology, Trivandrum, Kerala
India




How to cite this article:
Vilanilam GC, Gopalakrishnan M S, Misra S, Chatterjee N. Cited heavily, taken lightly, matters hardly.Neurol India 2016;64:1396-1398


How to cite this URL:
Vilanilam GC, Gopalakrishnan M S, Misra S, Chatterjee N. Cited heavily, taken lightly, matters hardly. Neurol India [serial online] 2016 [cited 2019 Dec 16 ];64:1396-1398
Available from: http://www.neurologyindia.com/text.asp?2016/64/6/1396/193833



Sir,

“Although I really know it is not a great paper, I secretly get a kick out of the response”

– the late Oliver Lowry (biochemist), who authored the most cited work in history, describing an assay to determine the amount of protein in a solution (305,000 citations).[1]

The article by Pandey et al.,[2] on the most cited papers in Neurology India inspires us to strive for high-quality scientific work that attracts peer attention and citations. The key message is that an article should stand on its own scientific merit rather than on the weight of the impact factor or the 'importance' of the journal that publishes it. Nevertheless, is “cite-worthiness” a true measure of an article's scientific worth?[3] We decided to subject this premise to critical thought and peer review, analysing citation trends, temperature, and timelines.

Cited heavily

The Science Citation Index (SCI), initiated by the Institute for Scientific Information (ISI) in 1962, and Eugene Garfield's impact factor (1955) aim to measure the quality of research output and the scientific influence of an article. Teasdale and Jennett's [4] description of the coma scale is the most cited neurosurgical work to date. However, in the ocean of citations across branches of scientific knowledge, neurosurgical papers are just a minuscule drop [Figure 1].{Figure 1}

Conventional citation-based metrics have many finer nuances; there is more to them than meets the eye.[5] We cite a piece when we are inspired, informed, and initiated into action by it. We write a scientific article to inform the world about new science, but also for promotions, jobs, fame, and visibility.[6],[7]

In a bibliometric world driven by “impact factor mania” and “impactitis,”[5] pure unadulterated science often takes a backseat and we may be forced to cite for the wrong reasons. The melee to get cited “by hook or by crook” could compel us to cut corners, falsify data, or fabricate research. Some papers retracted from prestigious journals continue to get cited heavily for the research idea or the ethical aspect.[5]

Taken lightly

Not always is high and mighty science cited heavily. Watson and Crick's DNA double helix, Alexander Fleming's penicillin, and the discovery of high-temperature superconductors have won Nobel prizes but are not among the top 100 cited papers.[7] Thus, mighty science may not often get cited and what is cited need not always be mighty science. Stories about how highly cited, “Nobel-calibre” papers were originally rejected by prestigious journals, thereby delaying scientific developments, are not uncommon.

Why do we miss citations?

Informal influence

Ideas are rarely 100% original. Our thoughts and research inspiration are heavily influenced by the works of others. However, the influence is often considered informal and does not translate into a formal citation. Hence, significantly influential work may miss a note of honorable mention. “It is a sobering fact that 90% of articles that have been published in academic journals are never cited,” wrote Meho in 2007. But these uncited or seldom-cited works could have seriously influenced mass scientific thinking and opinions.[8]

Matthew effect

Successful publishing in highly selective journals begets more success. Thus, papers from the more famous centres and authors tend to get cited more. The average researcher would like to associate his study with those of the stalwarts, even if they have not influenced the study, and this helps perpetuate the cycle. The tendency for the rich to get richer and the poor to get poorer was designated the “Matthew effect” by the sociologist Robert K. Merton,[6],[8] and citation trends are also influenced by this effect.

Multiple motives

Excessively competitive, citation-based scientometrics inspires researchers to form unholy alliances, promote self-citation, and omit citations of competing researchers in the same field. Stooping low for power, prestige, and funding in a “bibliometric world” is not uncommon.[5] Multiple publications with different titles but the same data and central message could also set a wrong trend of inflating self-citations.

The umbrella effect

Heavy and conceptual science often gets quickly incorporated into textbooks and review articles, thus limiting opportunities for citation of the original work. Thus, Einstein's Theory of Relativity became common scientific thinking in treatises and texts, thereby being denied an opportunity to become a citation classic. Papers describing common experimental methods, review articles, and works in “fashionable” research fields tend to be more highly cited than those describing fundamental conceptual advances.[9],[10] Review articles absorb many original works under their “umbrella effect” denying citations to the original magnum opus.

Gestation period

Citation half-life (the number of years, going back from the current year, that cover 50% of the citations received by a journal in the current year) is also an important measure of the shelf life of an article. Truly original work usually takes longer than 2 years to be noticed and appreciated.[11] Ed Lewis's article on the genetic control of embryonic development, the cornerstone of the work that won him the Nobel Prize in 1995, was cited little in its first 2 years and took 6 more years to reach its peak rate of citations.

Thus, we need to give an idea sufficient time to accrue citations, and metrics such as the citations per year (CPY) could serve as an impatient and imperfect surrogate [Figure 2].[11] The top 100 cited neurosurgical papers in history were published during 1976–1995. This concept may give a small edge to the chronologically older papers and journals but sounds scientifically fair.{Figure 2}
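As an illustrative sketch (the function names, example data, and the Lowry arithmetic below are ours, not taken from the cited sources), these two age-adjusted measures can be computed from raw citation counts:

```python
def citations_per_year(total_citations, publication_year, current_year):
    """Citations per year (CPY): a crude rate that adjusts for article age."""
    years = max(current_year - publication_year, 1)
    return total_citations / years


def citation_half_life(cites_by_age):
    """Years, counting back from the current year, that account for 50%
    of this year's citations to a journal.

    cites_by_age[0] = this year's citations to articles published this year,
    cites_by_age[1] = citations to articles published one year ago, and so on.
    """
    total = sum(cites_by_age)
    running = 0
    for age, count in enumerate(cites_by_age):
        running += count
        if running >= total / 2:
            return age + 1  # number of years needed to cover half the citations
    return len(cites_by_age)


# Hypothetical illustration: ~305,000 citations to Lowry et al. (1951) by 2016
print(round(citations_per_year(305000, 1951, 2016)))  # → 4692
```

Note how CPY rewards a steady citation rate rather than raw totals, which is why it only partially offsets the head start enjoyed by older papers.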

Matters hardly

In the era of information technology, the metrics of what we read and what influences our research thinking are fast changing. Newer measures such as the Eigenfactor and the Altmetric score have crept into our lingua franca. The Eigenfactor assumes that scientific literature forms a vast network of articles connected to one another by their citations, and uses the structure of this network to measure the relative impact of journals.[12] Altmetrics is the creation and study of new metrics, based on the social Web, for analysing and informing scholarship. Sometimes your work may not be cited, but it is being read and used to reform daily clinical practice. That is all that matters.
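The network view behind the Eigenfactor can be sketched with a simplified, PageRank-style power iteration. This is a toy illustration only, not the published Eigenfactor algorithm (which additionally excludes self-citations and handles journals with no outgoing citations), and the citation matrix below is hypothetical:

```python
def influence_scores(citations, damping=0.85, iterations=100):
    """Iteratively estimate journal influence from a citation network.

    citations[i][j] = number of citations from journal i to journal j.
    A journal is influential if it is cited by other influential journals;
    the result approximates the leading eigenvector of the normalized
    citation matrix.
    """
    n = len(citations)
    scores = [1.0 / n] * n
    for _ in range(iterations):
        new = [(1 - damping) / n] * n  # baseline share for every journal
        for i in range(n):
            out = sum(citations[i])
            if out == 0:
                continue  # toy sketch: ignore journals that cite nothing
            for j in range(n):
                # journal i passes its influence along its outgoing citations
                new[j] += damping * scores[i] * citations[i][j] / out
        scores = new
    return scores


# Three hypothetical journals: journal 0 is cited heavily by both others,
# so it ends up with the highest influence score.
net = [[0, 1, 1],
       [3, 0, 1],
       [4, 1, 0]]
scores = influence_scores(net)
```

The design choice worth noting is the normalization by outgoing citations: a citation from a journal that cites sparingly carries more weight than one from a journal that cites everything, which is precisely what distinguishes such network measures from raw citation counts.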

Break the publication cascade

Authors usually follow the publication cascade to publish their scientific work. They begin by submitting their work to the more “fashionable,” higher impact factor journals.[6] The article then enters a rejection–resubmission cycle and finally manages to find, or lose, a footing. An email with the words “We regret that we receive many more meritorious submissions than we can publish” could perhaps be found in every researcher's inbox. Furthermore, the stringent limits imposed by high-impact speciality journals can force authors to omit valuable information, sometimes limiting the reach and citation rate of a paper.[6] A rejection compels a researcher to spend time revising and resubmitting instead of working on new ideas. Sometimes the rejection–resubmission cycle so demoralizes a scientist that the idea is throttled and the paper is never published. Thus, a great idea may be delayed or smothered by the bureaucracy of science.

The message to take home is that good papers, even when published in a low impact factor journal, are valued and quickly recognized by the scientific community. They have the potential to achieve higher citations than the average papers in more “scholarly” journals. We should not judge science by its publication venue, and should take care to cite appropriately in our manuscripts.[6] We end with a sincere hope that in the future, the article by Pandey et al.,[2] (which inspired this discussion) will become a “citation classic” (more than 400 citations), thus joining the hall of fame of the elite few articles that it measures and cites.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.

References

1Lowry OH, Rosebrough NJ, Farr AL, Randall RJ. Protein measurement with the Folin phenol reagent. J Biol Chem 1951;193:265-75.
2Pandey P, Subeikshanan V, Madhugiri VS. Highest cited papers published in Neurology India: An analysis for the years 1993–2014. Neurol India 2016;64:703.
3Casadevall A, Fang FC. Impacted science: Impact is not importance. MBio 2015;6:e01593-15.
4Teasdale G, Jennett B. Assessment of coma and impaired consciousness: A practical scale. Lancet 1974;304:81-4.
5Vilanilam GC, Sudhir BJ, Kumar KK, Abraham MA, Nair SN. Are Indian neuroscience clinicians perishing without publishing? Neurol India 2015;63:807.
6Lawrence PA. The politics of publication. Nature 2003;422:259-61.
7The top 100 papers. Nature News. Available from: http://www.nature.com/news/the-top-100-papers-1.16224. [Last accessed on 2016 Jul 20].
8Casadevall A, Fang FC. Causes for the persistence of impact factor mania. MBio 2014;5:e00064-14.
9Van Diest PJ, Holzel H, Burnett D, Crocker J. Impactitis: New cures for an old disease. J Clin Pathol 2001;54:817-9.
10Alberts B. Impact factor distortions. Science 2013;340:787.
11Aksnes DW. Characteristics of highly cited papers. Res Eval 2003;12:159-70.
12Durieux V, Gevenois PA. Bibliometric indicators: Quality measurements of scientific publication. Radiology 2010;255:342-51.