By Leila Jameel, PhD student at UCL Department of Experimental Psychology.
On behalf of Tom E. Hardwicke, Matthew Jones, Eryk J. Walczak and Lucia M. Weinberg.
In 2012 Daniel Kahneman sent an email to several prominent social psychologists warning that he saw a “train wreck looming” for the field. The warning came in response to the scandalous case of research fraud perpetrated by Dutch social psychologist Diederik Stapel, which sent shock waves through the discipline and placed psychological research under intense scrutiny. Gradually the issues raised by this debate spread beyond psychology to the social and physical sciences more broadly.
It has become increasingly clear that the scientific enterprise has blown off course. It is plagued by threats that undermine its progress, including an aggressive ‘publish or perish’ culture and severe funding cuts made in response to austerity measures. Media exposure of the extent of Stapel’s fraudulent activities (more than 50 peer-reviewed articles were retracted) has prompted a more nuanced understanding of scientific integrity. Traditionally, outright data falsification was considered a rare (if dangerous) threat, and the majority of scientists were assumed to be honest and objective experts. However, a landmark paper claimed that questionable research practices are widespread amongst scientists (Martinson, Anderson & de Vries, 2005). Furthermore, the psychological sciences, and many others, are beleaguered by a lack of robust replication studies (Ioannidis, 2012), and all too often fall prey to the seduction of ‘bite-sized’ science so adored by top journals (Bertamini & Munafò, 2012).
In September 2014 a group of psychological and cognitive scientists at the University of Amsterdam decided to harness this fervent introspection to promote positive change. They organised an excellent event, “Improving Scientific Practice: Dealing with the Human Factors”, which gave an overview of threats to the scientific enterprise and debated potential solutions, ranging from wide-scale cultural change, to pre-registration of study protocols, to new technologies that allow for data sharing and transparency. A group of five postgraduate students, Tom Hardwicke, Leila Jameel, Matthew Jones, Eryk Walczak and Lucia Weinberg, received funding from the Department of Experimental Psychology, UCL to attend. To read their review of the event, and their views on the issues and solutions debated, please see: http://www.opticon1826.com/article/view/opt.ch
Whilst the issues discussed were not new (indeed, many were highlighted decades ago), they have often been brushed under the carpet. So, what is different this time?
- The power of scientists to tackle these problems independently has shifted: technology (e.g. the internet and various data tools) allows researchers to share and scrutinise their own and others’ work more effectively.
- The scale of the scientific enterprise has grown rapidly in the past decade, whilst government funding of the sciences has dwindled. This creates a culture of fierce competition in which individuals and institutions (i.e. universities and grant-funders) become focused on outputs. Whilst there are sensible and noble intentions behind these measures, the result is a system that unduly rewards individuals on the basis of the quantity of their output (i.e. the number of papers produced and the amount of money received) or headline-worthy research, rather than on the quality of their work (i.e. a truly original contribution to research, or work that translates into real-world applications) or their contribution to the scientific community (i.e. mentoring and inspiring others, and working collaboratively with other research groups). In such a competitive system individuals can become focused on ‘bite-sized’ science, churning out research that is quick to conduct and easily digestible by the reader. Randy Schekman (winner of the 2013 Nobel Prize in Physiology or Medicine) even suggested that “the incentives offered by top journals distort science, just as big bonuses distort banking”. For the full article please see: http://www.theguardian.com/commentisfree/2013/dec/09/how-journals-nature-science-cell-damage-science
- A human context has been provided. Rather than viewing research fraud as an isolated case of moral turpitude, scientists are beginning to acknowledge that they are human, and thus subject to the same biases and driven by the same motivations as anyone else: the PhD student whose supervisor pressurises them to report only the studies that support their theories; the postdoc who needs one more paper on their CV to be in with a chance of winning that grant; the esteemed professor so wedded to their views that they unquestioningly dismiss alternative evidence. Scientists can be fooled and manipulated by the systems and statistics that govern them and their work. Whilst this might seem a depressing revelation, it is actually very helpful: it allows scientists to view upholding research integrity as a joint endeavour, to recognise their own limitations, and to seek to mitigate these accordingly.
In their review Hardwicke et al. (2014) summarise these issues, and debate the relative merits of the proposed solutions for dealing with the ‘human factors in science’.
Bertamini, M., & Munafò, M. (2012). Bite-size science and its undesirable side effects. Perspectives on Psychological Science, 7(1): 67–71. DOI: http://dx.doi.org/10.1177/1745691611429353
Hardwicke, T.E., Jameel, L., Jones, M., Walczak, E.J., & Weinberg, L.M. (2014). Only human: Scientists, systems, and suspect statistics. Opticon1826, (16): 25. DOI: http://dx.doi.org/10.5334/opt.ch
Ioannidis, J.P.A. (2012). Why science is not necessarily self-correcting. Perspectives on Psychological Science, 7(6): 645–654. DOI: http://dx.doi.org/10.1177/1745691612464056
Martinson, B.C., Anderson, M.S., & de Vries, R. (2005). Scientists behaving badly. Nature, 435(7043): 737–738. DOI: http://dx.doi.org/10.1038/435737a