MTS 525-0
Special Topics Research Seminar

Section 20: Generalizing about Message Effects
Spring 2020

SYLLABUS: TOPIC 3 (part 1)

TOPIC 3: The “replication crisis” and affiliated ideas (part 1)

 

Overall topic 3 outline:

3.1  Crises of replication (especially in psychology)

3.2  Underlying elements

3.3  Toward improved research practices

3.4  Being a vigilant research consumer

 

 

Part 1 outline:

3.1  Crises of replication (especially in psychology)

            3.1.1  Precipitating events

            3.1.2  Fallout

3.2  Underlying elements

            3.2.1  Low statistical power

            3.2.2  Weaknesses of peer review

            3.2.3  Questionable research practices

            3.2.4  Lack of replications

            3.2.5  Errors (numerical and otherwise)

 

 

 

 

 

3.1  Crises of replication (especially in psychology)

 

3.1.1  Precipitating events

 

Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2, 696-701. doi:10.1371/journal.pmed.0020124

 

For further reading:

            Ioannidis, J. P. A. (2005). Contradicted and initially stronger effects in highly cited clinical research. JAMA, 294, 218-228. doi:10.1001/jama.294.2.218

            Ioannidis, J. P. A. (2008). Why most discovered true associations are inflated. Epidemiology, 19, 640-648. doi: 10.1097/EDE.0b013e31818131e7

 

            Bem, D. J. (2011). Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology, 100, 407-425.

            Wagenmakers, E. J., Wetzels, R., Borsboom, D., & van der Maas, H. L. J. (2011). Why psychologists must change the way they analyze their data: The case of psi: Comment on Bem (2011). Journal of Personality and Social Psychology, 100, 426–432.

            Galak, J., LeBoeuf, R. A., Nelson, L. D., & Simmons, J. P. (2012). Correcting the past: Failures to replicate ψ. Journal of Personality and Social Psychology, 103, 933–948.

 

            Bargh, J. A., Chen, M., & Burrows, L. (1996). Automaticity of social behavior: Direct effects of trait construct and stereotype priming on action. Journal of Personality and Social Psychology, 71, 230-244.

            Doyen, S., Klein, O., Pichon, C. L., & Cleeremans, A. (2012). Behavioral priming: It’s all in the mind, but whose mind? PLoS One, 7, e29081. 

            Srivastava, S. (2012). Some reflections on the Bargh-Doyen elderly walking priming brouhaha. Blog post.  https://thehardestscience.com/2012/03/12/some-reflections-on-the-bargh-doyen-elderly-walking-priming-brouhaha/
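
A worked example helps fix the logic of the Ioannidis (2005) argument. The probability that a statistically significant finding is true (the positive predictive value, PPV) depends on the study's power (1 - β), the significance threshold α, and the pre-study odds R that the tested effect is real: PPV = R(1 - β) / (R(1 - β) + α). A minimal sketch of that calculation in Python; the helper name and the odds/power values are illustrative assumptions, not figures from the paper:

    # PPV of a significant finding, following the logic of Ioannidis (2005):
    # PPV = R(1 - beta) / (R(1 - beta) + alpha), where R is the pre-study
    # odds that the tested effect is real.

    def ppv(prior_odds: float, power: float, alpha: float = 0.05) -> float:
        """Probability that a significant result reflects a true effect."""
        true_positives = prior_odds * power
        false_positives = alpha
        return true_positives / (true_positives + false_positives)

    # Illustrative (assumed) scenarios: even at the conventional alpha = .05,
    # long-shot hypotheses and low power sharply erode the PPV.
    for odds, power in [(1.0, 0.80), (0.25, 0.80), (0.25, 0.35)]:
        print(f"prior odds {odds:.2f}, power {power:.2f} -> PPV {ppv(odds, power):.2f}")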

 

 


 

3.1.2  Fallout

 

Pashler, H., & Harris, C. R. (2012). Is the replicability crisis overblown? Three arguments examined. Perspectives on Psychological Science, 7, 531-536. doi:10.1177/1745691612463401

 

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. doi:10.1126/science.aac4716

 

 

For further reading:

            Chang, A. C., & Li, P. (2015). Is economics research replicable? Sixty published papers from thirteen journals say “usually not.” Finance and Economics Discussion Series 2015-083. Washington: Board of Governors of the Federal Reserve System. doi:10.17016/FEDS.2015.083

            Camerer, C. F., Dreber, A., Forsell, E., Ho, T.-H., Huber, J., Johannesson, M., Kirchler, M., Almenberg, J., Altmejd, A., Chan, T., Heikensten, E., Holzmeister, F., Imai, T., Isaksson, S., Nave, G., Pfeiffer, T., Razen, M., & Wu, H. (2016). Evaluating replicability of laboratory experiments in economics. Science, 351, 1433-1436. doi:10.1126/science.aaf0918

            Lilienfeld, S. O., & Waldman, I. D. (Eds.). (2017). Psychological science under scrutiny: Recent challenges and proposed solutions. Malden, MA: Wiley Blackwell.

 

 

 


 

3.2  Underlying elements

 

3.2.1  Low statistical power

 

Cohen, J. (1962). The statistical power of abnormal-social psychological research. Journal of Abnormal and Social Psychology, 65, 145-153.  doi:10.1037/h0045186

 

Sedlmeier, P., & Gigerenzer, G. (1989). Do studies of statistical power have an effect on the power of studies? Psychological Bulletin, 105(2), 309–316. doi:10.1037/0033-2909.105.2.309

 

Christley, R. M. (2010). Power and error: Increased risk of false positive results in underpowered studies. The Open Epidemiology Journal, 3, 16-19. doi:10.2174/1874297101003010016
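
To make the problem concrete, the simulation below (an illustrative sketch, not code from any of these readings; the parameter values are assumptions) estimates the power of a two-sample t test to detect a medium effect (d = 0.5) with 20 participants per group, a design common in the literatures surveyed in this section. The estimate lands near .34: roughly two of every three such studies will miss a real, medium-sized effect.

    # Monte Carlo estimate of the power of a two-sample t test
    # (illustrative sketch; parameters are assumed, not from the readings).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    d, n, alpha, reps = 0.5, 20, 0.05, 10_000   # true effect, per-group n

    rejections = 0
    for _ in range(reps):
        control = rng.normal(0.0, 1.0, n)    # no-effect group
        treatment = rng.normal(d, 1.0, n)    # group shifted by d SDs
        _, p = stats.ttest_ind(treatment, control)
        if p < alpha:
            rejections += 1

    print(f"Estimated power: {rejections / reps:.2f}")   # about 0.34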

 

For further reading: 

            Chase, L. J., & Tucker, R. K. (1975). A power-analytic examination of contemporary communication research. Communication Monographs, 42, 29-41. doi:10.1080/03637757509375874 

            Garrison, J. P., & Andersen, P. A. (1979). A reassessment of statistical power analysis in Human Communication Research. Human Communication Research, 5, 343–350. doi:10.1111/j.1468-2958.1979.tb00647.x 

            Zaller, J. (2002). The statistical power of election studies to detect media exposure effects in political campaigns. Electoral Studies, 21, 297-329. doi:10.1016/S0261-3794(01)00023-3

            Maxwell, S. E. (2004). The persistence of underpowered studies in psychological research: Causes, consequences, and remedies. Psychological Methods, 9, 147-163.

            Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14, 365-376. doi:10.1038/nrn3475

            Szucs, D., & Ioannidis, J. P. A. (2017). Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature. PLoS Biology, 15, e2000797. doi:10.1371/journal.pbio.2000797 

            Ioannidis, J. P. A., Stanley, T. D., & Doucouliagos, H. (2017). The power of bias in economics research. The Economic Journal, 127, F236-F265.  doi:10.1111/ecoj.12461 

            Nord, C. L., Valton, V., Wood, J., & Roiser, J. P. (2017). Power-up: A reanalysis of 'power failure' in neuroscience using mixture modeling. Journal of Neuroscience, 37, 8051-8061. doi:10.1523/JNEUROSCI.3592-16.2017 

            Elson, M., & Przybylski, A. K. (2017). The science of technology and human behavior: Standards, old and new. Journal of Media Psychology, 29, 1–7. doi:10.1027/1864-1105/a000212 

            Brodeur, A., Cook, N., & Heyes, A. (2018). Methods matter: P-hacking and causal inference in economics. IZA Institute of Labor Economics Discussion Paper 11796.

 

For further reading: misunderstanding of statistical power

            Hoenig, J. M., & Heisey, D. M. (2001). The abuse of power: The pervasive fallacy of power calculations for data analysis. The American Statistician, 55, 19-24.

            O’Keefe, D. J. (2007). Post hoc power, observed power, a priori power, retrospective power, prospective power, achieved power: Sorting out appropriate uses of statistical power analyses. Communication Methods and Measures, 1, 291-299. doi:10.1080/19312450701641375

            Wang, L. L. (2010). Retrospective statistical power: Fallacies and recommendations. Newborn and Infant Nursing Reviews, 10, 55-59. doi:10.1053/j.nainr.2009.12.012 
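
The fallacy these readings dissect can be demonstrated in a few lines: "observed" (post hoc) power is a deterministic function of the p value given the degrees of freedom, so computing it after the analysis adds no information. A hypothetical sketch for a two-sample t test, using the observed t as the noncentrality parameter (the function name and df value are invented for illustration):

    # Hoenig & Heisey's (2001) point: observed power is pinned down by the
    # p value (given df), so it cannot rescue a nonsignificant result.
    from scipy import stats

    def observed_power(t_obs: float, df: int, alpha: float = 0.05) -> float:
        """Power if the true effect were exactly the observed one."""
        t_crit = stats.t.ppf(1 - alpha / 2, df)
        noncentral = stats.nct(df, t_obs)    # noncentral t at the observed t
        return noncentral.sf(t_crit) + noncentral.cdf(-t_crit)

    df = 38
    for p in (0.05, 0.20, 0.50):
        t_obs = stats.t.ppf(1 - p / 2, df)   # the observed t that yields this p
        print(f"p = {p:.2f} -> observed power = {observed_power(t_obs, df):.2f}")
    # p = .05 maps to observed power of about .50, whatever the data.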

 


 

3.2.2  Weaknesses of peer review

 

For further reading:

            Sterling, T. D. (1959). Publication decisions and their possible effects on inferences drawn from tests of significance–or vice versa. Journal of the American Statistical Association, 54(285), 30-34. doi:10.1080/01621459.1959.10501497

            Mahoney, M. J. (1977). Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cognitive Therapy and Research, 1, 161–175. doi:10.1007/BF01173636

            Peters, D. P., & Ceci, S. J. (1982). Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences, 5(2), 187-195. doi:10.1017/S0140525X00011183 

            Ernst, E., & Resch, K. L. (1994). Reviewer bias – a blinded experimental study. Journal of Laboratory and Clinical Medicine, 124, 178-182.

            Sterling, T. D., Rosenbaum, W. L., & Weinkam, J. J. (1995). Publication decisions revisited: The effect of the outcome of statistical tests on the decision to publish and vice versa. The American Statistician, 49, 108-112. http://www.jstor.org/stable/2684823 

 

 

 

 

 


 

3.2.3  Questionable research practices: Selective publication, p-hacking, etc.

 

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359-1366. doi:10.1177/0956797611417632
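
A minimal simulation of one "researcher degree of freedom" that Simmons et al. analyze, optional stopping, shows how undisclosed flexibility inflates false positives. This is an illustrative sketch under assumed parameters, not the authors' code: both groups are drawn from the same distribution, yet peeking after every 10 participants per group pushes the error rate well above the nominal .05.

    # False-positive inflation from optional stopping (illustrative sketch).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    reps, batch, max_n, alpha = 5_000, 10, 50, 0.05

    false_positives = 0
    for _ in range(reps):
        a = rng.normal(0.0, 1.0, max_n)   # the null is true:
        b = rng.normal(0.0, 1.0, max_n)   # both groups are identical
        for n in range(batch, max_n + 1, batch):
            _, p = stats.ttest_ind(a[:n], b[:n])
            if p < alpha:                 # stop and "find" an effect
                false_positives += 1
                break

    print(f"False-positive rate: {false_positives / reps:.3f}")  # well above .05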

 

For further reading: 

            Rosenthal, R. (1979). The “file drawer problem” and tolerance for null results. Psychological Bulletin, 86, 638-641. doi:10.1037/0033-2909.86.3.638

            Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2, 196-217.  doi:10.1207/s15327957pspr0203_4

            Chan, A. W., Hrobjartsson, A., Haahr, M. T., Gøtzsche, P. C., & Altman, D. G. (2004). Empirical evidence for selective reporting of outcomes in randomized trials: Comparison of protocols to published articles.  JAMA, 291, 2457-2465. doi:10.1001/jama.291.20.2457

            Chan, A. W., Krleža-Jerić, K., Schmid, I., & Altman, D. G. (2004). Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research. CMAJ, 171, 735-740. doi:10.1503/cmaj.1041723

            Gerber, A., & Malhotra, N. (2008). Do statistical reporting standards affect what is published? Publication bias in two leading political science journals. Quarterly Journal of Political Science, 3, 313-326. doi:10.1561/100.00008024

            Gerber, A. S., & Malhotra, N. (2008). Publication bias in empirical sociological research: Do arbitrary significance levels distort published reports? Sociological Methods & Research, 37, 3-30. doi:10.1177/0049124108318973

            Turner, E. H., Matthews, A. M., Linardatos, E., Tell, R. A., & Rosenthal, R. (2008). Selective publication of antidepressant trials and its influence on apparent efficacy. New England Journal of Medicine, 358, 252-260.

            Dwan, K., Altman, D. G., Arnaiz, J. A., Bloom, J., Chan, A.-W., Cronin, E., Decullier, E., Easterbrook, P. J., Von Elm, E., Gamble, C., Ghersi, D., Ioannidis, J. P. A., Simes, J., & Williamson, P. R. (2008). Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One, 3(8), e3081.

            Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Publication bias. In Introduction to meta-analysis (chap. 30, pp. 277-292). Chichester, West Sussex, UK: Wiley.

            Sutton, A. J. (2009). Publication bias. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 435-452). New York: Russell Sage Foundation.

            Bertamini, M., & Munafò, M. (2012). Bite-size science and its undesired side effects. Perspectives on Psychological Science, 7, 67-71. doi:10.1177/1745691611429353

            John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23, 524-532. doi:10.1177/0956797611430953

            Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502-1505. doi:10.1126/science.1255484

            Vermeulen, I., Beukeboom, C. J., Batenburg, A., Avramiea, A., Stoyanov, D., van de Velde, B., & Oegema, D. (2015). Blinded by the light: How a focus on statistical “significance” may cause p-value misreporting and an excess of p-values just below .05 in communication science. Communication Methods and Measures, 9, 253-279. doi:10.1080/19312458.2015.1096333 

            Seaman, C. S., & Weber, R. (2015). Undisclosed flexibility in computing and reporting structural equation models in communication science. Communication Methods and Measures, 9, 208-232. doi:10.1080/19312458.2015.1096329 

            Vermeulen, I. E., & Hartmann, T. (2015). Questionable research and publication practices in communication science. Communication Methods and Measures, 9, 189–192. doi:10.1080/19312458.2015.1096331

            Matthes, J., Marquart, F., Naderer, B., Arendt, F., Schmuck, D., & Adam, K. (2015). Questionable research practices in experimental communication research: A systematic analysis from 1980 to 2013. Communication Methods and Measures, 9, 193-207. doi:10.1080/19312458.2015.1096334 

            Head, M. L., Holman, L., Lanfear, R., Kahn, A. T., & Jennions, M. D. (2015). The extent and consequences of p-hacking in science. PLoS Biology, 13(3), e1002106. doi:10.1371/journal.pbio.1002106

            Franco, A., Malhotra, N., & Simonovits, G. (2015). Underreporting in political science survey experiments: Comparing questionnaires to published results. Political Analysis, 23, 306-312. doi:10.1093/pan/mpv006 

            Franco, A., Malhotra, N., & Simonovits, G. (2016). Underreporting in psychology experiments: Evidence from a study registry.  Social Psychological and Personality Science, 7(1), 8-12.  doi:10.1177/1948550615598377 

            Renkewitz, F., & Keiner, M. (2019). How to detect publication bias in psychological research. Zeitschrift für Psychologie, 227(4), 261-279.  doi:10.1027/2151-2604/a000386

            Cairo, A. H., Green, J. D., Forsyth, D. R., Behler, A. M. C., & Raldiris, T. L. (in press). Gray (literature) matters: Evidence of selective hypothesis reporting in social psychological research. Personality and Social Psychology Bulletin.  doi:10.1177/0146167220903896 

 

For further reading about effect size and sample size relationships:

            Gerber, A. S., Green, D. P., & Nickerson, D. (2001). Testing for publication bias in political science. Political Analysis, 9(4), 385-392.

            Levine, T., Asada, K. J., & Carpenter, C. (2009). Sample sizes and effect sizes are negatively correlated in meta-analyses: Evidence and implications of a publication bias against non-significant findings. Communication Monographs, 76, 286-302. doi:10.1080/03637750903074685

            Kühberger, A., Fritz, A., & Scherndl, T. (2014). Publication bias in psychology: A diagnosis based on the correlation between effect size and sample size. PLoS One, 9, e105825. doi:10.1371/journal.pone.0105825
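
The diagnostic logic in the three entries above is simple enough to sketch: if mainly significant results are published, small studies can reach print only with large effects, so published effect sizes correlate negatively with sample sizes. A minimal illustration with invented data; an unbiased literature should show effect sizes roughly independent of n:

    # Publication-bias diagnostic in the spirit of Levine et al. (2009) and
    # Kühberger et al. (2014). The study data below are invented.
    import numpy as np
    from scipy import stats

    sample_sizes = np.array([24, 40, 60, 90, 150, 300, 500])
    effect_sizes = np.array([0.72, 0.55, 0.48, 0.33, 0.25, 0.18, 0.15])  # d

    r, p = stats.pearsonr(sample_sizes, effect_sizes)
    print(f"r = {r:.2f}, p = {p:.3f}")   # strongly negative: a red flag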


 

3.2.4  Lack of replications

 

Easley, R. W., Madden, C. S., & Dunn, M. G. (2000). Conducting marketing science: The role of replications in the research process. Journal of Business Research, 48, 83-92. doi:10.1016/S0148-2963(98)00079-4 

 

For further reading:

            Smith, N. C., Jr. (1970). Replication studies: A neglected aspect of psychological research. American Psychologist, 25, 970-975. doi:10.1037/h0029774

            Reid, L. N., Soley, L. C., & Wimmer, R. D. (1981). Replication in advertising research: 1977, 1978, 1979. Journal of Advertising, 10(1), 3–13. doi:10.1080/00913367.1981.10672750 

            Neuliep, J. W. (Ed.). (1990). Handbook of replication studies in the social and behavioral sciences. Newbury Park, CA: Sage.

            Hunter, J. E. (2001). The desperate need for replications. Journal of Consumer Research, 28, 149-158. doi:10.1086/321953

            Evanschitzky, H., Baumgarth, C., Hubbard, R., & Armstrong, J. S. (2007). Replication research’s disturbing trend. Journal of Business Research, 60, 411-415. doi:10.1016/j.jbusres.2006.12.003

 

 

 


 

3.2.5  Errors (numerical and otherwise) 

 

García-Berthou, E., & Alcaraz, C. (2004). Incongruence between test statistics and P values in medical papers. BMC Medical Research Methodology, 4, article no. 13.  http://www.biomedcentral.com/1471-2288/4/13
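
The incongruence García-Berthou and Alcaraz document can be checked mechanically: recompute the p value from the reported test statistic and degrees of freedom and flag mismatches (the approach later automated by tools such as the statcheck package). A minimal sketch for reported t tests; the function name, tolerance, and example values are invented for illustration:

    # Consistency check in the spirit of García-Berthou & Alcaraz (2004):
    # does a reported two-tailed p match the reported t and df?
    from scipy import stats

    def check_t_report(t: float, df: int, reported_p: float,
                       tol: float = 0.005) -> bool:
        """True if the reported p is within tol of the recomputed p."""
        recomputed_p = 2 * stats.t.sf(abs(t), df)
        return abs(recomputed_p - reported_p) <= tol

    print(check_t_report(t=2.10, df=38, reported_p=0.042))  # True: congruent
    print(check_t_report(t=2.10, df=38, reported_p=0.020))  # False: mismatch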

 

For further reading:

            Dewald, W. G., Thursby, J. G., & Anderson, R. G. (1986). Replication in empirical economics: The Journal of Money, Credit and Banking project. American Economic Review, 76, 587-603.  https://www.jstor.org/stable/1806061  

            Edouard, L., & Senthilselvan, A. (1997). Observer error and birthweight: Digit preference in recording. Public Health, 111(2), 77-79.  doi:10.1016/s0033-3506(97)90004-4 

            Bakker, M., & Wicherts, J. M. (2011). The (mis)reporting of statistical results in psychology journals. Behavior Research Methods, 43, 666–678. doi:10.3758/s13428-011-0089-5

            Elson, M., & Przybylski, A. K. (2017). The science of technology and human behavior: Standards, old and new. Journal of Media Psychology, 29, 1–7. doi:10.1027/1864-1105/a000212 

            Brown, A. W., Kaiser, K. A., & Allison, D. B. (2018). Issues with data and analyses. PNAS, 115(11), 2563-2570.  doi:10.1073/pnas.1708279115 

 

 

 

 
