
Educational Sciences & Social Sciences

Detecting Bad Science: Reviewing and Improving Social Science Research

When:

06 January - 16 January 2025

School:

VU Graduate Winter School

Institution:

Vrije Universiteit Amsterdam

City:

Amsterdam

Country:

Netherlands

Language:

English

Fee:

630 - 930 EUR


About

This intensive course will make you a Bad Science Detective with a good purpose: to improve social science research.

In case you missed it: science is in a credibility crisis. At least half of researchers in the social and behavioural sciences in the Netherlands admit to having engaged in questionable research practices (Gopalakrishna et al., 2022a). Yes – for every pair of scholars you randomly choose, at least one has engaged in bad science. For every sixteen researchers you choose, one has even committed fraud or fabricated data. Though the number of retractions by academic journals for fraud, fabrication, plagiarism and other integrity violations is rising, most Bad Science still goes undetected. Peer review, commonly believed to be a rigorous quality control system, is in fact a lax one that is easy to fool (Smith, 2006). It should therefore be no surprise that half of all studies do not replicate, and that published effects shrink to half their original size upon replication (Open Science Collaboration, 2015). In sum: you cannot trust research to be valid and reliable, even when it is peer reviewed and published in the most prestigious journals.

How then can you tell the difference between good and bad science? What signals tell you something about the quality of research? As a bad science detective, you’ll be able to call bullshit on the texts that your professors require you to read – including their own work. At the same time, we will collectively improve the chances that bad science is identified. With a higher discovery rate of bad science, researchers will be more careful, and the quality of research will improve (Gopalakrishna et al., 2022b). In addition, identifying the weaknesses in the work of others teaches you where you can improve your own research.

Course leader

Professor René Bekkers

Target group

This course is developed for students in research master programs and PhD candidates in the social and behavioural sciences broadly conceived, including psychology, neuroscience, data science, computational social science, sociology, political science, public administration, organization science, social geography, epidemiology, human health and life sciences, economics, marketing, management and business administration. You will find this course useful if you seek to uncover regularities in human cognition and behaviour or test hypotheses using empirical data.

You can enrol in this course if you are a PhD candidate or a student in a research master program or equivalent (e.g., advanced postgraduate research program, master program at a research-oriented institution of 120 ECTS).

Course aim

At the end of this course, you will be able to use analytical tools and software to identify the weaknesses of research and evaluate the quality of research in the social and behavioural sciences. The primary analytical tool is iQUESST – identifying Questionable Social Science through Transparency.

The iQUESST acronym refers to the evaluation of research quality with respect to the:

i information on

QU the Question that the research answers: how informative would potential answers to the research question be for practice and theories?

E the Estimation method: is it able to provide an answer to the question, and is it the best choice?

S the Sample: is it useful to make inferences about the target population?

S the Stringency criterion: are the data and methods the best possible stress test of the research claims?

T through Transparency of the research: what does the research report tell you about the data and methods used to produce the results?

The secondary analytical tool is the four validities framework (Vazire et al., 2022), to which iQUESST roughly corresponds as follows:

1. Construct validity ≈ QUestion: poorly defined and badly operationalized constructs, ill-documented measures, and hypothesizing after results are known;

2. Internal validity ≈ Estimation: selective attrition, non-causal mediation, lack of random assignment, reverse causality, incorrect chronology, omitted variable bias;

3. External validity ≈ Sample: constraints on generality due to survivorship bias, selection bias, biased samples, or selective response;

4. Statistical conclusion validity ≈ Stringency: problems with outliers, missing values, model misspecification, false assumptions, p-hacking, researcher degrees of freedom, the garden of forking paths, or low power.
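Several of the problems listed under statistical conclusion validity can be made concrete with a simulation. The sketch below (not part of the course materials; all names are illustrative) shows one form of p-hacking: when there is no true effect, testing many outcome variables and reporting only the smallest p-value inflates the false-positive rate far above the nominal 5%.

```python
# Illustrative sketch: p-hacking via multiple outcomes, using only the
# Python standard library. With no true effect, an honest single test is
# "significant" about 5% of the time; picking the best of 10 outcomes is
# "significant" roughly 1 - 0.95**10 ≈ 40% of the time.
import math
import random
import statistics


def t_test_p(a, b):
    """Two-sided two-sample test p-value via a normal approximation
    (adequate for the sample sizes used here)."""
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))


def experiment(n=50, outcomes=10, rng=random):
    """One 'study' with no true group difference and `outcomes` measured
    variables; return the smallest p-value across outcomes."""
    pvals = []
    for _ in range(outcomes):
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(0, 1) for _ in range(n)]
        pvals.append(t_test_p(a, b))
    return min(pvals)


rng = random.Random(42)
sims = 2000
honest = sum(experiment(outcomes=1, rng=rng) < 0.05 for _ in range(sims)) / sims
hacked = sum(experiment(outcomes=10, rng=rng) < 0.05 for _ in range(sims)) / sims
print(f"false-positive rate, 1 outcome:   {honest:.2%}")   # near 5%
print(f"false-positive rate, 10 outcomes: {hacked:.2%}")   # far above 5%
```

The same mechanism underlies researcher degrees of freedom and the garden of forking paths: each extra analytic choice that is conditioned on the results adds another draw from the null distribution.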

Fee info

Fee

630 - 930 EUR.

Students, PhD students and employees of VU Amsterdam, Amsterdam UMC or an Aurora Network Partner: €630.

Other students and PhD students: €730.

Professionals: €930.

Applications received before 15 October get a €50 Early Bird Discount!
