PSYC0146 Coursework 2020-2021
Since the publication of Kahneman’s Thinking, fast and slow (Kahneman, 2011), the main ideas of so-called
“dual systems” or “dual-process” theories have entered public discourse on psychology and decision making.
In his bestseller, Kahneman proposes a distinction between two modes of thought, “System 1” being fast,
instinctive and emotional, and “System 2” slower, more deliberative, and more logical. This distinction is
not Kahneman’s sole invention; dual systems and dual process theories have a long history in psychology
(e.g. Evans and Stanovich, 2013; Sloman, 1996; Stanovich and West, 1999), although the usefulness of such
dualisms is not universally accepted (e.g. Keren and Schul, 2009; Shanks and St. John, 1994). Table 1 lists
some of the purported characteristics of the two systems.
Table 1: General characteristics of System 1 and System 2 processing

System 1                         System 2
low effort                       high effort
large capacity                   small capacity
default process                  inhibitory
domain specific                  domain general
evolutionarily old               evolutionarily recent
independent of working memory    limited by working memory
Biases in judgement and reasoning are often attributed to System 1 thought processes. Perhaps if people
slowed down and allowed System 2 processes to guide their reasoning and judgements, cognitive biases
could be avoided. This idea has gained some traction, and advice is often given to slow down, reflect, and
base decisions on rational deliberation rather than intuition. Although seemingly plausible, there is not
all that much experimental evidence to show that such advice is actually beneficial. One could also argue
that speeding up is sound advice, as unconscious and intuitive reasoning may be highly accurate as well
(e.g. Gladwell, 2005).
Consider the well known “Linda problem” (Tversky and Kahneman, 1983), which goes as follows:
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student,
she was deeply concerned with issues of discrimination and social justice, and also participated in
anti-nuclear demonstrations. Which is more probable?
1. Linda is a bank teller.
2. Linda is a bank teller and is active in the feminist movement.
When asked this question, the majority of people tend to choose answer 2. However, according to the rules
of probability, the probability of a conjunction (e.g., bank teller and feminist) can never be larger than the
probability of one of its constituents (e.g., bank teller): P(A and B) ≤ P(A). The normatively incorrect
answer is ascribed to the “representativeness heuristic”, as the description of Linda above seems representative
of someone active in the feminist movement. The representativeness heuristic is assumed to be an example of
a System 1 type thought process. Whilst reasoning on the basis of representativeness and similarity often
provides a quick and accurate answer, there are clearly cases where it fails. The question is whether slowing
down would help people to give the correct answer. If people don’t know the rules of probability, or do not
otherwise have access to the conjunction rule, would they be able to work out the correct answer through slow and
deliberative reasoning? And what if they have somehow acquired the wrong rule for solving these problems?
Might relying on System 2 processes then not make them more likely to fail in providing the correct answer?
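The conjunction rule itself can be illustrated numerically. A minimal sketch in R, using purely hypothetical probabilities (the values below are illustrative, not estimates of anything):

```r
# Hypothetical probabilities, for illustration only.
p_teller <- 0.05               # P(Linda is a bank teller)
p_feminist_given_teller <- 0.9 # P(feminist | bank teller)

# P(A and B) = P(A) * P(B | A), and since P(B | A) <= 1,
# the conjunction can never exceed its constituent: P(A and B) <= P(A).
p_both <- p_teller * p_feminist_given_teller
p_both <= p_teller  # TRUE, whatever values are chosen
```

Whatever numbers are substituted, the inequality holds, which is exactly why answer 2 can never be the more probable one.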
Whether “slowing down” or “speeding up” is beneficial may thus depend on a variety of factors. Firstly, some
people may be more naturally inclined to rely on reflection and deliberation (i.e. System 2) than others.
Secondly, some people may have access to the correct rules to solve particular judgement and decision
problems, whilst others don’t. Finally, some of the problems seem to require a reasonable level of statistical
numeracy. Individual differences in this ability may also affect the extent to which slowing down or speeding up is beneficial.
An experiment was conducted to determine whether it is possible to improve people’s reasoning by asking
them to think slow or fast, and the role played by individual differences in reflection, numerical competence,
and access to abstract rules. The experiment was conducted in two separate sessions (completed at least 24
hours apart). In the first session, participants completed a number of tasks aimed to measure their ability
for cognitive reflection, numerical reasoning, and the application of abstract rules. In the second session,
participants were randomly assigned to one of five conditions, with instructions on how they should solve
the subsequent judgement problems (e.g. fast or slow). They then completed a set of 12 judgement tasks
covering a range of classic problems where cognitive biases have been found. The five conditions were:
• Control: Participants completed the judgement tasks at their own pace, without further instruction.
• Slow: Participants were instructed to provide their answers after extensively reflecting on each problem.
• Fast: Participants were instructed to provide the first intuitive answer that came to their mind.
• Fast-Slow: Participants were asked to provide two answers: first, an initial intuitive answer; they were
then instructed to reflect extensively on the problem and their initial answer before submitting their
final answer. The data contain the final answers only.
• Incentive: In this condition, participants were not instructed to answer in a particular way. However,
they were told they would be able to receive a monetary bonus payment for correct answers. Incentives
are often found to increase motivation and performance. This condition therefore also provides a useful
comparison for assessing the effectiveness of the instructions to “slow down” or “speed up”.
In the data you have been assigned, there are n = 200 participants in each condition.
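For those working in R, a minimal sketch of loading an assigned dataset and checking the design (the file name data_130.csv is the example used elsewhere in this brief; substitute your own assigned file):

```r
# Load the assigned dataset and verify the between-subjects design.
dat <- read.csv("data_130.csv")
table(dat$condition)  # expect 200 participants in each of the five conditions
str(dat)              # check variable types before analysing
```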
To measure participants’ natural inclination for cognitive reflection, they completed the Cognitive Reflection
Test (Frederick, 2005) as well as an alternative version which was designed to be less dependent on numerical
ability (Thomson and Oppenheimer, 2016). The original Cognitive Reflection Test consists of three questions, such as:
A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?
The alternative version consists of four questions.
The Berlin Numeracy Test (Cokely et al., 2012) was used to measure participants’ statistical numeracy. It
consists of four questions such as
Imagine we are throwing a five-sided die 50 times. On average, out of these 50 throws how many
times would this five-sided die show an odd number (1, 3 or 5)?
_____ out of 50 throws.
Application of rules
To assess whether participants would be able to solve the judgement problems in principle (i.e. they have
access to the abstract rules to solve them), problems such as the Linda problem were stated in alternative
“decontextualized” ways. For instance, the Linda problem might be stated as
Imagine John owns a laptop. Rank the following from most likely to least likely:
1. The laptop has a 14 inch screen.
2. The operating system is Linux.
3. The laptop has a 14 inch screen and the operating system is Linux.
There were two such decontextualized versions of each of the six classic judgement problems described in the next section.
Participants were presented with two versions of each of six classic judgement problems. These were:
• Conjunction problem (Tversky and Kahneman, 1983). This problem taps into people’s tendency
to ignore that the probability of a conjunction can never be larger than the probability of one of its
constituent events. The Linda Problem is an example.
• Probability maximising problem (Stanovich and West, 2008). If one outcome has a slightly higher
probability than another, then to maximise your probability of winning you should always bet on the option
with the higher probability (maximising), rather than sometimes also betting on the option with the
lower probability (probability matching).
• Omission problem (Ritov and Baron, 1990). This problem taps into people’s tendency to choose a
default option, rather than an alternative option which is superior.
• Base rate problem (Kahneman and Tversky, 1973). This problem taps into people’s tendency
to ignore the base rate (e.g. probability of a disease) when judging posterior probabilities (e.g. the
probability of a disease after a positive test result).
• Denominator neglect problem (Kirkpatrick and Epstein, 1992). This problem taps into people’s
tendency to choose based on absolute, rather than relative, frequencies. For instance, when one bowl
offers 4 “wins” out of 25 and another offers 2 “wins” out of 10, people often choose the first, even
though the probability of winning is higher for the second.
• Covariation problem (Wasserman et al., 1990). This problem taps into people’s tendency not to
consider all relevant information when making judgements about causality.
Participants were recruited via an online platform and completed all the tasks in the study over two days. In
the first session, they completed the measures of reflection, numeracy, and access to abstract rules. In the
second session, they were randomly assigned to one of the five conditions. Upon reading the corresponding
instructions, they were then asked to rate, on a scale of 0 (not at all) to 100 (completely), how willing they
were to comply with these instructions. They then completed the 12 judgement tasks in two blocks, with
the order randomized within each block.
You should analyse the dataset you have been assigned (with either R or JASP) and write a short report (up
to 3000 words1) detailing your analyses and results. It is up to you to decide how you analyse the data. You
should at least test the following hypotheses:
1. Does instructing participants to “think slow” increase or decrease their overall performance in the
judgement tasks (compared to no instruction)?
2. Does instructing participants to “think fast” increase or decrease their overall performance in the
judgement tasks (compared to no instruction)?
3. Are these effects different for those with a different “natural tendency” to cognitively reflect on problems?
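One possible (by no means the only defensible) way to approach these hypotheses in R is a linear model with condition as a factor, adding a condition-by-reflection interaction for hypothesis 3. The sketch below runs on simulated data of the same structure purely so the code is self-contained; with your real data, replace the simulated dat with the data frame loaded from your csv file:

```r
# Simulated stand-in data (structure only; the numbers are arbitrary).
set.seed(1)
dat <- data.frame(
  condition  = rep(c("control", "slow", "fast", "fast-slow", "incentive"),
                   each = 200),
  reflection = sample(0:7, 1000, replace = TRUE)
)
dat$performance <- rbinom(nrow(dat), size = 12, prob = 0.4)

# Make "control" the reference level so the condition coefficients are
# comparisons against no instruction (hypotheses 1 and 2).
dat$condition <- relevel(factor(dat$condition), ref = "control")
m1 <- lm(performance ~ condition, data = dat)
summary(m1)

# Hypothesis 3: does the effect of instruction depend on reflection?
m2 <- lm(performance ~ condition * reflection, data = dat)
anova(m1, m2)  # F-test of the interaction terms
```

Setting the reference level explicitly matters here: by default R orders factor levels alphabetically, which would make “control” the baseline only by coincidence.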
In addition, there are numerous other hypotheses that are of potential interest. You can use your own
intuition and imagination to explore this. If these additional hypotheses and analyses are inspired by the
background information, you might classify them as “confirmatory”. If they are post hoc and result from
prior exploratory analyses and tests, you should classify them as “exploratory”.
Your write-up should roughly take the form of the results section of a journal article reporting on an empirical
psychology study. You can include a (very brief) introduction outlining the main objectives of the study, as
well as a very brief summary of the methods. This should take no more than roughly 10% of the total word
count, and is not mandatory. But if one of your hypotheses is supported by prior research, this may be useful to mention there.
Before conducting your confirmatory tests, you should explore the data through informative plots and
descriptive statistics. You do not have to report on this part of the analysis, unless the results impact on
your choice of analysis. You should consider what the most appropriate analysis is to test a given hypothesis.
In your write-up, you should briefly justify your choice of statistical model. For instance, if you were to
include covariates in a model, you should briefly explain your choice of covariates. When reporting results,
please follow the APA guidelines and include all relevant statistics in a succinct way. Don’t just focus on
the significance of the tests, but also interpret what they mean (e.g., is the outcome higher or lower in one
condition compared to another?). Support the test results with informative figures or tables (figures tend to
be better at conveying complex results, but sometimes a table may be needed). You should end the report
with a discussion of your findings, and what they imply in light of the objectives of the experiment.
You should submit, as separate files, your write-up, as well as a copy of the R script (or Rmarkdown file,
if you use this), or the JASP output file (if you use JASP). When possible, please submit your files in pdf
format. At the end of both documents, please indicate the number of the dataset you analysed
(e.g.: “Dataset analysed: data_130.csv”). Your mark will be based on the write-up, and not the R/JASP
file. However, where the report is unclear, this file will be very useful for assessing how you obtained your
results. All submissions will be marked anonymously, and we will only use the data identifier to retrieve the
appropriate dataset if we need to check your results (not to link it to your name).
You should download the unique data set assigned to you. Each data file is available as a comma separated
value (csv) file, and named e.g. data_130.csv if you were assigned dataset number 130. All datasets contain
real data from a real experiment, but each data set is different, and hence your results will differ from those
of other students. All datasets contain the following variables:
• id: unique numeric id for each participant
• gender: stated gender (“male”, “female”, or “other”).
• age_group: numeric variable indicating age group. 1 = “Under 18”, 2 = “18-24”, 3 = “25-30”, 4 =
“31-40”, 5 = “41-50”, 6 = “51-64”, 7 = “65 or over”.
• education: numeric variable indicating education level. 1 = “Elementary school”, 2 = “Middle
school/junior high”, 3 = “Some high school”, 4 = “High school diploma”, 5 = “Associate degree”, 6 =
“Bachelor’s degree”, 7 = “Master’s degree”, 8 = “Doctorate (or more)”.
1 The word count does not include tables, figures, or references (if you need those).
• condition: character variable indicating condition: control, slow, fast, fast-slow, or incentive.
• follow_instructions: rated willingness (between 0 and 100) to follow the instructions given.
• reflection: numeric variable indicating score on a combined cognitive reflection test (between 0 and 7,
i.e. the three original and four alternative items).
• numeracy: numeric variable indicating score on the Berlin Numeracy Test (between 0 and 4).
• rules: numeric variable indicating total score on all rule application items (between 0 and 12).
• rules_conjuction to rules_covariation: score on the specific types of the rule application items
(each between 0 and 2). The rules variable is the sum of these 6 variables.
• performance: numeric variable indicating total score on all judgement items (between 0 and 12)
• perf_conjunction to perf_covariation: score on the specific types of the judgement items (each
between 0 and 2). The performance variable is the sum of these 6 variables.
• ave_rt: numeric variable indicating the average response time (in seconds) to the judgement items.
• rt_conjuction to rt_covariation: response time for the specific types of the judgement items. This
is the sum of the response times to the two items of a specific type. The ave_rt variable is the
sum of these 6 variables divided by 12.
• total_duration: overall time taken to complete the second part of the study.
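A quick sanity check on the codebook above, e.g. that performance equals the sum of the six perf_ variables, can be run with grep on the column names (this avoids typing out the six problem-specific names, which are abbreviated with “to” above). It is illustrated here on a tiny mock data frame so the snippet is self-contained; run the same last two lines on your real data frame:

```r
# Mock data frame with only two of the six perf_ columns, for illustration.
mock <- data.frame(perf_conjunction = c(2, 1),
                   perf_covariation = c(0, 2),
                   performance      = c(2, 3))

# grep selects the problem-specific score columns; "performance" itself is
# not matched because it lacks the underscore.
perf_items <- mock[, grep("^perf_", names(mock)), drop = FALSE]
all(rowSums(perf_items) == mock$performance)  # TRUE if scores are consistent
```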
Cokely, E. T., Galesic, M., Schulz, E., Ghazal, S., and Garcia-Retamero, R. (2012). Measuring risk literacy:
The Berlin Numeracy Test. Judgment and Decision Making.
Evans, J. S. B. and Stanovich, K. E. (2013). Dual-process theories of higher cognition: Advancing the debate.
Perspectives on psychological science, 8(3):223–241.
Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic perspectives, 19(4):25–42.
Gladwell, M. (2005). Blink: The Power of Thinking Without Thinking. Back Bay Books.
Kahneman, D. (2011). Thinking, fast and slow. Macmillan.
Kahneman, D. and Tversky, A. (1973). On the psychology of prediction. Psychological review, 80(4):237.
Keren, G. and Schul, Y. (2009). Two is not always better than one: A critical evaluation of two-system
theories. Perspectives on psychological science, 4(6):533–550.
Kirkpatrick, L. A. and Epstein, S. (1992). Cognitive-experiential self-theory and subjective probability:
further evidence for two conceptual systems. Journal of personality and social psychology, 63(4):534.
Ritov, I. and Baron, J. (1990). Reluctance to vaccinate: Omission bias and ambiguity. Journal of behavioral
decision making, 3(4):263–277.
Shanks, D. R. and St. John, M. F. (1994). Characteristics of dissociable human learning systems. Behavioral
and Brain Sciences, 17(3):367–395.
Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychological bulletin, 119(1):3.
Stanovich, K. E. and West, R. F. (1999). Discrepancies between normative and descriptive models of decision
making and the understanding/acceptance principle. Cognitive psychology, 38(3):349–385.
Stanovich, K. E. and West, R. F. (2008). On the relative independence of thinking biases and cognitive
ability. Journal of personality and social psychology, 94(4):672.
Thomson, K. S. and Oppenheimer, D. M. (2016). Investigating an alternate form of the cognitive reflection
test. Judgment and Decision making, 11(1):99.
Tversky, A. and Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in
probability judgment. Psychological review, 90(4):293.
Wasserman, E. A., Dorner, W., and Kao, S. (1990). Contributions of specific cell information to judgments of
interevent contingency. Journal of Experimental Psychology: Learning, Memory, and Cognition, 16(3):509.