Wednesday, November 27, 2019

Framing and Anchoring (my first year microeconomics students)

In addition to the Endowment Effect I also test my first year microeconomics students for a couple of other classic behavioural economics biases. Since last year I've added a separate lecture on behavioural economics, and I think it helps to make the material more concrete for the students if you can show them their own irrational behaviour with a couple of simple exercises.

I must confess that I am not really covering the whole breadth of behavioural economics in my three non-Endowment Effect exercises, because two of them are about framing, but they always work and the students usually seem interested in the results. For the Endowment Effect exercise it takes some effort to explain why WTA and WTP should be similar, but in most framing exercises it is pretty clear that we should see the same answers in both conditions.

First of all I do the classic Asian Disease Problem, which starts:

Imagine that the U.K. is preparing for the outbreak of an unusual disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume the exact scientific estimates of the consequences of the programs are as follows. Which program would you prefer?
And then there are two versions of the answers:
Program A: 200 people will be saved
Program B: there is a one-third probability that 600 people will be saved, and a two-thirds probability that no people will be saved
and
Program a: 400 people will die
Program b: there is a one-third probability that nobody will die, and a two-thirds probability that 600 people will die
Students are randomly shown one of the two versions (AB or ab). Reading them both, it should be clear that options A and a are numerically the same, as are B and b; they are merely phrased in terms of saving or losing lives. So yes, you can have a preference for the risky or the certain outcome, but it shouldn't matter how it is presented. But it does. The majority of students choose the certain outcome (A) when the problem is presented as saving lives, but when it is presented in terms of lives lost the vast majority goes for the risky option (b).
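
Incidentally, checking whether a split like this is statistically meaningful only takes a two-proportion test on the choice counts. A minimal sketch, with hypothetical counts rather than my actual class results:

```python
# Compare the share of students choosing the certain option across the two
# frames. The counts are hypothetical placeholders, not my class data.
from statsmodels.stats.proportion import proportions_ztest

certain_choices = [28, 10]  # chose the certain option: "saved" frame, "die" frame
respondents = [40, 40]      # number of students shown each frame

stat, p = proportions_ztest(certain_choices, respondents)
print(f"z = {stat:.2f}, p = {p:.4f}")  # a small p-value means the frame matters
```
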
The second example is known in the literature as the p-bet and $-bet problem. Here students are presented with the following scenario:
You are offered a ticket in one of the following lotteries:

Lottery A provides a 9/12 chance of winning £110 and a 3/12 chance of losing £10.

Lottery B provides a 3/12 chance of winning £920 and a 9/12 chance of losing £200.

And are asked to answer one of two questions (again randomly determined):
Which one would you prefer? Lottery A or Lottery B?
Or:
How much would you be willing to pay for a ticket in lottery A (in pounds)? And in lottery B?
Again, there is no right or wrong answer, but the answers should be consistent: if (on average) students prefer lottery A over B, then we should also expect them (on average) to be willing to pay more for a ticket in lottery A than in lottery B. They aren't. When given a choice, lottery A is more popular (chosen by 62%), but when asked how much they are willing to pay for a ticket, the average price for lottery B (£80.00) is higher than the price for A (£33.78).
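A detail worth spelling out: the two lotteries are constructed to have the same expected value, so neither is objectively better on average, which makes the inconsistency all the more striking. A quick check of the arithmetic:

```python
# Both lotteries have an expected value of £80, so the preference reversal
# can't be explained by one lottery simply being better than the other.
ev_a = (9/12) * 110 + (3/12) * (-10)   # 82.5 - 2.5 = 80.0
ev_b = (3/12) * 920 + (9/12) * (-200)  # 230  - 150 = 80.0
print(ev_a, ev_b)                      # 80.0 80.0
```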

I don't actually spend a lot of time going over possible reasons for these findings (for the Asian Disease Problem, for instance, it is related to loss aversion and the fact that we, as humans, are more likely to take risks when something involves losses, or is presented as such). I keep that for my third-year advanced microeconomics module on behavioural economics and here simply present them as examples of framing: it matters how you present an (economic) question.

The third exercise is supposed to show the Anchoring Effect and is a version of one of Tversky & Kahneman's classic experiments. Each respondent starts with a random number - in my case the last two digits of their student ID - and they are subsequently asked:

Think of the number above as a percentage. Is the percentage of all the countries in Africa that are members of the United Nations higher or lower than this number?
And then:
What percentage of all the countries in Africa do you think are members of the United Nations?
The correct answer to the last question is not really relevant (I have run this experiment for years and I have never bothered looking it up). What's important is that probably not many students know it and, obviously, that the random number the students start with should have no influence on their guess. But in Tversky and Kahneman's case (and many other examples) it does. Respondents who start with a higher random number guess, on average, a higher percentage. The idea is that if irrelevant anchors can have an influence in a question like this, they might also have one in other, economic, situations. For instance with respect to prices for unfamiliar goods.

The trouble is that so far I have never found this effect in my students. I have been trying this exercise for a number of years, but I keep finding no relationship between the random number and the percentage they answer. In previous years I often didn't include the higher/lower question, so I blamed that, but this year I did it exactly as Tversky and Kahneman did, and I do find a positive relationship (see below), but it's nowhere near statistically significant.
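For anyone who wants to try this with their own students, the check itself is just a simple regression of the guessed percentage on the anchor. A minimal sketch with made-up data:

```python
# Regress the guessed percentage on the random anchor (the last two digits of
# the student ID). Anchoring predicts a positive, significant slope.
# The data below are hypothetical placeholders, not my class results.
from scipy.stats import linregress

anchors = [12, 45, 67, 88, 23, 51, 70, 34]  # last two digits of student IDs
guesses = [30, 35, 40, 50, 25, 45, 38, 33]  # answers to the percentage question

result = linregress(anchors, guesses)
print(f"slope = {result.slope:.3f}, p = {result.pvalue:.3f}")
```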

Thursday, November 21, 2019

The Endowment Effect (in my first year microeconomics students)

To liven up a pretty dry and technical lecture on the standard model of consumer choice - marginal rate of substitution, tangency condition, income and substitution effect and all that - I run a fairly simple exercise showing the endowment effect in one of the classes of my first year microeconomics module.

Using the online questionnaire tool Qualtrics, I randomly ask the students one of two versions of the following question at the start of the lecture:

Imagine you have a bag of winegums (with approx. 100 winegums) and your friend has a Twix. How many winegums would you be willing to give to your friend to get one of his/her Twix bars in return?
or
Imagine you have a Twix and your friend has a bag of winegums (with approx. 100 winegums). How many winegums would your friend have to give to you for you to want to give him/her one of your Twix bars in return?
So what I'm doing is asking for the Willingness to Pay (WTP) and Willingness to Accept (WTA) of one Twix bar, expressed in numbers of winegums. Over the course of the lecture I explain the marginal rate of substitution and how, theoretically, we would expect the WTA and the WTP of a particular good, for instance Twixes, in terms of the other good, say winegums, to be pretty much the same.
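Since every student answers only one of the two versions, comparing the valuations is a simple between-subjects test. A minimal sketch, with made-up winegum numbers, using a rank-based test because these valuations tend to contain the occasional extreme answer:

```python
# Compare WTP (winegums offered for a Twix) against WTA (winegums demanded
# for a Twix) between the two randomly assigned groups. The numbers below
# are made up for illustration.
from scipy.stats import mannwhitneyu

wtp = [10, 15, 8, 20, 12, 9, 14]    # "how many would you give?"
wta = [25, 40, 30, 22, 50, 35, 28]  # "how many would you demand?"

stat, p = mannwhitneyu(wtp, wta, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")   # the endowment effect predicts WTA > WTP
```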

At the end I include some comments about the extent to which the model accurately describes how regular people make actual consumption decisions, and point out that some of the assumptions, in particular the one about the WTP and WTA being the same, are falsified by behaviour in the real world, using their own valuations of the Twix when exchanging it for winegums, or the other way around, as the example. This year, again, the WTA was higher than the WTP, and I used that for a quick discussion of the endowment effect.

It took me a couple of years of trying to find the right way to ask this question. And still, this year, there were some issues. There were a couple of responses explaining extreme valuations by noting that winegums are haram or not vegetarian. There were even some students who didn't know what winegums were. But, as I explained to the students a couple of weeks ago when I talked about the recent Nobel Prize in Economics for RCTs (in development economics), if you randomly assign participants to the different treatments and you have enough observations, these kinds of issues should average out. Still, maybe next year I will specify that these hypothetical winegums are vegetarian and, despite the name, don't contain any alcohol, and include a picture.

Wednesday, November 13, 2019

Market Fairness (according to my students)

In my first-year Principles of Microeconomics class I like to use one of the classic exercises on fairness in markets from Kahneman, Knetsch, and Thaler (1986). I simply ask my students whether this:
A hardware store has been selling snow shovels for €15. The morning after a large snowstorm, the store raises the price to €20.
is fair or unfair. The standard economic argument is that it is fair, because the increase in price will make sure that the snow shovels are allocated efficiently. In the real world, regular people don't necessarily see it that way: in Kahneman et al.'s study, 82% of the respondents considered the price increase unfair.

In my experience my students think much more like economists. Which is not entirely surprising, since they are economics students (albeit at the very start of their education). In previous years I asked the question with a simple show of hands, and usually the vast majority thought the change in price was fair. This year I used an online questionnaire tool and asked a little earlier (after they had learned about demand and supply and the market equilibrium, but before any explicit coverage of efficiency etcetera), and still 52% of the students thought the action was unfair.

Although I should be content that a considerable number of my students already seem to think like economists in the third week of their degree, I also use the results as an introduction to Richard Thaler's arguments against price gouging.

Using the online questionnaire tool made it easy to ask some other questions on the topic of market outcomes and fairness as well. I framed these two extra questions a bit differently and asked explicitly whether they thought the government should intervene in a particular market. The results suggest that personal experience influences their answers. The majority of the students (who live in London, mostly still with their parents because they can't afford to move out) are in favour of rent control:

On the other hand, an almost similarly sized majority was against making ticket touts illegal (maybe I should check to what extent individual respondents gave opposite answers to these two questions):

Friday, July 26, 2019

Nudging Students

As a behavioural economist, I obviously teach and do research on behavioural economics-y topics, but I don't really, explicitly, use any of the tools of behavioural economics in my work. When I want to influence the behaviour of my students I usually resort to the traditional, classical economic method of fiddling around with the incentives, which in practice means making part of the grade depend on the behaviour I want them to exhibit (i.e. working hard, attending lectures). That usually works, but not in all situations. So this past year I thought I'd set myself the challenge of applying a proper behavioural economics intervention somewhere in my work.

Here at the Economics Department of Middlesex University we offer a tutoring service to our students, provided by either someone called a Graduate Academic Assistant (a full-time employee of the university, usually a recent graduate, hired to assist with all kinds of things around the department) or a Student Learning Assistant (a second- or third-year student with high grades who, as far as I understand, gets paid per hour of tutoring). This service is free, but it isn't particularly popular. Very few of our students actually take advantage of it. A couple of students come to the GAA over the course of the academic year, but I couldn't find any record of students having a tutoring session with an SLA. And it's not as if all of our students are acing all their modules and no one could do with a bit of extra help.

Previous attempts to increase the take-up rate of the tutoring service were of the classical economic variety. We, the lecturers, would publicize the tutoring service in our lectures, especially in the weeks before a big test, focusing on the benefits (higher grades) and the costs (zero). This approach doesn't seem to have had much of an effect. No matter how often we advertised the service, the number of students using it stayed close to zero. So it looked like a good challenge to apply some of that behavioural economics magic and try to nudge more students into taking advantage of the tutoring service.

But where to start? The Behavioural Insights Team of the UK government has created a very handy framework to help practitioners think about applying the tools and methods of behavioural economics. They’ve called it EAST and the name refers to the idea that if you want to nudge people to behave in a particular way, you should make this behaviour Easy, Attractive, Social and Timely.

The A-element, the attractiveness, I've always found the least interesting, mainly because to me it seems closest to the traditional economic approach. It focuses on the incentive, the benefit (to the student) of exhibiting the behaviour. The traditional economic approach will generally involve increasing the size of the incentive, but, especially in situations where such an increase is not possible, making the incentive more salient is also a fairly standard method. And something that in our present context we already do: we publicize the tutoring service in our lectures and explain its potential positive effect on grades.

The S-element is probably the sexiest. It's the one that gets a lot of exposure when people talk about behavioural economic interventions, probably also because the social aspect of our behaviour is something that traditional economics doesn't pay a lot of attention to. The classic example of the positive effect of making people aware of existing social norms is Cialdini's towel experiment, where telling hotel guests that 75% of previous occupants of their hotel room had reused their towels, instead of requiring fresh ones every day, made these guests more likely to reuse their own towels. The same effect has also been shown in other contexts, from electricity use to littering (but, under certain circumstances, it can also backfire).

I have no doubt that appealing to the social motivations of students could be very successful in changing their behaviour, even if it was simply in the form of an enthusiastic testimonial. But in this context privacy issues might be a complicating factor. Even if it was a positive experience for you and it led to, say, finally understanding how to use the mid-point method to calculate price elasticity (and getting a high grade for the test), the fact that you asked for help from a tutor isn't necessarily something you want to brag about, or even have announced to your fellow students. Also, percentage-wise, the number of students likely to make use of this service is probably too low to get effects similar to those for the 75% of previous guests in your hotel room.

The T-element refers to how people can be more susceptible to changing their behaviour at different points in time. I like to think we already used this and have probably exhausted its possibilities. As mentioned, an obvious moment to advertise the existence of the tutoring service is right before a big test; the benefits are pretty salient even without having to stress them too much. Overconfidence - I don't need any tutoring, I'm smart enough to figure this stuff out by myself - could be a factor that works in the opposite direction, so I usually advertise the service after a big test as well, when the students get their grades back and they are maybe a bit lower than expected. (I also don't think it is very useful to start talking about the service in the first few weeks of the year, because the aforementioned overconfidence is probably even higher then, and because there are usually enough new and unfamiliar things going on in those weeks that the message easily gets ignored.)

Just like the A, the E-element, for Easy, can be interpreted as actually fairly traditional economics. In my mind, making particular behaviour easy means lowering its cost. But this cost doesn't necessarily have to be financial, which is probably where the behavioural economics comes in. The tutoring service is already free for the students, but there are other costs involved as well. Time, for instance, but I don't think there's a lot of gain to be made there. One aspect of the process that was quite cumbersome is the way in which appointments for a tutoring session were supposed to be made. The GAA was relatively approachable because they have a dedicated office where students could drop by, without necessarily having to e-mail beforehand to make an appointment. But still, for some students, having to approach someone in person like that could be a bit of a psychological threshold. The SLAs were even harder to track down. As part of publicizing the tutoring service we always included the contact details of the SLAs, but that still meant that a student wanting extra help needed to approach someone they didn't know to ask them for a favour at an unspecified time and location. I can imagine all of this might be a bit of a hurdle when asking for help.

So I figured that here was the opportunity to make improvements and apply some behavioural economics, by simply making the process of making an appointment easier. I am a lecturer on one of the first-year modules, so that seemed like a good enough context to try it out. I asked the GAA and the SLAs which hours during a typical week they would be available for tutoring sessions. I put these hours into Acuity, a very simple appointment-making website, and added options for each of the four first-year modules. The end result was a very basic webpage - with an easy to remember url: http://bit.ly/extraeconhelp - where students were asked in which of the modules they wanted help and to pick a free timeslot on their preferred day.

When they had entered their contact details, the website would send an e-mail with the request to me, and I had to forward it to the GAA or SLA who had said they were available at the requested time (one drawback of using the free version of Acuity was that it only allowed for one user on our side). The agreement was that they, the GAA/SLA, would then get in touch with the student and make further arrangements, including suggesting a location (the library, for instance). The proposed time wasn't set in stone: if tutor and student came to the realization that another time worked better, they could settle on that time without having to go through the system again. The main idea behind the website was to make it as easy as possible for the student to take the first step of asking for help from a tutor (in that respect it was also an advantage that the probably confusing difference between GAA and SLA no longer mattered: on the system, a tutor was simply a tutor).

If this was a proper RCT we would have created a control group and randomized allocation to the treatment condition, but that was just not doable in the current context. To be honest, the whole data collection aspect of the intervention was a bit of a mess. To start with, I don't really have exact numbers with regard to how many students made appointments with either the GAA or an SLA previously. I asked around and the GAA said she saw a handful of students in past years but for the SLA the numbers were apparently so low that nobody bothered to register them.

And for this year, with the intervention, I really only have a register of how many requests for an appointment I forwarded to either the GAA or the SLAs. That was 26. That is bigger than the 'zero to a handful' from previous years, so in that respect it seems that the intervention has worked. It should be noted that 15 (58%) of the requests came from the same student. This isn't actually that remarkable: from previous years I got the impression that there are often one or two students in the year who have figured out that getting help from a tutor is something that works for them and make regular appointments. The other 11 requests came from 8 other students in total. (We had about 60 first-year students this year.)

But I don't know how many actual tutoring sessions came out of these 26 requests. I asked the GAA to check, and while I seem to have forwarded her 12 requests, her records show only 6 actual tutoring sessions. Sometimes students didn't respond to the follow-up e-mails trying to establish the details of the appointment; sometimes students didn't show up for an appointment that was made. I assume that this percentage is similar for the SLAs, but I can't check because they didn't keep any records.

Despite these numbers being a bit dodgy from all sides I still put them in a graph because that’s what we do with data. Especially if it makes it look like the intervention was a big success.

So, what have I learned from this project?

- Collecting reliable data in a real-world situation like this is much harder than collecting data in a laboratory or online setting, my usual experimental surroundings.
- Still, I like to think the intervention was a success. Talking to some of the SLAs who had been in this role in previous years as well, they told me that they actually had things to do this year. Sure, there's lots about the process that can still be improved, but creating the website with the timeslots seems to have made taking the first step of asking for help from a tutor easier for students.
- Some additional insights: most requests were for the Quantitative Methods module (which wasn't that unexpected: from their grades it is pretty obvious that a lot of our students struggle with mathematics). And there doesn't seem to be a very strong relationship between the timing of the requests and when the big tests are. I expected the system to be especially busy before exam time, but the requests trickled in pretty much evenly spaced out over the year.
- It was really fun to do. I know the intervention in itself is really very simple. I'm almost a bit embarrassed to call it behavioural economics. It's simply 'making something that was a bit of a hassle slightly easier', but a lot of nudging can be described that way. I liked thinking about the problem in terms of the EAST framework and solving the puzzles associated with implementing the solution I had in mind. And I especially enjoyed seeing the intervention turn out to be quite successful.

(The picture at the very top I've borrowed from this tweet).

Wednesday, July 17, 2019

IV Middlesex University Summer Experimental Economics Workshop

Earlier this month some colleagues and I here at Middlesex University organized the fourth annual Summer Experimental Economics Workshop. The set-up didn't differ much from previous editions: we invited Year 12 students from schools around London and Hertfordshire, ordered enough pastries and juice, and spent two mornings playing a number of experimental economic games. In total about 80 students from 9 schools took part this year.

I like using classroom experiments in my regular teaching as well. An obvious reason is that, as a behavioural economist, I find them a fun and interesting way to show various irrationalities and errors in standard economic decision-making theory. And I still use them to do that. Most of these workshops I start out with a (Keynesian) Beauty Contest, where there is a clear theoretically correct answer that in practice is never the winning strategy, because it depends strongly on all the decision makers in the game knowing that this is the theoretically correct answer (and in turn knowing that everyone else knows, etc.).
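For readers who don't know the game: everyone picks a number, say between 0 and 100, and whoever is closest to two-thirds of the average wins. A minimal sketch of the iterated reasoning (assuming this common version of the game) shows why the theoretically correct answer is 0:

```python
# Iterated best responses in the 2/3-of-the-average Beauty Contest. A naive
# (level-0) player guesses the midpoint; each further level of reasoning best
# responds to the previous one, and the guesses converge to the Nash
# equilibrium of 0. Guessing 0 only wins if everyone reasons this far.
def beauty_contest_levels(start=50.0, factor=2/3, levels=8):
    guess = start
    for level in range(levels):
        print(f"level {level}: guess {guess:.2f}")
        guess *= factor  # best response if everyone else reasons one level less

beauty_contest_levels()
```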

But over the years I've become more and more convinced that classroom experiments are even more interesting when used to do, in a sense, the exact opposite. Many fundamental (micro)economic concepts may seem clear and obvious when explained in a textbook or a lecture, especially to us professional economists, but can be counterintuitive to students. I like to think that playing the role of a decision-maker in such a situation, and experiencing the trade-offs and incentives and uncertainties, is much more insightful and leads to better comprehension of the underlying ideas.

My favourite example of this is the simple competitive market equilibrium. You can explain how this is supposed to work by drawing a demand and a supply function on the board, maybe create a little bit of a background story, and then it is fairly easy to show that in a well-functioning market the equilibrium price and quantity will be where the two lines cross. My belief is that by creating a simple induced-value market, putting students in the shoes of one of the participants and having them try to maximize their profits, they will gain a better understanding of the reasons and mechanisms behind why the equilibrium is the equilibrium.

This is probably especially so for cases where asymmetric information plays a role. Again, these are fairly easily explained, but always from an outsider's perspective. The actual experience of a situation where your optimal decision depends on you knowing something that the person you're interacting with doesn't know (or the other way around) is, I think, much more interesting. So one of the other games I usually do in workshops like these is a version of George Akerlof's Lemons Market. Holt and Sherman (1999)'s design (pdf) works very well in my experience. It consists of two stages: first a market with full information, where you can show that, within a couple of rounds, the market manages to find the equilibrium price(s); and subsequently a market with asymmetric information where, in principle, the same benefits of trade are available as before but won't be generated, because the sellers can only sell the lowest-grade products, since they can't convince the buyers to pay a high enough price for their higher-quality products.

I like to think that, particularly at a beginner level, this second approach of taking an active part in the economic situation and experiencing the forces and incentives that make the equilibrium become the equilibrium is very useful for getting to understand economics. And it is not necessarily less fun than the games designed to make the students aware of their irrational behaviour. But you often need more than a basic knowledge of economic theory to understand why something is irrational. In these workshops I like to mix these two approaches up a bit - the third game this time was a Common Value Sealed Bid Auction where, as expected, there was a lot of overbidding - and that always seems to work out pretty well.

(Photos by Praveen Kujal)

Friday, March 01, 2019

What Do First-Year Economics Students Know About Inequality?

I dedicate one of the lectures in the first-year microeconomics course I teach to talking about inequality. It starts out in the context of the labour market, covering things like wage differentials, the education premium and discrimination in the labour market, but subsequently I also talk about more general aspects of income and wealth inequality, like the Gini coefficient and Lorenz Curves.

For the last few years I have been using Norton and Ariely (2011) as an exercise to get the students to think about (wealth) inequality. It's a very simple experiment with fun results. I ask the students to guess the wealth distribution (for the UK, where we are) by getting them to say what share of the total wealth they think is owned by the poorest 20%, by the second-poorest 20%, etc. In a similar fashion I also ask what their ideal distribution would be.

Typically, the results are very similar to those in the original Norton and Ariely paper (and my experience in this respect is pretty much the same as that of Barnes, Easton and Leupp Hanig (2018), who write about how they have also used this experiment in the classroom). Students seem to underestimate the existing inequality by a considerable margin. Below I've created Lorenz Curves (using this website) for the average distribution as guessed by the students (left) and the actual distribution (right, based on data from the Equality Trust). The ideal distribution from the students is even more equal.
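For completeness, turning a set of guessed quintile shares into a Lorenz curve and a Gini coefficient is straightforward. A minimal sketch with hypothetical shares, not the actual class averages:

```python
# Build Lorenz curve points from quintile wealth shares (poorest to richest)
# and compute the Gini coefficient as 1 - 2 * (area under the Lorenz curve).
def lorenz_points(quintile_shares):
    points, cum = [(0.0, 0.0)], 0.0
    for i, share in enumerate(quintile_shares, start=1):
        cum += share
        points.append((i / len(quintile_shares), cum))
    return points

def gini(quintile_shares):
    pts = lorenz_points(quintile_shares)
    # area under the curve via the trapezoid rule
    area = sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))
    return 1 - 2 * area

guessed = [0.05, 0.10, 0.15, 0.25, 0.45]  # hypothetical student guesses
print(round(gini(guessed), 3))            # 0.38
```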

I really like this exercise, but there are a couple of problems with it. For one, my students seem to have some trouble working with quintiles in this context. I often got back distributions where the share for a lower quintile was higher than that for a higher quintile. Following Norton and Ariely, I usually reordered the percentages so that they made logical sense. But still, it suggests that students are not necessarily thinking about the question correctly.

Eriksson and Simpson (2011) (pdf) have another point of criticism of the original paper. Their hypothesis is that the big effect found using the Norton and Ariely method is largely caused by an anchoring effect. By asking participants to describe the distribution in five percentages that need to add up to 100, the equal division of 20% each might function like an anchor and steer answers in a more equal direction. Eriksson and Simpson run a version where they simply ask for average amounts: 'What is [should be] the average household wealth, in dollars, among the 20% richest households in the United States?' (and similarly for the other quintiles). The effect of this small change in how the question is phrased is startling: the distributions are much closer to the actual ones.

Inspired by this, I decided to try and run the Eriksson and Simpson version in my class this year. I was a bit worried that by asking for amounts, instead of percentage shares, I would get lots of nonsensical answers. And my students indeed seemed to have strange ideas with regard to typical household wealth: guesses of the total average wealth ran from £3,100 to £42,000,000. But if you look at the distribution of this wealth, my students, on average at least, seem to have a very good grasp of the wealth inequality in the UK. The figure below compares the average distribution as guessed by the students using the Eriksson and Simpson method (left) with the actual distribution (right, same as above). And they are remarkably similar.
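One extra step this version needs is converting the average amounts into shares before you can compare them with the quintile-share answers. Since every quintile contains the same number of households, the conversion is trivial; a sketch with made-up amounts:

```python
# Convert average household wealth per quintile (poorest to richest) into
# shares of total wealth. Quintiles contain equal numbers of households, so
# shares are just each average divided by the sum of the averages.
# The amounts below are made up for illustration.
avg_wealth = [5_000, 40_000, 150_000, 320_000, 900_000]  # GBP

total = sum(avg_wealth)
shares = [amount / total for amount in avg_wealth]
print([round(s, 3) for s in shares])
```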

I guess there is no point going back to Norton and Ariely now. As suggested by Eriksson and Simpson, their finding seems to be pretty much driven by how the question is asked. A bit of a shame: 'students vastly underestimate how unequal the UK is' is a much more interesting starting point in the classroom than 'students pretty accurately guess how unequal the UK is'.

Thursday, December 13, 2018

Conditional Non-Takers


One of my favourite papers ever is Fischbacher, Gächter and Fehr's 2001 paper in which they investigate conditional cooperation. Using a simple but clever design around the standard public good game, they are able to divide their participants into a couple of different types with regard to how they react to how much the other people in the game cooperate. There is a not insignificant percentage of free-riders - around 30% - but a plurality of their subjects (50%) can be classified as conditional cooperators: they are willing to cooperate as long as the other players in the game also cooperate. There is another 14% of players whose behaviour can be described as 'hump-shaped' (matching cooperative behaviour when others' contributions are low but free-riding more and more as total contributions by others increase) and a small percentage of participants whose behaviour can't easily be classified ('others').
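For a sense of how such a typology works in practice: each subject fills in a contribution schedule, their own contribution for each possible average contribution of the others, and the schedules are then sorted into types. The sketch below is my own simplification of that idea, not the paper's exact classification rules:

```python
# Classify a contribution schedule into the four types, roughly in the spirit
# of Fischbacher, Gächter and Fehr (my simplification, not their exact
# criteria). schedule[i] = own contribution when others contribute i on average.
from scipy.stats import spearmanr

def classify(schedule):
    n = len(schedule)
    if all(c == 0 for c in schedule):
        return "free-rider"                  # contributes nothing, regardless
    peak = schedule.index(max(schedule))
    if 0 < peak < n - 1:
        up, _ = spearmanr(range(peak + 1), schedule[:peak + 1])
        down, _ = spearmanr(range(n - peak), schedule[peak:])
        if up > 0 and down < 0:
            return "hump-shaped"             # rises, then falls again
    rho, p = spearmanr(range(n), schedule)
    if rho > 0 and p < 0.05:
        return "conditional cooperator"      # contributes more when others do
    return "other"

print(classify(list(range(21))))             # -> conditional cooperator
```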

Lately I have been working on some projects involving public good games and framing, looking at how the way the decision to contribute to the public good is presented might influence contribution rates. One often-used example of this kind of framing is the difference between a Give-frame and a Take-frame. In the former you start with an endowment and your decision is how much of it to put in the public good. In the latter the money starts out in the public good and your decision is how much to take out. The two situations are mathematically the same and as such shouldn't lead to different behaviour, but often do.

It made me wonder if this Give- and Take-frame difference might have an influence on conditional cooperation as introduced by Fischbacher, Gächter and Fehr, and especially if it would change the distribution of types. Being told how much other people contributed and then being asked how much to contribute might have a different effect on behaviour than being told how much other people had taken out and then being asked how much you wanted to take out. I hadn't thought about it hard enough to formulate a particular hypothesis about the direction this effect would work in, but I did know I wanted to use the concept of 'conditional non-taking' in the title. But, as you might have guessed from the tweet quoted above, when I started to do some reading on the topic it turned out the idea had already been studied.

The main question Martinsson, Medhin and Persson study in their 2016 paper Framing and Minimum Levels in Public Good Provision is the effect of forced minimum contribution levels in Give- and Take-frames, but they do so using the methods of Fischbacher, Gächter and Fehr, and in their results section they investigate conditional cooperative behaviour as well. The graph below (figure 1 from the paper) summarizes the average effect: average contributions are slightly lower for Take-decisions, but there is conditional cooperation in both the Give- and the Take-frame. In both conditions the relationship between how much the other people in the group contribute and how much the average participant would like to contribute is positive.

They also divide their participants into the four behavioural types. (By the way, they run their study as a lab-in-the-field experiment in Ethiopia, which makes it extra interesting.) The table below is based on their Tables 6 and 7. The numbers look pretty similar for both conditions and are, according to a Chi-Square test, not statistically significantly different.
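The test itself is a standard chi-square test on the type counts per frame. A minimal sketch with hypothetical counts, not the numbers from their Tables 6 and 7:

```python
# Chi-square test: does the distribution over the four types differ between
# the Give and Take frames? Counts are hypothetical placeholders.
from scipy.stats import chi2_contingency

#              cond.coop  free-rider  hump-shaped  other
give_counts = [   45,        20,         10,         5]
take_counts = [   42,        24,          9,         5]

chi2, p, dof, expected = chi2_contingency([give_counts, take_counts])
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # a large p: no significant difference
```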

So, without doing any actual work myself, I still have an answer for the question that I had. Does presenting a public good game in a Give- or Take-frame influence the distribution of conditional cooperative types? No, not really.