The result of June’s referendum was a major shock for the political and business establishment, who apparently saw Britain voting to stay in the EU as a foregone conclusion. But if they did, their mistaken certainty cannot be blamed on the polls. Even though almost all of the final polls pointed towards a Remain win, most of them showed only a narrow margin. Eleven of the 23 polls published in the last two weeks of the campaign had a Leave lead (including Ipsos MORI’s penultimate poll, published on 16 June, which put Leave at 53%).

The funny thing is, the public seem to have taken this in – even if the ‘experts’ didn’t. Ten days before the vote, 47% told us a Remain win was more likely, and 38% a Leave win; 15% didn’t know. No consensus of certainty here – less than half the public would even venture an opinion that Remain was the likeliest winner, let alone that it was a sure thing.

Our final forecast of 52% for Remain was neither the worst nor the best prediction. We had the wrong side ahead, but we said there was a 26% probability – one in four – that Leave could win, and pointed out that even in our final 22 June poll, 13% of voters said they might still change their minds. Our press release called it “a tight race”, and the Evening Standard’s report called our poll “nail-bitingly close”.

None of that should obscure the fact that we don’t think our final forecast was good enough. Here’s how it happened. The data from our final poll was ambiguous, and we had to make a decision on how to interpret it. With the benefit of hindsight, of course, we can see that we made the wrong decision; had we read it differently, our forecast would have put Leave ahead.

Our problem, as in the US election and in all elections, was how to allow for turnout: which of our respondents would actually vote in the end, and which would not? Up to 2015, our practice was to ask people how likely they were to vote the next day, and to exclude from our final figures those who were not ten out of ten “absolutely certain” to do so. Historically this had worked best. But in the 2015 election it was not accurate: it measured the Lib Dem, UKIP and Conservative shares of the vote more or less correctly, but overstated Labour by 3.5 points. After investigation we concluded that this was partly because our samples were not fully representative, and we have changed our procedures to correct that. But it was also clear we would have been more accurate had we taken account of what people told us about their past frequency of voting, as well as their likelihood to vote in this election. Being in the habit of voting makes a difference. So we changed our standard turnout filter: we now include people who say they are only nine out of ten certain to vote, but exclude those who say they only vote “sometimes” (or even less often) in general elections.
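The difference between the two filters can be sketched in a few lines of code. This is only an illustration of the rules as described above – the field names, the 1–10 likelihood scale and the sample respondents are invented for the example, not Ipsos MORI’s actual data:

```python
# Illustrative sketch of the old and new turnout filters described above.
# Field names, the 1-10 scale and the sample data are assumptions.

def old_filter(respondent):
    # Pre-2015 rule: keep only those "absolutely certain" (10/10) to vote.
    return respondent["likelihood_to_vote"] == 10

def new_filter(respondent):
    # Post-2015 rule: allow 9/10 as well, but drop habitual non-voters who
    # say they vote only "sometimes" or less often in general elections.
    return (respondent["likelihood_to_vote"] >= 9
            and respondent["past_voting"] not in {"sometimes", "rarely", "never"})

sample = [
    {"likelihood_to_vote": 10, "past_voting": "always",    "vote": "Leave"},
    {"likelihood_to_vote": 10, "past_voting": "sometimes", "vote": "Leave"},
    {"likelihood_to_vote": 9,  "past_voting": "always",    "vote": "Remain"},
    {"likelihood_to_vote": 7,  "past_voting": "always",    "vote": "Remain"},
]

old_electorate = [r for r in sample if old_filter(r)]
new_electorate = [r for r in sample if new_filter(r)]
```

Note how the same four respondents produce two different predicted electorates: the old rule keeps the “certain” but habitually non-voting Leave supporter, while the new rule drops that respondent and admits the nine-out-of-ten Remain supporter instead – exactly the kind of switch that moved our headline figure.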

The irony is that, in the referendum, our old approach would have put Leave ahead, while the new one put Remain in the lead. Applied retrospectively to the 2015 election, the new filter had worked better, and it now predicted a turnout pattern for the referendum similar to that of a general election, which seemed plausible to us. We also noted that Remain supporters were more likely to say they thought the result was very important, and we incorporated that as a factor in our forecast, increasing our predicted Remain share by a further percentage point.

Of course, we had no experience of other comparable UK-wide referendums to help guide us. In the 2014 Scottish independence referendum (where our final poll gave one of the most accurate predictions), we saw late movement towards the status quo, matching experience from other referendums around the world. So in the EU referendum we were not surprised to find a Remain lead after a narrow Leave lead in the previous poll. But while that late swing to the status quo had happened in Scotland, we and others were wrong to assume it would apply to the UK as a whole.

In the end, the referendum turnout was 72% – higher than in the last five general elections, and with a significantly different pattern. We missed the higher-than-normal turnout among older working-class voters (the group most opposed to EU membership), because they had often not bothered to vote in the past. If we had taken them at their word when they said they were certain to vote, we would have put Leave on 51% – not far from perfect.

Hindsight is a wonderful thing, but it teaches us important lessons. One is that, in future, we need to be much clearer about communicating the uncertainty and judgement involved in polling elections!