Major opinion polls for the upcoming 294-seat West Bengal assembly elections have so far been more or less consistent. Earlier this week, the Times Now-CVoter poll predicted 155 seats for the ruling Trinamool Congress (TMC) and 107 for the BJP, while the ABP-CNX poll gave 159 seats to TMC and, again, 107 to BJP.

In the 2016 state polls, most opinion polls predicted a TMC victory by a slender margin; they failed to forecast the party's huge win of 211 of the 294 seats. In the 2019 Lok Sabha elections, however, most polls predicted 28-plus of the state's 42 seats for TMC. It finally bagged only 22, with BJP nipping at its heels with 18.

The inexact science of predicting elections has certainly been in crisis for some time. One serious problem is that most polls make mistakes in the same direction. Although almost all opinion polls for the 2019 Lok Sabha elections predicted an NDA victory, they failed to come close to the 353 seats the BJP-led alliance finally got. Again, none of the major polls in the 2020 Delhi assembly elections came close to predicting 62 of the 70 seats for the Aam Aadmi Party (AAP).

One would expect some pollsters to underestimate a party's eventual result while others overestimate it. Instead, opinion polls are often skewed in one particular direction. Why?

In the run-up to the 1992 British elections, 38 of 50 opinion polls indicated small Labour leads, while the Conservatives eventually won the popular vote by 7%. There were three reasons for this discrepancy: a late swing, samples that did not properly represent the electorate, and the 'Shy Tory' factor, the tendency of Conservative voters to be less likely than Labour voters to reveal their loyalties.

Again, none of the 92 polls before the 2015 British elections predicted the Conservative Party's 7% lead: 42 suggested Labour leads, and 81 indicated leads of 0-3% for either party. A new class of voters puzzled pollsters in 2015: 'Lazy Labour' voters.
They were identified as those who declared to pollsters their intention to vote Labour but did not turn up to vote.

Across the Atlantic, opinion polls grossly failed to predict the 2016 US presidential election. Not only did they fail to translate popular-vote predictions into the all-important electoral college, but a polling error of about 2.5% was observed in most closely fought and Democrat-leaning states. Pollsters still struggle to understand how they missed that White male voters without a college degree could overwhelmingly support Donald Trump. The 'Shy Trump voter' theory also popped up, along with the theory that undecided voters ended up breaking for Trump over Hillary Clinton in larger numbers.

Four years on, while most polls predicted a Joe Biden victory, the error in opinion polls was almost the same. The vote margin shifted towards Trump by a median of 2.6 additional percentage points in Democrat states, and by 6.4 points in Republican states. It is possible that Hispanic support for Trump was missed. There was less criticism this time around, though, as the race wasn't as close.

Opinion polls often suffer from serious 'non-response error'. Many people may be sceptical of polls, especially when they sense their opinions lie in the 'wrong' direction. In the US, response rates to telephone public opinion polls conducted by Pew Research Center have declined steadily, from 9% in 2016 to 6% in 2018, and to 5% or less today. Poll predictions are then bound to ignore the views of the non-responsive 95%, while the respondents who remain may not be a representative sample at all. And there is no guarantee that some respondents aren't lying.

Circumventing non-response by 'attributing' answers to non-respondents is sometimes attempted, and the results can be disastrous. Pollsters should instead use a 'randomised response' design that guarantees respondents confidentiality.
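The randomised-response idea can be sketched in a few lines. The version below is Warner's classic coin-flip design, given here purely as an illustration (the function names and the probability p = 0.75 are this sketch's own assumptions, not a specific proposal from the article): each respondent privately flips a biased coin and, depending on the outcome, answers either the sensitive question or its negation. The interviewer never learns which question was answered, yet the true population proportion can still be recovered from the aggregate 'yes' rate.

```python
import random

def randomized_response_survey(true_answers, p=0.75, rng=None):
    """Warner's randomised-response design (illustrative sketch).
    With probability p the respondent answers the sensitive question
    truthfully; with probability 1-p they answer its negation.
    Only 'yes'/'no' is observed, never which question was answered."""
    rng = rng or random.Random(0)
    responses = []
    for truth in true_answers:
        if rng.random() < p:
            responses.append(truth)       # answered the question itself
        else:
            responses.append(not truth)   # answered the negated question
    return responses

def estimate_true_proportion(responses, p=0.75):
    """Recover pi from the observed 'yes' rate lam, using
    lam = p*pi + (1-p)*(1-pi)  =>  pi = (lam - (1-p)) / (2p - 1)."""
    lam = sum(responses) / len(responses)
    return (lam - (1 - p)) / (2 * p - 1)

# Simulated electorate: 40% hold an opinion they would not state openly.
rng = random.Random(42)
true_answers = [rng.random() < 0.40 for _ in range(100_000)]
responses = randomized_response_survey(true_answers, p=0.75, rng=rng)
print(round(estimate_true_proportion(responses, p=0.75), 3))  # close to 0.40
```

No individual response reveals anything definite about its respondent, which is exactly the confidentiality guarantee that should make 'shy' voters willing to answer honestly.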
Also, to gauge 'lazy' voters, respondents should be asked how likely they are to vote on election day.

The writer is professor of statistics, Indian Statistical Institute (ISI), Kolkata.