- Campaigns release internal polls strategically (to attract donors, generate earned media, or discourage challenger entry), not at random; this creates measurable selection bias toward results that show their candidate performing better than the candidate actually is.
- Released partisan polls systematically overstate their candidate's position by an average of 4-6 points compared to contemporaneous independent polls of the same race — the quantified cost of ignoring selection bias.
- Independent (public) polls have their own quality spectrum: live-interview surveys with large samples and transparent methodology are meaningfully more reliable than automated calls, online panels, or surveys that don't disclose their likely-voter screen.
- "Herding" — multiple polls clustering suspiciously close together — is the most dangerous form of systematic error because it appears as consensus while actually reflecting shared methodological bias that will miss real movement simultaneously across all participants.
- A single poll of any type is a noisy data point, not evidence; quality-weighted poll averages that incorporate methodological rigor ratings are the most reliable tool for distinguishing genuine movement from noise in any competitive race.
The Strategic Logic of Poll Release
Campaigns conduct internal polls continuously throughout the cycle — typically every 2-4 weeks in competitive races, more frequently in the final months. These polls are conducted by partisan pollsters who are paid by the campaign and whose professional future depends on the campaign's satisfaction. The fundamental incentive structure means that internal poll results are not released at random: they are released when strategically useful.
There are three common reasons a campaign releases an internal poll showing it ahead. First, to attract donor attention: in early campaign stages, a poll showing momentum helps fundraising. Second, to shape earned media: a poll showing a surprising lead generates news coverage that functions as free advertising. Third, for strategic intimidation: a poll showing dominance in a primary can discourage challenger entry. None of these motivations has anything to do with the poll's accuracy, and all of them create selection bias in the sample of released internals.
The inverse is equally important: campaigns almost never release internals showing them behind. An analysis by FiveThirtyEight found that released partisan polls in 2022 Senate races favored the releasing party by an average of 4.2 percentage points. In races rated Toss-up or Lean, the bias was even larger, approximately 5.8 points. The conclusion is stark: a released internal poll is primarily a communications document, not a measurement tool.
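To make that discount concrete, here is a minimal sketch of how a reader might deflate a released internal's reported margin by the average release bias. The 4.2-point constant is the estimate cited above, and `deflate_internal` is a hypothetical helper, not a published forecaster adjustment.

```python
def deflate_internal(margin_pts: float, avg_release_bias: float = 4.2) -> float:
    """Discount the reported margin of a campaign-released internal poll.

    margin_pts: reported margin for the releasing campaign's candidate
        (positive = ahead).
    avg_release_bias: the ~4.2-point average overstatement cited above
        (an estimate, not a universal constant).
    """
    return margin_pts - avg_release_bias

# A campaign touts an internal showing its candidate up 6 points;
# the selection-bias-adjusted reading is roughly a 1.8-point lead.
print(round(deflate_internal(6.0), 1))  # 1.8
```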
Public Poll Quality: A Tiered View
| Poll Type | Examples | Bias Risk | Herding Risk | Forecaster Weight | Best Use |
|---|---|---|---|---|---|
| Academic / university | Marquette, Monmouth, Marist | Low | Low | 1.0x (full) | Primary data source |
| Major media-sponsored | NYT/Siena, WaPo/ABC, CBS/YouGov | Low–medium | Medium | 0.9x | Primary data source |
| Independent public pollster (A-rated) | Quinnipiac, Suffolk, Emerson | Medium | Medium | 0.7–0.8x | Averaging required |
| Partisan pollster (D-aligned) | PPP, Global Strategy Group | High | High | 0.2–0.3x | Directional only |
| Partisan pollster (R-aligned) | Trafalgar, Rasmussen, McLaughlin | High | High | 0.2–0.3x | Directional only |
| Campaign internal | Any released internal | Very high | Very high | 0.1–0.2x | Communications signal only |
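To show how the weight column might be used in practice, here is a minimal sketch of a quality-weighted polling average. The tier names, weights, and poll margins below are illustrative assumptions; real aggregators apply many further adjustments (house effects, recency decay, sample size).

```python
# Minimal quality-weighted polling average. Tier weights follow the table
# above; the polls themselves are invented for illustration.
TIER_WEIGHTS = {
    "academic": 1.0,
    "media": 0.9,
    "independent": 0.75,
    "partisan": 0.25,
    "internal": 0.15,
}

polls = [  # (candidate margin in points, tier)
    (+3.0, "academic"),
    (+1.0, "media"),
    (+2.0, "independent"),
    (+7.0, "internal"),   # released internal: heavily discounted
]

weighted_margin = sum(m * TIER_WEIGHTS[t] for m, t in polls) / sum(
    TIER_WEIGHTS[t] for _, t in polls
)
print(f"quality-weighted margin: {weighted_margin:+.1f}")  # +2.3
```

Note how the inflated internal barely moves the blend: at 0.15x weight, its 7-point margin pulls the average up by only a fraction of a point.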
Herding: When All Polls Are Wrong Together
Herding is the systematic phenomenon where pollsters adjust their results toward the consensus to avoid being an outlier. It is rational behavior at the individual pollster level — a pollster who publishes an outlier that turns out to be wrong suffers reputational damage, while a pollster who publishes a result consistent with the consensus suffers no individual reputational cost even if the consensus is wrong. The result is collective irrationality: the consensus underestimates its own uncertainty.
Statistical detection of herding is straightforward. If polls were independent random samples of the same underlying population, each poll's reported share would be approximately normally distributed around the true value with sampling variance p(1-p)/n, where p is the true proportion and n is the sample size. When the observed variance across polls is substantially smaller than this theoretical floor, herding is occurring. Analysis of 2016 Wisconsin polls found variance roughly one-third of what randomness would predict, a classic herding signature that contributed to every model underestimating Trump's probability of winning the state.
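A minimal version of that variance test might look like the following. The poll shares and sample sizes are invented to mimic a herded cluster; they are not the actual 2016 Wisconsin data.

```python
import statistics

def herding_check(shares: list[float], sample_sizes: list[int]) -> float:
    """Compare observed variance across polls to the sampling-noise floor.

    shares: each poll's reported share for one candidate (e.g., 0.46).
    Returns the ratio observed/expected; values well below 1.0 suggest
    herding (polls cluster more tightly than random sampling allows).
    """
    p = statistics.mean(shares)
    # Expected variance if each poll were an independent random sample.
    expected = statistics.mean(p * (1 - p) / n for n in sample_sizes)
    observed = statistics.pvariance(shares)
    return observed / expected

# Hypothetical herded cluster: five polls within about a point of each
# other despite modest sample sizes.
shares = [0.45, 0.46, 0.47, 0.48, 0.455]
sizes = [800, 650, 900, 700, 750]
print(f"variance ratio: {herding_check(shares, sizes):.2f}")  # 0.35 -> herding
```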
How to Be a Smarter Poll Reader
The practical rules for reading polls intelligently (a worked sketch follows the list):

1. Always check who paid for the poll. A campaign-funded poll showing the funder ahead should be weighted near zero.
2. Look at the pollster's historical bias record (FiveThirtyEight's pollster ratings are the best public resource).
3. Prefer averages over individual polls; any single poll carries high noise.
4. Compare the likely-voter screen methodology: if a poll samples registered voters rather than likely voters, expect it to run 2-4 points more Democratic than the final result.
5. Remember that poll aggregators themselves can be fooled by herding; a polling average is only as good as the diversity of its inputs.
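As a toy synthesis of rules 1, 3, and 4, the sketch below converts each poll into an (adjusted margin, weight) pair and then blends them. `read_poll`, the 3-point registered-voter shift (the midpoint of the 2-4 point range above), and the 0.15 sponsor weight are all illustrative assumptions, not calibrated parameters.

```python
def read_poll(margin_pts: float,
              sponsor_is_candidate: bool,
              registered_voter_sample: bool) -> tuple[float, float]:
    """Return (adjusted_margin, weight) for use in a weighted average.

    Convention: positive margin favors the Democratic candidate.
    """
    # Rule 4: registered-voter samples tend to run ~2-4 points more
    # Democratic than the final result; shift by the midpoint.
    if registered_voter_sample:
        margin_pts -= 3.0
    # Rule 1: a poll released by the campaign it flatters gets near-zero weight.
    weight = 0.15 if sponsor_is_candidate else 1.0
    return margin_pts, weight

# Rule 3: average, don't cherry-pick. One public RV poll, one internal.
polls = [read_poll(+2.0, sponsor_is_candidate=False, registered_voter_sample=True),
         read_poll(+8.0, sponsor_is_candidate=True, registered_voter_sample=False)]
avg = sum(m * w for m, w in polls) / sum(w for _, w in polls)
print(f"blended margin: {avg:+.1f}")  # +0.2
```

The example illustrates the section's central point: a flattering internal showing an 8-point lead barely budges a properly weighted read of the race.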