The Aggregator Landscape: Who Remains Standing
For seventeen years, FiveThirtyEight defined polling aggregation for a broad public audience, blending statistical rigor with accessible writing to give readers probabilistic forecasts rather than simple poll averages. Its closure in 2025, following Nate Silver's 2023 departure and ABC News's restructuring, removed the most prominent independent voice in the space. The replacement ecosystem is fragmented: Silver continues independently at Silver Bulletin; The Economist has operated a sophisticated election model since 2020; academic forecasters at several universities maintain quantitative assessments; and the traditional qualitative handicappers, Cook Political Report, Sabato's Crystal Ball, and Inside Elections, continue their race-by-race ratings. What's missing is a single widely trusted institutional home for integrated quantitative polling aggregation.
The Pollster Quality Problem
Not all polls are equal. Firms like Trafalgar Group (Republican-aligned, with quality grades that have swung widely across cycles) and Rasmussen Reports (historically Republican-leaning results) are included in RCP averages but excluded or down-weighted in more rigorous aggregations. The 2020 and 2022 cycles showed that including these firms in simple averages systematically biased results in the Republican direction. The pollster ratings developed at FiveThirtyEight and carried forward at Silver Bulletin provide a defensible quality framework, but public understanding of pollster quality varies widely, and media outlets often report any poll as equally credible.
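To illustrate the mechanics of down-weighting, here is a minimal sketch. The letter grades, weight values, and poll numbers below are invented for demonstration; real aggregators use far richer models (house-effect corrections, sample size, recency).

```python
# Hypothetical quality grades mapped to weights; values are illustrative only.
GRADE_WEIGHT = {"A": 1.0, "B": 0.7, "C": 0.4}

def quality_weighted_average(polls):
    """polls: list of (margin, grade); margin in points, positive = R lead."""
    total = sum(GRADE_WEIGHT[grade] for _, grade in polls)
    return sum(margin * GRADE_WEIGHT[grade] for margin, grade in polls) / total

# Two high-quality polls near R+1.5 and one low-quality outlier at R+5:
polls = [(2.0, "A"), (1.0, "A"), (5.0, "C")]
simple = sum(margin for margin, _ in polls) / len(polls)   # ~2.67
weighted = quality_weighted_average(polls)                 # ~2.08
```

The low-quality outlier pulls the simple average toward R+2.7, while the quality-weighted average stays closer to the two better polls, which is the basic logic behind excluding or down-weighting partisan-leaning firms.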
The 2022 Polling Miss and What Changed
In 2022, most aggregators showed a competitive or Republican-favorable generic ballot heading into the midterms, suggesting a potential "red wave." Democrats outperformed those expectations substantially. The miss was attributed to several factors: herding (pollsters adjusting toward the consensus rather than publishing outliers), likely-voter screen problems (Democratic turnout ran higher than turnout models expected), and the Dobbs decision, which mobilized women and other voters who rarely turn out in midterms. Post-2022, top pollsters adjusted their methodology, but 2024 saw a different kind of miss, with polls generally underestimating Trump's gains among Latino and working-class voters. No single methodological fix has solved the persistent partisan polling error.
How to Read Aggregators: A Practical Guide
For 2026 races: use Silver Bulletin or The Economist's model as primary references. Check Cook, Sabato, and Inside Elections for race-level qualitative assessments. Treat RCP averages as a supplementary data point, but examine which polls they include. Look at trend direction as much as absolute numbers: a candidate moving three points over three weeks across multiple polls is more meaningful than any single poll's margin. Discount polls from firms that have shown a consistent partisan lean in past cycles. Weight recent polls more heavily in fast-moving environments such as a wave election.
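The last rule in the list above, weighting recent polls more heavily, can be made concrete with a simple exponential-decay average. The 14-day half-life is an illustrative assumption, not any published model's parameter, and the poll numbers are invented.

```python
def recency_weighted_average(polls, half_life_days=14.0):
    """polls: list of (days_ago, margin); weight halves every half_life_days."""
    weights = [0.5 ** (days / half_life_days) for days, _ in polls]
    total = sum(weights)
    return sum(w * margin for w, (_, margin) in zip(weights, polls)) / total

# A fast-moving race: the newest poll shows a 3-point lead, older polls disagree.
polls = [(0, 3.0), (7, 1.0), (21, -1.0)]
simple = sum(margin for _, margin in polls) / len(polls)   # 1.0
weighted = recency_weighted_average(polls)                 # pulled toward the newest poll
```

In a stable race a longer half-life (flatter weights) is preferable, since it suppresses noise; in a wave environment a shorter half-life lets the average track real movement, which is the trade-off the guidance above describes.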
Frequently Asked Questions
What happened to Nate Silver after FiveThirtyEight closed?
Silver launched Silver Bulletin on Substack in 2023 before FiveThirtyEight formally closed, offering a mix of free and paid content on polling, elections, and political forecasting. He continued his FiveThirtyEight-style weighted poll averaging and probabilistic election forecasting. Silver Bulletin has attracted a substantial subscriber base and is considered the primary successor to FiveThirtyEight's quantitative forecasting legacy, though it lacks the institutional resources and dedicated team of the original site.
What is the best single source for tracking 2026 Senate races?
For comprehensive tracking: Cook Political Report's Senate ratings, updated frequently, provide the best institutional qualitative benchmark. Silver Bulletin's race-by-race aggregation, when available, offers a quantitative supplement. For individual state polls, Polling Report aggregates state-level data. Fundraising data from FEC.gov adds a signal that historically correlates with outcomes. No single source is sufficient: the best analysis combines qualitative race ratings, quantitative poll averages, and contextual factors such as candidate quality and economic conditions.
Why do polls consistently underestimate Republican support?
Several theories have been advanced: differential non-response (Trump supporters are simply less likely to respond to pollsters at all); social desirability bias (the "shy Trump voter" story, in which respondents underreport stigmatized opinions even in anonymous polls); educational sampling bias (phone and online polls over-sample college-educated respondents, who skew Democratic); and herding (firms adjust toward the consensus rather than reporting genuine outliers). Multiple cycles of Trump outperforming his polls suggest the problem is structural, not accidental. Pollsters such as Pew Research Center and NORC, which recruit probability-based panels rather than relying on opt-in samples, have generally performed better.