How we predict which candidates a voter will support

October 24, 2020

At Deck, we build predictions using an approach we call “contextual inference.” Most predictive models in politics are trained on survey responses. Those responses are then generalized to a broader audience based on the demographic and socioeconomic traits of the respondents.

Our approach instead captures real data on individuals’ past behaviors (at the precinct level) and the context around those behaviors to anticipate what people in new contexts might do in the future.

One of the most important behaviors we predict is who a person will vote for in a given contest.


How the models are built

The first step in preparing this model is to assemble its training data, which combines:

  • Representations of historic voter traits and election results
  • Demographic and political traits of the candidates on the ballot in past contests
  • Features describing the volume and sentiment of media coverage of past races
  • Detailed campaign finance data, including the demographic traits of contributors
  • Data on which audiences were most likely to be exposed to certain types of media coverage

Each instance of a precinct/block-level result, voter traits, and candidate traits constitutes a single training sample. Our database currently includes over 940 million training samples for our candidate support models.
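To make the shape of a training sample concrete, here is a minimal sketch of how rows like these could be assembled with pandas. The file names and columns (precinct_results.csv, vote_share, and so on) are hypothetical stand-ins, not our actual schemas.

```python
import pandas as pd

# Hypothetical inputs; the real schemas are more detailed.
results = pd.read_csv("precinct_results.csv")       # precinct_id, contest_id, candidate_id, vote_share
voters = pd.read_csv("precinct_voter_traits.csv")   # precinct_id, median_age, pct_registered_dem, ...
candidates = pd.read_csv("candidate_traits.csv")    # candidate_id, party, incumbency, media_sentiment, ...

# One training sample: a precinct-level result joined to the traits of
# the voters in that precinct and of the candidate on the ballot.
samples = (
    results
    .merge(voters, on="precinct_id")
    .merge(candidates, on="candidate_id")
)

features = samples.drop(columns=["vote_share"])  # model inputs
target = samples["vote_share"]                   # outcome to predict
```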

Next, we use our training data to identify the features most likely to have high predictive power — either alone or in combination with others — and those most likely to confuse the model into overfitting or to diminish the impact of other features. At this stage, we prune highly correlated features and features without meaningful variation, use a technique called VSURF (variable selection using random forests) to better understand how features will interact with one another, and impute missing data.
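VSURF itself is an R package; as a rough Python analogue, the pruning and imputation steps might look like the sketch below, with a random-forest importance ranking standing in for VSURF's thresholding procedure. The correlation threshold and other parameters are illustrative, and `features` and `target` come from the assembly sketch above.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer

def prune(X: pd.DataFrame, corr_threshold: float = 0.95) -> pd.DataFrame:
    """Drop features without meaningful variation, then one of each highly correlated pair."""
    X = X.loc[:, X.nunique() > 1]
    corr = X.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    redundant = [col for col in upper.columns if (upper[col] > corr_threshold).any()]
    return X.drop(columns=redundant)

# Prune, then fill remaining gaps with per-column medians.
X = prune(features.select_dtypes("number"))
X = pd.DataFrame(SimpleImputer(strategy="median").fit_transform(X), columns=X.columns)

# Rank the surviving features by random-forest importance, a rough
# stand-in for VSURF's elimination-and-thresholding steps.
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, target)
ranking = pd.Series(forest.feature_importances_, index=X.columns).sort_values(ascending=False)
print(ranking.head(20))
```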

Finally, we iteratively design a deep learning architecture to predict our outcome. In this case, we've built a ten-layer neural network, trained with the Adam optimization algorithm to minimize binary cross-entropy loss.
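For illustration only, here is what a network matching that description could look like in Keras. The ten-layer count, Adam optimizer, and binary cross-entropy loss come from the description above; the layer widths and activations are guesses. (Binary cross-entropy also accepts fractional targets, so precinct-level support shares can serve as soft labels.)

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_model(n_features: int) -> keras.Model:
    # Ten trainable layers total: nine hidden ReLU layers plus a
    # sigmoid output. The widths here are illustrative guesses.
    model = keras.Sequential([keras.Input(shape=(n_features,))])
    for units in (512, 256, 256, 128, 128, 64, 64, 32, 16):
        model.add(layers.Dense(units, activation="relu"))
    model.add(layers.Dense(1, activation="sigmoid"))
    model.compile(
        optimizer="adam",            # the Adam optimization algorithm
        loss="binary_crossentropy",  # the objective named above
        metrics=[keras.metrics.AUC()],
    )
    return model

model = build_model(n_features=X.shape[1])
```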


Evaluating accuracy

To validate this model, we trained a version of it with no knowledge of data from 2018. Instead, the model was trained only on data from 2010 through 2016, containing millions of unique campaign-voter representations.
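A cycle-based holdout like this amounts to a temporal split on the assembled samples; assuming a hypothetical `election_year` column, it is a two-line filter:

```python
# Temporal holdout: train on 2010-2016, validate on 2018.
train = samples[samples["election_year"].between(2010, 2016)]
test = samples[samples["election_year"] == 2018]
```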

The result was a model with significant predictive power. When validated on over 56,000 testing samples from 2018, the model's area under the ROC curve was 0.88, meaning that when shown a random supporter and a random non-supporter, the model ranked the supporter higher 88% of the time. The model's sensitivity (or true positive rate) was 0.90, and its specificity (or true negative rate) was 0.88. In a lift chart organized by decile, with lift indexed so that 100 represents the baseline support rate, the top decile had a lift of 228 and the bottom decile a lift of 7. In other words, people with top-decile scores were more than twice as likely as a randomly selected person to support a given candidate, while those in the bottom decile were less than a tenth as likely.
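For readers who want to compute the same metrics on their own holdout data, here is a sketch using NumPy and scikit-learn. `y_true` and `y_score` are stand-ins for the actual test labels and model probabilities, and the 0.5 classification threshold is illustrative:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# y_true: 0/1 support labels on the 2018 holdout; y_score: model
# probabilities. Both are NumPy arrays standing in for the real test set.
auc = roc_auc_score(y_true, y_score)

# Sensitivity and specificity at an illustrative 0.5 threshold.
tn, fp, fn, tp = confusion_matrix(y_true, y_score >= 0.5).ravel()
sensitivity = tp / (tp + fn)  # true positive rate
specificity = tn / (tn + fp)  # true negative rate

# Decile lift, indexed so 100 = the overall base rate.
edges = np.quantile(y_score, np.linspace(0.1, 0.9, 9))
deciles = np.digitize(y_score, edges)  # 0 = bottom decile, 9 = top
base_rate = y_true.mean()
for d in range(10):
    lift = 100 * y_true[deciles == d].mean() / base_rate
    print(f"decile {d + 1}: lift {lift:.0f}")
```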


Survey validation

Through an analysis of survey responses collected by YouGov in North Carolina throughout September 2020, we were able to see how our support scores for Joe Biden, Cal Cunningham, and Democratic candidates for the U.S. House compared with actual stated support for these candidates. We were also able to see how our scores compared to generic scores developed by other Democratic data vendors. Raw data on this analysis is available here.

Below, you can see what share of survey respondents in each of five Deck support score buckets indicated support for a given Democratic candidate. In most cases (except in buckets with very small sample sizes, as indicated in parentheses), the share of survey respondents indicating support falls squarely within the expected Deck probability range.
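A bucket check like this reduces to a single groupby once survey responses are matched to scores. A minimal sketch, with hypothetical file and column names:

```python
import pandas as pd

# Hypothetical frame of survey respondents matched to Deck scores:
# `deck_score` in [0, 1] and `supports_candidate` coded 0/1.
matched = pd.read_csv("nc_survey_matched.csv")

buckets = pd.cut(matched["deck_score"], bins=[0, 0.2, 0.4, 0.6, 0.8, 1.0])
calibration = matched.groupby(buckets, observed=True)["supports_candidate"].agg(["mean", "count"])
print(calibration)  # observed support share and sample size per bucket
```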

We also compared our scores predicting support for Cal Cunningham with survey-based Democratic Party support scores from another vendor. As shown in the area-under-the-curve and gains charts below, our scores were more precise.


Why do Democrats need another support score?

Right now, most Democratic campaigns identify likely supporters with one of three different approaches:

  1. Expensive but high-quality survey-based predictive models, using thousands of individual survey responses to predict the traits of supporters
  2. Generic survey-based models that identify who is most likely to identify as a Democrat
  3. Demographic and socioeconomic filtering based on local knowledge of a district (e.g., focusing on young voters, voters of color, recent registrants, or registered Democrats)

However, these approaches aren't always a good fit. Campaign-specific survey-based models are too expensive for most campaigns to afford, and their districts are often too small to collect the number of survey IDs such a model requires.

In our analysis of generic Democratic Party support scores provided by other Democratic data vendors, we've found that these scores don't always correlate well with precinct-level results.

We believe that all campaigns, with limited resources and so much on the line, need high-quality targeting data. Our support scores are built to fill that need.
