Research on Election Programs
We aim to find electoral funding opportunities that strengthen democracy as much as possible. We start by identifying the most competitive races.
Then we focus on finding organizations that effectively implement evidence-based programs with the potential to substantially affect those races. These programs may be non-partisan, preventing distortions of democracy by encouraging citizens to vote regardless of their candidate preference, or partisan.
Initially we conduct brief assessments of a number of programs, screening them for evidence of impact and cost-effectiveness. We provide in-depth reviews of the most promising programs and develop a cost-effectiveness estimate.
If a program meets our criteria, we make a recommendation to our network and/or support the program from our Focus for Democracy All Programs grant fund.
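To make the cost-effectiveness calculation concrete, here is a minimal sketch in Python using purely hypothetical figures; the core comparison is the cost per net vote, i.e., a program’s total cost divided by the net votes it is estimated to generate.

```python
# Minimal cost-effectiveness sketch -- all figures are hypothetical.
# "Net votes" means additional votes attributable to the program
# (for non-partisan turnout programs, additional ballots cast).

program_cost = 500_000        # total program budget, in dollars (hypothetical)
voters_contacted = 250_000    # voters the program reaches (hypothetical)
effect_per_contact = 0.004    # net votes per contact, e.g. from an RCT estimate

net_votes = voters_contacted * effect_per_contact    # 1,000 net votes
cost_per_net_vote = program_cost / net_votes         # $500 per net vote

print(f"Estimated net votes: {net_votes:,.0f}")
print(f"Cost per net vote: ${cost_per_net_vote:,.2f}")
```

A real estimate also has to grapple with uncertainty in the effect size and how well it transfers to new races; this sketch shows only the basic unit of comparison.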
Identifying Programs for Review
Once we determine the key competitive races to focus on, we initiate a broad search for programs through a variety of channels:
Review independent scientific studies.
We review randomized controlled trials and studies published in academic journals, as well as studies from progressive hubs for non-public election-related research.
Independently review evidence brought to us by organizations seeking a recommendation.
We often rely on self-reported evidence provided by organizations to initially assess the effectiveness of programs in generating net votes for critical races. Our evaluation team then reviews the study design, methodology, and raw data and performs its own analysis to estimate a program’s cost-effectiveness.
Actively gather information on new programs and implementing organizations.
We consult with practitioners, academics, and experts working in the field of campaign tactics. We attend myriad conferences and webinars to stay informed about the latest developments in the field and to build relationships with practitioners and donors.
Engage with implementing organizations.
We regularly engage in discussions with organizations implementing programs to gain a deeper understanding of their initiatives.
Research the funding landscape.
We conduct research on the sources of funding for various programs, considering factors such as the availability of resources, whether funding will be raised in time to execute programs efficiently, and the likelihood that other donors will fund the programs.
Our Evaluation Process
We review numerous programs each election cycle and conduct an in-depth analysis of the most promising candidates.
Here's an overview of our evaluation process:
Preliminary review: We conduct a cursory review of various programs, considering their alignment with our criteria. If a program does not seem likely to meet our standards, we deprioritize it.
Digging deeper: For programs that show promise based on our initial review, we assess them at progressively deeper levels to understand their potential impact and effectiveness.
Analyzing promising programs: We allocate the majority of our time and attention to programs that appear highly promising in terms of meeting our criteria. We elevate programs that demonstrate the following:
Strong evidence: There is compelling evidence supporting the effectiveness of the program.
Cost-effectiveness: The program exhibits a high level of cost-effectiveness, maximizing the impact per dollar invested.
Need for our funding: If a program is likely to scale to its optimal level without our network’s funding, we do not recommend it unless it becomes apparent that our help will make that difference. Our evaluators track the real-time budget gaps of our recommended programs and confer with funders outside our network about their interest in, and capacity for, funding those programs.
Key competitive races: The program is implemented in critical races in battleground states and swing districts.
Scalability: The program has the capacity to reach the majority of target voters in key geographies.
Considering organizational capacity and execution risk: We assess the organization’s ability to effectively implement a particular program and the competency of its leadership team, along with the feasibility of and challenges associated with program implementation.
Based on our comprehensive evaluation, we recommend funding organizations that demonstrate excellence in implementing the most cost-effective programs and have the potential to make a significant impact on critical races.
Establishing Causation: How do we know the program made a difference?
One of the main challenges in evaluating programs is establishing a convincing argument that the positive change in an election is a result of the program under investigation, rather than other factors.
For example, suppose an organization reports that compared to the previous election, voter turnout was 5 percentage points higher in the district where they implemented a door-knocking voter education program. How do we know what portion of the increase in voter turnout was because of the door-knocking and what portion was due to TV ads, news coverage, mailings, and social media? Perhaps more buzz about the race made voters in the district more engaged than they had been in the earlier election.
Many studies rely on simplistic before-and-after comparisons, which often fail to differentiate between program effects and unrelated changes that occur in different elections.
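As a rough illustration of the problem, the sketch below uses hypothetical turnout numbers and compares the program district against a similar district where no program ran. Even this adjustment rests on the strong assumption that the two districts would otherwise have moved in parallel, which is one reason we prefer the randomized designs described next.

```python
# Hypothetical numbers showing how a before-and-after comparison can mislead.
# Turnout rose 5 points where the door-knocking program ran, but it also rose
# 4 points in a comparable district with no program -- most of the change is
# background shift (ads, news coverage, a closer race), not the program.

program_before, program_after = 0.52, 0.57        # program district turnout
comparison_before, comparison_after = 0.51, 0.55  # similar district, no program

naive_effect = naive = program_after - program_before        # +5.0 points
background_shift = comparison_after - comparison_before      # +4.0 points
plausibly_program = naive_effect - background_shift          # +1.0 point

print(f"Naive before/after effect: {naive_effect:+.1%}")
print(f"Background shift elsewhere: {background_shift:+.1%}")
print(f"Plausibly program-related: {plausibly_program:+.1%}")
```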
Randomized Controlled Trials (RCTs)
An effective approach to the challenge of causal attribution is randomization. In a randomized controlled trial (RCT), participants are randomly assigned to either the treatment group or the control group. The treatment group receives the intervention being studied (e.g., a door-knocking program), while the control group does not. The two groups are then compared to see whether the treatment has had any effect.
Scientists have long considered RCTs the best way to establish causal attribution because randomization helps to ensure that the two groups being compared are similar in all respects except for the treatment being studied. By randomly assigning participants to the treatment or control group, researchers can be reasonably sure that observed differences between the two groups (e.g., higher voter turnout) are due to the treatment, rather than other factors that could affect the outcome of the study.
In the example below, an RCT randomly assigns unregistered voters to either the treatment group (those who receive a voter registration mailer) or the control group (those who do not).
Both treatment and control groups would be similarly exposed to all other election news, advertising, campaign tactics, and voter registration drives of other organizations.
After the election, the state’s publicly available voter file is reviewed to verify who actually voted. If the treatment group has a significantly higher turnout rate than the control group, we can conclude that the voter registration mailers were effective in increasing voter turnout.
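The sketch below simulates this design end to end: it randomly assigns a hypothetical universe of unregistered voters, simulates an election with an assumed mailer effect, and runs a two-proportion z-test on turnout. All numbers are invented for illustration; a real analysis would match treatment and control names against the actual voter file.

```python
import math
import random

random.seed(0)

# Randomly assign a hypothetical universe of 20,000 unregistered voters:
# a shuffled list split in half gives the treatment and control groups.
people = list(range(20_000))
random.shuffle(people)
treatment = set(people[:10_000])    # receives the registration mailer

BASE_TURNOUT = 0.30     # assumed turnout with no mailer (hypothetical)
MAILER_EFFECT = 0.02    # assumed true lift from the mailer (hypothetical)

def voted(person):
    """Simulate one person's turnout, as later verified in the voter file."""
    p = BASE_TURNOUT + (MAILER_EFFECT if person in treatment else 0.0)
    return random.random() < p

votes_t = sum(voted(p) for p in people if p in treatment)
votes_c = sum(voted(p) for p in people if p not in treatment)
n_t = n_c = 10_000

# Two-proportion z-test: is treatment turnout significantly higher?
p_t, p_c = votes_t / n_t, votes_c / n_c
p_pool = (votes_t + votes_c) / (n_t + n_c)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
z = (p_t - p_c) / se
p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))   # one-sided

print(f"Treatment turnout: {p_t:.1%}   Control turnout: {p_c:.1%}")
print(f"z = {z:.2f}, one-sided p = {p_value:.4f}")
```

Because assignment is random, everything other than the mailer washes out in expectation, so a small p-value points to the mailer itself.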
That being said, it is important to note that our assessment of a study goes beyond its classification as an RCT. We find nonrandomized studies compelling in certain cases, while some RCTs may not provide compelling evidence. Furthermore, if preregistration were more prevalent, we would likely consider it a more crucial and encouraging characteristic of a study than randomization.
While we believe that RCTs possess multiple qualities that make them more credible than other study designs when all else is equal, they are often more expensive and challenging to conduct, and are not the sole determining factor in our decision to fund a program.
Note: Conducting an RCT may be financially or practically infeasible in some instances; in such cases, we employ alternative techniques for attributing causality. Where possible, however, we strongly prefer randomized controlled trials, as we can have much greater confidence in their validity than in that of other methods.