The Device You Trust
You use your phone to research candidates. You scroll social media for news. You watch clips, read posts, maybe check a fact or two. You think you’re forming your own opinion. And in a sense, you are — but you’re forming it from a feed that was built specifically for you, by an algorithm that knows what makes you click.
Most Americans now get their political information through social media or search engines. Not newspapers. Not town halls. Platforms that make money by keeping you engaged — and engagement means showing you what you already believe, or what will make you angry enough to keep scrolling.
That was already a problem. Then AI got involved.
AI doesn’t just filter what you see. It generates content tailored to you. A campaign can now produce thousands of variations of the same message — different words, different tone, different emphasis — and deliver each one to a different voter based on their profile. Your neighbor sees a completely different candidate than you do. Same name on the sign. Different person behind it.
What It’s Actually Doing
AI microtargeting doesn’t just personalize ads. It fragments shared reality. When every voter sees a different version of the same candidate, there is no common ground left to stand on. You can’t debate a policy position if your neighbor was shown a completely different one. You can’t hold a politician accountable for a promise they only made to your zip code.
This is how it works: a campaign feeds voter data into an AI system — your age, your neighborhood, your browsing history, your consumer profile, your likely concerns. The AI generates a message designed to resonate with you specifically. Pro-gun voter in a rural district? The candidate is a Second Amendment champion. Suburban mom worried about school safety? Same candidate, different message — now they’re the common-sense gun reform candidate. Both messages go out the same day. Neither voter knows the other message exists.
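The mechanism described above can be sketched in a few lines. Everything here is a hypothetical illustration, not any campaign's actual system: the profiles, segment names, and messages are invented, and the crude `classify` function stands in for the statistical profiling a real system would use.

```python
# Minimal sketch of segment-based message targeting.
# All profiles, segments, and messages are hypothetical.

CANDIDATE = "Candidate X"

# One candidate, contradictory messages per voter segment.
MESSAGES = {
    "rural_gun_owner": f"{CANDIDATE} is a Second Amendment champion.",
    "suburban_parent": f"{CANDIDATE} backs common-sense gun reform.",
}

def classify(profile: dict) -> str:
    """Crude stand-in for the profiling model a real system would use."""
    if profile.get("district_type") == "rural" and "hunting" in profile.get("interests", []):
        return "rural_gun_owner"
    if profile.get("district_type") == "suburban" and profile.get("has_children"):
        return "suburban_parent"
    return "rural_gun_owner"  # arbitrary default for the sketch

def targeted_message(profile: dict) -> str:
    """Each voter sees only the variant matched to their segment."""
    return MESSAGES[classify(profile)]

voter_a = {"district_type": "rural", "interests": ["hunting"]}
voter_b = {"district_type": "suburban", "has_children": True}

print(targeted_message(voter_a))
print(targeted_message(voter_b))
```

The point of the sketch is how little machinery is required: the two voters receive opposite claims from the same candidate, and nothing in the pipeline ever places both messages side by side where the contradiction could be seen.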
This isn’t hypothetical. Political campaigns already use AI to generate fundraising emails, draft social media posts, and produce targeted video content. The technology to run personalized messaging at scale is here. The guardrails are not.
When every voter sees a different version of the same candidate, there’s no shared reality left to hold anyone accountable. That’s not a campaign. That’s a con.
The older version of this trick was simpler. A politician would say one thing in one town and something different in the next. But at least there were witnesses. Reporters could compare notes. Voters could compare yard signs. The contradictions were catchable because the messages existed in the same world.
AI microtargeting removes even that. Each voter gets a private feed, a private candidate, a private set of promises. There are no witnesses because there’s no shared experience. It’s not that politicians are lying more. It’s that the technology lets them tell different truths to different people, simultaneously, at scale, and never get caught.
We’ve Known This for Ten Years
In March 2016, Saturday Night Live ran a sketch about politicians saying different things to different audiences. Kate McKinnon played a candidate who shape-shifted depending on who was in the room. It was brilliant. It was devastating. And it was ten years ago.
I laughed out loud when I first saw that sketch. But even as I laughed, I knew it was true.
That sketch aired a decade ago, and nothing has changed — except the technology got better. In 2016, a candidate had to physically go to different rooms and say different things. Now AI does it for them, automatically, to millions of people, all at once. The joke landed because everyone recognized the behavior. The tragedy is that we recognized it and did nothing about it.
If you’re a younger voter seeing that clip for the first time, it hits fresh. If you’re old enough to remember when it aired, it hits different — because you’ve had ten years of watching the problem get worse. Either way, the question is the same: are we going to keep laughing about it, or are we going to fix it?
The Alternative: Democratize the Tool
The best defense against AI manipulation isn’t banning AI. It’s giving everyone the same tool.
Right now, AI is a weapon that campaigns point at voters. But it doesn’t have to be. The same technology that generates personalized propaganda can also cut through it. AI can read a candidate’s website and summarize their actual positions. It can compare two candidates side by side. It can flag contradictions between what a candidate says in one district and what they say in another. It can answer the question you actually have, instead of feeding you the answer a campaign wants you to hear.
You don’t need to take my word for any of this. You can check it yourself, right now. Open any AI assistant — ChatGPT, Claude, Gemini, whatever you prefer — and ask it to compare candidates, check for contradictions, and tell you who explains the how, not just the what. I walk through specific prompts and how to think critically about the results on my How I Use AI page.
I’m asking you to do this to my own website. If AI finds contradictions in my positions, I want to know about them too. A candidate who can’t survive a five-minute AI audit doesn’t deserve your vote. That includes me.
Here’s what this looks like as policy:
- AI transparency in campaigns — require political campaigns to disclose when content is AI-generated or AI-targeted. Voters deserve to know if the message they’re reading was written by a human or assembled by an algorithm based on their consumer profile. Sunlight is the best disinfectant.
- Microtargeting limits for political ads — ban the use of AI to deliver fundamentally different policy messages to different voter segments. Tone and language can vary. The substance cannot. If a candidate supports a policy, every voter should know it. If they oppose it, same rule.
- Public AI tools for voter education — fund open-source, non-partisan AI tools that let voters compare candidates, verify claims, and analyze policy proposals. The technology already exists. What’s missing is a public version that isn’t owned by a company with its own agenda.
- Algorithmic audit requirements — require social media platforms to submit their political content algorithms to independent audits during election seasons. If the algorithm is shaping how people vote, the public has a right to know how it works.
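The microtargeting limit above implies a concrete, mechanical check: across every variant of a message a campaign ships, the policy substance must agree, even if the wording differs. Here is a toy sketch of what such an audit could look like; the stance labels and variant format are invented for illustration, not taken from any real auditing tool.

```python
# Toy audit for the "tone can vary, substance cannot" rule.
# Each shipped message variant is annotated with its policy stances;
# the audit flags any policy where variants disagree on substance.
# Stance labels and data format are hypothetical.

def audit_variants(variants: list[dict]) -> list[str]:
    """Return the policies on which the variants take conflicting stances."""
    flagged = []
    baseline = variants[0]["stances"]
    for policy, stance in baseline.items():
        for v in variants[1:]:
            if v["stances"].get(policy) != stance:
                flagged.append(policy)
                break
    return flagged

# Two variants of the same candidate's message, one per segment.
variants = [
    {"segment": "rural",    "stances": {"gun_policy": "expand_rights"}},
    {"segment": "suburban", "stances": {"gun_policy": "restrict"}},
]

print(audit_variants(variants))  # ['gun_policy'] — substance diverges
```

A compliant campaign, whose variants differ only in tone, would produce an empty list; the example above fails the audit because the two segments are told opposite things about the same policy.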
I use AI in my own campaign. I’m transparent about how — you can read the full breakdown on my How I Use AI page. I use it to draft, to research, to organize. I don’t use it to tell different voters different things. Every page on this website says the same thing to everyone. That’s not a technical limitation. That’s a choice.
The question for every candidate in every race is simple: will your website survive an AI audit? If the answer is no, voters should ask why.