
Campaign Trend Podcast
The Campaign Trend Podcast brings you into conversation with professionals who drive the business of Politics.
Hosted by Eric Wilson, Executive Director of the Center for Campaign Innovation.
Visit CampaignTrend.com/podcast for more.
Unstructured Data, Structured Insights: AI Tools for Campaign Analysis
Eric Wilson discusses groundbreaking research from the Center for Campaign Innovation on using AI to analyze open-ended survey responses. In this special episode, producer Adam Belmar interviews Eric about testing four AI models—ChatGPT, Claude, Grok, and Gemini—to code voter responses in a battleground congressional district. Discover how AI can revolutionize campaign research by processing qualitative voter feedback faster and more affordably, plus practical implementation strategies for political professionals.
Subscribe on YouTube:
Visit the Campaign Trend Website:
Follow us on X
Follow us on LinkedIn
Subscribe to our Newsletter
Become a Campaign Trend Insider
Eric Wilson (00:00):
What if we take off this restriction, what are people going to say is their most important issue? Welcome to the Campaign Trend Podcast, where you are joining in on a conversation with the entrepreneurs, operatives, and experts who make professional politics happen. I'm your host, Eric Wilson. Today's going to be a little bit different. The Center for Campaign Innovation, where I'm the executive director, just published new research on using artificial intelligence to analyze the open-ended survey responses from a recent poll we did in a key congressional battleground. And I thought it would be interesting to walk through our findings and what they mean for campaigns. And so I called in my good friend Adam Belmar from Advocacy Content Kitchen, who produces this podcast, to turn the tables, get behind the microphone, and ask me about the research. So Adam, thanks for jumping on today and being on the show.
Adam Belmar (00:53):
Always happy to help out, Eric, and I love this research. It caught my attention right away. It certainly tackles a problem, a real problem that campaigns face every day. So let's start with the basics. What made you want to tackle survey analysis with AI?
Eric Wilson (01:08):
I think one of the things that frustrates me the most about the conversation about politics and AI is that it's always pessimistic. And I'm, in general, very optimistic when it comes to technology and prefer to see the good side of it. Most of the conversation we hear is about deepfakes and fake voices and synthetic media and all of the bad things that could happen. People really just kind of have this 2001: A Space Odyssey or Terminator sort of vision of a catastrophic future because of AI. I think if we properly view it in its role as a tool that gives us leverage and helps us be more effective, it can actually become a really great equalizer and make campaigns more responsive to voters. And so I got this idea. We were going to go do a poll for a project, and I said, hey, instead of asking most important issue as a multiple choice question, why don't we ask it as an open end?
(02:01):
And the pollster said, oh, well, that's going to cost extra money. And so I won the battle on that. And so we got these responses back, and I immediately turned to AI to analyze them, and then realized that I needed to document what I was doing so I could share it with other people. And I think one of the possibilities that is unlocked with AI is getting more nuance from our data. So think about your Quinnipiac poll or New York Times poll, any poll. They're going to say, okay, what are your most important issues? And there's the principle that anything you measure, you're affecting. And so by choosing the six options plus other, you are deciding what's not on the list and kind of leading the witness or shifting the frame a little bit. And so it was really interesting to say, okay, well, what if we take off this restriction? What are people going to say is their most important issue? And we probably could have guessed a few of them, but I was surprised to see some topics emerge that were not necessarily something that a pollster would put on a multiple choice answer.
Adam Belmar (03:07):
Absolutely. And I mean evaluating those answers presents a whole lot of other challenges, but when you don't do that, you are more or less leading the witness.
Eric Wilson (03:17):
That's right. And if I sort of say, hey, Adam, what's your favorite ice cream, chocolate or vanilla? I don't really give you room to say, oh, I like Rocky Road. And so that's what's really fascinating. If I say, what issue is most important to you facing Congress today, and one of the choices I give you is abortion, and you say abortion's the most important issue facing Congress today, I don't know if you're pro-life or pro-choice, or if there's some other issue that you would've said had I given you the opportunity. And so one of the really interesting things in this data, before we ran it through AI, was that we were seeing lots of mentions of the big beautiful bill. We were in the field at the time that this legislation was being debated, and I personally was surprised to see how well voters knew the actual specifics of the legislation, even as it was really hard to know the specifics of it as it was going through Congress. But people knew it was called the Big Beautiful Bill. They were either for it or against it. And that was a sort of nuance, I would not have included big beautiful bill as one of the multiple choices, but it emerged as a topic that voters cared about.
Adam Belmar (04:27):
And this thing got hammered home in terms of its naming, but give everybody listening an idea of exactly what you tested.
Eric Wilson (04:36):
Yeah. So we asked all your sort of typical poll questions: what do you think about the job the president's doing, Republicans in Congress, things like that. But instead of giving people a multiple choice on what issue do you think Congress should focus on the most, we said, tell us what you think. And so people could either respond to that over the phone or on a written survey. This is what we call a multimodal survey, where it's both live interviews and online, and then we get back what are called the verbatims. And they're not always grammatically correct. There are lots of spelling errors. It's very messy, unstructured data, as opposed to put in the year that you were born, who did you vote for in the last election, those sorts of things, where it's structured and we know what to expect. In this case, it was more free-flowing data. And so what we did is we then ran it through AI chatbots, large language models, and asked them to perform analysis and identify the topics and how they would categorize the responses. This is a very standard exercise that a pollster would have their staff do. It's called creating a codebook and then coding the responses. Really helpful analysis, but hugely time consuming, as you can imagine. Fortunately, with these AI tools, it happened a lot quicker. There was a little bit of learning involved, though.
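For readers who want to picture the codebook-and-coding exercise Eric describes, here is a minimal sketch in Python. It is purely illustrative: in the actual research an LLM assigned the codes, whereas here a hypothetical keyword codebook stands in for the model so the mechanics are runnable.

```python
# Illustrative sketch of coding open-ended verbatims against a codebook.
# The codes and keywords below are hypothetical examples, not the
# research's actual codebook; an LLM did this matching in the study.

CODEBOOK = {
    "economy": ["economy", "inflation", "prices", "jobs"],
    "immigration": ["immigration", "border"],
    "taxes": ["tax", "taxes", "taxation"],
}

def code_response(verbatim: str) -> list[str]:
    """Return every code whose keywords appear in the (messy) verbatim."""
    text = verbatim.lower()
    return [code for code, keywords in CODEBOOK.items()
            if any(kw in text for kw in keywords)]

def code_survey(verbatims: list[str]) -> dict[str, list[str]]:
    """Map each verbatim to its assigned codes; [] means 'uncoded'."""
    return {v: code_response(v) for v in verbatims}

responses = [
    "inflation and grocery prices are killing us",
    "secure the border!!",
    "to many taxes",  # spelling errors are typical of verbatims
]
coded = code_survey(responses)
```

Responses matching no code stay uncoded, which is exactly the pile the follow-up passes described later in the episode go back to work on.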
Adam Belmar (05:57):
Do I understand it correctly that in performing this analysis, you started by having these LLMs or these models write their own prompts?
Eric Wilson (06:06):
Well, so that's where we ended up. I mentioned this in the report. We did a little bit of pre-testing, just sort of, hey, help me out here. And the prompt that I wrote was okay, but then I remembered that one of the things that you can do with AI is tell it what your problem is and ask it to come up with a solution. In fact, this is a best practice for using AI, and it's part of a process called prompt engineering. And so we went to all four of the major models, Claude, ChatGPT, Grok, and Gemini, and said, here's what we're trying to do. Will you write a prompt? And it was really interesting to see all the differences in what they came up with.
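The "ask the model to write the prompt" step amounts to a meta-prompt. Here is a hedged sketch of what such a request might look like; the wording and the `build_meta_prompt` helper are illustrative assumptions, not the exact text used in the research.

```python
# Hypothetical meta-prompt: describe the task and ask the model to
# engineer the working prompt itself.

def build_meta_prompt(task: str, n_responses: int) -> str:
    """Assemble a request asking an AI model to write the analysis prompt."""
    return (
        "You are an expert survey methodologist.\n"
        f"I have {n_responses} open-ended survey responses and need to: {task}.\n"
        "Write the best possible prompt I could give an AI assistant to do "
        "this reliably. Include step-by-step instructions and, if helpful, "
        "an example input and output."
    )

meta = build_meta_prompt(
    task="create a codebook of topics and code every response against it",
    n_responses=432,
)
```

The same meta-prompt would then be sent to each of the four models, and each model's answer becomes the prompt actually used for the coding run.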
Adam Belmar (06:50):
Well, we're certainly sitting here in mid-August. We've got ChatGPT 5 being castigated by some online. And it is obvious that all of these bespoke LLMs have their different approaches. What kind of different results did that lead to for you?
Eric Wilson (07:08):
Yeah, so it was really interesting. It was sort of like working on a group project, where each individual brings their own perspective. And so each model kind of picked up on certain things from my instruction. For example, Claude had pretty thorough instructions outlining step by step what to do, and even included example input and output. Grok, on the other hand, was really adamant about making sure you code as many responses as possible; it really latched onto that. Whereas Gemini gave it a role, which is one of the best practices I teach in our AI training: give AI a job and it'll be more effective. So it was really interesting to see all four models come back with something different. And what was really fascinating is what happened when we ran the data through those tools using the prompts that they developed.
Adam Belmar (08:16):
So that's a pretty wide range. Is there some way for you to understand or for us to understand how we account for the differences?
Eric Wilson (08:24):
One of the things that we did, and I won't go into the technicalities of it, is some analysis where we identify numerically some of the similarities, and we have some different ways of quantifying the semantic differences there. But I think probably the most illustrative example is, well, what happened when we tested them? And so we used each prompt in Grok and ChatGPT and tested them. And one thing we saw is that they all sort of converged around similar topics. There were some topics that appeared multiple times, like taxation and immigration, and there were some topics that were only surfaced one time in each model, so they were totally unique. But on the ones that you would expect, like immigration, the economy, taxes: some models identified border control in addition to immigration. Again, that's not something that we would've coded in a survey, but the fact that it emerged as one of the top 10 responses was noteworthy. Eventually we were able to code those on our own with a human being and saw that we got similar results with things like immigration, the economy, budget, national debt, and the big beautiful bill legislation.
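The report's numeric comparison is not detailed in the conversation, so as a stand-in, cosine similarity over simple word counts shows the general idea of quantifying how alike two prompts are. A real analysis would more likely use embedding vectors from a language model, which capture far more than shared vocabulary; this sketch only illustrates the shape of the calculation.

```python
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity over bag-of-words counts: a crude stand-in for
    embedding-based semantic comparison. 1.0 means identical word
    distributions, 0.0 means no words in common."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical fragments standing in for two model-written prompts:
prompt_claude = "code each response step by step with example input and output"
prompt_grok = "code as many responses as possible step by step"
score = cosine_similarity(prompt_claude, prompt_grok)
```

A score near 1 flags prompts that converged on similar instructions; a score near 0 flags prompts that diverged, which matches the kind of pairwise comparison described here.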
Adam Belmar (09:51):
When I started using AI in some of my workflows, I'm a big listener to this show as well as the podcast producer. I realized right away, and you had mentioned this, that there are some technical challenges that we can all run into. There are some limitations. Was that something that you confronted as you did this research?
Eric Wilson (10:14):
I did, and it required a lot of patience. So I think probably one of the things that we wanted to show is, yes, this is going to be a time saver, but we've got to do some work on the front end to figure out the best way to do it. So my hope is that someone reads this report, and we shared all of the prompts on our website, and then when they have their survey, they can just kind of copy off of our homework and get there quickly. A couple of things that we faced: one is what's technically called context window exhaustion. If you've ever felt like your AI is running out of steam, that's it. We were giving it
Adam Belmar (10:55):
Been there, actually. I know exactly what you're talking about, and I bet our listeners do too.
Eric Wilson (10:58):
Yeah, 432 responses to start, and then you ask it to do things over and over again, and it starts to slow down. Some of that's just you've got to pay for more juice in the system, and I have the super duper ChatGPT subscription, which I don't have for Grok, for example. But then we also had an issue with Claude specifically around this, where it kept wanting us to run our own Python script, because doing the analysis it wanted to do would be too much for its context window. And so again, with more patience on my part and more work, I'm sure I could have gotten it there, but it was very just like, no, no, no, you need to go off and do this in Python. It was really resistant to doing it natively, if you will. Similarly with Gemini from Google: it had a really good prompt, but there was an issue where it kept telling me to download a file, and you click on the file and it says not found. So again, those are just systems that I don't use as often. When I'm doing this training, I often tell people it's sort of like Coke versus Pepsi or chocolate versus vanilla; it really is a matter of personal preference. And we kind of discovered that here, where there was some convergence around the topics, but one tool seemed to be more reliable for the job than another.
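Context window exhaustion is commonly worked around by batching: splitting the verbatims into chunks that each fit comfortably and processing chunk by chunk. A minimal sketch, assuming a rough character budget in place of a real tokenizer (production pipelines would size chunks with the model's own tokenizer):

```python
# Minimal batching sketch for long verbatim lists. A character budget
# approximates token counting; this is an illustration, not the
# workflow used in the research.

def batch_responses(responses: list[str], char_budget: int = 4000) -> list[list[str]]:
    """Greedily pack responses into batches whose total length
    stays within char_budget, preserving order."""
    batches: list[list[str]] = []
    current: list[str] = []
    used = 0
    for r in responses:
        if current and used + len(r) > char_budget:
            batches.append(current)  # flush the full batch
            current, used = [], 0
        current.append(r)
        used += len(r)
    if current:
        batches.append(current)
    return batches

# Toy data sized like the survey: 432 responses of ~100 characters each.
verbatims = [f"response {i}: " + "x" * 90 for i in range(432)]
batches = batch_responses(verbatims, char_budget=1000)
```

Each batch can then be coded in a fresh conversation against the same codebook, so no single request has to hold all 432 responses at once.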
Adam Belmar (12:26):
This is also fascinating, and it makes me wonder, you mentioned at the beginning that the team members didn't have any prior coding experience. Did the AI compare favorably to the human analysis in the end?
Eric Wilson (12:39):
Absolutely. And here's the key: we weren't done there. We then ran the analysis, took the best parts of all four of those prompts, and created one new mega prompt, and then repeated the whole process with ChatGPT. And so through two rounds, we got to our final product, where we had 21 different codes that meet our criteria. And that was obviously more than a human being would typically put in a codebook, but we did do a final quality check, and there were some issues where it maybe missed something, or added a code that shouldn't have been there, or maybe there was a code that should have been added. And so you definitely want to have a human being in the loop. And in this case, it was really helpful to be able to have team members go in and verify what we were doing.
Adam Belmar (13:29):
I love the idea of context windows, because context is everything, and judging it is a supremely human thing to do, even with AI such as it is. Okay, so for everybody who's listening: people want to know, did we gain a lot of insight here into how we go about doing traditional multiple choice polling?
Eric Wilson (13:47):
Yeah, I think so. But let's start with some more general advice, which is that AI, specifically LLMs, is really useful for managing unstructured data. I think on campaigns we do a great job when something is structured: we know this is a cell phone column, we know this is an email column, we know what to do with that. With the unstructured data, things like responses to our text messages or verbatims from surveys, we haven't really had the analytical throughput to make use of it. Now we do. I think a second general takeaway is you've got to be patient with AI. It will be a time saver after you get used to using it. And then specifically with polling, if I were on a race right now, I would be looking at, well, how can we get more open-ended feedback from voters? Because if we just give them the typical five to seven choices of topics, you're going to keep getting the same answers back. What you may find is that there is a lane there for people who are really concerned about government corruption. That's not something that we would've put in our multiple choice poll. One of the hot topics in any election is abortion. It did not come up as often as you would think when you ask people what's the most important issue. But if you gave them the choice, they might pick it.
Adam Belmar (15:21):
That is very unsettling, I feel, because it tells me that we might have a situation here where polling is artificially inflating the importance of certain voter priorities just based on the way we're asking the questions. And I feel like questioning these numbers is something that's already out there. We've seen polls hit and miss. But that kind of insight could significantly change the way we design polls, or maybe the way that we should.
Eric Wilson (15:45):
Yeah, I think so. And so much of the way campaigns are run is sort of this red versus blue framing. And so one of the tough things with polling is we go out and ask, well, what do you think about this thing? And, well, maybe they had never even thought about that before we asked them to have an opinion on it. And so this gives us a little bit of a second dimension, if you will: okay, we know this issue matters to people, but not as much as these five other issues.
Adam Belmar (16:18):
So if we're looking at campaign trends here, though, campaigns are always going to be looking at cost and benefit. What kind of considerations are there on that side?
Eric Wilson (16:27):
So obviously asking an open-ended question takes longer, because you've got to give people time to respond. And with polling, we're really trying to shorten the length of the survey so we can get more people to respond. I think there are a couple of things here. One, for some surveys, the insights are going to be worth the increased costs. You don't have to do this for every survey, because now we have the list of hunches to then go out and ask about; we did some discovery work. One thing that I think would be worthwhile, and we see this with some other media polling, is that typically a campaign won't take data from an incomplete survey. So let's say we go out and we poll a thousand people, but only 800 of them complete the survey. Well, 200 people gave us some degree of information, and that could include some verbatims. So maybe you can turn something useful out of your sawdust. I think it certainly opens up polling to richer insights, because the whole industry of public opinion research is just getting harder and harder: we can't get people on the phone, and people don't want to spend the time answering polling questions.
Adam Belmar (17:47):
These are the same challenges confronting everyone, and this wouldn't be the first time that you found something unexpected. So I'm wondering, as you were conducting the research and working with the LLMs, was there one thing that stood out for you, that just really smacked you in the face and said, I did not see that coming?
Eric Wilson (18:06):
Yeah, I went into this thinking, okay, we're going to come up with a prompt and it's going to do the job right away. But what we eventually tried is to say, hey, can you keep going? Tell me how many responses don't have a code. And it said, okay, there are 109 responses. And then I said, okay, well, look at those 109 responses and figure out if there are any codes that meet our criteria. The criteria really was just that a topic had to be mentioned at least three times. In that pass it found eight new codes and then recoded all of the responses with them. And I said, well, why don't you do it one more time? And then it added another four categories and again updated its count. So each pass through, it got a little bit better. Eventually we found that it was just coming up with new codes that didn't meet our criteria; maybe they only applied to one or two responses. And so we stopped there, after two follow-ups.
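The iterative passes Eric describes, find the uncoded responses, surface candidate codes, keep only those meeting the three-mention criterion, recode, and repeat until nothing qualifies, can be sketched as a loop. Simple word counting stands in for the LLM here; only the three-mention threshold and the stop condition come from the episode, everything else is an illustrative assumption.

```python
from collections import Counter

MIN_MENTIONS = 3  # the episode's criterion: a code needs >= 3 mentions

def iterative_coding(responses: list[str]) -> dict[str, int]:
    """Repeatedly promote any word appearing in >= MIN_MENTIONS uncoded
    responses to a code, then recode; stop when no candidate qualifies.
    A toy word-count stand-in for the LLM follow-up passes."""
    codes: dict[str, int] = {}
    while True:
        uncoded = [r for r in responses
                   if not any(c in r.lower() for c in codes)]
        candidates = Counter(w for r in uncoded
                             for w in set(r.lower().split()))
        new = [w for w, n in candidates.items()
               if n >= MIN_MENTIONS and w not in codes]
        if not new:
            break  # nothing left meets the criterion, so stop the passes
        for w in new:
            codes[w] = candidates[w]
    return codes

# Toy verbatims: "border" and "inflation" each clear the threshold,
# "term limits" does not, so it stays uncoded.
codes = iterative_coding([
    "the border", "border wall", "border now",
    "inflation", "inflation bad", "high inflation",
    "term limits",
])
```

The loop terminates for the same reason the real process did: each pass either codes more responses or finds no qualifying candidates, at which point you stop and hand the remainder to a human.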
Adam Belmar (19:00):
So this has applications beyond just polling, doesn't it, Eric?
Eric Wilson (19:04):
It does. So I think it really unlocks any qualitative unstructured data that a campaign gets. So things like the messages that you're getting to your voicemail, if you're an elected official, I think it would be great to get those transcripts run through an LLM to figure out if there's a pattern. Things like talking to people at the doors, you spot check the notes from volunteers. But what if you take a week's worth of data? Maybe you're going to find a pattern there from the AI that as a human being, we wouldn't have the vantage point to see because we just can't hold all of that data in our working memory.
Adam Belmar (19:44):
And something else that I think is interesting as a writer and somebody who's worked on crafting messages, sometimes the most common language that you can identify by hearing it at the door from a whole lot of different people, perhaps that fundamental message is the same, but the words around it are slightly different. Coalescing into a message that resonates with people sometimes means being able to listen clearly enough to hear that you are hearing the same things from many different people across geographic area or even the political spectrum with similar views. So I think getting more unstructured data could be a real boon to understanding what people are feeling.
Eric Wilson (20:22):
And I think that's one of the frustrations that I've had with how we get data from voters over the last several years. I mean, the data that's available to us is richer than ever before. We've got social media data, we've got people's own posts, the responses they give to us as campaigns. We haven't really figured out how to synthesize that into structured data in the way that we have polls and getting sample sizes and things like that. And so I think AI gives us a window to alternative sources of data where we might get new insights.
Adam Belmar (20:51):
So what cautions do you have for campaigners out there who are thinking, maybe I'm going to try this on for size?
Eric Wilson (20:56):
Yeah. Well, I think start small. Obviously AI is kind of the shiny object right now, but don't forget what the fundamentals of campaigns are. I wouldn't go off and totally change my campaign strategy because of something that the AI surfaced. Take it with the sort of grain of salt that you would take advice from a volunteer or an unpaid consultant. But it is another data point, and I like the fact that it is data. We've just got to trust our instincts and what the ground truth is for a campaign.
Adam Belmar (21:31):
Well, I guess the goal isn't to replace that human judgment, right? We've got to augment it. What's next for this research, Eric?
Eric Wilson (21:37):
As I mentioned earlier, all of the prompts are available on our website. So if people want to go try this out themselves with their own data, they can go do that, and I would encourage them to try it. And really, we're just trying to show some of the positive use cases for AI that aren't scary, because there's a conversation happening at the state level right now about whether AI can be used in politics. And we definitely take the view that it can, and it should be. There are some really beneficial angles here, like getting a better understanding of what voters want, making campaigns more responsive to voters, which is a good thing. So we'll keep looking for ways to use AI to improve research, campaign technology, things like this. But this is an exciting first foray for us.
Adam Belmar (22:23):
No doubt. And for people who are just listening to this on the podcast, where do they go to get that full report?
Eric Wilson (22:29):
So we'll have the links in the show notes, but it's at the Center for Campaign Innovation website. That's campaigninnovation.org. All the prompts are there. You can see detailed charts, all of the codes that the AI came up with, and comparisons between the different models, and you can go check it out there. So thanks, Adam, for joining us today as my co-host. And for those of you who are listening to the podcast, we always ask that you like and subscribe wherever you get podcasts. If it made you smarter and you learned something, we ask that you share it with a friend or colleague. You look smarter in the process, and more people hear about our show. Don't forget to sign up for our newsletter to get the very latest insights and best practices. With that, I'll say thanks for listening. We'll see you next time.
Adam Belmar (23:13):
The Campaign Trend Podcast is produced by Advocacy Content Kitchen, a media production studio.