Researchers find what makes AI chatbots politically persuasive

A team of researchers at the UK AI Security Institute, MIT, Stanford, Carnegie Mellon, and other institutions has conducted one of the largest studies on the persuasiveness of AI chatbots. They aimed to understand what makes these systems politically persuasive and whether they can sway public opinion.

The study involved nearly 80,000 participants in the UK, who were paid to engage in short conversations with AI models on a range of political issues. The researchers measured persuasion as the difference between each participant's agreement ratings before and after the conversation.
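To make the measurement concrete, here is a minimal sketch of that pre/post metric. The function name, the data shape, and the 0–100 agreement scale are assumptions for illustration, not details from the study:

```python
def persuasion_shift(ratings):
    """Mean change in agreement (post minus pre) across participants.

    `ratings` holds one (pre, post) agreement score pair per participant,
    assumed here to be on a 0-100 scale; the study's actual survey
    instrument may differ.
    """
    if not ratings:
        return 0.0
    return sum(post - pre for pre, post in ratings) / len(ratings)

# Toy data: three participants rate agreement before and after the chat.
print(persuasion_shift([(40, 52), (55, 60), (70, 68)]))  # prints 5.0
```

A positive value means the conversation moved participants toward the chatbot's position on average; a participant who moved the other way (like the third one here) simply pulls the mean down.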

To the researchers' surprise, the results showed that AI chatbots fell far short of superhuman persuasiveness. Despite having access to vast amounts of information, including known psychological manipulation tactics, the AIs could not shift public opinion dramatically.

The study found that large AI systems such as GPT-4o or Grok-3 beta did have an edge over smaller models, but the advantage was modest. More important than scale was the kind of post-training a model received: models fine-tuned on a small database of successful persuasion dialogues, mimicking the patterns extracted from them, performed better.
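Post-training on curated dialogues of this kind is typically done via supervised fine-tuning. A hedged sketch of what one training record might look like follows; the JSON Lines "messages" schema is a common fine-tuning format, and the content and filename are invented for this example, not taken from the study:

```python
import json

# Illustrative SFT records: each pairs a user's stance with the reply
# that persuaded them, so a model trained on them can mimic those
# dialogue patterns. All content here is made up.
examples = [
    {"messages": [
        {"role": "user",
         "content": "I don't think this policy would ever work."},
        {"role": "assistant",
         "content": "A fair concern. Trials in two regions reported "
                    "measurable gains, which suggests it can work at "
                    "small scale at least."},
    ]},
]

# Serialize to JSON Lines, the usual input format for fine-tuning jobs.
with open("persuasion_sft.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

A fine-tuning job would then consume the JSONL file; the article's point is that a small, curated set of such dialogues mattered more than raw model scale.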

Personalizing messages to a participant's political views also had a measurable but relatively small effect. And when the researchers tested whether more advanced psychological manipulation tactics, such as moral reframing or deep canvassing, increased persuasiveness, they found that these approaches actually made performance significantly worse.

The winning strategy turned out to be simply backing claims with facts and evidence. This approach produced a 9.4% change in agreement ratings on average compared with a control group. The best-performing mainstream model was GPT-4o, at nearly 12%.
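The reported percentages are comparisons against a control group, i.e. a between-group difference in mean agreement. A toy sketch of that calculation, with invented scores chosen only to echo the 9.4 figure:

```python
from statistics import mean

def treatment_effect(treated, control):
    """Difference in mean agreement between the group that chatted with
    the AI and a control group (0-100 agreement scale assumed)."""
    return mean(treated) - mean(control)

# Invented post-conversation agreement scores, not the study's data.
control = [50, 48, 52, 50]      # mean 50.0
treated = [60, 58, 62, 57.6]    # mean 59.4
print(round(treatment_effect(treated, control), 1))  # prints 9.4
```

The control group anchors the comparison: it separates the chatbot's effect from whatever opinion drift would have happened anyway.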

However, the study also raised new concerns. When the researchers increased the information density of the dialogues to make the AIs more persuasive, the systems became less accurate and began misrepresenting facts or fabricating information outright.

The motivation behind the high participant engagement also remains an open question. People were willing to debate politics with chatbots because they were promised payment, but it is unclear how the findings generalize to real-world contexts where no financial incentive exists.

Overall, the study suggests that AI chatbots are not as persuasive as previously assumed, and that their sway over opinion is modest compared with other forms of influence.