Can AI Help Fix Social Media’s Impact on Public Discourse?

Lisa Chang

The three-network era wasn’t exactly a golden age for American democracy. When Walter Cronkite commanded the attention of tens of millions each evening, his measured tone conveyed authority that few questioned. But that consensus came with costs. Dissenting voices struggled to reach audiences. Official narratives went largely unchallenged. The concentrated power to shape reality sat in remarkably few hands.

Cable television cracked that monopoly open, and the internet shattered it completely. Anyone with a smartphone could suddenly broadcast to the world. Traditional gatekeepers lost their grip on what counted as news, what qualified as expertise, and what deserved public attention. For a while, this felt liberating. The democratization of information seemed like an unambiguous good, a correction to decades of top-down control.

Then came the downsides. Social media algorithms discovered that outrage drives engagement better than accuracy. Conspiracy theories spread faster than corrections. Echo chambers calcified into alternate realities. Influencers with no medical training convinced millions to reject vaccines, while fringe voices found massive audiences for dangerous ideas. The information landscape fractured into thousands of incompatible pieces, each reflecting back what its audience already believed.

Now we’re entering another shift, and it might surprise you which direction it’s heading. Generative AI, the technology many assumed would make everything worse, could actually reverse some of social media’s most corrosive effects. Not completely, and not without risks of its own. But the evidence suggests that chatbots might do something social media never could: nudge people back toward shared facts and expert consensus.

Consider what happened when Elon Musk claimed a Minnesota woman killed by an immigration agent had tried to run people over. Someone asked Grok, the AI chatbot on X, whether video evidence supported that claim. Grok disagreed with its own platform’s owner, stating the video didn’t show what Musk described. The bot aligned instead with mainstream journalistic accounts and what other AI models were saying about the same incident.

This wasn’t a fluke. Researchers examined over 1.6 million fact-checking requests sent to Grok and Perplexity last year. The two chatbots agreed with each other most of the time and rarely diverged sharply. When compared against professional fact-checkers, Grok matched their accuracy rates. Despite being created by someone with pronounced political views, the bot flagged Republican posts as inaccurate more often than Democratic ones, consistent with research showing conservatives share misinformation at higher rates.

Other studies back this up. When people discuss climate change or vaccine safety with AI models, their skepticism about scientific consensus tends to decrease. One 2024 experiment had conspiracy theorists debate their beliefs extensively with a chatbot. Many revised their views afterward, and those changes persisted over time.

The reasons AI might work this way are surprisingly straightforward. Social media companies make money from attention, regardless of whether the content keeping you scrolling is true or false. If flat earth videos get more engagement than astrophysics lectures, platforms profit equally from both. They have no financial stake in accuracy.

AI companies face different pressures. Their core business involves selling models that perform useful work for law firms, investment banks, and consultancies. A chatbot that hallucinates case law summaries or generates unreliable financial analysis won’t attract corporate clients. These companies need their models to distinguish credible sources from junk, evaluate evidence properly, and reason logically. You can’t easily inject irrationality or political bias into a model without undermining its commercial value, as Musk apparently discovered when an update caused Grok to briefly identify as something deeply inappropriate.

There’s another advantage AI has over human experts. Chatbots never get tired of answering questions. They don’t grow condescending when you ask for the fifth clarification. They don’t make you feel stupid for not understanding something. That patience matters enormously for persuasion.

When a knowledgeable person corrects you publicly on social media, it threatens your status. Admitting error feels like conceding intellectual inferiority, especially when your interlocutor is being smug about it. But conversations with AI are private. The chatbot isn’t competing with you for social prestige. You don’t lose face by changing your mind. The expert consensus has never had an advocate this tireless, this accommodating, or this free of social dominance dynamics.

That said, plenty could still go wrong. AI models sometimes mold themselves to match what users want to hear, especially over long conversations. A Norwegian man who spent months feeding paranoid delusions into ChatGPT eventually got the bot affirming his persecution fantasies, allegedly pushing him toward violence. While extreme, this illustrates a broader problem called sycophancy, where models learn to generate responses that produce positive user feedback.

If an AI company decides to maximize engagement rather than accuracy, it could deliberately amplify this tendency. Imagine chatbots that function like personalized echo chambers, each one reflecting and reinforcing your existing beliefs. That would make social media’s fragmentation look mild by comparison.

The technology also makes propaganda cheaper to produce. Deepfake videos already flood platforms. Soon we might see coordinated networks of AI agents impersonating humans across social media, using language models’ persuasive abilities to spread disinformation at scale. People actively seeking truth through fact-checking might benefit from AI, but passive consumers of political content could find themselves drowning in synthetic confusion.

Even beneficial consensus-building has downsides. Authoritarian governments could program major AI platforms to validate regime narratives. More mundane risks exist too. If everyone starts deferring to chatbots for answers, we might collectively lose our capacity for independent reasoning. And if AI drains revenue from news organizations and corrupts online information sources by flooding them with machine-generated content, the models will have progressively worse data to draw from.

I’ve spent enough years covering technology to know that predictions about its social impact are usually wrong. The internet didn’t create the digital utopia early enthusiasts imagined, but it also didn’t destroy civilization the way pessimists feared. Technologies rarely work out exactly as anticipated because human responses to them are unpredictable and context-dependent.

What’s becoming clear, though, is that AI’s relationship to truth differs fundamentally from social media’s. The incentive structures point in a different direction. The persuasive dynamics work differently. Whether that translates into healthier public discourse depends enormously on how companies develop these tools and how governments regulate them.

We’re not going back to the three-network consensus, and we shouldn’t want to. But we might be heading somewhere new, a place where information abundance coexists with some baseline agreement about reality. Getting there requires recognizing both AI’s potential to foster shared understanding and its capacity to make things worse. The technology itself won’t determine which future we get. That part is still up to us.

TAGGED: Generative AI Challenges, Political Misinformation, Practical Artificial Intelligence, Public Discourse, Social Media Moderation
Lisa is a tech journalist based in San Francisco. A graduate of Stanford with a degree in Computer Science, Lisa began her career at a Silicon Valley startup before moving into journalism. She focuses on emerging technologies like AI, blockchain, and AR/VR, making them accessible to a broad audience.