AI Is Reshaping Social Media and Online Trust

AI content is everywhere, crowding out authentic human content and transforming the social media landscape. Its influence now extends beyond social platforms into the political arena, the broader internet, and even the global economy.
So, does this mean that social media is approaching its death? Well, let’s find out.
From Human Connection to Synthetic Feeds

Early forms of online social interaction emerged more than four decades ago, beginning with bulletin board systems (BBS), Usenet, and early forums in the late 1970s and 1980s. However, the first recognizable social networking platforms did not appear until the mid-to-late 1990s, laying the groundwork for what would later become modern social media.
The beginning of the 21st century marked the widespread popularity of social media with sites like Friendster, MySpace, and Facebook. This modern era of social media enabled instant global connection, allowing people to maintain relationships, access real-time news, and build communities around shared interests.
But now, just a couple of decades later, the face of social media is changing with the proliferation of AI content.
Traditionally, the core values of social media have been connection and expression, but these are increasingly being displaced by automated mass production. With feeds dominated by synthetic noise, disengagement and distrust among users are growing fast.
According to Instagram’s top executive, a carefully curated grid is now a thing of the past.
In a message on Threads, Instagram head Adam Mosseri warned that the rise of AI has killed off the social media site’s polished aesthetic. All that makeup, skin smoothing, beautiful landscapes, and high-contrast photography is now “dead,” as “people largely stopped sharing personal moments to feed years ago,” he said.
Instead, unpolished, “unflattering candids” are being shared via direct messages.
The pervasiveness of AI images means creators have to shift away from curated grids and professional-style photography to a “more raw aesthetic,” with people wanting “content that feels real” rather than flattering imagery, which is “cheap to produce and boring to consume.”
As AI tools mature, the range of aesthetics they can produce will expand, Mosseri said, noting that social media feeds are already starting to fill up with “synthetic everything,” which means platforms need to evolve to handle the flood of AI-generated content.
Meanwhile, Substack co-founder and CEO Chris Best said late last year that AI-generated “slop” could flood the internet, which “keeps dumb people clicking.”
“Slop” was chosen as the word of the year for 2025 by the US dictionary Merriam-Webster, which defines it as “digital content of low quality that is produced, usually in quantity, by means of artificial intelligence.”
The Rise of the Bot Internet
| Signal | Human Content | AI-Generated Content |
|---|---|---|
| Production Cost | High (time, effort) | Near-zero marginal cost |
| Posting Frequency | Irregular | High-volume, automated |
| Emotional Context | Situational, lived experience | Simulated or generic |
| Trust Signal | Identity, history, relationships | Labels, metadata, disclosure |
| Engagement Longevity | Lower volume, higher meaning | High volume, rapid decay |
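These signals are qualitative, but they hint at how a very simple triage heuristic might work. The toy sketch below uses invented thresholds and field names, and it is not any platform’s actual detection logic: it simply scores an account against the table’s signals (posting volume, a production-cost proxy, disclosure labels, and account history).

```python
# Toy triage heuristic built from the signals in the table above.
# All thresholds and field names are invented for illustration; real platforms
# combine far richer behavioral, network, and provenance features.
from dataclasses import dataclass

@dataclass
class AccountStats:
    posts_per_day: float         # posting frequency
    avg_minutes_per_post: float  # proxy for production cost
    has_ai_label: bool           # disclosure metadata
    account_age_days: int        # identity/history signal

def synthetic_score(a: AccountStats) -> float:
    """Return a 0..1 score where higher means 'more likely an AI-generated feed'."""
    score = 0.0
    if a.posts_per_day > 20:           # high-volume, automated posting
        score += 0.35
    if a.avg_minutes_per_post < 2:     # near-zero marginal production cost
        score += 0.30
    if a.has_ai_label:                 # self-disclosed synthetic content
        score += 0.25
    if a.account_age_days < 30:        # little identity or history to anchor trust
        score += 0.10
    return min(score, 1.0)

print(synthetic_score(AccountStats(50, 0.5, True, 7)))      # 1.0: flag for review
print(synthetic_score(AccountStats(0.3, 45, False, 2000)))  # 0.0: looks human
```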
Data shows that almost one-third of all internet traffic comes from bots, automated software programs that mimic human behavior.
This has given rise to the dead internet theory, which holds that the internet primarily consists of bot activity and algorithmically generated content designed to control the global population.
Reddit (RDDT -0.73%) co-founder Alexis Ohanian believes there’s some truth to the idea, and that a new era of social media will emerge because of it. “Having proof of life, like live viewers and live content, is really f–king valuable to hold attention,” he said on a podcast.
OpenAI co-founder and CEO Sam Altman shared a similar view on X last year, posting:
“I never took the dead internet theory that seriously, but it seems like there are really a lot of LLM-run Twitter accounts now.”
Bot activity on X (formerly known as Twitter) is actually getting worse. It is estimated that about 10% of X accounts are bots, totaling millions of users.
Meanwhile, over 20% of the videos that YouTube’s algorithm shows new users have been found to be “AI slop.”
In fact, among the 15,000 most popular YouTube channels surveyed, 278 contain only AI-generated content, and together they have amassed 221 million subscribers and more than 63 billion views, generating about $117 million in revenue each year.
This isn’t limited to just one or a handful of platforms; it is true of the entire internet.
As the 2025 Imperva Bad Bot Report noted, automated traffic has overtaken human traffic, accounting for 51% of all internet traffic, with 20% of that coming from “bad bots” engaged in malicious activity.
A few months ago, Marshall Miller, the senior director of product at the Wikimedia Foundation, the nonprofit that runs Wikipedia, also noted that the website’s human traffic has decreased by about 8% as people change how they search for information online.
The drop was revealed after the Foundation changed how it differentiates between human and bot traffic to better understand real readership and limit third-party bots scraping its data for commercial search and AI tools.
“We believe that these declines reflect the impact of generative AI and social media on how people seek information, especially with search engines providing answers directly to searchers, often based on Wikipedia content,” wrote Miller, adding that the drop wasn’t really a surprise, as search engines increasingly use AI to surface answers directly on results pages and younger users turn to TikTok and YouTube instead.
But this means the community that writes and edits Wikipedia’s content could shrink. As such, the platform is urging search engines, social platforms, LLMs, and AI chatbots to help drive more traffic back to the site.
How Social Media Platforms Are Amplifying AI Content
Social media platforms have been swamped with AI-generated content thanks to explosive growth in the ecosystem of generative tools.
For instance, tools like Sora, Midjourney, Pika, Runway, and Stable Diffusion have made it extremely easy to create images and videos of almost anything.
A growing number of services even make it easier for people to boost views and cash in on creator-rewards programs or affiliate links. For a fee, they not only auto-generate content but also post it directly to platforms like YouTube, churning out more than 100 videos a month from a single account. These services aim to automate the entire process of being a content creator.
But that’s not all. Researchers have found that platforms have actually been boosting AI-generated posts. Platforms such as TikTok and Meta’s (META -2.65%) Instagram even offer native AI features that allow users to generate visuals and scripts or remix existing media without external software.
Meta, in particular, has embraced AI-generated content with gusto, even creating its own slop. In 2023, the company introduced AI-powered profiles and announced plans to populate its platforms with AI characters. But after significant user backlash, Meta killed off the AI profiles last year.
Instagram, meanwhile, launched an AI studio that enables users to produce custom chatbots, including their own digital versions.
Recently, TikTok revealed there were at least 1.3 billion AI-generated posts on its platform, with more than 100 million pieces of content uploaded every day. For transparency, TikTok labels AI-generated content and even gives users the option to reduce the amount of AI content they see.
But AI Forensics, a Paris-based non-profit, found that fewer than 2% of the accounts it examined carried TikTok’s label for AI content. Some also escaped the platform’s moderation for months, despite posting content that is prohibited by TikTok’s terms of service.
According to the report, more than 350 AI-focused accounts pushed 43,000 posts made with the help of gen AI tools, accumulating 4.5 billion views over a month-long period. The researchers said:
“The blurring line between authentic human and synthetic AI-generated content on the platform is signalling a new turn towards more AI-generated content on users’ feeds.”
Can Authenticity Survive Automation?
Once bustling with real activity, where friends gathered and shared special moments, social networks are now becoming ghost towns, populated by artificial entities simulating human engagement and publishing AI-generated content.
While you may think you can simply block those users and their content, a study found [1] that people cannot reliably distinguish AI from a human, even when they are familiar with the LLM’s subject matter.
The study evaluated how successfully humans detect AI-generated content and found that human-written text was correctly recognized as human only 67% of the time, while GPT-4 was judged to be human 54% of the time.
According to Instagram’s Mosseri, social platforms will only get worse at identifying AI-generated media as the technology improves. One solution could be for camera makers to cryptographically sign photos at the moment of capture to prove they are real.
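That idea is in the spirit of provenance standards such as C2PA Content Credentials. The sketch below is a rough illustration only, not any camera maker’s actual implementation: it signs an image’s bytes with an Ed25519 key at “capture time” so that anyone holding the corresponding public key can check the file has not been altered. It assumes the third-party cryptography package is installed.

```python
# Minimal sketch of capture-time photo signing, assuming the third-party
# "cryptography" package. Illustrative only -- real provenance systems
# (e.g., C2PA Content Credentials) embed signed manifests in the file itself.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In practice the private key would live in the camera's secure hardware.
camera_key = Ed25519PrivateKey.generate()
public_key = camera_key.public_key()  # published by the camera maker

def sign_photo(image_bytes: bytes) -> bytes:
    """Sign the raw image bytes at capture time."""
    return camera_key.sign(image_bytes)

def verify_photo(image_bytes: bytes, signature: bytes) -> bool:
    """Return True only if the bytes match the camera's signature (i.e., unedited)."""
    try:
        public_key.verify(signature, image_bytes)
        return True
    except InvalidSignature:
        return False

photo = b"...raw sensor data..."           # placeholder for a real capture
sig = sign_photo(photo)
print(verify_photo(photo, sig))            # True: untouched original
print(verify_photo(photo + b"edit", sig))  # False: pixels were modified
```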
The solution to mitigating the power of bots, as per Ohanian, is seeing “a next generation of social media emerge that’s verifiably human.”
While most of the internet is now optimized for engagement, which Best compared to “drug addiction” and believes “is going to get supercharged,” he is still hopeful for an alternative future.
“The other purpose of media is culture,” he added. “That is something that people really, really want as well.”
According to Substack’s Best, this very same technology has the potential to enable a “future where there’s way more creative leverage” for independent creators. The real bottleneck for media, he noted, isn’t content, of which there’s “no scarcity,” but attention. “We’ve entered a world where attention is the scarce resource,” he said.
“We have won the war on boredom,” said Best, adding that “there’s a huge scarcity of good content.”
Amidst this, Wikimedia Deutschland launched the Wikidata Embedding Project, which converted about 120 million open data points in Wikidata into a format easier for LLMs to use, helping AI systems access higher-quality data for free and improve their accuracy.
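The project is described here only at a high level, so the sketch below is just an illustration of the general idea rather than the Wikidata Embedding Project’s own pipeline: it embeds a few Wikidata-style statements with the open-source sentence-transformers library (the model choice and sample data are assumptions) and retrieves the statement that best grounds a query, which is roughly how an LLM-facing system would look up higher-quality facts.

```python
# Illustrative sketch of turning structured facts into embeddings for retrieval.
# Assumes the open-source sentence-transformers package and a small sample of
# Wikidata-style statements; this is not the Wikidata Embedding Project's code.
from sentence_transformers import SentenceTransformer, util

statements = [
    "Douglas Adams (Q42): English writer, author of The Hitchhiker's Guide to the Galaxy.",
    "Berlin (Q64): capital and largest city of Germany.",
    "CRISPR (Q412563): family of DNA sequences used for genome editing.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice
corpus_emb = model.encode(statements, convert_to_tensor=True)

query = "Who wrote The Hitchhiker's Guide to the Galaxy?"
query_emb = model.encode(query, convert_to_tensor=True)

# Cosine similarity ranks which statement best grounds the query.
scores = util.cos_sim(query_emb, corpus_emb)[0]
best = int(scores.argmax())
print(statements[best], float(scores[best]))
```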
Regulation, Research, and the Fight for Trust
As AI takes over the internet, researchers have set out to understand its impact. One such study, titled The Impact of Generative AI on Social Media: An Experimental Study [2], examined how AI assistance affects content production and user perceptions, finding that AI tools increase content volume and engagement but decrease perceived quality and authenticity.
Based on their findings, they recommended that social media platforms, policymakers, and other stakeholders design tools with user-focused personalization, context sensitivity, and intuitive user interfaces for an ethical and effective integration of gen AI into social media.
The researchers also recommended transparent disclosure of AI-generated content, as other studies [3][4] have found that this improves trust without harming engagement.
When it comes to regulations, governments around the world have also taken steps to promote and regulate the responsible use of artificial intelligence.
In the European Union (EU), the AI Act focuses on transparency, risk classification, and obligations for high-risk systems.
In the UK, meanwhile, the Online Safety Act (OSA) has given the communications regulator Ofcom broad authority to require platforms to manage harmful or misleading content, though policymakers have called for an upgrade to more effectively tackle AI-driven misinformation.
In the US, President Donald Trump has issued an Executive Order to remove barriers to America’s leadership in AI and published an action plan to secure the country’s dominance in the sector. Last month, he also ordered a national standard meant to let AI companies innovate without cumbersome regulations.
Meanwhile, China’s framework has been one of the most detailed globally in regulating AI content creation and distribution. Its Cyberspace Administration has implemented mandatory labeling for AI-generated content, requiring visible disclaimers and embedded metadata to curb misinformation and fraud.
Some governments are also pushing investigations into platforms over AI-generated disinformation. Most recently, Poland urged the European Commission to probe TikTok over hosting AI content it deemed destabilizing.
“The disclosed content poses a threat to public order, information security, and the integrity of democratic processes in Poland and across the European Union,” said Deputy Digitalization Minister Dariusz Standerski in a letter sent to the Commission. “The nature of the narratives, the manner in which they are distributed, and the use of synthetic audiovisual materials indicate that the platform is failing to comply with the obligations imposed on it,” he added.
For now, the policy landscape for AI remains an emerging issue worldwide, fragmented across jurisdictions and struggling to keep pace with the speed at which generative systems are reshaping information flows, public trust, and democratic discourse.
Investing in AI Infrastructure
In the world of AI, Palantir Technologies Inc. (PLTR -2.65%) stands out for its specialization in AI-driven big data analytics.
It has built four main software platforms: Apollo, Gotham, Foundry, and the Artificial Intelligence Platform (AIP). Apollo is a control layer that harmonizes the delivery of new features, configurations, and security updates to keep critical systems running continuously. Gotham helps identify patterns within datasets, Foundry helps organizations create a central operating system for their data, and AIP enables organizations to put large language models and other AI to work effectively.
Notably, Palantir’s platforms are used by governments and private enterprises to detect misinformation, bot networks, and the distribution of synthetic media. These tools are built to distinguish authentic from inauthentic behavior across complex information ecosystems.
Moreover, AI analytics are increasingly used by organizations and governments to monitor health and safety threats. With a market cap of $400 billion, PLTR shares are currently trading at $174, up over 110% in the past year. It has an EPS (TTM) of $0.43 and a P/E (TTM) of 392.56.
As for its financial position, Palantir announced revenue of $1.181 billion for the third quarter ended September 30, 2025, representing 63% YoY and 18% QoQ growth, achieving a new revenue record in its over twenty-year history. Its profit for the quarter was $476 million.
“We are still at the very start of things. This remains the beginning, the first moment of the first chapter.”
– Co-founder and CEO Alex C. Karp, in a letter to shareholders
Its U.S. revenue, in particular, grew 77% YoY and 20% QoQ to $883 million. This includes $486 million in U.S. government revenue, which jumped 52% YoY and 14% QoQ, and $397 million in U.S. commercial revenue, which surged 121% YoY and 29% QoQ. Karp called the U.S. commercial segment “an absolute juggernaut” that he believes “will become, on its own, one of the most significant business stories of the century in American economic life.”
During this period, Palantir’s customer count grew 45% YoY. Meanwhile, its adjusted free cash flow was $540 million, GAAP earnings per share were $0.18, and adjusted EPS were $0.21. It ended the quarter with $6.4 billion in cash, cash equivalents, and short-term U.S. Treasury securities.
“These results make undeniable the transformational impact of using AIP to compound AI leverage,” noted the company.
For Q4 2025, Palantir expects revenue to be between $1.327 and $1.331 billion and adjusted income from operations to be between $695 and $699 million.
As for the full year 2025, Palantir raised its revenue guidance to $4.396-$4.400 billion and projects at least 104% growth in U.S. commercial revenue to surpass $1.433 billion. Meanwhile, adjusted income from operations guidance has been raised to $2.151-$2.155 billion, and adjusted free cash flow guidance to $1.9-$2.1 billion.
Conclusion
Social media as we know it is clearly coming to an end. It is no longer the place where people actively share their interests, memories, and ideas in an authentic way. What was once built on human connection is now overrun by automated content and bots.
While AI has unlocked unprecedented creative leverage and economic efficiency, it has also eroded the very signals that once made online interaction meaningful.
The future of social media will likely splinter, with one path leading toward hyper-automated feeds optimized for engagement and the other pointing to verifiably human spaces that prioritize culture, credibility, and context over reach.
References
1. Jones, C. R. & Bergen, B. K. “People cannot distinguish GPT-4 from a human in a Turing test.” arXiv preprint arXiv:2405.08007 (2024). https://doi.org/10.48550/arXiv.2405.08007
2. Møller, A. G., Romero, D. M., Jurgens, D. & Aiello, L. M. “The Impact of Generative AI on Social Media: An Experimental Study.” arXiv preprint arXiv:2506.14295 (2025). https://doi.org/10.48550/arXiv.2506.14295
3. Gamage, D., Sewwandi, D., Zhang, M. & Bandara, A. “Labeling Synthetic Content: User Perceptions of Warning Label Designs for AI-generated Content on Social Media.” arXiv preprint arXiv:2503.05711 (2025). https://doi.org/10.48550/arXiv.2503.05711
4. Chen, J., Wang, T., Williams, M., Jordan, N., Shao, M., Zhang, L., & Fussell, S. R. “Examining the Impact of Label Detail and Content Stakes on User Perceptions of AI-Generated Images on Social Media.” arXiv preprint arXiv:2510.19024 (2025). https://doi.org/10.48550/arXiv.2510.19024