What if AI wrote all the content on the internet?

ChatGPT, Bing, and Bard now boast AI that writes convincing college essays. But what could happen if bots wrote all the content on the Internet?


The robot takeover

Artificial intelligence (AI) has been a molten hot topic across industries since at least the 1980s (remember “Short Circuit”? “Blade Runner”? “The Terminator”?!), with each evolution sending a new shockwave into the collective psyche.

The latest iterations of artificial intelligence (OpenAI’s ChatGPT, Microsoft’s Bing, and Google’s Bard) are no exception. This next-gen AI can produce convincing college-level (and higher!) essays, a somewhat unfortunate development alongside the widespread shift to e-learning during the COVID-19 pandemic. AI, combined with remote learning, has undoubtedly contributed to mounting academic plagiarism and cheating, not to mention the knowledge lost by those who take the shortcut.

But what happens when bots graduate from academia? We know that tons of businesses use AI for customer service already, so who’s to say there’s not room for a role expansion, or to promote the bots from within? Furthermore, what if AI wrote everything on the internet? What could happen? How bad could it really be?

“Bing, am I training my replacement by talking to you?”

Most web content writing is loosely based on the five-paragraph essay format we learn in grade school: thesis, supporting argument/evidence #1, supporting argument/evidence #2, supporting argument/evidence #3, conclusion. It’s already so formulaic that people copy ideas off one another without even realizing they’re plagiarizing, so why not just let Bing or Bard or ChatGPT have at it?

Rather than ask another reporter, academic, or think tank, we asked ChatGPT directly: What would happen if all the content on the Internet was written by bots?
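(If you’d rather ask programmatically than through the chat window, a minimal sketch using OpenAI’s Python SDK looks roughly like this; the model name and settings are illustrative assumptions, not a record of exactly how we asked.)

# Minimal sketch: posing the same question to the API.
# Assumes the openai package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; any chat-capable model works
    messages=[{
        "role": "user",
        "content": "What would happen if all the content on the internet was written by bots?",
    }],
)

print(response.choices[0].message.content)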

Here’s the response:

If all of the content on the internet was written by bots, it would have a significant impact on the way we consume and interact with information. Here are some potential scenarios:

Quality of Content: Bots are capable of generating large volumes of text quickly, but they lack the creativity, nuance, and context that human writers bring to their work. As a result, the quality of the content may suffer, and it may become difficult to distinguish between high-quality and low-quality content.

Lack of Diversity: Bots are programmed with specific rules and algorithms, and they cannot think outside the box or provide different perspectives on a topic. This may lead to a lack of diversity of thought and opinion, and may limit our exposure to new ideas and perspectives.

Trust and Credibility: If all the content on the internet is written by bots, it may be difficult to know whether the information is accurate, trustworthy, and unbiased. The lack of human oversight and critical thinking could erode our trust in the information we find online.

Economic and Social Impact: If bots were responsible for generating all the content on the internet, it could lead to significant job losses in the writing and publishing industries. Additionally, it could have a negative impact on social interactions, as the lack of human interaction in online content may lead to a sense of isolation and disconnection.

In summary, while bots have many useful applications, they are not a replacement for human creativity, critical thinking, and perspective. If all the content on the internet were written by bots, it could have significant consequences for the quality, diversity, and credibility of the information we consume, as well as for the job market and social interactions.

Reflecting on ChatGPT’s response

At first glance, this is a well-reasoned piece with good supporting arguments and a clear beginning, middle, and end. But it’s…well…boring, thereby proving its own point about the quality of content. 

The consequences of homogenous content

Digging in a little more on ChatGPT’s point about diversity, it’s widely accepted that linguistic evolution happens primarily on the fringes of society, gradually working its way into the zeitgeist thanks to adoption by celebrities and influencers and increased exposure and use. It’s how we now have words like “razzmatazz” and “gobbledygook” and “offbeat” and, well…”blog.”

Our capacity for wordplay and making up new words is born of the deep-seated need that most people have to connect with one another, a need that machines simply don’t have. (After all, does a bot get any joy from sharing its Wordle result or a perfect pun or portmanteau?)

Moreover, where does ChatGPT get these replies? It steals them from real, human writers who worked very hard to research and craft coherent, intriguing answers to questions. AI then spits out a composite answer essentially plagiarized from those writers. And even if imitation is the sincerest form of flattery, having content plagiarized by a bot dilutes that flattery (and the essence of the writing itself) with each derivative response.

Producing or perpetuating a post-truth world with AI

Quality and diversity of content aside, the social implications of a bot-driven content landscape could be devastating.

Imagine: More and more bots compose content-chimeras out of existing human content and then incorporate one another’s content-chimeras with the human originals, back and forth in a content chum churn, forever. They then regurgitate all of that knowledge into senselessness, churning and changing the truth until it is wholly unrecognizable. 

Without injections of human judgment, fun, or new thoughts, bots are likely to grind all of the zest out of online content until it’s completely unreadable, both literally and figuratively. Eventually, readers might be left with only the Frankenstein’s-monster-esque remnants of the worst game of telephone ever played. 

And if there’s one thing the last few years have taught us, it’s the importance of trust and credibility in reporting. Without human oversight, AI will simply retrieve anything it’s programmed to find online, plagiarizing content without confirming its veracity and exercising only the bias with which it was programmed. 

There are many who believe that we already live in a post-truth world, but imagine how much worse it might be if everything online was produced by AI that simply copies whatever it finds in its endless virtual echo chamber, filters it only by the bias with which it was programmed, and never verifies any of it: 

Yikes. 

Long-term social repercussions

We have to consider job loss due to automation as a long-term repercussion any time new technology is introduced. Luckily, since so many writers are contract workers and freelancers, they’re accustomed to the cyclic nature of the work and usually have other ways to earn money. However, if that cycle stays more “famine” than “feast” for too long, well, that’s a lot of very smart, creative people on unemployment. And we miss out on their valuable insight into universal issues in the process. 

Meanwhile, since no one would know what’s true and what’s not, it may become even more difficult to engage meaningfully online. There are already examples of bots repeating the hyperbolic language that has become characteristic of online interactions over the last decade.

Because so much of our lives is now lived through social media, it follows that bot-driven content may deepen the feelings of disconnection and dissatisfaction that contribute to depression and anxiety, particularly in young adults.

So, what do we make of AI-generated content?

A powerful and important tool

The fact is that AI is here to stay, and it’s an impressive content tool that can save time, creative energy, and money when used carefully. We’d be foolish, especially as a tech company, to approach this (or any other new technology) with indignation, clutching our pearls and squealing about a time when the Internet was “soooo much better” (here’s a hint: it’s always been…imperfect, to put it mildly). 


AI is a shiny new thing, and early adopters are already showing the world both the utility and the pitfalls of AI-generated content. It’s great for outlining or working through writer’s block, but we have to take care not to let it actually steer the whole thing, lest we lose the human touch that breathes life into our content for the people reading it. 


The final thing to consider is that search engine algorithms are also AI-powered, and they are constantly learning how to sift through the sediment to pan the gold to the top of the page. And content that is organized, interesting, and specific will outshine the competition, even if it takes more time to appreciate. Don’t let fool’s gold distract you from the real thing: We here at By the Pixel are committed to quality, and this commitment includes content that is AT LEAST edited and verified by a human. ;)