Google Study Reveals Generative AI’s Negative Impact on the Internet

Google’s latest study sheds light on the unforeseen perils of generative AI, spotlighting real-world misuse of the technology that is already harming the digital landscape.

Short Summary:

  • Generative AI is being misused to create fake, misleading content online.
  • The study identifies tactics like deepfakes and fraudulent activities as major concerns.
  • Google itself is caught in a conflict: it promotes generative AI even as it warns against the technology’s misuse.

The rapid evolution of generative AI, championed by tech giants like Google, is a double-edged sword. While promising revolutionary advancements, it also opens a Pandora’s box of ethical concerns and practical misuse. According to a report by Google DeepMind, the potential for generative AI to degrade the quality of online content is significant. The study, noted by 404 Media, lays out how AI technologies, including deepfakes and other manipulative tactics, are compromising the integrity of the internet.

To kick things off, let’s acknowledge the elephant in the room: Google’s relationship with AI is paradoxical. On one hand, it has invested heavily in AI projects like Gemini and Vertex AI. On the other, it is issuing grave warnings about the very technology it helped pioneer. This dichotomy is both fascinating and unsettling. Google is like the parent who buys their kid a BB gun but lectures them on gun safety.

First off, Google is all too aware of the hazards. “The widespread availability, accessibility, and hyperrealism of GenAI outputs across modalities has also enabled new, lower-level forms of misuse that blur the lines between authentic presentation and deception,” the Google DeepMind report states. In essence, the very strengths of generative AI—its accessibility and realism—are also its biggest flaws.

“The primary harm of AI is its ability to obscure the distinction between what is real and what is fake online,” the report concludes.

Most alarming is just how easy it is to misuse these tools. You don’t have to be an Einstein to wreak havoc: most misuse requires only a rudimentary understanding of generative AI and readily available models.

Manipulative Tactics and Vulnerabilities

The study identifies several real-world misuse tactics, the most concerning being manipulations of human likeness—like deepfakes—and the falsification of evidence. These methods are predominantly used for nefarious ends: influencing public opinion, enabling scams, or generating profit.

Interestingly, the report notes that these instances of AI misuse often don’t explicitly violate content policies or terms of service, which suggests the problem is systemic and by design: the tools are doing exactly what they were built to do. This mismatch between intended use and regulation highlights a glaring regulatory gap.

One case in point: the widespread creation and dissemination of deepfakes. It’s easy to imagine a future riddled with AI-generated videos of politicians making inflammatory statements or worse. Does Google know something we’re in the dark about? Are we bracing for a future where fact and fiction are indistinguishable?

Public and Publisher Reactions

But what does this mean for the everyday internet user and content creator? Consider the story from CNBC where Rutledge Daugette of TechRaptor criticized Google’s AI as essentially lifting content without proper attribution. Websites fear that Google’s AI-generated search results will hoard traffic, directing fewer visitors to their original pages.

“Their focus is on zero-click searches that use information from publishers and writers who spend time and effort creating quality content,” Daugette lamented.

This concern is echoed by Luther Lowe, a long-time Google critic and chief of public policy at Yelp. “The exclusionary self-preferencing of Google’s ChatGPT clone into search is the final chapter of bloodletting the web,” Lowe told CNBC. While these critiques may sound hyperbolic, they underscore a legitimate worry about the future of web content and the internet economy.

Legal Ramifications and Future Prospects

Legal experts and industry leaders like Barry Diller, Chairman of IAC, foresee a brewing storm over copyright and fair use. Diller warns that if AI firms can scrape and repurpose massive amounts of web data without proper compensation, the very structure of internet publishing could collapse. He advocates for redefined copyright laws to protect publishers from this new form of digital exploitation.

“What you have to do is get the industry to say you cannot scrape our content until you work out systems where the publisher gets some avenue toward payment,” Diller argued.

Diller’s point is particularly poignant as lawsuits loom on the horizon against firms like Google and OpenAI, alleging unlawful use of copyrighted material. What’s clear is that the boundaries of “fair use” must be reconsidered in the age of AI. Publishers, reeling from this existential threat, are already discussing how to adapt or defend their interests legally.

Search Engine Dynamics and Quality Control

The introduction of Google’s AI Overviews feature is causing quite a stir among publishers. The feature aggregates and summarizes information from various sources, often with mixed results. Within days of its release, users reported bizarre and incorrect advice, such as adding glue to pizza or eating small rocks.

MIT Technology Review criticized AI Overviews for its unreliability. Despite drawing on advanced models like Gemini, the system can still spit out erroneous or even dangerous advice. Google’s Liz Reid has assured the public of technical improvements aimed at mitigating these issues, but the underlying problem persists: AI Overviews relies on probabilistic language models that predict likely word sequences rather than verify facts, which leaves room for blunders.

“The large language model generates fluent language based on the provided sources, but fluent language is not the same as correct information,” said Suzan Verberne, a professor at Leiden University.
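To see why fluency and accuracy can diverge, here is a toy sketch, illustrative only and in no way Google’s actual system: the “model” is just a hand-written table of continuation probabilities, and generation samples a statistically likely next word with no step that checks whether the resulting sentence is true.

```python
import random

# Toy illustration of next-token prediction (not a real model):
# the "model" is simply a table of continuation probabilities,
# standing in for patterns absorbed from training text.
toy_model = {
    "Glue makes pizza cheese": [
        ("stick", 0.6),      # a statistically common phrasing
        ("stretchy", 0.3),
        ("inedible", 0.1),   # the factually important word can be the least likely
    ],
}

def next_token(model, prompt):
    """Sample the next word in proportion to its probability."""
    candidates = model[prompt]
    words = [word for word, _ in candidates]
    weights = [prob for _, prob in candidates]
    return random.choices(words, weights=weights, k=1)[0]

prompt = "Glue makes pizza cheese"
print(prompt, next_token(toy_model, prompt))
# The output reads fluently, but nothing in the procedure verifies the claim.
```

The sketch picks whichever continuation is statistically plausible; truth never enters the calculation, which is exactly the gap Verberne describes.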

Even with techniques like retrieval-augmented generation (RAG), which cross-references data outside the AI’s training set, the system can still falter. Misinterpretations occur, such as when the model quoted an academic book entirely out of context, stating Barack Obama was a Muslim president.
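For readers unfamiliar with RAG, a minimal sketch of the idea follows. The corpus, the keyword-overlap scoring, and the generate() stub are invented for illustration and are not Google’s pipeline; the point is to show where the failure mode creeps in. If retrieval surfaces an out-of-context passage, the generator fluently repeats its framing.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# Corpus, retrieval scoring, and generate() are illustrative assumptions,
# not the actual AI Overviews system.

CORPUS = [
    "Chapter 3 discusses the rhetorical claim that Obama was a 'Muslim president'.",
    "Barack Obama, the 44th U.S. president, is a Christian.",
]

def retrieve(query: str, corpus: list[str]) -> str:
    """Naive retrieval: return the passage sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(corpus, key=lambda doc: len(query_words & set(doc.lower().split())))

def generate(prompt: str) -> str:
    """Stand-in for an LLM call: it just echoes the retrieved context fluently."""
    context = prompt.split("Context: ", 1)[1]
    return f"According to the source, {context}"

query = "was Obama a Muslim president"
context = retrieve(query, CORPUS)
print(generate(f"Question: {query}\nContext: {context}"))
# If the top-ranked passage is the out-of-context one, the answer inherits its
# framing; grounding only helps when retrieval preserves the source's meaning.
```

In this toy setup the out-of-context book passage overlaps the query more than the accurate sentence does, so it wins retrieval and the “grounded” answer still misleads, mirroring the failure the report describes.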

The beta label attached to Google’s AI Overviews is a nod to its experimental stage, but experts like Chirag Shah argue for greater caution. “Until it’s no longer beta, it should be completely optional,” Shah insists. This underscores the importance of transparency and responsible deployment of AI tools.

Publisher Resilience and Adaptation

In light of these challenges, what should publishers do? For starters, they must build direct relationships with their audiences, reducing reliance on search engines. Unique voices and compelling content will drive return visits even as referral traffic from Google declines.

Furthermore, as Raptive’s Marc McCollum has discussed, working within legal frameworks to secure fair compensation for content could be vital. Strategic partnerships, like the $250 million deal between News Corp and OpenAI, signify a potential path forward, combining high-quality content with AI’s transformative abilities.

Conclusion

Google’s revealing study on generative AI’s dark side is a wake-up call. Misuse of this powerful technology is already rampant, and its double-edged potential poses substantial risks to the quality and trustworthiness of the internet. It’s a complex tapestry of opportunities and pitfalls, one that demands vigilant oversight, robust legal protections, and ethical stewardship.

As the digital landscape evolves, stakeholders across sectors must come together to navigate these challenges, ensuring the transformative benefits of AI while mitigating its potential for harm. Whether through legislative action, technological safeguards, or industry-wide cooperation, the aim must be to foster a safer, more reliable AI ecosystem.


SJ Tsai
Chief Editor. Writer wrangler. Research guru. Three years at scijournal. Hails from a family with five PhDs. When not shaping content, creates art. Peek at the collection on Etsy. For thoughts and updates, hit up Twitter.
