Big Tech ‘too big to care’ about GenAI quality
The AI ecosystem is rife with collusive behaviour by Big Tech firms, each of which is rushing unsafe products to the market to entrench its own dominance and convince investors that it is not falling behind. Policymakers and regulators will need to get more imaginative – and more assertive – in their response
Max von Thun 1 Aug 2024

In May, Google rolled out a long-expected update that incorporated generative artificial intelligence (AI) into its dominant search engine. Users searching for information in the United States are now sometimes shown an AI-generated overview that summarizes the results, followed by the usual list of websites ranked by relevance.

Within days, people were reporting bizarre, inaccurate, and downright dangerous answers from the new AI Overviews feature. The model suggested using glue to help cheese stick to pizza, touted the cardiovascular benefits of running with scissors, and claimed that former US President Barack Obama is a Muslim. Google rushed to fix these errors, but many experts argue that they are intrinsic to the technology. Google CEO Sundar Pichai himself described such “hallucinations” as an “unsolved problem” and an “inherent feature” of the technology. Tacitly admitting failure, Google appears to have shrunk the proportion of users who are shown the overviews.

Google was widely – and rightly – panned for launching a technology that is clearly unfit for use and could harm users. But few have considered why the tech behemoth was able to act so brazenly in the first place. The answer is simple: it is, in the words of Federal Trade Commission chair Lina Khan, “too big to care”. Google controls around 90% of the global market for web search and faces little competitive pressure. It can release an unreliable or unsafe product without fear of losing customers to rivals.

Similarly, Google’s market power gives it no incentive to maintain quality. Its search engine has been rapidly deteriorating over the past few years, with organic results being increasingly crowded out by ads and spam content. Cory Doctorow coined the term “enshittification” to describe tech companies’ practice of providing consumers with helpful and affordable (or free) services and then hiking prices and reducing quality once they corner the market. Examples abound: Amazon has gradually increased seller fees while inundating buyers with more sponsored results, while Meta-owned Instagram and Facebook increasingly push ads, videos and other clickbait instead of trustworthy news and updates from friends and family.

The same logic applies to “upgrades”, such as Google’s AI Overviews. These intentional and sometimes drastic product changes are sold under the progressive-sounding banner of “innovation” but often make the user experience much worse. To build on Doctorow’s point, one could even call it “shitovation”.

Google is far from the only monopolist to have released a half-baked AI product. Meta has forced its new AI agents upon millions of Instagram and Facebook users, even though they invent facts and impersonate humans. In February, OpenAI’s ChatGPT chatbot began spewing gibberish, including different languages jumbled together. And Microsoft’s own engineers have criticized the company for releasing an image generator, based on OpenAI technology, that creates violent, sexualized and politically biased content. Each firm has introduced targeted fixes, but this whack-a-mole approach is inadequate to what increasingly appears to be a fundamentally unreliable technology.

While the hasty rollout of generative AI is partly driven by monopolistic complacency, it also entrenches Big Tech companies’ market power, the very advantage that afforded them the massive amounts of data, computing power, expertise and capital needed to develop large language models in the first place. Google and Meta are using generative AI to reinforce their digital-advertising duopoly, while AI-fuelled demand for computing power is cementing Microsoft and Amazon’s stranglehold on cloud computing. AI tools also feed these companies’ data-hungry surveillance and manipulation machines. Whether users benefit from the technology is an afterthought.

The rivalry between tech companies, especially the scramble over generative AI, is sometimes cited as evidence of competitive forces in the industry. But this argument fails to distinguish between competition “for the market” and competition “in the market”. Tech giants might appear to compete with one another, but it is almost always a mirage. In fact, each firm is trying to deepen the moat around its own sphere of influence, resulting in a tense but largely static coexistence. And in those rare instances of direct competition, such as between Microsoft and Google in web search, market share remains largely unchanged.

More worryingly, today’s AI ecosystem is rife with collusive behaviour: tech giants are increasingly forming partnerships akin to the lucrative agreements that Google made to maintain its search dominance. These include Microsoft’s cloud-computing deal with Meta, a recently announced partnership between Apple and OpenAI, and Google’s plans to embed its AI technologies in Samsung phones; several of these arrangements are already facing scrutiny from antitrust authorities.

Tech giants are competing in one realm: each wants to win investors’ approval and avoid giving the impression that it is falling behind in the AI arms race. But the toxic combination of recklessly vying for growth and suppressing competition is fuelling the dangerous and wasteful deployment of untested technologies.

Encouraging marginally more competition between tech monopolists, in the hope that they will be forced to focus more on safety and reliability, will not be enough. Binding regulation, such as the European Union’s Artificial Intelligence Act, is a necessary first step toward holding these companies to account. But policymakers must also be more imaginative in wielding the tools at their disposal to foster genuine alternatives to the tech giants, and to ensure that users no longer serve as unwitting guinea pigs. That could mean deploying antitrust policy, or investing in companies that could challenge today’s cloud-computing and chip-manufacturing monopolies.

Equally important, it is time to stop treating innovation as an end in itself, regardless of what purpose it achieves or whose interests it advances. Instead, we must develop a far more nuanced understanding of how narratives about innovation are shaped and steered by investors, dominant firms and other powerful actors. Only then can we have a meaningful conversation about the role of AI in our societies.

Max von Thun is the director of European and transatlantic partnerships at the Open Markets Institute.

Copyright: Project Syndicate
