AI needs to be protected from spin. Its future cannot be decided by Big Tech: Amba Kak

Time magazine’s list of the ‘100 Most Influential People in AI 2024’ includes CEOs like Sundar Pichai and Satya Nadella, as well as Amba Kak, a prominent voice against the concentration of power in the hands of a few companies. Speaking to the Sunday Times, Kak, executive director of the AI Now Institute, explained why AI is being called the new ‘snake oil’.
1) Were you surprised to see your name on Time magazine’s list or did you know you were on it?
I have been working on technology policy for over a decade, connected to the broader project of shaping technology in the public interest across many contexts. That our work at the AI Now Institute, which asks tough questions of the AI industry, is recognised alongside Big Tech CEOs on the TIME list is an acknowledgement that defining the horizons of ‘innovation’ is not the preserve of tech companies alone, and certainly not of Silicon Valley alone. It also reflects a changing consensus: despite the enormous lobbying power of these firms, many governments are waking up to the dangers of concentrating power in a very narrow segment of the tech industry, one that has become increasingly erratic and resistant to scrutiny. We are grateful to work with many others in advancing an actionable policy agenda focused on reasserting democratic control over this technology.
2) Leaders of generative AI companies like Sam Altman (who is also on the Time list) are being hailed as messiahs who will lead us to a new promised land. Is that sheen wearing off a bit as people realise that, despite the name, OpenAI is not as transparent as it appears?
It’s not just about OpenAI. Tracing the history of AI as a field, and the last decade in particular, is a master class in marketing! In fact, the Federal Trade Commission, the US consumer protection and competition regulator, where I advise on AI, has warned the public about AI “snake oil” because of misleading claims about the technology’s promise. Much of our work aims to defend words like ‘openness’ and ‘democratisation’ from corporate spin, and to expose the reality of a highly concentrated, unaccountable and notoriously opaque sector.
If you take a step back, you also see that these grand narratives serve an important business function: keeping staggering amounts of capital flowing in. This is despite there being nothing resembling a viable business model for generative AI yet, and many unresolved challenges of accuracy, privacy and security.
3) In India, IT ministry officials have said they want to regulate AI apps, but not at the cost of innovation. Which country or bloc should we learn from when it comes to AI safeguards?
So, one thing that needs to be clear is that “AI regulation” is no different from other forms of technology regulation. Data privacy, competition regulation, copyright, labour law… all of these influence and shape key inputs in the AI supply chain and should be understood as part of the toolkit for governing AI. The other big lesson from the global regulatory debate is really about process, not substance: we need to broaden who counts as an “expert” on AI so that not only narrow technical or business voices are heard. Things are changing, although not fast enough. Many major global initiatives have faced pushback because the way they design the conversation excludes people who have real subject-matter expertise in the areas where AI is being deployed, or who will be directly affected by these technologies, such as labour unions. We recently published a report with several other US-based organisations in response to the AI “roadmap” led by Senate Majority Leader Chuck Schumer, in which we criticised the corporate capture driving that process, which resulted in a set of very weak and toothless proposals.
4) Every few months there’s a new product or tool launching. There’s Microsoft’s controversial Recall AI feature that takes screenshots of everything we do on our computers, and X’s new image generator that can create a gun-wielding Kamala Harris. For you, what’s the scariest part of this AI juggernaut?
What keeps me up at night is this: if AI is going to be the social and economic infrastructure of the future, is it acceptable for it to be driven by such a limited group of companies and individuals? In our work, we have documented how a handful of Big Tech companies dominate access to cloud infrastructure and GPUs, and how this, combined with their pre-existing access to vast amounts of data (our data, harvested through unchecked surveillance in a regulatory vacuum over the past decade) and a monetisable consumer base, means the shift to large-scale AI allows those same firms to consolidate their existing dominance. For example, 75% of global AI startup funding in 2023 came from the three largest Big Tech cloud companies: Amazon, Microsoft and Google. So you can see how they are well positioned to shape the future trajectory of innovation in ways that suit their incentive structures and deepen their existing advantages.
Why is this a problem? Because economic power directly translates into political influence and control over the regulatory agenda. Companies are eager to use their capital to remind governments who’s boss. Whether it’s Microsoft’s investment in UK cloud infrastructure, which came under awkward scrutiny from competition authorities, or OpenAI’s (ultimately empty) threat to leave the EU over the AI Act, or Big Tech-funded campaigns to seat allies in the US Congress, we’re seeing an increasingly aggressive stance from big companies trying to consolidate their dominance by deploying their considerable economic and political power.
5) Even people who have used ChatGPT or image generators see them as fun, useful tools, or even as science fiction, but don’t really understand the impact they can have. Can you give us some examples of the insidious ways this technology is harming society?
The harms of ChatGPT and other image and text generators are far from science fiction; they are as dangerous as they are depressingly common. We have seen the problems of plagiarism, misinformation and the spread of deepfakes: highly believable AI-generated images often used to degrade women or sow division. You don’t need to be an astrologer to see that releasing these tools commercially at scale leads to predictable, nefarious uses. And yet, because of the super-charged hype cycle, there is a race to the bottom in timelines for bringing products to market.
It is also important to remember that AI was not created with the launch of ChatGPT; we have at least a decade of deployments to learn from, including in India. For example, we have seen AI systems like so-called “predictive policing” tools, algorithms that determine access to welfare entitlements, job-screening tools, and facial recognition and other biometric technologies. We have seen these systems not only fail, but fail disproportionately for those who are more vulnerable or otherwise marginalised in society. This is a reminder that the question is not just “do we want innovation”, but who wins and who loses with different kinds of innovation.
6) Is there one thing we are forgetting in this whole ‘AI will make the world a better place’ narrative?
What we are missing is a reminder that the public has agency, and a voice in shaping what “AI” means and will mean for the future. The critical debate about the future of this technology needs to move beyond the stale imagination of Silicon Valley and the whims of erratic tech CEOs. And we need to re-examine all the questions, including whether this particular paradigm of large-scale AI is economically or ecologically sustainable. Who decided that a Google AI-generated answer, which consumes about 30 times more energy than a regular search, is the right trade-off? Perhaps we should also ask what alternative R&D trajectories we cannot see because the market has no incentive to go there.



