On its current trajectory, AI is inconsistent with democracy, sustainability, and autonomy
By Lizzie O'Shea
AI has been pitched as the great solution of the modern age - but the unfettered growth of this new technology could come at the expense of everything we hold dear.
In recent years, Artificial Intelligence has been pitched as a solution to climate catastrophe, the cost of living crisis, even death. But the industry is also gobbling up eye-watering amounts of resources - investment in AI is set to top US$400bn next year - to build tech that will use more electricity than the entirety of Japan by 2030. Is the hype around AI justified? And what are the costs?
Here are four things that you need to know to answer these questions:

1. People often mean different things when they talk about AI
The term ‘AI’ is often used to describe large language models, like ChatGPT, made by OpenAI. Traditional computer algorithms follow explicit instructions: ‘if this, then that’, for example. But machine learning works inductively: models are fed samples of desired results and then work backwards to learn how to reproduce them on new data sets.
So when you ask an AI model how to make cheese stick to pizza, it assigns probabilities to the words and phrases that could come next and spits out the string of text most likely to follow your prompt. In this case, the answer was: ‘mixing about 1/8 cup of non-toxic glue into the sauce’. When large language models produce these outputs, the machines aren’t thinking or using ‘intelligence’. They are generating probabilistic responses to a prompt - or, as computational linguist Emily Bender describes it, extruding synthetic text. A similar approach works for image generation too.
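To make that contrast concrete, here is a minimal, hypothetical sketch of the difference between an ‘if this, then that’ rule and next-word prediction. Every word, probability, and context in it is invented for illustration - real models derive these probabilities from billions of learned parameters, not a hand-written table:

```python
import random

# A traditional algorithm follows an explicit rule the programmer wrote:
def rule_based(temp_c):
    # ‘if this, then that’
    return "wear a coat" if temp_c < 10 else "no coat needed"

# A toy next-word model instead maps a short context to a probability
# distribution over possible next words. All values here are invented.
NEXT_WORD_PROBS = {
    ("cheese", "slides", "off"): {"pizza": 0.7, "the": 0.2, "plates": 0.1},
    ("slides", "off", "pizza"): {"easily": 0.5, "because": 0.3, "sauce": 0.2},
}

def sample_next_word(context, probs_table):
    """Pick the next word in proportion to its modelled probability."""
    distribution = probs_table.get(tuple(context[-3:]))
    if distribution is None:
        return None  # context not covered by our toy table
    words = list(distribution)
    weights = list(distribution.values())
    return random.choices(words, weights=weights, k=1)[0]

text = ["cheese", "slides", "off"]
for _ in range(2):
    nxt = sample_next_word(text, NEXT_WORD_PROBS)
    if nxt is None:
        break
    text.append(nxt)

print(" ".join(text))  # e.g. "cheese slides off pizza because"
```

Notice that nothing in the sampling loop checks whether the output is true; it only checks what is statistically likely to come next. That is exactly how ‘glue on pizza’ can end up as the top-ranked answer.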
But who cares about truth or accuracy? This is a big business, and plenty of tech bros have a lot to gain by selling this technology as magic dust that can solve all our problems. Lots of governments want to sprinkle it everywhere – from automating government services to integrating it into the private sector. That’s because it allows politicians to seem future-focused, and at the same time gives them cover to cut spending under the guise of AI-powered efficiency. The upshot? The term ‘AI’ is sometimes deliberately obscure to serve political ends.
2. Foundation models have serious problems
Most of the well-known AI models are based on opaque logic, with limited or no transparency over the data sets used to train them. This is especially true of the so-called ‘hyperscalers’: Microsoft (via its investment in OpenAI), Google, and Meta (maker of the Llama models). These companies are translating their existing size and dominance into control of the AI market.
They’re building large AI models that are trained on massive amounts of data, are resource-intensive and offer minimal transparency. Such ‘foundation models’ are the least trustworthy, because we can’t assess the value judgments made by the small selection of dudebros who built them - and those judgments shape the outputs.
In her comprehensive and powerful history of OpenAI, Empire of AI, journalist Karen Hao recounts how a small number of developers decided to include pornographic content in the dataset used to train their image generation model. Is it any wonder, then, that images generated by AI tend to conform to stereotypical, sexualised depictions of women? This is the business mentality that has shaped these foundation models: seek forgiveness, not permission; don’t build for trust or consent; if you have to, try to fix it later. It creates problems that are inherent to the models.
Moreover, these companies are incentivised to lean into harmful data-extraction practices because they have an ever-growing need to feed new data into their models. These business models give rise to all sorts of downstream harms, such as mis- and disinformation, more extremist content, and addictive algorithms. Many AI models are also underpinned by mass copyright infringement, with authors and artists seldom compensated for the use of their creative and journalistic work in the AI-training process.
3. Trust is not a nice-to-have, it’s essential to making the most of emerging tech
There is a mentality in big tech that we must lean into the AI revolution and that regulation will only slow us down. But these are the same companies that have proven their moral compasses are broken, so simply asking for our trust doesn’t cut it.
Most Australians agree - 83% of people trust AI more when assurances are in place, such as adherence to international AI standards, responsible AI governance practices, and monitoring of system accuracy. As ever, trust is something that must be earned. While there is immense pressure to amend laws and policies to suit the hyperscalers, we must not assume some inevitable productivity benefit from this approach to AI. Research from the US indicates that 95% of businesses are getting zero return on their investment in AI. That won’t improve if the approach is simply to impose inflexible, off-the-shelf products from big tech and then set and forget.
Perhaps most interesting, research suggests there is real utility in giving workers more of a say in how automation and AI unfold, using a bottom-up approach. There are plenty of gains to be had in updating existing workplace laws along these lines. We’ve also seen plenty of interesting examples of small language models, such as this Indigenous language revitalisation project. Such models are generally more trustworthy and sustainable than the large foundation models currently on offer, because they have been developed at the pace of trust.
4. We are not powerless to shape the future of AI
My organisation, Digital Rights Watch, advocates for proper regulation of AI so that we can get the most out of sophisticated technology. For us, that looks like strong privacy reform, which would stop the reckless extraction of personal information to train AI models and make existing design standards enforceable.
We need to protect creative industries from exploitation by hyperscalers who want to consume their work to train their models without compensation. We need regulators who are empowered to look under the hood of AI products and take action when they don’t comply with our laws. It might also mean strict regulations for potentially harmful uses, whether that’s chatbots for therapy, transcription services in medical settings, or AI agents marketed to kids.
Left to its own devices, industry will entrench its dominance and seek to privatise the gains and socialise the harms of emerging tech. This is the moment in which we need to demand better, and remind our elected representatives of who it is that they work for.
About the author
Lizzie O’Shea
Lizzie is a lawyer and the co-founder and Chair of Digital Rights Watch, an NGO that exists to defend digital rights in Australia. She leads large-scale litigation against major technology companies and is an advocate for stronger laws to protect people from predatory industries and dangerous business models. Lizzie is the author of two books and a contributing author to three more, including Future Histories, which explores campaigns to protect human rights and the history and future of technology.
About Digital Rights Watch
Digital Rights Watch has a vision for a digital world where all humanity can thrive, and where diversity and creativity flourish. The organisation exists to defend and promote this vision – to ensure fairness, freedoms, and fundamental rights for all people who engage in the digital world.
With support from the Minderoo Foundation, Digital Rights Watch has been hosting a series of town hall events in major cities this month, discussing How Can Democracy Survive AI? Tickets are free (!!), and you can register your attendance for the remaining Brisbane, Sydney, and Melbourne events here.

