Toner, who is in her early 30s and is an AI policy researcher, graduated from Melbourne Girls Grammar with a perfect VCE score. She joined OpenAI’s board in late 2021 after stints in China, where she studied its AI industry, and Washington DC, where she helped establish Georgetown’s Center for Security and Emerging Technology, a think tank focused on AI and national security at which she still works today.
Her subsequent departure from OpenAI’s board was widely characterised at the time as a showdown between ethics and profits. Between slowing down and speeding up.
Instead, Toner says there was mistrust within the board and that Altman had created a toxic atmosphere, claims that both Altman and board chair Bret Taylor have denied.
For Toner, it is critical that governments – including Australia’s – play an active role and tech companies not be left to their own devices or trusted to self-regulate what’s quickly becoming a massively important sector.
As of right now, however, it’s a losing argument.
This month, Google’s AI-based search variously told users to eat at least one small rock a day, to thicken pizza sauce using 1/8 of a cup of non-toxic glue, and to stare at the sun between five and 15 minutes a day.
It’s unpredictable technology that clearly isn’t ready for prime time, but it doesn’t matter.
We’re quickly entering an era in which technology companies – predominantly US-based heavyweights like Google, Meta, Nvidia and OpenAI – are racing to build generative AI into every product and service we use, even if the results are wrong or nonsensical.
Companies like Google and Meta are hoping generative AI will supercharge their platforms, making them far more engaging – and useful – than they were before. And there’s a lot of money at stake: it is estimated generative AI will be a $2 trillion market by 2032.
Most of Google’s billions of global users may not have used a chatbot before, but will soon be exposed to AI-generated text in its answers. Similarly, many of the images you scroll through on Facebook, or see in the pages of The Daily Telegraph, are now generated by AI.
This week, an image spelling out “All Eyes on Rafah” was shared by more than 40 million Instagram users, many of whom would have had no idea it was likely generated by artificial intelligence.
AI’s rapid ascent into the zeitgeist is reminiscent of bitcoin’s rise five years ago. As with bitcoin, everyone is talking about it, but no one really understands how it works. Unlike bitcoin, however, generative AI’s potential, as well as its impact, is very real.
According to Toner, no one truly understands AI, not even experts. But she says that doesn’t mean we can’t govern it.
“Researchers sometimes describe deep neural networks, the main kind of AI being built today, as a black box,” she said in a recent TED talk. “But what they mean by that is not that it’s inherently mysterious, and we have no way of looking inside the box. The problem is that when we do look inside, what we find are millions, billions or even trillions of numbers that get added and multiplied together in a particular way.
“What makes it hard for experts to know what’s going on is basically just, there are too many numbers, and we don’t yet have good ways of teasing apart what they’re all doing.”
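Toner’s point can be seen in miniature. The toy Python below is purely illustrative, with invented weights: a “neural network” layer is nothing more than lists of numbers multiplied and added together. Real models do exactly this, but with billions or trillions of numbers, which is why looking inside the box tells experts so little.

```python
# Toy illustration only: a "neuron" and a "layer" are just
# multiply-and-add over lists of numbers. The weights below are
# invented; real models have billions of such numbers.

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs plus a bias: multiply, then add."""
    return sum(x * w for x, w in zip(inputs, weights)) + bias

def layer(inputs, weight_rows, biases):
    """A layer is just many neurons reading the same inputs."""
    return [neuron(inputs, row, b) for row, b in zip(weight_rows, biases)]

inputs = [1.0, 2.0]                    # two input numbers
weights = [[0.5, -0.25], [1.0, 0.75]]  # two neurons, two weights each
biases = [0.1, -0.2]

print(layer(inputs, weights, biases))
```

Nothing here is mysterious in itself; the difficulty Toner describes is one of scale, not of principle.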
How AI works
Deep neural networks are the complex systems that power large language model chatbots like ChatGPT, Gemini, Llama and LaMDA.
They’re effectively computer programs that have been trained on huge amounts of text from the internet, as well as millions of books, movies and other sources, learning their patterns and meanings.
As ChatGPT itself puts it, first you type a question or prompt into the chat interface. ChatGPT then tokenises this input, breaking it down into smaller parts that it can process. The model analyses the tokens and predicts the most likely next tokens to form a coherent response.
It then considers the context of the conversation, previous interactions, and the vast amount of information it learned during training to generate a reply. The generated tokens are converted back into readable text, and this text is then presented to you as the chatbot’s response.
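The loop ChatGPT describes can be sketched in a few lines. The toy Python below is not how a real chatbot works under the hood (it uses simple word counts over an invented mini-corpus rather than a neural network), but it follows the same steps: break the prompt into tokens, repeatedly predict a likely next token, then join the tokens back into readable text.

```python
# Toy sketch of the loop described above: tokenise, predict the
# next token repeatedly, then convert tokens back to text.
# A real chatbot uses a huge neural network; this uses word counts
# over an invented corpus, purely for illustration.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate . the dog sat on the rug ."

# 1. Tokenise: here, just split on whitespace (real models use subwords).
tokens = corpus.split()

# 2. "Train": count which token tends to follow which.
next_counts = defaultdict(Counter)
for cur, nxt in zip(tokens, tokens[1:]):
    next_counts[cur][nxt] += 1

def generate(prompt, n_tokens=4):
    """Greedily predict the most likely next token, n_tokens times."""
    out = prompt.split()
    for _ in range(n_tokens):
        candidates = next_counts.get(out[-1])
        if not candidates:
            break
        # 3. Predict: take the most common follower of the last token.
        out.append(candidates.most_common(1)[0][0])
    # 4. Convert the tokens back into readable text.
    return " ".join(out)

print(generate("the dog"))
```

Scaled up by many orders of magnitude, with a neural network in place of the counting table, this is the shape of what happens each time a chatbot replies.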
Apart from the war over ethics and safety, there is another stoush brewing over the material used to train the likes of ChatGPT. Publishers like News Corp have signed deals to allow OpenAI to learn from its content, while The New York Times is suing OpenAI over alleged copyright infringement.
For now, the chatbots are working with limited datasets and in some cases faulty information, despite rapidly popping up in every classroom and workplace.
A recent RMIT study found 55 per cent of Australia’s workforce are using generative AI tools like ChatGPT at work in some capacity. Primary school teachers are creating chatbot versions of themselves to work with students, and ad agency workers are using ChatGPT to create pitches in minutes, work that would have taken hours.
Parliamentarians are wondering how to react. Some 20 years after Mark Zuckerberg founded Facebook, the Australian parliament is grappling with the prospect of enforcing age verification for social media. Decades into the advent of social media, we are still coming to terms with its effects and how we might want to rein it in.
People close to the technology, including Toner, are warning governments to not make the same mistake with AI. They say there’s too much at stake.
Some argue the nation’s parliament is already years behind in grappling with artificial intelligence. Science and Industry Minister Ed Husic says he is keenly aware of the issue: he has flagged new laws for AI use in “high-risk” settings and has appointed a temporary AI expert group to advise the government.
Researchers and industry members say those efforts have lacked urgency, however. In May, a Senate committee on the adoption of the technology heard that Australia has no laws to prevent a deepfake Anthony Albanese or Peter Dutton spouting misinformation ahead of the next federal election.
“I’m deeply concerned at the lack of urgency with which the government is addressing some of the risks associated with AI, particularly as it relates to Australian democracy,” independent senator David Pocock told this masthead.
“Artificial intelligence offers both opportunities and huge risks.”
Pocock wants specific laws to ban election-related deepfakes while others, including Australian Electoral Commission chief Tom Rogers, think codes of conduct for tech companies and mandatory watermarking would be more effective.
Either way, there’s a broad consensus that Australia is far behind other jurisdictions when it comes to grappling with both the risks and opportunities presented by AI. Simon Bush, chief executive of peak technology lobby group AIIA, fronted the Senate hearings and pointed out that Australia ranks second-last globally in adopting AI across the economy, according to several surveys.
“The rest of the world is moving at pace,” he said. “This is a technology that is moving at pace. We are not.”
The most recent federal budget allocated $39 million for AI advancement over five years, which Bush says is a negligible amount compared to the likes of Canada and Singapore, whose governments have committed $2.7 billion and $5 billion respectively.
For Bush, the narrative around fear and Terminator-esque imagery has been too pronounced, at the expense of AI adoption. He wants Australia to help build the technology its citizens will inevitably end up using.
“Australians are nervous and fearful of AI adoption, and this is not being helped by the Australian government running a long, public process proposing AI regulations to stop harms and, by default, running a fear and risk narrative,” he told the senate committee hearing.
Toner says, however, that Australia, like other countries, should be thinking about what kind of guardrails to put around systems that are already causing harm and spreading misinformation. “These systems could change pretty significantly over the next five, 10 or 20 years, and how do you get ready for that? That’s definitely something we need to grapple with.”
While Australia dithers, the tech is moving forward whether we like it or not.
Toner wants us to not be intimidated by AI or its developers, and says our collective involvement is crucial in shaping how AI technologies are used. “Like the factory workers in the 20th century who fought for factory safety, or the disability advocates who made sure the World Wide Web was accessible, you don’t have to be a scientist or engineer to have a voice.”
The very first step, for Toner, is to start asking better questions. “I come back to this question of, ‘is it just hit the accelerator or the brakes?’ Or, you know, are we thinking about who is steering? How well does the steering work, and how well can we see out of the windscreen? Do we know where we are, do we have a good map?
“You know, thinking about all these kinds of things, as opposed to just floor it and hope for the best.”