Even though the Albanese government has unveiled its teen social media ban bill, no-one knows the precise details about how it will work in practice.
That is by design. The bill has a 12-month lead-in, and handballs to the eSafety commissioner the decision of what “reasonable steps” social media platforms must take to stop Australians under 16 from having social media accounts. There’s a government trial underway looking at the effectiveness of the technologies that could be used.
The details of the bill are a live issue. Last night a committee reviewing the legislation published a report recommending a series of changes to the bill. The communications minister’s office says it is “engaging in good faith” with amendments to strengthen privacy protections. But until amendments are on the table with the support of enough MPs to pass, it’s only worth engaging with what’s currently in the bill.
This is all to say we can’t know crucial details about how the teen social media ban — and its requirement for social media companies to use “age assurance methods” to determine each Australian’s age online — will work.
It is fair to say the government doesn’t know yet either. (Don’t get angry at me for saying this, staffers: if you do know how it will work, you’re admitting you’re undermining the very process you set up to figure it out.)
While acknowledging we can’t know for sure, the best way to imagine how the ban might work in practice is to look at the tech industry’s existing de facto social media age bans. These show that the government’s law is certain to end up requiring that more Australians provide things like government ID or facial scans to access social media.
All the major social media platforms (including Facebook, Instagram, TikTok, Snapchat, Reddit and X) currently have a minimum age of 13 to use their platforms.
What’s really useful to consider is that the steps each platform takes to enforce this age restriction differ. For example, Snapchat asks a user to volunteer their age when signing up for an account and, unless someone reports a user for being underage, leaves it at that. This is technically “age assurance” as the government wants — Snapchat did ask, after all! — but it is extremely unlikely to reach the bar of “reasonable steps” to stop underage children from accessing the app, because the check is trivial to fudge.
On the other end of the spectrum, Meta uses a number of methods to verify a user’s age — some of which the government is trialling for its social media ban. When you sign up for a Facebook or Instagram account, Meta asks you to volunteer your age. But the company doesn’t just take your claim as gospel. Instead, it uses clues from your account to judge whether you might have been lying, in what the government calls “algorithmic profiling”.
For example, if an account changes its age from 14 years old to 24 years old — perhaps to get around the platform’s existing restrictions on children’s accounts — the company will flag the user as needing to provide more proof of age.
Meta is now using AI, too, to judge if an account’s behaviour suggests it might be run by a child. The company says it looks at things like the types of posts on the account (for example, if someone posts “just had the best 11th birthday!”), the people the account is friends with (say, other 11-year-olds) and the type of content it engages with (perhaps they’re a big Jake Paul fan). If the company determines a user might have lied about their age, it asks them to provide more proof.
So what happens if Meta decides you need to put forward more evidence of your age? At the moment, it asks for one of two things: a facial scan, or government ID. It works with a company named Yoti that provides some of this technology. If you don’t cooperate, Meta locks you out of your account until you fulfil one of these requirements.
Looking at Meta’s current system, I think it’s credible to imagine that something like it would reach, or come close to, the level of “reasonable steps”.
It uses what the government calls a “waterfall” approach by mixing various methods, depending on how confident the company is that you’re an adult. If you’ve had a social media account for 17 years? Well, you probably don’t need to provide your government ID. But if you only follow Paw Patrol on Instagram and post “skibidi toilet”, you might have to provide some more proof that you are the adult you say you are.
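To make the “waterfall” idea concrete, here is a toy sketch of that kind of decision logic. To be clear, every signal name, threshold and outcome here is invented for illustration — this is not Meta’s actual system or anything the government has specified, just a way of seeing how escalating checks might fit together.

```python
# A hypothetical "waterfall" age-assurance check. All signal names and
# thresholds below are invented for illustration only.

def next_verification_step(declared_age: int,
                           account_age_years: float,
                           underage_signal_score: float) -> str:
    """Pick the least intrusive check that matches confidence about the user.

    underage_signal_score runs from 0.0 (no signs the account belongs to a
    child) to 1.0 (strong signs), imagined as coming from behavioural clues
    like posts, friend networks and the content the account engages with.
    """
    if declared_age < 16:
        return "block"  # under the proposed minimum age, no account
    if account_age_years >= 10 and underage_signal_score < 0.2:
        return "accept"  # long-standing account with no red flags
    if underage_signal_score < 0.5:
        return "request_facial_age_estimate"  # lighter-touch check first
    return "request_government_id"  # strongest signals demand strongest proof

# A long-standing account with adult-looking behaviour sails through,
# while a brand-new account posting child-like signals hits the heaviest check.
print(next_verification_step(34, 12.0, 0.05))
print(next_verification_step(24, 0.1, 0.8))
```

The point of the structure is the ordering: most users never reach the bottom of the waterfall, but the small minority who trip the behavioural signals are funnelled toward ID or a face scan.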
In some ways, I think this should moderate some of the more extreme claims out there about the ban. It corroborates the communications minister’s claim that Australians will not be required to give government ID to social media companies. The law will not require that TikTok ask every Australian user to log in using the Digital ID scheme.
In other ways, this demonstrates the real costs of the ban — costs the government is trying to tiptoe around.
While it’s true that not all Australians will have to provide their government ID, this law all but guarantees that more Australians will be forced to either provide government ID or have their face scanned if they want to stay online — something many Australians will object to.
And when the government says you won’t be forced to give your ID to social media companies, that doesn’t rule out having to give your ID to a third-party company like Yoti acting on behalf of social media companies.
Every policy, including the teen social media ban, involves trade-offs. The impact of this ban, as the tech industry’s existing practices show, is to trade away privacy and convenience in an attempt to stop young Australians from having social media accounts. Regardless of whether we agree with the aim, we must acknowledge this trade if we are to debate whether it’s worth it.
Have something to say about this article? Write to us at letters@crikey.com.au. Please include your full name to be considered for publication in Crikey’s Your Say. We reserve the right to edit for length and clarity.