Australia’s online safety watchdog has accused the world’s biggest social platforms of failing to adequately implement the country’s ban on under-16s using their services, despite the legislation coming into force in December. The eSafety Commissioner, Julie Inman Grant, has raised “serious concerns” about compliance by Facebook, Instagram, Snapchat, TikTok and YouTube, citing poor practices including allowing banned users to repeatedly attempt age verification and insufficient measures to stop new account creation. In its first compliance assessment since the prohibition took effect, the regulator found numerous deficiencies and has now moved from monitoring to active enforcement, warning that platforms must show they have put in place “appropriate systems and processes” to prevent children under 16 from accessing their services.
Regulatory Breaches Exposed in First Major Review
Australia’s eSafety Commissioner has outlined a troubling pattern of non-compliance amongst the world’s biggest social media platforms in her first formal review since the ban came into effect on 10 December. The report shows that Meta, Snap, TikTok and YouTube have collectively failed to establish sufficient safeguards to prevent minors from accessing their services. Julie Inman Grant raised significant concerns about structural gaps in age verification processes, noting that some platforms have allowed children who initially declared themselves under 16 to subsequently claim they were older, thereby undermining the law’s intent.
The findings represent a notable intensification of regulatory action, with the eSafety Commissioner transitioning from monitoring towards direct enforcement. The regulator has emphasised that merely demonstrating some children still maintain accounts is insufficient; platforms must instead provide concrete evidence that they have established robust systems and processes designed to prevent under-16s from creating accounts in the first place. This shift demonstrates the government’s determination to hold tech giants responsible, with possible sanctions looming for companies that fail to meet the legal requirements. Among the poor practices identified were:
- Allowing previously banned users to re-verify their age and regain account access
- Permitting repeated attempts at the same verification process without consequence
- Inadequate systems to stop accounts for under-16s from being opened
- Limited reporting tools for parents and the general public
- Lack of publicly available information about compliance actions and account deletions
The Magnitude of the Problem
The substantial scale of social media usage amongst Australian young people underscores the compliance challenge confronting both the government and the platforms themselves. With millions of accounts already restricted or removed since the ban’s implementation, the figures point to widespread initial non-compliance. The eSafety Commissioner’s conclusions suggest that the operational and technical barriers to enforcing age restrictions have proven far more complex than anticipated, with platforms struggling to differentiate genuine age confirmations from false claims. This complexity has left enforcement authorities grappling with the core question of whether current age verification technologies are adequate to the task.
Beyond the technical obstacles lies a broader concern about the readiness of companies to place compliance ahead of user growth. Social media companies have consistently opposed strict identity verification requirements, citing privacy concerns and the real challenge of confirming age online. However, the Commissioner’s report suggests that some platforms may not be making sufficient effort to deploy the infrastructure required by law. The move to active enforcement represents a pivotal moment: either platforms will substantially upgrade their compliance infrastructure, or they stand to incur significant penalties that could reshape their business models in Australia and possibly affect regulatory approaches internationally.
What the Numbers Reveal
In the first month after the ban’s launch, Australian regulators stated that 4.7 million accounts had been restricted or removed. Whilst this statistic initially appeared to show regulatory success, closer review reveals a more complex picture. The sheer volume of account takedowns implies that many under-16s had managed to establish accounts in the initial stages, indicating that preventative measures were inadequate. Additionally, the data raises questions about whether removed accounts reflect genuine enforcement or merely users deleting their accounts of their own accord in light of the new restrictions.
The limited transparency concerning these figures has frustrated independent observers seeking to assess the ban’s genuine effectiveness. Platforms have disclosed minimal information about their implementation approaches, effectiveness metrics, or the characteristics of removed accounts. This absence of transparency makes it challenging for regulators and the general public to assess whether the ban is working as intended or whether young people are merely discovering alternative ways to access social media. The Commissioner’s push for comprehensive proof of systematic compliance processes reflects growing frustration with platforms’ reluctance to provide complete details.
Sector Reaction and Pushback
The social media giants have responded to the regulatory enforcement measures with a combination of assurances of compliance and scepticism about the practical feasibility of the ban. Meta, which runs Facebook and Instagram, stressed its commitment to complying with Australian law whilst at the same time contending that accurate age determination continues to be a major challenge across the industry. The company has advocated for an alternative strategy, suggesting that robust age verification and parental approval mechanisms implemented at the app store level would be more effective than platform-level enforcement. This stance reflects broader industry concerns that the existing regulatory system puts an impractical burden on individual platforms.
Snap, the developer of Snapchat, has adopted a more assertive public position, announcing that it had locked 450,000 accounts since the ban took effect and asserting it continues to suspend additional accounts each day. However, industry observers dispute whether such figures demonstrate genuine compliance or merely reactive account management. The core conflict between platforms’ business models—which traditionally depended on maximising user engagement and expansion—and the regulatory requirement to systematically remove an entire age demographic remains unresolved. Companies have long resisted rigorous age verification methods, citing privacy concerns and technical limitations, creating a standoff between authorities and platforms over who bears responsibility for implementation.
- Meta maintains age verification ought to take place at app store level rather than on individual platforms
- Snap claims to have locked 450,000 accounts since the ban’s implementation in December
- Industry groups cite privacy concerns and technical challenges as impediments to effective age verification
- Platforms contend they are doing their best whilst questioning the ban’s overall effectiveness
Broader Questions About the Ban’s Effectiveness
As Australia’s under-16 social media ban moves into its enforcement phase, fundamental questions remain about whether the legislation will accomplish its stated objectives or merely push young users towards unregulated platforms. The regulator’s first compliance report reveals that significant loopholes persist: children keep discovering ways to bypass age verification systems, and platforms have struggled to stop new underage accounts from being established. Critics contend that the ban’s effectiveness depends not merely on regulatory vigilance but on whether young people will genuinely leave major social networks or simply shift towards alternative services, encrypted messaging apps, or VPNs that mask their location.
The ban’s worldwide implications further complicate assessments of its effectiveness. Countries such as the United Kingdom, Canada, and several European nations are monitoring Australia’s experiment closely, evaluating similar legislation for their own citizens. If the ban fails to reduce children’s digital engagement or does not protect them from dangerous online content, it could damage the case for similar measures elsewhere. Conversely, if enforcement proves robust enough to genuinely restrict underage participation, it may embolden other nations to pursue similar approaches. The result will probably shape international regulatory direction for many years ahead, meaning Australia’s enforcement efforts will be scrutinised far beyond its borders.
Who Gains and Who Loses
Mental health advocates and child safety organisations have championed the ban as an essential measure to counter algorithmic manipulation and exposure to harmful content. Parents and educators maintain that taking young Australians off platforms built to maximise engagement could ease anxiety, improve sleep patterns, and reduce exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks linked to social media use amongst adolescents, lending credibility to these concerns. However, the ban also removes legitimate uses of social media for young people—keeping friendships alive, accessing educational content, and participating in online communities around common interests. The regulatory framework assumes harm exceeds benefit, a calculation that some young people and their families dispute.
The ban’s practical impact extends beyond individual users to affect content creators, small businesses, and community organisations dependent on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now face legal barriers to participation. Small Australian businesses that rely on social media marketing are cut off from younger audiences. Community groups, charities, and educational organisations find it difficult to engage young people through channels they previously used effectively. Meanwhile, the ban may unintentionally benefit large technology companies with the resources to develop age verification infrastructure, arguably consolidating their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects extend far beyond the simple goal of child protection.
What Comes Next for Regulatory Action
Australia’s eSafety Commissioner has signalled a notable transition from passive monitoring to direct intervention, marking a critical turning point in the implementation of the youth access prohibition. The regulator will now gather information to establish whether services have failed to take “reasonable steps” to restrict child participation, a legal test that goes beyond simply noting that young people remain on these platforms. This approach demands demonstrable proof that companies have introduced suitable systems and processes designed to exclude minors. The regulator has stated it will pursue investigations systematically, building cases that could lead to substantial penalties for non-compliance. This shift from observation to enforcement reveals growing dissatisfaction with the services’ existing measures and suggests that cooperative engagement alone will no longer suffice.
The enforcement phase raises important questions about the adequacy of fines and the practical mechanisms for ensuring platform accountability. Australia’s legislation provides compliance mechanisms, but their effectiveness hinges on the eSafety Commissioner’s readiness to undertake formal proceedings and the platforms’ ability to adapt meaningfully. Overseas authorities, notably regulators in the UK and EU, will closely track Australia’s enforcement strategy and results. A successful enforcement campaign could set a template for other jurisdictions contemplating similar bans, whilst shortcomings might weaken the overall legislative framework. The next phase will determine whether Australia’s pioneering regulatory approach delivers genuine protection for young people or remains largely symbolic in its impact.
