Australia’s online safety watchdog has accused the world’s largest social media companies of failing to properly enforce the country’s prohibition on under-16s accessing their platforms, despite the legislation coming into force in December. The eSafety Commissioner, Julie Inman Grant, has raised “serious concerns” about compliance from Facebook, Instagram, Snapchat, TikTok and YouTube, citing poor practices including allowing banned users to repeatedly attempt age verification and inadequate safeguards to stop new account creation. In its first compliance assessment since the prohibition took effect, the regulator found numerous deficiencies and has now shifted from observation to active enforcement, cautioning that platforms must demonstrate they have implemented “appropriate systems and processes” to stop under-16s from using their services.
Non-compliance Issues Uncovered in First Major Review
Australia’s eSafety Commissioner has detailed a troubling pattern of non-compliance amongst the world’s biggest social media platforms in her first formal review since the ban took effect on 10 December. The report finds that Meta, Snap, TikTok and YouTube have collectively failed to establish adequate safeguards to prevent minors from accessing their services. Julie Inman Grant raised significant concerns about structural gaps in age verification systems, noting that some platforms have permitted children who originally declared themselves to be under 16 to subsequently claim they were older, thereby undermining the law’s intent.
The findings represent a significant escalation in regulatory action, with the eSafety Commissioner transitioning from monitoring towards direct enforcement. The regulator has made clear that the mere fact that some children still hold accounts does not settle the compliance question; platforms must instead provide concrete evidence that they have established robust systems and processes designed to stop under-16s from creating accounts in the first place. This shift reflects the government’s determination to hold tech giants accountable, with potential penalties looming for companies that fail to meet their statutory obligations. The poor practices identified in the review include:
- Allowing previously banned users to re-verify their age and regain account access
- Permitting repeated attempts at the same age assurance method without consequence
- Weak mechanisms to prevent under-16s from opening new accounts
- Limited notification systems for parents and the general public
- Lack of clear information about compliance actions and user account terminations
The Magnitude of the Problem
The sheer scale of social media usage amongst young Australians underscores the compliance challenge facing both the government and the platforms. With millions of accounts already removed or restricted since the ban’s implementation, the figures point to extensive early non-compliance. The eSafety Commissioner’s findings indicate that the operational and technical barriers to enforcing age restrictions have proven far more complex than expected, with platforms struggling to distinguish genuine age declarations from false ones. This complexity has left enforcement authorities grappling with the fundamental question of whether existing age verification systems are up to the task.
Beyond the operational challenges lies a wider question about companies’ readiness to place compliance ahead of user growth. Social media companies have long resisted strict identity verification requirements, citing privacy concerns and the genuine difficulty of confirming age online. However, the Commissioner’s report suggests that some platforms may not be making sufficient effort to deploy the legally mandated infrastructure. The shift towards active enforcement represents a critical juncture: either platforms substantially strengthen their compliance systems, or they risk penalties severe enough to transform their operations in Australia and possibly influence regulatory approaches internationally.
What the Figures Indicate
In the first month after the ban’s introduction, Australian authorities reported that 4.7 million accounts had been restricted or removed. Whilst this number initially appeared to demonstrate enforcement effectiveness, closer review reveals a more layered picture. The sheer volume of account deletions implies that many under-16s had managed to establish accounts in the first place, revealing that preventative measures were inadequate. Moreover, the data raises doubts about whether removed accounts reflect genuine enforcement or merely users closing their accounts voluntarily in response to the new restrictions.
The limited transparency around these figures has frustrated independent observers seeking to assess the ban’s genuine effectiveness. Platforms have disclosed scant detail about their enforcement methodologies, performance indicators, or the profile of removed accounts. This opacity makes it hard for regulators and the public to determine whether the ban is operating as intended or whether younger users are simply finding other routes to social media. The Commissioner’s insistence on comprehensive proof of systematic compliance measures reflects mounting dissatisfaction with platforms’ reluctance to provide complete details.
Sector Reaction and Pushback
The social media giants have responded to the regulator’s enforcement action with a combination of compliance assurances and doubts about the ban’s practical feasibility. Meta, which operates Facebook and Instagram, stressed its commitment to adhering to Australian law whilst arguing that accurate age determination remains a significant industry-wide challenge. The company has advocated for a different approach, proposing that robust age verification and parental consent requirements implemented at the app store level would be more effective than platform-level enforcement. This stance reflects broader industry concerns that the existing regulatory framework places an impractical burden on individual platforms.
Snap, the developer of Snapchat, has taken a more proactive public stance, announcing that it had suspended 450,000 accounts following the ban’s implementation and asserting that it continues to suspend additional accounts each day. However, industry observers question whether such figures reflect genuine compliance or simply reactive account management. The core conflict between platforms’ commercial models, which have traditionally depended on maximising user engagement and growth, and the statutory obligation to systematically remove an entire age demographic remains unresolved. Companies have long resisted stringent age verification, citing privacy issues and technical constraints, creating a standoff between authorities and platforms over who bears responsibility for implementation.
- Meta contends age verification should occur at app store level instead of on individual platforms
- Snap claims to have suspended 450,000 user accounts following the ban’s implementation in December
- Industry groups highlight privacy concerns and technical challenges as barriers to effective age verification
- Platforms assert they are doing their best whilst questioning the ban’s overall effectiveness
Broader Questions About the Ban’s Impact
As Australia’s under-16 social media ban enters its enforcement phase, fundamental questions persist about whether the legislation will achieve its stated objectives or merely drive young users towards less regulated platforms. The regulator’s first compliance report reveals that significant loopholes remain: children continue to find ways around age verification systems, and platforms have struggled to stop new underage accounts from being created. Critics contend that the ban’s effectiveness depends not merely on regulatory oversight but on whether young people will truly leave mainstream platforms or simply shift to alternative services, encrypted messaging applications, or VPNs that conceal their location and help them evade age checks.
The ban’s international ramifications add further complexity to any assessment of its effectiveness. Countries such as the United Kingdom and Canada, along with several European nations, are watching Australia’s experiment closely as they consider similar laws for their own populations. If the ban fails to reduce children’s digital engagement or to protect them from harmful content, it could undermine the case for similar measures elsewhere. Conversely, if enforcement becomes rigorous enough to genuinely restrict underage access, it may encourage other governments to adopt comparable measures. The outcome will likely shape international regulatory direction for the foreseeable future, ensuring Australia’s implementation efforts are scrutinised far beyond its borders.
Who Gains and Who Is Disadvantaged
Mental health advocates and child safety organisations have championed the ban as a necessary intervention against algorithmic manipulation and exposure to harmful content. Parents and educators maintain that removing young Australians from platforms designed to maximise engagement could lower anxiety levels, improve sleep quality, and reduce exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks linked to adolescent social media use, lending credibility to these concerns. However, the ban also eliminates legitimate uses of social media for young people: maintaining friendships, accessing educational content, and participating in online communities built around shared interests. The regulatory framework assumes that harm outweighs benefit, a calculation that some young people and their families dispute.
The ban’s practical implications extend beyond individual users to content creators, small businesses, and community organisations reliant on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now face legal barriers to participation. Small Australian businesses that depend on social media marketing lose access to younger audiences. Community groups, charities, and educational organisations struggle to reach young people through channels they previously used effectively. Meanwhile, the ban unexpectedly advantages large technology companies with the resources to build age verification infrastructure, arguably consolidating their market dominance rather than reducing it. These unintended consequences suggest the ban’s impact extends far beyond the simple goal of child protection.
What Happens Next for Compliance Monitoring
Australia’s eSafety Commissioner has announced a decisive transition from hands-off observation to proactive enforcement, marking a pivotal moment in the implementation of the youth access prohibition. The watchdog will now gather evidence to establish whether platforms have failed to take “reasonable steps” to restrict child participation, a regulatory test that extends beyond simply documenting that children remain on these systems. This approach demands tangible verification that platforms have established proper safeguards and processes designed to keep out minors. The Commissioner’s office has stated it will pursue investigations systematically, building cases that could trigger significant fines for non-compliance. This shift from oversight to action reveals growing frustration with the platforms’ efforts to date and signals that voluntary compliance alone is insufficient.
The enforcement phase raises critical questions about the adequacy of sanctions and the mechanisms for holding tech giants accountable. Australia’s legislation provides the regulatory tools, but their effectiveness depends on the eSafety Commissioner’s willingness to pursue enforcement and on the platforms’ capacity to adapt. International observers, particularly regulators in the UK and EU, will track Australia’s approach and results closely. A successful enforcement campaign could establish a blueprint for other jurisdictions contemplating comparable restrictions, whilst shortcomings might undermine the entire regulatory framework. The coming period will determine whether Australia’s pioneering statutory framework translates into genuine protection for young people or becomes largely performative in its impact.
