Social media companies will not be required to verify the age of every user or meet a minimum standard for removing underage accounts, according to new federal guidelines. The government confirmed a more lenient approach to enforcing the ban on users under 16 years old. Details of how the ban will operate are due to be released on Tuesday, outlining the steps platforms such as Facebook, Snapchat, and TikTok must take to comply with the law.

While these platforms must demonstrate to the eSafety watchdog that they have taken "reasonable steps" to remove accounts belonging to users under 16, there will be no legally enforceable standard for accuracy in age verification. A recent study on age verification methods indicated that while several approaches are viable, none are foolproof, raising concerns about both accuracy and privacy.

Platforms will not be mandated to use specific technologies for age screening. However, they must ensure their policies are transparent and consistent, and they must establish a process for handling disputes. If platforms fail to show they have taken the required steps, eSafety can initiate legal action, with courts able to impose fines of up to $49.5 million for violations.

Compliance is expected from the day the ban begins on December 10, although a Labor spokesperson indicated that an adjustment period may be allowed. "The government has done the work to ensure that platforms have the information they need to comply with the new laws — and now it's on them to take the necessary steps," said Communications Minister Anika Wells.

Wells and eSafety Commissioner Julie Inman Grant have previously suggested that the regulatory guidelines would not be overly prescriptive, acknowledging that some accounts would likely slip through the cracks. This acknowledgment reflects the government's effort to balance the ban with user privacy: platforms will be instructed not to take a heavy-handed approach of blanket verification and are not expected to retain user age data.

Concerns have also been raised about the effectiveness of age-checking technology. During government tests, children as young as 15 were misidentified as being in their 20s and 30s, casting doubt on the viability of the teen social media ban. The law passed last year stipulates that platforms cannot rely solely on government-issued identification for age verification, despite findings that this method is the most effective.

Instead, the guidelines will encourage a "layered" approach to age assessment, utilizing multiple methods while minimizing user friction. This could include AI-driven models that analyze facial scans or track user behavior. Wells has previously pointed to these models as examples of advanced technology, although experts have raised concerns about their reliability.

The upcoming publication of the guidelines marks the final step before the ban takes effect. Their release is likely to raise concerns within the industry, particularly over the immediate compliance requirements and potential fines. During consultations, platforms expressed a desire for a grace period, and some larger companies sought more detailed guidelines to reduce ambiguity.