Australians using some artificial intelligence chatbots will need to prove their age from March next year in a bid to reduce children's access to confronting or sexual content online.
The nation's digital watchdog is concerned about young people engaging in sexually explicit conversations with AI platforms, sometimes for hours at a time.
The eSafety Commissioner is also hoping to limit children's exposure to chatbots which have, in some cases, encouraged self-harm or suicidal thoughts.
Under six regulatory codes released by the commission on Tuesday, platforms which show "high risk" or "harmful" content will be required to check a user's age before allowing them in.
Pornography websites and social media apps which allow pornography, self-harm material or graphic violence are also covered by the new codes.