Australia’s new rule banning under-16s from social media takes effect – a move that could spark wider global restrictions

Starting today, Australia will become the first country to set a minimum age for social media use, forcing platforms like Instagram, YouTube, and Snapchat to block more than a million accounts belonging to users under the age of 16.
The Australian law has been criticized by tech companies but supported by many parents in the country. It is expected to set a global precedent for tightening regulations on online safety for young users.
The Online Safety Amendment (Social Media Minimum Age) Act states that age-restricted platforms will be expected to take “reasonable” steps to identify existing accounts belonging to individuals under 16 years of age.
It mandates that they must deactivate or delete these accounts, prevent them from opening new accounts, and prohibit any methods that allow children under 16 to bypass the restrictions.
The platforms must also take care not to wrongly remove accounts, so that users aged 16 and over are not accidentally caught up in the ban.
This rule has put major tech companies in a difficult position. They have all publicly opposed the law while also stating that they will comply with it.
Local reports indicate that Meta has already begun disabling accounts belonging to users under the age of 16.
While the law doesn’t penalize young Australians who attempt to access social media after it comes into effect, platforms that fail to block them could face fines of up to AU$49.5 million (about US$33 million).
According to the Australian government, the aim of these restrictions is to protect young people from the “pressures and risks” they face when logging into social media accounts.
The government points to design features that encourage young people to spend more time on screen, as well as content that can harm their health and well-being.
A survey by the government’s online safety regulator had previously found that more than half of young Australians had experienced cyberbullying on social media platforms.
Dating websites, gaming platforms, and AI chatbots are excluded from the legislation, even though chatbots have recently come under scrutiny for allegedly allowing children to engage in “sensual” chats.
Besides tech companies, the Australian Human Rights Commission has also stated that a complete ban on social media for those under 16 may not be the “right answer,” as it could infringe on their freedom of expression.
Which platforms does the law apply to?
From December 10, Facebook, Instagram, Kik, Reddit, Snapchat, Threads, TikTok, Twitch, X, and YouTube will be required to take “reasonable steps” to prevent Australians under 16 from creating accounts on their platforms.
The Australian government may reconsider the list based on evolving circumstances and if young users migrate to other platforms that are not currently covered.
Australia initially exempted YouTube from the ban, citing its educational value, but reversed the exemption in July 2025 after a key regulator found that it was the platform children most frequently cited for exposure to harmful content.
Age restrictions will apply to social media platforms that meet three specific conditions: (1) the sole purpose, or a significant purpose, of the service is to enable online social interaction between two or more end users; (2) the service allows end users to link to or interact with some or all other end users; and (3) the service allows end users to post content on the service.
Australia’s reasoning for imposing the restrictions
According to the government, allowing children under 16 to log in to social media accounts increases their risk of exposure to pressures and harms that can be difficult to manage.
This can expose them to cyberbullying, stalking, grooming, and harmful and hateful content.
This can be due to the design features of social media platforms, which encourage children to spend more time on screens and display content that can be detrimental to their health and well-being.
According to a survey by Australia’s online safety regulator, eSafety, between December 2024 and February 2025, nearly 3 in 4 (74 percent) children had seen or heard harmful content online.
More than 1 in 2 (53 percent) had experienced cyberbullying. Three in five (60 percent) had seen or heard online hate, while 1 in 4 (27 percent) had experienced it themselves. One in four (25 percent) had experienced non-consensual tracking, monitoring, or harassment.
The survey also found that 38 percent of respondents had someone say hurtful things to them online, 17 percent had their private messages, information, or secrets shared, 16 percent had been sent or tagged in offensive or upsetting photos or videos, and 13 percent had been told online that they should harm or kill themselves or that they should die.
How have technology firms responded?
While companies are complying with the law, they opposed its implementation during the consultation phase.
YouTube said the law, which requires children to use the platform without an account, “removes parental controls and safety filters designed to protect them—it will not make children safer on our platform.” Meta called the law “ineffective” and said it would “fail to achieve its stated goals of making young people safer online and supporting people who experience harm from using technology.”
Snap said that disconnecting teens from their friends and family does not make them safer but could push them towards less secure, less private messaging apps.
X said it was concerned about the potential impact of the law on the human rights of children and young people, including freedom of expression and access to information.
