Parents have expressed outrage at Meta’s decision to lower the age limit for WhatsApp, accusing the social media giant of putting children at risk for the sake of profit. The move, which now allows children as young as 13 to use the messaging app, has drawn criticism from across society.
Campaigners have called the decision “tone deaf,” raising concerns about the dangers of giving younger users access to a platform that is end-to-end encrypted. Vicky Ford, a member of the Commons education committee, highlighted the risk of illegal content being shared on WhatsApp, where encryption makes such material difficult to monitor and remove.
The campaign group Smartphone Free Childhood has garnered support from 60,000 parents who oppose the age limit change. Co-founder Daisy Greenwell criticized Meta for prioritizing profits over children’s safety, saying the move ignores warnings from experts across many fields about the harm social media can cause young people.
Researchers have also raised concerns about the impact of private group chats on young users, warning that such platforms can normalize harmful rhetoric and extreme material. Meta has defended its decision, stating that users have options to control who can add them to groups and report any suspicious activity.
In response to the backlash, Meta has announced new safety features aimed at protecting users from harmful content, including a nudity protection filter for direct messages on Instagram. The feature automatically blurs images containing nudity and gives users options to block and report inappropriate content.
Despite the company’s efforts to strengthen safety measures, critics remain skeptical of Meta’s motives and of the risks of allowing younger users onto messaging apps like WhatsApp. The debate over children’s online safety continues as parents, lawmakers, and experts call for greater accountability from social media companies.