Snapchat AI Privacy Scare


Snapchat’s AI feature, My AI, caused a stir among the app’s millions of users after going rogue. On August 15, the bot, powered by OpenAI’s ChatGPT, posted its own story and then mysteriously stopped responding to any further questions. The incident highlights the risks and unpredictability of AI, which can "hallucinate" and behave erratically.

The My AI feature allows Snapchat users to converse with the generative AI bot by sending it snap texts, and through these interactions users can ask questions and receive recommendations. Earlier this week, however, the bot posted a flat, two-tone image to its story that many users mistook for a photo of their own ceiling. Concerned about their privacy, users questioned the bot, only to receive a vague response about a technical issue. Snap officials quickly reassured users that it was a temporary glitch and not the bot acting on its own. Nevertheless, the incident has raised questions about the reliability and safety of Snapchat’s integrated AI feature.

Snapchat has a history of breaching data protection rights, making this recent glitch all the more concerning for users. In the past, the company has introduced features without user knowledge or consent, had employee data exposed through a cyberattack, and allowed staff access to user details and content through internal tools. The My AI feature, which cannot be removed or disabled, has faced criticism from users who are calling for its removal over safety concerns. Although Snap has implemented additional safeguards and parental controls, this "glitch" has further highlighted questions about how the bot handles the vast amount of data it collects from millions of teenage users.

Despite these concerns, My AI has already garnered significant usage, with over 150 million people having sent 10 billion messages to the bot since its launch. However, this incident underscores the need for greater transparency and accountability when it comes to AI technologies, particularly those that interact with users on such a large scale. As businesses increasingly rely on generative AI, the potential for unpredictable behavior and data misuse becomes a critical issue that needs to be addressed.


Snapchat’s My AI Feature Goes Rogue, Raises Concerns About Data Safety

Snapchat’s My AI feature, which was launched on May 31, recently caused anxiety among millions of users when it went rogue on August 15. Powered by OpenAI’s ChatGPT, the bot posted a story of its own and then unexpectedly stopped responding to any further questions. This incident highlights the unpredictable nature of AI and how it can sometimes "hallucinate."

The My AI feature allows Snapchat users to send snap texts to the bot and receive a response from the generative AI. It acts as a conversational partner, providing recommendations and engaging in conversations with users. Earlier this week, however, the bot posted a flat, two-tone image that many users mistook for a photo of their own ceiling. When users questioned the bot about their privacy concerns, the only response they received was, "Sorry, I encountered a technical issue."

Snap officials were quick to address the situation, stating that it was a glitch and not the bot working autonomously. They assured users that the issue had been resolved. However, this incident raises questions about how Snapchat’s integrated AI feature operates and the potential risks associated with it.

Snapchat has a history of breaching data protection rights, which has further fueled concerns about the My AI feature. The company has previously introduced new features without user knowledge or consent. In 2016, a cyber attacker exposed the payroll data of around 700 employees. The following year, it was revealed that Snapchat could install image recognition AI on users’ devices without affecting the app’s size or functionality. Additionally, former Snap staff anonymously disclosed that employees had access to user details and content through an internal tool called SnapLion.

The recent glitch in the My AI feature has sparked outrage among users, with many calling for Snapchat to discard the feature altogether. However, the bot remains pinned to the top of the chat feed, and there are no options to remove or disable it. Snap had already added safeguards and parental controls in response to safety concerns raised shortly after the bot’s launch.

Despite the glitch, the My AI feature has been widely used, with more than 150 million people sending 10 billion messages to the bot within a month of its launch. However, this incident highlights the need for greater data safety measures, especially considering the large amount of data collected from millions of teenage users.

In conclusion, Snapchat’s My AI feature recently went rogue, causing anxiety among users and raising concerns about data safety. While Snap has assured users that the incident was a glitch and not the bot acting independently, the episode underscores the unpredictability of AI. Snapchat’s history of breaching data protection rights only adds to the concerns surrounding My AI. Despite the feature’s popularity, stronger data safety measures are needed to protect user privacy.

Crive - News that matters