In July 2022, TikTok unveiled a new tool, Filter Video Keywords, aimed at giving users more control over the content they see. The tool, designed to let users block videos containing specific words or hashtags they preferred not to see, was hailed by Cormac Keenan, TikTok's Head of Trust and Safety, as a significant stride toward a safer and more personalized user experience. Despite promises that the feature would be universally accessible within weeks, more than a year later, hundreds of users report either that the setting is absent from their apps or that it fails to function as promised.
The issue came to light last month when I posted a tutorial video on my personal TikTok account about using the Filter Video Keywords setting to block content, and numerous users responded that the feature was either missing from their apps or malfunctioning. As comments flooded in, it became evident that the much-touted safety feature was falling short of its promise, raising questions about TikTok's commitment to user safety and about potential violations of laws that prohibit unfair and deceptive business practices in the social media industry.
TikTok’s Filter Video Keywords Tool: A Safety Feature With Unresolved Issues
The Filter Video Keywords tool, launched in July 2022, was designed to enhance user experience by letting users filter out videos that contained certain words or hashtags. Cormac Keenan, TikTok's Head of Trust and Safety, hailed the feature as a critical safety measure that would "help viewers customize their viewing preferences and continue to have a safe and entertaining experience." Despite the promising introduction, hundreds of TikTok users have since complained that they either lack access to the feature or find it ineffective.
Promising Tool With Unfulfilled Promises
The Filter Video Keywords feature was supposed to be universally accessible within weeks of its announcement. However, more than a year later, numerous users reported that they couldn’t find the feature in their settings. Even those who had access to it complained that it didn’t function as promised.
After a query from Gizmodo, TikTok reportedly intervened to rectify the issue, yet user complaints persisted. Some users speculated that TikTok had intentionally sabotaged the feature, a claim that likely reflects their frustration more than the reality of the situation.
User Complaints and FTC Involvement
Several users claimed that the Filter Video Keywords tool did not block videos with the hashtags they had filtered out. They reported seeing content with hashtags such as #tiktokshop and #taylorswift, despite having blocked them. One user even claimed to have blocked over a hundred hashtags but still saw videos featuring them.
The Federal Trade Commission (FTC) has previously stated that misleading users about privacy and safety settings could be considered a violation of laws prohibiting unfair and deceptive business practices. In light of this, TikTok’s unmet promises about the Filter Video Keywords tool could potentially land them in hot water with the FTC.
In response to the complaints, a TikTok spokesperson attributed the issues to a glitch that affected a small number of users. They assured users that a fix was issued and encouraged everyone to update their app to the latest version.
However, this explanation does not address the concerns of users who had access to the feature but found it ineffective. TikTok’s guidelines state that users cannot block certain hashtags, including #ad and #sponsored. The spokesperson confirmed that users could block #tiktokshop, but failed to explain why users still saw videos containing hashtags they had blocked.
The introduction of the Filter Video Keywords tool was seen as a significant step toward a safer, more customizable TikTok experience. However, the tool's unfulfilled promises and unresolved issues have left many users disappointed. It is essential for TikTok to address these concerns promptly to maintain its users' trust and avoid potential regulatory scrutiny. The company's handling of the situation will be a test of its commitment to user safety and privacy.