Study finds TikTok's algorithm actively censors criticism of the Chinese government
Research from Rutgers University suggests that heavy TikTok users tend to hold more positive views of China, pointing to potential indoctrination.
Researchers at Rutgers University and the Network Contagion Research Institute (NCRI) found that TikTok’s algorithms actively downplay content critical of the Chinese Communist Party (CCP) while promoting pro-China propaganda and irrelevant distractions.
According to the study, the platform relies on content creators linked to the CCP to suppress discussion of sensitive topics such as ethnic genocide and human rights abuses.
The study also found, based on a psychological survey, that TikTok users, especially heavy users, showed a measurable shift in their attitudes toward China, which the researchers interpret as evidence of successful indoctrination.
According to the report, users may unintentionally absorb biased narratives on TikTok due to targeted information and engineered environments that limit free speech, leading to distorted perceptions of global issues.
A spokesperson for TikTok dismissed the findings, calling the study a flawed experiment designed to reach a predetermined conclusion.
The study analyzed over 3,400 videos from search results using keywords related to CCP abuses and classified them as pro-China, anti-China, neutral, or irrelevant.
Results showed that TikTok carried significantly less anti-China content than YouTube and Instagram, with a skew toward pro-China narratives, especially on controversial topics like Xinjiang and Tiananmen. Furthermore, TikTok's recommendation algorithm favored pro-China content, suggesting manipulation and bias unique to the platform.
Survey
The researchers also surveyed 1,214 American TikTok users to gauge how their perceptions of China varied with their app usage.
Heavy TikTok users held notably more positive views of the CCP's human rights record, which the researchers say points to psychological manipulation in line with the CCP's objectives.
The study recommended creating a "Civic Trust" to identify and address platforms that manipulate user perceptions, especially where they threaten democratic values.