With 90% Accuracy, AI Unmasks Anonymous Accounts

Artificial intelligence has successfully unmasked a group of anonymous accounts on social media platforms, according to a recent study conducted by ETH Zurich (the Swiss Federal Institute of Technology) in collaboration with Anthropic, as reported by the American tech site The Verge.

The study utilized a variety of AI models, including undisclosed commercial models and open-source models accessible to any user. All models were equipped with internet access and advanced web-crawling capabilities.

These models identified anonymous accounts by analyzing several factors, such as information embedded in anonymous posts, writing patterns, posting frequency, and other inferable metadata. This data was then cross-referenced with millions of identified social media profiles to find similarities. The AI progressively narrowed down the list until it reached the final account or a cluster of potentially linked accounts.
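The study's actual pipeline has not been published, but the core idea described above, comparing writing patterns in anonymous posts against a pool of known profiles, can be illustrated with a toy stylometric sketch. Everything below (the n-gram representation, the candidate names, the similarity measure) is an assumption for illustration, not the researchers' method:

```python
# Illustrative sketch only: the study's real pipeline is not public.
# Toy stylometric matching: represent each author's text as normalized
# character-3-gram frequencies, then rank candidate profiles by cosine
# similarity to an anonymous post.
from collections import Counter
from math import sqrt


def ngram_profile(text, n=3):
    """Normalized character n-gram frequency vector for a text."""
    text = text.lower()
    counts = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(counts.values())
    return {gram: c / total for gram, c in counts.items()}


def cosine(a, b):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def rank_candidates(anonymous_text, candidates):
    """Sort candidate usernames by stylistic similarity to the anonymous text."""
    anon = ngram_profile(anonymous_text)
    scores = {name: cosine(anon, ngram_profile(text))
              for name, text in candidates.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

A real system would add many more signals (posting times, metadata, topic overlap) and compare against millions of profiles rather than a handful, but the narrowing-down step works on the same principle: score every candidate and keep the closest matches.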

The study found that the AI models correctly identified 68% of the anonymous accounts, with accuracy reaching up to 90% in most cases and varying slightly depending on the model used.

Security and Privacy Concerns
A separate report from The Guardian highlighted the security risks associated with the proliferation of this technology. There are growing fears that governments could use such models to unmask the identities of dissidents and activists. Additionally, hackers could leverage this technology to execute more precise spear-phishing attacks.

AI-enhanced surveillance is a rapidly evolving field that worries computer scientists and privacy experts alike, because it can surface information about users online in ways that were previously impossible.

Simon Lerman, a co-author of the study, told The Verge that the cost of the entire experiment did not exceed $2,000, with the cost of unmasking a single account ranging between $1 and $4.

This low cost alarms Peter Bentley, a computer science professor at University College London (UCL). In an interview with The Guardian, he emphasized that this technology could soon lead to commercially available products, significantly increasing the danger of its misuse.

Furthermore, the margin of error in AI results remains a concern. Relying on this technology could produce "false positives," incorrectly linking users to anonymous posts they did not write.

Call for Action
The study called for measures to protect user data from AI models, including:

Enhancing encryption mechanisms on social media and data-hosting platforms.

Limiting the volume of public data that AI models can scrape and access.