Hugging Face, a popular platform for sharing AI models, has recently come under attack: hackers exploit the platform’s large library to slip malicious code into downloadable models. As AI innovation and testing become critical to the field, the issue underscores that AI repositories need more reliable security measures. Multiple security research groups, including Protect AI, HiddenLayer, and Wiz, have investigated Hugging Face and found thousands of malicious models hosted there, models that leak sensitive data and pose real threats to users.

AI’s Trojan Horse: The New Wave of Cyber Threats
Much like the Trojan horses of the early days of computing, where malicious instructions hid behind authentic code, these tampered models are a threat to any system that loads them. They embed additional code that steals private data, cloud service tokens, and credentials for AI payment frameworks. As Protect AI’s CEO Ian Swanson pointed out, disguised code has evolved from simple software viruses to hijacked AI models.
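The core technical risk is that some common model formats, notably legacy pickle-based checkpoints, can execute arbitrary code the moment they are loaded. A minimal sketch (not taken from any real attack) of why unpickling an untrusted "model file" is dangerous:

```python
import pickle

class EvilPayload:
    """Stands in for a booby-trapped model object."""
    def __reduce__(self):
        # On unpickling, Python calls eval with attacker-chosen input.
        # A real attack would exfiltrate credentials or run a shell
        # command instead of returning a harmless string.
        return (eval, ("'code executed during load'",))

# Bytes an attacker might upload as a "pretrained model":
malicious_bytes = pickle.dumps(EvilPayload())

# Simply loading the file runs the payload -- no call to the model needed:
result = pickle.loads(malicious_bytes)
print(result)  # proves code ran at load time
```

This is why merely downloading and opening a model from an unverified account can compromise a machine, before any inference is ever run.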
Having found more than 3,000 malicious files on Hugging Face, Protect AI has been actively hunting for those that slip past the standard security scans many AI developers rely on. Such unscrupulous models can appear to be published by reputable firms, and users download them without a second thought.
Fake Profiles and Phishing Techniques
To hack or con fellow Hugging Face users, scammers create deceptive profiles impersonating reputable technology companies such as Meta, Facebook, Visa, and SpaceX. The goal is to trick users into downloading malicious models that appear legitimate because of the brands they impersonate. A recent case involved a model claiming to come from the genomics startup 23andMe: the malicious code inside it was designed to hunt for AWS credentials, threatening organisations’ cloud resources, and the model was downloaded thousands of times before it was exposed as dangerous.
A Heightened Focus on Security
As these threats went global, cybersecurity agencies, including CISA in the United States and its counterparts in Canada and the United Kingdom, issued a joint advisory. The agencies urged companies to exercise caution when selecting AI models and to limit their use in high-risk systems unless they have been proven harmless. The inclusion of rogue code in downloadable models has also raised the question of whether AI hubs should adopt the kind of scanning practices that traditional software repositories have evolved over time.
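One direction that scanning could take can be illustrated with a hedged sketch: statically inspecting pickle bytecode for imports of dangerous modules before ever loading a file. The function name and module list below are illustrative, not from any real scanner, and the exact flagged string depends on the platform:

```python
import os
import pickle
import pickletools

# Modules whose presence in a pickle stream usually signals code execution.
RISKY_MODULES = {"os", "posix", "nt", "subprocess", "builtins", "sys"}

def risky_imports(data: bytes) -> list:
    """Statically walk pickle opcodes and report imports of dangerous
    modules -- without ever executing the payload (unlike pickle.loads)."""
    hits = []
    for op, arg, _pos in pickletools.genops(data):
        # Protocol <= 3 encodes imports as a GLOBAL opcode whose argument
        # is a "module name" string, e.g. "posix system".
        if op.name == "GLOBAL" and arg.split(" ")[0] in RISKY_MODULES:
            hits.append(arg)
    return hits

class BoobyTrappedModel:
    # Stands in for a tampered "model": unpickling it would run a shell
    # command of the attacker's choosing.
    def __reduce__(self):
        return (os.system, ("echo pwned",))

payload = pickle.dumps(BoobyTrappedModel(), protocol=0)
print(risky_imports(payload))  # non-empty: the os.system import is flagged
```

A benign pickle of plain weights contains no such imports, so the same scan returns an empty list, which is the property a repository-side scanner would exploit.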
Security has been an important focus for Hugging Face, which began verifying the profiles of large technology companies in 2022 and has ramped up the frequency of scans for unsafe code. Hugging Face CTO Julien Chaumond says he is optimistic that these partnerships and security integrations will strengthen user confidence and make AI model sharing safer.
Scaling Challenges for Hugging Face
Since its 2018 pivot from a chatbot application to an AI platform, Hugging Face has proved successful, recently closing a $235M funding round at a $4.5B valuation. But the platform’s growth has also made it a very attractive target for hackers. With AI moving from research curiosity to everyday utility across industries, Hugging Face, as the ‘GitHub of AI’, faces its own specific security considerations.
With $400 million in funding to date and growing, Hugging Face still needed to strengthen its security measures, according to Chaumond. As AI reaches ever more sectors of society, he said, it has become a hot target for bad actors, requiring a more nuanced approach to keeping data open for researchers while maintaining security.

Conclusion: Securing the Future of Open AI Platforms
As malicious models, brand impersonation, and credential theft continue to surface, open AI platforms like Hugging Face will need to pair their rapid growth with equally rapid investment in security: verified publishers, aggressive scanning of uploaded models, and caution from users before running downloaded code. The platform’s response so far, from profile verification to more frequent scanning, suggests it recognises the stakes. Whether the ‘GitHub of AI’ can stay ahead of attackers will shape how much trust the industry places in open model sharing.
Follow Zntus for more content!