New Malware Uses Prompt Injection to Exploit AI Models in Real-World Settings



Hugging Face, a leading open-source platform widely used for hosting, sharing, and deploying AI models and datasets, has come under scrutiny for critical security flaws that could compromise the AI ecosystem. As a central hub for collaboration in the AI community, Hugging Face offers powerful tools through services like Inference API, Endpoints, and Spaces. However, researchers have discovered vulnerabilities that expose users to threats such as remote code execution (RCE), model manipulation, and lateral movement within shared cloud infrastructure. These findings raise serious concerns about the safety and integrity of AI-as-a-service (AIaaS) platforms and call for improved safeguards across the entire machine learning supply chain.


I. Understanding the Role of Hugging Face in the AI Community

1. An Open Hub for AI Innovation

Hugging Face functions as a collaborative space where developers and researchers can train, store, and deploy machine learning models with ease. It enables users to experiment with pre-trained models and datasets, accelerating AI development while fostering a vibrant open-source ecosystem.

2. A Target-Rich Environment for Cyber Threats

Given its widespread adoption and the sensitive nature of the AI models it hosts, Hugging Face has become a high-value target for cyber attackers. Unauthorized access to hosted models and datasets could lead to intellectual property theft or introduce malicious payloads, undermining trust in shared AI infrastructure.


II. Key Security Vulnerabilities in Hugging Face’s AIaaS Platform

1. Exploiting the Inference Process

Security analysts have found that Hugging Face's handling of AI inference, particularly for models serialized with Python's Pickle format, exposes users to remote code execution. Pickle, while convenient, is inherently insecure: deserializing a Pickle file can execute arbitrary code, so attackers can embed payloads that run as soon as a model is loaded for inference, potentially impacting shared environments.
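To make the risk concrete, the sketch below shows, in generic terms, how a Pickle payload gains code execution: any object can define `__reduce__`, and whatever callable it returns is invoked at deserialization time. This is an illustrative example, not the payload used against Hugging Face.

```python
# Illustrative only: any object can define __reduce__, and the callable it
# returns is invoked during unpickling. Loading the "model" is enough to run
# attacker-controlled code; no inference call is required.
import os
import pickle

class MaliciousModel:
    def __reduce__(self):
        # On deserialization, pickle calls os.system with this command.
        return (os.system, ("echo 'code executed while loading the model'",))

blob = pickle.dumps(MaliciousModel())
pickle.loads(blob)  # prints the message, proving arbitrary code execution
```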

2. Insecure CI/CD Pipelines Enable Supply Chain Attacks

By manipulating the Continuous Integration and Continuous Deployment (CI/CD) process, threat actors can inject harmful code into AI applications. This manipulation may not only affect one model but could propagate throughout the platform, creating a large-scale supply chain threat within AI-as-a-service systems.

3. Model, Code, and Infrastructure All at Risk

AI/ML workflows typically consist of models, supporting code, and the infrastructure that executes them. Each of these layers presents unique vulnerabilities—from adversarial inputs and faulty logic to compromised containers—that attackers can exploit to degrade performance, steal information, or gain control over resources.


III. Research Insights from Wiz on Infrastructure Vulnerabilities

1. Investigating Isolation Gaps

Security researchers from Wiz conducted a deep-dive analysis into three Hugging Face offerings—Inference API, Inference Endpoints, and Spaces. Their goal was to assess whether these environments offered sufficient isolation to prevent cross-user interference. Findings showed that custom models uploaded by users could potentially breach isolation barriers.

2. Malicious Model Upload Demonstration

To test their theory, the researchers uploaded a Pickle-based malicious model through the Inference API. While Hugging Face’s scanners did flag the model, it was still allowed for inference due to its popularity within the community. This loophole permitted the execution of malicious code, challenging the effectiveness of the platform’s safeguards.
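The kind of static check such a scanner performs can be approximated by disassembling the Pickle byte stream and flagging opcodes that can import or invoke arbitrary callables. The snippet below is a rough approximation for illustration, not Hugging Face's actual scanner.

```python
# Rough approximation of a static Pickle scan (not Hugging Face's scanner):
# disassemble the byte stream and flag opcodes that can import or invoke
# arbitrary callables.
import pickletools

SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(data: bytes) -> list:
    """Return a list of suspicious opcodes found in a Pickle byte stream."""
    findings = []
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in SUSPICIOUS:
            findings.append(f"offset {pos}: {opcode.name} {arg!r}")
    return findings

# Scanning the payload from the earlier snippet would surface the
# GLOBAL/REDUCE pair that imports os.system and calls it.
```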

3. Exploiting Kubernetes and Amazon EKS

Once the rogue model was executed, the researchers gained entry into a Kubernetes pod running on Amazon EKS. From there, they exploited insecure configurations to extract a Kubernetes token with node-level privileges. This token enabled access to pod metadata and secrets and, potentially, lateral movement within the broader cluster.
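The token-abuse step relies on standard Kubernetes behavior: a pod's service-account token is mounted on disk and can be presented to the cluster API. The sketch below illustrates that mechanic under the assumption of code already running inside a compromised pod with an over-privileged token; it does not reproduce the specific cluster details from the research, and the namespace is an illustrative placeholder.

```python
# Sketch of the token-abuse step, assuming code is already running inside a
# compromised pod. The token path and API routes are standard Kubernetes;
# whether the request succeeds depends on the role bound to the token, which
# is precisely the misconfiguration described above.
import requests

TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"
CA_CERT = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
API_SERVER = "https://kubernetes.default.svc"  # in-cluster API endpoint

with open(TOKEN_PATH) as fh:
    token = fh.read().strip()

# Attempt to list secrets in a namespace ("default" is a placeholder); an
# over-privileged token turns this into a data-exposure and lateral-movement
# primitive.
resp = requests.get(
    f"{API_SERVER}/api/v1/namespaces/default/secrets",
    headers={"Authorization": f"Bearer {token}"},
    verify=CA_CERT,
)
print(resp.status_code, resp.reason)
```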


IV. Broader Implications and Supply Chain Risk

1. Breaching Shared Registries and Overwriting Images

The vulnerabilities didn't stop at model execution. The researchers also manipulated Hugging Face Spaces by creating a Dockerfile that ran malicious scripts during the build process. This gave them access to a shared internal container registry and, due to weak access controls, the ability to overwrite other users' images, jeopardizing the integrity of their applications.

2. AI Supply Chain Trust Erosion

Such vulnerabilities highlight a pressing concern: the absence of reliable tools to validate the integrity of shared models. In the current setup, downloading and deploying models from platforms like Hugging Face carries an inherent risk. Malicious actors can subtly embed code that avoids detection while granting them system-level access during inference or deployment.

3. Need for Secure Model Verification

The researchers recommend implementing model verification systems and sandboxed environments for inference to limit the fallout of any compromise. Additionally, AI platforms must consider enforcing stricter scanning, access policies, and runtime restrictions to reduce attack surfaces and improve tenant isolation.
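One practical direction consistent with these recommendations is to pin model artifacts to known digests and prefer data-only serialization formats such as safetensors over Pickle. The sketch below assumes a hypothetical allow-list of trusted digests; in practice the digests would come from a signed manifest or an internal registry.

```python
# Sketch of digest pinning plus a data-only format. The allow-list below is a
# hypothetical example; real deployments would source trusted digests from a
# signed manifest or internal registry.
import hashlib
from safetensors.torch import load_file  # pip install safetensors torch

TRUSTED_DIGESTS = {
    "model.safetensors": "<expected sha256 hex digest>",  # placeholder value
}

def verify_and_load(path: str):
    with open(path, "rb") as fh:
        digest = hashlib.sha256(fh.read()).hexdigest()
    if TRUSTED_DIGESTS.get(path) != digest:
        raise ValueError(f"digest mismatch for {path}; refusing to load")
    # safetensors files contain raw tensors only, so loading cannot trigger
    # code execution the way unpickling can.
    return load_file(path)
```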


V. Recommendations for Platform Security Enhancement

1. Harden Inference and Deployment Layers

To safeguard users and their data, AI service providers like Hugging Face must strengthen the way inference environments are isolated. This includes sandboxing containers, restricting internet access for unverified models, and enforcing strict permission boundaries.

2. Secure CI/CD Pipelines from Manipulation

AI platforms should treat CI/CD infrastructure as a critical attack vector. Regular code audits, secure image builds, and validation of build-time inputs must be incorporated to avoid supply chain contamination that may affect thousands of models or apps downstream.

3. Adopt Trusted Model Sharing Practices

Developers and researchers should shift toward using signed models and adopting chain-of-custody tools that log the entire lifecycle of a model—from training and validation to deployment. Trust frameworks can provide peace of mind when sourcing third-party assets in high-stakes environments.
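As a minimal sketch of what signed model sharing could look like, the example below uses Ed25519 keys from the `cryptography` library: the publisher signs the artifact's digest and the consumer verifies it before deployment. Key distribution, revocation, and the chain-of-custody log are out of scope, and the function names are illustrative rather than part of any specific framework.

```python
# Hedged sketch of signing and verifying a model artifact with Ed25519 keys
# from the `cryptography` library. Key distribution and revocation are
# omitted; the function names are illustrative.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_model(path: str, private_key: Ed25519PrivateKey) -> bytes:
    """Publisher side: sign the SHA-256 digest of the model file."""
    with open(path, "rb") as fh:
        digest = hashlib.sha256(fh.read()).digest()
    return private_key.sign(digest)

def verify_model(path: str, signature: bytes, public_key: Ed25519PublicKey) -> bool:
    """Consumer side: verify the signature before deploying the model."""
    with open(path, "rb") as fh:
        digest = hashlib.sha256(fh.read()).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False
```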


Conclusion
The discovery of multiple vulnerabilities within Hugging Face’s AI-as-a-service platform underscores the urgent need for robust security mechanisms in the growing field of artificial intelligence. As shared resources become the norm in AI development, attackers will continue to find creative ways to exploit system weaknesses—from unsafe file formats and poorly isolated containers to weak CI/CD controls. To preserve trust in collaborative AI ecosystems, both platform providers and users must embrace stronger security hygiene, enforce model integrity verification, and architect infrastructures that anticipate and withstand advanced threats. Hugging Face’s case serves as a critical reminder: as AI technology evolves, so too must the security frameworks that support it.
