DevOps 2024

The State of Malicious ML Models Attacks

06 Mar 2024
DevSecOps Theatre

The adoption of ML models is rising across all industries. As a side effect, malicious ML models have emerged as a new threat: they can compromise your systems by executing code when they are loaded. In this session, you will learn how these attacks work and how to protect yourself from them. You will see the results of a large-scale scan of ML models from the Hugging Face repository and the impact of the malicious models that were found. You will also learn the ML-Ops best practices for applying security controls, scanning, and actions to safeguard your ML models and systems. This session is essential for anyone who works with ML models or maintains tools for ML-Ops, as this threat poses a serious risk to any modern organisation.
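
For readers unfamiliar with the mechanism, the sketch below illustrates how "code execution on load" typically happens: many ML serialisation formats are built on Python's pickle, which allows an object to run arbitrary code during deserialisation. This is a generic illustration under that assumption, not material from the session; the class name and the echoed command are hypothetical.

    import pickle

    # Minimal illustration of why loading an untrusted model file can run code:
    # pickle lets an object dictate what gets called when it is deserialised,
    # via __reduce__. Formats built on pickle (e.g. default torch.save output,
    # joblib dumps) inherit this behaviour.

    class NotAModel:
        def __reduce__(self):
            import os
            # On unpickling, the loader calls os.system with this command.
            # A real malicious model would hide a payload like this inside
            # an otherwise functional model file.
            return (os.system, ("echo 'payload ran when the model was loaded'",))

    payload_bytes = pickle.dumps(NotAModel())

    # Anywhere this blob is treated as a model and loaded, the command runs:
    pickle.loads(payload_bytes)  # never unpickle untrusted model files

Common mitigations in this space include scanning model artefacts before loading them and preferring serialisation formats that cannot execute code on load (for example safetensors), or restricting what a loader will deserialise (for example torch.load with weights_only=True).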

Speakers
Carmine Acanfora, Solutions Architect - JFrog
