The State of Malicious ML Model Attacks
The adoption of ML models is rising across all industries, and with it comes a new and emerging threat: malicious ML models that can compromise your systems by executing code the moment they are loaded. In this session, you will learn how these attacks work and how to protect yourself against them. You will see the results of a large-scale scan of ML models from the Hugging Face repository and the impact of the malicious models that were found. You will also learn ML-Ops best practices for applying security controls, scanning, and remediation actions to safeguard your ML models and systems. This session is essential for anyone who works with ML models or maintains ML-Ops tooling, as this threat poses a serious risk to any modern organisation.
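To make the load-time code-execution risk concrete, here is a minimal, harmless sketch (not taken from the session material) of the mechanism behind many of these attacks: several common model formats, such as pickle-based PyTorch checkpoints, are deserialized with Python's `pickle` module, whose `__reduce__` hook lets a crafted object run an arbitrary callable during deserialization. The class name and payload below are illustrative; a real attacker would substitute something like `os.system`.

```python
import pickle

class MaliciousPayload:
    """Illustrative object whose mere deserialization executes code."""

    def __reduce__(self):
        # pickle serializes this as "call eval('6*7') at load time".
        # An attacker would use a destructive callable instead of eval.
        return (eval, ("6*7",))

# The "model file" an attacker could upload to a model hub:
blob = pickle.dumps(MaliciousPayload())

# Simply loading the file runs the embedded code -- no method call needed.
result = pickle.loads(blob)
print(result)  # → 42
```

This is why scanning model artifacts before loading them, or preferring execution-free formats such as safetensors, matters in an ML-Ops pipeline.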