Secure Python ML: The Major Security Flaws in the ML Lifecycle (and how to avoid them)
- Room: Liffey B
- Start (Dublin time):
- Duration: 30 minutes
Abstract
Every phase of the end-to-end machine learning lifecycle exposes a plethora of security risks that often go unnoticed by machine learning practitioners. In this talk we uncover the most critical (and most common) security risks in the machine learning lifecycle, covering the underlying concepts in depth as well as practical examples of how these risks can be exploited and how they can be resolved and mitigated (analogous to the OWASP Top 10 industry standard).
Throughout the talk we will work through a hands-on example, training, packaging and deploying a model from scratch, and outlining the key risk areas at each step together with the tools and practices that can be used to mitigate those risks. By the end of the talk, machine learning practitioners will have a robust intuition for why security best practices matter throughout the machine learning lifecycle, together with tools and frameworks that help mitigate undesirable outcomes caused by security flaws.
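To give a concrete flavor of the kind of risk the hands-on example deals with, here is a minimal sketch (purely illustrative, not taken from the talk materials; the class name PoisonedModel is a hypothetical placeholder) of why deserializing an untrusted pickled model artifact amounts to arbitrary code execution:

```python
import os
import pickle


class PoisonedModel:
    """Hypothetical stand-in for a tampered 'pre-trained model' artifact."""

    def __reduce__(self):
        # pickle calls __reduce__ to learn how to rebuild the object on load,
        # so an attacker can return any callable (here os.system) plus arguments.
        return (os.system, ("echo 'code executed while the model was being loaded'",))


# The attacker distributes this blob as a ready-to-use model file.
artifact = pickle.dumps(PoisonedModel())

# The victim only loads it -- the payload runs before any inference happens.
pickle.loads(artifact)
```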
Talk · Security
Description
Overview
Operating and maintaining large-scale production machine learning systems has uncovered new challenges that require fundamentally different approaches from those of traditional software. Security in MLOps has seen a rise in attention as machine learning infrastructure expands into further critical use cases across industry.
In this talk we introduce the conceptual and practical sides of MLSecOps that data science practitioners can adopt or advocate for. We also build intuition for the key security challenges that arise in production machine learning systems, as well as the best practices and frameworks that can be adopted to mitigate security risks in ML models, ML pipelines and ML services.
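One example of the kind of practice we have in mind (illustrative only; the exact tooling covered in the talk may differ) is failing an ML pipeline when its Python dependencies carry known vulnerabilities, for instance by wrapping a scanner such as pip-audit:

```python
import subprocess

# Assumes the pip-audit package (https://pypi.org/project/pip-audit/) is installed;
# run with no arguments it audits the current environment against known advisories.
result = subprocess.run(["pip-audit"], capture_output=True, text=True)
print(result.stdout)

# pip-audit exits with a non-zero status when vulnerable packages are found,
# which makes it easy to gate a CI/CD or model-training pipeline on the result.
if result.returncode != 0:
    raise SystemExit("Vulnerable dependencies detected -- stop the pipeline here.")
```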
We will walk through a practical example of securing a machine learning model, showcasing the security risks and the best practices that can be adopted during the feature engineering, model training, model deployment and model monitoring stages of the machine learning lifecycle.
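As a flavor of the deployment-stage mitigations, the sketch below (an illustrative pattern with hypothetical names such as MODEL_PATH and verify_model_artifact, not the talk's exact code) checks a model artifact's digest against the value published by the training pipeline before the serving code ever deserializes it:

```python
import hashlib
from pathlib import Path

# Hypothetical values: in practice the training pipeline would publish the digest
# alongside the artifact, e.g. in a model registry entry or signed metadata file.
MODEL_PATH = Path("model.joblib")
EXPECTED_SHA256 = "<digest published by the training pipeline>"


def verify_model_artifact(path: Path, expected_sha256: str) -> None:
    """Refuse to proceed if the artifact's SHA-256 digest does not match."""
    actual = hashlib.sha256(path.read_bytes()).hexdigest()
    if actual != expected_sha256:
        raise RuntimeError(f"Model artifact integrity check failed: got {actual}")


verify_model_artifact(MODEL_PATH, EXPECTED_SHA256)
# Only after this check passes would the serving code go on to deserialize the model.
```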
Benefits to the ecosystem
This talk will give practitioners the intuition and tools to secure production machine learning systems, and will further the discussion around best practices for bringing SecOps into MLOps.
It also offers best practices for an area of machine learning operations that is of paramount importance in production.