
Synergize AI and Domain Expertise - Explainability Check with Python

Room: Liffey Hall 1
Duration: 30 minutes

Abstract

The talk focuses on establishing guidelines for Explainable AI by diving into its fundamental concepts and the checkpoints to clear before accepting an AI model's decisions. We go through explainers, their types, and algorithms, with a simple implementation in Python, to strengthen our understanding of "WHY?" the model predicts a certain value and "HOW?" to validate it against the experiential learning of experts to bridge potential gaps.

Talk · PyData: Ethics in AI

Description

We will go through the Why?, How?, and What? of model explainability to build consistent, robust, and trustworthy models. We explore the inability of complex models to deliver meaningful insights, cause-effect relationships, and interconnected effects within data, and how explainers can empower decision makers with more than just predictions. We evaluate an intuitive game-theory-based algorithm, SHAP, with a working implementation in Python. We will also pinpoint the intersections with domain experts that are necessary, using two practical industry applications to facilitate further exploration.
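As a flavour of the kind of SHAP workflow the talk refers to, here is a minimal sketch (not the speaker's actual code): the dataset, model, and sample sizes are illustrative assumptions, and only standard `shap` and scikit-learn calls are used.

```python
# Minimal SHAP sketch: fit a "black-box" tree ensemble, then explain
# its predictions with Shapley values (illustrative example only).
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Load a tabular regression dataset and fit the model.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test.iloc[:200])

# Global view: which features drive the model's predictions, and in which direction.
shap.summary_plot(shap_values, X_test.iloc[:200])
```

A plot like this gives decision makers the "WHY?" behind each prediction, which can then be checked against domain expertise, as the talk discusses.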


The speaker

Pranjal Biyani

Pranjal is an experienced AI scientist building the first AI-powered platform to accelerate R&D for materials science across the globe. He loves opening black-box models to reveal insightful AI secrets that help decision makers adapt to ever-changing industry needs. He also loves to teach and mentor passionate individuals aspiring to be part of the data science community, all with his favourite language, Python!


