MLOps Platform Data Scientist Use Case: Model Catalog, Versioning | "The Datatron" - Part III of VII



In this "Data Scientist" Use Case, you'll learn how a Data Scientist leverages an MLOps & AI Data Governance platform to monitor models for bias, drift, & performance anomalies, and how the model "Health" Dashboard allows the Data Scientist to catch issues before they become problems, and quickly drill down to kick-start the root cause analysis.

Starting in the "Model Catalog," all models are registered with metadata and proper versioning. Because the MLOps platform is model-, library-, and stack-agnostic, you can register any model built on any stack (e.g., PyTorch, XGBoost, Jupyter Notebook, TensorFlow, scikit-learn, Seldon Core, H2O.ai, raw Python).

Within the Model Catalog, deep metadata is captured, including who worked on the model, when it was registered, the model's input/output (and optional feedback) contracts, and which training dataset was used. While the Data Scientist may not be as concerned with audit reports as AI executives, the "Reference Dataset" (i.e., the training data) is critical for satisfying compliance requirements when the Governance, Risk, & Compliance (GRC) team comes a-knocking.
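For readers who prefer to see it in code, here is a minimal, self-contained sketch of the kind of metadata a catalog entry captures at registration time. The class and field names are hypothetical illustrations, not Datatron's actual schema or API:

```python
# A minimal sketch of the metadata a model catalog entry might capture.
# The class and field names here are hypothetical, not Datatron's schema or API.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class CatalogEntry:
    name: str                          # model name as shown in the catalog
    version: str                       # version label for proper versioning
    framework: str                     # e.g. "pytorch", "xgboost", "scikit-learn"
    owner: str                         # who worked on / registered the model
    input_contract: dict               # expected input feature names and types
    output_contract: dict              # prediction fields and types
    feedback_contract: Optional[dict]  # optional ground-truth feedback schema
    reference_dataset: str             # pointer to the training data (for GRC audits)
    registered_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

entry = CatalogEntry(
    name="churn-predictor",
    version="2.1.0",
    framework="xgboost",
    owner="data.scientist@example.com",
    input_contract={"tenure_months": "int", "monthly_spend": "float"},
    output_contract={"churn_probability": "float"},
    feedback_contract={"churned": "bool"},
    reference_dataset="s3://training-data/churn/2024-q1.parquet",
)
print(entry.name, entry.version, entry.registered_at.isoformat())
```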

With hundreds to thousands of models, management becomes cumbersome. Early AI programs often rely on manual tools (e.g., Google Sheets) but eventually realize that approach is unsustainable. In the video, you'll see how you can leverage tags and search to quickly find a specific model, even when working with thousands of models. Just another benefit of an MLOps platform.
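As a rough illustration of why tags and search matter at that scale, the toy snippet below (again, hypothetical code, not Datatron's API) filters a catalog down to the handful of entries you actually care about:

```python
# A toy, in-memory illustration (not Datatron's API) of narrowing a large
# model catalog with tags plus a name search.
models = [
    {"name": "churn-predictor", "version": "2.1.0", "tags": {"marketing", "xgboost", "production"}},
    {"name": "fraud-detector",  "version": "5.0.3", "tags": {"risk", "pytorch", "production"}},
    {"name": "churn-predictor", "version": "1.9.2", "tags": {"marketing", "xgboost", "staging"}},
    # ...imagine thousands more entries here
]

def find_models(catalog, required_tags=None, name_contains=""):
    """Return entries whose tags include all required_tags and whose name matches."""
    required_tags = set(required_tags or [])
    return [
        m for m in catalog
        if required_tags <= m["tags"] and name_contains in m["name"]
    ]

# Pull up only the production churn models, no matter how large the catalog is.
for m in find_models(models, required_tags={"production"}, name_contains="churn"):
    print(m["name"], m["version"], sorted(m["tags"]))
```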

In this video, Datatron VP of Operations & Customer Success, Victor Thu, channels his inner Data Scientist to demonstrate that workflow within an MLOps and AI Governance platform. This is Part III in a VII-part series introducing "The Datatron" MLOps & AI Governance platform and how it fits into the AI/ML ecosystem.

While this series does focus on "The Datatron," it is intended to educate AI/ML practitioners, including those who use Open Source, on MLOps processes, pitfalls, features, and best practices. Datatron can be integrated into or alongside Open Source solutions via API, filling in the gaps with its keystone features (e.g., Model Catalog, Monitoring, Health Dashboard).

Interested in learning more about MLOps "best practices"? Book a demo of Datatron at:

https://datatron.com/book-a-demo/

About Datatron:

Datatron is a FLEXIBLE, enterprise-grade, & production-proven MLOps and AI Governance platform. With mission-critical AI/ML features such as the Model Catalog, Model Deployment, Monitoring/Alerts for bias, drift, & anomalies, and the AI/ML Model "Health" Dashboard, enterprises can accelerate model deployment velocity from months to minutes, comply with regulatory audit requirements, and avoid unnecessary headcount bloat to support AI, since a single ML Engineer can manage hundreds to thousands of models.

***

Interested in downloading technical whitepapers on:

* Whitepaper - Unique Challenges of Machine Learning Models
* Whitepaper - Model Monitoring
* Whitepaper - Model Auditability & Governance
* Whitepaper - Model Deployment
* Whitepaper - Model Governance & Management
* Whitepaper - Life Cycle of Machine Learning Models

Visit: https://datatron.com/whitepapers/