
Data Observability and Data Quality Testing Certification Series

DataKitchen

We are excited to invite you to a free four-part webinar series that will elevate your understanding and skills in Data Observability and Data Quality Testing. Register for free today and take the first step toward mastering data observability and quality testing!


What are model governance and model operations?

O'Reilly on Data

A look at the landscape of tools for building and deploying robust, production-ready machine learning models. We are also beginning to see researchers share sample code written in popular open source libraries, and some even share pre-trained models. Model development. Model governance. Source: Ben Lorica.


Bringing an AI Product to Market

O'Reilly on Data

Product Managers are responsible for the successful development, testing, release, and adoption of a product, and for leading the team that implements those milestones. When a measure becomes a target, it ceases to be a good measure (Goodhart's Law). You must detect when the model has become stale, and retrain it as necessary.
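The staleness check described above can be sketched in a few lines. This is a hypothetical illustration, not from the article: the function name, accuracy inputs, and tolerance threshold are all assumptions.

```python
# Hypothetical sketch: flag a model as stale when its recent accuracy
# has degraded beyond a tolerance relative to the accuracy measured at
# deployment. Names and thresholds are illustrative.

def is_stale(baseline_accuracy: float,
             recent_accuracies: list[float],
             tolerance: float = 0.05) -> bool:
    """Return True when mean recent accuracy has dropped more than
    `tolerance` below the deployment baseline."""
    if not recent_accuracies:
        return False  # no evidence of degradation yet
    recent_mean = sum(recent_accuracies) / len(recent_accuracies)
    return (baseline_accuracy - recent_mean) > tolerance

# Deployed at 0.92 accuracy; recent window averages 0.84.
print(is_stale(0.92, [0.85, 0.83, 0.84]))  # degradation 0.08 > 0.05
```

In practice the trigger would feed a retraining pipeline rather than a print statement, and drift in the input distribution is often monitored alongside accuracy.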


The Quality of Auto-Generated Code

O'Reilly on Data

Kevlin Henney and I were riffing on some ideas about GitHub Copilot , the tool for automatically generating code based on GPT-3’s language model, trained on the body of code that’s in GitHub. We know how to test whether or not code is correct (at least up to a certain limit). First, we wondered about code quality.
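The "correct up to a certain limit" caveat is worth making concrete: tests only verify the cases we enumerate. A hypothetical sketch (the function and its tests are illustrative, not from the article):

```python
# Hypothetical sketch: a generated-looking function that passes every
# test we wrote, while its behavior on inputs we did not enumerate
# (punctuation, unicode) remains unverified.

def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Data  Quality ") == "data-quality"
    # Punctuation handling is untested -- correctness holds only
    # up to the limit of the cases listed here.

test_slugify()
print("all enumerated cases pass")
```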


Preliminary Thoughts on the White House Executive Order on AI

O'Reilly on Data

While I am heartened to hear that the Executive Order on AI uses the Defense Production Act to compel disclosure of various data from the development of large AI models, these disclosures do not go far enough. These include: What data sources the model is trained on. Operational Metrics. Energy usage and other environmental impacts.


The DataOps Vendor Landscape, 2021

DataKitchen

Testing and Data Observability. DataOps needs a directed graph-based workflow that contains all the data access, integration, model and visualization steps in the data analytic production process. It orchestrates complex pipelines, toolchains, and tests across teams, locations, and data centers.
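A directed-graph workflow of this kind can be sketched with the standard library: steps are nodes, dependencies are edges, and a topological sort yields a valid execution order. The step names below are illustrative assumptions, not DataKitchen's actual pipeline.

```python
# Hypothetical sketch of a DataOps workflow as a dependency graph,
# using Python's stdlib topological sorter (3.9+). Each key depends
# on the steps in its set; step names are illustrative.
from graphlib import TopologicalSorter

pipeline = {
    "integrate": {"access"},            # integration needs data access
    "model": {"integrate"},
    "test_data": {"integrate"},         # data tests run alongside modeling
    "visualize": {"model", "test_data"},
}

order = list(TopologicalSorter(pipeline).static_order())
print(order)  # "access" runs first, "visualize" last
```

Real orchestrators (Airflow, Dagster, DataKitchen's own tooling) layer scheduling, retries, and observability hooks on top of this same DAG abstraction.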


Why you should care about debugging machine learning models

O'Reilly on Data

Not least is the broadening realization that ML models can fail. And that’s why model debugging, the art and science of understanding and fixing problems in ML models, is so critical to the future of ML. Because all ML models make mistakes, everyone who cares about ML should also care about model debugging. [1]
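One basic model-debugging move implied above is slicing the evaluation set to find where error concentrates. A minimal sketch, with made-up segments and labels (the data and field names are assumptions for illustration):

```python
# Hypothetical sketch: per-segment error analysis. Error that looks
# acceptable in aggregate (3/6 here would be 50%) can hide a segment
# where the model fails most of the time.
from collections import defaultdict

rows = [  # (segment, prediction_was_correct)
    ("mobile", True), ("mobile", False), ("mobile", False),
    ("desktop", True), ("desktop", True), ("desktop", False),
]

counts = defaultdict(lambda: [0, 0])  # segment -> [wrong, total]
for segment, correct in rows:
    counts[segment][1] += 1
    if not correct:
        counts[segment][0] += 1

error_rate = {seg: wrong / total for seg, (wrong, total) in counts.items()}
print({seg: round(rate, 2) for seg, rate in error_rate.items()})
```

Segments with outsized error rates are the starting point for deeper debugging: data problems, label noise, or genuine model weaknesses.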