
Beyond “Prompt and Pray”

O'Reilly on Data

The Evolution of Expectations. For years, the AI world was driven by scaling laws: the empirical observation that larger models and bigger datasets led to proportionally better performance. This fueled a belief that simply making models bigger would solve deeper issues like accuracy, understanding, and reasoning.
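For context, the scaling laws in question are usually written as power laws in parameter count and dataset size. A minimal statement of the form popularized by Kaplan et al. (2020) follows; the constants are empirically fitted and are not given in this excerpt:

```latex
% Test loss as a power law in parameter count N and dataset size D
% (the Kaplan et al. form; N_c, D_c, \alpha_N, \alpha_D are fitted constants).
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}
```

Increasing N or D drives the loss down smoothly, which is the "proportionally better performance" the piece refers to.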


Escaping POC Purgatory: Evaluation-Driven Development for AI Systems

O'Reilly on Data

What breaks your app in production isn't always what you tested for in dev! The way out? We've seen this across dozens of companies, and the teams that break out of this trap all adopt some version of Evaluation-Driven Development (EDD), where testing, monitoring, and evaluation drive every decision from the start.
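As a concrete sketch of what "evaluation drives every decision" can mean, here is a minimal EDD-style harness in which nothing ships unless a versioned eval set passes. Everything here, the `generate_answer` stub, the eval cases, and the 90% threshold, is a hypothetical stand-in, not from the article:

```python
# Minimal evaluation-driven development sketch: gate every change behind a
# versioned eval set. All names and cases here are hypothetical examples.
from dataclasses import dataclass

@dataclass
class EvalCase:
    prompt: str
    must_contain: str  # a simple, mechanically checkable success criterion

# A tiny eval set; in practice these grow out of production failures.
EVAL_SET = [
    EvalCase("What is the capital of France?", "Paris"),
    EvalCase("List one prime number greater than 10.", "11"),
]

def generate_answer(prompt: str) -> str:
    """Stand-in for the real LLM call; replace with your model client."""
    canned = {
        "What is the capital of France?": "The capital of France is Paris.",
        "List one prime number greater than 10.": "11 is prime.",
    }
    return canned.get(prompt, "")

def run_evals(threshold: float = 0.9) -> bool:
    """Return True only if the pass rate meets the release threshold."""
    passed = sum(c.must_contain in generate_answer(c.prompt) for c in EVAL_SET)
    rate = passed / len(EVAL_SET)
    print(f"eval pass rate: {rate:.0%}")
    return rate >= threshold

if not run_evals():
    raise SystemExit("evals failed: do not ship this change")
```

The same harness can run in CI and against sampled production traffic, which is what closes the dev/prod gap the excerpt describes.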



What are model governance and model operations?

O'Reilly on Data

A look at the landscape of tools for building and deploying robust, production-ready machine learning models, spanning model development and model governance. We are also beginning to see researchers share sample code written in popular open source libraries, and some even share pre-trained models. (Source: Ben Lorica.)


Data Observability and Data Quality Testing Certification Series

DataKitchen

We are excited to invite you to a free four-part webinar series that will elevate your understanding and skills in Data Observability and Data Quality Testing. Register for free today and take the first step towards mastering data observability and quality testing!
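The series' own tooling isn't shown in this announcement, but at its simplest, a data quality test is just an assertion evaluated against each new batch of data. A minimal pandas sketch, with a hypothetical table and rules:

```python
# Minimal data quality testing sketch: declarative checks run against each
# incoming batch. The table, columns, and rules are hypothetical examples.
import pandas as pd

def check_orders(df: pd.DataFrame) -> dict[str, bool]:
    """Evaluate a few basic quality rules; report pass/fail per rule."""
    return {
        "no_null_ids": df["order_id"].notna().all(),
        "ids_unique": df["order_id"].is_unique,
        "amount_non_negative": (df["amount"] >= 0).all(),
        "data_is_fresh": df["created_at"].max()
            >= pd.Timestamp.now() - pd.Timedelta(days=1),
    }

batch = pd.DataFrame({
    "order_id": [1, 2, 3],
    "amount": [9.99, 0.0, 42.50],
    "created_at": [pd.Timestamp.now()] * 3,
})
failures = [rule for rule, ok in check_orders(batch).items() if not ok]
if failures:
    raise ValueError(f"data quality checks failed: {failures}")
```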


Why you should care about debugging machine learning models

O'Reilly on Data

Not least is the broadening realization that ML models can fail. And that’s why model debugging, the art and science of understanding and fixing problems in ML models, is so critical to the future of ML. Because all ML models make mistakes, everyone who cares about ML should also care about model debugging. [1]
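One standard model-debugging move, consistent with this framing though not spelled out in the excerpt, is to slice errors by segment instead of trusting a single global metric. A minimal sketch with hypothetical column names:

```python
# Minimal model-debugging sketch: slice absolute errors by a feature to find
# where the model fails. Column names and data are hypothetical examples.
import pandas as pd

def error_by_slice(df: pd.DataFrame, slice_col: str) -> pd.Series:
    """Mean absolute error per value of `slice_col`, worst slices first."""
    errors = (df["y_true"] - df["y_pred"]).abs()
    return errors.groupby(df[slice_col]).mean().sort_values(ascending=False)

scored = pd.DataFrame({
    "y_true": [10, 12, 50, 55, 9, 48],
    "y_pred": [11, 12, 30, 70, 10, 45],
    "region": ["us", "us", "eu", "eu", "us", "eu"],
})
print(error_by_slice(scored, "region"))  # the eu slice surfaces as the weak spot
```

A healthy global average can hide a slice where the model is badly wrong, which is exactly the kind of failure this debugging pass is meant to surface.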


The Quality of Auto-Generated Code

O'Reilly on Data

Kevlin Henney and I were riffing on some ideas about GitHub Copilot, the tool for automatically generating code based on GPT-3's language model, trained on the body of code that's in GitHub. First, we wondered about code quality. We know how to test whether or not code is correct (at least up to a certain limit).
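Testing whether generated code "is correct" means exactly the discipline we already have: gate it behind unit tests. A minimal sketch, where `slug` stands in for a hypothetical Copilot suggestion:

```python
# Minimal sketch: treat auto-generated code as untrusted until it passes
# tests. `slug` is a hypothetical stand-in for a Copilot-suggested function.
import re

def slug(text: str) -> str:
    """Hypothetical generated function: produce a URL-safe slug."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

def test_slug():
    assert slug("Hello, World!") == "hello-world"
    assert slug("  spaces  ") == "spaces"
    assert slug("") == ""  # the kind of edge case generators often miss

test_slug()
print("generated code passed its tests")
```

Correctness tests like these are the easy half; the harder question the authors raise, code quality, has no equally mechanical check.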


Bringing an AI Product to Market

O'Reilly on Data

Product Managers are responsible for the successful development, testing, release, and adoption of a product, and for leading the team that implements those milestones. When a measure becomes a target, it ceases to be a good measure (Goodhart's Law). You must detect when the model has become stale, and retrain it as necessary.
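The excerpt doesn't name a staleness-detection method; one common choice is monitoring input drift with the population stability index (PSI). A minimal sketch, where the 0.2 alert threshold is a widely used rule of thumb rather than anything from the article:

```python
# Minimal model-staleness sketch: compare live feature distributions to the
# training-time baseline with the population stability index (PSI).
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline sample and a live sample of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) in empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 5_000)  # distribution at training time
live_feature = rng.normal(0.6, 1.0, 5_000)   # shifted distribution in prod

if psi(train_feature, live_feature) > 0.2:   # common alert threshold
    print("input drift detected: schedule retraining")
```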
