
Experiment or Die. Five Reasons And Awesome Testing Ideas.

Occam's Razor

There is a tendency to think experimentation and testing are optional. Just don't fall for their bashing of all other vendors, or their silly, false claims of "superiority" in running 19 billion combinations of tests, or the bonus feature of helping you into your underwear each morning. 4 Big Bets, Low Risks, Happy Customers.


What is Model Risk and Why Does it Matter?

DataRobot Blog

With the big data revolution of recent years, predictive models are being rapidly integrated into more and more business processes. This provides a great amount of benefit, but it also exposes institutions to greater risk and consequent exposure to operational losses.


Bridging the Gap: How ‘Data in Place’ and ‘Data in Use’ Define Complete Data Observability

DataKitchen

In the context of Data in Place, validating data quality automatically with Business Domain Tests is imperative for ensuring the trustworthiness of your data assets. Running these automated tests as part of your DataOps and Data Observability strategy allows for early detection of discrepancies or errors. What is Data in Use?
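The excerpt above describes validating data quality automatically with business-domain tests. A minimal sketch of that idea in Python with pandas, using a hypothetical orders table and rules (`order_id`, `total`, and the checks themselves are illustrative assumptions, not DataKitchen's actual implementation):

```python
import pandas as pd

def check_order_totals(df: pd.DataFrame) -> list[str]:
    """Run simple business-domain tests and return a list of violations."""
    errors = []
    # Domain rule 1: every order total must be positive.
    if (df["total"] <= 0).any():
        errors.append("non-positive order totals found")
    # Domain rule 2: order ids must be unique.
    if df["order_id"].duplicated().any():
        errors.append("duplicate order ids found")
    return errors

# Example data with two deliberate violations.
orders = pd.DataFrame({"order_id": [1, 2, 2], "total": [9.99, -5.0, 12.5]})
print(check_order_totals(orders))
```

In a DataOps pipeline, tests like these would run on every load so discrepancies surface early, before downstream consumers see the data.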


Automating the Automators: Shift Change in the Robot Factory

O'Reilly on Data

Building Models. A common task for a data scientist is to build a predictive model. You'll try a few algorithms and their respective tuning parameters, maybe even break out TensorFlow to build a custom neural net along the way, and the winning model will be the one that heads to production.
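The try-several-algorithms-and-pick-a-winner workflow above can be sketched with scikit-learn; the synthetic dataset and the two candidate models here are illustrative choices, not anything from the article:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic classification data standing in for a real business problem.
X, y = make_classification(n_samples=500, random_state=0)

# Candidate algorithms with their own (default-ish) tuning parameters.
candidates = {
    "logistic": LogisticRegression(max_iter=1000),
    "forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

# Score each candidate by cross-validated accuracy; the best heads to production.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
winner = max(scores, key=scores.get)
print(winner, round(scores[winner], 3))
```

In practice the candidate list would be longer and the scoring metric chosen to match the business objective, but the selection loop looks much the same.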


A Guide to the Six Types of Data Quality Dashboards

DataKitchen

The DAMA Data Quality Dimension dashboards are crap. Without contextual specificity, these dimensions risk becoming check-the-box exercises rather than actionable frameworks that help organizations identify and address the root causes of data quality issues.


How to Set AI Goals

O'Reilly on Data

Technical competence results in reduced risk and uncertainty. AI initiatives may also require significant considerations for governance, compliance, ethics, cost, and risk. There’s a lot of overlap between these factors. Defining them precisely isn’t as important as the fact that you need all three.


Why you should care about debugging machine learning models

O'Reilly on Data

Because all ML models make mistakes, everyone who cares about ML should also care about model debugging. [1] This includes C-suite executives, front-line data scientists, and risk, legal, and compliance personnel. Model debugging is an emergent discipline focused on finding and fixing problems in ML systems.
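One common model-debugging check is comparing error rates across data slices, since a model that looks fine in aggregate can fail badly on a subgroup. A minimal sketch with NumPy (the function name and the toy labels are illustrative assumptions, not a specific tool's API):

```python
import numpy as np

def error_rate_by_slice(y_true, y_pred, groups):
    """Compute the error rate within each data slice (group)."""
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        # Fraction of misclassified examples in this slice.
        rates[g] = float(np.mean(y_true[mask] != y_pred[mask]))
    return rates

# Toy predictions: slice "b" is noticeably worse than slice "a".
y_true = np.array([1, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 0, 0, 0, 1])
groups = np.array(["a", "a", "a", "b", "b", "b"])
print(error_rate_by_slice(y_true, y_pred, groups))
```

A large gap between slices is exactly the kind of problem model debugging aims to surface, whether the slices are customer segments, regions, or protected groups.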