Groundbreaking Study: Trusting Your Gut 78.45% More Effective Than Data-Driven Decisions, Say Top Execs April 1, 2025, CAMBRIDGE, MA: In a shocking reversal of modern business orthodoxy, a joint study from Harvard Business School and Gartner has concluded that trusting your gut and doing what sounds good are 78.45% more effective than traditional data analytics, integration, or any attempt to be data-driven.
Announcing Actionable, Automated, & Agile Data Quality Scorecards Are you ready to unlock the power of influence to transform your organization's data quality and become the hero your data deserves? Watch the previously recorded webinar unveiling our latest innovation: Data Quality Scorecards, powered by our AI-driven DataOps Data Quality TestGen software.
Unlocking Data Team Success: Are You Process-Centric or Data-Centric? Over the years of working with data analytics teams at companies large and small, we have been fortunate enough to observe hundreds of them. We want to share our observations about data teams, how they work and think, and their challenges. We’ve identified two distinct types of data teams: process-centric and data-centric.
How Data Quality Leaders Can Gain Influence And Avoid The Tragedy of the Commons Data quality has long been essential for organizations striving for data-driven decision-making. Despite the best efforts of data teams, poor data quality remains a persistent challenge, leading to distrust in analytics, inefficiencies in operations, and costly errors. Many organizations struggle with incomplete, inconsistent, or outdated data, making it difficult to derive reliable insights.
Announcing DataOps Data Quality TestGen 3.0: Open-Source, Generative Data Quality Software, Now With Actionable, Automatic Data Quality Dashboards Imagine a tool that you can point at any dataset, that learns from your data, screens for typical data quality issues, and then automatically generates and runs powerful tests, analyzing and scoring your data to pinpoint issues before they snowball.
No Python, No SQL Templates, No YAML: Why Your Open Source Data Quality Tool Should Generate 80% Of Your Data Quality Tests Automatically For a data engineer, ensuring data quality is both essential and overwhelming. The sheer volume of tables, the complexity of how the data is used, and the amount of work involved make writing tests manually an impossible task.
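To make the idea of auto-generated tests concrete, here is a minimal sketch of profile-driven test generation in pandas. It is not TestGen's actual implementation; the column names, checks, and data are illustrative assumptions.

```python
# Sketch: learn expectations from a trusted sample, then run them on new data.
import pandas as pd

def profile_column(series: pd.Series) -> dict:
    """Capture simple statistics that later serve as test expectations."""
    return {
        "null_fraction": series.isna().mean(),
        "distinct_fraction": series.nunique() / max(len(series), 1),
        "min": series.min() if pd.api.types.is_numeric_dtype(series) else None,
        "max": series.max() if pd.api.types.is_numeric_dtype(series) else None,
    }

def generate_tests(df: pd.DataFrame) -> list:
    """Turn the profile of a known-good sample into executable checks."""
    tests = []
    for col in df.columns:
        p = profile_column(df[col])
        if p["null_fraction"] == 0:
            tests.append((f"{col}: no nulls", lambda d, c=col: d[c].notna().all()))
        if p["distinct_fraction"] == 1:
            tests.append((f"{col}: unique", lambda d, c=col: d[c].is_unique))
        if p["min"] is not None:
            lo, hi = p["min"], p["max"]
            tests.append((f"{col}: in [{lo}, {hi}]",
                          lambda d, c=col, lo=lo, hi=hi: d[c].between(lo, hi).all()))
    return tests

baseline = pd.DataFrame({"id": [1, 2, 3], "amount": [10.0, 12.5, 9.9]})
new_batch = pd.DataFrame({"id": [4, 4, None], "amount": [11.0, 250.0, 10.1]})
for name, check in generate_tests(baseline):
    print(name, "->", "pass" if check(new_batch) else "FAIL")
```

Even this toy version flags nulls, duplicate keys, and out-of-range values in the new batch without anyone writing a single test by hand, which is the core of the 80% claim.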
The Gartner presentation, How Can You Leverage Technologies to Solve Data Quality Challenges? by Melody Chien, underscores the critical role of data quality in modern business operations. High-quality data is the blood that sustains the organizational value chain, impacting everything from logistics to services, sales, and marketing. Poor data quality, on average, costs organizations $12.9 million annually, or 7% of their total revenue.
A Drug Launch Case Study in the Amazing Efficiency of a Data Team Using DataOps How a Small Team Powered the Multi-Billion Dollar Acquisition of a Pharma Startup When launching a groundbreaking pharmaceutical product, the stakes and the rewards couldn't be higher. This blog dives into the remarkable journey of a data team that achieved unparalleled efficiency using DataOps principles and software that transformed their analytics and data teams into a hyper-efficient powerhouse.
Would you like help maintaining high-quality data across every layer of your Medallion Architecture? Like an Olympic athlete training for the gold, your data needs a continuous, iterative process to maintain peak performance. We covered how Data Quality Testing, Observability, and Scorecards turn data quality into a dynamic process, helping you build accuracy, consistency, and trust at each layer: Bronze, Silver, and Gold.
A Guide to the Six Types of Data Quality Dashboards Poor-quality data can derail operations, misguide strategies, and erode the trust of both customers and stakeholders. Data quality dashboards have emerged as indispensable tools, offering a clear window into the health of an organization's data and enabling targeted, actionable improvements. However, not all data quality dashboards are created equal.
The Race For Data Quality In A Medallion Architecture The Medallion architecture pattern is gaining traction among data teams. It is a layered design pattern that helps data teams organize data processing and storage into three distinct layers, often called Bronze, Silver, and Gold.
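As a rough illustration of how the three layers relate, here is a minimal pandas sketch. Production Medallion implementations typically run on a lakehouse engine such as Spark with Delta tables; the tables and cleansing rules below are invented for the example.

```python
# Sketch: Bronze lands raw data, Silver cleanses it, Gold aggregates it.
import pandas as pd

# Bronze: land the raw feed as-is, adding only lineage metadata.
bronze = pd.DataFrame({
    "order_id": ["A1", "A1", "A2", None],
    "amount": ["10.50", "10.50", "oops", "7.25"],
})
bronze["_ingested_at"] = pd.Timestamp.now(tz="UTC")

# Silver: cleanse and conform -- drop duplicates, reject bad keys, enforce types.
silver = bronze.drop_duplicates(subset=["order_id"]).dropna(subset=["order_id"]).copy()
silver["amount"] = pd.to_numeric(silver["amount"], errors="coerce")
silver = silver.dropna(subset=["amount"])

# Gold: aggregate into business-ready facts for consumers.
gold = silver.groupby("order_id", as_index=False)["amount"].sum()
print(gold)
```

The point of the layering is that each quality guarantee (deduplicated keys, typed columns, business aggregates) is established exactly once, at a named layer, rather than re-derived by every downstream consumer.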
“That should take two hours, not two months. Can’t your Data & Analytics Team go any faster?” “The executives’ dashboard broke! The data’s wrong! Can I ever trust our data?” If you’ve ever heard (or had) these complaints about speed-to-insight or data reliability, you should watch our webinar, DataOps for Beginners, on demand. DataKitchen’s VP Gil Benghiat breaks down what DataOps is (spoiler: it’s not just DevOps for data) and how DataOps can transform your Data & Analytics factory.
Read Our New White Paper: Data Quality The DataOps Way Data quality isn’t just a technical hurdle—it’s a strategic necessity in the data-driven world. Traditional methods fall short, but the DataOps approach to data quality offers a transformative path forward. It empowers individuals to act swiftly, enables continuous improvement, and fosters collaboration across organizational silos.
Christopher Bergh is the CEO and Head Chef at DataKitchen. Chris has more than 30 years of research, software engineering, data analytics, and executive management experience. At various points in his career, he has been a COO, CTO, VP, and Director of engineering. Enjoy the chat.
A DataOps Approach to Data Quality The Growing Complexity of Data Quality Data quality issues are widespread, affecting organizations across industries, from manufacturing to healthcare and financial services. According to DataKitchen’s 2024 market research, conducted with over three dozen data quality leaders, the complexity of data quality problems stems from the diverse nature of data sources, the increasing scale of data, and the fragmented nature of data systems.
From Cattle to Clarity: Visualizing Thousands of Data Pipelines with Violin Charts Most data teams work with a dozen or a hundred pipelines in production. What do you do when you have thousands of data pipelines in production? How do you understand what is happening to those pipelines? Is there a way that you can visualize what is happening in production quickly and easily?
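One way to answer that question, sketched below with matplotlib, is to collapse thousands of per-pipeline run durations into a few violin plots grouped by stage; the stage names and synthetic runtimes here are illustrative assumptions, not DataKitchen's data.

```python
# Sketch: summarize thousands of pipeline runtimes as violin plots per stage.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
# Simulate run durations (minutes) for thousands of pipelines in three groups.
groups = {
    "ingest": rng.lognormal(mean=1.0, sigma=0.4, size=2000),
    "transform": rng.lognormal(mean=1.6, sigma=0.6, size=2000),
    "publish": rng.lognormal(mean=0.5, sigma=0.3, size=2000),
}

fig, ax = plt.subplots()
ax.violinplot(list(groups.values()), showmedians=True)
ax.set_xticks(range(1, len(groups) + 1))
ax.set_xticklabels(groups.keys())
ax.set_ylabel("run duration (minutes)")
ax.set_title("Distribution of pipeline runtimes by stage")
plt.show()
```

Unlike a line per pipeline, the violin shape shows the whole distribution at once, so a fat tail of slow runs in one stage is visible even with thousands of pipelines on a single chart.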
Data Quality Circles: The Key to Elevating Data and Analytics Team Performance Introduction: The Pursuit of Quality in Data and Analytics Teams. According to a study by HFS Research, 75 percent of business executives do not have a high level of trust in their data. High-quality data underpins reliable insights, accurate models, and effective decision-making.
DataOps: the promising future that nobody seems to be able to make a reality, though not for lack of trying. Meet Chris Bergh, "Head Chef" at DataKitchen, who joins us again to tell us how the field has evolved over the last few years.
2024 Gartner Market Guide To DataOps We at DataKitchen are thrilled to see the publication of the Gartner Market Guide to DataOps, a milestone in the evolution of this critical software category. As the pioneer in the DataOps category, we are proud to have laid the groundwork for what has become an essential approach to managing data operations in today’s fast-paced business environment.
Summary
In this episode of the Data Engineering Podcast, host Tobias Macey welcomes back Chris Bergh, CEO of DataKitchen, to discuss his ongoing mission to…
DataKitchen’s Data Quality TestGen found 18 potential data quality issues in a few minutes (including install time) on data.boston.gov building permit data! Imagine a free tool that you can point at any dataset and find actionable data quality issues immediately! It sure beats having your data consumers tell you about problems they find when you are trying to enjoy your weekend.
Christopher Bergh, CEO of DataKitchen, is transforming data analytics with his DataOps approach. By applying principles from agile and lean manufacturing, Bergh aims to eliminate the 70-80% waste in data processes. DataKitchen's suite of open-source tools offers solutions for observability, testing, and automation, and addresses challenges in rapid change management, error detection, and team productivity.
Navigating the Storm: How Data Engineering Teams Can Overcome a Data Quality Crisis Ah, the data quality crisis. It’s that moment when your carefully crafted data pipelines start spewing out numbers that make as much sense as a cat trying to bark. You know you’re in trouble when the finance team uses your reports as modern art installations rather than decision-making tools.
Data Observability and Data Quality Testing Certification Series We are excited to invite you to a free four-part webinar series that will elevate your understanding and skills in Data Observability and Data Quality Testing. This series is crafted for professionals eager to deepen their knowledge and enhance their data management practices, whether you are a seasoned data engineer, a data quality manager, or just passionate about data.
Harnessing Data Observability Across Five Key Use Cases The ability to monitor, validate, and ensure data accuracy across its lifecycle is not just a luxury—it’s a necessity. Data observability extends beyond simple anomaly checking, offering deep insights into data health, dependencies, and the performance of data-intensive applications. This blog post introduces five critical use cases for data observability, each pivotal in maintaining the integrity and usability of data throughout its journey.
The Five Use Cases in Data Observability: Ensuring Data Quality in New Data Sources (#1) Introduction to Data Evaluation in Data Observability Ensuring the quality and integrity of new data sources before incorporating them into production is paramount. Data evaluation serves as a safeguard, ensuring that only cleansed and reliable data makes its way into your systems, thus maintaining the overall health of your data ecosystem.
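A minimal sketch of such an evaluation gate appears below; the specific checks, required columns, and sample data are illustrative assumptions rather than a prescribed rule set.

```python
# Sketch: a pre-production gate that must pass before a new source is promoted.
import pandas as pd

def evaluate_new_source(df: pd.DataFrame, key: str, required: list[str]) -> list[str]:
    """Return a list of findings; an empty list means the source may be promoted."""
    findings = []
    for col in required:
        if col not in df.columns:
            findings.append(f"missing required column: {col}")
    if key in df.columns:
        if df[key].isna().any():
            findings.append(f"null keys in {key}")
        if df[key].duplicated().any():
            findings.append(f"duplicate keys in {key}")
    if df.empty:
        findings.append("source delivered zero rows")
    return findings

candidate = pd.DataFrame({"permit_id": [1, 2, 2],
                          "issued": ["2024-01-02", None, "2024-02-03"]})
issues = evaluate_new_source(candidate, key="permit_id",
                             required=["permit_id", "issued", "status"])
print(issues)  # ['missing required column: status', 'duplicate keys in permit_id']
```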
The Five Use Cases in Data Observability: Effective Data Anomaly Monitoring (#2) Introduction Ensuring the accuracy and timeliness of data ingestion is a cornerstone for maintaining the integrity of data systems. Data ingestion monitoring, a critical aspect of Data Observability, plays a pivotal role by providing continuous updates and ensuring high-quality data feeds into your systems.
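A simple volume check captures the core idea of ingestion monitoring: compare today's feed against its historical band and alert on outliers. The 3-sigma threshold and the row-count history below are illustrative assumptions.

```python
# Sketch: flag a daily feed whose row count deviates from its recent history.
from statistics import mean, stdev

def volume_anomaly(history: list[int], today: int, sigmas: float = 3.0) -> bool:
    """Return True if today's row count falls outside the historical band."""
    mu, sd = mean(history), stdev(history)
    return abs(today - mu) > sigmas * max(sd, 1e-9)

history = [10_120, 9_980, 10_340, 10_050, 10_210, 9_890, 10_160]
print(volume_anomaly(history, today=10_105))  # False: within the normal range
print(volume_anomaly(history, today=2_300))   # True: feed likely truncated
```

The same pattern extends to freshness (minutes since last arrival) and schema width; the value of running it continuously is that the data team learns about a truncated feed before its consumers do.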
The Five Use Cases in Data Observability: Mastering Data Production (#3) Introduction Managing the production phase of data analytics is a daunting challenge. Overseeing multi-tool, multi-dataset, and multi-hop data processes is essential to ensuring high-quality outputs. This blog explores the third of five critical use cases for Data Observability and Quality Validation, Data Production, highlighting how DataKitchen’s Open-Source Data Observability solutions empower organizations to manage this critical stage.
The Five Use Cases in Data Observability: Fast, Safe Development and Deployment (#4) Introduction The integrity and functionality of new code, tools, and configurations during the development and deployment stages are crucial. This blog post delves into the fourth critical use case for Data Observability and Data Quality Validation: Development and Deployment.
DataKitchen Training and Certification Offerings For individual contributors with a background in data analytics, science, or engineering. Overall ideas and principles of DataOps: the DataOps Cookbook (200-page book, over 30,000 readers, free); DataOps Certification (3 hours, online, free, signup online); the DataOps Manifesto (over 30,000 signatures); one-day DataOps training (paid). Data Observability (the first step in DataOps): ideas and principles of Data Observability; the four-part Data Observability and Data Quality Testing certification webinar series.
Introducing DataKitchen’s Open Source Data Observability Software Today, we announce that we have open-sourced two complete, feature-rich products that solve the data observability problem: DataOps Observability and DataOps TestGen. With these two products, you will know if your pipelines are running without error and on time and can finally trust your data.
The webinar, hosted by Christopher Bergh with Gil Benghiat from DataKitchen, covered a comprehensive range of topics centered around improving the performance and efficiency of data teams through Agile and DataOps methodologies. Gil Benghiat, co-founder of DataKitchen, began by explaining the overarching goal of achieving data team excellence, which involves delivering business value quickly and with high quality.
Today, Bristol Myers Squibb (BMS) has fully acquired Karuna Therapeutics. We congratulate our customer on their fantastic success. Their product, KarXT, an antipsychotic, is revolutionary and is lined up for an FDA Prescription Drug User Fee Act (PDUFA) date in September 2024 for the treatment of schizophrenia in adults. KarXT also has potential for the treatment of Alzheimer’s disease and bipolar disorder.
Key Success Metrics, Benefits, and Results for Data Observability Using DataKitchen Software Key benefit: lowering serious production errors. Errors in production can come from many sources: poor data, problems in the production process, late deliveries, or infrastructure problems. Reducing the errors your customers find, and catching those they do not, are key success metrics of Data Observability using DataKitchen DataOps Observability and DataOps TestGen.
ngx-toolkit, a new open-source project from DataKitchen At DataKitchen, we use Angular and strive for well-tested and maintainable code. We’ve created three libraries that have helped accelerate Angular development in our software projects. We are proud today to present to the open source community our monorepo with some of the libraries we have developed for Angular and Jest.
Why Not Hearing About Data Errors Should Worry Your Data Team In the chaotic lives of data & analytics teams, a day without hearing of any data-related errors is a blessing. Your team is on top of things, deliveries are on schedule (you think), and no major complaints are making their way to your desk. It’s tempting to adopt the “What, me worry?” attitude.
Your LLM Needs a Data Journey: A Comprehensive Guide for Data Engineers The rise of Large Language Models (LLMs) such as GPT-4 marks a transformative era in artificial intelligence, heralding new possibilities and challenges in equal measure. LLMs have the potential to revolutionize how we interact with data, automate processes, and extract insights.
DataKitchen Resource Guide to Data Journeys, Data Observability, and DataOps Data (and analytic) observability and Data Journey ideas and background: the Data Journey Manifesto and Why the Data Journey Manifesto?; Five Pillars of Data Journeys; Data Journey First DataOps; “You Complete Me,” said Data Lineage to Data Journeys; Bridging the Gap: How ‘Data in Place’ and ‘Data in Use’ Define Complete Data Observability; The Need for Personalized Data Journeys for Your Data Consumers; Data Te…
The Art of Data Buck-Passing 101: Mastering the Blame Game in Data and Analytic Teams Welcome, dear readers, to the hallowed halls of Data Buck-Passing University, where the motto is “ Per Alios Culpa Transfertur ” (Blame is Transferred to Others). In the world of data and analytics, one skill stands timeless and universal: the art of blaming someone else when things go sideways.
Do you have data quality issues, a complex technical environment, and a lack of visibility into production systems? These challenges lead to poor quality analytics and frustrated end users. Getting your data reliable is a start, but many other problems arise even if your data could be better. And your customers don't care where the problem is in your toolchain.
The Perilous State of Today’s Data Environments Data teams often navigate a labyrinth of chaos within their databases. The core issue plaguing many organizations is the presence of out-of-control databases or data lakes characterized by unrestrained data changes, where numerous users and tools incessantly alter data, leading to a tumultuous environment, and by an extrinsic control deficit, where many of these changes stem from tools and processes beyond the immediate control of the data team. A lightweight schema-drift check, sketched below, is one first line of defense.
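Here is a minimal sketch of that idea: snapshot each table's (column, type) pairs on every run and diff the snapshots to surface changes nobody announced. The snapshot format and the example table are illustrative assumptions.

```python
# Sketch: detect unrestrained changes by diffing schema snapshots between runs.

def schema_snapshot(columns: dict[str, str]) -> frozenset:
    """Freeze a table's (column, type) pairs so snapshots can be compared."""
    return frozenset(columns.items())

def schema_drift(before: frozenset, after: frozenset) -> dict[str, set]:
    """Report what appeared and what vanished since the last snapshot."""
    return {"added": set(after - before), "removed": set(before - after)}

yesterday = schema_snapshot({"order_id": "bigint", "amount": "numeric", "status": "text"})
today = schema_snapshot({"order_id": "bigint", "amount": "text", "region": "text"})
print(schema_drift(yesterday, today))
# {'added': {('amount', 'text'), ('region', 'text')},
#  'removed': {('amount', 'numeric'), ('status', 'text')}}
```

A retyped column shows up as one removal plus one addition, which is exactly the kind of silent, extrinsic change that erodes trust when it is discovered downstream instead.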