Build an Amazon Redshift data warehouse using an Amazon DynamoDB single-table design

AWS Big Data

A key pillar of AWS’s modern data strategy is the use of purpose-built data stores for specific use cases to achieve performance, cost, and scale. Identifying year-on-year sales growth to derive business insights is an example of an online analytical processing (OLAP) query. To house the data behind such queries, we first need to define a data model.
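
As a rough illustration of that OLAP example, the sketch below runs a year-on-year sales growth query through the Amazon Redshift Data API. The cluster identifier, database, user, and the daily_sales table with its sale_date and amount columns are placeholders, not details from the article.

```python
import boto3

# Minimal sketch: issue a year-on-year growth query via the Redshift Data API.
# Cluster, database, user, and table/column names below are placeholders.
client = boto3.client("redshift-data")

YOY_GROWTH_SQL = """
WITH yearly AS (
    SELECT EXTRACT(YEAR FROM sale_date) AS sales_year,
           SUM(amount)                  AS total_sales
    FROM daily_sales
    GROUP BY 1
)
SELECT sales_year,
       total_sales,
       total_sales / NULLIF(LAG(total_sales) OVER (ORDER BY sales_year), 0) - 1
           AS yoy_growth
FROM yearly
ORDER BY sales_year;
"""

response = client.execute_statement(
    ClusterIdentifier="my-redshift-cluster",  # placeholder identifier
    Database="dev",
    DbUser="awsuser",
    Sql=YOY_GROWTH_SQL,
)
# The Data API is asynchronous: poll describe_statement / get_statement_result
# with this statement ID to retrieve the rows.
print(response["Id"])
```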

Unlock scalability, cost-efficiency, and faster insights with large-scale data migration to Amazon Redshift

AWS Big Data

The data warehouse is highly business critical, with minimal allowable downtime. Among the requirements is an open data format ingestion architecture that processes the source dataset and refines the data in the S3 data lake.
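
One way to read that requirement, sketched below under assumed names: load the refined, open-format (Parquet) files from the S3 data lake into Redshift with a COPY statement issued through the Redshift Data API. The bucket, prefix, IAM role, and target table are placeholders.

```python
import boto3

# Illustrative only: ingest Parquet files refined in the S3 data lake into a
# Redshift table via COPY. All identifiers below are placeholders.
client = boto3.client("redshift-data")

COPY_SQL = """
COPY analytics.orders
FROM 's3://example-data-lake/refined/orders/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
FORMAT AS PARQUET;
"""

resp = client.execute_statement(
    ClusterIdentifier="my-redshift-cluster",  # placeholder
    Database="dev",
    DbUser="awsuser",
    Sql=COPY_SQL,
)

# COPY runs asynchronously through the Data API; check progress with describe_statement.
status = client.describe_statement(Id=resp["Id"])["Status"]
print(status)
```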

Accelerating revenue growth with real-time analytics: Poshmark’s journey

AWS Big Data

The data from the Kinesis data stream is consumed by two applications; one of them, a Spark streaming application on Amazon EMR, writes data from the stream to a data lake hosted on Amazon Simple Storage Service (Amazon S3) in a partitioned way.
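
A minimal PySpark sketch of that first consumer is shown below. It assumes a Kinesis source connector is available on the EMR cluster; the stream name, bucket paths, and exact source options are placeholders and vary by connector version.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

# Rough sketch of the pattern described above: consume a Kinesis data stream
# with Spark on Amazon EMR and land it in S3 partitioned by date.
spark = SparkSession.builder.appName("kinesis-to-s3-lake").getOrCreate()

events = (
    spark.readStream.format("kinesis")           # requires a spark-sql-kinesis connector
    .option("streamName", "clickstream-events")  # placeholder stream name
    .option("region", "us-east-1")
    .option("startingPosition", "LATEST")
    .load()
)

# Kinesis records arrive as bytes; decode the payload and derive a date column
# to partition by.
decoded = (
    events.selectExpr("CAST(data AS STRING) AS payload", "approximateArrivalTimestamp")
    .withColumn("dt", to_date(col("approximateArrivalTimestamp")))
)

query = (
    decoded.writeStream.format("parquet")
    .option("path", "s3://example-bucket/data-lake/clickstream/")             # placeholder
    .option("checkpointLocation", "s3://example-bucket/checkpoints/clickstream/")
    .partitionBy("dt")
    .start()
)
query.awaitTermination()
```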

Unleashing the power of Presto: The Uber case study

IBM Big Data Hub

Uber understood that digital superiority required the capture of all their transactional data, not just a sampling. They stood up a file-based data lake alongside their analytical database. Because much of the work done on their data lake is exploratory in nature, many users want to execute untested queries on petabytes of data.
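
For a sense of what that exploratory access looks like in practice, here is a small sketch using the presto-python-client package; the coordinator host, catalog, schema, and the trips table are hypothetical stand-ins, not Uber's actual objects.

```python
import prestodb  # presto-python-client

# Ad hoc query against a file-based data lake through a Presto coordinator.
# Host, catalog, schema, and table names are hypothetical placeholders.
conn = prestodb.dbapi.connect(
    host="presto-coordinator.example.internal",
    port=8080,
    user="analyst",
    catalog="hive",
    schema="rides",
)

cur = conn.cursor()
cur.execute(
    """
    SELECT city_id, COUNT(*) AS trips
    FROM trips
    WHERE ds = '2023-06-01'
    GROUP BY city_id
    ORDER BY trips DESC
    LIMIT 10
    """
)
for city_id, trips in cur.fetchall():
    print(city_id, trips)
```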