How Gupshup built their multi-tenant messaging analytics platform on Amazon Redshift

AWS Big Data

A series of materialized view refreshes is used to calculate metrics, after which the incremental data from S3 is loaded into Redshift. This compiled data is then imported into Aurora PostgreSQL Serverless for operational reporting. Incremental analytics is the main reason Gupshup uses Redshift.
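The incremental pattern this excerpt describes (refreshing aggregates over only the slice of data that arrived since the last load) can be sketched in plain Python. Everything below, the `MetricsStore` class, the event shape, and the watermark field, is a hypothetical stand-in for illustration, not Gupshup's or Redshift's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class MetricsStore:
    watermark: int = 0                       # timestamp of the last event already folded in
    message_counts: dict = field(default_factory=dict)

    def incremental_refresh(self, events):
        """Fold only not-yet-seen events into per-tenant counts."""
        new = [e for e in events if e["ts"] > self.watermark]
        for e in new:
            self.message_counts[e["tenant"]] = self.message_counts.get(e["tenant"], 0) + 1
        if new:
            self.watermark = max(e["ts"] for e in new)
        return len(new)

store = MetricsStore()
batch1 = [{"tenant": "a", "ts": 1}, {"tenant": "b", "ts": 2}]
store.incremental_refresh(batch1)            # processes 2 events
# Re-delivering batch1 alongside one new event touches only the new row:
batch2 = batch1 + [{"tenant": "a", "ts": 3}]
processed = store.incremental_refresh(batch2)
print(processed, store.message_counts)       # 1 {'a': 2, 'b': 1}
```

A materialized view refresh in Redshift plays the same role at warehouse scale: it folds only rows added since the previous refresh into the precomputed aggregate instead of recomputing it from scratch.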

How HPE Aruba Supply Chain optimized cost and performance by migrating to an AWS modern data architecture

AWS Big Data

The application supports custom workflows that allow demand and supply planning teams to collaborate, plan, source, and fulfill customer orders, then track fulfillment metrics via persona-based operational and management reports and dashboards.

Complexity Drives Costs: A Look Inside BYOD and Azure Data Lakes

Jet Global

Azure Data Lakes are highly complex and designed with a fundamentally different purpose in mind than financial and operational reporting. For more on Azure Data Lakes, download this guide: “Diving into Data Lakes: Is Microsoft’s Modern Data Warehouse Architecture Right for Your Business?”

Migrate data from Azure Blob Storage to Amazon S3 using AWS Glue

AWS Big Data

Conclusion: In this post, we showed how to use AWS Glue and the new connector for ingesting data from Azure Blob Storage to Amazon S3. This connector provides access to Azure Blob Storage, facilitating cloud ETL processes for operational reporting, backup and disaster recovery, data governance, and more.
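The copy flow the excerpt summarizes (list objects under a source prefix, write each into the destination) can be illustrated with stdlib Python. The dict-backed stores below are hypothetical in-memory stand-ins for an Azure Blob container and an S3 bucket, not the Glue connector's actual API:

```python
def migrate_prefix(source: dict, dest: dict, prefix: str) -> list:
    """Copy every source key starting with `prefix` into dest; return what moved."""
    copied = []
    for key, body in source.items():
        if key.startswith(prefix):
            dest[key] = body
            copied.append(key)
    return sorted(copied)

# Stand-ins: keys model object paths, values model object bodies.
azure_container = {"reports/2024/jan.csv": b"...", "logs/app.log": b"..."}
s3_bucket = {}
moved = migrate_prefix(azure_container, s3_bucket, "reports/")
print(moved)   # ['reports/2024/jan.csv']
```

In a real Glue job the connector handles listing, credentials, and parallel transfer; the prefix filter above corresponds to scoping the job to a path within the source container.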

How REA Group approaches Amazon MSK cluster capacity planning

AWS Big Data

Capacity monitoring dashboards: As part of our platform management process, we conduct monthly operational reviews to maintain optimal performance. This involves analyzing an automated operational report that covers all the systems on the platform.

Migrate data from Google Cloud Storage to Amazon S3 using AWS Glue

AWS Big Data

Conclusion: In this post, we showed how to use AWS Glue and the new connector for ingesting data from Google Cloud Storage to Amazon S3. This connector provides access to Google Cloud Storage, facilitating cloud ETL processes for operational reporting, backup and disaster recovery, data governance, and more.

Use AWS Glue to streamline SFTP data processing

AWS Big Data

Introducing the SFTP connector for AWS Glue: The SFTP connector for AWS Glue simplifies the process of connecting AWS Glue jobs to extract data from SFTP storage and to load data into SFTP storage. Solution overview: In this example, you use AWS Glue Studio to connect to an SFTP server, then enrich that data and upload it to Amazon S3.
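The extract → enrich → load flow described in the excerpt can be sketched with stdlib Python. The SFTP read and S3 write below are stubbed with in-memory stand-ins (the `sftp_dir` dict, the `s3_objects` dict, and the CSV row shape are all hypothetical); a real job would use the AWS Glue SFTP connector and Glue Studio transforms instead:

```python
def extract(sftp_files: dict) -> list:
    """Stand-in for reading CSV-like rows from an SFTP directory."""
    rows = []
    for name, text in sftp_files.items():
        for line in text.strip().splitlines():
            order_id, amount = line.split(",")
            rows.append({"file": name, "order_id": order_id, "amount": float(amount)})
    return rows

def enrich(rows: list) -> list:
    """Add a derived column, the kind of transform a Glue Studio job applies."""
    return [{**r, "amount_usd_cents": int(r["amount"] * 100)} for r in rows]

sftp_dir = {"orders.csv": "A1,10.50\nA2,3.25"}            # stand-in for the SFTP server
s3_objects = {"processed/orders.json": enrich(extract(sftp_dir))}  # stand-in for S3
print(s3_objects["processed/orders.json"][0]["amount_usd_cents"])  # 1050
```

The separation into extract and enrich steps mirrors how Glue Studio jobs chain a source node to transform nodes before the S3 sink.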