# Amazon Data Firehose Scenario
This example demonstrates how to use AWS SDKs to work with Amazon Data Firehose, focusing on putting individual records (PutRecord) and batches of records (PutRecordBatch) to a delivery stream. The workflow showcases creating, configuring, and utilizing a Data Firehose Delivery Stream to handle data ingestion.
Amazon Data Firehose is the easiest way to reliably load streaming data into data lakes, data stores, and analytics services. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon OpenSearch Service, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards.
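The two write paths (PutRecord and PutRecordBatch) can be sketched with the AWS SDK for Python (boto3). This is a minimal illustration, not the scenario's reference implementation; the stream name and record shape are assumptions:

```python
import json


def to_record(obj):
    """Encode a dict as a Firehose record payload; Data must be bytes."""
    return {"Data": (json.dumps(obj) + "\n").encode("utf-8")}


def send_one(client, stream_name, obj):
    # PutRecord writes a single record per API call.
    return client.put_record(DeliveryStreamName=stream_name, Record=to_record(obj))


def send_batch(client, stream_name, objs):
    # PutRecordBatch writes up to 500 records per API call.
    return client.put_record_batch(
        DeliveryStreamName=stream_name,
        Records=[to_record(o) for o in objs],
    )


if __name__ == "__main__":
    import boto3  # AWS SDK for Python; imported here so the helpers stay importable without it

    firehose = boto3.client("firehose")
    # "my-delivery-stream" is a placeholder; use the stream your stack creates.
    send_one(firehose, "my-delivery-stream", {"event": "ping"})
    send_batch(firehose, "my-delivery-stream", [{"event": "ping"}, {"event": "pong"}])
```

Note that PutRecordBatch can partially fail: check `FailedPutCount` in the response and retry any records whose entries carry an error code.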
This scenario consists of three key user tasks:

1. Create and configure a Data Firehose delivery stream.
2. Put individual records to the stream with `PutRecord`.
3. Put batches of records to the stream with `PutRecordBatch`.
This script is designed to be executed locally using your own AWS account with minimal customization.
Install the Python dependencies:

```bash
pip install -r resources/requirements.txt
```

Deploy the required AWS resources with CloudFormation:

```bash
aws cloudformation create-stack \
  --stack-name FirehoseStack \
  --template-body file://resources/firehose-stack.yml \
  --capabilities CAPABILITY_IAM
```
Generate the mock data (requires Python >= 3.6 with the `faker` library installed):

```bash
python resources/mock_data.py
```

This produces `sample_records.json`, containing 5,550 "fake" network records.

The technical specification for this scenario is found in SPECIFICATION.md.
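Because PutRecordBatch accepts at most 500 records per call, the 5,550 generated records have to be sent in chunks. A minimal batching sketch, assuming `sample_records.json` holds a JSON array:

```python
import json

BATCH_LIMIT = 500  # PutRecordBatch accepts at most 500 records per call


def chunk(items, size=BATCH_LIMIT):
    """Yield successive lists of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


def load_records(path="sample_records.json"):
    # Assumes the file produced by resources/mock_data.py is a JSON array.
    with open(path) as f:
        return json.load(f)

# 5,550 records split this way yield 12 batches: eleven of 500 and a final one of 50.
```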
You can either implement this specification using your own skill and cunning, or use the provided start.sh script, which uses Ailly.