A containerized processing pipeline designed to analyze weekend “no-disturbance” sensor data and generate high-confidence anomaly and efficiency insights using a custom AI model.
A cold storage and refrigeration company needed predictive insights from sensor data — but only from periods where assets were truly undisturbed, so the AI model could learn stable baselines and detect abnormal behavior with higher confidence.
The challenge was to capture and process full sensor streams during the weekend “no-disturbance” window, then run a custom AI model to identify anomalies and efficiency patterns — without interfering with normal operations. Running the pipeline on weekends also reduced unnecessary weekday compute, but the primary goal was data integrity and signal quality.
Aligned to weekend “no-disturbance” windows
Processing is scheduled to coincide with low/no-disturbance periods, producing cleaner inputs for AI analysis and more reliable anomaly detection.
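The case study doesn't detail the scheduler itself; as a minimal sketch, the weekend window could be expressed as an Amazon EventBridge cron rule that kicks off the dispatcher (the rule name and trigger time below are illustrative assumptions, not production values):

```python
# Hypothetical sketch: schedule the dispatcher for a weekend window
# using an Amazon EventBridge cron rule. Rule name and window time
# are assumptions, not the production values.
import boto3

events = boto3.client("events")

# Fire every Saturday at 02:00 UTC, when assets are expected to be
# undisturbed. EventBridge cron fields: minute hour day-of-month
# month day-of-week year.
events.put_rule(
    Name="weekend-no-disturbance-dispatch",   # hypothetical name
    ScheduleExpression="cron(0 2 ? * SAT *)",
    State="ENABLED",
    Description="Kick off the dispatcher during the weekend window",
)
```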
A minimal yet robust containerized architecture optimized for clean data capture and predictive analysis.
Cron-driven orchestration
Predictive processing engine
Developed a lightweight, modular internal framework optimized for predictable batch execution and long-term maintainability.
Dispatcher and worker services packaged as Docker containers for AWS ECS deployment with controlled scaling; a deployment sketch follows below.
Stores raw sensor streams and processed AI outputs to support dashboards, alerts, and historical comparisons.
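The ECS wiring isn't reproduced in the case study; the sketch below registers the worker container as a Fargate task definition to illustrate the packaging step. The family name, image URI, role ARN, and sizing are placeholders:

```python
# Hypothetical sketch: register the worker container as an ECS Fargate
# task definition. Family, image URI, role ARN, and sizing are
# placeholders, not values from the case study.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="sensor-worker",                # hypothetical family name
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "worker",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/sensor-worker:latest",
            "essential": True,
            "environment": [
                # Queue endpoint injected at deploy time (placeholder URL).
                {
                    "name": "JOB_QUEUE_URL",
                    "value": "https://sqs.us-east-1.amazonaws.com/123456789012/asset-jobs",
                },
            ],
        }
    ],
)
```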
Dispatcher runs during the weekend “no-disturbance” window
Creates one processing job per asset and queues it via SQS
Worker invokes the custom AI model to detect anomalies and efficiency signals
Stores outputs in InfluxDB for dashboards, alerts, and historical comparison
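The queue and storage calls aren't shown in the case study; as a simplified sketch of this dispatcher/worker flow, the dispatcher enqueues one SQS message per asset and the worker consumes a job, runs the model, and writes the result to InfluxDB. The queue URL, bucket name, and `detect_anomalies` stub are illustrative stand-ins for the proprietary pieces:

```python
# Hypothetical sketch of the dispatcher/worker flow. Queue URL, asset
# IDs, InfluxDB settings, and detect_anomalies() are stand-ins; the
# real custom AI model is not described in the case study.
import json

import boto3
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/asset-jobs"
sqs = boto3.client("sqs")


def detect_anomalies(asset_id: str) -> float:
    """Placeholder for the proprietary model; returns a dummy score."""
    return 0.0


def dispatch(asset_ids: list[str]) -> None:
    """Dispatcher: create one processing job per asset."""
    for asset_id in asset_ids:
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"asset_id": asset_id, "window": "weekend"}),
        )


def work_once(influx: InfluxDBClient) -> None:
    """Worker: pull one job, run the model, persist the result."""
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=10
    )
    for msg in resp.get("Messages", []):
        job = json.loads(msg["Body"])
        score = detect_anomalies(job["asset_id"])
        point = (
            Point("weekend_analysis")
            .tag("asset_id", job["asset_id"])
            .field("anomaly_score", score)
        )
        influx.write_api(write_options=SYNCHRONOUS).write(
            bucket="sensors", record=point
        )
        # Remove the job only after the result is safely stored.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```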
AWS ECS: Container orchestration for dispatcher and worker services
Amazon SQS: Reliable job queuing and message passing between services
InfluxDB: Time-series storage for raw sensor data and processed analytics
Custom AI model: Anomaly detection and efficiency insights from weekend baseline data
Collected full sensor streams during low/no-disturbance windows to establish stable baselines and reduce noise in the data.
Applied a custom AI model to detect anomalies and highlight efficiency patterns across cold rooms and deep freezers, as illustrated in the sketch after this list.
SQS-based job orchestration capable of processing assets in parallel with controlled scaling during weekend runs.
Processing outputs delivered quickly after the scheduled run, keeping dashboards and operational insights up to date.
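The custom model itself is proprietary and not described here; purely to illustrate the stable-baseline idea above, the sketch below flags readings that deviate from a weekend baseline using a plain z-score check. The threshold, data layout, and example values are assumptions:

```python
# Illustration of the stable-baseline idea, NOT the actual model:
# flag readings whose z-score against a weekend baseline exceeds a
# threshold. Threshold and data layout are assumptions.
from statistics import mean, stdev


def flag_anomalies(
    baseline: list[float], readings: list[float], threshold: float = 3.0
) -> list[tuple[int, float]]:
    """Return (index, value) pairs that deviate strongly from the baseline."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [
        (i, x)
        for i, x in enumerate(readings)
        if sigma > 0 and abs(x - mu) / sigma > threshold
    ]


# Example: a deep-freezer temperature trace with one suspicious spike.
baseline = [-20.1, -20.3, -19.9, -20.0, -20.2, -20.1]
readings = [-20.0, -20.2, -14.5, -20.1]
print(flag_anomalies(baseline, readings))  # [(2, -14.5)]
```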
We design containerized, event-driven pipelines that capture cleaner signals and turn them into actionable anomaly and efficiency insights.
Batch, event-driven, anomaly detection
AWS, ECS, Lambda, cost-aware scaling
Scheduling, workflows, observability
Upgraded Rails 5.2 → 7.2 with production-safe checkpoints and a phased rollout strategy.
Read Case Study
Integrated multiple hosting systems into one platform, improving monetization and operations.
Read Case Study
Fault-tolerant IoT pipeline with custom protocol integration and live dashboards.
Read Case Study