Resolved -
All Data Export jobs have now caught up, and this incident has been resolved.
No further action is required from customers, and we can confirm that no data was lost.
Thank you for your patience throughout this process.
Jun 29, 17:27 PDT
Update -
We are still monitoring the recovery of Data Export jobs. Currently, we expect to catch up on all Data Export jobs within the next four hours.
Jun 29, 11:47 PDT
Update -
All warehouse import jobs have been fully caught up since 8:00 AM PDT, and our real-time ingestion and evaluation systems have been operating normally since yesterday.
We are continuing to monitor the recovery of Data Export. Currently, the most delayed export jobs are behind by approximately 24 hours. We will keep you updated as progress continues.
Thank you for your patience.
Jun 29, 10:30 PDT
Monitoring -
We re-enabled warehouse imports for all customers at 12:30 PM PDT and are actively monitoring job lag recovery. We expect all jobs to catch up within the next 3 to 4 hours.
Jun 28, 15:50 PDT
Update -
We have identified the bottleneck in our ingestion pipeline and applied a preliminary fix. Event ingestion via the HTTP endpoint is recovering. The real-time HTTP endpoint has fully caught up, while the batch endpoint is currently in the process of recovering.
To prioritize real-time traffic, warehouse import has been paused. As a result, imports are currently delayed by up to 7 hours. We’re deploying another fix to improve performance and support full recovery.
EU customers remain unaffected, and no data loss is expected.
Jun 28, 07:33 PDT
Update -
We are continuing to investigate this issue in order to fully understand and resolve it.
Jun 28, 00:55 PDT
Update -
We are continuing to work on a fix for this issue.
Jun 28, 00:22 PDT
Update -
We’ve deployed the hotfix, and systems are now recovering.
There is a backlog of several hours of data to process. We’re closely monitoring progress and adjusting configurations as needed.
Thank you for your patience.
Jun 27, 22:11 PDT
Identified -
We believe we've identified the root cause. We're implementing and deploying a hotfix to confirm this and restore normal data processing.
Jun 27, 20:01 PDT
Investigating -
We're currently experiencing delays in our real-time ingestion and evaluation systems. This issue began around 5:00 PM PDT.
Our ingestion systems are operational but processing more slowly than normal. Customers using the HTTP endpoints in the US region are affected. Impacted users may not see events from the last 40–60 minutes reflected in charts. Outbound event streaming is also delayed.
Our evaluation systems are operational but experiencing higher than normal latency.
No data loss is expected. There is no impact to customers in the EU region.
We're actively investigating and will share an update within the next hour, or sooner if possible.
Jun 27, 19:05 PDT