Our processing systems have completely caught up and we are now processing data in real time. No data was lost during the incident, and our ingestion servers worked as expected throughout.
Posted Jan 09, 2020 - 04:16 PST
Our processing systems are catching up and we are currently delayed by approximately 30 minutes. Our ingestion systems are working as expected and we are not losing data.
Posted Jan 09, 2020 - 03:58 PST
A fix has been implemented and we are monitoring the results.
Posted Jan 09, 2020 - 02:31 PST
We have identified a bad AWS node that triggered the issue. We are currently working on replacing it and will post an update within 30 minutes, or earlier if we fix the issue.
Posted Jan 09, 2020 - 01:58 PST
Our data processing systems are delayed due to a slow dependency. This incident started at 11:50 PM PST.
Current status:
a) Our ingestion systems are working as expected and we are not losing data; however, our processing systems are not processing some of the newly ingested data.
b) All of our customers are impacted.
c) Impacted customers will see delayed or partial metrics for the past hour.
We are investigating the issue and will post an update within 30 minutes, or earlier if we identify the cause.