Final Update: Thursday, 9/3/2015 12:01 UTC
We’ve confirmed that all systems are back to normal with no customer impact as of 9/3, 11:45 UTC. Our logs show the incident started on 9/3, 07:10 UTC, and that during the 4 hours and 35 minutes it took to resolve the issue, customers experienced latency for multiple data types.
• Root Cause: The failure was caused by high memory usage on a backend service.
• Lessons Learned: We are working to understand the cause of the high memory usage and to drive a longer-term fix.
• Incident Timeline: 4 hours & 35 minutes - 9/3, 07:10 UTC through 9/3, 11:45 UTC
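As a quick sanity check, the incident duration can be computed directly from the reported start and end times (a minimal sketch in Python; the timestamps are taken from this post):

```python
from datetime import datetime, timezone

# Incident window as reported in the status post (all times UTC).
start = datetime(2015, 9, 3, 7, 10, tzinfo=timezone.utc)
end = datetime(2015, 9, 3, 11, 45, tzinfo=timezone.utc)

# Total duration, broken into whole hours and minutes.
duration = end - start
hours, remainder = divmod(int(duration.total_seconds()), 3600)
minutes = remainder // 60
print(f"{hours} hours {minutes} minutes")  # → 4 hours 35 minutes
```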
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.
-Application Insights Service Delivery Team
Update: Thursday, 9/3/2015 10:21 UTC
Our DevOps team continues to investigate issues within Application Insights. The root cause is not fully understood at this time. Some customers continue to experience latency of multiple data types above the 2-hour SLA. We are working to establish the start time for the issue; initial findings indicate that the problem began at 9/3, ~07:11 UTC. We currently have no estimate for resolution.
• Workaround: None
• Next Update: Before 12:21 UTC
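The "above the 2 hour SLA" wording refers to the delay between when telemetry is generated and when it becomes available. A hypothetical check for that condition might look like this (a sketch; `exceeds_sla`, the timestamps, and the event/ingestion model are illustrative assumptions, not part of the Application Insights API):

```python
from datetime import datetime, timedelta, timezone

# Assumed SLA from the status post: data should be available within 2 hours.
SLA = timedelta(hours=2)

def exceeds_sla(event_time: datetime, ingested_time: datetime) -> bool:
    """Return True if the delay between event creation and ingestion
    is longer than the 2-hour SLA (hypothetical helper)."""
    return (ingested_time - event_time) > SLA

# Example: an event generated at 07:30 UTC that only became
# queryable at 10:00 UTC was delayed 2.5 hours, breaking the SLA.
event = datetime(2015, 9, 3, 7, 30, tzinfo=timezone.utc)
ingested = datetime(2015, 9, 3, 10, 0, tzinfo=timezone.utc)
print(exceeds_sla(event, ingested))  # → True
```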
-Application Insights Service Delivery Team
Initial Update: Thursday, 9/3/2015 09:22 UTC
We are aware of issues within Application Insights and are actively investigating. Some customers may experience data latency. The following data types are affected: Metric, Request, Trace.
• Workaround: None
• Next Update: Before 10:22 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Application Insights Service Delivery Team