Approximately 32 servers became unresponsive due to high memory usage caused by a malfunction in the ingestion workflow components. A bulk hard reboot of the affected servers has been initiated. Customers may experience delays, and some events may be ingested twice during this incident. There is no impact on automation features or playbooks. The situation is now stable and the backlog of events is being processed.
Posted May 06, 2026 - 14:30 UTC
Identified
The issue has been identified and a fix is being implemented.
Posted May 06, 2026 - 14:11 UTC
Investigating
We are experiencing configuration issues affecting multiple communities, which are causing delays in event data indexing. Our teams are actively working to resolve the problem and restore normal indexing speed. Updates will be provided as the situation evolves.