Identified - Ingestion is slowly recovering. Additional cluster nodes are being provisioned to improve resilience. Monitoring continues to ensure full restoration.
May 12, 2026 - 09:57 UTC
Investigating - Since 10:43 UTC, ingestion services in the FRA1 region have been degraded due to an issue with the message bus, causing delays and slowdowns in event processing. The engineering team is investigating in order to restore normal operation as soon as possible.
May 12, 2026 - 09:32 UTC
Web application Operational
Event ingestion Degraded Performance
Event storage Operational
Detection Operational
Hunting Operational
Case management Operational
Automation Operational
AI Operational
CTI Search Operational
CTI Feed (API) Operational
CTI Feed (TAXII) Operational
CTI Feed (MISP) Operational
May 12, 2026

Unresolved incident: Ingestion degraded.

May 11, 2026

No incidents reported.

May 10, 2026

No incidents reported.

May 9, 2026

No incidents reported.

May 8, 2026

No incidents reported.

May 7, 2026

No incidents reported.

May 6, 2026
Resolved - This incident has been resolved.
May 6, 20:27 UTC
Monitoring - Approximately 32 servers became unresponsive due to high memory usage caused by a malfunction in the ingestion workflow components. A bulk hard reboot of the affected servers has been initiated. Customers may experience delays, and some events may be ingested twice during this incident. There is no impact on automation features or playbooks. The situation is now stable and the backlog of events is being processed.
May 6, 14:30 UTC
Identified - The issue has been identified and a fix is being implemented.
May 6, 14:11 UTC
Investigating - We are experiencing configuration issues affecting multiple communities which are causing delays in event data indexing. Our teams are actively working to resolve the problem and restore normal indexing speed. Updates will be provided as the situation evolves.
May 6, 10:19 UTC
May 5, 2026
Resolved - Indexing is back to real-time.

Thank you for your patience.

May 5, 17:23 UTC
Monitoring - The applied fix is working as expected and we are now catching up on the lag.
The incident is being actively monitored.
We will continue to keep you informed of progress.

May 5, 16:14 UTC
Identified - We have applied a fix and indexing has now resumed.
We are now starting to catch up on the lag.
The incident is being actively monitored for stability.

May 5, 16:02 UTC
Investigating - We identified a network issue on the event storage cluster which caused search and indexing instability.
We have already recovered cold data, restoring search query functionality across all events.
However, we are still experiencing indexing issues, which are under investigation.
We are working to fix the issue, and further updates will be provided as necessary.

May 5, 15:21 UTC
May 4, 2026

No incidents reported.

May 3, 2026

No incidents reported.

May 2, 2026

No incidents reported.

May 1, 2026

No incidents reported.

Apr 30, 2026

No incidents reported.

Apr 29, 2026

No incidents reported.

Apr 28, 2026

No incidents reported.