All Systems Operational

Web application: Operational
Event ingestion: Operational
Event storage: Operational
Detection: Operational
Hunting: Operational
Case management: Operational
Automation: Operational
AI: Operational
CTI Search: Operational
CTI Feed (API): Operational
CTI Feed (TAXII): Operational
CTI Feed (MISP): Operational
Oct 15, 2025

No incidents reported today.

Oct 14, 2025

No incidents reported.

Oct 13, 2025

No incidents reported.

Oct 12, 2025

No incidents reported.

Oct 11, 2025

No incidents reported.

Oct 10, 2025

No incidents reported.

Oct 9, 2025

No incidents reported.

Oct 8, 2025

No incidents reported.

Oct 7, 2025

No incidents reported.

Oct 6, 2025
Resolved - We are back to real-time indexing.
Thank you for your patience.

Oct 6, 17:01 UTC
Monitoring - A failure on some indexing servers, caused by an incident on our cloud provider's side, led to a temporary halt in indexing.
The issue was quickly fixed by our team, but it generated some lag in event indexing.
We are now monitoring the state of the service and will come back to you once we are back to real-time.
There was no data loss, and event processing and alert raising are still completed on time.

We are sorry for the inconvenience.

Oct 6, 14:56 UTC
Oct 5, 2025

No incidents reported.

Oct 4, 2025
Resolved - This incident has been resolved.
Oct 4, 02:21 UTC
Monitoring - The platform is now back to an operational state and we are catching up on the delay.
Our team is still investigating long-term solutions and working on a fix.

We will come back to you once ingestion is back to real-time.
Sorry for the inconvenience and thanks for your patience.

Oct 4, 01:52 UTC
Identified - We have found the root cause and applied a fix.
This required restarting a service in our event processing pipeline, which shifted the problem to ingestion.
This means you can now access the web app again, but we are currently accumulating a slight delay in event processing and alert raising.
We are slowly scaling our service back up and should catch up on the delay quickly.

We will keep you updated once everything is back to normal.
Thank you for your patience.

Oct 4, 01:00 UTC
Investigating - We are experiencing platform-wide degraded performance. A critical relational database host is saturated, causing platform APIs and services to be throttled and to exhibit elevated response times or timeouts.
Event collection is not impacted. Rest assured that no data has been lost, but access to the platform is severely degraded.
Engineers are investigating the root cause and evaluating mitigations.
We will come back to you as soon as we have new information.
Sorry for the inconvenience.

Oct 3, 23:50 UTC
Oct 3, 2025
Resolved - All backlogged traffic was handled on Oct 1 at 22:30. As a side effect of this incident, the intakes page showed no traffic figures on Oct 2; this has since been fixed.
As we have been fully back to normal since 08:51, this incident is now fully resolved. We thank you for your patience and understanding.

Oct 3, 07:58 UTC
Monitoring - We have been processing live traffic in real-time since 12 am and we have queued backlog traffic for indexing later today. We are adjusting our strategy and scaling our resources to optimize the indexing process. We appreciate your understanding and patience as we continue to work towards fully processing the backlog of events.
Oct 1, 16:26 UTC
Identified - Our team has been able to resume handling traffic in real-time and process the backlog of events in parallel. We have also initiated a slow rollover of some system components to avoid disruption. Currently, 50% of the traffic is being processed in real-time, and this figure is steadily increasing. We are continuously monitoring the situation and working to restore full service functionality. We appreciate your patience and apologize for any inconvenience caused by this incident.
Oct 1, 09:55 UTC
Investigating - We are currently dealing with a hardware issue that is impacting the performance of event indexing on FRA1. One of our nodes is experiencing network flakiness, leading to a reduction in indexing performance. Our engineers are working diligently to resolve this hardware issue and restore full service functionality. We apologize for any inconvenience this may cause and thank you for your patience.
Oct 1, 08:22 UTC
Oct 2, 2025

No incidents reported.

Oct 1, 2025