How Pulse Processes Events
Pulse monitors your big data environments in real time. Big data applications continuously send events to Pulse, which stores and processes them and instantly displays the resulting statistics on the respective Pulse dashboards.
Pulse uses an asynchronous boundary, implemented as an asynchronous messaging queue, to handle high-volume event streams efficiently. This queue is managed by NATS (Neural Autonomic Transport System). Big data applications stream data directly to the queue, which acts as a broker. The queue's storage system retains raw event data for extended periods.
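The retention behavior described above can be pictured as an append-only log with a retention window. The following is an illustrative sketch only, not the NATS API or Pulse's implementation; all names (`RawEvent`, `EventQueue`, `retention_secs`) are hypothetical.

```python
import time
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: a stored raw event cannot be modified
class RawEvent:
    subject: str       # e.g. "spark.job.completed"
    payload: dict
    received_at: float

class EventQueue:
    """In-memory stand-in for a durable broker with a retention window."""

    def __init__(self, retention_secs: float = 7 * 24 * 3600):
        self.retention_secs = retention_secs
        self._log: list[RawEvent] = []   # append-only raw event log

    def publish(self, subject: str, payload: dict) -> RawEvent:
        event = RawEvent(subject, payload, time.time())
        self._log.append(event)          # events are appended, never edited
        return event

    def events(self) -> list[RawEvent]:
        """Return every raw event still inside the retention window."""
        cutoff = time.time() - self.retention_secs
        return [e for e in self._log if e.received_at >= cutoff]

queue = EventQueue()
queue.publish("spark.job.completed", {"job_id": "j-42", "duration_ms": 1800})
print(len(queue.events()))  # 1
```

In the real system, NATS provides this durability and retention; the sketch only shows why publishers can fire-and-forget while consumers read at their own pace.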
Pulse places this asynchronous queue between big data applications and the Pulse UI:
- Applications monitored by Pulse stream metadata events directly to the queue.
- Events are stored as raw, immutable data in the queue.
- Once data is received from an application (publisher), it can be consumed by multiple subscribers.
- Pulse services (microservices) consume raw events from the queue, process them, and store the results in MongoDB and VictoriaMetrics. The Pulse UI then reads from these databases to display data on dashboards.
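The publish/subscribe flow in the list above can be sketched as a fan-out: one published event is delivered to every subscriber on the subject, just as multiple Pulse services consume the same raw events and write results to their own stores. This is an illustrative in-memory sketch, not Pulse code; the `Broker` class and the store lists standing in for MongoDB and VictoriaMetrics are hypothetical.

```python
from collections import defaultdict

class Broker:
    """Minimal pub/sub fan-out: every subscriber receives every event."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # subject -> handler callbacks

    def subscribe(self, subject, handler):
        self._subscribers[subject].append(handler)

    def publish(self, subject, event):
        for handler in self._subscribers[subject]:
            handler(event)  # each subscriber sees the same raw event

# Hypothetical stand-ins for the two downstream stores.
document_store = []   # plays the role of MongoDB
metric_store = []     # plays the role of VictoriaMetrics

broker = Broker()
broker.subscribe("job.metrics", lambda e: document_store.append(e))
broker.subscribe("job.metrics",
                 lambda e: metric_store.append(("duration_ms", e["duration_ms"])))

# One publisher event fans out to both consumers.
broker.publish("job.metrics", {"job_id": "j-42", "duration_ms": 1800})
print(document_store)  # [{'job_id': 'j-42', 'duration_ms': 1800}]
print(metric_store)    # [('duration_ms', 1800)]
```

Because the broker decouples publisher from subscribers, new consumers can be added later without changing the applications that produce the events.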
The raw event data in the asynchronous queue never changes. You can use this raw data for the following purposes:
- Debugging errors.
- Replaying past events to review historical logs.
- Powering Artificial Intelligence (AI) or Machine Learning (ML) applications.
The following image illustrates the architecture of the Pulse Queue.


For additional help, visit www.acceldata.force.com or call our service desk at +1 844 9433282.
Copyright © 2026