At its core, CyberFLOWS builds on Oracle Advanced Queuing (AQ) and its successor, Transactional Event Queues (TxEventQ, commonly abbreviated TEQ). Messages between agents are enqueued into multi-consumer queues, so multiple subscribers can receive the same message in parallel.
This implements a topic-based publish/subscribe pattern inside the database: an agent (or any GRC application) enqueues an event, and all subscribing agents or services dequeue copies of that event. Oracle TEQ supports defining multiple event streams (partitions) for a queue – conceptually similar to Kafka topics with partitions – enabling horizontal scale-out of throughput.
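To make the Kafka analogy concrete, here is a small illustrative sketch of key-based stream routing. TEQ performs its own routing internally when a queue is created with multiple event streams; the function and hash scheme below are purely conceptual, not TEQ's actual algorithm.

```python
import hashlib

def pick_event_stream(key: str, num_streams: int) -> int:
    """Illustrative only: map a message key to one of N event streams,
    the way Kafka hashes a record key to a partition. TEQ does its own
    internal routing; this just demonstrates the concept."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_streams

# Events sharing a key land in the same stream, preserving per-key order
# while different keys spread across streams for parallel throughput.
assert pick_event_stream("risk-42", 8) == pick_event_stream("risk-42", 8)
```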
Because TEQ is implemented as internal database tables, it inherits all the operational strengths of Oracle Database: guaranteed message persistence, high availability, and fault tolerance – Oracle Data Guard can even replicate queues to preserve messages for DR scenarios. Enqueues and dequeues participate in the same ACID transactions as your business data, ensuring consistency without complex two-phase commits.
In practical terms, this means messages are never lost and each event is delivered exactly once to each intended consumer, without duplicates or gaps, even in the face of failures.
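The same-transaction guarantee is visible in python-oracledb's AQ interface (`connection.queue()`, `msgproperties()`, `enqone()`): the audit insert and the enqueue below become durable together in one commit, or not at all. The table and queue names are illustrative; `conn` is assumed to be an open python-oracledb connection to a schema where they exist.

```python
import json

def record_and_publish(conn, incident: dict) -> None:
    """Insert an audit row and enqueue the matching event in ONE database
    transaction, so either both become durable or neither does.
    Table and queue names here are illustrative placeholders."""
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO incident_audit (id, detail) VALUES (:1, :2)",
            [incident["id"], json.dumps(incident)],
        )
    queue = conn.queue("RISK_ALERT_Q")        # assumed multi-consumer queue
    payload = json.dumps(incident).encode()   # RAW payload as bytes
    queue.enqone(conn.msgproperties(payload=payload))
    conn.commit()  # a single ACID commit covers both the row and the message
```

If anything fails before the commit, a rollback discards the row and the message together – no compensating logic or 2PC coordinator is needed.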
CyberFLOWS uses the TxEventQ REST APIs built into Oracle REST Data Services (ORDS) to avoid any custom broker code. ORDS exposes endpoints to create topics, manage consumer groups, and push or pull messages over HTTPS.
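A publish call against such an endpoint can be built with nothing but the standard library. Note that the base URL and the route path below are placeholders – the exact TxEventQ routes vary by ORDS version, so check your installation's REST API reference before relying on them.

```python
import json
from urllib import request

ORDS_BASE = "https://example.com/ords/grc"  # placeholder ORDS base URL

def build_publish_request(topic: str, event: dict) -> request.Request:
    """Build an HTTPS POST that publishes one JSON event to a TEQ topic.
    The path segment below is a placeholder, not a documented route --
    consult your ORDS version's TxEventQ REST reference."""
    url = f"{ORDS_BASE}/_/db-api/stable/database/txeventq/topics/{topic}/messages"
    body = json.dumps(event).encode("utf-8")
    return request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_publish_request("RiskAlert", {"severity": "high", "source": "scanner-7"})
# request.urlopen(req) would perform the actual publish (with auth added).
```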
For example, an OCI-hosted Python agent can POST a message to the “RiskAlert” topic via ORDS, while an Oracle APEX app concurrently GETs new messages from that same topic – all with standard REST calls and JSON payloads. Under the hood, TEQ ensures each message is stored and forwarded to all subscribers with minimal latency (optimized by in-memory caching and Oracle’s internal scheduling). Agents can include correlation identifiers in messages to link requests with responses, and Oracle’s queuing system allows filtering or selective consumption by correlation ID. Priorities can also be assigned to messages, so urgent events jump to the front of the queue – useful in a risk management scenario where a “critical” incident alert should be processed before low-priority tasks.
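On the native-client side, correlation identifiers and priorities map directly onto AQ message properties. python-oracledb's `msgproperties()` accepts `correlation` and `priority` keywords, and in Oracle AQ lower priority numbers are served first; the specific values (1 vs. 5) and the helper name below are illustrative choices, not part of any API.

```python
def alert_properties(conn, payload: bytes, correlation_id: str,
                     critical: bool = False):
    """Build AQ message properties for an alert on an open python-oracledb
    connection. `correlation` lets consumers dequeue selectively by ID;
    in Oracle AQ, a lower priority number is dequeued first. The values
    1 and 5 here are illustrative."""
    return conn.msgproperties(
        payload=payload,
        correlation=correlation_id,    # links this event to its request
        priority=1 if critical else 5, # critical alerts jump the queue
    )
```

On the consuming side, a dequeuer can filter for a specific exchange by setting `queue.deqoptions.correlation` to the desired ID before calling `deqone()`.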