Time-Series Metrics
Time-series metrics are numeric measurements tracked over time, such as latency, error rate, or CPU usage. They are the core signal type for monitoring applications, infrastructure, and trading systems in high-volume environments.
What Are Time-Series Metrics?
A time-series metric is a stream of timestamped value points identified by a metric name and a set of key–value dimensions (labels or tags). Unlike logs (unstructured events) or traces (request flows), metrics are compact, regular, and optimized for aggregation and alerting in a time-series database.
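As a rough sketch, each point in such a stream can be modeled as a metric name, a label set, a timestamp, and a numeric value. The Python snippet below is a simplified illustration of that shape, not any particular database's data model; the metric and label names are made up.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class MetricPoint:
    # One timestamped value; name + labels identify the series it belongs to.
    name: str
    labels: tuple          # e.g. (("host", "ld4-app01"), ("region", "eu"))
    timestamp: datetime
    value: float

point = MetricPoint(
    name="cpu_usage_percent",
    labels=(("host", "ld4-app01"), ("region", "eu")),
    timestamp=datetime.now(timezone.utc),
    value=42.0,
)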
Application metrics describe behavior inside software components (for example, order_gateway_latency_ms, risk_check_failures_total). Infrastructure metrics capture the health of the platform running those components (for example, CPU load, NIC packet drops, disk write latency).
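As a minimal sketch of how such application metrics are typically registered and updated, the snippet below uses the Python prometheus_client library. The metric names come from the examples above; the handler function, label names, and label values are hypothetical.

from prometheus_client import Counter, Histogram, start_http_server
import time

# Application metric: order gateway latency, in milliseconds (histogram).
ORDER_GATEWAY_LATENCY_MS = Histogram(
    "order_gateway_latency_ms",
    "Order gateway round-trip latency in milliseconds",
    ["service", "env"],
    buckets=(0.5, 1, 2, 5, 10, 25, 50, 100),
)

# Application metric: failed pre-trade risk checks (counter).
RISK_CHECK_FAILURES = Counter(
    "risk_check_failures_total",
    "Total failed pre-trade risk checks",
    ["service", "env"],
)

def record_risk_check(passed: bool) -> None:
    if not passed:
        RISK_CHECK_FAILURES.labels(service="oms", env="prod").inc()

def handle_order(order):          # hypothetical order handler
    start = time.perf_counter()
    ...                           # route the order (placeholder)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    ORDER_GATEWAY_LATENCY_MS.labels(service="oms", env="prod").observe(elapsed_ms)

start_http_server(8000)           # expose a /metrics endpoint for scraping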
Together they form the backbone of observability and feed SLOs, anomaly detection, and capacity planning across trading, risk, and back-office systems.
From Prometheus-Style Metrics to Time-Series Storage
In Prometheus-style systems, a metric is defined by name + {label_set}. For example:
trade_events_total{service="oms",env="prod",venue="XETR"}
Each unique label set becomes a distinct time series in the storage layer. This mapping means label design directly determines metric cardinality and, with it, the cost of storing and querying the resulting series.
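A quick back-of-the-envelope check makes the cardinality point concrete: the series count for a metric is bounded by the product of its label value counts. The label values below are illustrative.

# Rough upper bound on series count for trade_events_total:
# one series per distinct combination of label values.
services = ["oms", "ems", "risk"]          # 3 values (illustrative)
envs = ["prod", "uat"]                     # 2 values
venues = ["XETR", "XLON", "XNYS", "XPAR"]  # 4 values

series_upper_bound = len(services) * len(envs) * len(venues)
print(series_upper_bound)                  # 24 series: manageable

# Adding a high-cardinality label such as client_id multiplies this again:
clients = 50_000                           # hypothetical client count
print(series_upper_bound * clients)        # 1,200,000 series: likely too many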
A metrics backend or time-series database ingests these points as a continuous stream, persists them, and serves rollups and aggregations over large time windows.
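Ingestion and query APIs vary by backend; as a backend-agnostic sketch of what a rollup is, the snippet below aggregates raw latency points into one-minute summaries with pandas. The sample data is synthetic.

import pandas as pd

# Raw points: one row per observation, as delivered by streaming ingestion.
points = pd.DataFrame(
    {
        "timestamp": pd.to_datetime(
            ["2024-01-02 09:00:01", "2024-01-02 09:00:30",
             "2024-01-02 09:01:05", "2024-01-02 09:01:40"]
        ),
        "latency_ms": [0.8, 1.2, 15.0, 0.9],
    }
).set_index("timestamp")

# One-minute rollups: the kind of pre-aggregated view served over large windows.
rollup = points["latency_ms"].resample("1min").agg(["count", "mean", "max"])
print(rollup)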
Use Cases in Capital Markets
Capital markets generate dense metric streams: order gateway and FIX session latency, match engine throughput, risk engine queue depth, dropped market-data packets, and ingestion rates into the market-data time-series database. Infrastructure metrics for co-located servers, network links, and storage arrays complement these application-level signals.
By storing metrics as time series, firms can correlate infrastructure incidents with trading behavior (for example, a spike in GC pauses preceding increases in order_rejects_total), enforce real-time risk limits, and validate regulatory trade lifecycle monitoring. The same metric patterns apply to backtesting environments, where historical time-series metrics help explain performance regressions and capacity bottlenecks alongside price and order-book data.
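The correlation itself can be as simple as aligning two rolled-up series and checking a lagged relationship. The snippet below is a sketch on synthetic data, assuming one-minute rollups of a JVM GC-pause metric and of order_rejects_total.

import pandas as pd

# Synthetic one-minute rollups of two metrics (values are made up).
idx = pd.date_range("2024-01-02 09:00", periods=6, freq="1min")
gc_pause_ms = pd.Series([2, 3, 45, 4, 2, 3], index=idx)     # GC pause time per minute
order_rejects = pd.Series([0, 1, 1, 12, 1, 0], index=idx)   # increase in order_rejects_total

# Lagged correlation: do reject spikes follow GC-pause spikes by one interval?
print(gc_pause_ms.corr(order_rejects))           # same-minute correlation (weak)
print(gc_pause_ms.shift(1).corr(order_rejects))  # GC pauses one minute earlier (strong)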