QuestDB 2025: Year in Review

2025 was transformative for QuestDB. Across 16 open-source releases and 15 enterprise releases, we shipped features for the most demanding time-series workloads.

Capital markets remains our deepest focus. Trading desks, quants, algo traders, and market makers drove features like arrays for order book analytics, nanosecond timestamps for fuller time resolution, DECIMAL for exact calculations, optimized joins for TCA, and enterprise-grade replication.

But we're also seeing strong adoption beyond finance. DECIMAL is critical for crypto, where tracking Bitcoin balances requires high precision. Ingestion robustness and high throughput for high-cardinality workloads serve energy, aerospace, and physical AI/robotics teams handling millions of distinct sensor series. And QPS and latency optimizations power fraud detection across fintech and crypto/web3.

We've also invested in open formats. Parquet support enables seamless interoperability with the broader data ecosystem, with more to come in 2026.

Here's what we shipped in 2025.

Data Types: Precision Meets Performance

Capital markets data comes in shapes that traditional databases struggle with. We needed types that match how financial data actually works.

Arrays: Order Books Without Reshaping

Release: 9.0 (July 2025)

Order book snapshots are inherently multidimensional: 50-100 bid levels, 50-100 ask levels, each with price and size. Traditionally, you'd flatten this into separate rows, destroying the semantic relationship and making queries painful.

QuestDB 9.0 introduced true N-dimensional arrays with NumPy-like operations and zero-copy performance. Now you can store complete order book snapshots as native array columns:

CREATE TABLE orderbook_snapshots (
    timestamp TIMESTAMP,
    symbol SYMBOL,
    bids DOUBLE[][], -- [price, size] pairs
    asks DOUBLE[][]  -- [price, size] pairs
) TIMESTAMP(timestamp) PARTITION BY DAY;

Query the entire order book history without reshaping. Compute cumulative depth with array_cum_sum(). Slice specific levels. All with zero-copy efficiency.
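
For example, cumulative depth becomes a one-line expression. Here's a minimal sketch against the table above, assuming QuestDB's 1-based, NumPy-style array indexing and the [price, size] layout from the schema comments:

-- Running bid depth: slice the size column (2nd element of each level) and cum-sum it
SELECT
    timestamp,
    symbol,
    bids[1, 1] AS best_bid_price,                 -- price at the top level
    array_cum_sum(bids[1:, 2]) AS cum_bid_depth   -- cumulative size across levels
FROM orderbook_snapshots
WHERE symbol = 'AAPL'
LIMIT 10;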

See my previous post on analyzing market depth for a complete example.

Why it matters: Market making algorithms need to analyze order book imbalances across hundreds of securities. Risk managers need historical depth at specific price levels. With arrays, these queries run in milliseconds, not minutes.

Nanoseconds: When Microseconds Aren't Enough

Release: 9.1 (October 2025)

Some workloads require nanosecond resolution—HFT, market data infrastructure, scientific instrumentation, or anywhere event ordering at fine granularity matters. A microsecond timestamp might seem precise, but when you're reconstructing trade sequences or debugging timing-sensitive systems, that precision isn't enough.

The new TIMESTAMP_NS type delivers nanosecond-precision time-series:

CREATE TABLE trades_hft (
    timestamp TIMESTAMP_NS,
    symbol SYMBOL,
    price DOUBLE,
    size DOUBLE,
    venue SYMBOL
) TIMESTAMP(timestamp) PARTITION BY HOUR;

This is a new data type alongside the existing TIMESTAMP type (microsecond precision). Use TIMESTAMP for most workloads, TIMESTAMP_NS when you need nanosecond resolution.
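
The two types convert explicitly. A minimal sketch against the table above, assuming the usual :: cast syntax; downcasting truncates the extra digits:

-- Compare full nanosecond resolution with the microsecond view of the same instant
SELECT
    timestamp,                          -- TIMESTAMP_NS, nanosecond precision
    timestamp::timestamp AS ts_micros   -- truncated to microsecond precision
FROM trades_hft
LIMIT 5;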

Why it matters: When debugging why your execution algo slipped 0.1 basis points, nanosecond timestamps show you exactly what happened between receiving the market data update and submitting the order.

DECIMAL: Exact Financial Calculations

Release: 9.2 (November 2025)

Floating-point arithmetic is fast but introduces rounding errors. When you're calculating P&L, portfolio valuations, regulatory capital requirements, or tracking crypto balances, "close enough" isn't acceptable. You need mathematically exact results.

QuestDB 9.2 introduced native DECIMAL(precision, scale) with up to 76 digits of precision:

CREATE TABLE positions (
    timestamp TIMESTAMP,
    account_id SYMBOL,
    security_id SYMBOL,
    quantity DECIMAL(18, 8),
    avg_price DECIMAL(18, 8),
    market_value DECIMAL(28, 8)
) TIMESTAMP(timestamp) PARTITION BY DAY;

The system automatically optimizes storage from 1 byte to 32 bytes depending on requirements. All arithmetic operations maintain precision.
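
For instance, position valuation on the table above stays exact end to end; a minimal sketch:

-- DECIMAL * DECIMAL stays exact; no binary floating-point rounding
SELECT
    account_id,
    security_id,
    quantity * avg_price AS notional
FROM positions
LIMIT 10;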

Why it matters: When you're managing billions in assets, rounding errors compound. DECIMAL ensures your books balance to the penny and your regulatory reports pass audit.

JOINs: Purpose-Built for Trading Workflows

SQL JOINs weren't designed for time-series data. We needed temporal join semantics and the performance to handle real trading workloads.

ASOF JOIN: Up to 100x Faster Point-in-Time Analysis

Release: 8.3 (April 2025)

ASOF JOINs match events to the most recent preceding event in another table. This is fundamental for capital markets: joining trades to market data as-of execution time, matching orders to book snapshots, or enriching transactions with reference data.

We delivered up to 100x speedups on production datasets, with four specialized algorithms:

  • Fast: Optimized for frequent matches
  • Memoized: Caches lookups for repeated keys
  • Light: Minimal memory footprint
  • Dense: Handles sparse matches efficiently (9.2, November)

SQL hints let you choose the optimal strategy:

SELECT /*+ USE_ASOF_ALGO('DENSE') */
    trades.timestamp,
    trades.symbol,
    trades.price,
    quotes.bid,
    quotes.ask
FROM trades
ASOF JOIN quotes ON (symbol)
WHERE trades.timestamp BETWEEN '2025-01-01' AND '2025-12-31';

Why it matters: Backtesting a trading strategy across 100 symbols and 1 year of tick data went from "overnight batch job" to "interactive query." Analysts can iterate on ideas in real-time.

RIGHT and FULL OUTER JOINs: SQL Standard Compliance

Release: 9.1 (October 2025)

We completed our SQL JOIN support with RIGHT OUTER and FULL OUTER joins. Now you can write complex multi-table analytics using standard SQL patterns without workarounds.

-- Find all symbols that traded in either venue A or B (or both)
SELECT COALESCE(a.symbol, b.symbol) AS symbol,
       a.volume AS volume_a,
       b.volume AS volume_b
FROM venue_a_trades a
FULL OUTER JOIN venue_b_trades b ON (a.symbol = b.symbol);

Why it matters: Compliance reports often require "show me everything in dataset A, B, or both." With full SQL JOIN support, you write the query naturally instead of complex unions.

Markout Horizon: Building Toward State-of-the-Art TCA

Release: 9.2.1 (November 2025)

We're on a journey to bring state-of-the-art Transaction Cost Analysis (TCA) and markout analysis to QuestDB users. TCA measures execution quality by comparing your trade price to market prices at intervals after execution (the "markout horizon"). Did the market move against you? How much did your order impact price?

Computing markout horizons requires CROSS JOINs between trades and market data snapshots at multiple time offsets. These queries used to be batch jobs.

One step in the right direction: our CROSS JOIN optimization with query hints reduced some production TCA queries from 135 seconds to 17 seconds while cutting memory usage from 8.4GB to 0.1GB. There's a lot more to come to make this feature complete:

WITH /** We will be releasing simplified markout syntax in 2026 **/
horizon AS (
    SELECT 1_000_000_000 * sec_offs AS nsec_offs, sec_offs::int AS sec_offs
    FROM (SELECT x - 61 AS sec_offs FROM long_sequence(121))
),
points AS (
    SELECT /*+ markout_horizon(eq_equities_trades horizon) */ *
    FROM (
        SELECT timestamp, symbol, price, size, sec_offs,
               timestamp + nsec_offs AS ts
        FROM eq_equities_trades CROSS JOIN horizon
        WHERE timestamp IN yesterday() AND symbol = 'AAPL'
        ORDER BY timestamp + nsec_offs
    ) TIMESTAMP(ts)
),
joined AS (
    SELECT /*+ asof_dense(p m) */ *
    FROM points p ASOF JOIN eq_equities_market_data m ON (symbol)
),
markouts AS (
    SELECT sec_offs, size, price - l2price(size, bids[2], bids[1]) AS markout
    FROM joined
)
SELECT sec_offs,
       weighted_avg(markout, size) AS wavg,
       weighted_stddev(markout, size) AS wstddev
FROM markouts
ORDER BY sec_offs;

Why it matters: Post-trade analysis that previously required overnight batch processing can now run interactively. This is just the beginning—we're working on simplified syntax and deeper TCA capabilities for 2026.

Materialized Views: OHLC at Scale

Journey: 8.2.3 (Beta, March) → 8.3 (Enabled by default, April) → 8.3.1 (GA, April) → Enterprise replication support → 9.2.3 (Production hardening, December)

Capital markets generate billions of ticks per day. Traders need OHLC bars, VWAP, volume profiles: aggregations over enormous datasets that must update in real-time.

Materialized Views automatically downsample tick data into pre-computed aggregations that incrementally refresh as new data arrives:

CREATE MATERIALIZED VIEW ohlc_1min AS
SELECT
    timestamp,
    symbol,
    first(price) AS open,
    max(price) AS high,
    min(price) AS low,
    last(price) AS close,
    sum(size) AS volume
FROM trades
SAMPLE BY 1m
ALIGN TO CALENDAR;

The view updates automatically. Dashboard queries hit the materialized view, not raw ticks. Response time goes from seconds to milliseconds.
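
A dashboard panel then reads the view instead of the raw ticks; a minimal sketch, assuming an interval function like today() (the same family as the yesterday() used in the markout example above):

-- Millisecond reads: scan pre-aggregated 1-minute bars, not raw ticks
SELECT timestamp, symbol, open, high, low, close, volume
FROM ohlc_1min
WHERE symbol = 'AAPL' AND timestamp IN today();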

Performance milestones:

  • 3.3x faster deduplication (~1.7M rows/s vs ~520k rows/s) - 9.0
  • High-frequency updates optimized to use significantly fewer resources - 9.2.3
  • Three refresh modes: TIMER (scheduled), MANUAL (on-demand), PERIOD (calendar-aligned); a TIMER sketch follows below
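
A minimal sketch of the TIMER mode, assuming the REFRESH EVERY clause from the materialized view docs:

-- Refresh on a schedule rather than on every ingest
CREATE MATERIALIZED VIEW ohlc_1h
REFRESH EVERY 1h
AS
SELECT
    timestamp,
    symbol,
    first(price) AS open,
    max(price) AS high,
    min(price) AS low,
    last(price) AS close,
    sum(size) AS volume
FROM trades
SAMPLE BY 1h;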

Enterprise: Full replication support ensures materialized views stay synchronized across primary/replica topologies. If your primary fails, the replica has the same pre-computed views ready immediately.

Why it matters: Real-time dashboards serving hundreds of concurrent users don't re-scan trillion-row tick tables. Materialized views provide instant aggregations while maintaining data freshness.

Performance: Handling Market Data at Scale

Capital markets data is high-cardinality and high-velocity. We needed performance that scales to thousands of symbols, millions of order IDs, and billions of events.

Symbol Auto-Scaling: No More Ingestion Bottlenecks

Release: 9.1 (October 2025, opt-in) → 9.2 (November, enabled by default)

Market data has inherent high cardinality: thousands of tickers, venues, counterparties, order IDs. Previously, you had to pre-configure symbol capacity by hand (see the sketch below). Guess wrong, and ingestion stalled.

Symbol auto-scaling dynamically grows capacity with distinct values:

  • ~2 million distinct values ingested in 2.5 seconds (previously: hours)
  • ~40 million distinct values ingested in 2.5 minutes (previously: never completed)
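
For context, here's the old workaround this removes; the capacity figure is illustrative:

-- Pre-9.1: hand-tune per-column symbol capacity or risk ingestion stalls
CREATE TABLE orders_manual (
    timestamp TIMESTAMP,
    order_id SYMBOL CAPACITY 16777216,  -- over-provisioned guess
    venue SYMBOL,
    price DOUBLE
) TIMESTAMP(timestamp) PARTITION BY DAY;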

Why it matters: Options market makers tracking millions of strikes across expiries no longer hit ingestion bottlenecks. Multi-venue aggregators handling millions of order IDs per day ingest smoothly.

Query Optimization: Interactive Analytics

Multiple releases throughout 2025:

  • 10x faster ORDER BY...LIMIT (9.0.2): Top-N queries parallelized. "Show me the 100 largest trades" runs 10x faster (sketch after this list).
  • 2x faster count_distinct() (9.2.1): Essential for "how many unique counterparties traded this symbol?"
  • 4x faster SQL parsing (9.2.3): Higher concurrent query throughput for dashboards serving multiple users.
  • 2-3x faster single-key GROUP BY (9.2.3): Common aggregation patterns accelerated.
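
The Top-N pattern in question; a minimal sketch over a trades table:

-- Parallelized in 9.0.2: the 100 largest trades of the previous day
SELECT timestamp, symbol, price, size
FROM trades
WHERE timestamp IN yesterday()
ORDER BY size DESC
LIMIT 100;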

Continuous Profiling: Find Bottlenecks in Production

Release: 9.1 (October 2025)

Built-in async-profiler generates CPU and memory flame graphs with ~10% overhead. Attach to running instances on-demand or enable at startup.

No external profilers. No performance impact unless actively profiling. Identify query bottlenecks in production.

Why it matters: When a complex risk calculation starts running slowly in production, you can profile it immediately without restarting the database or deploying monitoring tools.

Open Formats: Integration Without Data Duplication

Capital markets firms have heterogeneous infrastructure: Python for quant research, SQL for risk reporting, Spark for batch analytics, Snowflake for data warehousing. Moving data between systems shouldn't require ETL pipelines.

Parquet: Seamless Interoperability

Releases: Multiple releases throughout 2025

QuestDB already supported in-place Parquet conversion for both OSS and Enterprise, allowing other tools to query your data directly without duplication. This year we added more capabilities:

Query export (9.1.1, November): Export specific query results to Parquet with COPY TO:

COPY (
    SELECT symbol, timestamp, price, size
    FROM trades
    WHERE timestamp IN '2025-12-01'
) TO '/data/exports/trades_20251201.parquet';

Previously you could only convert whole partitions. Now you can export any query result.

Import optimizations: QuestDB recognizes timestamp-sorted Parquet files and ingests them faster using optimized algorithms.

Enterprise automated cold storage: Hot (RAM/disk) → Warm (local Parquet) → Cold (object storage Parquet). Aged data moves automatically to cost-effective object storage (S3, Azure, GCS). Queries remain transparent; the database handles retrieval automatically.

Why it matters:

  • Quants export backtesting results directly to Jupyter notebooks
  • Risk managers send positions to Snowflake for regulatory reports
  • Compliance teams archive audit trails to object storage for 7-year retention
  • Other tools can query QuestDB data in place using Parquet without copying
  • All without ETL pipelines or data duplication

Binary Line Protocol: Efficient Ingestion

Release: 9.0 (July 2025)

Binary extension to InfluxDB Line Protocol delivers efficient ingestion of arrays and doubles. Higher throughput, smaller payloads, full backward compatibility.

Why it matters: Market data feeds pushing millions of updates per second benefit from reduced serialization overhead.

Enterprise: Production-Grade for Financial Institutions

Capital markets firms require enterprise-grade security, high availability, and operational robustness. QuestDB Enterprise delivered throughout 2025.

Authentication & Authorization: OAuth2/OIDC Evolution

Multiple releases, 2.2.2 through 3.1.2:

Financial institutions integrate databases with corporate identity providers such as EntraID and PingFederate. We continuously improved OAuth2/OIDC support:

  • State parameter support for CSRF protection
  • ROPC flow (Resource Owner Password Credentials) for PgWire connections
  • Token-based authentication for PgWire protocol
  • Granular permissions: CSV import owner assignment, full user/group/service account management, built-in admin can be disabled
  • Robustness: OIDC ignores unrecognized configuration objects instead of failing
  • Enhanced error logging for debugging authentication issues

Why it matters: Trading firms with thousands of users can integrate QuestDB with corporate SSO, apply fine-grained permissions, and audit access without custom authentication layers.

Enterprise Replication: Robustness and Observability

Releases: 2.3.2 (GCS support, June), continuous improvements through 3.1.2

High-availability architectures require replication across regions. QuestDB Enterprise delivered:

  • Storage backend support: S3, Azure Blob Storage, Google Cloud Storage, and NFS
  • Robust error handling: Transient error tolerance, configurable retry policies, timeout handling
  • Enhanced observability: Per-table replication metrics, Prometheus integration, configurable metric retention
  • Materialized views + replication: Views stay synchronized across primary/replica topologies
  • Large segment support: Handles WAL segments >4GiB for high-throughput tables
  • Stability improvements: Fixed replica suspension issues, optimized segment collection at startup

Why it matters: Global trading firms with primary/replica deployments across multiple regions need reliable replication that tolerates transient network issues and provides visibility into replication lag. Whether you use cloud object storage or on-premises NFS, replication just works.

Parquet for Cold Storage: Cost-Effective Archival

Enterprise 3-tiered storage:

  • Hot tier: RAM/disk for active data (last 7 days of trades)
  • Warm tier: Local Parquet for recent-but-inactive data (last 90 days)
  • Cold tier: Object storage Parquet for archival (everything older)

Queries remain transparent. The database retrieves data automatically from the appropriate tier.

Why it matters: Storing years of tick data in hot storage is prohibitively expensive. 3-tiered storage moves aged data to object storage automatically, reducing costs by 90%+ while maintaining query transparency.

What This Means for Capital Markets

QuestDB is now the database purpose-built for quantitative finance:

  • Order book analysis: Arrays store snapshots natively, queries run in milliseconds
  • OHLC aggregations: Materialized views downsample billions of rows, updating in real-time
  • HFT precision: Nanosecond timestamps for sub-microsecond accuracy
  • Exact calculations: DECIMAL eliminates rounding errors in P&L and risk
  • TCA and post-trade analysis: Optimized JOINs (ASOF, CROSS) handle complex temporal queries interactively
  • Seamless integration: Parquet export/import and 3-tiered storage eliminate data duplication
  • Production-grade security: OAuth2/OIDC, granular permissions, audit logging
  • Enterprise replication: Robust replication across object storage and on-premises NFS

Whether you're building a trading platform, risk analytics system, market surveillance tool, or regulatory reporting pipeline, QuestDB delivers the performance, precision, and integration capabilities capital markets demand.

Looking Ahead: Early 2026

Already merged or nearly complete:

  1. Window JOIN: Time-windowed join semantics for complex event processing (already merged to master)
  2. Views: Standard SQL views for abstraction layers (PR #5720, nearly complete)
  3. CROSS JOIN UX improvements: More user-friendly dedicated syntax for markout analysis
  4. Parquet Evolution: More efficient queries, Web Console import, deeper 3-tiered storage integration
  5. LLM Integration: Query generation and data exploration from the Web Console (UI PR #507)
  6. Open Standards: Deeper interoperability with the analytics ecosystem
  7. Backups to Object Store: QuestDB Enterprise can stream backups to an object store for fast read-replica creation or point-in-time recovery.

2025 delivered production-grade maturity. 2026 starts with seamless integration through views, window joins, LLM-assisted analytics, and universal Parquet support.

Try QuestDB

Open Source: github.com/questdb/questdb

Enterprise: questdb.com/enterprise

Questions? Join our Slack community or reach out to the team.


Happy New Year from the QuestDB team! If you're working on capital markets analytics and want to see how QuestDB handles your specific use case, reach out. We're always interested in learning about new challenges.
