
Benchmark and comparison: QuestDB vs. InfluxDB

This article compares QuestDB and InfluxDB on performance, architecture, and ease of use. Last updated December 2, 2025 with benchmarks for InfluxDB v1.11, v2.7.12, and QuestDB 9.2.2.

For InfluxDB 3 Core Alpha benchmarks, see our latest post.

Key results: QuestDB ingests data up to 36x faster than InfluxDB (7.33M rows/sec vs ~203K rows/sec at 1M hosts), runs analytical queries 21x to 130x faster, and completes heavy aggregations 17x faster. InfluxDB v1 is marginally faster (up to ~2.5x) on two specific simple aggregation queries.

Introduction to InfluxDB and QuestDB

QuestDB (released 2019) is an open-source time-series database licensed under Apache License 2.0. Written in zero-GC Java and C++, it is designed for low-latency, high-throughput ingestion and fast analytical queries using standard SQL. QuestDB uses a columnar storage model where all time series live in a single table structure, avoiding per-series overhead.

InfluxDB (released 2013) is a time-series database developed by InfluxData. The open-source versions (v1 under MIT, v2 OSS under proprietary license) are written in Go. InfluxDB uses a measurement-based data model where each unique combination of tags creates a separate series with its own storage structure.

Aspect              | QuestDB                     | InfluxDB
License             | Apache 2.0                  | v1: MIT, v2 OSS: Proprietary
Implementation      | Java, C++                   | Go
Query language      | Standard SQL                | InfluxQL, Flux
Data model          | Relational (tables + rows)  | Measurement-based (series)
Ingestion protocols | ILP, PostgreSQL wire, HTTP  | ILP, HTTP API
High cardinality    | No performance impact       | Performance degrades

Performance benchmarks

We use the open-source, industry-standard Time Series Benchmark Suite (TSBS) for all benchmarks, which supports InfluxDB (v1 and v2) and QuestDB out of the box.

Hardware: AWS EC2 r8a.8xlarge (32 vCPU, 256 GB RAM, AMD EPYC), GP3 EBS storage (20,000 IOPS, 1 GB/s throughput)

Software: Ubuntu 22.04, InfluxDB v1.11, InfluxDB v2.7.12, QuestDB 9.2.2 — all with default configurations

Ingestion benchmark

We test a cpu-only scenario with two days of CPU data for various numbers of simulated hosts (100, 1K, 4K, 100K, and 1M). This tests how each database handles increasing data volumes and cardinality.

In time-series databases, high cardinality refers to having many unique values in indexed columns—for example, millions of unique symbols, account IDs, or trading venues. More hosts in this benchmark means higher cardinality.

Example commands:

$ ./tsbs_generate_data --use-case="cpu-only" --seed=123 --scale=4000 \
    --timestamp-start="2016-01-01T00:00:00Z" \
    --timestamp-end="2016-01-03T00:00:00Z" \
    --log-interval="10s" --format="influx" > /tmp/influx_data
$ ./tsbs_load_influx --db-name=benchmark --file=/tmp/influx_data \
    --urls=http://localhost:8086 --workers=32

The results for ingestion with 32 workers:

Rows ingested per second, higher is better. QuestDB peaks at 11.36M rows/sec, up to 36x faster than InfluxDB v1 and 30x faster than v2:

Scale                       | InfluxDB v1.11  | InfluxDB v2    | QuestDB         | QuestDB vs v1 | QuestDB vs v2
100 hosts (1.7M rows)       | 1.23M rows/sec  | 727K rows/sec  | 4.02M rows/sec  | 3.3x faster   | 5.5x faster
1,000 hosts (17M rows)      | 1.17M rows/sec  | 667K rows/sec  | 7.48M rows/sec  | 6.4x faster   | 11.2x faster
4,000 hosts (69M rows)      | 787K rows/sec   | 514K rows/sec  | 8.39M rows/sec  | 10.7x faster  | 16.3x faster
100,000 hosts (86M rows)    | 491K rows/sec   | 402K rows/sec  | 11.36M rows/sec | 23x faster    | 28x faster
1,000,000 hosts (432M rows) | ~203K rows/sec  | 241K rows/sec  | 7.33M rows/sec  | 36x faster    | 30x faster

Key observations:

  • QuestDB is 3.3x to 36x faster than InfluxDB, with the gap widening at scale
  • QuestDB peaks at 11.36M rows/sec at 100K hosts, maintaining high throughput even at 1M hosts
  • InfluxDB throughput degrades significantly as cardinality increases (both v1 and v2)

Why does cardinality affect InfluxDB?

The performance gap widens with scale because of how each database handles cardinality. InfluxDB creates a separate TSM (Time-Structured Merge) tree for each unique series. At 100,000 hosts with 10 metrics each, that's 1,000,000 separate storage structures to maintain, index, and compact—explaining the throughput degradation.
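The arithmetic behind that claim can be sketched in a few lines, assuming (as described above) one TSM series key per host/field combination and 10 fields per host in the TSBS cpu-only data set:

```python
# Back-of-envelope: TSM series keys InfluxDB must maintain, assuming
# one series key per host x field pair in the cpu-only benchmark.
FIELDS_PER_HOST = 10  # usage_user, usage_system, usage_idle, ...

series_keys = {hosts: hosts * FIELDS_PER_HOST
               for hosts in (100, 1_000, 4_000, 100_000, 1_000_000)}

for hosts, keys in series_keys.items():
    # each key is a separate structure to index and compact
    print(f"{hosts:>9,} hosts -> {keys:>10,} series keys")
```

At the top end, the benchmark asks InfluxDB to maintain ten million separate storage structures, while QuestDB's single-table layout is unaffected.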

QuestDB stores all data in a single columnar table regardless of cardinality. Adding more hosts simply adds more rows to the same structure. Throughput starts at ~8M rows/sec for 1K-4K hosts, then peaks at 11.36M rows/sec at 100K hosts as parallelism is fully utilized, before settling at 7.3M rows/sec at 1M hosts due to memory pressure at extreme scale—still maintaining strong performance throughout.

Query performance

While QuestDB outperforms InfluxDB for ingestion, query performance is equally essential for time-series data analysis.

As part of the standard TSBS benchmark, we test several types of popular time series queries:

  • single-groupby: Aggregate CPU metrics for random hosts over specified time ranges
  • double-groupby: Aggregate across ALL hosts, grouped by host and time intervals
  • high-cpu-all: Full table scan finding hosts with CPU utilization above threshold

All queries target two days of 4000 emulated host data.

To run the benchmark:

$ ./tsbs_generate_queries --use-case="devops" --seed=123 --scale=4000 \
    --timestamp-start="2016-01-01T00:00:00Z" \
    --timestamp-end="2016-01-03T00:00:00Z" \
    --queries=1000 --query-type="single-groupby-1-1-1" \
    --format="influx" > /tmp/influx_query
$ ./tsbs_run_queries_influx --file=/tmp/influx_query \
    --db-name=benchmark --workers=1

Single-groupby queries

Query latency, lower is better. Query format is metrics-hosts-hours; results are averaged over 10 runs. QuestDB pulls ahead as query complexity grows, up to 4.2x faster on 5-1-12:

Query                 | InfluxDB v1.11 | InfluxDB v2.7.12 | QuestDB | Best
single-groupby-1-1-1  | 0.42 ms        | 0.73 ms          | 1.06 ms | InfluxDB v1
single-groupby-1-1-12 | 2.30 ms        | 3.37 ms          | 1.68 ms | QuestDB
single-groupby-1-8-1  | 1.00 ms        | 1.63 ms          | 1.39 ms | InfluxDB v1
single-groupby-5-1-1  | 1.09 ms        | 1.68 ms          | 0.99 ms | QuestDB
single-groupby-5-1-12 | 8.40 ms        | 12.24 ms         | 1.98 ms | QuestDB
single-groupby-5-8-1  | 3.23 ms        | 4.34 ms          | 1.54 ms | QuestDB

Double-groupby queries

Query latency, lower is better. These queries aggregate across ALL hosts, grouped by host and 1-hour intervals, averaged over 100 runs. QuestDB answers in 40-58 ms where InfluxDB takes 0.9-7.5 s, making QuestDB 21-130x faster:

Query              | InfluxDB v1.11 | InfluxDB v2.7.12 | QuestDB | QuestDB vs v1 | QuestDB vs v2
double-groupby-1   | 853 ms         | 935 ms           | 40 ms   | 21x faster    | 23x faster
double-groupby-5   | 3,595 ms       | 3,875 ms         | 46 ms   | 78x faster    | 84x faster
double-groupby-all | 6,967 ms       | 7,516 ms         | 58 ms   | 120x faster   | 130x faster

Heavy queries

high-cpu-all query latency, lower is better. This is a full table scan finding hosts with CPU utilization above a threshold, averaged over 10 runs. QuestDB responds in under 1 s where InfluxDB takes ~16 s:

Query        | InfluxDB v1.11 | InfluxDB v2.7.12 | QuestDB | QuestDB vs v1 | QuestDB vs v2
high-cpu-all | 16,045 ms      | 16,655 ms        | 994 ms  | 16x faster    | 17x faster

Explaining query performance

Key finding: QuestDB outperforms both InfluxDB versions on analytical queries, delivering 21x to 130x faster results on aggregations across time and hosts. InfluxDB v1 shows a marginal edge on two simple aggregation queries, but QuestDB dominates on complex workloads where real analytical value lies.

Let's examine specific query patterns:

Double group by queries

Aggregate across both time and host.

QuestDB is 21x to 130x faster than InfluxDB v1, and 23x to 130x faster than InfluxDB v2. This is where QuestDB truly shines. The engine scans all rows within the interval and aggregates them using multiple threads, parallel execution, and SIMD instructions.

Single group by queries

Simple aggregation on metrics for specific hosts over time ranges.

For simple aggregation queries on a single host (1-1-1), InfluxDB v1 is ~2.5x faster. However, as query complexity increases (more metrics, longer time ranges), QuestDB takes the lead:

  • 5-1-12 (5 metrics, 12 hours): QuestDB is 4.2x faster than InfluxDB v1, 6.2x faster than InfluxDB v2
  • 5-8-1 (5 metrics, 8 hosts): QuestDB is 2.1x faster than InfluxDB v1, 2.8x faster than InfluxDB v2

Heavy analytical queries (high-cpu-all)

Full table scan finding hosts above CPU threshold.

QuestDB is 16x to 17x faster than both InfluxDB versions. This query type demonstrates QuestDB's strength in analytical workloads that scan large amounts of data.

Why these differences?

QuestDB keeps all time series in a single dense table with columnar storage. For queries accessing a single time series, it must filter rows on access. InfluxDB stores each time series separately, giving it an advantage for single-series lookups.

However, for analytical queries spanning multiple series, QuestDB's columnar layout combined with SIMD instructions and multi-threaded processing provides dramatically better performance.

What are the data models used in InfluxDB and QuestDB?

The data model is fundamental to understanding why these databases perform differently. InfluxDB uses a measurement-based model optimized for tagged time series, while QuestDB uses a relational model that stores all data in tables.

InfluxDB: Measurement-based model

InfluxDB organizes data around measurements, tags, and fields:

measurementName,tagKey=tagValue fieldKey="fieldValue" 1465839830100399000
--------------- --------------- --------------------- -------------------
  Measurement        Tags              Fields               Timestamp
  • Measurement: Similar to a table name, groups related data points
  • Tags: Indexed key-value pairs (strings only) used for filtering and grouping
  • Fields: Non-indexed values containing the actual metrics (floats, integers, strings, booleans)
  • Timestamp: Nanosecond-precision time

A series in InfluxDB is defined as a unique combination of measurement + tagset. For example:

trades,symbol=AAPL,exchange=NYSE price=185.50,size=100 1705311000123456000
trades,symbol=MSFT,exchange=NASDAQ price=390.25,size=250 1705311000123789000

These two lines create two separate series because their tagsets differ. Each unique series gets its own TSM storage structure. This is why high-cardinality workloads (many unique tag combinations) degrade InfluxDB performance—thousands of unique symbol values means thousands of separate series to maintain.
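A small sketch makes the series-counting rule concrete: the series key is the measurement plus the sorted tagset, so the two lines above map to two distinct series, while repeating an existing tagset adds rows but no new series (the parsing here is simplified and ignores ILP escaping rules):

```python
def series_key(line: str) -> str:
    """Extract the series key (measurement + sorted tagset) from an ILP line."""
    head = line.split(" ", 1)[0]          # "trades,symbol=AAPL,exchange=NYSE"
    measurement, *tags = head.split(",")
    return measurement + "," + ",".join(sorted(tags))

lines = [
    'trades,symbol=AAPL,exchange=NYSE price=185.50,size=100 1705311000123456000',
    'trades,symbol=MSFT,exchange=NASDAQ price=390.25,size=250 1705311000123789000',
    # same tagset as the first line -> same series, no new TSM structure
    'trades,symbol=AAPL,exchange=NYSE price=185.55,size=300 1705311001000000000',
]

unique_series = {series_key(l) for l in lines}
print(len(unique_series))  # 2
```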

QuestDB: Relational model

QuestDB uses a standard relational model where data lives in tables with typed columns:

CREATE TABLE trades (
    timestamp TIMESTAMP,
    symbol SYMBOL,      -- dictionary-encoded string (similar to InfluxDB tags)
    exchange SYMBOL,
    side SYMBOL,
    price DOUBLE,
    size DOUBLE
) TIMESTAMP(timestamp) PARTITION BY DAY;

Market data in QuestDB is simply rows in a table:

timestamp                    | symbol | exchange | side | price  | size
2024-01-15T09:30:00.123456Z  | AAPL   | NYSE     | buy  | 185.50 | 100
2024-01-15T09:30:00.123789Z  | MSFT   | NASDAQ   | sell | 390.25 | 250

Adding new symbols or exchanges doesn't create new storage structures—it just adds more rows. This is why QuestDB handles high cardinality without performance degradation.

Data type support:

Category       | QuestDB types
Integer        | BYTE, SHORT, INT, LONG, LONG128, LONG256
Floating point | FLOAT, DOUBLE, DECIMAL
String         | STRING, VARCHAR, CHAR, SYMBOL (indexed)
Temporal       | TIMESTAMP (nanosecond precision), DATE, INTERVAL
Geospatial     | GEOHASH
Collections    | ARRAY
Other          | BOOLEAN, UUID, IPv4, BINARY

QuestDB supports InfluxDB line protocol for compatibility, automatically mapping tags to SYMBOL columns and fields to appropriate types. QuestDB also extends the protocol with binary support for advanced types like arrays. For full control over schema and types, use the PostgreSQL wire protocol or REST API.
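As a rough illustration of that mapping, here is a minimal ILP line builder (simplified: it skips escaping, string quoting, and integer suffixes). On the QuestDB side, `symbol` and `exchange` would land in SYMBOL columns and `price`/`size` in DOUBLE columns:

```python
def ilp_line(measurement: str, tags: dict, fields: dict, ts_ns: int) -> str:
    """Format one simplified InfluxDB line protocol row."""
    tag_part = ",".join(f"{k}={v}" for k, v in tags.items())
    field_part = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_part} {field_part} {ts_ns}"

line = ilp_line(
    "trades",
    {"symbol": "AAPL", "exchange": "NYSE"},   # -> SYMBOL columns in QuestDB
    {"price": 185.50, "size": 100.0},         # -> DOUBLE columns in QuestDB
    1705311000123456000,
)
print(line)
```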

Key model differences

Aspect            | QuestDB                          | InfluxDB
Data organization | Tables + rows                    | Measurements + series
Tag handling      | SYMBOL columns (indexed strings) | Separate series per tagset
High cardinality  | No impact (just more rows)       | Performance degrades (more series = more overhead)
Query language    | Standard SQL                     | InfluxQL / Flux
JOINs             | Full SQL JOIN support            | Not supported
Schema            | Schema-on-write or predefined    | Schema-on-write

Comparing database storage models

InfluxDB: TSM Trees

For storage, InfluxDB uses Time-Structured Merge (TSM) Trees, an LSM-tree variant optimized for time-series data. Writes first go to a write-ahead log (WAL) for durability, then into an in-memory cache that serves fast reads of recent data. When the cache fills, data is flushed to immutable TSM files on disk. Background compaction continuously merges smaller TSM files into larger ones to improve read efficiency, though this adds write amplification overhead.

Critically, each unique series (measurement + tagset combination) creates its own TSM structure. This per-series architecture explains why high-cardinality workloads degrade InfluxDB's performance—more unique series means more TSM structures to maintain, index, and compact.

[Figure: TSM Tree architecture showing the write path from the WAL through the cache to compacted TSM files]

InfluxDB partitions data by time using shard groups. Users can set a shard group duration, which defines how much time each shard covers, and combine it with retention policies for common operations such as deleting data older than X days:

[Figure: Shard groups in InfluxDB showing time-based partitioning]

QuestDB: Three-tier columnar storage

QuestDB implements a three-tier storage architecture optimized for both high-throughput ingestion and fast analytical queries:

Tier 1 - Write-Ahead Log (WAL): Incoming writes first land in a write-ahead log, providing durability guarantees. The WAL handles out-of-order data by buffering and sorting before committing to the main storage layer. This design allows QuestDB to sustain millions of rows per second while maintaining data integrity.

Tier 2 - Columnar partitions: Data is organized into time-partitioned columnar files optimized for query performance. Each partition is a directory containing separate files per column, enabling efficient compression and allowing queries to read only the columns they need. This columnar layout combined with time partitioning enables parallel scans with SIMD instructions across multiple CPU cores.

Tier 3 - Cold storage (Parquet): Older partitions can be converted to Parquet format and moved to object storage (S3, Azure Blob, GCS) or local cold storage. This tiered approach keeps frequently queried data on fast local storage while reducing costs for historical data. Queries transparently span all tiers.


Unlike InfluxDB's per-series TSM architecture, QuestDB stores all time series in a single table structure. This means adding new series (high cardinality) doesn't create additional storage overhead—the same columnar files simply contain more rows. This architectural difference explains why QuestDB maintains consistent performance as cardinality scales.

Query languages: SQL vs Flux

InfluxDB has gone through multiple query languages: InfluxQL (SQL-like), then Flux (functional), and now SQL again in InfluxDB 3. This journey validates what QuestDB has maintained from the start—SQL is the right choice for time-series data.

Why SQL matters

  • Universal skill: SQL is consistently among the top 3 languages in developer surveys. Most engineers already know it.
  • Tooling ecosystem: SQL integrates with BI tools, notebooks, ORMs, and drivers without custom adapters.
  • Transferable knowledge: Skills learned querying QuestDB apply to PostgreSQL, analytics platforms, and data warehouses.

Flux vs SQL comparison

Flux uses a functional pipeline syntax that requires learning new concepts:

from(bucket: "metrics")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "cpu" and r.host == "server1")
  |> aggregateWindow(every: 1m, fn: mean)

The equivalent in QuestDB SQL:

SELECT timestamp, avg(usage)
FROM cpu
WHERE host = 'server1' AND timestamp > dateadd('h', -1, now())
SAMPLE BY 1m;
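Conceptually, SAMPLE BY buckets rows by timestamp and aggregates each bucket. A pure-Python sketch of the 1-minute average above, with made-up readings:

```python
from collections import defaultdict

rows = [  # (epoch_seconds, cpu_usage) -- illustrative values
    (0, 10.0), (30, 20.0),    # fall in minute 0
    (60, 30.0), (90, 50.0),   # fall in minute 1
]

buckets = defaultdict(list)
for ts, usage in rows:
    buckets[ts // 60].append(usage)   # bucket key = minute index

averages = {minute: sum(vals) / len(vals) for minute, vals in buckets.items()}
print(averages)  # {0: 15.0, 1: 40.0}
```

QuestDB executes this bucketing natively over columnar partitions rather than row by row, but the semantics are the same.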

QuestDB's time-series SQL extensions

QuestDB extends standard SQL with purpose-built functions for time-series analysis:

Extension        | Purpose                | Example
SAMPLE BY        | Time-based aggregation | SELECT avg(price) FROM trades SAMPLE BY 1h
LATEST ON        | Last value per group   | SELECT * FROM trades LATEST ON timestamp PARTITION BY symbol
ASOF JOIN        | Time-aligned joins     | Join trades with quotes at exact timestamps
WHERE IN (ts, ts)| Time range filtering   | Optimized partition pruning

These extensions maintain SQL compatibility while providing the expressiveness needed for time-series workloads.
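ASOF JOIN in particular deserves a concrete picture: for each left-hand row, it picks the most recent right-hand row at or before that row's timestamp. A pure-Python sketch with made-up quotes and trades (both inputs sorted by time):

```python
import bisect

quotes = [(100, 1.10), (200, 1.12), (300, 1.15)]   # (ts, mid_price)
trades = [(150, 5), (305, 7)]                       # (ts, size)

quote_ts = [ts for ts, _ in quotes]

def asof_price(trade_ts):
    """Most recent quote price at or before trade_ts, or None if none exists."""
    i = bisect.bisect_right(quote_ts, trade_ts) - 1
    return quotes[i][1] if i >= 0 else None

joined = [(ts, size, asof_price(ts)) for ts, size in trades]
print(joined)  # [(150, 5, 1.1), (305, 7, 1.15)]
```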

Ecosystem and integrations

Both databases offer solid integration options, though with different strengths:

Integration      | QuestDB                                      | InfluxDB
Grafana          | Native data source                           | Native data source
Telegraf         | Via ILP                                      | Native
PostgreSQL tools | Full compatibility (psql, any PG driver)     | Not supported
Client libraries | Python, Java, Go, Node.js, Rust, C/C++, .NET | Python, Java, Go, Node.js, and more
Kafka            | Official Kafka connector                     | Native Kafka consumer
Pandas/Polars    | Native integration                           | Via client library

QuestDB's advantage: PostgreSQL wire protocol compatibility means PostgreSQL client libraries work with QuestDB—including psql, SQLAlchemy, and any PostgreSQL driver.

InfluxDB's advantage: As the older and more widely deployed database, InfluxDB has broader native integrations with monitoring tools and a larger collection of community Telegraf plugins.

Conclusion

Performance summary

Workload               | QuestDB         | InfluxDB v2      | Advantage
Ingestion (1M hosts)   | 7.33M rows/sec  | 241K rows/sec    | 30x faster
Ingestion (100K hosts) | 11.36M rows/sec | 402K rows/sec    | 28x faster
Double-groupby         | 40-58 ms        | 935 ms to 7.5 s  | 23-130x faster
Heavy aggregations     | 994 ms          | 16.6 s           | 17x faster

Key takeaways:

  • QuestDB ingests data 3x to 36x faster than InfluxDB, with the advantage growing at scale
  • QuestDB dominates analytical queries: 21x to 130x faster on double-groupby, 16x faster on heavy scans
  • InfluxDB v1 has a slight edge on two simple aggregation queries

When to choose QuestDB

Market data and trading infrastructure:

  • Tick data capture and analytics at millions of events per second
  • Order book reconstruction and market depth analysis
  • Post-trade analytics and markouts (ASOF, CROSS, Window JOIN)
  • Materialized OHLCV candlestick charts automatically maintained

Quantitative finance:

  • Backtesting strategies across historical tick data
  • Real-time P&L and risk calculations
  • Correlate market data and trades with ASOF JOIN

High cardinality and heavy ingestion workloads:

  • Industrial IoT with millions of unique sensors and devices
  • Physical AI and robotics telemetry at scale
  • Fleet management and vehicle tracking with high device counts
  • Energy grid monitoring with dense sensor networks

SQL-first teams:

  • Standard SQL with time-series extensions (SAMPLE BY, LATEST ON)
  • PostgreSQL compatibility for existing quant tools and workflows
  • Integration with Python, pandas, and Jupyter notebooks

When to consider InfluxDB

  • Simple monitoring dashboards with low cardinality
  • Single-series lookups where sub-millisecond latency is critical
  • Existing Telegraf-based collection pipelines
  • Teams already invested in the InfluxDB/Flux ecosystem

Ready to try QuestDB? Get started with the quickstart guide or join our Slack community to ask questions.
