QuestDB 9.3.4 and Enterprise 3.2.4: Dynamic WINDOW JOIN, Parquet Bloom Filters, and Array Analytics
QuestDB 9.3.4 and Enterprise 3.2.4 are out. Parquet gets bloom filter pruning
and per-column encoding controls, WINDOW JOIN supports dynamic ranges computed
from column values, and new element-wise array functions bring order book
analytics into SQL. On the enterprise side, COPY PERMISSIONS and automatic
permission cleanup simplify access management.
Dynamic windows in WINDOW JOIN
Until now, WINDOW JOIN required constant values for the RANGE clause. If
different rows needed different lookback windows, you had to work around it with
self-joins or application-side logic.
Now the RANGE bounds can reference columns or expressions from the driving
table. Each row gets its own window, computed at query time.
Say you have a trades table where each row carries its own lookback interval - maybe it varies by venue or asset class. You can now write:
```sql
SELECT
    t.timestamp,
    t.symbol,
    t.price,
    avg(m.best_bid) AS avg_bid,
    count(*) AS quote_count
FROM fx_trades t
WINDOW JOIN market_data m
    ON (t.symbol = m.symbol)
    RANGE BETWEEN t.lookback MICROSECONDS PRECEDING AND CURRENT ROW
    INCLUDE PREVAILING;
```
The time unit is optional. When present, the value is scaled to the left table's timestamp resolution. When omitted, the raw integer is interpreted in the left table's native resolution. Either or both bounds can be dynamic - you can mix a column reference on one side with a constant on the other.
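As a sketch of the unit-omitted form, assuming t.lookback stores raw values in the left table's native timestamp resolution:

```sql
-- Illustrative query: no time unit is given, so t.lookback is
-- interpreted in fx_trades' native timestamp resolution
SELECT
    t.timestamp,
    t.symbol,
    avg(m.best_bid) AS avg_bid
FROM fx_trades t
WINDOW JOIN market_data m
    ON (t.symbol = m.symbol)
    RANGE BETWEEN t.lookback PRECEDING AND CURRENT ROW;
```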
Expressions work too. If you want to double the lookback:
```sql
SELECT
    t.timestamp,
    t.symbol,
    avg(m.best_bid) AS avg_bid
FROM fx_trades t
WINDOW JOIN market_data m
    ON (t.symbol = m.symbol)
    RANGE BETWEEN 2 * t.lookback MICROSECONDS PRECEDING AND 5 SECONDS FOLLOWING;
```
One thing to note: dynamic bounds disable the Fast Join (symbol-keyed) and vectorized (SIMD) execution paths. If a fixed window works for your use case, prefer static bounds for maximum performance.
Here is a working example you can try on the demo instance with a static range:
```sql
SELECT
    t.timestamp,
    t.symbol,
    t.price,
    avg(m.best_bid) AS avg_bid,
    avg(m.best_ask) AS avg_ask,
    count(*) AS quote_count
FROM fx_trades t
WINDOW JOIN market_data m
    ON (t.symbol = m.symbol)
    RANGE BETWEEN 5 SECONDS PRECEDING AND CURRENT ROW
    INCLUDE PREVAILING
WHERE t.symbol = 'EURUSD'
    AND t.timestamp > dateadd('m', -5, now())
LIMIT 20;
```
Read the WINDOW JOIN documentation
Parquet row group pruning with bloom filters
If you query Parquet files with selective filters, QuestDB now reads bloom filters and min/max statistics embedded in each row group and skips the ones that cannot match your predicate. For highly selective queries over wide Parquet files, this dramatically reduces I/O.
```sql
SELECT symbol, count(*) AS trades, avg(price) AS avg_price
FROM read_parquet('trades.parquet')
WHERE symbol = 'BTC-USDT'
GROUP BY symbol;
```
The engine skips row groups where the bloom filter proves 'BTC-USDT' is not
present, and where min/max statistics show the timestamp range does not overlap.
No syntax changes needed - the optimization is automatic.
Array analytics functions
New element-wise functions for DOUBLE[] columns:
- array_elem_min()
- array_elem_max()
- array_elem_avg()
- array_elem_sum()
Unlike the existing array_min / array_max (which return a single scalar
from the whole array), the array_elem_* variants operate position-by-position
and return arrays. They work in two modes:
- Multi-argument: array_elem_min(arr1, arr2) returns the element-wise minimum across arrays in the same row
- Aggregate: array_elem_min(arr_col) in GROUP BY / SAMPLE BY computes the element-wise minimum across rows
Arrays of different lengths are handled naturally - positions beyond the end of a shorter array receive no contribution from that array. NULL elements are skipped at each position.
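As a sketch of the multi-argument mode (the table and column names here are hypothetical, not part of the demo dataset):

```sql
-- Hypothetical consolidated_book table with one bid ladder per venue;
-- array_elem_max picks the higher bid at each array position, per row
SELECT
    timestamp,
    symbol,
    array_elem_max(bids_venue_a, bids_venue_b) AS best_bids
FROM consolidated_book;
```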
The demo dataset has market_data.bids and market_data.asks as DOUBLE[][]
order book arrays, which makes a good test case. Here is an example computing
the element-wise minimum of the bids and maximum of the asks per symbol, in
one-minute buckets over the last 10 minutes:
```sql
SELECT
    timestamp,
    symbol,
    array_elem_min(bids) AS tightest_bids,
    array_elem_max(asks) AS widest_asks
FROM market_data
WHERE symbol = 'EURUSD'
    AND timestamp > dateadd('m', -10, now())
SAMPLE BY 1m;
```
Read the array functions documentation
Per-column Parquet encoding and compression
You can now configure encoding and compression on a per-column basis when creating tables that use Parquet partitions. This gives you full control to balance storage size versus query speed for each column individually.
```sql
CREATE TABLE sensors (
    ts TIMESTAMP,
    temperature DOUBLE PARQUET(rle_dictionary, zstd(3)),
    humidity FLOAT PARQUET(rle_dictionary),
    device_id VARCHAR PARQUET(default, lz4_raw),
    status INT
) TIMESTAMP(ts) PARTITION BY DAY;
```
You can also modify existing tables:
```sql
ALTER TABLE sensors
    ALTER COLUMN temperature
    SET PARQUET(rle_dictionary, zstd(3));
```
The default VARCHAR encoding has also changed to RLE Dictionary in this
release (see breaking changes below). If downstream tools depend on the
previous Delta Length Byte Array encoding, override it explicitly.
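An explicit override could look something like this (a sketch only; the exact keyword for the Delta Length Byte Array encoding is an assumption here, so verify it against the per-column configuration docs):

```sql
-- Hypothetical: the encoding keyword name is assumed, check the
-- ALTER TABLE PARQUET encoding documentation for the exact spelling
ALTER TABLE sensors
    ALTER COLUMN device_id
    SET PARQUET(delta_length_byte_array, lz4_raw);
```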
Read the Parquet export documentation | CREATE TABLE per-column config | ALTER TABLE PARQUET encoding
Performance improvements
- Faster ASOF and WINDOW JOIN - initial frame positioning now uses binary search instead of linear scanning, reducing first-lookup cost for large right-hand-side tables
- ORDER BY - pre-computed sort keys for fixed-width types reduce function evaluations from O(N log N) to O(N); a dedicated fast path for SYMBOL columns avoids string comparisons entirely
- HORIZON JOIN parallelization - improved parallel execution across various data distributions on multi-core systems
- Vectorized GROUP BY - more non-keyed queries now use SIMD computation
- Parquet I/O - faster row group decompression and column materialization on reads; reduced write amplification and improved throughput on writes
- WAL writer - new cairo.wal.writer.madvise.mode config option for tuning OS page cache hints under memory pressure (none, sequential, or random)
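The WAL option above is set in the server configuration file; a minimal sketch, with sequential chosen purely for illustration:

```ini
# server.conf: hint the OS page cache that WAL files are written sequentially.
# Valid values per this release: none, sequential, random
cairo.wal.writer.madvise.mode=sequential
```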
Enterprise: COPY PERMISSIONS
Onboarding a new team member or spinning up a service account no longer requires
manually re-granting dozens of permissions. COPY PERMISSIONS clones the full
permission set from one principal to another in a single statement:
COPY PERMISSIONS FROM analyst_jane TO analyst_new_hire;
This includes table-level and endpoint grants. It works across principal types too - copy from a user to a service account, or from a group to a user.
Enterprise: Automatic permission cleanup
Permissions are now automatically revoked when dropping users, groups, service accounts, or database objects. No more stale permission entries accumulating over time as team members rotate or tables get dropped.
Breaking changes
Parquet VARCHAR encoding default changed. Parquet export now uses
RLE Dictionary encoding for VARCHAR columns instead of Delta Length Byte Array. Existing Parquet files are unaffected. If downstream consumers require
the previous encoding, set it explicitly with the new per-column encoding
configuration.
Constant expression folding. The SQL engine now folds constant expressions
at compile time. As a side effect, Infinity and -Infinity in constant
float/double expressions are collapsed to NULL, consistent with QuestDB's
existing runtime behavior. If you have CASE/SWITCH expressions branching on
Infinity or -Infinity, those branches will no longer match.
Bug fixes
Both releases include numerous bug fixes across Parquet I/O, WINDOW JOIN,
decimal columns, and enterprise replication. See the full details on the
release notes page.
That is QuestDB 9.3.4 and Enterprise 3.2.4. Dynamic WINDOW JOIN ranges let
every row define its own time window. Parquet queries skip data they do not need.
Element-wise array functions bring order book analytics into SQL. The engine
keeps getting faster underneath.
```shell
docker pull questdb/questdb
docker run -p 9000:9000 -p 8812:8812 questdb/questdb
```
Try a dynamic WINDOW JOIN first. We think you will see the appeal.
Join our Slack or Discourse communities to share feedback and results.
Self-managed Enterprise customers will find the binaries at the usual download location. BYOC Enterprise customers will be contacted about upgrading.
Not on QuestDB Enterprise yet? Learn more about QuestDB Enterprise and BYOC, or contact the QuestDB team for a conversation or a demo.