How to create a materialized view
If you asked me what the most common query I see every QuestDB user run is, I would have to say a basic SAMPLE BY. For example, when working with market data, most users need to display OHLC (Open, High, Low, Close) prices on a dashboard. Those can be computed via:
```sql
SELECT
    timestamp,
    symbol,
    first(price) AS open,
    max(price) AS high,
    min(price) AS low,
    last(price) AS close,
    sum(amount) AS volume
FROM trades
SAMPLE BY 15m;
```
QuestDB excels at this type of query, but if you are ingesting data at scale and powering dashboards for several concurrent users, you can start overloading your server.
Another common use case for SAMPLE BY queries is to downsample data into a historical table, while deleting the raw data from the original table after a while by expiring it with TTL. This is a good strategy both to save storage and to get more performant historical queries, but it needs automation and maintenance, and can be error-prone.
In this post, I will discuss how to use materialized views to help you in these two scenarios. I'll explain some of the benefits and drawbacks. Then we'll walk through how to set up a base table, create a materialized view for 15-minute OHLC candles, and verify that the view is kept up to date — without relying on external tools.
Finally, we'll look at some example use cases to demonstrate how views can simplify more complex needs.
Overview
Materialized views in QuestDB offer a powerful way to optimize your aggregation queries. By pre-computing and storing the results of time-based queries, you can significantly reduce query latency by avoiding re-running complex computations every time.

The best part is that views refresh automatically every time you ingest data into the base table, so they are always up to date, and can be used to power real-time dashboards.
Benefits
Materialized views provide a cost-efficient way to pre-compute and store the results of resource-intensive operations that are frequently accessed. For time-series and OLAP use cases, the vast majority of historical data is stable and infrequently updated. Therefore, running complex calculations on this older data, just to get the same result each time, is wasteful!
Since materialized views update periodically or on triggers, they are more resource-efficient than running the query each time.
Many modern databases support materialized views, making them a practical choice for performance optimization. In PostgreSQL and Oracle, materialized views are commonly used to improve query performance in analytical workloads.
Microsoft SQL Server and Google BigQuery offer similar functionality, particularly for large-scale data warehousing and reporting.
Even distributed data processing systems like Apache Hive leverage materialized views to optimize query execution on massive datasets.
The key difference between QuestDB and other databases is that QuestDB materialized views are engineered to power real-time workflows: they refresh automatically and stay up to date in near real time, with minimal overhead for your database instance.

Drawbacks
Of course there are always trade-offs.
Data Freshness
Even though a materialized view is refreshed incrementally and automatically, the refresh is still an asynchronous process. A query needs to be executed behind the scenes, so there will be a short span of time when data is not yet available in the view, even though it is already available in the base table. It is possible to UNION the materialized view with the base table to ensure you are accounting for the most recent data, but this adds some complexity.
Additional Complexity When Querying Views and Tables Together
Creating materialized views means you are adding more tables to your database. Especially if you are also adding TTL policies, this might mean that in some cases you want to execute queries that UNION data from some views (for historical data) with the base table (to include the most recent data not yet expired by TTL).
View Maintenance
Materialized views can be invalidated if the base table changes in a way that affects the view's query. When that happens, they need to be refreshed manually, or in some cases dropped and re-created to remove any incompatibilities with the new base table structure.
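If a view cannot be recovered with a refresh, dropping and re-creating it is the fallback. A minimal sketch, assuming a hypothetical view named my_view and the DROP MATERIALIZED VIEW statement available in recent QuestDB versions:

```sql
-- Remove the invalidated view; afterwards, re-run its CREATE MATERIALIZED VIEW statement
DROP MATERIALIZED VIEW my_view;
```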
Now, let's create a view and explore some advanced usage patterns.
Set up your base table
TIP
It'll help to have QuestDB running!
Before creating a materialized view, you first need to create a base table that contains your raw data. For our example, we'll use a simple trades table to store trading data:
```sql
CREATE TABLE trades (
    timestamp TIMESTAMP,
    symbol SYMBOL,
    price DOUBLE,
    amount DOUBLE
) TIMESTAMP(timestamp)
PARTITION BY DAY;
```
This table is partitioned by day, ensuring data is organized by time — a key requirement for QuestDB's incremental refresh mechanism.
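If you want to double-check how the data is laid out once the table holds some rows, one option (assuming a QuestDB version that supports the SHOW PARTITIONS statement) is to list the table's partitions:

```sql
-- List the daily partitions of the trades table (illustrative check)
SHOW PARTITIONS FROM trades;
```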
Create the materialized view
Now that the base table is in place, let's create a materialized view that computes the OHLC candles at 15-minute intervals:
```sql
CREATE MATERIALIZED VIEW 'trades_OHLC_15m' AS
SELECT
    timestamp,
    symbol,
    first(price) AS open,
    max(price) AS high,
    min(price) AS low,
    last(price) AS close,
    sum(amount) AS volume
FROM trades
SAMPLE BY 15m;
```
When run, the above statement creates an incrementally refreshed materialized view with trades as its base table. The partitioning strategy is selected automatically, but if needed it can also be declared in the CREATE statement.
Each time new rows are inserted into the trades table, the trades_OHLC_15m view is asynchronously refreshed to hold the result of its query.
Let's populate our base table with some sample data to see how the materialized view works:
```sql
INSERT INTO trades (symbol, price, amount, timestamp) VALUES
    ('GBPUSD', 1.320, 10, '2024-09-10T12:01'),
    ('GBPUSD', 1.323, 20, '2024-09-10T12:02'),
    ('JPYUSD', 103.21, 11, '2024-09-10T12:01'),
    ('JPYUSD', 104.21, 27, '2024-09-10T12:02');
```
After this insert, our materialized view trades_OHLC_15m will automatically refresh to show:
| timestamp | symbol | open | high | low | close | volume |
|---|---|---|---|---|---|---|
| 2024-09-10T12:00:00.000000Z | GBPUSD | 1.32 | 1.323 | 1.32 | 1.323 | 30 |
| 2024-09-10T12:00:00.000000Z | JPYUSD | 103.21 | 104.21 | 103.21 | 104.21 | 38 |
Notice how the view automatically computed the OHLC values for each symbol?
If we were to run the original SAMPLE BY query directly on the base table, we'd get the same results, but the materialized view gives us these results instantly, without having to recompute them.
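To illustrate, here is a minimal sketch of reading the pre-aggregated candles straight from the view (the symbol filter is just an example):

```sql
-- Reads pre-computed 15-minute candles; no aggregation happens at query time
SELECT *
FROM trades_OHLC_15m
WHERE symbol = 'GBPUSD'
ORDER BY timestamp;
```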
Verify and refresh the materialized view
After creating a materialized view, it's helpful to verify its status. You can check the refresh status and other metadata using the materialized_views() function:
```sql
SELECT
    view_name,
    last_refresh_start_timestamp,
    view_status,
    refresh_base_table_txn,
    base_table_txn
FROM materialized_views();
```
This query returns important metadata:
| view_name | last_refresh_start_timestamp | view_status | refresh_base_table_txn | base_table_txn |
|---|---|---|---|---|
| trades_OHLC_15m | 2025-02-18T15:32:22.373513Z | valid | 1 | 42 |
Let's understand what each column tells us:
- last_refresh_start_timestamp: When the view started its last incremental refresh
- view_status: Whether the view is valid and being updated
- base_table_txn: Current transaction number available for base table readers
- refresh_base_table_txn: Transaction up to which the view is refreshed
When base_table_txn matches refresh_base_table_txn, your view is fully up to date.
If the view shows a view_status of valid, it means that the incremental refresh process is working as expected.
In cases where the view is marked as invalid — perhaps due to a schema change or data deletion — you can trigger a full refresh:
```sql
REFRESH MATERIALIZED VIEW trades_OHLC_15m FULL;
```
Keep in mind that a full refresh re-computes the entire view and may take longer on large datasets.
INFO
Read more about limitations and best practices in the materialized views docs
Working with multiple tables
For more complex scenarios, you might need to create materialized views that combine data from multiple tables. In these cases, you must specify which table is the base table using the WITH BASE clause:
```sql
CREATE MATERIALIZED VIEW trades_ext_hourly_prices
WITH BASE trades AS
SELECT
    t.timestamp,
    t.symbol,
    e.exchange_name,
    avg(t.price) AS avg_price
FROM trades t
LT JOIN ext_trades e ON (symbol)
SAMPLE BY 1h;
```
With trades as the base table, this means:
- Changes to the trades table will trigger refreshes of the materialized view
- The view's incremental refresh mechanism will track changes based on the trades table
- The designated timestamp must come from the base table
This specification is required when working with multiple tables because QuestDB needs to know which table should drive the refresh mechanism.
Managing data retention
Materialized views are unaffected by TTL settings on their base tables, so expired rows in the base table do not impact the view.
To control data retention in the view itself, you can define a TTL directly on the view. When doing so, make sure to wrap the query in parentheses and explicitly include the PARTITION BY clause:
```sql
CREATE MATERIALIZED VIEW 'trades_OHLC_15m' AS (
    SELECT
        timestamp,
        symbol,
        first(price) AS open,
        max(price) AS high,
        min(price) AS low,
        last(price) AS close,
        sum(amount) AS volume
    FROM trades
    SAMPLE BY 15m
) PARTITION BY WEEK TTL 8 WEEKS;
```
This configuration ensures your aggregated data is automatically cleaned up after 8 weeks while remaining independent of the base table's TTL settings.
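For reference, a retention policy on the raw trades table itself would be set separately. A sketch assuming QuestDB's ALTER TABLE ... SET TTL syntax, with an illustrative retention window:

```sql
-- Expire raw trades after one week; the view above keeps its own 8-week retention
ALTER TABLE trades SET TTL 7 DAYS;
```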
Since materialized views can be chained - the output of one can be used as the input for the next - you can use different sampling intervals and TTLs to automate data downsampling and data retention policies.
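As an illustrative sketch of such a chain, a second-tier view could roll the 15-minute candles up into daily candles with a longer retention window (the view name, partitioning, and TTL below are made-up values, not part of the original setup):

```sql
-- Daily candles built from the 15-minute view; kept for roughly a year
CREATE MATERIALIZED VIEW 'trades_OHLC_1d' AS (
    SELECT
        timestamp,
        symbol,
        first(open) AS open,
        max(high) AS high,
        min(low) AS low,
        last(close) AS close,
        sum(volume) AS volume
    FROM trades_OHLC_15m
    SAMPLE BY 1d
) PARTITION BY MONTH TTL 12 MONTHS;
```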
Query Patterns
As you've just learned, materialized views can be very powerful, but because of their incremental refresh model, there are cases in which you may need to write your queries carefully. Two of those scenarios are when using Window Functions, or when combining data from a base table and one or more materialized views.
Window Functions
Many financial indicators can be calculated in SQL using window functions. Because materialized views only read the rows from the base table that fall within the same sample interval as the rows being ingested, it is not safe to materialize operations such as window functions, whose frame boundaries might extend outside the sampled interval. The view would not fail, but the materialized results might be incomplete. For those cases, a good pattern is to materialize the sampled data in the view, and then run the indicator as a SQL query with window functions directly against the view.
See this query for a volume-weighted average price (VWAP) reading from our materialized view.
```sql
DECLARE
    @anchor_date := now(),
    @start_date := dateadd('d', -1, @anchor_date),
    @symbol := 'GBPUSD'
WITH sampled AS (
    SELECT
        timestamp,
        symbol,
        volume,
        ((open + close) / 2) * volume AS traded_value
    FROM trades_OHLC_15m
    WHERE timestamp BETWEEN @start_date AND @anchor_date
      AND symbol = @symbol
), cumulative AS (
    SELECT
        timestamp,
        symbol,
        SUM(traded_value) OVER (ORDER BY timestamp) AS cumulative_value,
        SUM(volume) OVER (ORDER BY timestamp) AS cumulative_volume
    FROM sampled
)
SELECT *, cumulative_value / cumulative_volume AS vwap FROM cumulative;
```
This query:
- Filters from the materialized view only the time range needed for the indicator.
- Uses window functions to calculate cumulative traded values and volumes for each symbol and interval.
- Finally divides the value by the volume to get the moving VWAP.
Combining Materialized Views with Base-Table Data
Sometimes you might want to combine data from a materialized view and from the base table in a single query. It might be the case that, due to TTL, you have partial data in the table or in the view, and you want to query the time range covered by both. Or maybe your materialized view is sampling data at some broad interval, and you want to show more fine-grained results for the recent data. You can write those queries using a UNION.
This query retrieves OHLC data for yesterday and for today for a specific symbol. For yesterday's data, it gets the OHLC values from the materialized view, which samples at 15-minute intervals. For today's data, it queries the base trades table and samples at 5-minute intervals. Note that we are using a DECLARE statement to make sure we use the same symbol filter on both sides of the UNION.
```sql
DECLARE
    @symbol := 'BTC-USDT'
SELECT *
FROM trades_OHLC_15m
WHERE symbol = @symbol
  AND timestamp IN yesterday()
UNION
SELECT
    timestamp,
    symbol,
    first(price) AS open,
    max(price) AS high,
    min(price) AS low,
    last(price) AS close,
    sum(amount) AS volume
FROM trades
WHERE symbol = @symbol
  AND timestamp IN today()
SAMPLE BY 5m
ORDER BY timestamp;
```
Summary of best practices
When working with materialized views in production, you'll want to consider:
- Query complexity vs. refresh time: More complex views take longer to refresh. Consider splitting very complex aggregations into multiple views if possible.
- Storage impact: Monitor the size of your materialized views, especially when using multiple joins.
- Refresh timing: For time-critical applications, monitor the lag between base table updates and view refreshes using materialized_views() (see the sketch after this list).
- Partitioning strategy: Align your view's partitioning with your query patterns to optimize performance. Smaller partitions and larger sample sizes may lead to faster updates!
- Cascading materialized views: A materialized view can use another view as its base table. When combined with TTL, this can help reduce storage size, effectively creating multiple tiers of data downsampled at increasing intervals.
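As a minimal sketch of that lag-monitoring query, using the materialized_views() columns we inspected earlier (any alerting threshold is up to you):

```sql
-- Views whose refresh is behind the base table, with the transaction lag
SELECT
    view_name,
    view_status,
    base_table_txn - refresh_base_table_txn AS txn_lag
FROM materialized_views()
WHERE base_table_txn > refresh_base_table_txn;
```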
Conclusion
Materialized views can provide a massive read performance boost when using aggregate queries in QuestDB. With minimal configuration, you can offload expensive computations and enjoy near real-time analytics on your time-series data.
Explore more advanced configurations and practices in our documentation.