Configuration

This page describes methods for configuring QuestDB server settings.

Configuration can be set either:

  • In the server.conf configuration file available in the root directory
  • Using environment variables

When a key is absent from both the config file and the environment variables, the default value is used.

note

For Windows users

When entering path values, use either \\ or / instead of the native path separator char \.

  • πŸ‘ C:\\path\\to\\file\\path
  • πŸ‘ C:/path/to/file
  • πŸ‘Ž C:\path\to\file

The single backslash is interpreted as an escape sequence start within Java properties.

Environment variables

All settings in the configuration file can be set or overridden using environment variables. If a key is set in both the server.conf file and via an environment variable, the environment variable will take precedence and the value in the server configuration file will be ignored.

To make these configuration settings available to QuestDB via environment variables, they must be in the following format:

QDB_<KEY_OF_THE_PROPERTY>

Where <KEY_OF_THE_PROPERTY> is equal to the configuration key name. To properly format a server.conf key as an environment variable it must have:

  1. QDB_ prefix
  2. uppercase characters
  3. all . period characters replaced with _ underscore

For example, the server configuration key for shared workers must be passed as described below:

server.conf key          env var
shared.worker.count      QDB_SHARED_WORKER_COUNT
note

QuestDB applies these configuration changes on startup; a running instance must be restarted for configuration changes to take effect.

Examples

The following configuration property customizes the number of worker threads shared across the application:

conf/server.conf
shared.worker.count=5
Customizing the worker count via environment variable
export QDB_SHARED_WORKER_COUNT=5

Keys and default values

This section lists the configuration keys available to QuestDB by topic or subsystem. Parameters for specifying buffer and memory page sizes are provided in the format n<unit>, where <unit> can be one of the following:

  • m for MB
  • k for kB

For example:

Setting maximum send buffer size to 2MB per TCP socket
http.net.connection.sndbuf=2m

Shared worker

Shared worker threads service SQL execution subsystems and (in the default configuration) every other subsystem.

PropertyDefaultDescription
shared.worker.count

Number of worker threads shared across the application. Increasing this number will increase parallelism in the application at the expense of CPU resources.

shared.worker.affinity

Comma-delimited list of CPU ids, one per thread specified in shared.worker.count. By default, threads have no CPU affinity.

shared.worker.haltOnErrorfalse

Flag that indicates if the worker thread must stop when an unexpected error occurs.
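
For example, a minimal server.conf sketch (the worker count and core indexes are illustrative and assume a machine with at least five CPU cores) that pins four shared workers to specific cores:

conf/server.conf
shared.worker.count=4
shared.worker.affinity=1,2,3,4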

HTTP server

This section describes configuration settings for the Web Console and the REST API available by default on port 9000. For details on the use of this component, refer to the web console documentation page.

PropertyDefaultDescription
http.enabledtrue

Enable or disable HTTP server.

http.bind.to0.0.0.0:9000

IP address and port of the HTTP server. An address of 0.0.0.0 means the HTTP server will bind to all network interfaces. You can also specify the IP address of any individual network interface on your system.

http.userN/A

Username for HTTP Basic Authentication in QuestDB Open Source. QuestDB Enterprise Edition supports more advanced authentication mechanisms: RBAC

http.passwordN/A

Password for HTTP Basic Authentication in QuestDB Open Source. QuestDB Enterprise Edition supports more advanced authentication mechanisms: RBAC

http.net.connection.limit64

The maximum number of simultaneous TCP connections to the HTTP server. The value is intended to control server memory consumption.

http.net.connection.timeout300000

TCP connection idle timeout in milliseconds. Connection is closed by HTTP server when this timeout lapses.

http.net.connection.sndbuf2M

Maximum send buffer size on each TCP socket. If this value is -1, the socket send buffer size remains unchanged from the OS defaults.

http.net.connection.rcvbuf2M

Maximum receive buffer size on each TCP socket. If this value is -1, the socket receive buffer size remains unchanged from the OS defaults.

http.net.connection.hintfalse

Windows specific flag to overcome OS limitations on TCP backlog size

http.net.connection.queue.timeout5000

Amount of time in milliseconds a connection can wait in the listen backlog queue before it is refused. Connections will be aggressively removed from the backlog until the active connection limit is breached.

http.net.bind.to0.0.0.0:9000

IP address and port of HTTP server.

http.connection.pool.initial.capacity4

Initial size of pool of reusable objects that hold connection state. The pool should be configured to maximum realistic load so that it does not resize at runtime.

http.connection.string.pool.capacity128

Initial size of the string pool shared by the HTTP header and multipart content parsers.

http.multipart.header.buffer.size512

Buffer size in bytes used by the HTTP multipart content parser.

http.multipart.idle.spin.count10000

How long the code accumulates incoming data chunks for column and delimiter analysis.

http.receive.buffer.size1M

Size of receive buffer.

http.request.header.buffer.size64K

Size of the internal buffer allocated for HTTP request headers. The value is rounded up to the nearest power of 2. When an HTTP request contains headers that exceed the buffer size, the server disconnects the client and logs an HTTP error.

http.response.header.buffer.size32K

Size of the internal response buffer. The value will be rounded up to the nearest power of 2. The buffer size should be large enough to accommodate max size of server response headers.

http.worker.count0

Number of threads in the private worker pool. When 0, the HTTP server uses the shared worker pool of the server. Values above 0 enable the private pool.

http.worker.affinity

Comma separated list of CPU core indexes. The number of items in this list must be equal to the worker count.

http.worker.haltOnErrorfalse

Changing the default value is strongly discouraged. Flag that indicates if the worker thread must stop when an unexpected error occurs.

http.send.buffer.size2M

Size of the internal send buffer. Larger buffer sizes result in fewer I/O interruptions, at the expense of memory usage per connection. There is a send buffer size beyond which further increases stop being useful in terms of performance; 2MB appears to be the optimal value.

http.static.index.file.nameindex.html

Name of index file for the Web Console.

http.frozen.clockfalse

Sets the clock to always return zero. This configuration parameter is used for internal testing.

http.allow.deflate.before.sendfalse

Flag that indicates if Gzip compression of outgoing data is allowed.

http.keep-alive.timeout5

Used together with http.keep-alive.max to set the value of HTTP Keep-Alive response header. This instructs browser to keep TCP connection open. Has to be 0 when http.version is set to HTTP/1.0.

http.keep-alive.max10000

See http.keep-alive.timeout. Has to be 0 when http.version is set to HTTP/1.0.

http.static.public.directorypublic

The name of directory for public web site.

http.text.date.adapter.pool.capacity16

Size of date adapter pool. This should be set to the anticipated maximum number of DATE fields a text input can have. The pool is assigned to connection state and is reused alongside of connection state object.

http.text.json.cache.limit16384

JSON parser cache limit. Cache is used to compose JSON elements that have been broken up by TCP protocol. This value limits the maximum length of individual tag or tag value.

http.text.json.cache.size8192

Initial size of JSON parser cache. The value must not exceed http.text.json.cache.limit and should be set to avoid cache resizes at runtime.

http.text.max.required.delimiter.stddev0.1222d

The maximum standard deviation value for the algorithm that calculates text file delimiter. Usually when text parser cannot recognise the delimiter it will log the calculated and maximum standard deviation for the delimiter candidate.

http.text.max.required.line.length.stddev0.8

Maximum standard deviation value for the algorithm that classifies input as text or binary. Input with a standard deviation above the configured value is considered binary.

http.text.metadata.string.pool.capacity128

The initial size of pool for objects that wrap individual elements of metadata JSON, such as column names, date pattern strings and locale values.

http.text.roll.buffer.limit4M

The limit of text roll buffer. See http.text.roll.buffer.size for description.

http.text.roll.buffer.size1024

Roll buffer is a structure in the text parser that holds a copy of a line that has been broken up by TCP. The size should be set to the maximum length of text line in text input.

http.text.analysis.max.lines1000

Number of lines to read on CSV import for heuristics which determine column names & types. Lower line numbers may detect CSV schemas quicker, but possibly with less accuracy. 1000 lines is the maximum for this value.

http.text.lexer.string.pool.capacity64

The initial capacity of the string pool, which wraps STRING column types in text input. The value should correspond to the maximum anticipated number of STRING columns in text input.

http.text.timestamp.adapter.pool.capacity64

Size of timestamp adapter pool. This should be set to the anticipated maximum number of TIMESTAMP fields a text input can have. The pool is assigned to connection state and is reused alongside of connection state object.

http.text.utf8.sink.size4096

Initial size of the UTF-8 adapter sink. The value should correspond to the maximum individual field value length in text input.

http.json.query.connection.check.frequency1000000

Changing the default value is strongly discouraged. The value used to throttle checks for whether the client socket has been disconnected.

http.json.query.float.scale4

The scale value of string representation of FLOAT values.

http.json.query.double.scale12

The scale value of string representation of DOUBLE values.

http.query.cache.enabledtrue

Enable or disable the query cache. Cache capacity is number_of_blocks * number_of_rows.

http.query.cache.block.count4

Number of blocks for the query cache.

http.query.cache.row.count16

Number of rows for the query cache.

http.security.readonlyfalse

Forces HTTP read only mode when true, disabling commands which modify the data or data structure, e.g. INSERT, UPDATE, or CREATE TABLE.

http.security.max.response.rows2^63-1

Limit the number of response rows over HTTP.

http.security.interrupt.on.closed.connectiontrue

Switch to enable termination of SQL processing if the HTTP connection is closed. The mechanism affects performance, so the connection is only checked after circuit.breaker.throttle calls are made to the check method. The mechanism also reads from the input stream and discards the data, since some HTTP clients send this as a keep-alive in between requests; circuit.breaker.buffer.size denotes the size of the buffer used for this.

http.pessimistic.health.check.enabledfalse

When enabled, the health check returns HTTP 500 for any unhandled errors since the server started.

circuit.breaker.throttle2000000

Number of internal iterations such as loops over data before checking if the HTTP connection is still open

circuit.breaker.buffer.size32

Size of buffer to read from HTTP connection. If this buffer returns zero and the HTTP client is no longer sending data, SQL processing will be terminated.

http.server.keep.alivetrue

If set to false, the server will disconnect the client after completion of each request.

http.versionHTTP/1.1

Protocol version, other supported value is HTTP/1.0.
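
As an illustrative sketch, several of the settings above can be combined in server.conf; the username and password below are placeholders, not defaults:

conf/server.conf
http.bind.to=0.0.0.0:9000
http.user=myuser
http.password=mypassword
http.security.readonly=true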

Cairo engine

This section describes configuration settings for the Cairo SQL engine in QuestDB.

PropertyDefaultDescription
query.timeout.sec60

A global timeout (in seconds) for long-running queries. Timeout for each query can override the default by setting HTTP header Statement-Timeout or Postgres options.

cairo.max.uncommitted.rows500000

Maximum number of uncommitted rows per table. When the number of pending rows reaches this value for a table, a commit is issued.

cairo.o3.max.lag10 minutes

The maximum size of in-memory buffer in milliseconds. The buffer is allocated dynamically through analyzing the shape of the incoming data, and o3MaxLag is the upper limit.

cairo.o3.min.lag1 second

The minimum size of in-memory buffer in milliseconds. The buffer is allocated dynamically through analyzing the shape of the incoming data, and o3MinLag is the lower limit.

cairo.sql.backup.rootnull

Output root directory for backups.

cairo.sql.backup.dir.datetime.formatnull

Date format for backup directory.

cairo.sql.backup.dir.tmp.nametmp

Name of tmp directory used during backup.

cairo.sql.backup.mkdir.mode509

Permission used when creating backup directories.

cairo.snapshot.instance.idempty string

Instance id to be included into disk snapshots.

cairo.snapshot.recovery.enabledtrue

When false, disables snapshot recovery on database start.

cairo.rootdb

Directory for storing db tables and metadata. This directory is inside the server root directory provided at startup.

cairo.commit.modenosync

How changes to table are flushed to disk upon commit. Choices: nosync, async (flush call schedules update, returns immediately), sync (waits for flush on the appended column files to complete).

cairo.rnd.memory.max.pages128

Sets the max number of pages for memory used by rnd_ functions. Supports rnd_str() and rnd_symbol().

cairo.rnd.memory.page.size8K

Sets the memory page size used by rnd_ functions. Supports rnd_str() and rnd_symbol().

cairo.create.as.select.retry.count5

Number of times table creation or insertion will be attempted.

cairo.default.map.typefast

Type of map used. Options: fast (speed at the expense of storage), compact.

cairo.default.symbol.cache.flagtrue

When true, symbol values will be cached on Java heap instead of being looked up in the database files.

cairo.default.symbol.capacity256

Specifies approximate capacity for SYMBOL columns. It should be equal to number of unique symbol values stored in the table and getting this value badly wrong will cause performance degradation. Must be power of 2.

cairo.file.operation.retry.count30

Number of attempts to open files.

cairo.idle.check.interval300000

Frequency of writer maintenance job in milliseconds.

cairo.inactive.reader.ttl120000

TTL (Time-To-Live) to close inactive readers in milliseconds.

cairo.wal.inactive.writer.ttl120000

TTL (Time-To-Live) to close inactive WAL writers in milliseconds.

cairo.inactive.writer.ttl600000

TTL (Time-To-Live) to close inactive writers in milliseconds.

cairo.index.value.block.size256

Approximation of number of rows for a single index key, must be power of 2.

cairo.max.swap.file.count30

Number of attempts to open swap files.

cairo.mkdir.mode509

File permission mode for new directories.

cairo.parallel.index.threshold100000

Minimum number of rows before allowing use of parallel indexation.

cairo.reader.pool.max.segments10

Number of segments in the table reader pool. Each segment holds up to 32 readers.

cairo.wal.writer.pool.max.segments10

Number of segments in the WAL writer pool. Each segment holds up to 32 writers.

cairo.spin.lock.timeout1000

Timeout when attempting to get BitmapIndexReaders, in milliseconds.

cairo.character.store.capacity1024

Size of the CharacterStore.

cairo.character.store.sequence.pool.capacity64

Size of the CharacterSequence pool.

cairo.column.pool.capacity4096

Size of the Column pool in the SqlCompiler.

cairo.compact.map.load.factor0.7

Load factor for CompactMaps.

cairo.expression.pool.capacity8192

Size of the ExpressionNode pool in SqlCompiler.

cairo.fast.map.load.factor0.5

Load factor for all FastMaps.

cairo.sql.join.context.pool.capacity64

Size of the JoinContext pool in SqlCompiler.

cairo.lexer.pool.capacity2048

Size of FloatingSequence pool in GenericLexer.

cairo.sql.map.key.capacity2M

Key capacity in FastMap and CompactMap.

cairo.sql.map.max.resizes2^31

Number of map resizes in FastMap and CompactMap before a resource limit exception is thrown, each resize doubles the previous size.

cairo.sql.map.page.size4m

Memory page size for FastMap and CompactMap.

cairo.sql.map.max.pages2^31

Memory max pages for CompactMap.

cairo.model.pool.capacity1024

Size of the QueryModel pool in the SqlCompiler.

cairo.sql.sort.key.page.size4M

Memory page size for storing keys in LongTreeChain.

cairo.sql.sort.key.max.pages2^31

Max number of pages for storing keys in LongTreeChain before a resource limit exception is thrown.

cairo.sql.sort.light.value.page.size1048576

Memory page size for storing values in LongTreeChain.

cairo.sql.sort.light.value.max.pages2^31

Max pages for storing values in LongTreeChain.

cairo.sql.hash.join.value.page.size16777216

Memory page size of the slave chain in full hash joins.

cairo.sql.hash.join.value.max.pages2^31

Max pages of the slave chain in full hash joins.

cairo.sql.latest.by.row.count1000

Number of rows for LATEST BY.

cairo.sql.hash.join.light.value.page.size1048576

Memory page size of the slave chain in light hash joins.

cairo.sql.hash.join.light.value.max.pages2^31

Max pages of the slave chain in light hash joins.

cairo.sql.sort.value.page.size16777216

Memory page size of file storing values in SortedRecordCursorFactory.

cairo.sql.sort.value.max.pages2^31

Max pages of file storing values in SortedRecordCursorFactory.

cairo.work.steal.timeout.nanos10000

Latch await timeout in nanos for stealing indexing work from other threads.

cairo.parallel.indexing.enabledtrue

Allows parallel indexation. Works in conjunction with cairo.parallel.index.threshold.

cairo.sql.join.metadata.page.size16384

Memory page size for JoinMetadata file.

cairo.sql.join.metadata.max.resizes2^31

Number of map resizes in JoinMetadata before a resource limit exception is thrown, each resize doubles the previous size.

cairo.sql.analytic.column.pool.capacity64

Size of AnalyticColumn pool in SqlParser.

cairo.sql.create.table.model.pool.capacity16

Size of CreateTableModel pool in SqlParser.

cairo.sql.create.table.model.batch.size1000000

Batch size for non-atomic CREATE AS SELECT statements.

cairo.sql.column.cast.model.pool.capacity16

Size of CreateTableModel pool in SqlParser.

cairo.sql.rename.table.model.pool.capacity16

Size of RenameTableModel pool in SqlParser.

cairo.sql.with.clause.model.pool.capacity128

Size of WithClauseModel pool in SqlParser.

cairo.sql.insert.model.pool.capacity64

Size of InsertModel pool in SqlParser.

cairo.sql.insert.model.batch.size1000000

Batch size for non-atomic INSERT INTO SELECT statements.

cairo.sql.copy.model.pool.capacity32

Size of CopyModel pool in SqlParser.

cairo.sql.copy.buffer.size2M

Size of buffer used when copying tables.

cairo.sql.double.cast.scale12

Maximum number of decimal places that types cast as doubles have.

cairo.sql.float.cast.scale4

Maximum number of decimal places that types cast as floats have.

cairo.sql.copy.formats.file/text_loader.json

Name of file with user's set of date and timestamp formats.

cairo.sql.jit.modeon

JIT compilation for SQL queries. May be disabled by setting this value to off.

cairo.sql.jit.debug.enabledfalse

Sets debug flag for JIT compilation. When enabled, assembly will be printed into stdout.

cairo.sql.jit.bind.vars.memory.page.size4K

Sets the memory page size for storing bind variable values for JIT compiled filter.

cairo.sql.jit.bind.vars.memory.max.pages8

Sets the maxΒ memory pages for storing bind variable values for JIT compiled filter.

cairo.sql.jit.rows.threshold1M

Sets minimum number of rows to shrink filtered rows memory after query execution.

cairo.sql.jit.page.address.cache.threshold1M

Sets minimum cache size to shrink page address cache after query execution.

cairo.sql.jit.ir.memory.page.size8K

Sets the memory page size for storing IR for JIT compilation.

cairo.sql.jit.ir.max.pages8

Sets max memory pages for storing IR for JIT compilation.

cairo.sql.page.frame.min.rows1000

Sets the minimum number of rows in page frames used in SQL queries.

cairo.sql.page.frame.max.rows1000000

Sets the maximum number of rows in page frames used in SQL queries.

cairo.sql.sampleby.page.size0

SampleBy index query page size. Max values returned in single scan. 0 is default, and it means to use symbol block capacity.

cairo.sql.sampleby.default.alignment.calendar0

SampleBy default alignment behaviour. true corresponds to ALIGN TO CALENDAR, false corresponds to ALIGN TO FIRST OBSERVATION.

cairo.date.localeen

The locale to handle date types.

cairo.timestamp.localeen

The locale to handle timestamp types.

cairo.o3.column.memory.size256k

Memory page size per column for O3 operations. Please be aware O3 will use 2x of the set value per column (therefore a default of 2x256kb).

cairo.writer.data.append.page.size16M

mmap sliding page size that table writer uses to append data for each column.

cairo.writer.data.index.key.append.page.size512K

mmap page size for appending index key data; key data is number of distinct symbol values times 4 bytes.

cairo.writer.data.index.value.append.page.size16M

mmap page size for appending value data.

cairo.writer.misc.append.page.size4K

mmap page size for mapping small files, default value is OS page size (4k Linux, 64K windows, 16k OSX M1). Overriding this rounds to the nearest (greater) multiple of the OS page size.

cairo.writer.command.queue.capacity32

Maximum writer ALTER TABLE and replication command capacity. Shared between all the tables.

cairo.writer.tick.rows.count1024

Number of rows written before the writer checks its command queue while busy writing, i.e. the writer ticks after every X rows written.

cairo.writer.alter.busy.wait.timeout500

Maximum wait timeout in milliseconds for ALTER TABLE SQL statement run via REST and PostgreSQL Wire Protocol interfaces when statement execution is ASYNCHRONOUS.

cairo.sql.column.purge.queue.capacity128

Capacity of the column version purge job queue. Increase the size if column versions are not automatically cleaned up after the execution of UPDATE SQL statements. Reduce to decrease the initial memory footprint.

cairo.sql.column.purge.task.pool.capacity256

Column version task object pool capacity. Increase to reduce GC, reduce to decrease memory footprint.

cairo.sql.column.purge.retry.delay10000

Initial delay (μs) before re-trying purge of stale column files.

cairo.sql.column.purge.retry.delay.multiplier10.0

Multiplier used to increase the retry delay with each iteration.

cairo.sql.column.purge.retry.delay.limit60000000

Delay limit (μs); once reached, the retry delay remains constant.

cairo.sql.column.purge.retry.limit.days31

Number of days purge system will continue to re-try deleting stale column files before giving up.

cairo.volumes-

A comma separated list of alias -> root-path pairs defining allowed volumes to be used in CREATE TABLE IN VOLUME statements.

cairo.system.table.prefixsys.

Prefix of the tables used for QuestDB internal data storage. These tables are hidden from QuestDB web console.

cairo.wal.enabled.defaulttrue

Setting defining whether WAL table is the default when using CREATE TABLE.

cairo.o3.partition.split.min.size50MB

The estimated partition size on disk. This setting is one of the conditions to trigger auto-partitioning.

cairo.o3.last.partition.max.splits20

The number of partition pieces allowed before the last partition piece is merged back to the physical partition.

cairo.o3.partition.purge.list.initial.capacity1

Number of partitions expected on average. Initial value for the purge allocation job; extended automatically at runtime.

cairo.sql.parallel.groupby.enabledtrue

Enables parallel GROUP BY execution; requires at least 4 shared worker threads.

cairo.sql.parallel.groupby.merge.shard.queue.capacity<auto>

Merge queue capacity for parallel GROUP BY; used for parallel tasks that merge shard hash tables.

cairo.sql.parallel.groupby.sharding.threshold100000

Threshold for parallel GROUP BY to shard the hash table holding the aggregates.

cairo.sql.groupby.allocator.default.chunk.size128k

Default size for memory buffers in GROUP BY function native memory allocator.

cairo.sql.groupby.allocator.max.chunk.size4gb

Maximum allowed native memory allocation for GROUP BY functions.

cairo.sql.unordered.map.max.entry.size24

Threshold in bytes for switching from single memory buffer hash table (unordered) to a hash table with separate heap for entries (ordered).

cairo.sql.window.max.recursion128

Prevents stack overflow errors when evaluating complex nested SQLs. The value is an approximate number of nested SELECT clauses.

cairo.sql.query.registry.pool.size<auto>

Pre-sizes the internal data structure that stores active query executions. The value is chosen automatically based on the number of threads in the shared worker pool.

cairo.sql.analytic.initial.range.buffer.size32

Window function buffer size in record counts. Pre-sizes the buffer for every window function execution to contain window records.

cairo.system.writer.data.append.page.size256k

mmap sliding page size that TableWriter uses to append data for each column specifically for System tables.
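
As an illustrative sketch (the values are arbitrary, not recommendations), a few of the Cairo settings above combined in server.conf to tighten the query timeout, commit more eagerly, and force synchronous flushes on commit:

conf/server.conf
query.timeout.sec=120
cairo.max.uncommitted.rows=250000
cairo.o3.max.lag=300000
cairo.commit.mode=sync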

WAL table configurations

The following settings configure the parallel worker threads that apply WAL data to the table storage:

PropertyDefaultDescription
wal.apply.worker.countequal to the CPU core count

Number of dedicated worker threads assigned to handle WAL table data.

wal.apply.worker.affinityequal to the CPU core count

Comma separated list of CPU core indexes.

wal.apply.worker.haltOnErrorfalse

Flag that indicates if the worker thread must stop when an unexpected error occurs.

cairo.wal.purge.interval30000

Period in ms of how often WAL-applied files are cleaned up from the disk

cairo.wal.segment.rollover.row.count200000

Row count of how many rows are written to the same WAL segment before starting a new segment. Triggers in conjunction with cairo.wal.segment.rollover.size (whichever is first).

cairo.wal.commit.squash.row.limit500000

Maximum row count that can be squashed together from multiple transactions before applying to the table. A very low value can delay data visibility.

cairo.wal.squash.uncommitted.rows.multiplier20.0

Multiplier to cairo.max.uncommitted.rows used to calculate the limit of rows that can be kept invisible when writing to a WAL table under heavy load, when multiple transactions are to be applied. It is used to reduce the number of Out-Of-Order (O3) commits when O3 commits are unavoidable, by squashing multiple commits together. Setting it very low can increase O3 commit frequency and decrease throughput. Setting it too high may cause excessive memory usage and increase latency.

cairo.wal.max.lag.txn.count20

Maximum number of transactions that can be kept invisible when writing to WAL table. Once the number is reached, full commit occurs. If not set, defaults to the rounded value of cairo.wal.squash.uncommitted.rows.multiplier.
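
A minimal sketch, with arbitrary values, of tuning WAL application in server.conf:

conf/server.conf
wal.apply.worker.count=4
cairo.wal.segment.rollover.row.count=100000
cairo.wal.max.lag.txn.count=20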

CSV import

This section describes configuration settings for using COPY to import large CSV files.

Settings for COPY:

PropertyDefaultDescription
cairo.sql.copy.rootimport

Input root directory for CSV imports via COPY SQL and for Parquet file reading. This path must not overlap with other directories (e.g. db, conf) of the running instance, otherwise the import may delete or overwrite existing files. Relative paths are resolved against the server root directory.

cairo.sql.copy.work.rootnull

Temporary import file directory. Defaults to root_directory/tmp if not set explicitly.

cairo.iouring.enabledtrue

Enable or disable io_uring implementation. Applicable to newer Linux kernels only. Can be used to switch io_uring interface usage off if there's a kernel bug affecting it.

cairo.sql.copy.buffer.size2 MiB

Size of read buffers used in import.

cairo.sql.copy.log.retention.days3

Number of days to keep import messages in sys.text_import_log.

cairo.sql.copy.max.index.chunk.size100M

Maximum size of index chunk file used to limit total memory requirements of import. Indexing phase should use roughly thread_count * cairo.sql.copy.max.index.chunk.size of memory.

cairo.sql.copy.queue.capacity32

Size of the copy task queue. Should be increased if there are more than 32 import workers.
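
For illustration, the input and work directories can be set explicitly; the paths below are placeholders:

conf/server.conf
cairo.sql.copy.root=/var/lib/questdb/questdb_import
cairo.sql.copy.work.root=/var/lib/questdb/tmp
Setting the input root via environment variable
export QDB_CAIRO_SQL_COPY_ROOT=/var/lib/questdb/questdb_import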

CSV import configuration for Docker

For QuestDB instances using Docker:

  • cairo.sql.copy.root must be defined using one of the following settings:
    • The environment variable QDB_CAIRO_SQL_COPY_ROOT.
    • The cairo.sql.copy.root in server.conf.
  • The path for the source CSV file is mounted.
  • The source CSV file path and the path defined by QDB_CAIRO_SQL_COPY_ROOT are identical.
  • It is optional to define QDB_CAIRO_SQL_COPY_WORK_ROOT.

The following is an example command to start a QuestDB instance on Docker, in order to import a CSV file:

docker run -p 9000:9000 \
-v "/tmp/questdb:/var/lib/questdb" \
-v "/tmp/questdb/my_input_root:/var/lib/questdb/questdb_import" \
-e QDB_CAIRO_SQL_COPY_ROOT=/var/lib/questdb/questdb_import \
questdb/questdb

Where:

  • -v "/tmp/questdb/my_input_root:/var/lib/questdb/questdb_import": Defining a source CSV file location to be /tmp/questdb/my_input_root on local machine and mounting it to /var/lib/questdb/questdb_import in the container.
  • -e QDB_CAIRO_SQL_COPY_ROOT=/var/lib/questdb/questdb_import: Defining the copy root directory to be /var/lib/questdb/questdb_import.

It is important that the two paths are identical (/var/lib/questdb/questdb_import in the example).

Parallel SQL execution

This section describes settings that can affect the level of parallelism during SQL execution, and therefore can also have an impact on performance.

PropertyDefaultDescription
cairo.sql.parallel.filter.enabledtrue

Enable or disable parallel SQL filter execution. JIT compilation takes place only when this setting is enabled.

cairo.sql.parallel.filter.pretouch.enabledtrue

Enable column pre-touch as part of the parallel SQL filter execution, to improve query performance for large tables.

cairo.page.frame.shard.count4

Number of shards for both dispatch and reduce queues. Shards reduce queue contention between SQL statements that are executed concurrently.

cairo.page.frame.reduce.queue.capacity64

The reduce queue is used for data processing and should be large enough to supply tasks for worker threads (shared worker pool).

cairo.page.frame.rowid.list.capacity256

Row ID list initial capacity for each slot of the reduce queue. Larger values reduce memory allocation rate, but increase minimal RSS size.

cairo.page.frame.column.list.capacity16

Column list capacity for each slot of the reduce queue. Used by JIT-compiled filter functions. Larger values reduce memory allocation rate, but increase minimal RSS size.
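
For example, to switch off parallel filter execution (which, as noted above, also disables JIT compilation of filters):

conf/server.conf
cairo.sql.parallel.filter.enabled=false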

Postgres wire protocol

This section describes configuration settings for client connections using the PostgreSQL wire protocol.

PropertyDefaultDescription
pg.enabledtrue

Configuration for enabling or disabling the Postgres interface.

pg.net.bind.to0.0.0.0:8812

IP address and port of Postgres wire protocol server. 0 means that the server will bind to all network interfaces. You can specify IP address of any individual network interface on your system.

pg.net.connection.limit64

The maximum number permitted for simultaneous Postgres connections to the server. This value is intended to control server memory consumption.

pg.net.connection.timeout300000

Connection idle timeout in milliseconds. Connections are closed by the server when this timeout lapses.

pg.net.connection.rcvbuf-1

Maximum receive buffer size on each TCP socket. If the value is -1, the socket receive buffer remains unchanged from the OS default.

pg.net.connection.sndbuf-1

Maximum send buffer size on each TCP socket. If the value is -1, the socket send buffer remains unchanged from the OS default.

pg.net.connection.hintfalse

Windows specific flag to overcome OS limitations on TCP backlog size

pg.net.connection.queue.timeout300000

Amount of time in milliseconds a connection can wait in the listen backlog queue before it is refused. Connections will be aggressively removed from the backlog until the active connection limit is breached.

pg.security.readonlyfalse

Forces PostgreSQL Wire Protocol read only mode when true, disabling commands which modify the data or data structure, e.g. INSERT, UPDATE, or CREATE TABLE.

pg.character.store.capacity4096

Size of the CharacterStore.

pg.character.store.pool.capacity64

Size of the CharacterStore pool capacity.

pg.connection.pool.capacity64

The maximum amount of pooled connections this interface may have.

pg.passwordquest

Postgres database password.

pg.useradmin

Postgres database username.

pg.readonly.user.enabledfalse

Enable or disable Postgres database read-only user account. When enabled, this additional user can be used to open read-only connections to the database.

pg.readonly.passwordquest

Postgres database read-only user password.

pg.readonly.useruser

Postgres database read-only user username.

pg.select.cache.enabledtrue

Enable or disable the SELECT query cache. Cache capacity is number_of_blocks * number_of_rows.

pg.select.cache.block.count16

Number of blocks to cache SELECT query execution plan against text to speed up execution.

pg.select.cache.row.count16

Number of rows to cache for SELECT query execution plan against text to speed up execution.

pg.insert.cache.enabledtrue

Enable or disable the INSERT query cache. Cache capacity is number_of_blocks * number_of_rows.

pg.insert.cache.block.count8

Number of blocks to cache INSERT query execution plan against text to speed up execution.

pg.insert.cache.row.count8

Number of rows to cache for INSERT query execution plan against text to speed up execution.

pg.update.cache.enabledtrue

Enable or disable the UPDATE query cache. Cache capacity is number_of_blocks * number_of_rows.

pg.update.cache.block.count8

Number of blocks to cache UPDATE query execution plan against text to speed up execution.

pg.update.cache.row.count8

Number of rows to cache for UPDATE query execution plan against text to speed up execution.

pg.max.blob.size.on.query512k

For binary values, clients will receive an error when requesting blob sizes above this value.

pg.recv.buffer.size1M

Size of the buffer for receiving data.

pg.send.buffer.size1M

Size of the buffer for sending data.

pg.date.localeen

The locale to handle date types.

pg.timestamp.localeen

The locale to handle timestamp types.

pg.worker.count0

Number of dedicated worker threads assigned to handle PostgreSQL Wire Protocol queries. When 0, the jobs will use the shared pool.

pg.worker.affinity

Comma-separated list of thread numbers which should be pinned for Postgres ingestion. Example pg.worker.affinity=1,2,3.

pg.halt.on.errorfalse

Whether ingestion should stop upon internal error.

pg.daemon.pooltrue

Defines whether to run all PostgreSQL Wire Protocol worker threads in daemon mode (true) or not (false).

pg.binary.param.count.capacity2

Size of the initial capacity for the pool used for binary bind variables.
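
An illustrative sketch that changes the default credentials and enables the additional read-only account; all credential values below are placeholders:

conf/server.conf
pg.user=myadmin
pg.password=mysecret
pg.readonly.user.enabled=true
pg.readonly.user=readonly
pg.readonly.password=readonlysecret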

InfluxDB Line Protocol (ILP)

This section describes ingestion settings for incoming messages using InfluxDB Line Protocol.

PropertyDefaultDescription
line.default.partition.byDAY

Table partition strategy to be used with tables that are created automatically by InfluxDB Line Protocol. Possible values are: HOUR, DAY, WEEK, MONTH, and YEAR.

line.auto.create.new.columnstrue

When enabled, automatically creates new columns when they appear in the ingested data. When disabled, messages with new columns will be rejected.

line.auto.create.new.tablestrue

When enabled, automatically creates new tables when they appear in the ingested data. When disabled, messages for non-existent tables will be rejected.
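
For example, to partition auto-created tables by HOUR and reject rows that reference non-existent tables:

conf/server.conf
line.default.partition.by=HOUR
line.auto.create.new.tables=false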

HTTP specific settings

ILP over HTTP is the preferred way of ingesting data.

PropertyDefaultDescription
line.http.enabledtrue

Enable ILP over HTTP. The default port is 9000. Enabled by default in open source versions; defaults to false and must be enabled in Enterprise.

line.http.ping.versionv2.2.2

Version information for the ping response of ILP over HTTP.

HTTP properties

Various. See HTTP settings for general HTTP configuration; ILP over HTTP inherits from the HTTP settings.

TCP specific settings

PropertyDefaultDescription
line.tcp.enabledtrue

Enable or disable line protocol over TCP.

line.tcp.net.bind.to0.0.0.0:9009

IP address of the network interface to bind listener to and port. By default, TCP receiver listens on all network interfaces.

line.tcp.net.connection.limit256

The maximum number permitted for simultaneous connections to the server. This value is intended to control server memory consumption.

line.tcp.net.connection.timeout300000

Connection idle timeout in milliseconds. Connections are closed by the server when this timeout lapses.

line.tcp.net.connection.hintfalse

Windows specific flag to overcome OS limitations on TCP backlog size

line.tcp.net.connection.rcvbuf-1

Maximum buffer receive size on each TCP socket. If value is -1, the socket receive buffer remains unchanged from OS default.

line.tcp.net.connection.queue.timeout5000

Amount of time in milliseconds a connection can wait in the listen backlog queue before it is refused. Connections will be aggressively removed from the backlog until the active connection limit is breached.

line.tcp.auth.db.path

Path which points to the authentication db file.

line.tcp.connection.pool.capacity64

The maximum amount of pooled connections this interface may have.

line.tcp.timestampn

Input timestamp resolution. Possible values are n, u, ms, s and h.

line.tcp.msg.buffer.size32768

Size of the buffer read from queue. Maximum size of write request, regardless of the number of measurements.

line.tcp.maintenance.job.interval1000

Maximum amount of time (in milliseconds) between maintenance jobs committing any uncommitted data on inactive tables.

line.tcp.min.idle.ms.before.writer.release500

Minimum amount of idle time (in milliseconds) before a table writer is released.

line.tcp.commit.interval.fraction0.5

Commit lag fraction. Used to calculate the commit interval for the table according to the following formula: commit_interval = commit_lag * fraction. The calculated commit interval defines how long uncommitted data will need to remain uncommitted.

line.tcp.commit.interval.default1000

Default commit interval in milliseconds.

line.tcp.max.measurement.size32768

Maximum size of any measurement.

line.tcp.writer.queue.size128

Size of the queue between network I/O and writer jobs. Each queue entry represents a measurement.

line.tcp.writer.worker.count

Number of dedicated I/O worker threads assigned to write data to tables. When 0, the writer jobs will use the shared pool.

line.tcp.writer.worker.affinity

Comma-separated list of thread numbers which should be pinned for line protocol ingestion over TCP. CPU core indexes are 0-based.

line.tcp.writer.worker.sleep.threshold1000

Amount of subsequent loop iterations with no work done before the worker goes to sleep.

line.tcp.writer.worker.yield.threshold10

Amount of subsequent loop iterations with no work done before the worker thread yields.

line.tcp.writer.queue.capacity128

Size of the queue between the IO jobs and the writer jobs, each queue entry represents a measurement.

line.tcp.writer.halt.on.errorfalse

Flag that indicates if the worker thread must stop when an unexpected error occurs.

line.tcp.io.worker.count

Number of dedicated I/O worker threads assigned to parse TCP input. When 0, the I/O jobs will use the shared pool.

line.tcp.io.worker.affinity

Comma-separated list of thread numbers which should be pinned for line protocol ingestion over TCP. CPU core indexes are 0-based.

line.tcp.io.worker.sleep.threshold1000

Amount of subsequent loop iterations with no work done before the worker goes to sleep.

line.tcp.io.worker.yield.threshold10

Amount of subsequent loop iterations with no work done before the worker thread yields.

line.tcp.disconnect.on.errortrue

Disconnect TCP socket that sends malformed messages.

line.tcp.acl.enabledtrue

Enable or disable Access Control List (ACL) authentication for InfluxDB Line Protocol over TCP. Enterprise only.
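
A minimal sketch of ILP-over-TCP settings in server.conf; the values are illustrative, not recommendations:

conf/server.conf
line.tcp.enabled=true
line.tcp.net.bind.to=0.0.0.0:9009
line.tcp.timestamp=u
line.tcp.max.measurement.size=65536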

UDP specific settings

note

The UDP receiver is deprecated since QuestDB version 6.5.2. We recommend ILP over HTTP instead, or less frequently ILP over TCP.

PropertyDefaultDescription
line.udp.join232.1.2.3

Multicast address the receiver joins. This value is ignored when the receiver is in "unicast" mode.

line.udp.bind.to0.0.0.0:9009

IP address of the network interface to bind listener to and port. By default UDP receiver listens on all network interfaces.

line.udp.commit.rate1000000

For packet bursts the number of continuously received messages after which receiver will force commit. Receiver will commit irrespective of this parameter when there are no messages.

line.udp.msg.buffer.size2048

Buffer used to receive single message. This value should be roughly equal to your MTU size.

line.udp.msg.count10000

Only for Linux. On Linux, QuestDB will use the recvmmsg() system call. This is the max number of messages to receive at once.

line.udp.receive.buffer.size8388608

UDP socket buffer size. Larger size of the buffer will help reduce message loss during bursts.

line.udp.enabledfalse

Enable or disable UDP receiver.

line.udp.own.threadfalse

When true, the UDP receiver will use its own thread and busy-spin it for performance reasons. When false, the receiver uses the worker threads that do everything else in QuestDB.

line.udp.own.thread.affinity-1

-1 does not set thread affinity: the OS will schedule the thread, and it may run on random cores and jump between them. A value of 0 or higher pins the thread to the given core. This property is only valid when the UDP receiver uses its own thread.

line.udp.unicastfalse

When true, UDP will use unicast. Otherwise multicast.

line.udp.timestampn

Input timestamp resolution. Possible values are n, u, ms, s and h.

line.udp.commit.modenosync

Commit durability. Available values are nosync, sync and async.

Database replication

note

Replication is Enterprise only.

Replication enables high availability clusters.

For setup instructions, see the replication operations guide.

For an overview of the concept, see the replication concept page.

For a tuning guide, see the replication tuning guide.

PropertyDefaultDescription
replication.rolenone

Defaults to none for stand-alone instances. To enable replication set to one of: primary, replica.

replication.object.store

A configuration string that allows connecting to an object store. The format is scheme::key1=value;key2=value2;…. The various keys and values are detailed in a later section. Ignored if replication is disabled. There is no default value due to the variability of object store configurations.

cairo.wal.segment.rollover.size2097152

The size of the WAL segment before it is rolled over. Defaults to 2MiB when replication.role=primary is set; otherwise defaults to 0.

cairo.writer.command.queue.capacity32

Maximum writer ALTER TABLE and replication command capacity. Shared between all the tables.

replication.primary.throttle.window.duration10000

The millisecond duration of the sliding window used to process replication batches. Default is 10000 ms.

replication.requests.max.concurrent0

A limit to the number of concurrent object store requests. The default is 0 for unlimited.

replication.requests.retry.attempts3

Maximum number of times to retry a failed object store request before logging an error and reattempting later after a delay. Default is 3.

replication.requests.retry.interval200

How long to wait before retrying a failed operation. Default is 200 ms.

replication.primary.compression.threadscalculated

Max number of threads used to perform file compression operations before uploading to the object store. The default value is calculated as half the number of CPU cores.

replication.primary.compression.level1

Zstd compression level. Defaults to 1. Valid values are from 1 to 22.

replication.replica.poll.interval1000

Millisecond polling rate of a replica instance to check for the availability of new changes.

native.async.io.threadscpuCount

The number of async (network) io threads used for replication (and in the future cold storage). The default should be appropriate for most use cases.

native.max.blocking.threadscpuCount * 4

Maximum number of threads for parallel blocking disk IO read/write operations for replication (and other uses). These threads are ephemeral: they are spawned as needed and shut down after a short duration if no longer in use. These are not CPU-bound threads, hence the relatively large number. The default should be appropriate for most use cases.
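
As an illustrative sketch only (the object store connection string below is hypothetical; the supported schemes, keys and values are detailed in the section referenced above), a primary instance might be configured as:

conf/server.conf
replication.role=primary
# hypothetical object store string; consult the object store documentation for real keys
replication.object.store=s3::bucket=my-replication-bucket;root=questdb;region=eu-west-1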

Identity and Access Management (IAM)

note

Identity and Access Management is available within QuestDB Enterprise.

Identity and Access Management (IAM) ensures that data can be accessed only by authorized users. The below configuration properties relate to various authentication and authorization features.

For a full explanation of IAM, see the Identity and Access Management (IAM) documentation.

PropertyDefaultDescription
acl.enabledtrue

Enables/disables Identity and Access Management.

acl.admin.user.enabledtrue

Enables/disables the built-in admin user.

acl.admin.useradmin

Name of the built-in admin user.

acl.admin.passwordquest

The password of the built-in admin user.

acl.basic.auth.realm.enabledfalse

When enabled the browser's basic auth popup window is used instead of the Web Console's login screen. Only present for backwards compatibility.

acl.entity.name.max.length255

Maximum length of user, group and service account names.

acl.password.hash.iteration.count100000

QuestDB Enterprise never stores passwords in plain text, it stores password hashes only. This is the number of hash iterations used in password hashing. Higher means safer, almost never should be changed.

acl.rest.token.refresh.threshold10

When a REST token is created in REFRESH mode, its TTL is extended on every successful authentication, unless the last successful authentication was within this threshold. This setting removes unnecessary overhead of continuously refreshing REST tokens if they are used often. The value is expressed in seconds.

tls.enabledfalse

Enables/disables TLS encryption globally for all QuestDB interfaces (HTTP endpoints, ILP over TCP).

tls.cert.path

Path to certificate used for TLS encryption globally. The certificate should be DER-encoded and saved in PEM format.

tls.private.key.path

Path to private key used for TLS encryption globally.

http.tls.enabledfalse

Enables/disables TLS encryption for the HTTP server only.

http.tls.cert.path

Path to certificate used for TLS encryption for the HTTP server only. The certificate should be DER-encoded and saved in PEM format.

http.tls.private.key.path

Path to private key used for TLS encryption for the HTTP server only.

http.min.tls.enabledfalse

Enables/disables TLS encryption for the minimal HTTP server only.

http.min.tls.cert.path

Path to certificate used for TLS encryption for the minimal HTTP server only. The certificate should be DER-encoded and saved in PEM format.

http.min.tls.private.key.path

Path to private key used for TLS encryption for the minimal HTTP server only.

line.tcp.tls.enabledfalse

Enables/disables TLS encryption for ILP over TCP only.

line.tcp.tls.cert.path

Path to certificate used for TLS encryption for ILP over TCP only. The certificate should be DER-encoded and saved in PEM format.

line.tcp.tls.private.key.path

Path to private key used for TLS encryption for ILP over TCP only.

line.tcp.acl.enabledtrue

Enables/disables authentication for the ILP over TCP endpoint only.
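
For illustration, TLS can be enabled globally with placeholder paths to a PEM-format certificate and private key:

conf/server.conf
tls.enabled=true
tls.cert.path=/path/to/server-cert.pem
tls.private.key.path=/path/to/server-private-key.pem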

OpenID Connect (OIDC)

note

OpenID Connect is Enterprise only.

OpenID Connect (OIDC) support is part of QuestDB's Identity and Access Management. The database can be integrated with any OAuth2/OIDC Identity Provider (IdP).

For detailed information about OIDC, see the OpenID Connect (OIDC) integration guide.

PropertyDefaultDescription
acl.oidc.enabledfalse

Enables/disables OIDC authentication. When enabled, a few other configuration options must also be set.

acl.oidc.pkce.enabledtrue

Enables/disables PKCE for the Authorization Code Flow. This should always be enabled in a production environment, the Web Console is not fully secure without it.

acl.oidc.ropc.flow.enabledfalse

Enables/disables Resource Owner Password Credentials flow. When enabled, this flow also has to be configured in the OIDC Provider.

acl.oidc.configuration.url

URL from which the OpenID Provider's configuration information can be loaded in JSON format; it should always end with /.well-known/openid-configuration.

acl.oidc.host

OIDC provider hostname. Required when OIDC is enabled, unless the OIDC configuration URL is set.

acl.oidc.port443

OIDC provider port number.

acl.oidc.tls.enabledtrue

Whether the OIDC provider requires a secure connection or not. It is highly unlikely in a production environment, but if the OpenID Provider endpoints do not require a secure connection, this option can be set to false.

acl.oidc.tls.validation.enabledtrue

Enables/disables TLS certificate validation. If you are working with self-signed certificates that you would like QuestDB to trust, disable this option. Validation is strongly recommended in production environments. QuestDB will check that the certificate is valid, and that it is issued for the server to which it connects.

acl.oidc.tls.keystore.path

Path to a keystore file that contains trusted Certificate Authorities. Will be used when validating the certificate of the OIDC provider. Not required if your OIDC provider's certificate is signed by a public CA.

acl.oidc.tls.keystore.password

Keystore password, required if there is a keystore file and it is password protected.

acl.oidc.http.timeout30000

OIDC provider HTTP request timeout in milliseconds.

acl.oidc.client.id

Client name assigned to QuestDB in the OIDC server, required when OIDC is enabled.

acl.oidc.audience

OAuth2 audience as set on the tokens issued by the OIDC Provider, defaults to the client id.

acl.oidc.redirect.uri

The redirect URI tells the OIDC server where to redirect the user after successful authentication. If not set, the Web Console defaults it to the location where it was loaded from (window.location.href).

acl.oidc.scopeopenid

The OIDC server should ask consent for the list of scopes provided in this property. The scope openid is mandatory, and always should be included.

acl.oidc.public.keys.endpoint/pf/JWKS

JSON Web Key Set (JWKS) Endpoint; the default value should work for the Ping Identity Platform. This endpoint provides the list of public keys that can be used to decode and validate ID tokens issued by the OIDC Provider.

acl.oidc.authorization.endpoint/as/authorization.oauth2

OIDC Authorization Endpoint, the default value should work for the Ping Identity Platform.

acl.oidc.token.endpoint/as/token.oauth2

OIDC Token Endpoint, the default value should work for the Ping Identity Platform.

acl.oidc.userinfo.endpoint/idp/userinfo.openid

OIDC User Info Endpoint, the default value should work for the Ping Identity Platform. Used to retrieve additional user information which contains the user's group memberships.

acl.oidc.groups.encoded.in.tokenfalse

Should be set to true if the OIDC Provider is configured to encode the group memberships of the user into the ID token. When set to true, QuestDB will look for the groups in the token instead of calling the User Info endpoint.

acl.oidc.sub.claimsub

The name of the claim in the user information, which contains the name of the user. Could be a username, the user's full name or email. It will be displayed in the Web Console, and logged for audit purposes.

acl.oidc.groups.claimgroups

The name of the custom claim in the user information, which contains the group memberships of the user.

acl.oidc.cache.ttl30000

User info cache entry TTL (time to live) in milliseconds; the default value is 30 seconds. For improved performance QuestDB caches user info responses for each valid access token; this setting drives how often the access token should be validated and the user info updated.
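
An illustrative sketch of a minimal OIDC setup; the hostname and client id below are placeholders, and the endpoint defaults listed above are left unchanged:

conf/server.conf
acl.oidc.enabled=true
acl.oidc.host=idp.example.com
acl.oidc.port=443
acl.oidc.client.id=questdb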

Config Validation

The database startup phase checks for configuration issues, such as invalid or deprecated settings. Issues may be classified as advisories or errors. Advisory issues are logged without causing the database to stop its startup sequence: These are usually setting deprecation warnings. Configuration errors can optionally cause the database to fail its startup.

PropertyDefaultDescription
config.validation.strictfalse

When enabled, startup fails if there are configuration errors.

We recommend enabling strict validation.
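
For example, to fail startup on configuration errors:

conf/server.conf
config.validation.strict=true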

Telemetry

QuestDB sends anonymous telemetry data with information about usage which helps us improve the product over time. We do not collect any personally-identifying information, and we do not share any of this data with third parties.

PropertyDefaultDescription
telemetry.enabledtrue

Enable or disable anonymous usage metrics collection.

telemetry.hide.tablesfalse

Hides telemetry tables from select * from tables() output. As a result, telemetry tables will not be visible in the Web Console table view.

telemetry.queue.capacity512

Capacity of the internal telemetry queue, which is the gateway of all telemetry events. This queue capacity does not require tweaking.
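
For example, to opt out of telemetry collection and hide the telemetry tables:

conf/server.conf
telemetry.enabled=false
telemetry.hide.tables=true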

Logging & Metrics

The following settings are available in server.conf:

PropertyDefaultDescription
log.level.verbosefalse

Converts short-hand log level indicators (E, C, I) into long-hand (ERROR, CRITICAL, INFO)

Further settings are available in log.conf. For more information, and details of our Prometheus metrics, please visit the Logging & Metrics documentation.