There are two different connection implementations: the default, which uses Promises, and another that uses callbacks, allowing compatibility with the mysql and mysql2 APIs.
The documentation provided on this page covers the Promise API (the default). For information on the Callback API, see the Callback API documentation.
Install the MariaDB Connector using npm
You can then use the Connector in your application code with the Promise API. For instance,
To use the Connector, you first need to install it on your system. The Promise and Callback APIs are provided by the same package, installed through npm.
To use the Connector, you need to import the package into your application code.
The default decoding behavior for these data types in 2.x versions (and in the mysql/mysql2 drivers) returns a JavaScript Number. BIGINT and DECIMAL values might not fit in the safe integer range, resulting in approximate results.
Since the 3.x version, the driver has reliable defaults, returning:
DECIMAL => JavaScript String
BIGINT => JavaScript BigInt object

For compatibility with previous versions or the mysql/mysql2 drivers, four options have been added to return BIGINT/DECIMAL values as Numbers, matching the previous defaults.
The previous options supportBigNumbers and bigNumberStrings still exist for compatibility, but are now deprecated.
Other considerations
mysql has an experimental syntax permitting the use of ?? characters as a placeholder for an escaped identifier. This isn't implemented in the MariaDB driver.

Example:

The MariaDB driver has to use an explicit escapeId:
The cluster configuration option removeNodeErrorCount defaults to Infinity, whereas mysql/mysql2 default to 5. This avoids removing nodes without explicitly requesting it.
It is recommended to activate the trace option in development. Since the driver is asynchronous, enabling this option permits saving the initial stack trace whenever a driver method is called. This provides useful debugging information. Example:

The caller method and line are then present in the error stack, making errors easy to debug.
The problem is that creating this error stack is expensive: it slows down a query like 'SELECT * FROM mysql.user LIMIT 1' by about 10%, so it is not recommended in production.
If client and server share the same timezone, the default behavior (timezone='local') is the right solution. The problem arises when client and server don't share the same timezone.
The timezone option can have the following values:
'local' (default): the connector doesn't do any conversion. If the database has a different timezone, there will be offset issues.
'auto': the connector retrieves the server timezone and, if the client timezone differs, sets the session timezone to the client timezone.
an IANA timezone or offset, for example 'America/New_York' or '+06:00': the connector sets the session timezone to the indicated timezone. This timezone is expected to correspond to the client timezone.
Using 'auto' or setting a specific timezone solves timezone correction. Be careful with fixed timezones: Etc/GMT+12 = GMT-12:00 = offset -12. The Etc/GMT zones have the opposite sign!
(Before 3.1, the connector converted dates to the server timezone, but this did not correct all timezone issues.)
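To make the sign inversion concrete, here is a standalone Node.js sketch (no database involved) that measures the UTC offset of a named zone using only built-in date handling; the helper name offsetHours is ours, not part of the connector:

```javascript
// Measure a zone's UTC offset by rendering one instant both in the target
// zone and in UTC, then comparing the re-parsed wall-clock times.
function offsetHours(timeZone) {
  const instant = new Date(Date.UTC(2024, 0, 15, 12, 0, 0));
  const inZone = new Date(instant.toLocaleString('en-US', { timeZone }));
  const inUTC = new Date(instant.toLocaleString('en-US', { timeZone: 'UTC' }));
  return Math.round((inZone - inUTC) / 3600000);
}

console.log(offsetHours('Etc/GMT+12')); // -12: 'Etc/GMT+12' actually means UTC-12!
console.log(offsetHours('Etc/GMT-5'));  // 5
```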
IANA timezone / offset
When using an IANA timezone, the connector will set the connection timezone accordingly. This can throw an error on connection if the timezone is unknown to the server (the server's timezone tables might not be initialized). If you are sure the server already uses that timezone, this step can be skipped with the skipSetTimezone option.
If the timezone corresponds to JavaScript default timezone, then no conversion will be done.
Timezone setting recommendation
The best option is to have the same timezone on client and database, and keep the default 'local' value.

If they differ, either the client or the server has to convert dates. In general, it is best to use client-side conversion to avoid putting unneeded stress on the database: set the timezone option to the IANA timezone corresponding to the server timezone, and enable the skipSetTimezone option, since you are sure the server already uses that timezone.

Example: the client uses 'America/New_York' by default, and the server 'America/Los_Angeles'. Execute 'SELECT @@system_time_zone' on the server to get the server's default timezone. The server may return a POSIX timezone abbreviation like 'PDT' (Pacific Daylight Time). The corresponding IANA timezone must be found and configured client-side. This ensures DST (daylight saving time) changes are handled automatically.
Connection details such as the URL, username, and password are better kept in environment variables, using code like:
Then for example, run node.js setting those environment variables:
Another solution is the dotenv package, which loads environment variables from .env files into process.env in Node.js:
Then configure dotenv to load all .env files:
with an .env file containing:
.env files must NOT be pushed into the repository; exclude them using .gitignore.

Alternatively, Node.js 20.0 introduced the experimental node --env-file=.env flag to load environment variables without the need for external dependencies. We can then simply write:
Assuming the presence of the same .env file as previously described.
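As an illustration of the pattern, connection options can be assembled from process.env with explicit fallbacks. The variable names (DB_HOST, DB_USER, DB_PWD, DB_NAME, DB_CONN_LIMIT) are hypothetical, pick your own:

```javascript
// Build a connection/pool options object from environment variables.
// Password deliberately has no fallback so a missing secret fails loudly.
function configFromEnv(env = process.env) {
  return {
    host: env.DB_HOST ?? 'localhost',
    user: env.DB_USER ?? 'root',
    password: env.DB_PWD,
    database: env.DB_NAME,
    connectionLimit: Number(env.DB_CONN_LIMIT ?? 5),
  };
}

// e.g. mariadb.createPool(configFromEnv())
console.log(configFromEnv({ DB_HOST: 'db.example.com', DB_USER: 'app' }));
```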
For new projects, enabling the supportBigInt option is recommended (it will be the default in a future 3.x version).

This option avoids approximate values for big integers (values > 2^53, beyond Number's safe integer range).
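The precision issue is easy to demonstrate in plain Node.js, without any database:

```javascript
// Why returning BIGINT as a Number is unsafe: Number only has 53 bits of
// integer precision, while BIGINT is a 64-bit type.
const fromDb = 9007199254740993n; // 2^53 + 1, a perfectly valid BIGINT value

console.log(Number.MAX_SAFE_INTEGER);      // 9007199254740991 (2^53 - 1)
console.log(Number(fromDb));               // 9007199254740992 — precision lost!
console.log(fromDb === 9007199254740993n); // true — BigInt keeps the exact value
```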
Base:
createConnection(options) → Promise: Creates a new connection.
createPool(options) → Pool: Creates a new pool.
createPoolCluster(options) → PoolCluster: Creates a new pool cluster.
importFile(options) → Promise: Imports an SQL file.

Connection:
query(sql[, values]) → Promise: Executes a query.
queryStream(sql[, values]) → Emitter: Executes a query, returning an emitter object to stream rows.
prepare(sql) → Promise: Prepares a query.
execute(sql[, values]) → Promise: Prepares and executes a query.

Pool:
getConnection() → Promise: Obtains a connection from the pool.
query(sql[, values]) → Promise: Executes a query.
batch(sql, values) → Promise: Executes a batch.
end() → Promise: Gracefully closes the pool.

PoolCluster:
add(id, config): Adds a pool to the cluster.
remove(pattern): Removes and ends pools matching the pattern.
end() → Promise: Ends the cluster.
getConnection([pattern], [selector]) → Promise: Returns a connection from the cluster.
createConnection(options) → Promise

options: JSON/String

Returns a promise that:
resolves with a Connection object,
rejects with an Error.

Creates a new Connection object.
Example:
Essential options list:
For more information, see the documentation.
When working with a local database (that is, cases where MariaDB and your Node.js application run on the same host), you can connect to MariaDB through the Unix socket or Windows named pipe for better performance, rather than using the TCP/IP layer.
In order to set this up, you need to assign the connection a socketPath value. When this is done, the Connector ignores the host and port options.
The specific socket path you need to set is defined by the server's socket system variable. If you don't know it offhand, you can retrieve it from the server.

It defaults to /tmp/mysql.sock on Unix-like operating systems, and to MySQL (the named pipe name) on Windows. Additionally, on Windows, this feature only works when the server is started with the --enable-named-pipe option.
For instance, on Unix a connection might look like this:
It has a similar syntax on Windows:
createPool(options) → Pool
options: JSON/String

Returns a Pool object.

Creates a new pool.
Example:
Pool options include connection options, which will be used when creating new connections.
Specific options for pools are:
createPoolCluster(options) → PoolCluster
options: JSON

Returns a PoolCluster object.

Creates a new pool cluster. A cluster handles multiple pools, providing high availability and load distribution (using round-robin / random / ordered selection).
Example:
Pool cluster options include pool options, which will be used when creating new pools.
Specific options for a pool cluster are:
importFile(options) → Promise
options: JSON/String, plus one additional option: file

Returns a promise that:
resolves with an empty result,
rejects with an Error.

Imports an SQL file.
Example:
version → String

Returns a String containing the library version, for example '2.1.2'.
defaultOptions(options) → Json
options: JSON/String (optional)

Returns a JSON object containing the default value of each option.

Permits listing the default options that will be used.
connection.query(sql[, values]) -> Promise
sql: string | JSON SQL string or JSON object to supersede default connection options. When using a JSON object, it must have a "sql" key. For instance, { dateStrings: true, sql: 'SELECT now()' }

values: array | object Placeholder values. Usually an array, but in the case of a single placeholder, it can be given as is.

Returns a promise that resolves with the query result, or rejects with an Error.

Sends a query to the database and returns the result as a Promise.
For instance, when using an SQL string:
Alternatively, you could use the JSON object:
To prevent SQL Injection attacks, queries permit the use of question marks as placeholders. The Connection escapes values according to their type. Values can be of native JavaScript types, Buffers, Readables, objects with toSQLString methods, or objects that can be stringified (that is, JSON.stringify).
When streaming, objects that implement Readable are streamed automatically. But there are two server system variables that may interfere:
net_read_timeout: The server must receive the query in full before reaching this timeout, which defaults to 30 seconds.
max_allowed_packet: This system variable defines the maximum amount of data the Connector can send to the server.
For instance,
In the case of streaming,
Queries return two different kinds of results, depending on the type of query you execute. When you execute write statements (such as INSERT, DELETE and UPDATE), the method returns a JSON object with the following properties:
affectedRows: The number of rows affected by the operation
insertId: The auto-increment ID generated by the operation (for the first inserted row when multiple rows are inserted)
warningStatus: A flag indicating whether the query generated warnings
When executing a SELECT statement, the method returns the result-set as an array of JSON objects. Each object in the array represents a row from the result-set, with column names as property keys.
The result also includes a special non-enumerable meta property containing an array of column metadata.
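The following hand-built sketch shows how such a non-enumerable property behaves; the rows and the simplified metadata here are fabricated for illustration, not produced by the connector:

```javascript
// A plain array of row objects, with 'meta' attached as a hidden property —
// it is directly accessible but does not show up in enumeration.
const rows = [{ id: 1, name: 'alice' }, { id: 2, name: 'bob' }];
Object.defineProperty(rows, 'meta', {
  value: [{ name: 'id' }, { name: 'name' }], // simplified column metadata
  enumerable: false,
});

console.log(rows.meta.length);     // 2 — metadata is directly accessible
console.log(Object.keys(rows));    // [ '0', '1' ] — 'meta' is hidden
console.log(JSON.stringify(rows)); // serializes only the rows
```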
The following options can be set at either the query level or the connection level. When set at the connection level, they apply to all subsequent queries.
timeout
number, timeout in ms
Sets a timeout for query execution. Only available for MariaDB server >= 10.1.2.
The driver implements this using SET STATEMENT max_statement_time=<timeout> FOR <command>, which allows the server to cancel operations that exceed the specified timeout.
Important limitation: When using multiple statements (with the multipleStatements option enabled), only the first query will be subject to the timeout.
The implementation of max_statement_time is engine-dependent and may behave differently across storage engines. For example, with the Galera engine, commits ensure replication to other nodes is completed, which might exceed the timeout to maintain proper server state.
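As a sketch of the mechanism (the driver builds this wrapper itself; the helper below exists only to make the mechanism concrete):

```javascript
// Wrap a query so the server aborts it past the given timeout.
// max_statement_time is expressed in seconds; fractional values are allowed.
function withTimeout(sql, timeoutMs) {
  return `SET STATEMENT max_statement_time=${timeoutMs / 1000} FOR ${sql}`;
}

console.log(withTimeout('SELECT * FROM mysql.user LIMIT 1', 500));
// SET STATEMENT max_statement_time=0.5 FOR SELECT * FROM mysql.user LIMIT 1
```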
namedPlaceholders
boolean, default false
Enables the use of named placeholders instead of question mark placeholders. When enabled, the values parameter must be an object with keys matching the placeholder names in the query.
rowsAsArray
boolean, default false
Returns rows as arrays instead of objects, which can improve performance by 5-10% with local databases and reduces memory usage by avoiding the need to parse column metadata completely.
metaAsArray
boolean, default false
A compatibility option that causes the Promise to return an array [rows, metadata] instead of rows with a meta property. This option is primarily for mysql2 compatibility.
nestTables
boolean / string, default false
Helps resolve column name conflicts in joins by grouping data by table. When set to true, results are grouped by table name. When set to a string value, it's used as a separator between table name and column name.
With boolean value:
With string value:
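As a rough illustration of the regrouping that nestTables: true performs (a hand-rolled sketch, not the connector's internal code — the connector builds this from column metadata):

```javascript
// Regroup a flat row whose keys are 'table.column' into per-table objects.
function nestRow(flatRow, sep) {
  const nested = {};
  for (const [key, value] of Object.entries(flatRow)) {
    const [table, column] = key.split(sep);
    (nested[table] ??= {})[column] = value;
  }
  return nested;
}

console.log(nestRow({ 'user.id': 1, 'post.title': 'hi' }, '.'));
// { user: { id: 1 }, post: { title: 'hi' } }
```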
dateStrings
boolean, default: false
Whether you want the Connector to retrieve date values as strings, rather than Date objects.
bigIntAsNumber
boolean, default: true
Whether the query should return BIGINT values as JavaScript Number rather than an ES2020 BigInt. BigInt ensures the expected value even for values > 2^53 (beyond Number's safe integer range). This option can be set at the query level, supplanting the connection-level value.

This option exists for compatibility with driver versions < 3.
decimalAsNumber
boolean, default: false
Whether the query should return decimal as Number. If enabled, this might return approximate values.
typeCast
Experimental
function(column, next)
In the event that you need certain values returned as a different type, you can use this function to cast the value into that type yourself.
For instance, casting all TINYINT(1) values as boolean values:
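A sketch of such a typeCast function follows. The mocked column object at the end exists only so the example runs standalone; its properties (type, columnLength, int()) mirror the column description below:

```javascript
// Cast TINYINT(1) columns to booleans; defer everything else to the default.
const tinyIntToBoolean = (column, next) => {
  if (column.type === 'TINY' && column.columnLength === 1) {
    const val = column.int();
    return val === null ? null : val === 1;
  }
  return next();
};

// usage: connection.query({ sql: '...', typeCast: tinyIntToBoolean })

// Standalone check with a mocked column object:
const mockColumn = { type: 'TINY', columnLength: 1, int: () => 1 };
console.log(tinyIntToBoolean(mockColumn, () => 'fallback')); // true
```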
collation: Object indicating the column collation. It has the properties index, name, encoding, and maxlen. For instance: 33, "UTF8_GENERAL_CI", "utf8", 3.
columnLength: Shows the column's maximum length if there is a limit, and 0 if there is no limit (such as with a BLOB column).

When using typeCast, additional functions are available on the column object to decode the value:
string(): string: decodes the value as a string
buffer(): Buffer: decodes the value as a Buffer
float(): float: decodes the value as a float
connection.queryStream(sql[, values]) → Emitter
sql: string | JSON SQL string value or JSON object to supersede default connection options. JSON objects must have an "sql" property. For instance, { dateStrings: true, sql: 'SELECT now()' }

values: array | object Defines placeholder values. Usually an array, but in the case of a single placeholder, it can be given as is.

Returns an Emitter object that emits different types of events:
When using the query() method, the Connector returns the entire result-set with all its data in a single call. While this works well for small result sets, it can become problematic for queries returning millions of rows, potentially causing memory issues.
The queryStream() method solves this by using Node.js's event-driven architecture to process rows one by one, significantly reducing memory usage for large result sets.
Important: The stream handles backpressure automatically, pausing the socket when data handling takes time to prevent Node.js socket buffers from growing indefinitely. If you're using a pipeline and your data handling throws an error, you must explicitly call queryStream.close() to prevent connection hangs.
There are several ways to implement streaming:
Using for-await-of (Node.js 10+)
The simplest approach using modern JavaScript syntax:
Using event listeners
Traditional Node.js event-based approach:
Using Node.js streams
For advanced use cases, you can integrate with Node.js streams API:
connection.prepare(sql) → Promise
sql: string | JSON SQL string value or JSON object to supersede default connection options. JSON objects must have an "sql" property. For instance, { dateStrings: true, sql: 'SELECT now()' }

Returns a promise that:
resolves with a Prepare object.

This prepares a command that can then be executed many times. After use, the prepare.close() method MUST be called to properly close the object.
Public variables:
id: Prepare statement Identifier
query: sql command
database: database it applies to.
Public methods:
execute(values) → Promise
values: array | object Defines placeholder values. Usually an array, but in the case of a single placeholder, it can be given as is.

Returns a promise that:
resolves with a JSON object for update/insert/delete, or a result-set object for a SELECT,
rejects with an Error.
executeStream(values) → Promise
values: array | object Defines placeholder values. Usually an array, but in the case of a single placeholder, it can be given as is.

Returns an Emitter object that emits different types of events:
error: Emits an Error object when the query fails. (No "end" event will then be emitted.)
This is the equivalent of queryStream using execute.
When using the execute() method, documented above, the Connector returns the entire result-set with all its data in a single call. While this is fine for queries that return small result-sets, it can grow unmanageable with huge result-sets. Instead of retrieving all the data into memory, you can use the executeStream() method, which uses an event-driven architecture to process rows one by one, avoiding excessive memory use.

You may want to consider updating the server's net_write_timeout system variable. The result-set must be totally received before this timeout, which defaults to 30 seconds.
for-await-of
Simple use with for-await-of, available since Node.js 10 (note that this must be used within an async function):
Events
close() → void
This closes the prepared statement. Each time a Prepared object is used, it must be closed.
If the prepare cache is enabled (option prepareCacheLength > 0, the default), the driver will either really close the prepared statement or keep it in the cache.
connection.execute(sql[, values]) → Promise
sql: string | JSON SQL string or JSON object to supersede default connection options. When using a JSON object, it must have a "sql" key. For instance, { dateStrings: true, sql: 'SELECT now()' }

values: array | object Placeholder values. Usually an array, but in the case of a single placeholder, it can be given as is.

Returns a promise that resolves with the query result, or rejects with an Error.

This is quite similar to the query() method, with a few differences: execute will in fact run PREPARE + EXECUTE + CLOSE commands.

It makes sense to use this only if the command is used often and the prepare cache is enabled (the default). If a PREPARE result is already in the cache, only the EXECUTE command is run. MariaDB server 10.6 even avoids resending result-set metadata when it hasn't changed, permitting even faster results.
connection.batch(sql, values) → Promise
sql: string | JSON SQL string value or JSON object to supersede default connection options. JSON objects must have an "sql" property. For instance, { dateStrings: true, sql: 'SELECT now()' }

values: array Array of parameter sets (array of arrays, or array of objects if using named placeholders).

Returns a promise that resolves with the batch result, or rejects with an Error.

The implementation depends on the server type and version. For MariaDB server 10.2.7+, the implementation uses the dedicated bulk protocol.

For other servers, insert queries will be rewritten for optimization. Example: insert into ab (i) values (?) with a first batch value of 1 and a second of 2 will be rewritten as insert into ab (i) values (1), (2).

If a query cannot be rewritten, the driver will execute a query for each value.
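The rewrite can be sketched as follows (illustration only — the connector performs this internally, with proper escaping of each value):

```javascript
// Expand 'insert ... values (?)' into a single multi-row insert.
function rewriteBatch(sql, batches) {
  const idx = sql.toLowerCase().indexOf('values') + 'values'.length;
  const prefix = sql.slice(0, idx);       // 'insert into ab (i) values'
  const template = sql.slice(idx).trim(); // '(?)'
  const fill = (params) => {
    let i = 0;
    return template.replace(/\?/g, () => String(params[i++]));
  };
  return prefix + ' ' + batches.map(fill).join(', ');
}

console.log(rewriteBatch('insert into ab (i) values (?)', [[1], [2]]));
// insert into ab (i) values (1), (2)
```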
The fullResult option indicates whether to retrieve individual batch results (for example, to retrieve generated ids). Setting it might change batch performance depending on the server version (for server 11.5.1 and above, the bulk protocol is used; pipelining otherwise).
For instance,
fullResult option

By default, batch operations aggregate results, combining all individual operations into a single result. You can use the fullResult: true option to retrieve individual results for each parameter set.
When to use fullResult
The fullResult option is particularly useful when:
You need to know which specific parameter sets succeeded or failed
You need to access individual insertId values for each inserted row.
Performance considerations
For MariaDB servers that support it (version 10.2.7+), the connector will use the optimized COM_STMT_BULK_EXECUTE protocol for better performance when possible. The fullResult option with the bulk protocol requires server 11.5.1+.
connection.beginTransaction() → Promise

Returns a promise that:
resolves (no argument),
rejects with an Error.
Begins a new transaction.
connection.commit() → Promise

Returns a promise that:
resolves (no argument),
rejects with an Error.
Commits the current transaction if there is one active. The Connector tracks the current transaction state on the server. In the event that you issue the commit() method when there's no active transaction, it ignores the method and sends no commands to MariaDB.
connection.release() → Promise

When a connection comes from a pool, connection.release() is an async method returning an empty promise on success. This function will never throw an error. By default, if a transaction is still open, a rollback command is issued and the connection is released back to the pool.

Two options might interfere:
resetAfterUse: when set, the connection will be completely reset, like a fresh connection
noControlAfterUse: when set, no control (rollback or reset) will be done on release
connection.rollback() → Promise

Returns a promise that:
resolves (no argument),
rejects with an Error.
Rolls back the current transaction if there is one active. The Connector tracks the current transaction state on the server. In the event that you issue the rollback() method when there's no active transaction, it ignores the method and sends no commands to MariaDB.
connection.changeUser(options) → Promise

options: JSON, a subset of connection options: user, password, database, charset

Returns a promise that:
resolves without a result,
rejects with an Error.
Resets the connection and re-authorizes it using the given credentials. It is the equivalent of creating a new connection with a new user, reusing the open socket.
connection.ping() → Promise

Returns a promise that:
resolves (no argument),
rejects with an Error.
Sends a packet to the database containing one byte to check that the connection is still active.
connection.reset() → Promise

Returns a promise that:
resolves (no argument),
rejects with an Error.

Resets the connection. Reset will:
rollback any open transaction
reset transaction isolation level
reset session variables
delete user variables
This command is only available for MariaDB >= 10.2.4 or MySQL >= 5.7.3. The promise will be rejected with the error "Reset command not permitted for server XXX" if the server version doesn't permit reset.

For earlier MariaDB versions, the connection can be reset using connection.changeUser(), which does the same and additionally redoes the authentication phase.
connection.isValid() → boolean

Returns a boolean.
Indicates the connection state as the Connector knows it. If it returns false, there is an issue with the connection, such as the socket disconnected without the Connector knowing about it.
connection.end() → Promise

Returns a promise that:
resolves (no argument),
rejects with an Error.
Closes the connection gracefully, after waiting for any currently executing queries to finish.
connection.destroy()

Closes the connection without waiting for any currently executing queries. These queries are interrupted. MariaDB logs the event as an unexpected socket close.
connection.escape(value) → String

This function escapes a parameter properly according to its type, to avoid injection.

Escaping has some limitations:
it doesn't permit streaming parameters
it is less efficient than using standard conn.query() with placeholders, which streams data to the socket, avoiding string concatenation and unnecessary memory use
Escaping per type:
boolean: explicit true or false
number: string representation, e.g. 123 => '123'
Date: string representation using the YYYY-MM-DD HH:mm:ss.SSS format
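A minimal sketch of these per-type rules (illustration only — use connection.escape(), or better, placeholders, in real code):

```javascript
// Hand-rolled escaping sketch following the per-type rules listed above.
function sketchEscape(value) {
  if (typeof value === 'boolean') return value ? 'true' : 'false';
  if (typeof value === 'number') return String(value);
  if (value instanceof Date) {
    const pad = (n, w = 2) => String(n).padStart(w, '0');
    return `'${value.getFullYear()}-${pad(value.getMonth() + 1)}-${pad(value.getDate())} ` +
           `${pad(value.getHours())}:${pad(value.getMinutes())}:${pad(value.getSeconds())}.` +
           `${pad(value.getMilliseconds(), 3)}'`;
  }
  // strings additionally need quote/backslash escaping
  return `'${String(value).replace(/\\/g, '\\\\').replace(/'/g, "\\'")}'`;
}

console.log(sketchEscape(123));  // 123
console.log(sketchEscape(true)); // true
```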
Escaping is done assuming the NO_BACKSLASH_ESCAPES SQL mode (which disables \ escaping) is not set, the default. The escaping API is meant to prevent SQL injection; however, prefer placeholders and avoid building commands manually.
connection.escapeId(value) → String

This function escapes an identifier properly. The value will be enclosed in '`' characters if its content doesn't satisfy:
ASCII: [0-9a-zA-Z$_] (numerals 0-9, basic Latin letters both lowercase and uppercase, dollar sign, underscore)
Extended: U+0080 .. U+FFFF
Any '`' character in the value is escaped (doubled) if needed.
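The quoting rule can be sketched as follows (illustration only — prefer connection.escapeId()):

```javascript
// Quote an identifier with backticks unless every character is in the
// safe set; backticks inside the identifier are doubled.
function sketchEscapeId(identifier) {
  if (/^[0-9a-zA-Z$_\u0080-\uffff]+$/.test(identifier)) return identifier;
  return '`' + identifier.replace(/`/g, '``') + '`';
}

console.log(sketchEscapeId('my_table'));   // my_table
console.log(sketchEscapeId('my table'));   // `my table`
console.log(sketchEscapeId('weird`name')); // `weird``name`
```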
connection.pause()

Pauses data reads.

connection.resume()

Resumes data reads after a pause.
connection.serverVersion()

Returns a string.
Retrieves the version of the currently connected server. Throws an error when not connected to a server.
connection.importFile(options) → Promise
options: JSON:
file: file path (mandatory)
database: database, if different from the current connection database (optional)

Returns a promise that:
resolves without a result,
rejects with an Error.

Imports an SQL file. If a database is set, that database will be used, then reverted to the previous value after the import.
Error

When the Connector encounters an error, the Promise rejects with an Error object. In addition to the standard properties, this object has the following properties:
fatal: A boolean value indicating whether the connection remains valid.
errno: The error number corresponding to the MariaDB/MySQL error code.
sqlState: The SQL state code following the ANSI SQL standard.
When working with the MariaDB connector, implementing proper error handling is crucial for building robust applications. Here are some recommended practices:
1. Always use try/catch with async/await
2. Check for specific error codes
The connector provides detailed error information that you can use to handle specific error scenarios:
3. Distinguish between fatal and non-fatal errors
The fatal property indicates whether the connection is still usable:
Here's an example of what an error object might look like when logged:
When the trace option is enabled, errors include the original stack trace, which helps identify where in your code the query was executed.
For a complete list of error codes and their meanings, see the documentation.
Events

The Connection object inherits from the Node.js EventEmitter. It emits an error event when the connection closes unexpectedly.
A connection pool is a cache of database connections maintained so that connections can be reused when future requests to the database are required. Connection pools are used to enhance the performance of executing commands on a database.
Each time a connection is requested, if the pool contains an available connection, the pool will validate the connection by exchanging an empty MySQL packet with the server to ensure the connection is still valid, then provide the connection.
The pool reuses connections intensively to improve performance. This validation is only performed if a connection has been idle for a period (specified by the minDelayValidation option, which defaults to 500ms).
If no connection is available, the request will be queued until either:
A connection becomes available (through creation or release)
The connection timeout (acquireTimeout) is reached
When a connection is released back to the pool, any remaining transactions will be automatically rolled back to ensure a clean state for the next use.
pool.getConnection() → Promise
Returns a promise that:
resolves with a Connection object,
rejects with an Error.
Retrieves a connection from the pool. If the pool is at its connection limit, the promise will wait until a connection becomes available or the acquireTimeout is reached.
Example: Using a pooled connection with transactions
pool.query(sql[, values]) → Promise
sql: string | JSON SQL string or JSON object with query options
values: array | object Placeholder values

Returns a promise that:
resolves with the query results (same as connection.query())
Executes a query using a connection from the pool. The connection is automatically acquired and released, making this method ideal for simple queries.
Example: Simple query with error handling
Example: Using query options
pool.batch(sql, values) → Promise
sql: string | JSON SQL string or JSON object with query options
values: array Array of parameter sets (array of arrays, or array of objects for named placeholders)

Returns a promise that:
resolves with the batch operation results
Executes a batch operation using a connection from the pool. The pool automatically handles connection acquisition and release.
For MariaDB server version 10.2.7+, this implementation uses a dedicated bulk protocol for improved performance.
Example: Batch insert with generated IDs
pool.end() → Promise

Returns a promise that:
resolves when all connections are closed,
rejects with an Error if closing fails.
Gracefully closes all connections in the pool and ends the pool. This should be called when your application is shutting down to ensure all database resources are properly released.
Example: Application shutdown handler
pool.escape(value) → String

This is an alias for connection.escape(), used to escape parameters when building queries manually.
Example:
pool.escapeId(value) → String

This is an alias for connection.escapeId(), used to escape identifiers like table or column names.
Example:
pool.importFile(options) → Promise
options:
file: file path (mandatory)
database: database, if different from the current connection database (optional)

Returns a promise that:
resolves when the import completes,
rejects with an Error.
Imports an SQL file. If a database is specified, it will be used for the import and then reverted to the original database afterward.
Example: Import a database dump
The pool object inherits from the Node.js EventEmitter and emits the following events:
acquire: Emitted when a connection is acquired from the pool.
connection: Emitted when a new connection is created within the pool.
release: Emitted when a connection is released back to the pool.
error: Emitted when an error occurs in the pool, such as a failure to create a connection.
The pool provides several methods to monitor its state:
Right-size your connection pool:
Set connectionLimit based on your application's concurrency needs and database server capacity
Too few connections can create bottlenecks
Too many connections can overload the database server
A pool cluster manages multiple database connection pools and provides high availability and load balancing capabilities. It allows your application to:
Connect to multiple database servers (for primary/replica setups)
Automatically handle failover if a database server goes down
Distribute queries across multiple servers
Group servers by pattern for targeted operations
The cluster manages a collection of connection pools, each identified by a name. You can select pools using pattern matching and specify different load balancing strategies (selectors) to determine which pool to use for each connection request.
When a connection fails, the cluster can automatically retry with another pool matching the same pattern. If a pool fails consistently, it can be temporarily blacklisted or even removed from the cluster configuration.
createPoolCluster(options) → PoolCluster
options: JSON

Returns a PoolCluster object.
Creates a new pool cluster to manage multiple database connection pools.
Example: Creating a basic primary/replica setup
poolCluster.add(id, config)
id: string Node identifier. Example: 'primary', 'replica1'
config: JSON Pool configuration used to create the pool

Returns: void
Adds a new connection pool to the cluster with the specified identifier and configuration.
Example: Adding nodes with descriptive identifiers
poolCluster.remove(pattern)
pattern: string Regex pattern to select pools. Example: 'replica*'

Returns: void
Removes and ends all pools whose identifiers match the specified pattern.
Example: Removing nodes from the cluster
poolCluster.getConnection([pattern], [selector]) → Promise
pattern: string Regex pattern to select pools. Default: '*' (all pools)
selector: string Selection strategy: 'RR' (round-robin), 'RANDOM', or 'ORDER'. Default: value of the defaultSelector option

Returns a promise that:
resolves with a Connection object,
rejects with an Error.
Gets a connection from a pool in the cluster that matches the pattern using the specified selection strategy.
Example: Using different selectors for different connection patterns
Example: Handling failover gracefully
poolCluster.of(pattern, [selector]) → FilteredPoolCluster
pattern: string Regex pattern to select pools. Example: 'replica*'
selector: string Selection strategy: 'RR' (round-robin), 'RANDOM', or 'ORDER'

Returns a FilteredPoolCluster object.
Creates a new filtered pool cluster that only includes pools matching the specified pattern. This allows you to create specialized interfaces for different database roles.
Example: Creating dedicated interfaces for read and write operations
poolCluster.end() → Promise

Returns a promise that:
resolves when all pools in the cluster are closed,
rejects with an Error if closing fails.
Gracefully closes all connection pools in the cluster.
Example: Application shutdown with clustered connections
A filtered pool cluster is a subset of the main cluster that only includes pools matching a specific pattern. It provides a simplified interface for working with logically grouped database nodes.
filteredPoolCluster.getConnection() → Promise

Returns a promise that:
resolves with a Connection object,
rejects with an Error.
Gets a connection from one of the pools in the filtered cluster using the selector specified when the filtered cluster was created.
Example:
filteredPoolCluster.query(sql[, values]) → Promise
sql: string | JSON SQL string or JSON object with query options
values: array | object Placeholder values
Returns a promise that:
resolves with query results
rejects with an Error
Shorthand method to get a connection from the filtered cluster, execute a query, and release the connection.
Example:
The pool cluster inherits from the Node.js EventEmitter and emits the following events:
remove: Emitted when a node is removed from the cluster configuration. This happens when a node fails to connect more than removeNodeErrorCount times (if this option is defined).
The pool cluster supports three different selection strategies for choosing which database node to use:
Round-Robin ('RR'): Uses pools in rotation, ensuring an even distribution of connections.
Random ('RANDOM'): Selects a random pool for each connection request.
Order ('ORDER'): Always tries pools in sequence, using the first available one. Useful for primary/fallback setups.
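The three strategies can be modeled over a plain array of healthy node identifiers. The following is a simplified sketch of the selection logic only, not the driver's implementation (the real cluster also skips nodes that are temporarily blacklisted after failures):

```javascript
// Simplified model of the three selection strategies over healthy node ids.
function makeSelector(strategy) {
  let rrIndex = 0;
  return (nodeIds) => {
    switch (strategy) {
      case 'RR': // rotate through nodes for even distribution
        return nodeIds[rrIndex++ % nodeIds.length];
      case 'RANDOM': // pick any node with equal probability
        return nodeIds[Math.floor(Math.random() * nodeIds.length)];
      case 'ORDER': // always prefer the first node (e.g. primary)
        return nodeIds[0];
      default:
        throw new Error(`Unknown selector: ${strategy}`);
    }
  };
}

const nodes = ['replica1', 'replica2', 'replica3'];
const rr = makeSelector('RR');
console.log(rr(nodes), rr(nodes), rr(nodes), rr(nodes));
// replica1 replica2 replica3 replica1
```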
Use meaningful node identifiers:
Choose clear identifiers that indicate the node's role (e.g., 'primary', 'replica1')
This makes pattern matching more intuitive and maintenance easier
Implement role-based access with patterns:
checkNumberRange
When used in conjunction with decimalAsNumber, insertIdAsNumber or bigIntAsNumber: if the conversion to Number is not exact, the connector throws an error (since 3.0.1).
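The check boils down to: does the value survive an exact round-trip through Number? A sketch of the idea with a hypothetical helper (not the connector's internal function):

```javascript
// Sketch of the checkNumberRange idea: convert a BigInt to Number only
// when the conversion is exact, otherwise throw instead of silently
// returning an approximate value.
function bigIntToNumberChecked(value) {
  const n = Number(value);
  if (BigInt(n) !== value) {
    throw new RangeError(`value ${value} cannot be represented exactly as a Number`);
  }
  return n;
}

console.log(bigIntToNumberChecked(9007199254740991n)); // 9007199254740991 (2^53 - 1, exact)
// bigIntToNumberChecked(9007199254740993n) would throw: it rounds to 9007199254740992
```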
function
version → String: Return library version.
defaultOptions(options) → Json: List options with default values
connection.batch(sql, values) → Promise: Fast batch processing.
connection.beginTransaction() → Promise: Begins a transaction.
connection.commit() → Promise: Commits the current transaction, if any.
connection.release() → Promise: Release connection to pool if connection comes from pool.
connection.rollback() → Promise: Rolls back the current transaction, if any.
connection.changeUser(options) → Promise: Changes the current connection user.
connection.ping() → Promise: Sends a 1 byte packet to the database to validate the connection.
connection.reset() → Promise: Reset current connection state.
connection.isValid() → boolean: Checks that the connection is active without checking socket state.
connection.end() → Promise: Gracefully close the connection.
connection.destroy(): Forces the connection to close.
connection.escape(value) → String: Escape parameter
connection.escapeId(value) → String: Escape identifier
connection.pause(): Pauses the socket output.
connection.resume(): Resumes the socket output.
connection.serverVersion(): Retrieves the current server version.
connection.importFile(options) → Promise: Imports an SQL file.
events: Subscribes to connection error events.
pool.escape(value) → String: Escape parameter
pool.escapeId(value) → String: Escape identifier
pool.importFile(options) → Promise: Imports an SQL file.
pool.activeConnections() → Number: Gets current active connection number.
pool.totalConnections() → Number: Gets current total connection number.
pool.idleConnections() → Number: Gets current idle connection number.
pool.taskQueueSize() → Number: Gets the current number of queued connection requests.
pool events: Subscribes to pool events.
poolCluster.of(pattern, selector) → FilteredPoolCluster: Return a subset of cluster.
poolCluster events: Subscribes to pool cluster events.
port
Database server port number. Not used when using the option socketPath
integer
3306
ssl
Enables TLS support. For more information, see the documentation.
mixed
database
Default database to use when establishing the connection.
string
socketPath
Permits connections to the database through the Unix domain socket or named pipe.
string
compress
Compresses the exchange with the database through gzip. This permits better performance when the database is not in the same location.
boolean
false
connectTimeout
Sets the connection timeout in milliseconds.
integer
1000
socketTimeout
Sets the socket timeout in milliseconds after connection succeeds. A value of 0 disables the timeout.
integer
0
queryTimeout
Sets the maximum query time in ms (an error will be thrown if the limit is reached). 0 or undefined means no timeout. This can be superseded for a single query using the timeout option.
integer
0
rowsAsArray
Returns result-sets as arrays, rather than JSON. This is a faster way to get results. For more information, see Query.
boolean
false
logger
Configure logger. For more information, see the documentation.
mixed
initializationTimeout
The pool retries creating connections in a loop, emitting an 'error' event when this timeout is reached. In milliseconds.
integer
acquireTimeout value
minimumIdle
Permits setting a minimum number of connections in the pool. The recommendation is to use a fixed-size pool, i.e. not to set this value.
integer
set to connectionLimit value
minDelayValidation
When a connection is requested from the pool, the pool validates the connection state. "minDelayValidation" permits skipping this validation if the connection was used recently, avoiding useless checks when connections are reused frequently. In milliseconds. 0 means validation is done every time a connection is requested.
integer
500
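The validation-skip logic amounts to a simple timestamp comparison. The following sketch is illustrative only; the lastUsedAt bookkeeping is an assumption about how such a pool could track usage, not the driver's actual internals:

```javascript
// Illustrative sketch: skip connection validation when the connection
// was used recently enough, mirroring the minDelayValidation trade-off.
function needsValidation(lastUsedAt, now, minDelayValidation) {
  if (minDelayValidation === 0) return true; // 0 => always validate
  return now - lastUsedAt >= minDelayValidation;
}

console.log(needsValidation(1000, 1200, 500)); // false: used 200ms ago, skip the ping
console.log(needsValidation(1000, 1600, 500)); // true: idle for 600ms, validate
```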
noControlAfterUse
After a connection is given back to the pool (connection.end()), the connector resets or rolls back the connection to ensure a valid state. This option permits disabling those controls.
boolean
false
resetAfterUse
When a connection is given back to the pool, resets the connection if the server allows it (only for MariaDB >= 10.2.22 / 10.3.13). If disabled, or if the server version doesn't allow reset, the pool only rolls back any open transaction.
boolean
true before version 3, false since
leakDetectionTimeout
Permits setting a timeout for logging connections borrowed from the pool. When a connection is borrowed and this timeout is reached, a message is logged to the console indicating a possible connection leak. Another message is logged if the suspected leak is later released. In milliseconds. 0 means leak detection is disabled.
integer
0
pingTimeout
Validation timeout (ping) used when checking a pool connection that has not been used recently. In milliseconds.
integer
500
defaultSelector
Default pool selector. Can be 'RR' (round-robin), 'RANDOM' or 'ORDER' (use in sequence: always use the first pool unless it fails).
string
'RR'
resolves with a JSON object for update/insert/delete or a result-set object for a result-set.
rejects with an Error.
type: Shows the column type as a String value. For more information on the relevant values, see field-type.js
columnType: Shows the column type as an integer value. For more information on the relevant values, see field-type.js
scale: Provides the decimal part length.
flags: Shows the byte-encoded flags. For more information, see field-detail.js.
db(): Name of the database schema. You can also retrieve this using schema().
table(): Table alias.
orgTable(): Real table name.
name(): Column alias.
orgName(): Real column name.
tiny(): int : decode value
short(): int : decode value
int(): int : decode value
long(): bigint : decode value
decimal(): string : decode value
date(): date : decode value
datetime(): date : decode value
geometry(): geojson : decode value
error: Emits an Error object when the query fails. (No "end" event will then be emitted).
fields: Emits when column metadata from the result-set are received (the parameter is an array of Metadata fields).
data: Emits each time a row is received (parameter is a row).
end: Emits when the query ends (no parameter). The stream also exposes a close() method, permitting closing the stream (since 3.0).
parameters: parameter array information.
columns: column array information.
end: Emits when the query ends (no parameter). The stream also exposes a close() method, permitting closing the stream (since 3.0).
resolves with a JSON object for update/insert/delete or a result-set object for a result-set.
rejects with an Error.
resolves with a JSON object.
rejects with an Error.
removes all PREPARE statements
object with a toSqlString function: an escaped String of the toSqlString() result
Array: list of escaped values. ex: [true, "o'o"] => ('true', 'o\'o')
geoJson: MariaDB transformation to corresponding geotype. ex: { type: 'Point', coordinates: [20, 10] } => "ST_PointFromText('POINT(20 10)')"
JSON: Stringification of JSON, or if permitSetMultiParamEntries is enabled, key escaped as identifier + value
String: escaped value (\u0000, ', ", \b, \n, \r, \t, \u001A, and \ characters are escaped with '\')
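As a rough sketch of the string rule above (illustrative only; prefer the driver's own escape() or, better, placeholders, since the real implementation also accounts for charset and SQL modes):

```javascript
// Illustrative sketch of the string-escaping rule: special characters
// are replaced by backslash sequences, then the value is quoted.
const ESCAPE_MAP = {
  '\u0000': '\\0', "'": "\\'", '"': '\\"', '\b': '\\b',
  '\n': '\\n', '\r': '\\r', '\t': '\\t', '\u001a': '\\Z', '\\': '\\\\'
};

function escapeString(value) {
  const escaped = value.replace(/[\0'"\b\n\r\t\u001a\\]/g, (c) => ESCAPE_MAP[c]);
  return `'${escaped}'`;
}

console.log(escapeString("o'o")); // 'o\'o'
```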
code: The error code as a string identifier.
rejects with an Error
Start with a connection limit of 10–20 and adjust based on performance testing
Handle connection leaks:
Always release connections:
Use connection validation wisely:
Prefer pool.query() for simple operations:
For single queries, use pool.query() instead of manually acquiring and releasing connections
Only use getConnection() when you need to maintain context across multiple queries
Implement proper error handling:
Listen for 'error' events on the pool
Implement reconnection strategies for fatal errors
Consider using a circuit breaker pattern for persistent database issues
Close the pool during application shutdown:
Always call pool.end() when your application terminates
Use process signal handlers (SIGINT, SIGTERM) to ensure proper cleanup
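A circuit breaker for persistent database issues can be as small as a failure counter plus a cooldown. The createCircuitBreaker helper below is a hypothetical sketch, not part of the driver:

```javascript
// Minimal circuit breaker: after `threshold` consecutive failures the
// circuit opens and calls fail fast until `cooldownMs` has elapsed.
function createCircuitBreaker(fn, { threshold = 5, cooldownMs = 30000, now = Date.now } = {}) {
  let failures = 0;
  let openedAt = null;
  return async (...args) => {
    if (openedAt !== null && now() - openedAt < cooldownMs) {
      throw new Error('circuit open: database considered unavailable');
    }
    try {
      const result = await fn(...args);
      failures = 0;        // any success closes the circuit
      openedAt = null;
      return result;
    } catch (err) {
      failures += 1;
      if (failures >= threshold) openedAt = now(); // trip the breaker
      throw err;
    }
  };
}
```

It could wrap pool queries, e.g. `const safeQuery = createCircuitBreaker((sql) => pool.query(sql), { threshold: 3 });`.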
Use appropriate selectors for different scenarios:
'ORDER' for high availability with failover (tries primary first, then fallbacks)
'RR' for load balancing across equivalent nodes (like replicas)
'RANDOM' when pure distribution is needed
Configure node removal thresholds appropriately:
Monitor removed nodes:
Implement graceful degradation:
Design your application to function with reduced capabilities when some nodes are unavailable.
Use fallback strategies when specific node patterns become unavailable
Always close the cluster during application shutdown:
Call cluster.end() to properly release all resources
Use process signal handlers to ensure cleanup.
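A fallback strategy over node patterns reduces to "try each source in order". The firstAvailable helper below is an illustrative sketch; the async suppliers would typically wrap cluster.getConnection calls with different patterns:

```javascript
// Try each async supplier in order until one succeeds; if all fail,
// rethrow the last error. Useful for "primary first, then replicas".
async function firstAvailable(suppliers) {
  let lastError;
  for (const supplier of suppliers) {
    try {
      return await supplier();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```

For example: `firstAvailable([() => cluster.getConnection('primary'), () => cluster.getConnection('replica*')])`.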
insertIdAsNumber
Whether the query should return the last insert id of an INSERT/UPDATE command as a Number rather than a BigInt. The default returns a BigInt.
boolean
false
decimalAsNumber
Whether the query should return decimal as Number. If enabled, this might return approximate values.
boolean
false
bigIntAsNumber
Whether the query should return the BIGINT data type as Number. If enabled, this might return approximate values.
boolean
false
user
User to access database.
string
password
User password.
string
host
IP address or DNS name of the database server. Not used when using the socketPath option.
string
"localhost"
acquireTimeout
Timeout for getting a new connection from the pool. To retain connection error information, it must be higher than connectTimeout. In milliseconds.
integer
10000
connectionLimit
Maximum number of connections in the pool.
integer
10
idleTimeout
Indicates the idle time after which a pool connection is released. The value must be lower than the server @@wait_timeout. In seconds. 0 means never release.
integer
1800
canRetry
When getting a connection from a pool fails, whether the cluster can retry with other pools.
boolean
true
removeNodeErrorCount
Maximum number of consecutive connection failures from a pool before the pool is removed from the cluster configuration. Infinity means the node won't be removed. Defaults to Infinity since 3.0 (was 5 before).
integer
Infinity
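The consecutive-failure counting behind this option can be modeled as follows (an illustrative sketch, not the driver's internals): any successful connection resets the counter, and the node is only removed once the limit is reached.

```javascript
// Sketch of consecutive-failure tracking behind removeNodeErrorCount:
// a success resets the counter; reaching the limit removes the node.
function createNodeHealth(removeNodeErrorCount = Infinity) {
  let consecutiveFailures = 0;
  return {
    onSuccess() { consecutiveFailures = 0; },
    onFailure() { consecutiveFailures += 1; },
    shouldRemove() { return consecutiveFailures >= removeNodeErrorCount; }
  };
}

const health = createNodeHealth(Infinity); // 3.x default: never remove
health.onFailure();
console.log(health.shouldRemove()); // false, no matter how many failures
```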
restoreNodeTimeout
Delay before a pool can be reused after a connection fails. 0 = can be reused immediately. In milliseconds.
integer
1000
const pool = mariadb.createPool({
// ...connection options
connectionLimit: 10,
leakDetectionTimeout: 30000 // Log potential leaks after 30 seconds
});

let conn;
try {
conn = await pool.getConnection();
// Use connection...
} catch (err) {
// Handle error...
} finally {
if (conn) conn.release(); // Always release in finally block
}

const pool = mariadb.createPool({
// ...connection options
minDelayValidation: 500, // Only validate connections unused for 500ms
pingTimeout: 1000 // Timeout for ping validation
});

const cluster = mariadb.createPoolCluster({
removeNodeErrorCount: 5, // Remove after 5 consecutive failures
restoreNodeTimeout: 10000, // Wait 10 seconds before retrying failed nodes
canRetry: true // Enable retry on different nodes
});

// Track cluster health
let clusterHealth = {
removedNodes: [],
lastIncident: null
};
cluster.on('remove', (nodeId) => {
clusterHealth.removedNodes.push(nodeId);
clusterHealth.lastIncident = new Date();
// Alert operations team
alertOps(`Database node ${nodeId} removed from cluster at ${clusterHealth.lastIncident}`);
});

$ npm install mariadb

const mariadb = require('mariadb');
async function asyncFunction() {
const conn = await mariadb.createConnection({
host: 'mydb.com',
user: 'myUser',
password: 'myPwd'
});
try {
const res = await conn.query('select 1');
console.log(res); // [{ "1": 1 }]
return res;
} finally {
conn.end();
}
}
asyncFunction();

$ npm install mariadb

const mariadb = require('mariadb');

const res = await conn.query('call ??(?)', [myProc, 'myVal']);

const res = await conn.query(`call ${conn.escapeId(myProc)}(?)`, ['myVal']);

const pool = mariadb.createPool({
host: 'mydb.com',
user: 'myUser',
connectionLimit: 5,
trace: true
});
await pool.query('wrong query');
/* will throw an error like :
sql: wrong query - parameters:[]
at Object.module.exports.createError (C:\temp\mariadb-connector-nodejs2\lib\misc\errors.js:57:10)
at ...
From event:
at Function._PARAM (C:\temp\mariadb-connector-nodejs2\lib\connection-promise.js:104:30)
at PoolPromise.query (C:\temp\mariadb-connector-nodejs2\lib\pool-promise.js:102:40)
at Context.<anonymous> (C:\temp\mariadb-connector-nodejs2\test\integration\test-pool.js:60:18)
at callFn (C:\temp\mariadb-connector-nodejs2\node_modules\mocha\lib\runnable.js:366:21)
at Test.Runnable.run (C:\temp\mariadb-connector-nodejs2\node_modules\mocha\lib\runnable.js:354:5)
at Runner.runTest (C:\temp\mariadb-connector-nodejs2\node_modules\mocha\lib\runner.js:678:10)
at C:\temp\mariadb-connector-nodejs2\node_modules\mocha\lib\runner.js:801:12
at next (C:\temp\mariadb-connector-nodejs2\node_modules\mocha\lib\runner.js:593:14)
at C:\temp\mariadb-connector-nodejs2\node_modules\mocha\lib\runner.js:603:7
at next (C:\temp\mariadb-connector-nodejs2\node_modules\mocha\lib\runner.js:486:14)
at Immediate.<anonymous> (C:\temp\mariadb-connector-nodejs2\node_modules\mocha\lib\runner.js:571:5)
at processImmediate (internal/timers.js:464:21) {
text: "You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'wrong query' at line 1",
sql: 'wrong query - parameters:[]',
fatal: false,
errno: 1064,
sqlState: '42000',
code: 'ER_PARSE_ERROR'
}
*/

const conn = await mariadb.createConnection({
host: process.env.DB_HOST,
user: process.env.DB_USER,
password: process.env.DB_PWD,
timezone: 'America/Los_Angeles',
skipSetTimezone: true
});

const conn = await mariadb.createConnection({
host: process.env.DB_HOST,
user: process.env.DB_USER,
password: process.env.DB_PWD
});

$ DB_HOST=localhost DB_USER=test DB_PWD=secretPasswrd node my-app.js

$ npm install dotenv

require('dotenv').config();
const conn = await mariadb.createConnection({
host: process.env.DB_HOST,
user: process.env.DB_USER,
password: process.env.DB_PWD
});

DB_HOST=localhost
DB_USER=test
DB_PWD=secretPasswrd

const conn = await mariadb.createConnection({
host: process.env.DB_HOST,
user: process.env.DB_USER,
password: process.env.DB_PWD
});

try {
const conn = await mariadb.createConnection({
host: 'mydb.com',
user: 'myUser',
password: 'myPwd'
});
console.log("connected! connection id is " + conn.threadId);
} catch (err) {
console.log("not connected due to error: " + err);
}

SHOW VARIABLES LIKE 'socket';

const conn = await mariadb.createConnection({
socketPath: '/tmp/mysql.sock',
user: 'root'
});

const conn = await mariadb.createConnection({
socketPath: '\\\\.\\pipe\\MySQL',
user: 'root'
});

const pool = mariadb.createPool({
host: 'mydb.com',
user: 'myUser',
connectionLimit: 5
});
let conn;
try {
conn = await pool.getConnection();
console.log('connected! connection id is ' + conn.threadId);
conn.release(); //release to pool
} catch (err) {
console.log('not connected due to error: ' + err);
}

const cluster = mariadb.createPoolCluster();
cluster.add('master', { host: 'mydb1.com', user: 'myUser', connectionLimit: 5 });
cluster.add('slave1', { host: 'mydb2.com', user: 'myUser', connectionLimit: 5 });
cluster.add('slave2', { host: 'mydb3.com', user: 'myUser', connectionLimit: 5 });
//getting a connection from slave1 or slave2 using round-robin
const conn = await cluster.getConnection(/slave*/, "RR");
try {
const rows = await conn.query("SELECT 1");
return rows[0]["1"];
} finally {
conn.end();
}

try {
await mariadb.importFile({ host: 'localhost', user: 'root', file: '/tmp/tools/data-dump.sql'});
} catch (e) {
// ...
}

console.log(mariadb.defaultOptions({ timezone: '+00:00' }));
/*
{
host: 'localhost',
port: 3306,
user: 'root',
password: undefined,
database: undefined,
collation: Collation { index: 224, name: 'UTF8MB4_UNICODE_CI', charset: 'utf8' },
timezone: '+00:00',
...
}
*/

const rows = await conn.query('SELECT NOW()');
console.log(rows); //[ { 'NOW()': 2018-07-02T17:06:38.000Z } ]

const rows = await conn.query({
dateStrings: true,
sql: 'SELECT NOW()'
});
console.log(rows); //[ { 'NOW()': '2018-07-02 19:06:38' } ]

const res = await connection.query("INSERT INTO someTable VALUES (?, ?, ?)", [
1,
Buffer.from("c327a97374", "hex"),
"mariadb",
]);
//will send INSERT INTO someTable VALUES (1, _BINARY '.\'.st', 'mariadb')

const https = require('https');
//3Mb page
https.get(
'https://node.green/#ES2018-features-Promise-prototype-finally-basic-support',
readableStream => conn.query('INSERT INTO StreamingContent (b) VALUE (?)', [readableStream])
);

await connection.query('CREATE TABLE animals (' +
'id MEDIUMINT NOT NULL AUTO_INCREMENT,' +
'name VARCHAR(30) NOT NULL,' +
'PRIMARY KEY (id))');
const res = await connection.query('INSERT INTO animals(name) value (?)', ['sea lions']);
//res : { affectedRows: 1, insertId: 1, warningStatus: 0 }

const res = await connection.query('select * from animals');
// res : [
// { id: 1, name: 'sea lions' },
// { id: 2, name: 'bird' },
// ]
const meta = res.meta;
// meta: [ ... ]

try {
// Query that would normally take more than 100ms
await connection.query({
sql: 'SELECT * FROM information_schema.tables, information_schema.tables as t2',
timeout: 100
});
} catch (err) {
// Error will be:
// SqlError: (conn:2987, no: 1969, SQLState: 70100) Query execution was interrupted (max_statement_time exceeded)
// ...
}

await connection.query(
{ namedPlaceholders: true, sql: 'INSERT INTO someTable VALUES (:id, :img, :db)' },
{ id: 1, img: Buffer.from('c327a97374', 'hex'), db: 'mariadb' }
);

const res = await connection.query({ rowsAsArray: true, sql: 'select * from animals' });
// res = [
// [ 1, 'sea lions' ],
// [ 2, 'bird' ],
// ]
const meta = res.meta;
// meta: [...]

const [rows, meta] = await connection.query({ metaAsArray: true, sql: 'select * from animals' });
// rows = [
// {'id': 1, 'name': 'sea lions' },
// {'id': 2, 'name': 'bird' },
// ]
// meta = [...]

const res = await connection.query({
nestTables: true,
sql: 'select a.name, a.id, b.name from animals a, animals b where b.id=1'
});
// res = [
// {
// a: { name: 'sea lions', id: 1 },
// b: { name: 'sea lions' }
// },
// {
// a: { name: 'bird', id: 2 },
// b: { name: 'sea lions' }
// }
//]

const res = await connection.query({
nestTables: '_',
sql: 'select a.name, a.id, b.name from animals a, animals b where b.id=1'
});
// res = [
// { a_name: 'sea lions', a_id: 1, b_name: 'sea lions' },
// { a_name: 'bird', a_id: 2, b_name: 'sea lions' }
//]

await shareConn.query('CREATE TEMPORARY TABLE bigIntTable(id BIGINT)');
await shareConn.query("INSERT INTO bigIntTable value ('9007199254740993')");
const res = await shareConn.query('select * from bigIntTable');
// res : [{ id: 9007199254740993n }] (exact value)
const res2 = await shareConn.query({sql: 'select * from bigIntTable', supportBigInt: false});
// res2 : [{ id: 9007199254740992 }] (not exact value)

const tinyToBoolean = (column, next) => {
if (column.type == 'TINY' && column.columnLength === 1) {
const val = column.tiny();
return val === null ? null : val === 1;
}
return next();
};
connection.query({ typeCast: tinyToBoolean, sql: '...' });

const rows = await connection.query("SELECT 1, 'a'");
// rows = [
// { '1': 1, a: 'a' }
// ]
const meta = rows.meta;
// meta: [
// {
// collation: [Object],
// columnLength: 1,
// columnType: 8,
// scale: 0,
// type: 'LONGLONG',
// flags: 129,
// db: [Function],
// schema: [Function],
// table: [Function],
// orgTable: [Function],
// name: [Function],
// orgName: [Function]
// },
// {
// collation: [Object],
// columnLength: 4,
// columnType: 253,
// scale: 39,
// type: 'VAR_STRING',
// flags: 1,
// db: [Function],
// schema: [Function],
// table: [Function],
// orgTable: [Function],
// name: [Function],
// orgName: [Function]
// }
// ]
async function streamingFunction() {
const queryStream = connection.queryStream('SELECT * FROM mysql.user');
try {
for await (const row of queryStream) {
console.log(row);
}
} catch (e) {
queryStream.close();
throw e;
}
}

connection.queryStream('SELECT * FROM mysql.user')
.on("error", err => {
console.log(err); // handle error
})
.on("fields", meta => {
console.log(meta); // metadata array
})
.on("data", row => {
console.log(row); // process each row
})
.on("end", () => {
console.log("Query completed"); // all rows received
});

const stream = require('stream');
const fs = require('fs');
// Create a transform stream to convert rows to JSON strings
const transformStream = new stream.Transform({
objectMode: true, // Important! queryStream produces objects
transform: function transformer(row, encoding, callback) {
callback(null, JSON.stringify(row) + '\n');
}
});
// Create output file stream
const fileStream = fs.createWriteStream('./query-results.jsonl');
// Start the query stream
const queryStream = connection.queryStream('SELECT * FROM mysql.user');
// Using pipeline (Node.js 10+) to handle errors and cleanup
stream.pipeline(
queryStream,
transformStream,
fileStream,
(err) => {
if (err) {
console.error('Pipeline failed:', err);
} else {
console.log('Pipeline succeeded');
}
queryStream.close(); // Always close the query stream
}
);

async function streamingFunction() {
const prepare = await shareConn.prepare('SELECT * FROM mysql.user where host = ?');
const stream = prepare.executeStream(['localhost']);
try {
for await (const row of stream) {
console.log(row);
}
} catch (e) {
stream.close();
}
prepare.close();
}

const prepare = await shareConn.prepare('SELECT * FROM mysql.user where host = ?');
prepare.executeStream(['localhost'])
.on("error", err => {
console.log(err); //if error
})
.on("fields", meta => {
console.log(meta); // [ ...]
})
.on("data", row => {
console.log(row);
})
.on("end", () => {
//ended
prepare.close();
});

const prepare = await conn.prepare('INSERT INTO mytable(id,val) VALUES (?,?)');
await prepare.execute([1, 'val1'])
prepare.close();

const res = await conn.execute('SELECT * FROM mytable WHERE someVal = ? and otherVal = ?', [1, 'val1']);

connection.query(
'CREATE TEMPORARY TABLE batchExample(id int, id2 int, id3 int, t varchar(128), id4 int)'
);
const res = await connection.batch('INSERT INTO `batchExample` values (1, ?, 2, ?, 3)', [
[1, 'john'],
[2, 'jack']
]);
console.log(res.affectedRows); // 2

// Get individual results for each insert operation
let results = await connection.batch(
  { sql: 'INSERT INTO users(name, age) VALUES (?, ?)', fullResult: true },
  [['John', 25], ['Jane', 26], ['Bob', 32]]);
// results is an array of individual OkPacket objects
results.forEach((res, i) => {
  console.log(`Result ${i+1}:`, res);
});
// Output:
// Result 1: OkPacket { affectedRows: 1, insertId: 1, warningStatus: 0 }
// Result 2: OkPacket { affectedRows: 1, insertId: 2, warningStatus: 0 }
// Result 3: OkPacket { affectedRows: 1, insertId: 3, warningStatus: 0 }
// Get an aggregate result for all insert operations
let results2 = await connection.batch(
  { sql: 'INSERT INTO users(name, age) VALUES (?, ?)', fullResult: false },
  [['Boby', 24], ['Rico', 20], ['Johnny', 321]]);
// results2 is a single aggregated OkPacket object
console.log('Result:', results2);
// Output:
// Result: OkPacket { affectedRows: 3, insertId: 1, warningStatus: 0 }

const conn = await pool.getConnection();
try {
await conn.beginTransaction();
await conn.query("INSERT INTO testTransaction values ('test')");
await conn.query("INSERT INTO testTransaction values ('test2')");
await conn.commit();
} finally {
conn.release();
}

try {
await conn.beginTransaction();
await conn.query("INSERT INTO testTransaction values ('test')");
await conn.query("INSERT INTO testTransaction values ('test2')");
await conn.commit();
} catch(err) {
await conn.rollback();
}

try {
await conn.changeUser({
user: 'changeUser',
password: 'mypassword'
});
//connection user is now changed.
} catch (e) {
// ...
}

conn.ping()
.then(() => {
//connection is valid
})
.catch(err => {
//connection is closed
})

conn.end()
.then(() => {
//connection has ended properly
})
.catch(err => {
//connection was closed but not due of current end command
})

try {
// long query > 20s
conn.query(
'select * from information_schema.columns as c1, information_schema.tables, information_schema.tables as t2'
);
conn.destroy(); //will immediately close the connection, before previous command end (no `await` in previous command)
} catch (err) {
//Error: Connection destroyed, command was killed
// ...
// fatal: true,
// errno: 45004,
// sqlState: '08S01',
// code: 'ER_CMD_NOT_EXECUTED_DESTROYED'
}

const myColVar = "let'go";
const myTable = 'table:a'
const cmd = 'SELECT * FROM ' + conn.escapeId(myTable) + ' where myCol = ' + conn.escape(myColVar);
//or using template literals
const cmd2 = `SELECT * FROM ${conn.escapeId(myTable)} where myCol = ${conn.escape(myColVar)}`;
// cmd = cmd2 = "SELECT * FROM `table:a` where myCol = 'let\'go'"

const myColVar = "let'go";
const myTable = "table:a"
const cmd = 'SELECT * FROM ' + conn.escapeId(myTable) + ' where myCol = ' + conn.escape(myColVar);
// cmd value will be:
// "SELECT * FROM `table:a` where myCol = 'let\'go'"
// using template literals:
const res = await con.query(`SELECT * FROM ${con.escapeId(myTable)} where myCol = ?`, [myColVar]);

console.log(connection.serverVersion()); //10.2.14-MariaDB

try {
await conn.importFile({
file: '/tmp/someFile.sql',
database: 'myDb'
});
} catch (e) {
// ...
}

async function executeQuery(userId) {
let connection;
try {
connection = await mariadb.createConnection({
host: 'localhost',
user: 'root',
password: 'password'
});
return await connection.query('SELECT * FROM users WHERE id = ?', [userId]);
} catch (err) {
// Log the error with all available information
console.error('Database error:', {
message: err.message,
code: err.code,
sqlState: err.sqlState,
query: err.sql,
fatal: err.fatal
});
// Rethrow or handle appropriately based on error type
if (err.code === 'ER_ACCESS_DENIED_ERROR') {
throw new Error('Database authentication failed');
} else if (err.code === 'ER_BAD_DB_ERROR') {
throw new Error('Database does not exist');
} else {
throw new Error('An unexpected database error occurred');
}
} finally {
// Always close the connection to avoid leaks
if (connection) await connection.end();
}
}

try {
await connection.query('INSERT INTO users (email) VALUES (?)', [email]);
} catch (err) {
if (err.code === 'ER_DUP_ENTRY') {
// Handle duplicate email error
return { success: false, message: 'Email already registered' };
}
// Handle other errors
throw err;
}

try {
await connection.query('SELECT * FROM nonexistent_table');
} catch (err) {
if (err.fatal) {
// Connection is no longer usable
console.error('Fatal error, connection lost:', err.message);
// Reconnect or fail gracefully
} else {
// Connection is still valid despite the error
console.error('Non-fatal error:', err.message);
// Continue using the same connection
}
}

{
Error: (conn:116, no: 1146, SQLState: 42S02) Table 'testdb.nonexistent_table' doesn't exist
sql: SELECT * FROM nonexistent_table - parameters:[]
at Socket.Readable.push (_stream_readable.js:134:10)
at TCP.onread (net.js:559:20)
From event:
at Connection.query (/path/to/mariadb-connector-nodejs/lib/connection.js:183:12)
at async function (/path/to/your/app.js:25:16)
fatal: false,
errno: 1146,
sqlState: '42S02',
code: 'ER_NO_SUCH_TABLE'
}
const conn = await mariadb.createConnection({
user: 'root',
password: 'myPwd',
host: 'localhost',
socketTimeout: 100
});
conn.on('error', err => {
//will be executed after 100ms due to inactivity, socket has closed.
console.log(err);
//log :
//{ Error: (conn:6283, no: 45026, SQLState: 08S01) socket timeout
// ...
// at Socket.emit (events.js:208:7)
// at Socket._onTimeout (net.js:410:8)
// at ontimeout (timers.js:498:11)
// at tryOnTimeout (timers.js:323:5)
// at Timer.listOnTimeout (timers.js:290:5)
// fatal: true,
// errno: 45026,
// sqlState: '08S01',
// code: 'ER_SOCKET_TIMEOUT' }
});
// Create a pool
const pool = mariadb.createPool({
host: 'localhost',
user: 'root',
password: 'password',
connectionLimit: 5
});
async function transferFunds(fromAccount, toAccount, amount) {
let conn;
try {
// Get a connection from the pool
conn = await pool.getConnection();
// Use the connection for a transaction
await conn.query("START TRANSACTION");
// Verify sufficient funds
const [account] = await conn.query(
"SELECT balance FROM accounts WHERE id = ? FOR UPDATE",
[fromAccount]
);
if (account.balance < amount) {
await conn.query("ROLLBACK");
return { success: false, message: "Insufficient funds" };
}
// Perform the transfer
await conn.query(
"UPDATE accounts SET balance = balance - ? WHERE id = ?",
[amount, fromAccount]
);
await conn.query(
"UPDATE accounts SET balance = balance + ? WHERE id = ?",
[amount, toAccount]
);
// Commit the transaction
await conn.query("COMMIT");
return { success: true, message: "Transfer completed" };
} catch (err) {
// Handle errors
if (conn) await conn.query("ROLLBACK");
console.error('Transaction failed:', err);
return { success: false, error: err.message };
} finally {
// Always release the connection back to the pool
if (conn) conn.release();
}
}

// Simple query using the pool directly
async function getProductsByCategory(category) {
try {
const rows = await pool.query(
'SELECT * FROM products WHERE category = ? ORDER BY price ASC',
[category]
);
console.log(`Found ${rows.length} products in ${category} category`);
return {
success: true,
count: rows.length,
products: rows
};
} catch (err) {
console.error('Query failed:', err);
return {
success: false,
error: err.message
};
}
}

async function getRecentOrders(options) {
try {
const rows = await pool.query({
sql: 'SELECT * FROM orders WHERE created_at > ? LIMIT ?',
values: [options.since, options.limit || 10],
dateStrings: true, // Return dates as strings
nestTables: true // Group results by table
});
return rows;
} catch (err) {
console.error('Failed to fetch recent orders:', err);
throw err;
}
}

async function addMultipleUsers(users) {
try {
// Format user data for batch insert
const userValues = users.map(user => [
user.name,
user.email,
user.password,
user.created_at || new Date()
]);
    const results = await pool.batch(
      { sql: 'INSERT INTO users(name, email, password, created_at) VALUES (?, ?, ?, ?)', fullResult: true },
      userValues);
    // results is an array of per-row OkPacket objects
    console.log(`Added ${results.length} users`);
    return {
      success: true,
      insertCount: results.length,
      insertIds: results.map(r => r.insertId)
    };
} catch (err) {
console.error('Batch user creation failed:', err);
return {
success: false,
error: err.message
};
}
}

// Application shutdown handler
async function gracefulShutdown() {
console.log('Application shutting down...');
try {
// Close database pool
console.log('Closing database connections...');
await pool.end();
console.log('All database connections closed successfully');
// Close other resources
// ...
console.log('Shutdown complete');
process.exit(0);
} catch (err) {
console.error('Error during shutdown:', err);
process.exit(1);
}
}
// Register shutdown handlers
process.on('SIGINT', gracefulShutdown);
process.on('SIGTERM', gracefulShutdown);

const userId = "user's-id";
const query = `SELECT * FROM users WHERE id = ${pool.escape(userId)}`;
// query = "SELECT * FROM users WHERE id = 'user\'s-id'"

const tableName = "user-data";
const columnName = "last-login";
const query = `SELECT ${pool.escapeId(columnName)} FROM ${pool.escapeId(tableName)}`;
// query = "SELECT `last-login` FROM `user-data`"

async function importDatabaseDump(filePath, targetDatabase) {
try {
await pool.importFile({
file: filePath,
database: targetDatabase
});
console.log(`Successfully imported ${filePath} into ${targetDatabase}`);
return { success: true };
} catch (err) {
console.error(`Import failed: ${err.message}`);
return {
success: false,
error: err.message
};
}
}

pool.on('acquire', (connection) => {
console.log(`Connection ${connection.threadId} acquired from pool`);
});

pool.on('connection', (connection) => {
console.log(`New connection ${connection.threadId} created in pool`);
// You can initialize connections with specific settings
connection.query("SET SESSION time_zone='+00:00'");
connection.query("SET SESSION sql_mode='STRICT_TRANS_TABLES,NO_ZERO_IN_DATE'");
});

pool.on('release', (connection) => {
console.log(`Connection ${connection.threadId} released back to pool`);
});

pool.on('error', (err) => {
console.error('Pool error:', err);
// Implement monitoring or recovery logic
notifyAdministrator(`Database pool error: ${err.message}`);
});

// Get current number of active connections
const active = pool.activeConnections();
// Get total number of connections (used and unused)
const total = pool.totalConnections();
// Get current number of unused connections
const idle = pool.idleConnections();
// Get size of pending connection requests queue
const queued = pool.taskQueueSize();
console.log(`Pool status: ${active}/${total} connections active, ${idle} idle, ${queued} requests queued`);

const mariadb = require('mariadb');
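Taken together, these counters make a convenient health snapshot, for instance for periodic logging or a health-check endpoint. A minimal sketch, assuming a helper named `poolStatus` (an illustrative name, not part of the connector API):

```javascript
// Bundle the pool counters into one snapshot object.
// `poolStatus` is an illustrative helper, not a connector API.
function poolStatus(p) {
  return {
    active: p.activeConnections(),   // connections currently in use
    total: p.totalConnections(),     // all open connections
    idle: p.idleConnections(),       // open but unused connections
    queued: p.taskQueueSize()        // pending getConnection() requests
  };
}

// e.g. log a snapshot every 30 seconds:
// setInterval(() => console.log('Pool status:', poolStatus(pool)), 30000);
```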
// Create the cluster
const cluster = mariadb.createPoolCluster({
removeNodeErrorCount: 5, // Remove a node after 5 consecutive connection failures
restoreNodeTimeout: 1000, // Wait 1 second before trying a failed node again
defaultSelector: 'ORDER' // Use nodes in order (first working node in the list)
});
// Add database nodes to the cluster
cluster.add('primary', {
host: 'primary-db.example.com',
user: 'app_user',
password: 'password',
connectionLimit: 10
});
cluster.add('replica1', {
host: 'replica1-db.example.com',
user: 'app_user',
password: 'password',
connectionLimit: 20
});
cluster.add('replica2', {
host: 'replica2-db.example.com',
user: 'app_user',
password: 'password',
connectionLimit: 20
});

// Create an empty cluster
const cluster = mariadb.createPoolCluster();
// Add a primary database node
cluster.add('primary', {
host: 'primary-db.example.com',
user: 'app_user',
password: 'password',
connectionLimit: 10
});
// Add multiple read-only replica nodes
cluster.add('replica-east', {
host: 'replica-east.example.com',
user: 'readonly_user',
password: 'password',
connectionLimit: 20
});
cluster.add('replica-west', {
host: 'replica-west.example.com',
user: 'readonly_user',
password: 'password',
connectionLimit: 20
});

// Create a cluster with multiple nodes
const cluster = mariadb.createPoolCluster();
cluster.add('primary', { host: 'primary-db.example.com', user: 'app_user' });
cluster.add('replica1', { host: 'replica1.example.com', user: 'readonly_user' });
cluster.add('replica2', { host: 'replica2.example.com', user: 'readonly_user' });
cluster.add('analytics', { host: 'analytics-db.example.com', user: 'analytics_user' });
// Later, remove all replica nodes
cluster.remove('replica*');
// Remove a specific node
cluster.remove('analytics');

async function executeQuery(sql, params) {
let conn;
try {
// For write operations, always use the primary
    const statement = sql.trim().toLowerCase();
    if (statement.startsWith('insert') ||
        statement.startsWith('update') ||
        statement.startsWith('delete')) {
conn = await cluster.getConnection('primary');
}
// For read operations, use round-robin among replicas
else {
conn = await cluster.getConnection('replica*', 'RR');
}
const result = await conn.query(sql, params);
return result;
} finally {
if (conn) conn.release();
}
}
// Usage
const users = await executeQuery('SELECT * FROM users WHERE status = ?', ['active']);
await executeQuery('UPDATE users SET last_login = NOW() WHERE id = ?', [userId]);

async function executeQueryWithRetry(sql, params, maxRetries = 3) {
let attempts = 0;
let lastError;
while (attempts < maxRetries) {
let conn;
attempts++;
try {
conn = await cluster.getConnection('*', 'ORDER'); // Try nodes in order
const result = await conn.query(sql, params);
return result;
} catch (err) {
lastError = err;
console.warn(`Query attempt ${attempts} failed:`, err.message);
// Only retry on connection errors, not query syntax errors
if (!err.fatal) throw err;
// Wait before retrying
await new Promise(resolve => setTimeout(resolve, 500));
} finally {
if (conn) conn.release();
}
}
throw new Error(`All ${maxRetries} query attempts failed. Last error: ${lastError.message}`);
}

// Create interfaces for different database roles
const primaryPool = cluster.of('primary'); // Only the primary node
const replicaPool = cluster.of('replica*', 'RANDOM'); // All replicas with random selection
async function readData(userId) {
let conn;
try {
// Get connection from any replica randomly
conn = await replicaPool.getConnection();
return await conn.query('SELECT * FROM users WHERE id = ?', [userId]);
} finally {
if (conn) conn.release();
}
}
async function writeData(userData) {
let conn;
try {
// Always write to primary
conn = await primaryPool.getConnection();
    // Note: object expansion for "SET ?" needs the connection option
    // permitSetMultiParamEntries: true (a mysql/mysql2 compatibility option)
    await conn.query('INSERT INTO users SET ?', userData);
return { success: true };
} finally {
if (conn) conn.release();
}
}

// Application shutdown handler
async function gracefulShutdown() {
console.log('Application shutting down...');
try {
// Close database connection pool cluster
console.log('Closing database connections...');
await cluster.end();
console.log('All database connections closed successfully');
// Close other resources
// ...
console.log('Shutdown complete');
process.exit(0);
} catch (err) {
console.error('Error during shutdown:', err);
process.exit(1);
}
}
// Register shutdown handlers
process.on('SIGINT', gracefulShutdown);
process.on('SIGTERM', gracefulShutdown);

// Create a filtered cluster with only replica nodes
const replicas = cluster.of('replica*', 'RR'); // Round-robin among replicas
async function getReadOnlyData() {
let conn;
try {
// This will automatically use round-robin selection among replica nodes
conn = await replicas.getConnection();
return await conn.query('SELECT * FROM some_large_table LIMIT 1000');
} finally {
if (conn) conn.release();
}
}

// Create filtered clusters for different roles
const primary = cluster.of('primary');
const replicas = cluster.of('replica*', 'RR');
// Read from replicas using the shorthand query method
async function getUserById(id) {
try {
return await replicas.query('SELECT * FROM users WHERE id = ?', [id]);
} catch (err) {
console.error('Failed to get user:', err);
throw err;
}
}
// Write to primary
async function updateUserStatus(id, status) {
try {
return await primary.query(
'UPDATE users SET status = ?, updated_at = NOW() WHERE id = ?',
[status, id]
);
} catch (err) {
console.error('Failed to update user status:', err);
throw err;
}
}

cluster.on('remove', (nodeId) => {
console.warn(`Database node '${nodeId}' has been removed from the cluster`);
// You might want to send alerts or trigger monitoring
notifyAdministrators(`Database node ${nodeId} has been removed from the cluster due to repeated connection failures`);
});

// Direct write operations to primary
const primary = cluster.of('primary');
// Direct read operations to replicas
const replicas = cluster.of('replica*', 'RR');
async function saveData(data) {
  // Writes go to the primary ("app_data" is an example name; "table" itself is a reserved word)
  // Note: "SET ?" object expansion needs the permitSetMultiParamEntries connection option
  return await primary.query('INSERT INTO app_data SET ?', [data]);
}
async function getData(id) {
  // Reads come from replicas
  return await replicas.query('SELECT * FROM app_data WHERE id = ?', [id]);
}
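The examples above split reads and writes at each call site. When arbitrary SQL flows through a single entry point, a small classifier can pick the target pool instead. This is a minimal sketch under stated assumptions: `isWriteStatement`, `runQuery`, and the verb list are illustrative, not part of the connector API.

```javascript
// Verbs that should always be routed to the primary (illustrative list).
const WRITE_VERBS = ['insert', 'update', 'delete', 'replace', 'create', 'alter', 'drop', 'truncate'];

// Classify a statement by its leading verb.
function isWriteStatement(sql) {
  const verb = sql.trim().toLowerCase().split(/\s+/, 1)[0];
  return WRITE_VERBS.includes(verb);
}

// Route a query to the primary or a replica via filtered clusters.
async function runQuery(cluster, sql, params) {
  const target = isWriteStatement(sql)
    ? cluster.of('primary')
    : cluster.of('replica*', 'RR');
  return await target.query(sql, params);
}
```

Keeping the verb list explicit makes the routing policy auditable; anything unrecognized falls through to the replicas, so statements such as CALL or transactional batches should still be sent to the primary explicitly.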