Microsoft SQL Server client for Node.js
Supported TDS drivers:
- Tedious (pure JavaScript - Windows/macOS/Linux, default)
- MSNodeSQLv8 (Microsoft / Contributors Node V8 Driver for Node.js for SQL Server, v2 native - Windows, or Linux/macOS 64-bit only)
npm install mssql
npm install mssql msnodesqlv8
This package requires TCP/IP to connect to SQL Server, and you may need to enable this in your installation.
const sql = require('mssql')
;(async () => {
try {
// make sure that any items are correctly URL encoded in the connection string
await sql.connect('Server=localhost,1433;Database=database;User Id=username;Password=password;Encrypt=true')
const result = await sql.query`select * from mytable where id = ${value}`
console.dir(result)
} catch (err) {
// ... error checks
}
})()
If you're on Windows Azure, add ?encrypt=true to your connection string. See docs to learn more.
Parts of the connection URI should be correctly URL encoded so that the URI can be parsed correctly.
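For example, assuming the URL-encoding behaviour described above, a password containing reserved characters can be encoded with standard JavaScript before the string is assembled (the credentials shown are placeholders; run inside an async function):
const password = encodeURIComponent('p@ss;w/rd')
const connectionString = `Server=localhost,1433;Database=database;User Id=username;Password=${password};Encrypt=true`
await sql.connect(connectionString)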
Assuming you have set the appropriate environment variables, you can construct a config object as follows:
const sql = require('mssql')
const sqlConfig = {
user: process.env.DB_USER,
password: process.env.DB_PWD,
database: process.env.DB_NAME,
server: 'localhost',
pool: {
max: 10,
min: 0,
idleTimeoutMillis: 30000
},
options: {
encrypt: true, // for azure
trustServerCertificate: false // change to true for local dev / self-signed certs
}
}
;(async () => {
try {
// connect using the config object defined above
await sql.connect(sqlConfig)
const result = await sql.query`select * from mytable where id = ${value}`
console.dir(result)
} catch (err) {
// ... error checks
}
})()
const sql = require('mssql/msnodesqlv8');
const config = {
server: "MyServer",
database: "MyDatabase",
options: {
trustedConnection: true, // Set to true if using Windows Authentication
trustServerCertificate: true, // Set to true if using self-signed certificates
},
driver: "msnodesqlv8", // Required if using Windows Authentication
};
(async () => {
try {
await sql.connect(config);
const result = await sql.query`select TOP 10 * from MyTable`;
console.dir(result);
} catch (err) {
console.error(err);
}
})();
- CLI
- Geography and Geometry
- Table-Valued Parameter
- Response Schema
- Affected Rows
- JSON support
- Handling Duplicate Column Names
- Errors
- Informational messages
- Metadata
- Data Types
- SQL injection
- Known Issues
- Contributing
- 8.x to 9.x changes
- 7.x to 8.x changes
- 6.x to 7.x changes
- 5.x to 6.x changes
- 4.x to 5.x changes
- 3.x to 4.x changes
- 3.x Documentation
const config = {
user: '...',
password: '...',
server: 'localhost', // You can use 'localhost\\instance' to connect to named instance
database: '...',
}
const sql = require('mssql')
;(async function () {
try {
let pool = await sql.connect(config)
let result1 = await pool.request()
.input('input_parameter', sql.Int, value)
.query('select * from mytable where id = @input_parameter')
console.dir(result1)
// Stored procedure
let result2 = await pool.request()
.input('input_parameter', sql.Int, value)
.output('output_parameter', sql.VarChar(50))
.execute('procedure_name')
console.dir(result2)
} catch (err) {
// ... error checks
}
})()
sql.on('error', err => {
// ... error handler
})
const sql = require('mssql')
sql.on('error', err => {
// ... error handler
})
sql.connect(config).then(pool => {
// Query
return pool.request()
.input('input_parameter', sql.Int, value)
.query('select * from mytable where id = @input_parameter')
}).then(result => {
console.dir(result)
}).catch(err => {
// ... error checks
});
const sql = require('mssql')
sql.on('error', err => {
// ... error handler
})
sql.connect(config).then(pool => {
// Stored procedure
return pool.request()
.input('input_parameter', sql.Int, value)
.output('output_parameter', sql.VarChar(50))
.execute('procedure_name')
}).then(result => {
console.dir(result)
}).catch(err => {
// ... error checks
})
Native Promise is used by default. You can easily change this with sql.Promise = require('myownpromisepackage').
const sql = require('mssql')
sql.connect(config).then(() => {
return sql.query`select * from mytable where id = ${value}`
}).then(result => {
console.dir(result)
}).catch(err => {
// ... error checks
})
sql.on('error', err => {
// ... error handler
})
All values are automatically sanitized against SQL injection because the query is rendered as a prepared statement, and thus all limitations imposed by MS SQL on parameters apply; e.g. column names cannot be passed/set in statements using variables.
const sql = require('mssql')
sql.connect(config, err => {
// ... error checks
// Query
new sql.Request().query('select 1 as number', (err, result) => {
// ... error checks
console.dir(result)
})
// Stored Procedure
new sql.Request()
.input('input_parameter', sql.Int, value)
.output('output_parameter', sql.VarChar(50))
.execute('procedure_name', (err, result) => {
// ... error checks
console.dir(result)
})
// Using template literal
const request = new sql.Request()
request.query(request.template`select * from mytable where id = ${value}`, (err, result) => {
// ... error checks
console.dir(result)
})
})
sql.on('error', err => {
// ... error handler
})
If you plan to work with a large number of rows, you should always use streaming. Once you enable this, you must listen for events to receive data. Events must be attached before the query completes, but can be attached while it is in flight.
const sql = require('mssql')
sql.connect(config, err => {
// ... error checks
const request = new sql.Request()
request.stream = true // You can set streaming differently for each request
request.on('recordset', columns => {
// Emitted once for each recordset in a query
})
request.on('row', row => {
// Emitted for each row in a recordset
})
request.on('rowsaffected', rowCount => {
// Emitted for each `INSERT`, `UPDATE` or `DELETE` statement
// Requires NOCOUNT to be OFF (default)
})
request.on('error', err => {
// May be emitted multiple times
})
request.on('done', result => {
// Always emitted as the last one
})
request.query('select * from verylargetable') // or request.execute(procedure)
})
sql.on('error', err => {
// ... error handler
})
When streaming large sets of data you want to back-off or chunk the amount of data you're processing to prevent memory exhaustion issues; you can use the Request.pause() function to do this. Here is an example of managing rows in batches of 15:
let rowsToProcess = [];
request.on('row', row => {
rowsToProcess.push(row);
if (rowsToProcess.length >= 15) {
request.pause();
processRows();
}
});
request.on('done', () => {
processRows();
});
function processRows() {
// process rows
rowsToProcess = [];
request.resume();
}
An important concept to understand when using this library is Connection Pooling, as this library uses connection pooling extensively. As one Node.js process is able to handle multiple requests at once, we can take advantage of this long-running process to create a pool of database connections for reuse; this saves the overhead of connecting to the database for each request (as would be the case in something like PHP, where one process handles one request).
With the advantages of pooling comes some added complexities, but these are mostly just conceptual and once you understand how the pooling is working, it is simple to make use of it efficiently and effectively.
To assist with pool management in your application there is the sql.connect() function that is used to connect to the global connection pool. You can make repeated calls to this function, and if the global pool is already connected, it will resolve to the connected pool. The following example obtains the global connection pool by running sql.connect(), and then runs the query against the pool.
NB: It's important to note that there can only be one global connection pool connected at a time. Providing a different connection config to the connect() function will not create a new connection if it is already connected.
const sql = require('mssql')
const config = { ... }
// run a query against the global connection pool
function runQuery(query) {
// sql.connect() will return the existing global pool if it exists or create a new one if it doesn't
return sql.connect(config).then((pool) => {
return pool.query(query)
})
}
Awaiting or .then-ing the pool creation is a safe way to ensure that the pool is always ready, without knowing where it is needed first. In practice, once the pool is created there will be no delay for the next connect() call.
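A common way to follow this advice is to keep one module that calls connect() once and shares the resulting promise; a minimal sketch (the file name and config are illustrative):
// db.js - hypothetical module that creates the global pool once and shares the promise
const sql = require('mssql')
const config = { /* ... */ }
const poolPromise = sql.connect(config) // every require of this module re-uses the same promise
module.exports = { sql, poolPromise }

// elsewhere in the application:
// const { poolPromise } = require('./db')
// const pool = await poolPromise
// const result = await pool.request().query('select 1 as number')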
Also notice that we do not close the global pool by calling sql.close() after the query is executed, because other queries may need to be run against this pool and closing it will add additional overhead to running subsequent queries. You should only ever close the global pool if you're certain the application is finished; for example, if you are running some kind of CLI tool or a CRON job you can close the pool at the end of the script.
The ability to call connect() and close() repeatedly on the global pool is intended to make pool management easier; however, it is better to maintain your own reference to the pool, where connect() is called once, and the resulting global pool's connection promise is re-used throughout the entire application.
For example, in Express applications, the following approach uses a single global pool instance added to app.locals so the application has access to it when needed. The server start is then chained inside the connect() promise.
const express = require('express')
const sql = require('mssql')
const config = {/*...*/}
//instantiate a connection pool
const appPool = new sql.ConnectionPool(config)
//require route handlers and use the same connection pool everywhere
const route1 = require('./routes/route1')
const app = express()
app.get('/path', route1)
//connect the pool and start the web server when done
appPool.connect().then(function(pool) {
app.locals.db = pool;
const server = app.listen(3000, function () {
const host = server.address().address
const port = server.address().port
console.log('Example app listening at http://%s:%s', host, port)
})
}).catch(function(err) {
console.error('Error creating connection pool', err)
});
Then the route uses the connection pool in the app.locals object:
// ./routes/route1.js
const sql = require('mssql');
module.exports = function(req, res) {
req.app.locals.db.query('SELECT TOP 10 * FROM table_name', function(err, recordset) {
if (err) {
console.error(err)
res.status(500).send('SERVER ERROR')
return
}
res.status(200).json({ message: 'success' })
})
}
For some use-cases you may want to implement your own connection pool management, rather than using the global connection pool. Reasons for doing this include:
- Supporting connections to multiple databases
- Creation of separate pools for read vs read/write operations
The following code is an example of a custom connection pool implementation.
// pool-manager.js
const mssql = require('mssql')
const pools = new Map();
module.exports = {
/**
* Get or create a pool. If a pool doesn't exist the config must be provided.
* If the pool does exist the config is ignored (even if it was different to the one provided
* when creating the pool)
*
* @param {string} name
* @param {{}} [config]
* @return {Promise.<mssql.ConnectionPool>}
*/
get: (name, config) => {
if (!pools.has(name)) {
if (!config) {
throw new Error('Pool does not exist');
}
const pool = new mssql.ConnectionPool(config);
// automatically remove the pool from the cache if `pool.close()` is called
const close = pool.close.bind(pool);
pool.close = (...args) => {
pools.delete(name);
return close(...args);
}
pools.set(name, pool.connect());
}
return pools.get(name);
},
/**
* Closes all the pools and removes them from the store
*
* @return {Promise<mssql.ConnectionPool[]>}
*/
closeAll: () => Promise.all(Array.from(pools.values()).map((connect) => {
return connect.then((pool) => pool.close());
})),
};
This file can then be used in your application to create, fetch, and close pools.
const { get } = require('./pool-manager')
async function example() {
const pool = await get('default')
return pool.request().query('SELECT 1')
}
Similar to the global connection pool, you should aim to only close a pool when you know it will never be needed by the application again. Typically this will only be when your application is shutting down.
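For example, a graceful-shutdown hook built on the closeAll() helper from the pool-manager sketch above might look like this (the signal handling shown is illustrative):
const { closeAll } = require('./pool-manager')

process.on('SIGTERM', () => {
  // close every managed pool before the process exits
  closeAll()
    .then(() => process.exit(0))
    .catch(() => process.exit(1))
})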
In some instances it is desirable to manipulate the record data as it is returned from the database. This may be to cast it as a particular object (e.g. a moment object instead of a Date) or similar.
In v8.0.0+ it is possible to register per-datatype handlers:
const sql = require('mssql')
// in this example all integer values will return 1 more than their actual value in the database
sql.valueHandler.set(sql.TYPES.Int, (value) => value + 1)
sql.query('SELECT * FROM [example]').then((result) => {
// all `int` columns will return a manipulated value as per the callback above
})
The following is an example configuration object:
const config = {
user: '...',
password: '...',
server: 'localhost',
database: '...',
pool: {
max: 10,
min: 0,
idleTimeoutMillis: 30000
}
}
- user - User name to use for authentication.
- password - Password to use for authentication.
- server - Server to connect to. You can use 'localhost\instance' to connect to a named instance.
- port - Port to connect to (default: 1433). Don't set when connecting to a named instance.
- domain - Once you set domain, the driver will connect to SQL Server using domain login.
- database - Database to connect to (default: dependent on server configuration).
- connectionTimeout - Connection timeout in ms (default: 15000).
- requestTimeout - Request timeout in ms (default: 15000). NOTE: the msnodesqlv8 driver doesn't support timeouts < 1 second. When passed via connection string, the key must be request timeout.
- stream - Stream recordsets/rows instead of returning them all at once as an argument of callback (default: false). You can also enable streaming for each request independently (request.stream = true). Always set to true if you plan to work with a large number of rows.
- parseJSON - Parse JSON recordsets to JS objects (default: false). For more information please see section JSON support.
- pool.max - The maximum number of connections there can be in the pool (default: 10).
- pool.min - The minimum number of connections there can be in the pool (default: 0).
- pool.idleTimeoutMillis - The number of milliseconds before closing an unused connection (default: 30000).
- arrayRowMode - Return row results as an array instead of a keyed object. Also adds a columns array (default: false). See Handling Duplicate Column Names.
A complete list of pool options can be found here.
In addition to the configuration object, the config can be passed as a connection string. Two formats of connection string are supported:
Server=localhost,1433;Database=database;User Id=username;Password=password;Encrypt=true
Driver=msnodesqlv8;Server=(local)\INSTANCE;Database=database;UID=DOMAIN\username;PWD=password;Encrypt=true
Several types of Azure Authentication are supported:
Server=*.database.windows.net;Database=database;Authentication=Active Directory Integrated;Client secret=clientsecret;Client Id=clientid;Tenant Id=tenantid;Encrypt=true
Note: Internally, 'Active Directory Integrated' changes its authentication type depending on the other parameters you supply. In the example above, it changes to azure-active-directory-service-principal-secret because we supplied a Client Id, Client secret and Tenant Id.
If you want to use authentication tokens (azure-active-directory-access-token), remove the additional parameters and supply only a token parameter, as in this example:
Server=*.database.windows.net;Database=database;Authentication=Active Directory Integrated;token=token;Encrypt=true
Finally, if you want to use managed identity services, such as a managed identity in App Service, you can follow the example below:
Server=*.database.windows.net;Database=database;Authentication=Active Directory Integrated;msi endpoint=msiendpoint;Client Id=clientid;msi secret=msisecret;Encrypt=true
or, for a managed identity on virtual machines, follow this:
Server=*.database.windows.net;Database=database;Authentication=Active Directory Integrated;msi endpoint=msiendpoint;Client Id=clientid;Encrypt=true
Active Directory Password authentication is also supported but, unlike the previous examples, it is not part of Active Directory Integrated authentication.
Server=*.database.windows.net;Database=database;Authentication=Active Directory Password;User Id=username;Password=password;Client Id=clientid;Tenant Id=tenantid;Encrypt=true
For more reference, you can consult the documentation here, under the authentication.type parameter.
Default driver, actively maintained and production ready. Platform independent, runs everywhere Node.js runs. Officially supported by Microsoft.
Extra options:
- beforeConnect(conn) - Function, which is invoked before opening the connection. The parameter conn is the configured tedious Connection. It can be used for attaching event handlers like in this example:
require('mssql').connect({...config, beforeConnect: conn => {
conn.once('connect', err => { err ? console.error(err) : console.log('mssql connected')})
conn.once('end', err => { err ? console.error(err) : console.log('mssql disconnected')})
}})
- options.instanceName - The instance name to connect to. The SQL Server Browser service must be running on the database server, and UDP port 1434 on the database server must be reachable.
- options.useUTC - A boolean determining whether or not to use UTC time for values without a time zone offset (default: true).
- options.encrypt - A boolean determining whether or not the connection will be encrypted (default: true).
- options.tdsVersion - The version of TDS to use (default: 7_4, available: 7_1, 7_2, 7_3_A, 7_3_B, 7_4).
- options.appName - Application name used for SQL Server logging.
- options.abortTransactionOnError - A boolean determining whether to rollback a transaction automatically if any error is encountered during the given transaction's execution. This sets the value for XACT_ABORT during the initial SQL phase of a connection.
Authentication:
On top of the extra options, an authentication property can be added to the pool config options:
- authentication - An object with authentication settings, according to the Tedious Documentation. Passing this object will override the user, password and domain settings.
- authentication.type - Type of the authentication method; valid types are default, ntlm, azure-active-directory-password, azure-active-directory-access-token, azure-active-directory-msi-vm, or azure-active-directory-msi-app-service.
- authentication.options - Options of the authentication required by the tedious driver, dependent on authentication.type. For more details, check the Tedious Authentication Interfaces.
Note that tedious does not support Windows Authentication/Trusted Connection; the msnodesqlv8 driver does. A short config sketch follows below.
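A minimal sketch of a pool config using the authentication property (values are placeholders; the exact option names come from the tedious authentication interfaces):
const config = {
  server: 'localhost',
  database: 'mydatabase',
  options: { encrypt: true },
  authentication: {
    type: 'default', // plain SQL Server login
    options: {
      userName: 'username',
      password: 'password'
    }
  }
}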
More information about Tedious specific options: http://tediousjs.github.io/tedious/api-connection.html
Alternative driver; requires Node.js v10 or newer and runs on Windows (32 or 64-bit) or Linux/macOS (64-bit only). It's not part of the default package so it must be installed in addition. Supports Windows/Trusted Connection authentication.
To use this driver you must use this require statement:
const sql = require('mssql/msnodesqlv8')
Note: If you import types into your code to prepare your requests (const { VarChar } = require('mssql')), you also need to switch those type imports to the msnodesqlv8 entry point (const { VarChar } = require('mssql/msnodesqlv8')), otherwise a connection.on is not a function error will be thrown.
Extra options:
- beforeConnect(conn) - Function, which is invoked before opening the connection. The parameter conn is the connection configuration, which can be modified to pass extra parameters to the driver's open() method.
- connectionString - Connection string (default: see below).
- options.instanceName - The instance name to connect to. The SQL Server Browser service must be running on the database server, and UDP port 1434 on the database server must be reachable.
- options.trustedConnection - Use Windows Authentication (default: false).
- options.useUTC - A boolean determining whether or not to use UTC time for values without a time zone offset (default: true).
Default connection string when connecting to port:
Driver={SQL Server Native Client 11.0};Server={#{server},#{port}};Database={#{database}};Uid={#{user}};Pwd={#{password}};Trusted_Connection={#{trusted}};
Default connection string when connecting to named instance:
Driver={SQL Server Native Client 11.0};Server={#{server}\\#{instance}};Database={#{database}};Uid={#{user}};Pwd={#{password}};Trusted_Connection={#{trusted}};
Please note that the connection string for this driver is not the same as for tedious; it uses yes/no instead of true/false. You can see more in the ODBC documentation.
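For illustration only, a connection string for this driver using the yes/no convention might look like the following (server, instance and database names are placeholders):
Driver=msnodesqlv8;Server=(local)\INSTANCE;Database=mydatabase;Trusted_Connection=yes;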
Internally, each ConnectionPool instance is a separate pool of TDS connections. Once you create a new Request/Transaction/Prepared Statement, a new TDS connection is acquired from the pool and reserved for the desired action. Once the action is complete, the connection is released back to the pool. A connection health check is built in, so once a dead connection is discovered it is immediately replaced with a new one.
IMPORTANT: Always attach an error listener to the created connection. Whenever something goes wrong with the connection it will emit an error, and if there is no listener it will crash your application with an uncaught error.
const pool = new sql.ConnectionPool({ /* config */ })
- error(err) - Dispatched on connection error.
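For example, the error listener mentioned above can be attached right after the pool is created:
pool.on('error', err => {
  // handle connection-level errors here so an unhandled 'error' event doesn't crash the process
  console.error('Connection pool error', err)
})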
Create a new connection pool. The initial probe connection is created to find out whether the configuration is valid.
Arguments
- callback(err) - A callback which is called after initial probe connection has established, or an error has occurred. Optional. If omitted, returns Promise.
Example
const pool = new sql.ConnectionPool({
user: '...',
password: '...',
server: 'localhost',
database: '...'
})
pool.connect(err => {
// ...
})
Errors
- ELOGIN (ConnectionError) - Login failed.
- ETIMEOUT (ConnectionError) - Connection timeout.
- EALREADYCONNECTED (ConnectionError) - Database is already connected!
- EALREADYCONNECTING (ConnectionError) - Already connecting to database!
- EINSTLOOKUP (ConnectionError) - Instance lookup failed.
- ESOCKET (ConnectionError) - Socket error.
Close all active connections in the pool.
Example
pool.close()
const request = new sql.Request(/* [pool or transaction] */)
If you omit the pool/transaction argument, the global pool is used instead.
- recordset(columns) - Dispatched when metadata for new recordset are parsed.
- row(row) - Dispatched when new row is parsed.
- done(returnValue) - Dispatched when request is complete.
- error(err) - Dispatched on error.
- info(message) - Dispatched on informational message.
Call a stored procedure.
Arguments
- procedure - Name of the stored procedure to be executed.
- callback(err, recordsets, returnValue) - A callback which is called after execution has completed, or an error has occurred. returnValue is also accessible as a property of recordsets. Optional. If omitted, returns Promise.
Example
const request = new sql.Request()
request.input('input_parameter', sql.Int, value)
request.output('output_parameter', sql.Int)
request.execute('procedure_name', (err, result) => {
// ... error checks
console.log(result.recordsets.length) // count of recordsets returned by the procedure
console.log(result.recordsets[0].length) // count of rows contained in first recordset
console.log(result.recordset) // first recordset from result.recordsets
console.log(result.returnValue) // procedure return value
console.log(result.output) // key/value collection of output values
console.log(result.rowsAffected) // array of numbers, each number represents the number of rows affected by executed statements
// ...
})
Errors
- EREQUEST (RequestError) - Message from SQL Server.
- ECANCEL (RequestError) - Cancelled.
- ETIMEOUT (RequestError) - Request timeout.
- ENOCONN (RequestError) - No connection is specified for that request.
- ENOTOPEN (ConnectionError) - Connection not yet open.
- ECONNCLOSED (ConnectionError) - Connection is closed.
- ENOTBEGUN (TransactionError) - Transaction has not begun.
- EABORT (TransactionError) - Transaction was aborted (by user or because of an error).
Add an input parameter to the request.
Arguments
- name - Name of the input parameter without the @ char.
- type - SQL data type of the input parameter. If you omit the type, the module automatically decides which SQL data type should be used based on the JS data type.
- value - Input parameter value. undefined and NaN values are automatically converted to null values.
Example
request.input('input_parameter', value)
request.input('input_parameter', sql.Int, value)
JS Data Type To SQL Data Type Map
- String -> sql.NVarChar
- Number -> sql.Int
- Boolean -> sql.Bit
- Date -> sql.DateTime
- Buffer -> sql.VarBinary
- sql.Table -> sql.TVP
Default data type for unknown object is sql.NVarChar.
You can define your own type map.
sql.map.register(MyClass, sql.Text)
You can also overwrite the default type map.
sql.map.register(Number, sql.BigInt)
Errors (synchronous)
- EARGS (RequestError) - Invalid number of arguments.
- EINJECT (RequestError) - SQL injection warning.
NB: Do not use parameters @p{n} as these are used by the internal drivers and cause a conflict.
Add an output parameter to the request.
Arguments
- name - Name of the output parameter without the @ char.
- type - SQL data type of the output parameter.
- value - Initial value of the output parameter. undefined and NaN values are automatically converted to null values. Optional.
Example
request.output('output_parameter', sql.Int)
request.output('output_parameter', sql.VarChar(50), 'abc')
Errors (synchronous)
- EARGS (RequestError) - Invalid number of arguments.
- EINJECT (RequestError) - SQL injection warning.
Convert request to a Node.js ReadableStream
Example
const { pipeline } = require('stream')
const request = new sql.Request()
const readableStream = request.toReadableStream()
pipeline(readableStream, transformStream, writableStream)
request.query('select * from mytable')
Or, if you want to increase the highWaterMark of the read stream to buffer more rows in memory:
const { pipeline } = require('stream')
const request = new sql.Request()
const readableStream = request.toReadableStream({ highWaterMark: 100 })
pipeline(readableStream, transformStream, writableStream)
request.query('select * from mytable')
Sets request to stream mode and pulls all rows from all recordsets to a given stream.
Arguments
- stream - Writable stream in object mode.
Example
const request = new sql.Request()
request.pipe(stream)
request.query('select * from mytable')
stream.on('error', err => {
// ...
})
stream.on('finish', () => {
// ...
})
Execute the SQL command. To execute commands like create procedure or if you plan to work with local temporary tables, use batch instead.
Arguments
- command - T-SQL command to be executed.
- callback(err, recordset) - A callback which is called after execution has completed, or an error has occurred. Optional. If omitted, returns Promise.
Example
const request = new sql.Request()
request.query('select 1 as number', (err, result) => {
// ... error checks
console.log(result.recordset[0].number) // return 1
// ...
})
Errors
- ETIMEOUT (RequestError) - Request timeout.
- EREQUEST (RequestError) - Message from SQL Server.
- ECANCEL (RequestError) - Cancelled.
- ENOCONN (RequestError) - No connection is specified for that request.
- ENOTOPEN (ConnectionError) - Connection not yet open.
- ECONNCLOSED (ConnectionError) - Connection is closed.
- ENOTBEGUN (TransactionError) - Transaction has not begun.
- EABORT (TransactionError) - Transaction was aborted (by user or because of an error).
const request = new sql.Request()
request.query('select 1 as number; select 2 as number', (err, result) => {
// ... error checks
console.log(result.recordset[0].number) // return 1
console.log(result.recordsets[0][0].number) // return 1
console.log(result.recordsets[1][0].number) // return 2
})
NOTE: To get number of rows affected by the statement(s), see section Affected Rows.
Execute the SQL command. Unlike query, it doesn't use sp_executesql, so it is not likely that SQL Server will reuse the execution plan it generates for the SQL. Use this only in special cases, for example when you need to execute commands like create procedure which can't be executed with query, or if you're executing statements longer than 4000 chars on SQL Server 2000. You should also use this if you plan to work with local temporary tables (more information here).
NOTE: Table-Valued Parameter (TVP) is not supported in batch.
Arguments
- batch - T-SQL command to be executed.
- callback(err, recordset) - A callback which is called after execution has completed, or an error has occurred. Optional. If omitted, returns Promise.
Example
const request = new sql.Request()
request.batch('create procedure #temporary as select * from table', (err, result) => {
// ... error checks
})
Errors
- ETIMEOUT (RequestError) - Request timeout.
- EREQUEST (RequestError) - Message from SQL Server.
- ECANCEL (RequestError) - Cancelled.
- ENOCONN (RequestError) - No connection is specified for that request.
- ENOTOPEN (ConnectionError) - Connection not yet open.
- ECONNCLOSED (ConnectionError) - Connection is closed.
- ENOTBEGUN (TransactionError) - Transaction has not begun.
- EABORT (TransactionError) - Transaction was aborted (by user or because of an error).
You can enable multiple recordsets in queries with the request.multiple = true command.
Perform a bulk insert.
Arguments
- table - sql.Table instance.
- options - Options object to be passed through to driver (currently tedious only). Optional. If argument is a function it will be treated as the callback.
- callback(err, rowCount) - A callback which is called after bulk insert has completed, or an error has occurred. Optional. If omitted, returns Promise.
Example
const table = new sql.Table('table_name') // or temporary table, e.g. #temptable
table.create = true
table.columns.add('a', sql.Int, {nullable: true, primary: true})
table.columns.add('b', sql.VarChar(50), {nullable: false})
table.rows.add(777, 'test')
const request = new sql.Request()
request.bulk(table, (err, result) => {
// ... error checks
})
IMPORTANT: Always indicate whether the column is nullable or not!
TIP: If you set table.create to true, the module will check whether the table exists before it starts sending data. If it doesn't exist, it will automatically create it. You can specify primary key columns by setting primary: true in the column's options. A primary key constraint on multiple columns is supported.
TIP: You can also create a Table variable from any recordset with recordset.toTable(). You can optionally specify the table type name in the first argument.
Errors
- ENAME (RequestError) - Table name must be specified for bulk insert.
- ETIMEOUT (RequestError) - Request timeout.
- EREQUEST (RequestError) - Message from SQL Server.
- ECANCEL (RequestError) - Cancelled.
- ENOCONN (RequestError) - No connection is specified for that request.
- ENOTOPEN (ConnectionError) - Connection not yet open.
- ECONNCLOSED (ConnectionError) - Connection is closed.
- ENOTBEGUN (TransactionError) - Transaction has not begun.
- EABORT (TransactionError) - Transaction was aborted (by user or because of an error).
Cancel the currently executing request. Returns true if the cancellation packet was sent successfully.
Example
const request = new sql.Request()
request.query('waitfor delay \'00:00:05\'; select 1 as number', (err, result) => {
console.log(err instanceof sql.RequestError) // true
console.log(err.message) // Cancelled.
console.log(err.code) // ECANCEL
// ...
})
request.cancel()
IMPORTANT: always use the Transaction class to create transactions - it ensures that all your requests are executed on one connection. Once you call begin, a single connection is acquired from the connection pool and all subsequent requests (initialized with the Transaction object) are executed exclusively on this connection. After you call commit or rollback, the connection is released back to the connection pool.
const transaction = new sql.Transaction(/* [pool] */)
If you omit the connection argument, the global connection is used instead.
Example
const transaction = new sql.Transaction(/* [pool] */)
transaction.begin(err => {
// ... error checks
const request = new sql.Request(transaction)
request.query('insert into mytable (mycolumn) values (12345)', (err, result) => {
// ... error checks
transaction.commit(err => {
// ... error checks
console.log("Transaction committed.")
})
})
})
A transaction can also be created with const transaction = pool.transaction(). Requests can also be created with const request = transaction.request().
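The same flow with pool.transaction(), transaction.request() and promises might look like this sketch (run inside an async function with an existing connected pool; the table and value are placeholders):
const transaction = pool.transaction()
await transaction.begin()
try {
  await transaction.request().query('insert into mytable (mycolumn) values (12345)')
  await transaction.commit()
} catch (err) {
  // roll the work back before surfacing the error
  await transaction.rollback()
  throw err
}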
Aborted transactions
This example shows how you should correctly handle transaction errors when abortTransactionOnError (XACT_ABORT) is enabled. Added in 2.0.
const transaction = new sql.Transaction(/* [pool] */)
transaction.begin(err => {
// ... error checks
let rolledBack = false
transaction.on('rollback', aborted => {
// emitted with aborted === true
rolledBack = true
})
new sql.Request(transaction)
.query('insert into mytable (bitcolumn) values (2)', (err, result) => {
// insert should fail because of invalid value
if (err) {
if (!rolledBack) {
transaction.rollback(err => {
// ... error checks
})
}
} else {
transaction.commit(err => {
// ... error checks
})
}
})
})
- begin - Dispatched when the transaction begins.
- commit - Dispatched on successful commit.
- rollback(aborted) - Dispatched on successful rollback with an argument determining if the transaction was aborted (by user or because of an error).
Begin a transaction.
Arguments
- isolationLevel - Controls the locking and row versioning behavior of TSQL statements issued by a connection. Optional. READ_COMMITTED by default. For possible values see sql.ISOLATION_LEVEL.
- callback(err) - A callback which is called after the transaction has begun, or an error has occurred. Optional. If omitted, returns Promise.
Example
const transaction = new sql.Transaction()
transaction.begin(err => {
// ... error checks
})
Errors
- ENOTOPEN (ConnectionError) - Connection not yet open.
- EALREADYBEGUN (TransactionError) - Transaction has already begun.
Commit a transaction.
Arguments
- callback(err) - A callback which is called after transaction has committed, or an error has occurred. Optional. If omitted, returns Promise.
Example
const transaction = new sql.Transaction()
transaction.begin(err => {
// ... error checks
transaction.commit(err => {
// ... error checks
})
})
Errors
- ENOTBEGUN (TransactionError) - Transaction has not begun.
- EREQINPROG (TransactionError) - Can't commit transaction. There is a request in progress.
Rollback a transaction. If the queue isn't empty, all queued requests will be cancelled and the transaction will be marked as aborted.
Arguments
- callback(err) - A callback which is called after transaction has rolled back, or an error has occurred. Optional. If omitted, returns Promise.
Example
const transaction = new sql.Transaction()
transaction.begin(err => {
// ... error checks
transaction.rollback(err => {
// ... error checks
})
})
Errors
- ENOTBEGUN (TransactionError) - Transaction has not begun.
- EREQINPROG (TransactionError) - Can't rollback transaction. There is a request in progress.
IMPORTANT: always use the PreparedStatement class to create prepared statements - it ensures that all executions of the prepared statement happen on one connection. Once you call prepare, a single connection is acquired from the connection pool and all subsequent executions are executed exclusively on this connection. After you call unprepare, the connection is released back to the connection pool.
const ps = new sql.PreparedStatement(/* [pool] */)
If you omit the connection argument, the global connection is used instead.
Example
const ps = new sql.PreparedStatement(/* [pool] */)
ps.input('param', sql.Int)
ps.prepare('select @param as value', err => {
// ... error checks
ps.execute({param: 12345}, (err, result) => {
// ... error checks
// release the connection after queries are executed
ps.unprepare(err => {
// ... error checks
})
})
})
IMPORTANT: Remember that each prepared statement means one reserved connection from the pool. Don't forget to unprepare a prepared statement when you've finished your queries!
You can execute multiple queries against the same prepared statement but you must unprepare the statement when you have finished using it otherwise you will cause the connection pool to run out of available connections.
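A sketch of re-using one prepared statement for several executions before releasing it (promise API, run inside an async function; parameter values are placeholders):
const ps = new sql.PreparedStatement(/* [pool] */)
ps.input('param', sql.Int)
await ps.prepare('select @param as value')
try {
  const first = await ps.execute({ param: 1 })
  const second = await ps.execute({ param: 2 })
  console.log(first.recordset[0].value, second.recordset[0].value)
} finally {
  // always release the reserved connection back to the pool
  await ps.unprepare()
}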
TIP: You can also create prepared statements in transactions (new sql.PreparedStatement(transaction)), but keep in mind you can't execute other requests in the transaction until you call unprepare.
Add an input parameter to the prepared statement.
Arguments
- name - Name of the input parameter without the @ char.
- type - SQL data type of input parameter.
Example
ps.input('input_parameter', sql.Int)
ps.input('input_parameter', sql.VarChar(50))
Errors (synchronous)
- EARGS (PreparedStatementError) - Invalid number of arguments.
- EINJECT (PreparedStatementError) - SQL injection warning.
Add an output parameter to the prepared statement.
Arguments
- name - Name of the output parameter without the @ char.
- type - SQL data type of output parameter.
Example
ps.output('output_parameter', sql.Int)
ps.output('output_parameter', sql.VarChar(50))
Errors (synchronous)
- EARGS (PreparedStatementError) - Invalid number of arguments.
- EINJECT (PreparedStatementError) - SQL injection warning.
Prepare a statement.
Arguments
- statement - T-SQL statement to prepare.
- callback(err) - A callback which is called after preparation has completed, or an error has occurred. Optional. If omitted, returns Promise.
Example
const ps = new sql.PreparedStatement()
ps.prepare('select @param as value', err => {
// ... error checks
})
Errors
- ENOTOPEN (ConnectionError) - Connection not yet open.
- EALREADYPREPARED (PreparedStatementError) - Statement is already prepared.
- ENOTBEGUN (TransactionError) - Transaction has not begun.
Execute a prepared statement.
Arguments
- values - An object whose names correspond to the names of parameters that were added to the prepared statement before it was prepared.
- callback(err) - A callback which is called after execution has completed, or an error has occurred. Optional. If omitted, returns Promise.
Example
const ps = new sql.PreparedStatement()
ps.input('param', sql.Int)
ps.prepare('select @param as value', err => {
// ... error checks
ps.execute({param: 12345}, (err, result) => {
// ... error checks
console.log(result.recordset[0].value) // return 12345
console.log(result.rowsAffected) // Returns number of affected rows in case of INSERT, UPDATE or DELETE statement.
ps.unprepare(err => {
// ... error checks
})
})
})
You can also stream the executed request.
const ps = new sql.PreparedStatement()
ps.input('param', sql.Int)
ps.prepare('select @param as value', err => {
// ... error checks
ps.stream = true
const request = ps.execute({param: 12345})
request.on('recordset', columns => {
// Emitted once for each recordset in a query
})
request.on('row', row => {
// Emitted for each row in a recordset
})
request.on('error', err => {
// May be emitted multiple times
})
request.on('done', result => {
// Always emitted as the last one
console.log(result.rowsAffected) // Returns number of affected rows in case of INSERT, UPDATE or DELETE statement.
ps.unprepare(err => {
// ... error checks
})
})
})
TIP: To learn more about how number of affected rows works, see section Affected Rows.
Errors
- ENOTPREPARED (PreparedStatementError) - Statement is not prepared.
- ETIMEOUT (RequestError) - Request timeout.
- EREQUEST (RequestError) - Message from SQL Server.
- ECANCEL (RequestError) - Cancelled.
Unprepare a prepared statement.
Arguments
- callback(err) - A callback which is called after unpreparation has completed, or an error has occurred. Optional. If omitted, returns Promise.
Example
const ps = new sql.PreparedStatement()
ps.input('param', sql.Int)
ps.prepare('select @param as value', err => {
// ... error checks
ps.unprepare(err => {
// ... error checks
})
})
Errors
- ENOTPREPARED (PreparedStatementError) - Statement is not prepared.
If you want to add the MSSQL CLI tool to your path, you must install it globally with npm install -g mssql.
Setup
Create a .mssql.json configuration file (anywhere). Structure of the file is the same as the standard configuration object.
{
"user": "...",
"password": "...",
"server": "localhost",
"database": "..."
}
Example
echo "select * from mytable" | mssql /path/to/config
Results in:
[[{"username":"patriksimek","password":"tooeasy"}]]
You can also query for multiple recordsets.
echo "select * from mytable; select * from myothertable" | mssql
Results in:
[[{"username":"patriksimek","password":"tooeasy"}],[{"id":15,"name":"Product name"}]]
If you omit the config path argument, mssql will try to load it from the current working directory.
Overriding config settings
You can override some config settings via CLI options (--user, --password, --server, --database, --port).
echo "select * from mytable" | mssql /path/to/config --database anotherdatabase
Results in:
[[{"username":"onotheruser","password":"quiteeasy"}]]
node-mssql has a built-in deserializer for Geography and Geometry CLR data types.
Geography types can be constructed in several different ways. Refer carefully to the documentation to verify the coordinate ordering; the ST methods tend to order parameters as longitude (x) then latitude (y), while custom CLR methods tend to prefer to order them as latitude (y) then longitude (x).
The query:
select geography::STGeomFromText(N'POLYGON((1 1, 3 1, 3 1, 1 1))',4326)
results in:
{
srid: 4326,
version: 2,
points: [
Point { lat: 1, lng: 1, z: null, m: null },
Point { lat: 1, lng: 3, z: null, m: null },
Point { lat: 1, lng: 3, z: null, m: null },
Point { lat: 1, lng: 1, z: null, m: null }
],
figures: [ { attribute: 1, pointOffset: 0 } ],
shapes: [ { parentOffset: -1, figureOffset: 0, type: 3 } ],
segments: []
}
NOTE: You will also see x and y coordinates in parsed Geography points, but they are not recommended for use and have thus been omitted from this example. For compatibility, they remain flipped (x, the horizontal offset, is instead used for latitude, the vertical), and thus risk misleading you. Prefer instead to use the lat and lng properties.
Geometry types can also be constructed in several ways. Unlike Geographies, they are consistent in always placing x before y. node-mssql decodes the result of this query:
select geometry::STGeomFromText(N'POLYGON((1 1, 3 1, 3 7, 1 1))',4326)
into the JavaScript object:
{
srid: 4326,
version: 1,
points: [
Point { x: 1, y: 1, z: null, m: null },
Point { x: 1, y: 3, z: null, m: null },
Point { x: 7, y: 3, z: null, m: null },
Point { x: 1, y: 1, z: null, m: null }
],
figures: [ { attribute: 2, pointOffset: 0 } ],
shapes: [ { parentOffset: -1, figureOffset: 0, type: 3 } ],
segments: []
}
Supported on SQL Server 2008 and later. You can pass a data table as a parameter to a stored procedure. First, we have to create a custom type in our database.
CREATE TYPE TestType AS TABLE ( a VARCHAR(50), b INT );
Next we will need a stored procedure.
CREATE PROCEDURE MyCustomStoredProcedure (@tvp TestType readonly) AS SELECT * FROM @tvp
Now let's go back to our Node.js app.
const tvp = new sql.Table() // You can optionally specify table type name in the first argument.
// Columns must correspond with type we have created in database.
tvp.columns.add('a', sql.VarChar(50))
tvp.columns.add('b', sql.Int)
// Add rows
tvp.rows.add('hello tvp', 777) // Values are in same order as columns.
You can then send the table as a parameter to the stored procedure.
const request = new sql.Request()
request.input('tvp', tvp)
request.execute('MyCustomStoredProcedure', (err, result) => {
// ... error checks
console.dir(result.recordsets[0][0]) // {a: 'hello tvp', b: 777}
})
TIP: You can also create a Table variable from any recordset with recordset.toTable(). You can optionally specify the table type name in the first argument.
You can clear the table rows for easier batching by using table.rows.clear()
const tvp = new sql.Table() // You can optionally specify table type name in the first argument.
// Columns must correspond with type we have created in database.
tvp.columns.add('a', sql.VarChar(50))
tvp.columns.add('b', sql.Int)
// Add rows
tvp.rows.add('hello tvp', 777) // Values are in same order as columns.
tvp.rows.clear()
An object returned from a successful basic query would look like the following.
{
recordsets: [
[
{
COL1: "some content",
COL2: "some more content"
}
]
],
recordset: [
{
COL1: "some content",
COL2: "some more content"
}
],
output: {},
rowsAffected: [1]
}
If you're performing INSERT, UPDATE or DELETE in a query, you can read the number of affected rows. The rowsAffected variable is an array of numbers; each number represents the number of rows affected by a single statement.
Example using Promises
const request = new sql.Request()
request.query('update myAwesomeTable set awesomness = 100').then(result => {
console.log(result.rowsAffected)
})
Example using callbacks
const request = new sql.Request()
request.query('update myAwesomeTable set awesomness = 100', (err, result) => {
console.log(result.rowsAffected)
})
Example using streaming
In addition to the rowsAffected attribute on the done event, each statement will emit the number of affected rows as it is completed.
const request = new sql.Request()
request.stream = true
request.query('update myAwesomeTable set awesomness = 100')
request.on('rowsaffected', rowCount => {
console.log(rowCount)
})
request.on('done', result => {
console.log(result.rowsAffected)
})
SQL Server 2016 introduced built-in JSON serialization. By default, JSON is returned as plain text in a special column named JSON_F52E2B61-18A1-11d1-B105-00805F49916B.
Example
SELECT
1 AS 'a.b.c',
2 AS 'a.b.d',
3 AS 'a.x',
4 AS 'a.y'
FOR JSON PATH
Results in:
recordset = [ { 'JSON_F52E2B61-18A1-11d1-B105-00805F49916B': '{"a":{"b":{"c":1,"d":2},"x":3,"y":4}}' } ]
You can enable the built-in JSON parser with config.parseJSON = true. Once you enable this, the recordset will contain rows of parsed JS objects. Given the same example, the result will look like this:
recordset = [ { a: { b: { c: 1, d: 2 }, x: 3, y: 4 } } ]
IMPORTANT: In order for this to work, there must be exactly one column named JSON_F52E2B61-18A1-11d1-B105-00805F49916B in the recordset.
More information about JSON support can be found in official documentation.
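A minimal sketch of enabling the parser described above (run inside an async function; the base config object is a placeholder):
const pool = await sql.connect({ ...config, parseJSON: true })
const result = await pool.request().query("select 1 as 'a.b.c', 2 as 'a.b.d' for json path")
console.dir(result.recordset) // rows are parsed JS objects instead of the raw JSON string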
If your queries contain output columns with identical names, the default behaviour of mssql will only return column metadata for the last column with that name. You will also not always be able to re-assemble the order of output columns requested.
Default behaviour:
const request = new sql.Request()
request
.query("select 'asdf' as name, 'qwerty' as other_name, 'jkl' as name")
.then(result => {
console.log(result)
});
Results in:
{
recordsets: [
[ { name: [ 'asdf', 'jkl' ], other_name: 'qwerty' } ]
],
recordset: [ { name: [ 'asdf', 'jkl' ], other_name: 'qwerty' } ],
output: {},
rowsAffected: [ 1 ]
}
You can use the arrayRowMode configuration parameter to return the row values as arrays and add a separate columns array of column metadata. arrayRowMode can be set globally during the initial connection, or per-request.
const request = new sql.Request()
request.arrayRowMode = true
request
.query("select 'asdf' as name, 'qwerty' as other_name, 'jkl' as name")
.then(result => {
console.log(result)
});
Results in:
{
recordsets: [ [ [ 'asdf', 'qwerty', 'jkl' ] ] ],
recordset: [ [ 'asdf', 'qwerty', 'jkl' ] ],
output: {},
rowsAffected: [ 1 ],
columns: [
[
{
index: 0,
name: 'name',
length: 4,
type: [sql.VarChar],
scale: undefined,
precision: undefined,
nullable: false,
caseSensitive: false,
identity: false,
readOnly: true
},
{
index: 1,
name: 'other_name',
length: 6,
type: [sql.VarChar],
scale: undefined,
precision: undefined,
nullable: false,
caseSensitive: false,
identity: false,
readOnly: true
},
{
index: 2,
name: 'name',
length: 3,
type: [sql.VarChar],
scale: undefined,
precision: undefined,
nullable: false,
caseSensitive: false,
identity: false,
readOnly: true
}
]
]
}
Streaming Duplicate Column Names
When using arrayRowMode with stream enabled, the output from the recordset event (as described in Streaming) is returned as an array of column metadata, instead of as a keyed object. The order of the column metadata provided by the recordset event will match the order of row values when arrayRowMode is enabled.
Default behaviour (without arrayRowMode):
const request = new sql.Request()
request.stream = true
request.query("select 'asdf' as name, 'qwerty' as other_name, 'jkl' as name")
request.on('recordset', recordset => console.log(recordset))
Results in:
{
name: {
index: 2,
name: 'name',
length: 3,
type: [sql.VarChar],
scale: undefined,
precision: undefined,
nullable: false,
caseSensitive: false,
identity: false,
readOnly: true
},
other_name: {
index: 1,
name: 'other_name',
length: 6,
type: [sql.VarChar],
scale: undefined,
precision: undefined,
nullable: false,
caseSensitive: false,
identity: false,
readOnly: true
}
}
With arrayRowMode:
const request = new sql.Request()
request.stream = true
request.arrayRowMode = true
request.query("select 'asdf' as name, 'qwerty' as other_name, 'jkl' as name")
request.on('recordset', recordset => console.log(recordset))
Results in:
[
{
index: 0,
name: 'name',
length: 4,
type: [sql.VarChar],
scale: undefined,
precision: undefined,
nullable: false,
caseSensitive: false,
identity: false,
readOnly: true
},
{
index: 1,
name: 'other_name',
length: 6,
type: [sql.VarChar],
scale: undefined,
precision: undefined,
nullable: false,
caseSensitive: false,
identity: false,
readOnly: true
},
{
index: 2,
name: 'name',
length: 3,
type: [sql.VarChar],
scale: undefined,
precision: undefined,
nullable: false,
caseSensitive: false,
identity: false,
readOnly: true
}
]
There are 4 types of errors you can handle:
- ConnectionError - Errors related to connections and connection pool.
- TransactionError - Errors related to creating, committing and rolling back transactions.
- RequestError - Errors related to queries and stored procedures execution.
- PreparedStatementError - Errors related to prepared statements.
Those errors are initialized in the node-mssql module and their original stack may be cropped. You can always access the original error with err.originalError.
SQL Server may generate more than one error for one request, so you can access preceding errors with err.precedingErrors.
Each known error has name, code and message properties.
Name | Code | Message
---|---|---
ConnectionError | ELOGIN | Login failed.
ConnectionError | ETIMEOUT | Connection timeout.
ConnectionError | EDRIVER | Unknown driver.
ConnectionError | EALREADYCONNECTED | Database is already connected!
ConnectionError | EALREADYCONNECTING | Already connecting to database!
ConnectionError | ENOTOPEN | Connection not yet open.
ConnectionError | EINSTLOOKUP | Instance lookup failed.
ConnectionError | ESOCKET | Socket error.
ConnectionError | ECONNCLOSED | Connection is closed.
TransactionError | ENOTBEGUN | Transaction has not begun.
TransactionError | EALREADYBEGUN | Transaction has already begun.
TransactionError | EREQINPROG | Can't commit/rollback transaction. There is a request in progress.
TransactionError | EABORT | Transaction has been aborted.
RequestError | EREQUEST | Message from SQL Server. Error object contains additional details.
RequestError | ECANCEL | Cancelled.
RequestError | ETIMEOUT | Request timeout.
RequestError | EARGS | Invalid number of arguments.
RequestError | EINJECT | SQL injection warning.
RequestError | ENOCONN | No connection is specified for that request.
PreparedStatementError | EARGS | Invalid number of arguments.
PreparedStatementError | EINJECT | SQL injection warning.
PreparedStatementError | EALREADYPREPARED | Statement is already prepared.
PreparedStatementError | ENOTPREPARED | Statement is not prepared.
SQL errors (RequestError with err.code equal to EREQUEST) contain additional details; a short example of inspecting them follows the list below.
- err.number - The error number.
- err.state - The error state, used as a modifier to the number.
- err.class - The class (severity) of the error. A class of less than 10 indicates an informational message. Detailed explanation can be found here.
- err.lineNumber - The line number in the SQL batch or stored procedure that caused the error. Line numbers begin at 1; therefore, if the line number is not applicable to the message, the value of LineNumber will be 0.
- err.serverName - The server name.
- err.procName - The stored procedure name.
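A short sketch of inspecting these properties when catching a query error (run inside an async function with an existing connected pool; the table name is a placeholder):
try {
  await pool.request().query('select * from missing_table')
} catch (err) {
  if (err instanceof sql.RequestError && err.code === 'EREQUEST') {
    // details forwarded from SQL Server
    console.error(err.number, err.lineNumber, err.message)
    console.error(err.originalError) // error as raised by the underlying driver
  }
}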
To receive informational messages generated by PRINT or RAISERROR commands use:
const request = new sql.Request()
request.on('info', info => {
console.dir(info)
})
request.query('print \'Hello world.\';', (err, result) => {
// ...
})
Structure of informational message:
- info.message - Message.
- info.number - The message number.
- info.state - The message state, used as a modifier to the number.
- info.class - The class (severity) of the message. Equal or lower than 10. Detailed explanation can be found here.
- info.lineNumber - The line number in the SQL batch or stored procedure that generated the message. Line numbers begin at 1; therefore, if the line number is not applicable to the message, the value of LineNumber will be 0.
- info.serverName - The server name.
- info.procName - The stored procedure name.
Recordset metadata are accessible through the recordset.columns property.
const request = new sql.Request()
request.query('select convert(decimal(18, 4), 1) as first, \'asdf\' as second', (err, result) => {
console.dir(result.recordset.columns)
console.log(result.recordset.columns.first.type === sql.Decimal) // true
console.log(result.recordset.columns.second.type === sql.VarChar) // true
})
Columns structure for example above:
{
first: {
index: 0,
name: 'first',
length: 17,
type: [sql.Decimal],
scale: 4,
precision: 18,
nullable: true,
caseSensitive: false,
identity: false,
readOnly: true
},
second: {
index: 1,
name: 'second',
length: 4,
type: [sql.VarChar],
nullable: false,
caseSensitive: false,
identity: false,
readOnly: true
}
}
You can define data types with length/precision/scale:
request.input("name", sql.VarChar, "abc") // varchar(3)
request.input("name", sql.VarChar(50), "abc") // varchar(50)
request.input("name", sql.VarChar(sql.MAX), "abc") // varchar(MAX)
request.output("name", sql.VarChar) // varchar(8000)
request.output("name", sql.VarChar, "abc") // varchar(3)
request.input("name", sql.Decimal, 155.33) // decimal(18, 0)
request.input("name", sql.Decimal(10), 155.33) // decimal(10, 0)
request.input("name", sql.Decimal(10, 2), 155.33) // decimal(10, 2)
request.input("name", sql.DateTime2, new Date()) // datetime2(7)
request.input("name", sql.DateTime2(5), new Date()) // datetime2(5)
List of supported data types:
sql.Bit
sql.BigInt
sql.Decimal ([precision], [scale])
sql.Float
sql.Int
sql.Money
sql.Numeric ([precision], [scale])
sql.SmallInt
sql.SmallMoney
sql.Real
sql.TinyInt
sql.Char ([length])
sql.NChar ([length])
sql.Text
sql.NText
sql.VarChar ([length])
sql.NVarChar ([length])
sql.Xml
sql.Time ([scale])
sql.Date
sql.DateTime
sql.DateTime2 ([scale])
sql.DateTimeOffset ([scale])
sql.SmallDateTime
sql.UniqueIdentifier
sql.Variant
sql.Binary
sql.VarBinary ([length])
sql.Image
sql.UDT
sql.Geography
sql.Geometry
To set up MAX length for VarChar, NVarChar and VarBinary use sql.MAX length. Types sql.XML and sql.Variant are not supported as input parameters.
This module has built-in SQL injection protection. Always use parameters or tagged template literals to pass sanitized values to your queries.
const request = new sql.Request()
request.input('myval', sql.VarChar, '-- commented')
request.query('select @myval as myval', (err, result) => {
console.dir(result)
})
- If you're facing problems connecting to SQL Server 2000, try setting the default TDS version to 7.1 with config.options.tdsVersion = '7_1' (issue)
- If you're executing a statement longer than 4000 chars on SQL Server 2000, always use batch instead of query (issue)
- Upgraded to tedious version 15
- Dropped support for Node version <= 12
- Upgraded to tedious version 14
- Removed internal library for connection string parsing. Connection strings can be resolved using the static method parseConnectionString on ConnectionPool
- Upgraded tedious version to v11
- Upgraded msnodesqlv8 version support to v2
- Upgraded tarn.js version to v3
- Requests in stream mode that pipe into other streams no longer pass errors up the stream chain
- Request.pipe now pipes a true node stream for better support of backpressure
- tedious config option trustServerCertificate defaults to false if not supplied
- Dropped support for Node < 10
- Upgraded tarn.js so _poolDestroy can take advantage of being a promise
- ConnectionPool.close() now returns a promise / callbacks will be executed once closing of the pool is complete; you must make sure that connections are properly released back to the pool otherwise the pool may fail to close.
- It is safe to pass read-only config objects to the library; config objects are now cloned
- options.encrypt is now true by default
- TYPES.Null has now been removed
- Upgraded tedious driver to v6 and upgraded support for msnodesqlv8
- You can now close the global connection by reference and this will clean up the global connection, eg: const conn = sql.connect(); conn.close() will be the same as sql.close()
- Bulk table inserts will attempt to coerce dates from non-Date objects if the column type is expecting a date
- Repeat calls to the global connect function (sql.connect()) will return the current global connection if it exists (rather than throwing an error)
- Attempting to add a parameter to queries / stored procedures will now throw an error; use replaceInput and replaceOutput instead
- Invalid isolation levels passed to Transactions will now throw an error
now reports if it is healthy or not (ConnectionPool.healthy
) which can be used to determine if the pool is able to create new connections or not- Pause/Resume support for streamed results has been added to the msnodesqlv8 driver
- Moved pool library from
node-pool
totarn.js
ConnectionPool.pool.size
deprecated, useConnectionPool.size
insteadConnectionPool.pool.available
deprecated, useConnectionPool.available
insteadConnectionPool.pool.pending
deprecated, useConnectionPool.pending
insteadConnectionPool.pool.borrowed
deprecated, useConnectionPool.borrowed
instead
- Library & tests are rewritten to ES6.
Connection
was renamed toConnectionPool
.- Drivers are no longer loaded dynamically so the library is now compatible with Webpack. To use
msnodesqlv8
driver, useconst sql = require('mssql/msnodesqlv8')
syntax. - Every callback/resolve now returns
result
object only. This object containsrecordsets
(array of recordsets),recordset
(first recordset from array of recordsets),rowsAffected
(array of numbers representig number of affected rows by each insert/update/delete statement) andoutput
(key/value collection of output parameters' values). - Affected rows are now returned as an array. A separate number for each SQL statement.
- Directive
multiple: true
was removed. Transaction
andPreparedStatement
internal queues was removed.- ConnectionPool no longer emits
connect
andclose
events. - Removed verbose and debug mode.
- Removed support for
tds
andmsnodesql
drivers. - Removed support for Node versions lower than 4.