SQLx – Rust SQL Toolkit





Built with ❤️ by The LaunchBadge team


SQLx is an async, pure Rust SQL crate featuring compile-time checked queries without a DSL.

  • Truly Asynchronous. Built from the ground-up using async/await for maximum concurrency.

  • Compile-time checked queries (if you want). See SQLx is not an ORM.

  • Database Agnostic. Support for PostgreSQL, MySQL, MariaDB, SQLite†.

    • MSSQL was supported prior to version 0.7, but has been removed pending a full rewrite of the driver as part of our SQLx Pro initiative.
  • Pure Rust. The Postgres and MySQL/MariaDB drivers are written in pure Rust using zero unsafe†† code.

  • Runtime Agnostic. Works on different runtimes (async-std / tokio / actix) and TLS backends (native-tls, rustls).

† The SQLite driver uses the libsqlite3 C library as SQLite is an embedded database (the only way we could be pure Rust for SQLite is by porting all of SQLite to Rust).

†† SQLx uses #![forbid(unsafe_code)] unless the sqlite feature is enabled. The SQLite driver directly invokes the SQLite3 API via libsqlite3-sys, which requires unsafe.

  • Cross-platform. Being native Rust, SQLx will compile anywhere Rust is supported.

  • Built-in connection pooling with sqlx::Pool.

  • Row streaming. Data is read asynchronously from the database and decoded on demand.

  • Automatic statement preparation and caching. When using the high-level query API (sqlx::query), statements are prepared and cached per connection.

  • Simple (unprepared) query execution including fetching results into the same Row types used by the high-level API. Supports batch execution and returns results from all statements.

  • Transport Layer Security (TLS) where supported (MySQL, MariaDB and PostgreSQL).

  • Asynchronous notifications using LISTEN and NOTIFY for PostgreSQL.

  • Nested transactions with support for save points.

  • Any database driver for changing the database driver at runtime. An AnyPool connects to the driver indicated by the URL scheme (see the sketch just below).
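
As a minimal sketch of the Any driver (assuming the any feature plus at least one concrete driver feature are enabled; the URL is a placeholder):

use sqlx::any::{install_default_drivers, AnyPool};

// Register the drivers compiled into this binary so `Any` can choose
// among them at runtime.
install_default_drivers();

// The URL scheme ("postgres", "mysql", "sqlite") selects the driver at runtime.
let pool = AnyPool::connect("postgres://postgres:password@localhost/test").await?;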

SQLx is compatible with the async-std, tokio, and actix runtimes, and the native-tls and rustls TLS backends. When adding the dependency, you must choose a runtime feature and, optionally, a TLS backend.

# Cargo.toml
[dependencies]
# PICK ONE OF THE FOLLOWING:

# tokio (no TLS)
sqlx = { version = "0.8", features = [ "runtime-tokio" ] }
# tokio + native-tls
sqlx = { version = "0.8", features = [ "runtime-tokio", "tls-native-tls" ] }
# tokio + rustls with ring and WebPKI CA certificates
sqlx = { version = "0.8", features = [ "runtime-tokio", "tls-rustls-ring-webpki" ] }
# tokio + rustls with ring and platform's native CA certificates
sqlx = { version = "0.8", features = [ "runtime-tokio", "tls-rustls-ring-native-roots" ] }
# tokio + rustls with aws-lc-rs
sqlx = { version = "0.8", features = [ "runtime-tokio", "tls-rustls-aws-lc-rs" ] }

# async-std (no TLS)
sqlx = { version = "0.8", features = [ "runtime-async-std" ] }
# async-std + native-tls
sqlx = { version = "0.8", features = [ "runtime-async-std", "tls-native-tls" ] }
# async-std + rustls with ring and WebPKI CA certificates
sqlx = { version = "0.8", features = [ "runtime-async-std", "tls-rustls-ring-webpki" ] }
# async-std + rustls with ring and platform's native CA certificates
sqlx = { version = "0.8", features = [ "runtime-async-std", "tls-rustls-ring-native-roots" ] }
# async-std + rustls with aws-lc-rs
sqlx = { version = "0.8", features = [ "runtime-async-std", "tls-rustls-aws-lc-rs" ] }

For backward-compatibility reasons, the runtime and TLS features can either be chosen together as a single feature, or separately.

For forward compatibility, you should use the separate runtime and TLS features as the combination features may be removed in the future.

  • runtime-async-std: Use the async-std runtime without enabling a TLS backend.

  • runtime-tokio: Use the tokio runtime without enabling a TLS backend.

    • Actix-web is fully compatible with Tokio and so a separate runtime feature is no longer needed.
  • tls-native-tls: Use the native-tls TLS backend (OpenSSL on *nix, SChannel on Windows, Secure Transport on macOS).

  • tls-rustls: Use the rustls TLS backend (cross-platform backend, only supports TLS 1.2 and 1.3).

  • postgres: Add support for the Postgres database server.

  • mysql: Add support for the MySQL/MariaDB database server.

  • mssql: Add support for the MSSQL database server.

  • sqlite: Add support for the self-contained SQLite database engine with SQLite bundled and statically-linked.

  • sqlite-unbundled: The same as above (sqlite), but link SQLite from the system instead of the bundled version.

    • Allows updating SQLite independently of SQLx or using forked versions.
    • You must have SQLite installed on the system or provide a path to the library at build time. See the rusqlite README for details.
    • May result in link errors if the SQLite version is too old. Version 3.20.0 or newer is recommended.
    • Can increase build time due to the use of bindgen.
  • sqlite-preupdate-hook: enables SQLite's preupdate hook API.

    • Exposed as a separate feature because it's generally not enabled by default.
    • Using this feature with sqlite-unbundled may cause linker failures if the system SQLite version does not support it.
  • any: Add support for the Any database driver, which can proxy to a database driver at runtime.

  • derive: Add support for the derive macros: FromRow, Type, Encode, and Decode.

  • macros: Add support for the query*! macros, which allow compile-time checked queries.

  • migrate: Add support for the migration management and migrate! macro, which allow compile-time embedded migrations.

  • uuid: Add support for UUID.

  • chrono: Add support for date and time types from chrono.

  • time: Add support for date and time types from the time crate (an alternative to chrono; the query! macros prefer time if both are enabled).

  • bstr: Add support for bstr::BString.

  • bigdecimal: Add support for NUMERIC using the bigdecimal crate.

  • rust_decimal: Add support for NUMERIC using the rust_decimal crate.

  • ipnet: Add support for INET and CIDR (in postgres) using the ipnet crate.

  • ipnetwork: Add support for INET and CIDR (in postgres) using the ipnetwork crate.

  • json: Add support for JSON and JSONB (in postgres) using the serde_json crate.

  • Offline mode is now always enabled. See sqlx-cli/README.md.

SQLx supports compile-time checked queries. It does not, however, do this by providing a Rust API or DSL (domain-specific language) for building queries. Instead, it provides macros that take regular SQL as input and ensure that it is valid for your database. The way this works is that SQLx connects to your development DB at compile time to have the database itself verify (and return some info on) your SQL queries. This has some potentially surprising implications:

  • Since SQLx never has to parse the SQL string itself, any syntax that the development DB accepts can be used (including things added by database extensions)
  • Due to the different amount of information databases let you retrieve about queries, the extent of SQL verification you get from the query macros depends on the database

If you are looking for an (asynchronous) ORM, you can check out our new Ecosystem wiki page!

See the examples/ folder for more in-depth usage.

use sqlx::postgres::PgPoolOptions;
// use sqlx::mysql::MySqlPoolOptions;
// etc.

#[async_std::main] // Requires the `attributes` feature of `async-std`
// or #[tokio::main]
// or #[actix_web::main]
async fn main() -> Result<(), sqlx::Error> {
    // Create a connection pool
    //  for MySQL/MariaDB, use MySqlPoolOptions::new()
    //  for SQLite, use SqlitePoolOptions::new()
    //  etc.
    let pool = PgPoolOptions::new()
        .max_connections(5)
        .connect("postgres://postgres:password@localhost/test")
        .await?;

    // Make a simple query to return the given parameter (use a question mark `?` instead of `$1` for MySQL/MariaDB)
    let row: (i64,) = sqlx::query_as("SELECT $1")
        .bind(150_i64)
        .fetch_one(&pool)
        .await?;

    assert_eq!(row.0, 150);

    Ok(())
}

A single connection can be established using any of the database connection types and calling connect().

use sqlx::Connection;
use sqlx::sqlite::SqliteConnection;

let conn = SqliteConnection::connect("sqlite::memory:").await?;

Generally, you will want to instead create a connection pool (sqlx::Pool) for the application to regulate how many server-side connections it's using.

let pool = MySqlPool::connect("mysql://user:pass@host/database").await?;

In SQL, queries can be separated into prepared (parameterized) or unprepared (simple). Prepared queries have their query plan cached, use a binary mode of communication (lower bandwidth and faster decoding), and utilize parameters to avoid SQL injection. Unprepared queries are simple and intended only for use where a prepared statement will not work, such as various database commands (e.g., PRAGMA or SET or BEGIN).

SQLx supports all operations with both types of queries. In SQLx, a &str is treated as an unprepared query, and a Query or QueryAs struct is treated as a prepared query.

// low-level, Executor trait
conn.execute("BEGIN").await?; // unprepared, simple query
conn.execute(sqlx::query("DELETE FROM table")).await?; // prepared, cached query

We should prefer to use the high-level query interface whenever possible. To make this easier, there are finalizers on the type to avoid the need to wrap with an executor.

sqlx::query("DELETE FROM table").execute(&mut conn).await?;
sqlx::query("DELETE FROM table").execute(&pool).await?;

The execute query finalizer returns the number of affected rows, if any, and drops all received results. In addition, there are fetch, fetch_one, fetch_optional, and fetch_all to receive results.
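
As a minimal sketch (reusing the pool from the earlier example), the database-specific result type returned by execute, e.g. PgQueryResult, exposes the affected-row count:

let result = sqlx::query("DELETE FROM users WHERE email IS NULL")
    .execute(&pool)
    .await?;

println!("deleted {} rows", result.rows_affected());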

The Query type returned from sqlx::query will return Row<'conn> from the database. Column values can be accessed by ordinal or by name with row.get(). As the Row retains an immutable borrow on the connection, only one Row may exist at a time.

The fetch query finalizer returns a stream-like type that iterates through the rows in the result sets.

// provides `try_next`
use futures_util::TryStreamExt;
// provides `try_get`
use sqlx::Row;

let mut rows = sqlx::query("SELECT * FROM users WHERE email = ?")
    .bind(email)
    .fetch(&mut conn);

while let Some(row) = rows.try_next().await? {
    // map the row into a user-defined domain type
    let email: &str = row.try_get("email")?;
}

To assist with mapping the row into a domain type, one of two idioms may be used:

let mut stream = sqlx::query("SELECT * FROM users")
    .map(|row: PgRow| {
        // map the row into a user-defined domain type
    })
    .fetch(&mut conn);

#[derive(sqlx::FromRow)]
struct User { name: String, id: i64 }

let mut stream = sqlx::query_as::<_, User>("SELECT * FROM users WHERE email = ? OR name = ?")
    .bind(user_email)
    .bind(user_name)
    .fetch(&mut conn);

Instead of a stream of results, we can use fetch_one or fetch_optional to request one required or optional result from the database.
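
As a minimal sketch reusing the User type above: fetch_one returns an error if no row matches, while fetch_optional yields an Option instead:

let user: Option<User> = sqlx::query_as::<_, User>("SELECT * FROM users WHERE email = ?")
    .bind(user_email)
    .fetch_optional(&mut conn)
    .await?;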

We can use the macro, sqlx::query! to achieve compile-time syntactic and semantic verification of the SQL, with an output to an anonymous record type where each SQL column is a Rust field (using raw identifiers where needed).

let countries = sqlx::query!(
        "
SELECT country, COUNT(*) as count
FROM users
WHERE organization = ?
GROUP BY country
        ",
        organization
    )
    .fetch_all(&pool) // -> Vec<{ country: String, count: i64 }>
    .await?;

// countries[0].country
// countries[0].count

Differences from query():

  • The input (or bind) parameters must be given all at once (and they are compile-time validated to be the right number and the right type).

  • The output type is an anonymous record. In the above example the type would be similar to:

    { country: String, count: i64 }
  • The DATABASE_URL environment variable must be set at build time to a database which it can prepare queries against; the database does not have to contain any data but must be the same kind (MySQL, Postgres, etc.) and have the same schema as the database you will be connecting to at runtime.

    For convenience, you can use a .env file¹ to set DATABASE_URL so that you don't have to pass it every time:

    DATABASE_URL=mysql://localhost/my_database
    

The biggest downside to query!() is that the output type cannot be named (due to Rust not officially supporting anonymous records). To address that, there is a query_as!() macro that is mostly identical except that you can name the output type.

// no traits are needed
struct Country { country: String, count: i64 }

let countries = sqlx::query_as!(Country,
        "
SELECT country, COUNT(*) as count
FROM users
WHERE organization = ?
GROUP BY country
        ",
        organization
    )
    .fetch_all(&pool) // -> Vec<Country>
    .await?;

// countries[0].country
// countries[0].count

To avoid needing a development database around to compile the project even when no modifications (to the database-accessing parts of the code) are made, you can enable "offline mode" to cache the results of the SQL query analysis using the sqlx command-line tool. See sqlx-cli/README.md.
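
A minimal sketch of that workflow (the authoritative steps and flags are in sqlx-cli/README.md):

# install the CLI once, then cache query metadata into the .sqlx directory
cargo install sqlx-cli
cargo sqlx prepare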

Compile-time verified queries do quite a bit of work at compile time. Incremental actions like cargo check and cargo build can be significantly faster when using an optimized build of the macros, which you get by putting the following in your Cargo.toml. (More information in the Profiles section of The Cargo Book.)

[profile.dev.package.sqlx-macros]
opt-level = 3

¹ The dotenv crate itself appears abandoned as of December 2021 so we now use the dotenvy crate instead. The file format is the same.

This crate uses #![forbid(unsafe_code)] to ensure everything is implemented in 100% Safe Rust.

If the sqlite feature is enabled, this is downgraded to #![deny(unsafe_code)] with #![allow(unsafe_code)] on the sqlx::sqlite module. There are several places where we interact with the C SQLite API. We try to document each call for the invariants we're assuming. We absolutely welcome auditing of, and feedback on, our unsafe code usage.

Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Unless you explicitly state otherwise, any Contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.



Comments

  • By banashark 2025-07-29 5:15

    One thing I don't usually see addressed with the pure-sql approaches is how to handle dynamic query building. The most common example being large configurable forms that display a data grid. Kysely[1] does a good job of starting from this angle, but allowing something like specifying the concrete deserialization type similar to the libraries here.

    I'm a big fan of sql in general (even if the syntax can be verbose, the declarative nature is usually pleasant and satisfying to use), but whenever dynamic nature creeps in it gets messy. Conditional joins/selects/where clauses, etc

    How do folks that go all in on sql-first approaches handle this? Home-grown dynamic builders is what I've seen various places I've work implement in the past, but it's usually not built out as a full API and kind of just cobbled together. Eventually they just swap to an ORM to solve the issue.

    * [1] https://kysely.dev

    • By rzmmm 2025-07-29 10:40

      One approach is to create views for the required data and then just select the columns which are needed. The joins will be pruned by the query planner if they are not needed, so there is no need for conditional joins.
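
      A minimal Rust/SQL sketch of that pattern, given some `pool` (hypothetical schema; note that Postgres, for example, only eliminates joins it can prove are no-ops, such as LEFT JOINs against unique keys):

        // Hypothetical view created once in a migration:
        //   CREATE VIEW user_details AS
        //   SELECT u.id, u.email, o.name AS org_name
        //   FROM users u
        //   LEFT JOIN organizations o ON o.id = u.org_id;
        //
        // Selecting only `users` columns lets the planner prove the LEFT JOIN
        // is a no-op (o.id is unique) and drop it from the plan.
        let rows: Vec<(i64, String)> = sqlx::query_as("SELECT id, email FROM user_details")
            .fetch_all(&pool)
            .await?;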

      • By hu3 2025-07-29 13:15

        > The joins will be pruned by the query planner if they are not needed, so there is no need for conditional joins.

        I always wondered about this. How reliable is that in your experience? Thank you in advance.

        • By derekperkins 2025-08-10 21:03

          Depends on the underlying database, and you'd have to test your query to have any level of certainty.

          That being said, query planning is generally where Oracle/MSSQl outshine MySQL/Postgres, especially for pruning unnecessary joins. BigQuery is great at it IME.

      • By banashark 2025-07-30 5:37

        Yeah this definitely makes sense, and is good database API design as well.

    • By dathinab 2025-07-29 10:48

      > dynamic query building

      it's not (really) addressed by sqlx (intentionally), in the same way most ORM features are not addressed

      but to some degree this is what is so nice about sqlx: it mainly(1) provides the basic SQL functionality and then lets you decide what to use on top of it (or whether to use anything on top).

      If you need more, e.g. the sea-* ecosystem (sea-query, sea-orm) might fulfill your needs.

      (1): It can compile-time check "static" queries (i.e. only placeholders), which is a bit more than "basic" features, but some projects have 99+% static queries, in which case this feature can move SQLx from "a building block for other sql libs" to "all you need", keeping dependencies thinner.

    • By tracker1 2025-07-29 16:26

      Not rust, but I've been a pretty big fan of Dapper and Dapper.SqlBuilder in the C# space... have used it with MS-SQL and PostgreSQL very effectively, even with really complex query construction against input options.

      https://github.com/DapperLib/Dapper/blob/main/Dapper.SqlBuil...

    • By nicoburns 2025-07-29 13:48

      I find that interpolating strings works pretty well for this use case (a previous job of mine actually switched TO string interpolation from ORMs).

      But this is conditional on either your database or your minimal abstraction layer having support for binding arrays of data with a single placeholder (which is generally true for Postgres).
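
      For example, a minimal SQLx/Postgres sketch of that capability, binding a whole Vec to a single placeholder via = ANY (names are hypothetical):

        let ids: Vec<i64> = vec![1, 2, 3];

        // One placeholder, arbitrarily many values: Postgres receives an int8[].
        let rows = sqlx::query("SELECT * FROM users WHERE id = ANY($1)")
            .bind(&ids)
            .fetch_all(&pool)
            .await?;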

    • By zackangelo 2025-07-29 6:01

      Is something like SeaQuery[0] what you're talking about?

      [0] https://github.com/SeaQL/sea-query/

      • By inbx0 2025-07-29 8:22

        SeaQuery looks like a similar dynamic query builder for Rust as Kysely is for JS/TS, so yeah, that'd probably solve the dynamic query problem. But I think parent wasn't so much asking for another library but for patterns.

        How do people who choose to use a no-dsl SQL library, like SQLx, handle dynamic queries? Especially with compile-time checking. The readme has this example:

          ...
          WHERE organization = ?
        
        But what if you have multiple possible where-conditions, let's say "WHERE organization = ?", "WHERE starts_with(first_name, ?)", "WHERE birth_date > ?", and you need some combination of those (possibly none of them) based on query parameters to the API. I think that's a pretty common use case.

        • By fstephany 2025-07-29 8:59

          I agree with you that dynamic query building can be tedious with a pure SQL approach. The use case you are describing can be solved with something along the lines of:

            WHERE organization = $1
               AND ($2 IS NULL OR starts_with(first_name, $2))
               AND ($3 IS NULL OR birth_date > $3)
          
          With SQLx you would make all the params Options and fill them in according to the parameters that were sent to your API.

          Does that make sense?
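
          A minimal Rust-side sketch of that pattern, with hypothetical Option-typed filters (Postgres usually needs an explicit cast such as $2::text to infer the type of a possibly-NULL parameter):

            let rows = sqlx::query(
                "SELECT * FROM users
                 WHERE organization = $1
                   AND ($2::text IS NULL OR starts_with(first_name, $2))
                   AND ($3::date IS NULL OR birth_date > $3)",
            )
            .bind(organization)     // String
            .bind(name_prefix)      // Option<String>
            .bind(min_birth_date)   // Option<chrono::NaiveDate>, needs the `chrono` feature
            .fetch_all(&pool)
            .await?;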

          • By yahoozoo 2025-07-29 10:43

            I think the dynamic part is where the clauses themselves are optional. For example, say you have a data table whose rows a user can filter using multiple columns. They can filter by just `first_name` or by `birth_date` or both at the same time using AND / OR, and so on. So you dynamically need to add more or fewer "WHERE" clauses, and then it gets tricky when you have to include placeholders like `$1`, since you have to keep track of how many parameters your dynamic query is actually including.
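
            For what it's worth, SQLx does ship a helper for exactly that bookkeeping; a minimal sketch with sqlx::QueryBuilder, using hypothetical Option-typed filters:

              use sqlx::{Postgres, QueryBuilder};

              // QueryBuilder numbers the placeholders as filters are appended.
              let mut qb: QueryBuilder<Postgres> = QueryBuilder::new("SELECT * FROM users WHERE 1=1");

              if let Some(org) = organization {
                  qb.push(" AND organization = ").push_bind(org);
              }
              if let Some(prefix) = first_name_prefix {
                  qb.push(" AND starts_with(first_name, ").push_bind(prefix).push(")");
              }

              let rows = qb.build().fetch_all(&pool).await?;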

          • By williamdclt 2025-07-29 13:24

            That's relying a lot on the DB engine, which will struggle as the condition gets more complex. I've had MySQL make stupid choices of query plans for very similar queries; I had to break the OR into UNIONs.

        • By williamdclt 2025-07-29 13:22

          I generally avoid DSLs as they don't bring much... except for this exact use-case. Dynamic queries is pretty much what a query builder is for: you can avoid a dependency by rolling your own, but well it's not trivial and people out there have built some decent ones.

          So, if I have this use-case I'd reach for a query builder library. To answer the question of "how to do dynamic queries without a query builder library", I don't think there's any other answer than "make your own query builder"

        • By dathinab 2025-07-29 11:03

          > Especially with compile-time checking.

          no compile-time checking; you rely on integration tests instead

          in general sqlx only provides the most minimal string-based query building, so you can easily run into annoying edge cases you forgot to test. If your project needs that, libraries like sea-query or sea-orm are the way to go (though it's still viable without, just a bit annoying).

          in general SQLx "compile time query checking" still needs a concrete query and a running db to check if the query is valid. It is not doing a re-implementation of every dialect's syntax, semantics, subtle edge cases, etc. That just isn't practical, as sql is too inconsistent in the edge cases, non-standard extensions and even the theoretically standardized parts (due to it costing money to read the standard, and its updates being highly biased toward MS/Oracle databases).

          This means compile time query checking doesn't scale that well to dynamic queries, you basically would need to build and check every query you might dynamically create (or the subset you want to test) at which point you are in integration test territory (and you can do it with integration tests just fine).

          besides the sqlx-specific stuff, AFAIK some of the "tweaked sql syntax for better composability" experiments are heading for SQL standardization, which might make this way less of a pain in the long run, but I don't remember the details at all, so uh, maybe not???

          ---

          EDIT: Yes there is an sqlx "offline" mode which doesn't need a live db, it works by basically caching results from the online mode. It is very useful, but still no "independent/standalone" query analysis.


  • By bitbasher 2025-07-28 20:59

    I've been using sqlx with postgres for several months now on a production server with decent query volume all day long. It has been rock solid.

    I find writing sql in rust with sqlx to be far fewer lines of code than the same in Go. This server was ported from Go and the end result was ~40% fewer lines of code, less memory usage and stable cpu/memory usage over time.

    • By jchw 2025-07-28 21:21

      Speaking of Go, if you want compile-time type checking like what SQLx offers, the Go ecosystem has an option that is arguably even better at it:

      https://sqlc.dev/

      It has the advantage that it implements the parsing and type checking logic in pure Go, allowing it to import your migrations and infer the schema for type checking. With SQLx you need to have your database engine running at compile time during the proc macro execution with the schema already available. This makes SQLx kind of a non-starter for me, though I understand why nobody wants to do what sqlc does (it involves a lot of duplication that essentially reimplements database features.) (Somewhat ironically it's less useful for sqlc to do this since it runs as code generation outside the normal compilation and thus even if it did need a live database connection to do the code generation it would be less of an impact... But it's still nice for simplicity.)

      • By bitbasher 2025-07-28 21:34

        It's possible to run sqlx in 'offline' mode that uses your schema to do the checks so you don't need a live database. That's a popular option in CI/CD scenarios.

        • By jjice 2025-07-29 3:02

          It's absolutely core to SQLx. I'm surprised to hear that that isn't widely known, based on the parent. The first time I used SQLx must have been 4 or 5 years ago and they had it back then.

          • By jchw 2025-07-29 3:58

            Well, it hurts that it isn't the default. The README still tells you to set the environment variable, it just isn't the "default" way to do things. In my opinion it would be better to entirely remove support for connecting to the database during compilation. Does anyone actually want to use it that way?

            Comparing and contrasting, sqlc type checking happens via code generation, basically the only option in Go since there's nothing remotely like proc macros. Even with code generation, sqlc doesn't default to requiring an actual running instance of the database, though you can use an actual database connection (presumably this is useful if you're doing something weird that sqlc's internal model doesn't support, but even using PostgreSQL-specific features I hadn't really run into much of this.)

            • By ringeryless 2025-07-31 4:48

              "default"? they are explicitly different macros with different syntax. use whichever you prefer.

            • By maxbond 2025-07-29 6:41

              I think that, if a new user is going to encounter an error, it should be that SQLx couldn't talk to the database rather than that a mysterious file doesn't exist. They're going to need to connect to a dev database either way. They can learn about caching the schema information when they come to those later steps like building a CI pipeline. Early in a project, when your queries and schema are unstable, caching isn't going to be very useful anyway, since you'll be invalidating it constantly.

              The sqlc authors are to be applauded for making a static analyzer, that is no small feat. But if you can get away with offloading SQL semantics to the same SQL implementation you plan to use, I think that's a steal. The usability hit is basically free - don't you want to connect to a dev database locally anyway to run end to end tests? It's great to eliminate type errors, but unless I'm missing something, neither SQLx nor sqlc will protect you from value errors (eg constraint violations).

              • By jchw 2025-07-29 6:58

                1. I can't tell you how unconvinced I am with the error being less confusing. A good error message tells you what's wrong and ideally what to do to remedy it if possible... and to me there isn't really a practical difference between "set this environment variable" and "run this command". It seems like you basically add one extra step, but you prevent people from choosing a probably suboptimal workflow that they almost certainly don't want to use anyways... Either way, I don't think it's more confusing, and for someone new it's better to only have one way to do something, especially if it's the obviously superior thing anyways.

                2. Sure, the database will probably be running locally, when you're working on database stuff. However, the trouble here is that while I almost definitely will have a local database running somehow, it is not necessarily going to be accessible from where the compiler would normally run. It might be in a VM or a Docker container where the database port isn't actually directly accessible. Plus, the state of the database schema in that environment is not guaranteed to match the code.

                If I'm going to have something pull my database schema to do some code generation I'd greatly prefer it to be set up in such a way that I can easily wrap it so I can hermetically set up a database and run migrations from scratch so it's going to always match the code. It's not obvious what kinds of issues could be caused by a mismatch other than compilation errors, but personally I would prefer if it just wasn't possible.

                • By maxbond 2025-07-29 7:07

                  The error message is a fair point, I do still think that making caching the default is premature.

                  I would definitely recommend writing a Compose file that applies your migrations to a fresh RDBMS and allows you to connect from the host device, regardless of what libraries you're using. Applying your migrations will vary by what tools you use, but the port forwarding is 2 simple lines. (Note that SQLx has a migration facility, but it's quite bare bones.)

        • By solidsnack9000 2025-07-29 5:38

          This is not quite the same thing, because it requires `sqlx prepare` to be run first, and that talks to the database to get type information. In SQLC, on the other hand, query parsing and type inference are implemented from first principles, in pure Go.

          • By koito17 2025-07-29 7:46

            sqlc's approach has its limitations. Its SQLite query parser is generated from an ANTLR grammar, and I've encountered situations where valid SQLite syntax was rejected by sqlc due to their parser failing.

            Type inference was okay, since SQLite barely has any types. The bigger issue I had was dealing with migration files. The nice part about SQLx is that `cargo sqlx database setup` will run all necessary migrations, and no special tooling is necessary to manage migration files. sqlc, on the other hand, hard codes support for specific Go migration tools; each of the supported tools were either too opinionated for my use case or seemed unmaintained. SQLx has built-in tooling for migrations; it requires zero extra dependencies and satisfies my needs. Additionally, inferring types inside the actual database has its benefits: (1) no situations where subsets of valid query syntax are rejected, and (2) the DB may be used for actual schema validation.

            For an example of why (2) may be better than sqlc's approach: databases like SQLite sometimes allow NULL primary keys; this gets reflected in SQLx when it validates inferred types against actual database schemas. When I last used sqlc, this potential footgun was never represented in the generated types. In SQLx, this footgun is documented in the type system whenever it can detect that SQLite allows silly things (like NULL primary keys when the PK satisfies certain conditions).

            • By jchw 2025-07-29 15:15

              I believe sqlc can also connect to the database for type inference now too, fwiw.

        • By echelon 2025-07-29 1:15

          Offline query caching is great. The team has made it work fantastically for workspace oriented monorepos too.

          I ran sqlx / mysql on a 6M MAU Actix-Web website with 100kqps at peak with relatively complex transactions and queries. It was rock solid.

          I'm currently using sqlx on the backend and on the desktop (Tauri with sqlite).

          In my humble opinion, sqlx is the best, safest, most performant, and most Rustful way of writing SQL. The ORMs just aren't quite there.

          I wish other Rust client libraries were as nice as sqlx. I consider sqlx to be one of Rust's essential crates.

          • By bagavi 2025-07-29 6:40

            You seem to know your stuff. What's your opinion of diesel?

      • By solidsnack9000 2025-07-29 5:34

        Implementing the parsing and type checking logic in pure Go is not an unqualified advantage. As you point out, it means that SQLC "...essentially reimplements database features..." and in my experience, it does not reimplement all of them.

      • By conroy 2025-07-29 23:15

        Maintainer of sqlc here. Thanks for the kind words! I'm considering switching to the sqlx model of talking to a running database simply because trying to re-implement PostgreSQL internals has been a huge challenge. It works for most queries, but for the long tail of features, it's a losing battle.

        Can you tell me why it's a non-starter for you?

        • By jchw 2025-07-30 3:07

          I think it's only a non-starter for me in SQLx if not using query caching. Caching makes the situation workable.

          For sqlc, it isn't really a big problem because you only need to run the code generation when you're actually modifying database things. Still, with that having been said, I think just passing a database URI and having analysis work based on that is unideal. Using an actual database isn't a huge problem, but having to manage the database instance out of band is the part that I think isn't great, because it allows for the schema in the code to trivially desync with the schema used in analysis. If I used SQLx I'd probably be compelled to try to wire up a solution that spawns the database and migrates it up hermetically for the caching part. Likewise if I used this mode of sqlc.

          I guess it might be possible for sqlc to add first class support for that sort of concept, but I can see holes in it. For one thing, you have to figure out where to grab binaries from and what version. An approach using Docker/Podman works, and at least partly solves this problem because you could allow specifying any OCI image, but that has caveats too, like requiring Docker or Podman to be installed. The most heroic effort would be to use some kind of solution using WASM builds of database engines: pulling down and running something like PGlite in process seems like it would be an almost ideal solution, but it sticks you to whatever things can actually be made to work in WASM in terms of features, extensions and versions, at least unless/until database servers and extension vendors miraculously decide that supporting WASM as a target is a good idea. Still, if you want some crazy ideas for how to make the UX better, I think either the Docker approach or the WASM approach could be made to work to some degree.

          Barring that, though, I'd be most likely to have some kind of Docker setup for running sqlc with an ephemeral database instance. It's not pretty, but it works...

          I don't think it would be a non-starter, though. I only really think that connecting to the database from within rustc invocations is a non-starter.

      • By tizzy 2025-07-28 22:30

        I never gelled with how SQLC needs to know about your schema via the schema file. I'm used to flyway, where you can update the schema as long as it's versioned correctly, such that running all the flyway migrations will produce the same db schema.

        I preferred go-jet since it introspects the database for its code generation instead.

        • By jchw 2025-07-28 22:37

          The way I prefer to use sqlc is in combination with a schema migration framework like goose. It actually is able to read the migration files and infer the schema directly without needing an actual database. This seems to work well in production.

      • By craftkiller 2025-07-29 15:01

        > with SQLx you need to have your database engine running at compile time during the proc macro execution with the schema already available.

        FWIW, the compile-time query checking is entirely optional. If you don't use the query syntax checking then you don't need a live database and you don't need `sqlx prepare`.

      • By written-beyond 2025-07-28 21:42

        I spent 2 weeks trying to build a very basic rest crud API with SQLc and it was not better. I had to shift to SQLx because of how unintuitive SQLc was.

        • By devjab 2025-07-28 22:00

          We've been running SQLC in production for a while now and I'm curious which part of it you found unintuitive? We run ours as a container service within the development environment that will compile your code from a postgres dump file. We've had no issues with it at all after the initial configuration guidelines for SQLC, though the documentation certainly isn't exactly great. Hell, I'm not sure I've ever worked with a better SQL to language tool in my decades so I'm surprised that it isn't working out for you.

          That being said, as I understand it, SQLx does something very different. If you want dynamic queries, you'll basically have to build that module yourself. The power of SQLC is that anyone who can write SQL can work on the CRUD part of your Go backend, even if they don't know Go. Hell, we've even had some success with business domain experts who added CRUD functionality by using LLM's to generate SQL. (We do have a lot of safeguards around that, to make it less crazy than it sounds).

          If you want fancy LINQ, GraphQL, OData or even a lot of REST frameworks, you're not getting any of that with SQLC though, but that's typically not what you'd want from a Go backend in my experience. Might as well build it with C# or Java then.

          • By written-beyond 2025-07-28 22:45

            It's quite simple really. I want to write a query and have a concrete object as its return type. The framework that gets me there in the least amount of steps is going to be more intuitive.

            Let's compare:

            SQLC:
              • configuration file (yaml/json)
              • schema files
              • query files
              • understand the meta language in query file comments to generate the code you want

            SQLx:
              • env: DATABASE_URL

            Now, does that mean that SQLx is the best possible database framework? No, it does not. But because I didn't spend my time doing things that weren't related to the exact queries I had to write, I got more work done.

            I want to appreciate the hard work the SQLx devs have put in to push the bar for a decent SQL developer experience. People give them a really hard time for certain design decisions, pending features and bugs. I've seen multiple comments calling its compile-time query validation "gimmicky" and that's not nice at all. You can go to any other language and you won't find another framework that is as easy to get started with.

            • By devjab 2025-07-29 5:31

              > SQLC - configuration file (yaml/json) - schema files - query files - understand the meta language in query file comments to generate code you want

              I would recommend using pg_dump for your schema file which means it'll not be related to SQLC as such. This way it will be easier for you to maintain your DB, we use Goose as an example. In our setup part of the pipeline is that you write your Goose migration, and then there is an automated process which will update the DB running in your local dev DB container, do a pg_dump from that and then our dev container instance of SQLC will compile your schema for you.

              The configuration file is centralized as well, so you don't have to worry about it.

              I agree with you on the SQLC meta language on queries; I appreciate that it's there but we tend to avoid using it. I personally still consider the meta language a better way of doing things than in-code SQL queries. This is a philosophical sort of thing of course, and I respect that not everyone agrees with me on this. It's hard for me to comment on SQLx, however, as I haven't really used it.

              What I like about SQLC is that it can be completely de-coupled from your Go code.

            • By bbkane 2025-07-29 2:29

              Maybe I'm drinking the sqlc Kool aid, but because I'm already using migration files, setting up the config to point to them and a folder of SQL queries was pretty painless.

              And of course now that I have it, the incremental cost of adding a new query is really low as well

              • By written-beyond 2025-07-29 13:44

                That's all understandable. But like I said, I did spend 2 weeks working with SQLc, and when I compared it to just writing the query in my code, the developer experience was miles apart.

                You could compare it to people writing CSS, JavaScript and Markup in separate files Vs having just one file in React/Svelte etc. which gives the user the option to combine everything into one.

                There may be a lot of drawbacks to the latter approach, but it makes it a hell of a lot easier for people to just get started building.

                • By devjab 2025-07-30 5:28

                  We're into Go and SQLC by extension because we write systems with 0 dependencies outside of the standard library. Which is a security and compliance thing.

                  As far as building something fast, I'm with you. I always reach for Python with UV, Litestar and Advanced Alchemy when I want to build personal web projects. I don't think SQLC is bad as such; once you've written your SQL you can essentially compile that into a CRUD application which is ready to go. As you've pointed out, however, you'd need to slam something like a GraphQL engine on top of it if you wanted rich queries easily, and you'd still not have the auto-generated OpenAPI that comes with Python web frameworks.

                  SQLC is for code where you want few (or zero) external dependencies. Which is a very "Go" thing to want. It does scale well, but that requires you to build various CLI tools to help maintain things, as well as your own Go modules to add "quality of life" features like dynamic routers and get queries for low-traffic requests.

                  I'll try SQLx eventually when I get time to look more into Rust.

        • By happens 2025-07-28 22:00

          Interesting - I've had the opposite experience. I usually prefer rust for personal projects, but when I recently tried to use SQLx with sqlite, lots of very basic patterns presented problems, and I wished I had sqlc back.

      • By frollogaston 2025-07-28 23:06

        This is why I like using NodeJS or Python with SQL, it's very simple to have it not care about the return types. SQL is already statically typed per se, I don't need to re-assert everything. Achieving the same kind of automation in Go etc requires parsing the schema at compile-time like what you described, which is complicated.

    • By Thaxll 2025-07-29 1:26

      imo sqlc from Go is superior to sqlx from Rust. The other thing is that sqlx is somehow slow; when I did some tests, pgx (Go) was faster than sqlx.

      • By j-krieger 2025-07-29 8:54

        sqlx pulls in `syn`. Syn is really slow to compile.

    • By frollogaston 2025-07-28 22:59

      How is it more LoC in Go, just cause of the "if err" stuff?

      • By bitbasher 2025-07-29 12:12

        Go's verbose error handling certainly impacted the vertical height of files (lots of early returns), but wasn't a big contributor to overall LoC.

        The more serious LoC offenders in Go were:

        1. Marshalling/Unmarshalling code (for API responses, to/from external services, etc). In general, working with JSON in Go was painful and error prone. Rust's serde made this a complete non-issue.

        2. Repetitive sql query code (query, scan for results, custom driver code for jsonb column marshalling/unmarshalling). Rust's sqlx made this a non-issue.

        3. Go's use of context to share data through handlers was a real pain and error prone (type casting, nil checks, etc). Rust's actix-web made this a real beautiful thing to work with. Need a "User" in your handler? Just put it as an argument to the handler and it's only called if it's available. Need a db connection? Just put it as an argument to the handler.

        4. Go's HTML/Text templates required more data to be passed in and also required more safety checks. Rust's askama was overall more pleasant to use and provided more confidence when changing templates. In Rust, I'd catch errors at compile time. In Go, I'd catch them at runtime (or, a user would).

        I must admit I was surprised. I thought Rust would have been more lines of code because it's a lower level language, but it ended up being ~40% less code. My general sentiment around working with the code is very different as well.

        In the Rust codebase I have no hesitation to change things. I am confident the compiler will tell me when I'm breaking something. I never had that confidence in Go.

        • By frollogaston 2025-07-29 21:28

          Hm. I've used Rust a lot more than Go, so this is secondhand to me. I know that generics are iffy and nullness is annoying. If you're paying for static types in Go and still not getting the guarantees, that really bites.

  • By tmpfs 2025-07-26 9:43

    I have used this as well as many of the other lower-level db drivers (which don't check your SQL at compile time) and I can say I much prefer the latter.

    My issues with SQLx when I first tried it were that it was really awkward (nigh impossible) to abstract away the underlying DB backend. I expect those issues are fixed now, but for some simple apps it's nice to be able to start with SQLite and then swap in postgres.

    Then I wanted to dockerize an SQLx app at one point and it all became a hassle, as you need postgres running at compile time, and trying to integrate that with docker compose was a real chore.

    Now I don't use SQLx at all. I recommend other libraries like sqlite[1] or postgres[2] instead.

    SQLx is a nice idea but too cumbersome in my experience.

    [1]: https://docs.rs/sqlite/latest/sqlite/
    [2]: https://docs.rs/postgres/latest/postgres/

    • By belak 2025-07-28 20:47

      I have no experience with abstracting away the backend, but Dockerizing is actually pretty easy now - there's an offline mode[1] where you can have sqlx generate some files which let it work when there's no DB running.

      [1]: https://docs.rs/sqlx/latest/sqlx/macro.query.html#offline-mo...

    • By vegizombie 2025-07-26 10:18

      It's definitely not perfect, but I think both of those issues are better now, if not fully solved.

      For needing a DB at compile time, there's an option to have it produce artefacts on demand that replace the DB, although you'll need to connect to a DB again each time your queries change. And even that is optional; compile-time checking of your queries is itself opt-in.

    • By 0xCMP 2025-07-28 20:52

      I know it's annoying (and apparently there is a solution for generating the required files before the build), but in these kinds of situations Go and Rust are great for doing a static build on the system and then copying into a scratch image.

      Versus Python and Node often needing to properly link with the system they'll actually be running in.

    • By adelmotsjr 2025-07-28 22:47

      Why would you want to abstract away the underlying database? Wouldn't it be better to use the target DB from the start, to catch potential issues earlier? Also to avoid creating another layer of indirection, potentially complecting the codebase and reducing performance?

      • By TrueDuality 2025-07-29 0:32

        Primarily for libraries and deployment environments that aren't fully in your control which is still pretty common once you get to B2B interactions, SaaS is not something you can easily sell to certain environments. Depending on the assurance you need, you might even need to mock out the database entirely to test certain classes of database errors being recoverable or fail in a consistent state.

        Even in SaaS systems, once you get large enough with a large enough test suite you'll be wanting to tier those tests starting with a lowest common denominator (sqlite) that doesn't incur network latency before getting into the serious integration tests.

      • By small_scombrus 2025-07-29 14:33

        > Wouldn't it better to already use the target DB to cattch potential issues earlier?

        The target DB can change as a project goes from something mildly fun to tinker with to something you think might actually be useful.

        Also I personally find that SQLite is just nice to work with. No containers or extra programs, it just does what you ask it to, when you ask it to

    • By stmw 2025-07-26 20:39

      Thanks, interesting experience - so much depends on getting developer ergonomics right. There is something to be said for checking the SQL at compile-time, though - esp. if trying to ORM to a typesafe language.

    • By no_circuit 2025-07-28 21:32

      How long ago did you try SQLx? Not necessarily promoting SQLx, but `query_as`, which lets one make queries without the live-database macro, has been around for 5 years [1].

      For lower-level libraries there is also the more-downloaded SQLite library, rusqlite [2], whose maintainer also maintains libsqlite3-sys, which is what the sqlite library wraps.

      The most pleasant ORM experience, when you want one, IMO is the SeaQl ecosystem [3] (which also has a nice migrations library), since it uses derive macros. Even with an ORM I don't try to make databases swappable via the ORM so I can support database-specific enhancements.

      The most Rust-like in an idealist sense is Diesel, but its well-defined path is to use a live database to generate Rust code that uses macros to define the schema types, which are then used for type/member checking of the row structs. If the auto-detect does not work, then you have to use its patch_file system, which can't be maintained automatically just through Cargo [4] (I wrote a Makefile scheme for myself). You most likely will have to use the patch_file if you want to use chrono::DateTime<chrono::Utc> for timestamps with time zones, e.g., Timestamp -> Timestamptz for postgres. And if you do anything advanced like multiple schemas, you may be out of luck [5]. And it may not be the best library for you if you want large denormalized tables [6], because of compile times, and because a database that is not normalized [7] is considered an anti-pattern by the project.

      If you are just starting out with Rust, I'd recommend checking out SeaQl. And then if you can benchmark that you need faster performance, swap out for one of the lower level libraries for the affected methods/services.

      [1] https://github.com/launchbadge/sqlx/commit/47f3d77e599043bc2...

      [2] https://crates.io/crates/rusqlite

      [3] https://www.sea-ql.org/SeaORM/

      [4] https://github.com/diesel-rs/diesel/issues/2078

      [5] https://github.com/diesel-rs/diesel/issues/1728

      [6] https://github.com/diesel-rs/diesel/discussions/4160

      [7] https://en.wikipedia.org/wiki/Database_normalization
