
I'd seen this running in a browser before (the ~3MB download is really impressive for that), but I hadn't clocked that it runs server-side with Node.js and Bun as well: https://pglite.dev/docs/
Since that's still not spinning up an actual network server, it feels like an alternative to SQLite: you can spin up a full in-process PostgreSQL implementation, which persists to disk, as part of an existing Node.js/Bun application.
That's really interesting!
I'd love to use this from Python, via something like https://github.com/wasmerio/wasmer-python or https://github.com/bytecodealliance/wasmtime-py - has anyone run PGlite via one of those wrappers yet?
Getting PGlite working with other languages is very high on my list. We are working on two approaches:
- a WASI build with a lower-level API that users can wrap with a higher-level API.
- a "libpglite" that can be linked to by any native language. The WASI build is likely to be a WASM build of this.
Most languages already have a Postgres wire protocol implementation, so wrapping a low-level API that reads and emits this protocol is relatively easy - it's what the JS side of PGlite does.
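As a rough illustration of what such a low-level wrapper would speak, here's how a simple-protocol Query message is framed on the wire. This is a generic sketch of the Postgres protocol framing, not PGlite's actual API; `frame_query` is a hypothetical helper name:

```python
import struct

def frame_query(sql: str) -> bytes:
    """Frame a Postgres simple-protocol Query ('Q') message:
    one tag byte, a big-endian int32 length (counting itself
    plus the payload), then the SQL text NUL-terminated."""
    payload = sql.encode("utf-8") + b"\x00"
    return b"Q" + struct.pack("!I", len(payload) + 4) + payload

# 'Q', then length 13 (4 length bytes + "SELECT 1" + NUL), then the SQL
msg = frame_query("SELECT 1")
```

A language binding would hand bytes like these to the engine and feed the response bytes back into its existing protocol parser.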
If you can make it work with Rust, and compatible with wasm targets as well, it opens a huge field of possibilities.
Windmill.dev is a workflow engine based fully on PostgreSQL. This would be the missing piece to offer a local development workflow that doesn't require spinning up a full Postgres.
I'd love to see this for SQLite and DuckDB at the same time. I could see them using a common WASI filesystem shim so they can transparently do range requests over HTTP blobs.
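The range-request idea can be sketched as a file-like object that fetches byte ranges on demand, which is roughly what such a filesystem shim would do underneath. This is a hypothetical sketch (the class and its methods are invented for illustration), not code from any of these projects:

```python
import urllib.request

class HttpRangeReader:
    """Minimal file-like reader that fetches byte ranges lazily via
    HTTP Range requests -- the kind of shim a WASI filesystem layer
    could use to read database pages out of a remote blob."""

    def __init__(self, url: str):
        self.url = url
        self.pos = 0

    @staticmethod
    def _range_header(pos: int, n: int) -> str:
        # HTTP Range is inclusive on both ends: bytes=start-end
        return f"bytes={pos}-{pos + n - 1}"

    def seek(self, offset: int) -> None:
        self.pos = offset

    def read(self, n: int) -> bytes:
        req = urllib.request.Request(
            self.url, headers={"Range": self._range_header(self.pos, n)}
        )
        with urllib.request.urlopen(req) as resp:
            data = resp.read()
        self.pos += len(data)
        return data
```

The database engine only ever asks for the pages it touches, so a query against a multi-gigabyte remote file can transfer a few kilobytes.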
Do you have a branch for this work?
No branch yet, we're still in the early research and experimental stage.
As a workaround, maybe something like this could work:
db.js:

    const { PGlite } = require('@electric-sql/pglite');

    // Persist to a data directory so separate CLI invocations
    // share state (a bare `new PGlite()` is in-memory only and
    // would be empty on every subprocess call).
    const db = new PGlite('./pgdata');

    async function execSQL(sql) {
      await db.exec(sql);
      return { success: true };
    }

    async function querySQL(sql, params) {
      const ret = await db.query(sql, params);
      return ret.rows;
    }

    // Command-line interface logic
    const action = process.argv[2];
    const sql = process.argv[3];
    const params = process.argv.slice(4);

    if (action === 'exec') {
      execSQL(sql).then(result => {
        console.log(JSON.stringify(result));
        process.exit();
      });
    } else if (action === 'query') {
      querySQL(sql, params).then(result => {
        console.log(JSON.stringify(result));
        process.exit();
      });
    }
main.py:

    import subprocess
    import json

    def execute_sql(sql: str) -> dict:
        result = subprocess.run(
            ['node', 'db.js', 'exec', sql],
            stdout=subprocess.PIPE,
            text=True,
        )
        return json.loads(result.stdout)

    def query_sql(sql: str, params: list) -> list:
        result = subprocess.run(
            ['node', 'db.js', 'query', sql] + params,
            stdout=subprocess.PIPE,
            text=True,
        )
        return json.loads(result.stdout)

    # Example usage
    create_table_sql = """
    CREATE TABLE IF NOT EXISTS todo (
        id SERIAL PRIMARY KEY,
        task TEXT,
        done BOOLEAN DEFAULT false
    );
    """
    execute_sql(create_table_sql)

    query_result = query_sql("SELECT * FROM todo WHERE id = $1", ["1"])
    print(query_result)

We have the (accidentally undocumented) execProtocolRaw API that lets you execute Postgres wire protocol messages. If you use that, you can then use a Python Postgres wire protocol lib to get all the type support.
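On the reading side, a wire-protocol client mostly decodes messages such as DataRow. Here is a minimal sketch of that decoding, based on the documented Postgres message layout; it's generic protocol code, not tied to execProtocolRaw specifically, and `parse_data_row` is an invented helper name:

```python
import struct

def parse_data_row(msg: bytes) -> list:
    """Decode a Postgres DataRow ('D') message into a list of raw
    column values (None for SQL NULL). Layout: tag byte, int32
    message length, int16 column count, then per column an int32
    value length (-1 meaning NULL) followed by that many bytes."""
    assert msg[0:1] == b"D"
    (ncols,) = struct.unpack("!H", msg[5:7])
    values, offset = [], 7
    for _ in range(ncols):
        (vlen,) = struct.unpack("!i", msg[offset:offset + 4])
        offset += 4
        if vlen == -1:
            values.append(None)
        else:
            values.append(msg[offset:offset + vlen])
            offset += vlen
    return values
```

A full client library layers type conversion on top of this (turning the raw bytes into ints, timestamps, etc.), which is the "type support" referred to above.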
I'd also love to try PGlite in Python.
While reading this thread, I realized that you could already access PGlite in Pyodide. Pyodide is a Wasm port of CPython. It can work with JavaScript objects through proxies.
Here is a demo. I have run the code in current Node.js 18, Deno 1, and Bun 1.
index.mjs:

    import { PGlite } from "@electric-sql/pglite";
    import { loadPyodide } from "pyodide";

    globalThis.db = new PGlite();
    let pyodide = await loadPyodide();

    console.log(
      await pyodide.runPythonAsync(`
    import js
    ret = await js.db.query("""
        SELECT 'Hello, world!'
        AS "greeting";
    """)
    ret.rows[0].greeting
    `),
    );
It works on my machine:

    > npm init -y
    > npm install '@electric-sql/pglite@==0.2.0'
    > npm install 'pyodide@==0.26.2'
    > node index.mjs
    prerun(C-node) worker= false
    Running in main thread, faking onCustomMessage
    Hello, world!

Definitely interesting... maybe someone more familiar with the licenses involved can chime in on how this might impact a project/solution, and how it compares to something like embedded FirebirdSQL.
For example, can you use this in/with a commercial project without releasing source? If you embed this, will you need to change your license to match pg/fb?
Aside: mostly asking out of curiosity, in case anyone already knows; if I actually had an immediate need, I'd be reviewing this on my own. I often dislike aspects of SQLite's simplicity in much the same way I dislike certain things about MySQL/MariaDB.
It'll be interesting to see how well this works out in practice. I don't know how they modified the persistence layer of Postgres, but I could imagine there are limitations compared to running Postgres in a Docker container. SQLite doesn't have those limitations.
Huge fan of PGlite.
It's the perfect solution for having Postgres without needing Docker. With just `npm install`, you can have a Postgres instance on your computer, so it's extremely easy to onboard a new developer onto your team.
And, the good news, PGlite works perfectly with Next.js.
I'm using PGlite in local and development environment with Next.js Boilerplate: https://github.com/ixartz/Next-js-Boilerplate
With only one command, `npm install`, you get a full-stack application that also includes the database (a working Postgres) - no need to install external tools.
Definitely interesting. Though I often just use docker compose for dev dependencies, this is a cool alternative.
Yes, it's a very cool alternative, and it removes the need for docker compose in the dev environment.
On top of that, PGlite is also perfect for CI and testing.
Wouldn't it be easier to have a shared dev database? Then each developer wouldn't have to apply the migrations and import the dumps into their local DB, or figure out why dev A's local DB is different from dev B's.
IME, creating scaffolding that allows for a known, reproducible, and clean state (of both DDL and DML) is a wild systematic (i.e. across the team and the estate) productivity and stability boost. Not having to be connected to the shared dev DB frees engineers from latency (every ms counts) and from multiple pollutants to that database (e.g. Sally was writing protractor/cypress tests and now the foo table has 3M rows; George was doing data exploration and manually entered records that now cause a runtime exception for the team; etc.).
If a shared dev DB is really what everyone wants, then at least having the scaffolding mentioned above to fix the DB state when pollution happens (it will) will help heal foot shots. In industrialized practice, what you are describing (a shared dev env) is really the "QA"/pre-prod environment. Ideologically and personally (putting on my downvote helmet): if you can't run the whole stack locally, you're in for a world of trouble down the road. Local first, test extensively there, then promote changes.
I get frustrated when I join a project that doesn't allow running the full stack locally, but forces sharing parts, which always comes with limitations (not being able to work offline for starters).
It already is quite easy to spin up a local PG instance with Docker, but this probably makes it even simpler. Importing mock data and running migrations should just be 1 `npm run` command with a properly set up codebase.
That sounds cumbersome when devs are working in their respective branches and changing the schema (adding migrations).
A shared dev database can be okay for application dev, but a locally deployable database is better, IMO. Since the migration scripts run for every dev locally, this helps ensure the stability of those scripts. It can (and should) be combined with snapshot runs that represent your production environments.
This is more critical still if you have more than one "production" environment, wherein different customers/clients may be at different application and DB versions from one another. This is often the case for large business and government work. With shared environments it's too easy to end up with a poorly adapted set of migration scripts.
To keep the next person from having to look (or ask without looking): it supports browser-side persistence via IndexedDB and OPFS: <https://pglite.dev/docs/filesystems>
Your and Roy's work on WASM SQLite VFSs has directly influenced my work on this. Thank you!
> Your and Roy's work on ...
(blush) But to be 100% clear: the AHP-based VFS in the SQLite project is a direct port of Roy's, so all of the credit for that one truly goes to him!