Supabase raises a $200 million Series D, and the company hits a $2 billion valuation.
Paul Copplestone didn’t think things like this actually happened.
Then one day Accel’s Gonzalo Mocorrea asked for his New Zealand address—more than 7,000 miles from Silicon Valley.
Mocorrea “literally showed up on my doorstep in Wānaka, which is really not easy to get to,” said Copplestone, the CEO and cofounder of open source application development platform Supabase. “For the next two days, he’d pop in and we’d chat for a couple hours.”
After a few days in Wānaka—located on New Zealand's South Island and famous for its snow-capped mountains overlooking a massive mirror of a lake—Accel’s Mocorrea called in backup, texting partner Arun Mathew.
“Arun said ‘alright, I’m coming,’” said Copplestone. “And I said, ‘Oh no, don’t come! We haven’t agreed to anything!’ But yeah, he came, we had dinner in Queenstown, another beautiful place. We caught up the next morning, and they offered a term sheet.”
Accel’s Mathew had traveled more than 24 hours, across two flights and multiple car rides, to make the trip as the firm weighed its first investment in Supabase.
“I needed to sit across the table, look him in the eye, and really believe he’s going to do something else,” Mathew told Fortune. “That’s necessary, certainly at this valuation…We know what greatness looks like, we believe that—and I’m obviously betting with my career.”
That term sheet became Supabase’s latest funding round, a $200 million Series D valuing the company at $2 billion, Fortune can exclusively report. Coatue, Y Combinator, Craft Ventures, and Felicis participated in the round, as did big-name angels like OpenAI Chief Product Officer Kevin Weil, Vercel CEO Guillermo Rauch, and Laravel CEO Taylor Otwell.
“The core thesis for us is that in every major platform shift, there's always value created at the database layer,” said Mathew. “It's part of the reason that Larry Ellison and Oracle have held the same power for 40-plus years. It's partially why MongoDB is one of the most interesting enterprise software companies out there…The database layer has a lot of dead bodies, but it also has a number of companies that have created exceptional enterprise value.”
Supabase is currently used by two million developers who manage more than 3.5 million databases. The startup is built on Postgres, the popular open-source database, and pitches itself as an open source alternative to Google’s Firebase. Supabase’s goal: to be a one-stop backend for developers and "vibe coders."
“I see our community, over the next decade, as something that will grow with us, and it’s for everyone from developers, all the way up to enterprise,” said Copplestone. “It’s more than just developers even now. Our sign-up rate just doubled in the past three months because of vibe coding—Bolt, Lovable, Cursor, all those.”
Vercel’s Rauch first invested in Supabase in 2021, and he sees vibe coding as a key tailwind for the startup. (Vercel, last valued at $3.25 billion, is widely viewed as a leader in the vibe coding movement.)
“The metaphor I always use is that vibe coding is like a self-driving car,” Rauch told Fortune. "It’s an amazing feat that gets us around, but it still needs roads…Supabase has been working really hard not to just be easy to use, but within reach. The roads are cloud infrastructure, and Supabase was ready to meet this foundational need in the data ecosystem.”
Founded in 2020, Supabase is a product of the pandemic, and has stayed remote. Copplestone, a third-time founder whose previous startups were Nimbus For Work and ServisHero, has made a point of learning from those earlier ventures.
"I went through playing startup the first time, where you raise the money and put the posters up,” said Copplestone. “You pay top dollar, you hire, tell people how many employees you’ve got—and then, of course, we ran out of money."
This time, he’s been hiring differently. A native New Zealander, Copplestone estimates that about 28% of Supabase employees are former founders themselves, and they’re all over the world.
"It's a bit like Moneyball,” said Copplestone. “We found these really good humans, and they’re not necessarily in San Francisco. We’ve got people in Peru and Macedonia. What matters to us is that they’re extremely competent, but also low-ego, good people.”
Even though Supabase is remote, the company sets up ways for people to meet each other. For example, during launch weeks, Supabase releases one new thing every day. This month, employees and developers in 100 cities around the world are organizing meetups.
And, if you have any familiarity with pop hits from 2011, Copplestone is very much aware his unicorn’s name echoes Nicki Minaj’s hit “Super Bass.” In fact, that's by design.
"I was looking for something along the lines of ‘superlative base,’” said Copplestone. “I looked up all the domain names, and nothing was free. I could only find S-U-P-A. Then, I thought it was funny, because of the Nicki Minaj song. So, I chose that name so I could send memes to my cofounder Ant [Wilson], who’s very good at memes.”
To judge by the latest funding round, the meme is catching on.
See you tomorrow,
Allie Garfinkle
X: @agarfinks
Email: alexandra.garfinkle@fortune.com
Submit a deal for the Term Sheet newsletter here.
Nina Ajemian curated the deals section of today’s newsletter. Subscribe here.
This story was originally featured on Fortune.com
I'm sure that a lot of the l33t h4x0rs here think that Supabase sucks and is only for amateurs but I'll say that as a former engineer who's getting back into building fun side projects again, Supabase has been incredible and just what I wanted. It's my favorite new product that I've started using in the last year. I hope they build out an enormous TAM of people who don't want to live inside a terminal and make a ton of money.
I was looking for this comment.
A non-technical family member is working on a tech project, and giving them Lovable.dev with Supabase as a backend was like complete magic. No fiddling with terminals or propping up Postgres required.
We technical people always underestimate how fast things change when non-technical users can finally get things done without opening the hood.
Back in the day we'd call this phase a design-and-workflow prototype, so as not to have to deal with all the technical components until the actual flow and concept were done.
Feels like we're skipping these steps, "generating" prototypes that may or may not satisfy the need, and moving forward with that code into the final product.
One of the huge benefits of things like Invision, Marvel, Avocode, Figma, etc. was to allow the idea and flow to truly get its legs and skip the days where devs would plop right into code and do 100s of iterations and updates in actual code. This was a huge gain in development and opened up roles for PMs and UI/UX, while keeping developer work more focused on the actual implementation.
Feels like these generative design & code tools are regressing back to direct-code prototypes, without all that workflow and understanding of what should actually be happening BEFORE the code. Instead we'll return to the distractions of the "How", with its millions of iterations and updates, rather than the "What".
Some of this was unfortunately already happening due to Figma's loss of focus on workflow and collaboration, but these AI generation tools seem to have made many people completely lose sight of what was nice about the improved planning workflow. Simply because we CAN now generate the things we think we want doesn't mean we should, especially before we know what we actually want or need.
Maybe I'm just getting old, but that's my .02 :).
there is no need for this tedious, boring phase which you miss, especially since it still requires a significant amount of coding effort (eg to stitch a backend to figma).
you can vibe code a fully working UI+backend that requires way less effort so why bother with planning and iterating on the UI separately at all?
anybody who actually knows what they are doing gets a 10x boost from these tools, plus they enable non-coders to bring ideas to the market and do it fast.
That's always been the justification to skip this phase :). Tools have just changed. One-person to small-team wonders that could code and build directly made the same arguments.
My point isn't to stitch things to Figma; that's abhorrent to me as well. My point is to not get bogged down in implementation details (in this case an actually working DB, the tables, etc.), but rather to work with lower-fidelity, full-flow concepts that can be generated and iterated on.
Then that can be fed into a magic genie GPT that generates the front-end, back-end, and all that good jazz.
If the effort to produce websites tends to zero, the value of websites will surely tend to zero too. Either issues with security and maintainability will be a brake on this tendency, or we will get to a point where generating a custom website is something trivial that is done on demand.
The thing is, the cost of producing websites is already pretty low, but the value of websites mostly derives from network effects. So a rising flood of micro crud saas products will not be likely to generate much added value. And since interoperability will drive complexity, and transformer-based LLMs are inherently limited at compositional tasks, any unforeseen value tapped by these extra websites will likely be offset by the maintainability and security brakes I mentioned. And because there is a delay in this signal, there is likely to be a bullwhip effect: an explosion of sites now and a burnout in a couple of years, in which a lot of people will get severely turned off by the whole experience.
If you need a website that needs prototyping in 2025, you're probably doing it wrong (eg launch on insta or something). But anyway, you can vibe iterate, and not just small iterations, but wholesale different value props and approaches, so why not. It's tangible, easier to test, and you get more meaningful feedback. I do this and it's 3-4x faster than working with a designer. And to be sure, we're not making websites, but prototyping features into a saas app to test with users and ourselves.
The value of Amazon.com is not the cost to produce the HTML and JavaScript you see when you visit that website. It is a component of the Amazon business, and to Amazon it is extremely valuable, and to everyone else it would be almost worthless.
If someone has the idea for the next Amazon, as well as everything else you need beyond the idea, and tools like Supabase and Lovable allow them to get it off the ground, those tools are incredibly valuable to that person.
If someone’s ideas are worthless, their websites will be worthless.
Don’t get me wrong, I love Supabase, but
> you can vibe code a fully working UI+backend
…is gonna bring a lot of houses crashing down sooner or later.
I couldn't agree more. "Vibe coding" is pretty cool, but it's not sustainable, at least with current technology. You're much better off being a knowledgeable developer who can guide an LLM to write code for you.
One thing I will agree on though is that LLMs make it easier to iterate or try ideas and see if they'll work. I've been doing that a ton in my projects, where I'll ask an LLM to build an interface, and then if I like it I'll clean it up or rebuild it myself.
I doubt that I'll ever use Figma to design; it's just too foreign to me. But LLMs let me work in a medium that I understand (code) while iterating quickly and trying ideas that I would never attempt otherwise, because I wouldn't be sure they'd work out and it would take me a long time to implement them visually.
Really, that's where LLMs shine for me: trying out an idea that you're fully capable of implementing yourself, but that would take you a long time. I can't tell you how many scripts I've asked ChatGPT or similar to write that I am fully capable of writing, but the return on investment just would not be there if I had to write them all by hand. Additionally, I will use them to write scripts to debug problems or analyze logs/files. Again, things that I am perfectly capable of doing but would never do in the middle of a production issue because they would take too long and wouldn't necessarily yield results. With an LLM, I feel comfortable trying it out because at worst I'd burn a minute or two of time, and at best I can save myself hours. The return on investment just isn't there if it would take me 30 minutes to write that script and only then find out whether it was useful.
LLMs are better search. Google burned down the house to keep itself warm, held off on LLMs until it was inevitable, and is now pulling ahead. This is the logical conclusion. LLMs will be monetized and enshittified by ads.
Soon, some free smart LLM code generators will stop generating certain outputs and instead suggest using commercial components that have paid for promotion.
The whole point of Supabase is not needing to vibe code the backend part.
PostgREST is quite boring, open source, proven tech.
So they say; I still haven't seen any high quality vibe coded software, and I'm pretty sure I never will.
I’ve seen tons of low-quality successful software (Concur, anyone?). Clearly being well built is not a requirement for success.
I'd split the difference and say the cons that low quality software creates only matter sometimes.
E.g. Concur is primarily feature-complete and will only ever need to evolve gradually.
So the drawbacks of being brittle, kludged together, and incapable of rapid feature changes don't really matter.
In some other products, that matters a huge deal.
So the tl;dr is, as always, optimize for the things that actually matter for your particular situation.
This can only be true of some products. Often there are a lot of concerns like privacy, white labeling, legal consequences that need to be considered _before_ you vibe code.
> We technical people always underestimate how fast things change when non-technical users can finally get things done without opening the hood.
This is good and bad. Non-technical users throwing up a prototype quickly is good. Non-technical users pushing that prototype into production with its security holes and non-obvious bugs is bad. It's easy for non-technical users to get a false sense of confidence if the thing they make looks good. This has been true since the RAD days of Delphi and VisualBasic.
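For what it's worth, the usual Supabase answer to exactly this failure mode is Postgres Row Level Security, which the prototype-to-production path makes easy to skip. A minimal sketch of what a rushed prototype often omits (the table and column names here are hypothetical):

```sql
-- Without RLS enabled, Supabase's auto-generated API can serve
-- every row in this table to anyone holding the public anon key.
alter table profiles enable row level security;

-- Each authenticated user may read and write only their own row.
-- auth.uid() returns the caller's user id from their JWT.
create policy "own profile only"
  on profiles
  for all
  using (auth.uid() = user_id)
  with check (auth.uid() = user_id);
```

A prototype that "looks good" passes no test of whether policies like this exist, which is where the false sense of confidence comes from.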
Knowing the industry, I'm pretty sure they will all push those AI prototypes to production—because they did the same with non-AI prototypes before. Now the question is, once they inevitably pull in experienced folk for maintenance, refactoring, and debugging, will it be easier or harder than working with that retired solo dev's spaghetti codebase?
From looking at "vibe coding" tools their output is about the quality of bad body shop contractors. It's entirely possible for experienced devs to come in and fix it.
I think there's going to be the same problems as there are fixing bad body shop code. The companies that pushed their "vibe code" for a few dollars worth of AI tokens will expect people to work for pennies and/or have unreasonable time demands. There's also no ability to interview the original authors to figure out what they were thinking.
Meanwhile their customers are getting screwed over with data leaks if not outright hacks (depending on the app).
It's not a whole new issue, shitty contractors have existed for decades, but AI is pushing down the value of actual expertise.
Yeah, the current trend has lots of parallels to the low code/no code trend we had a couple of years back and the workflow engine trend we had about 15 years back... I'm curious why you think it would push down the value of engineering hours though, that didn't even happen in the past.
> From looking at "vibe coding" tools their output is about the quality of bad body shop contractors.
Genuinely, it's a lot better.
I think this is just another correction. The software market is worth several trillion dollars now. Enterprise is pushing against the rise in labor costs. It will backfire as it did every single time and in a few years competent developers will be worth their weight in platinum.
For nearly 50 years now, software causes disruption, demand drives labor costs, enterprise responds with some silver bullet, haircuts in expensive suits collect bonuses, their masters pocket capital gains, and the chickens come home to roost with a cycle of disruption and labor cost increases. LLMs are being sold as disruption, but it's actually another generation of enterprise tech. Hence the confusion. Vibe coding is just PR. Karpathy knows what he's doing.
50 years might be overstating it a bit; lookup tables/hash maps were a novelty back then, and available compute resources have since increased by many orders of magnitude... So maybe we actually had some real enablers in the meantime. My gut feeling is the current AI hype is at least as revolutionary as search engines, marketplaces, or social networks (not like recommendation engines or blockchain), though not as revolutionary as the loom or electricity.
I don't think it's bad enough.
Even us entrepreneurially minded technical devs cut corners on our personal projects that we just want to throw a Stripe integration or Solana wallet connect on.
And large companies with FTC and DOJ involved data breaches just wind up offering credits to users as compensation
so for non-technical creators to get into the mix, this just expands how many projects there are that get big enough to need dedicated UX and engineers
This suggests a strong need for AI-powered code security review and patching as a complement to agentic coding platforms. Ideally, in parallel to your coding, it could scan your GitHub repo and output specific tasks for the agentic AI to perform for you.
> Non-technical users pushing that prototype into production with its security holes and non-obvious bugs is bad.
I beg to differ. Non-technical users pushing anything into production is GREAT!
For many, that's the only way they can get their internal tool done.
For many others, that's the only way they might get enough buyers and capital to hire a "real" developer to get rid of the security holes and non-obvious bugs.
I mean, it's not like every "senior developer" is immune from having obvious-in-retrospect security holes. Wasn't there a huge dating app recently with a glaring issue where you could list and access every photo and conversation ever shared, because nobody on their professional tech team secured the endpoints against enumeration of IDs?
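The enumeration bug described above typically comes from exposing sequential primary keys directly in URLs. One common mitigation is random UUID keys; a sketch in Postgres terms (table and column names hypothetical):

```sql
-- Guessable: /photos/1, /photos/2, ... can be walked by a script.
create table photos_v1 (
  id  bigserial primary key,
  url text not null
);

-- Unguessable: random UUIDs make enumeration impractical.
-- gen_random_uuid() is built into Postgres 13+.
create table photos_v2 (
  id  uuid primary key default gen_random_uuid(),
  url text not null
);
```

Unguessable IDs only shrink the blast radius of a missing check; they are no substitute for actually securing the endpoint.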
What about users who sign up for these insecure apps and have their data and possibly their identity stolen due to the misplaced trust? That this already happens is no excuse to encourage even less security by encouraging novices to believe they are experts.
I agree it is great that more people can build software, but let's not pretend there are zero downsides.
This is a contrived situation. Most of the apps in discussion see little to no use and go dead soon after launch. The vast majority are collecting little data of negligible risk.
If a user is confident enough about a no-name company that they give it enough info to make identity theft a possibility, it was only a matter of time before a spammer/phishing attack got them anyway.
> Most of the apps in discussion see little to no use and go dead soon after launch
That's not convincing. Of the apps that do get used, the vibe-coded ones will likely be unsafe.
> If a user is confident enough about a no name company that they give them enough info to make identity theft a possibility
That's completely unrelated. You can give a company very little information. Any of it being leaked is unacceptable. You can find a lot from an email, or a phone number.
People are taught, by CNBC, by suits, by hacks, that you can trust the apps on your commercials and it will be fine. It likely won't be, and your response is exactly why. Many of you are apathetic to the idea of doing right by people.
So people are manipulated, and some of them are elderly and don't even understand how computers work. This is reason enough to care about what they are exposed to, not say "let's burn it all down with shitty vibe-coding because users are dumb anyway."
We're supposed to be better than this.
> Of the apps that do get used, the vibe-coded ones will likely be unsafe.
What's the threat, though? As in, what's at risk? A leaked email address? Probably. Enough info to have your identity stolen, as the prior commenter mentioned? Probably not.
> That's completely unrelated.
Umm, no, it's related because the prior commenter claimed identity theft was the risk in the contrived situation from their prior post.
> Any of it being leaked is unacceptable. You can find a lot from an email, or a phone number.
Everyone's email has already been leaked somewhere. It's not private data. This is like saying your bank account number is confidential financial information and ignoring the fact it's printed on every check you write.
> Many of you are apathetic to the idea of doing right by people.
> We're supposed to be better than this.
I object by simply saying I'm just being realistic. Data leaks somewhere, everywhere, sometimes, always. You're choosing to live in a fantasy land where this doesn't happen as if it wasn't the very true state of the world long before vibe coding came along. Sure, it's not my ideal state. But it is the actual state of things. Get real.
Vibe coded apps are by definition less secure. The more vibe coded apps, the more risk to users' data. Nothing you've said changes these facts.
That you think vibe coded apps may not collect PII, or that all PII has already been leaked is not at all realistic.
This is the same thinking as "PHP is unsafe, thus you can't use PHP." Meanwhile, PHP is running countless billions in commerce just fine every day. Sure, vibe coding has most likely not gone through some common-sense security checks: SQL injection is likely a higher risk, XSS risks, etc. But I just don't believe your assertion of risk is realistic either. There's always a risk.
I use AI and PHP, so I'm not someone unfamiliar with either.
> This is the same thinking that PHP is unsafe thus can't use PHP.
No, it's not the same because you code in a programming language. You don't code with vibe code, you let something else tell you there's code and you don't look at it. It's different on every level. Unless you're copy-pasting without understanding the code, which, as far as I'm concerned, is just as bad.
> Meanwhile, PHP is running countless billions of commerce just fine every day.
It is famously *NOT* running just fine. It *CAN* run just fine. But the freedom of what you can do in PHP and the low barrier to entry has led to Frankenstein apps with higher than average security issues. I work in legacy software, lots of PHP apps.
You seem to be under the impression that people are saying all apps were secure before vibe coding. That is not the case. But the scale of risk is far greater. Programming safely requires diligence. Instead you're saying "well maybe if we pay even less attention to what we're writing, it will be just as safe." That's irresponsible.
> There's always a risk.
There's a risk of me dying in a car accident. That doesn't mean I'm going to let my toddler drive for me.
My feeling is that this is similar to saying, "non-professional AirBnB hosts are a terrible security nightmare, and the fact that people are not much safer in regulated hotels is no excuse to encourage even less security by encouraging novices to play in the hospitality business".
I agree with you on the downsides.
AirBnB's externality is not the safety risk for guests (though I personally ended up in some sketchy situations years ago; I don't use it anymore). The real externality is imposed on the inhabitants of popular tourist destinations.
There was a reason the industry was regulated, and circumventing these reasons with an app has been a net negative to society.
I really wanted to like Supabase, and decided to adopt it as the back end for a mobile app I'm building. So... I was invested to some extent.
But I had to abandon it after wasting weeks trying to do simple things. The biggest problem is the lack of documentation. Fundamental parts of the system are undocumented, like the User table. There's no doc on how the columns function, so I couldn't determine why a user is marked as "confirmed" (presumably through E-mail or other validation) immediately upon insertion to the table.
There's also no full documentation of client-library syntax. For example the Swift library: There are a few examples of queries, but no full documentation on how to do joins (for example).
And just try to use your own certificates; something that I've been doing for years during iPhone-app development was impossible with Supabase.
And why? Because these simple scenarios appear to be distant outliers for Supabase. It's as if nobody has ever brought them up before; and even if they have, nobody has been able to answer the first questions about them.
If you're not building a single-page Web app that just lets people browse a database, Supabase doesn't seem to envision your application.
So I went back to a plain Deno back-end, which is what I was building before trying Supabase. In the amount of time I wasted trying to scrounge up documentation and fruitlessly asking questions in forums and Discord, I was able to learn and implement authorization, and then get back to work building a product.
Maybe all this money will let the Supabase team hire some people to document their product.
Let’s hope that a tiny portion of the $200M goes towards documentation. If they spent $5k on professional writers they could get something useful. For $50k something great. And for $500k they could have an entire suite of highly produced explainer videos with great post production.
$5k won’t even get you native English-language writers; $50k might get you one. One decent writer…for 6 months. Y’all really don’t know the value of skilled non-software-engineer professionals, do you?
Why not just throw AI at it? Seems to be the best use case. So get a startup to fix this startup…so on and so forth…
Citation: professional writer with technical writing experience (also out of work)
My mom was a writer all her life. The last 5 years she's done more and more editing. After LLMs kicked off, and I mean to the month of them starting to hit headlines her work plummeted, then spiked with tons of garbage, and is now levelling off with her usual workflow.
For me, that was a strong signal that everyone gave it a go, found it too difficult to generate quality stuff, and reverted.
Good luck to you regardless.
You are on the money. I just went back to technical writing after decades in software development for, well, some very very well-known companies.
Good luck to you. I ended up at a company that makes non-software products but really wants (and needs) to modernize their doc-production pipeline.
>Because these simple scenarios appear to be distant outliers for Supabase
You've only talked about 2 things: lack of documentation (which I somewhat agree with) and using custom certificates. Custom certificates are not a "simple scenario" and I don't blame Supabase for not spending time on this. In fact I would prefer they work on other things (like documentation!).
It is 100% a simple scenario. I can't speak for Android, but you have to use HTTPS now for calls in an iPhone app if you want to get it approved. That means you need to deploy certificates to your test devices, simulators, and development machine.
"Lack of documentation" speaks to several apparently routine use cases being outliers; otherwise, they'd be documented. I already talked about the User table Supabase provides (and populates in unexpected ways), and about the Swift library, which gives you no reference for formulating joins... another critical and expected ability.
That is the root of the problem with these batteries-included frameworks: lock-in.
Once you encounter a problem they either don't want to solve or haven't solved, your only choices are either:
- start layering on hacks (in which case you quickly get into case where no one and nothing else could help you)
- decide not to do that-thing
- do a rebuild to get rid of the batteries-included.
Personally I think something like Supabase is great for toy projects that have a defined scope and no future, or for a very early startup that intends to rebuild entirely. Just my opinion though; maybe others feel more comfortable with that level of lock-in.
Even something like Heroku is miles better because they keep everything separated where your auth, database, & infrastructure aren't tightly coupled with a library.
By comparing supabase to Heroku, you demonstrated that you don't actually understand what it is...
Having worked with it quite a bit I'm still not sure I really understand what it is, which sounds like a bizarre sentence but:
It's Postgres, but bundled with some extensions and PostgREST. And a database UI. But hosted, and it also runs locally by pulling the separate parts. Running it locally has enough issues, though, that I found it easier to run a docker compose of the separate parts from scratch, and at that point to just carry that through to deployment. At which point, is there still a reason to use Supabase rather than another hosted Postgres with the extensions?
It's a bit of a confusing product story.
I really love supabase. And I’m glad they are getting some funding because I’m terrified they’ll get bought by Amazon or google and completely ruined.
The developer experience is first rate. It’s like they just read my mind and made everything I need really easy.
- Deals with login really nicely
- Databases for data
- Storage for files
- Both of those all nicely working with permissions
- Realtime is v cool
- Great docs
- Great SDK
- Great support peeps
Please never sell out.
I have no idea where you're finding "great docs" and "great support," because the lack of both of those drove me away from Supabase after having invested quite a bit of time and effort in it.
Which parts did you have a problem with?
Is it the PostgREST part? Are you using it for simple queries, or are you trying to use it for complex business logic?
Asking because PostgREST is great when you use it the way it’s intended but like any tool it will underperform when used in a way it’s not supposed to. It’s a screwdriver that you shouldn’t use to hammer nails.
I didn't see any purpose to the PostgREST part as a back end to an application, because I'm not going to hard-code queries in my application. My server is going to provide an API that isolates the application from the DB structure.
So no... PostgREST wasn't a factor for me at all.
> My server is going to provide an API that isolates the application from the DB structure.
The same can be achieved with "schema isolation". See https://docs.postgrest.org/en/v12/explanations/schema_isolat....
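For readers unfamiliar with the pattern: schema isolation means exposing only a dedicated schema of views and functions over the HTTP API, so clients never touch the underlying tables. A minimal sketch (schema, table, and column names hypothetical):

```sql
-- Private tables live in a schema the API never sees.
create schema private;
create table private.accounts (
  id            uuid primary key,
  email         text,
  password_hash text
);

-- Only this schema is exposed through PostgREST; the view defines
-- the public API shape, decoupled from the table structure.
create schema api;
create view api.accounts as
  select id, email from private.accounts;
```

Restructuring `private` later only requires updating the view, which is the isolation the parent commenter wanted from a hand-written API layer.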
Thanks for the reference. I'll take a look.
It seems your opposition to it is philosophical and/or based on assumptions (that one must "hard-code queries in the application"), or limitations of Supabase's Swift library.
I'm sorry you had a bad experience with this kind of tool, but I hope that one day you choose to revisit it.
"Seems" how? I gave specifics, not assumptions. I already explained WHY an auto-generated HTTP API to your database caters to the hard-coding of queries in the application. And the shortcomings of Supabase's Swift library (or its doc) are neither philosophical nor assumed.
So... what are you talking about?
You keep saying “this does X which is bad” while at the same time saying “if it doesn’t do X it is useless”.
To me this is a 100% philosophical opposition.
People find this tech useful. You might prefer writing your own backend. Claiming it doesn’t save time for people who use it to save time however doesn’t make much sense, does it?
> And the shortcomings of Supabase's Swift library (or its doc) are neither philosophical nor assumed.
I didn’t say it was.
The product story is that people want to build apps and naturally find themselves having to handle:
- remote state
- authoritative logic that can't run solely on the user's device because you can't trust it
- authentication
each of which is annoying when you're focused on building the user-facing app experience. Supabase solves all three without you needing to touch any infrastructure. The self-hosting thing just provides insurance that users are not completely locked in to their platform, which is a big concern when you're outsourcing basically your entire backend stack.
it's just a firebase competitor, that's based on postgres and you can run sql against it if you want.
It's also implied, and proven by some, that having access to Postgres means you can up and leave Supabase if you want to later. It won't be snap-your-fingers easy, but it's more direct than other hosted SaaS where you can't access your data or the schemas.
that "just" is carrying a lot of weight there
exactly this
[dead]
Totally agree. I read all kinds of articles and posts and asked for opinions and explanations, to see if I should use Supabase to build a back end for a mobile app.
In the end I jumped into it wholeheartedly, mainly because I wanted a canned solution for authorization and user confirmation. But soon I came up against obstacles I had already easily overcome with plain Deno, but that were seemingly insurmountable with Supabase.
When one basic use-case after another turned out to be almost wholly undocumented and unexplored by the Supabase docs and community, I concluded that Supabase is really only suited for people building Web back-ends that let people browse a database.
As an application back-end, its marquee features don't add value or are basically irrelevant... as far as I can see. The rest of it is incomplete and/or undocumented, with client libraries being an example.
You are not wrong that it's postgres + extensions. However, the tech market is very big now and can sustain these valuations.
Not really a confusing story: it's a PaaS that wants to beat fears of becoming another Parse (https://www.willowtreeapps.com/craft/parse-shutdown-what-it-...)
Realistically 99% of users would still be screwed if they ever shut down, regardless of whether it's open (see: Parse)... but it gives people some confidence to hear they're building on a platform that they could (strictly in theory) spin up their own instance of, should a similar rug pull ever occur
They have also been giving back to postgres some of their extra work, and also their real time stuff i think is on erlang?
I agree you might prefer to choose the stack yourself, but for total n00bs and vibe coders supabase is a great start / boilerplate vs say the MEAN stack that was a hit 5y ago
I’ve been using Hasura and PostgREST for a few years now with real big production apps, in enterprise and in startups, and honestly the only problem with them is that backend engineers feel threatened.
They are great products that cover 95% of what a CRUD API does without hacks. They’re great tools in the hands of engineers too.
To me it’s not about vibe coding or AI. It is that it's pointless to reinvent the wheel on every single CRUD backend once again.
Experienced backend dev here who also uses Hasura for work at a successful small business. I think it's great at getting a prototype to production and solves real business problems that a solo dev could do by himself. As engineer #2 it's a mess, and it doesn't seem like a viable long term strategy.
I've only worked with Hasura, but I can say it's an insecure nightmare that forces anti-patterns. Your entire schema is exposed. Business logic gets pushed into your front end because where else do you run it unless you make an API wrapper. Likewise you can't easily customize your API without building an API on top of your API. You're doing weird extra network hops if you have other services that need the data but can't safely access it directly. You're pushed into fake open source where you can't always run the software independently. Who knows what will happen when the VC backers demand returns or the company deems the version you're on as not worth it to maintain compared to their radically different but more lucrative next version.
I think the people who write this off as "backend engineers feel threatened" aren't taking the time to understand the arguments they're hearing
"Business logic gets pushed into your front end because where else do you run it unless you make an API wrapper."
Exactly. This is one of the things I never understood about Supabase's messaging: The highly-touted, auto-generated "RESTful API" to your database seems pointless. Why would I hard-code query logic into my client application? If my DB structure changes, I have to force new app versions on every platform because I didn't insulate back-end changes with an API.
Why would anyone do this?
> If my DB structure changes, I have to force new app versions on every platform because I didn't insulate back-end changes with an API.
To avoid the above problem, it's a standard practice in PostgREST to only expose a schema consisting of views and functions. That allows you to shield the applications from table changes and achieve "logical data independence".
For more details, see https://docs.postgrest.org/en/v12/explanations/schema_isolat....
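To make that concrete, here's a minimal sketch of what the client side looks like under schema isolation. The `?column=eq.value` filter and `order=` syntax is PostgREST's; the base URL, view name, and values are made up for illustration, and no network call is made.

```python
from urllib.parse import urlencode

def postgrest_url(base: str, resource: str, params: dict) -> str:
    """Build a PostgREST-style request URL (no network call is made)."""
    qs = urlencode(params)
    return f"{base}/{resource}" + (f"?{qs}" if qs else "")

# The client only ever addresses an exposed *view* (user_messages),
# never the underlying tables, so renaming or splitting those tables
# later doesn't break already-deployed clients.
url = postgrest_url(
    "https://api.example.com",
    "user_messages",  # a view in the schema PostgREST exposes
    {"user_id": "eq.123", "order": "created_at.desc"},
)
print(url)
# https://api.example.com/user_messages?user_id=eq.123&order=created_at.desc
```

The point being: the request shape is the same one you'd hand-design for a traditional endpoint, only you didn't have to write the endpoint.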
Thanks. If you're writing functions, though, it seems like nearly as much work as writing traditional endpoints, no?
Not really, the work is much reduced.
1. If your function returns a table type, you can reuse all the filters that PostgREST offers on regular tables or views [1].
2. The SQL code will be much more concise (and performant, which leads to less maintenance work) than the code of a backend programming language.
3. The need for migrations is a common complaint, but you can treat SQL as regular code and version control it. Supabase recently released some tooling [2] that helps with this.
[1]: https://docs.postgrest.org/en/v12/references/api/functions.h...
[2]: https://supabase.com/docs/guides/local-development/declarati...
Nobody but you is forcing you to put the “business logic” in the frontend.
Both those techs might make this look convenient, but engineering rules must still be followed.
Frontend should do validation and might have some logic that's duplicated to avoid round-trips… but anything involving security, or that must be tamper-proof, must stay on the server, or if possible be protected by permissions.
There are whole classes of applications that can be hosted almost entirely by Supabase or Hasura. If yours isn’t, it doesn’t mean you should force it.
Who said anything about forcing? I asked what the value of Supabase's most highly-touted features are, when they CATER TO the movement of such things as query logic to the front end. What else are you doing with an auto-generated RESTful HTTP "API" to the database?
I also didn't mention security, let alone promote moving it to the front end.
You are the one mentioning “Why would I hard-code query logic into my client application?”
The answer is: you wouldn’t. That’s not the point of any of those tools.
Yep, I'm the one. And the question stands.
What is the point of an auto-generated HTTP API to the database, if not to let clients formulate queries? And why would you do that?
PostgREST creates the same type of CRUD endpoint that one would create when writing a traditional backend with an (eg) MVC framework, and it does this without requiring a developer and with complete consistency.
If by "letting the client formulate queries" you mean "filter posts by DidYaWipe, sorting by date", this is also what traditional CRUD backends do.
I wouldn't write a back end with an MVC framework, since it's not doing any presentation whatsoever.
If PostgREST auto-generates three-table joins automatically to resolve many-to-many relationships and presents an appropriate endpoint, that's interesting.
Yes, it does many-to-many joins automatically: https://docs.postgrest.org/en/v12/references/api/resource_em....
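For anyone skimming: the request shape for that is just a nested `select=`. A tiny illustration (table and column names hypothetical, no network call):

```python
# PostgREST "resource embedding": a many-to-many relationship
# (books <-> authors through an unexposed books_authors join table)
# collapses into a single GET by nesting the related resource inside
# select=. One request returns each book with its authors nested.
base = "https://api.example.com"
embed_url = f"{base}/books?select=title,authors(name)"
print(embed_url)
# https://api.example.com/books?select=title,authors(name)
```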
Thanks for the reference. I'll check it out!
> As engineer #2 it's a mess
As a long-time Hasura stan, I can't agree with this in any way.
> Your entire schema is exposed
In what sense? All queries to the DB go thru Hasura's API, there is no direct DB access. Roles are incredibly easy to set up and limit access on. Auth is easy to configure.
If you're really upset about this direct access, you can just hide the GQL endpoint and put REST endpoints that execute GQL queries in front of Hasura.
> Business logic gets pushed into your front end because where else do you run it unless you make an API wrapper
> Likewise you can't easily customize your API without building an API on top of your API. You're doing weird extra network hops
... How is an API that queries Hasura via GQL any different than an API that queries PG via SQL? Put your business logic in an API. Separating direct data access from API endpoints is a long-since solved problem.
Colocating Hasura and PG or Hasura and your API makes these network hops trivial.
Since Hasura also manages roles and access control, these "extra hops" are big value adds.
> You're pushed into fake open source where you can't always run the software independently
... Are you implying they will scrub the internet of their docker images? I always self-host Hasura. Have for years.
> I think the people who write this off as "backend engineers feel threatened" aren't taking the time to understand the arguments they're hearing
I think your arguments pretty much sum up why people think it's just about backend engineers feeling threatened - your sole point with any merit is that there's one extra network leg, but in a microservices world that's generally completely inconsequential.
I completely disagree.
Backends are far messier (especially when built over time by a team), more expensive, and less flexible than a GraphQL or PostgREST API.
> I've only worked with Hasura, but I can say it's an insecure nightmare that forces anti-patterns
Writing backend code without knowing what you're doing is also an insecure nightmare that forces anti-patterns. All good engineering practices still need to apply to Hasura.
Nothing says that "everything must go through it". Use it for the parts it fits well, use a normal backend for the non-CRUD parts. This makes securing tables easier for both Hasura and PostgREST.
> Business logic gets pushed into your front end because where else do you run it unless you make an API wrapper. You're doing weird extra network hops if you have other services that need the data but can't safely access it directly
I'm gonna disagree a bit with the sibling post here. If you think that going through Hasura for everything is not working: just don't.
This is 100% a self-imposed limitation. Hasura and PostgREST still allow you to have a separate backend that goes around it. There is nothing forbidding you from accessing the DB directly from another backend. This is not different from accessing the same database from two different classes. Keep the 100% CRUD part on Hasura/PostgREST, keep the fiddly bits in the backend.
The kind of dogma that says that everything must be built with those tools produces worse apps. You're describing it yourself.
> I think the people who write this off as "backend engineers feel threatened" aren't taking the time to understand the arguments they're hearing
I have heard the arguments and all I hear is people complaining about how hard it is to shove round pieces into square holes. These tools can be used correctly, but just like anything else they have a sweet spot that you have to learn.
Once again: "use right tool for the job" doesn't mean you can only use a single tool in your project.
I've only played with these kinds of plug-and-play databases, but mixing and matching seems like the worst of both worlds. The plug-and-play is gone, because some things might be in API 1, some others in API 2, and maybe worst of all, their domains might overlap. So you need to know that the "boring" changes happen via PostgREST, but the fancier ones via some custom API. The APIs will probably also drift apart in small ways, making everything even more error-prone.
What you say is also true for situations where you use an ORM alongside raw queries, or a direct MVC approach alongside business service libraries, both of which are common in backend apps. Or even having two different sets of APIs.
What sounds like the worst of both worlds to me is forcing Supabase/Hasura to do what it isn't good at, or forcing a traditional backend to do the same thing those tools can do but taking 10x the time and cost.
My experience was super positive and saved a lot of coding and testing time. The generated APIs are consistent and performant. When they don’t apply, I was still able to use a separate endpoint successfully.
I like PostgREST for some of its use cases (views mostly), but the issue I have with it is that I don't often want a user to have direct access to the database, even if it's limited to their own data.
Mike can edit his name and his bio. He could edit some karma metric that he's got view access to but no write access to. That's fine, I can introduce an RLS policy to control this. Now Mike wants to edit his e-mail.
Now I need to send a confirmation e-mail to make sure the e-mail is valid, but at this point I can't protect the integrity of the database with RLS because the e-mail/receipt/confirm loop lives outside the database entirely. I can attach webhooks for this and use pg_net, but I'd quickly have a lot of triggers firing webhooks inside my database, and now most of my business logic is trapped in SQL, at the mercy of how far pg_net will scale with the growing number of triggers on a growing database.
Even for simple CRUD apps, there's so much else happening outside of the database that makes this get really gnarly really fast.
> Now I need to send a confirmation e-mail to make sure the e-mail is valid, but at this point I can't protect the integrity of the database with RLS because the e-mail/receipt/confirm loop lives outside the database entirely
Congratulations: that's not basic CRUD anymore, so you ran into the 5% of cases not covered by an automatic CRUD API.
And I don't see what's the dilemma here. Just use a normal endpoint. Keep using PostgREST to save time.
You don't have to throw the baby out with the bathwater just because it doesn't cover 5% of cases the way you want.
It's a rite of passage to realize that "use the right tool for the job" means you can use two tools at the same time for the same project. There are nails and screws. You can use a hammer and a screwdriver at the same time.
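Concretely, the "normal endpoint" for the e-mail example upthread is small. A hedged Python sketch: the in-memory dicts stand in for real storage, actually sending the e-mail is left as a comment, and all names are made up.

```python
import secrets

class EmailChangeFlow:
    """Sketch of an e-mail change confirmation loop that lives outside
    the database (the part RLS can't cover)."""

    def __init__(self):
        self.pending = {}  # token -> (user_id, new_email)
        self.users = {}    # user_id -> confirmed email

    def request_change(self, user_id, new_email):
        token = secrets.token_urlsafe(16)
        self.pending[token] = (user_id, new_email)
        # Real app: e-mail `token` to new_email here.
        return token

    def confirm(self, token):
        user_id, new_email = self.pending.pop(token)  # KeyError = bad token
        self.users[user_id] = new_email  # only now touch the "database"
        return user_id

flow = EmailChangeFlow()
token = flow.request_change(42, "mike@example.com")
flow.confirm(token)
print(flow.users)  # {42: 'mike@example.com'}
```

Everything else (the plain CRUD reads and writes) stays on the auto-generated API; only this loop needs a hand-written endpoint.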
>You can use a hammer and a screwdriver at the same time
How do you balance the nail and screw? I'm serious, I'm trying to picture this, hammer in one hand, screwdriver in the other, and the problem I see here is the nail and screw need to be set first, which implies I can't completely use them both at the same time.
Perhaps my brain is too literal here, but I can't figure how to do this without starting with one or the other first
I'm going to answer this using Firebase, which Supabase is supposed to be a copy of.
There are 2 parts to using Firebase: the client SDK and the admin SDK.
The client SDK is what's loaded in the front end and used for 95% of use cases like what u/whstl mentions.
The admin SDK can't be used in the browser. It's server-only and is what you can use inside a custom REST API. In your use case, the email verification loop has to happen on a backend somewhere. That backend could be a simple AWS Lambda that only spins up when it gets such a verification request.
You're now using a hammer for the front end and a screw driver for the finer details.
Yep. Exactly the same for PostgREST/Supabase.
The equivalent to Firebase's "Client SDK" is just the PostgREST API, which needs to be secured.
The equivalent to Firebase's "Admin SDK" is the PostgreSQL connection string, which can be used like a normal PostgreSQL.
At the same time: in the same project.
Some projects require nails, other require screws, some might require both.
Instead of hammering screws (or in this case reinventing a screwdriver), just use an existing screwdriver. That’s what I mean: don’t reinvent the solved problem of CRUD endpoints when applicable to the endpoint. Nothing says you can’t use two techs per project.
"...an automatic CRUD API. And I don't see what's the dilemma here. Just use a normal endpoint. Keep using PostgREST to save time."
Where in my message does it say or imply that you should "hard code queries in your client application"?
EDIT: What I'm advocating here is the opposite: use those tools for CRUD so that your frontend looks exactly the same as it would with a regular backend. If the tool is not good for a given case (like the example), just use a regular endpoint in whatever backend language or framework. Don't throw out the baby (the 95%) with the bathwater (the 5%).
By “just use a normal endpoint” I mean “write a normal backend for the necessary cases”.
Of what use is an "automatic CRUD API" if you're not putting query logic in your client application?
With Supabase or Hasura you would write the same client code you would write if you were using a traditional backend.
At least when used correctly, but honestly I can’t see a situation where it’s easy to do otherwise for queries.
The utility is in not having to write a lot of repetitive endpoints in a traditional backend.
What exactly do you mean by “query logic in the client code”?
I mean instead of doing a GET on an endpoint called userMessages with an ID parameter, you're formulating a join in the client between specific tables.
That's not necessarily true.
In PostgREST, if userMessages is a table in itself, you do get an endpoint called /userMessages.
If the table is called messages and you want to get messages from a user, you can just request something like /messages?user_id=123. And if user_id must be your own user_id, you can just skip passing the parameter, thanks to RLS.
If userMessages is a join between two tables and you don't want to let the frontend know about it, you can use a view and PostgREST will expose the view as an endpoint.
Once again, there is no "need" to formulate joins in the frontend to reap the benefits of this tool.
I don't do anything close to "formulating a join in the client" with PostgREST and I still use it to its full extent, and it does save time.
EDIT: If one wants to formulate more complex joins in the frontend, then they probably want something like Hasura instead. Once again: complex queries in the frontend is BY NO MEANS mandatory, you can still use flat GraphQL queries and db views for complex queries. PostgREST OTOH is about keeping it simple.
Thanks for the reply. If your database is normalized to any degree and you have multi-way relationships, I don't really see significant payoff from the auto-generated API vs. writing traditional queries and endpoints.
I have used them too, and I would say that at least for Hasura, performance can be poor for the generated queries. You have to be careful. Especially since they gate metrics behind their enterprise offering.
When you use their SaaS offering, it's a good product. Self hosted is a different story. Massive stack that reinvents the wheel for every component, lack of documentation, breaking changes between versions all the time (although this has gotten better lately).
It feels like it's Open Source mainly for the sake of good PR, not to be actually useful.
World still needs a replacement for Microsoft Access on the web.
It’s been so long that new tools are solving parts of the Access spectrum seemingly without being aware of it.
Supabase and others would have a smaller lift adding an app layer and a reporting layer to their tool, since data is the cornerstone there, not an afterthought.
I consider myself fairly technical and don't think Supabase or Neon suck, but they get quite expensive once you need a mid-size DB. If I only needed a small DB I wouldn't hesitate a second to get one of them.
Not until/unless it has proper offline-first support. Check out InstantDB and Triplit.
It's the same h4x0rs who would build facebook in a weekend, but they didn't
How much are you paying per month?
I’m with you, supabase is a fantastic product.
It's all fun and games until you need caching - something that comes at unspecified cost from when I looked into it.
So you're saying it's something like an updated version of Yahoo Small Business?
It is really good for getting started but ultimately our companies transition off of it.
[dead]
That’s a lot of money.
What’s Supabase’s exit strategy? Are they sustainable long term as a standalone business?
You can also see how money is starting to chase “vibe coding” — as long as you say the magic words, even if your product is only tangentially related to it, you can get funding!
Reading the tea leaves, Series D means they opted for more funding vs. IPO. They claim to have 2 million users, but they're open core, so how many are paying? Maybe their books aren't looking that great. Wall Street doesn't understand database vendors outside of "big data", so they're probably hoping for acquisition. Not sure who would buy them though, as PostgreSQL vendors are kind of a dime a dozen these days...
If lovable, bolt.new, etc. keep integrating with them, that's a money maker without needing to do much sales. There's a wave of AI tools that need somewhere to save state, and Supabase provides that. I'm absolutely amazed others haven't jumped on the same ship yet.
That definitely seems to be the play. Keep funneling in users from Lovable/bolt.new and keep building revenue or hope to be acquired if one of those vibe coding tools gets huge.
In the short term yes, but why would you as an end user of lovable and similar tools prefer expensive Supabase? If you already have an AI developer at your disposal you might as well make it figure out how to properly set up and maintain AWS, right? Maybe it can't do it now -- too complicated because there's no simple AI-accessible interface, and LLM models are simply not smart enough yet, but I imagine it will change soon.
> Not sure who would buy them though, as PostgreSQL vendors are kind of a dime-a-dozen these days...
Supabase defo has a much higher mindshare.
Sure, but ultimately they're still just selling something that is already free and wrapping AWS. These business models aren't sustainable unless you trash your free product, which also isn't sustainable. Presumably they have a good deal with their cloud vendor, AWS I think, but I think it's safe to assume they lose A LOT on their free products.
The whole premise of cloud hosting businesses is that people don't want to manage stuff themselves.
They end up managing stuff themselves anyway. Plus managing another kind of bills.
AWS seems to be doing fine.
> so how many are paying
This is like if Google Spanner were open sourced tomorrow morning: realistically how many people are going to learn how to deploy a thing that was built by Google for Google to serve an ultra-specific persona?
Maybe you might get some Amazon-sized whale peeking at it for bits to improve their own product, but the entire value prop is that it's a managed service: you're probably going to continue paying for it to be managed for you.
IMO it also depends on how the whole process is tied together.
I always loved Vercel for their easy hosting of Next.js with included CI/CD, but I recently switched to self-hosting - their pricing switched from a flat, worry-free $20/month to an unpredictable whatever-it-may-cost plus it sent me 10+ emails every single month about hitting some quotas that they introduced and I couldn't find a good way to stop that.
A lot are paying, including me for multiple projects. They have a pretty good offering. I used to use them for dev and prod, but now using neon for dev. Supabase still for prod. I had switched from mongo to supabase. I may switch to neon for prod but not in a rush.
They also offer so much more than just postgres. Though I use them only for postgres myself.
Since you use both supabase and neon, any particular strength or weakness to keep supabase for prod? I just moved my app to neon today (easy enough to test it!) and am enjoying the auto-scaling features and UI is great on neon. But I'm curious about how supabase stacks up.
Supabase feels less flexible. Also it tries to do many other things so I don't think they can focus fully on the db side. However it still works well enough for production so cannot complain too much. I haven't done benchmarks for performance latency etc though. I should!
A lot of people don't self-host it, even though it is open core. This is due to their docs being garbage and tons of differences between the offerings, so you can't even rely on the main docs if you're self-hosting.
It's easier to just become familiar with a DB UI tool like Beekeeper or DataGrip and spin up your own things. I'm also not a huge fan of being "locked-in" to so many things (including their auth). I think most projects would be better off keeping these parts separated, even if they are using third-party services to handle them, as it would be way less overhead to migrate out.
it's an aggressive preemptive round, so I'd guess $2B / 50 ≈ $40M of revenue. Probably low margins, given the free-tier / hosted-Postgres nature of the business.
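Spelling out that guess (the 50x revenue multiple is an assumption, not a disclosed figure):

```python
# Back-of-envelope: price an aggressive preemptive round at ~50x
# revenue (assumed multiple) and invert it to estimate revenue.
valuation = 2_000_000_000   # $2B round valuation
assumed_multiple = 50
implied_revenue = valuation / assumed_multiple
print(f"${implied_revenue / 1e6:.0f}M")  # $40M
```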
> What’s Supabase’s exit strategy? Are they sustainable long term as a standalone business?
Acquisition best case, Private Equity worst case.
Do you see Supabase going public on the stock market? Perhaps unless they do what Cloudflare did and replicate AWS, it may be hard to see a stock market debut.
Could be wrong though.
Supabase is basically AWS Postgres under the hood. It's popular amongst hobbyists and small teams but I'm not sure whether any large teams actively use it. Once you're past the point of serious business, it's much more cost effective to host everything by yourself.
Supabase at its minimum provides a PostgreSQL server, a pooler (they started on pgbouncer, now their own), and a PostgREST API, plus support, backup, logging, etc. You can be doing serious business and not have the time/people to run these reliably self-hosted. They also provide Auth almost to the Auth0 level, Edge Functions like Vercel, S3-like storage (sharing the db's permission system), and websocket/presence backed by Elixir. TBH they are a compelling value, at least for us.
Granted, Supabase does provide a lot under the hood, and it's excellent for whipping up a quick MVP, but I wouldn't count it production-ready even today, simply because of the downtime I've experienced at times (even if their dashboards say otherwise). And it's not just an isolated case - a lot of folks on reddit complain about the same issues. Perhaps your treatment is different because of the high spend you guys have.
What is serious business? I think supabase can scale brilliantly, and it doesn't lock you in: if you need some special infra you can build and integrate it. I don't know, but you could possibly even use FDW to access a postgres you run yourself.
Also they can't run on AWS postgres with all their postgres plug-ins AFAIK.
The point of "cheaper to host everything yourself" is a lot higher than what most estimate.
My only concern is that if supabase goes out of business or goes evil you're gonna have a bad time; however, everything is open-source
Serious business is when you need to maintain uptime and stability. Not just me, but a lot of folks on the Supabase reddit have complained often about the insane downtimes that we've experienced at times with the platform. I would 100% use it for prototypes and MVPs, but for production? Neither me nor a lot of others would touch it with a pole, even though I'm sure your experience might be different.
My former place ran a lot of RDS Postgres but also loved Supabase. It's more than just hosted DB because it has loads of value adds like web-based table editing, auth, edge functions, row-level security, easy hooks and triggers. We were capable of operating RDS but the cost of operations in dev hours was high. Supabase was super easy for moderate price and readily compatible with our RDS and Redshift.
I don't disagree. Supabase does provide a lot of functions under the hood that one would have to build out individually otherwise. I really like their lock-in model - you're not locked in with your data, but because of the extra functionality that they provide to your database.
> Are they sustainable long term as a standalone business?
It's bananas to me that questions like these could be unanswered even 5 years after the business started. This possibly cannot be the most efficient way for finding new solutions and "disrupting" stale industries?
> It's bananas to me that questions like these could be unanswered even 5 years after the business started.
Those are rookie numbers, Discord is coming up on 10 years old and has made zero dollars to date, yet is supposedly considering an IPO soon.
It's very common for tech companies to go public without being profitable. As long as their growth is good and they have a reasonable story for how they will achieve profitability, then it typically makes sense. Of course every company is different and not all will reach their profitability goals post-IPO, but in many cases, it wouldn't make sense to wait for profitability before going public.
Also reasons they need to go public. Growth is costly.
Discord has a fairly successful subscription product that is generating tens of millions in revenue. They most certainly have made more than 0 dollars. Profitable? Less likely.
Yeah I meant zero profit, poor wording on my part.
Discord has the user growth to make up for it. It’s practically a household name on the level of Google now. I see many people boosting servers too, so clearly they are making a bunch of money.
The IPO is how (the shareholders) make money, by selling to bagholders.
> This possibly cannot be the most efficient way for finding new solutions and "disrupting" stale industries?
The thing is, the people with far more information than we have, and with actual money on the line, think this is a good use of their money. They're not always right, of course, but the industry as a whole is profitable and is innovative and "disruptive".
So, yes, this can be a good way for finding new solutions. The most efficient? IDK but it's the best we've come up with so far.
What's really bananas is that your comment is just as relevant today as it would have been 15 years ago. It's been bananas for a while now.
Google bought firebase, so my guess is they are aiming for an Amazon or Microsoft acquisition.
To be fair, their $2B valuation is probably the most reasonable valuation we've seen in years. That doesn't negate the question of how they plan to turn a profit.
If they truly have 3.5 million databases, that's only ~$500 per database to recoup the investments, which doesn't seem too crazy. Companies like OpenAI or Twitter/X are never going to be profitable enough to cover what they've already spent/cost. Supabase could, because the amount is so much lower and they have paying customers, but I'd like to emphasize the "could".
Acquisition. All of these VC companies raise unsustainable levels of money in hopes of acquisition or IPO. Supabase seems to be leaning towards acquisition.
They are definitely creating some value. Managed database.
"ROI? Return on Investment?"
"No, Radio on the Internet."
and setting up postgresql on a simple VPS is so easy... You can literally ask Gemini 2.5 Pro or o3 or Sonnet 3.7 and do it in 15-30 minutes... Learned helplessness is really something and vibe coding is overrated imo: https://www.lycee.ai/blog/why-vibe-coding-is-overrated
i'm a bit brain fried right now but are you being sarcastic? typing out apt-get install postgresql takes a lot less than 15-30 minutes.
Yes, but setting up e.g. users and pg_hba might be something you'd need to research before your first Postgres deployment, even if you previously came from a managed Postgres service. Also, coming up with some sort of backup strategy would be a good idea.
But once you know these things, you could of course be faster.
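For the record, the users/pg_hba part is only a couple of lines once you know it. A minimal sketch (the role, database, address, and version in the path are all made up for illustration):

```
# /etc/postgresql/16/main/pg_hba.conf  (path varies by version/distro)
# TYPE  DATABASE  USER     ADDRESS       METHOD
host    appdb     appuser  10.0.0.5/32   scram-sha-256
```

Plus `createuser`/`createdb` for the role and database, `systemctl reload postgresql` to pick up the change, and something like a nightly `pg_dump` cron job as the simplest possible backup strategy.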
The greater fool strategy has worked well for unprofitable tech companies for decades, and shows no signs of slowing down
> Are they sustainable long term as a standalone business?
Was Meteor? They are exactly the same thing. And I really liked Meteor!
To me, the more money pouring in, the better. That said:
https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcRCVKYR...
(The Silicon Valley Economy cartoon)
What an absolute joke. Their exit strategy is presumably to keep chasing the high and find more ways to integrate AI. The era of building good software for fun and profit is coming to an end.
I think their product is sound; they've built essentially a backend-as-a-service platform on open-source software. That doesn't make it easy to run at scale, so you probably wanna use their paid offering unless you plan to hire a lot of staff to maintain it, but it is possible, and they support small-scale dev environments.
100% this; we have a 4-digit monthly spend. I guess I'll double-check when we reach 5 digits, but I can't afford my own time to self-host it yet.
Have you thought about possibly hiring a junior engineer to make the transition possible? I don't know what your use case is, but creating a solution that fits your needs would be worth it IMO. You're already spending a year's worth of dev salary, which can get you plenty of good talent around the world.
>Supabase is currently used by two million developers who manage more than 3.5 million databases. The startup supports Postgres, the most popular developer database system that’s an alternative to Google’s Firebase. Supabase’s goal: To be a one-stop backend for developers and "vibe coders."
How many of those users are paying? You can sign up for free without a credit card.
It's cool, for certain use cases. I ended up trying it for a few months before switching to Django.
If you ONLY need to store data behind some authentication and handle everything else on the frontend, it's great. Once you need some server-side logic it gets weird. I'm open to being wrong, but I found Firebase phenomenally more polished and easier to work with, particularly once you get to Firebase functions compared to edge functions.
Self hosting requires magical tricks, it's clearly not a focus for them right now.
I hope they keep the free tier intact. While it's not perfect, if you're in a situation where you can spend absolutely no money, you can easily use it for learning (or for a portfolio piece).
> Self hosting requires magical tricks
Has anything changed recently? ~1 year ago I installed a local instance (that I still use today for logging LLM stats) and IIRC all I had to do was "docker compose up". All the containers from that 1-year-old install still start at boot, to this day. (I use it on 127.0.0.1, so no SSE & stuff; perhaps that's where the pain points are? Dunno, but for my local logging needs it's perfect.)
Hosting it on an actual server with a URL is not a fun experience. You need to generate a specific type of string to get it to work.
This isn't documented anywhere. Deep deep in their GitHub issues you'll find a script for generating this magic string which needs to be set as an environment variable.
See https://github.com/supabase/supabase/issues/17164#issuecomme...
Looks like it's just an issue of correctly making a JWT token if you're not using their client libraries, but you can also just do it via their docs https://supabase.com/docs/guides/self-hosting/docker#generat... now (not sure how long that's been in the docs).
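For anyone hitting this: the "magic string" is just a JWT with a `role` claim (`anon` or `service_role`), HS256-signed with your stack's `JWT_SECRET`. A minimal sketch using only openssl — the secret and the `exp` value below are placeholders, not anything Supabase ships:

```sh
# Placeholder secret: must match JWT_SECRET in your self-hosted .env
JWT_SECRET="your-super-secret-jwt-token-with-at-least-32-characters"

b64url() {
  # base64url without padding, as the JWT spec requires
  openssl base64 -A | tr '+/' '-_' | tr -d '='
}

sign() {
  role="$1"
  header='{"alg":"HS256","typ":"JWT"}'
  # exp is an arbitrary far-future unix timestamp for this sketch
  payload="{\"role\":\"$role\",\"iss\":\"supabase\",\"exp\":1983812996}"
  signing_input="$(printf %s "$header" | b64url).$(printf %s "$payload" | b64url)"
  signature=$(printf %s "$signing_input" \
    | openssl dgst -sha256 -hmac "$JWT_SECRET" -binary | b64url)
  printf '%s.%s\n' "$signing_input" "$signature"
}

ANON_KEY=$(sign anon)
SERVICE_ROLE_KEY=$(sign service_role)
echo "ANON_KEY=$ANON_KEY"
echo "SERVICE_ROLE_KEY=$SERVICE_ROLE_KEY"
```

These go into the `ANON_KEY`/`SERVICE_ROLE_KEY` environment variables of the compose stack; the official docs' generator does the same thing in the browser.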
Sure, "it's in the docs", but last time our devops tried the compose file with its ~10 or so services, it took several days of fiddling with all sorts of different issues. It's just not made for self-hosting at all. It could be so much simpler, but JS devs like it different.
It can be set up in a day if you slap Traefik in front of it, change the env and compose files a bit, and run it with docker compose.
I admit I already had a working mail server and wildcard LE cert... let's say you'd need the other half of the day to set those up too, if wanted.
Personally I set it up in a way such that the studio is not publicly accessible but can be accessed using ssh port forwarding.
All in all, I still agree that it's not really user-friendly to self-host. It's basically limited to one Supabase project. In principle it shouldn't be that hard to create new template DBs in Postgres to set up multiple projects and provide a good UI for that, but they don't bother to provide that functionality, for what I believe are obvious reasons.
It has gotten a bit better lately, but it's still a pain.
Just recently the docker-compose stack from the current main branch would not start properly because someone committed a faulty health check. Why does this make it to the main branch? Does nobody there review PRs?
Starting a new stack is nothing compared to maintaining it. Upgrading to newer images requires carefully checking which environment variables suddenly appeared in new versions or maybe were renamed. Upgrades never really went absolutely smooth for me in the past.
Try using your own certificates. It's easy with Deno (for example) but as far as I could tell impossible with Supabase. Certainly it's undocumented, and that's a huge problem if you want to do real development.
If you're self-hosting it, at least put a reverse proxy in front of it to control what is actually accessible. You can easily slap Traefik in front, get LE certs automatically, and terminate TLS at that level.
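Roughly what that looks like, assuming the stack's Kong container is the only thing you expose (the hostname, router name, and cert resolver name below are made up; port 8000 is Kong's default HTTP port in the compose file):

```yaml
# docker-compose override sketch: let Traefik route and terminate TLS
services:
  kong:
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.supabase.rule=Host(`api.example.com`)"
      - "traefik.http.routers.supabase.tls.certresolver=letsencrypt"
      - "traefik.http.services.supabase.loadbalancer.server.port=8000"
```

Everything else (Postgres, the studio, etc.) stays off the public network, and you only open 80/443 on the box.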
It’s normal Postgres. There’s no need to handle everything on the front end. The tutorials nudge you to learn RLS and use their SDKs for the client, but you can write perfectly normal server side code as well.
Yeah, I've run a small project where I just did everything with the "service account" credentials, which operate like a normal Postgres connection.
If you're not supporting users, it's fine.
But if your use case involves Supabase auth, using a service account to bypass RLS is kind of like hardcoding connection strings.
You can use both properly and together.
The service account should only be accessed on the service.
If using Auth+Server, you can check the verified user identity via Auth JWTs (or something, see the docs).
Yeah, don't use the server connection on the client, but they have many warnings against that.
I literally use it because it's a free hosted postgres database. I just connect via connection string on my backend and run the queries there.
Yeah, it's a bit wonky, especially when you're configuring a specific combination of Supabase/Deno/TypeScript features (e.g. stage 2 vs. stage 3 decorators).
How is Django a replacement for Supabase?
For my current project I basically need a backend server for processing some basic game logic.
I had done something similar in Firebase and it was easy. Supabase wasn't straightforward here. It got to a point where I'm sure I could eventually get it working, but I also think I'm outside the expected usecase.
Django is much more flexible in this regard.