Supabase raises $200M Series D at $2B valuation
A non-technical family member is working on a tech project, and giving them Lovable.dev with Supabase as a backend was like complete magic. For them, even a little fiddling with terminals or propping up Postgres is too much.
We technical people always underestimate how fast things change when non-technical users can finally get things done without opening the hood.
Feels like we're skipping these steps, "generating" prototypes that may or may not satisfy the need, and carrying that code forward into the final product.
One of the huge benefits of things like Invision, Marvel, Avocode, Figma, etc. was to allow the idea and flow to truly get its legs and skip the days where devs would plop right into code and do 100s of iterations and updates in actual code. This was a huge gain in development and opened up roles for PMs and UI/UX, while keeping developer work more focused on the actual implementation.
Feels like these generative design & code tools are regressing to direct-to-code prototypes, without all that workflow and understanding of what should actually be happening BEFORE the code. Instead we'll return to the distractions of the "How", with its millions of iterations and updates, rather than the "What".
Some of this was unfortunately already happening due to Figma's loss of focus on workflow and collaboration, but these AI generation tools seem to have made many people completely lose sight of what was nice about the improved planning workflow. Just because we CAN now generate the things we think we want doesn't mean we should, especially before we know what we actually want or need.
Maybe I'm just getting old, but that's my .02 :).
you can vibe code a fully working UI+backend that requires way less effort so why bother with planning and iterating on the UI separately at all?
anybody who actually knows what they are doing gets a 10x boost from these tools, plus they enable non-coders to bring ideas to market fast.
My point isn't to stitch things to Figma; that's abhorrent to me as well. My point is to not get bogged down in implementation details, in this case an actually working DB, those tables, etc., but rather to work with lower-fidelity, full-flow concepts that can be generated and iterated.
Then that can be fed into a magic genie GPT that generates the front-end, back-end, and all that good jazz.
The thing is, the cost of producing websites is already pretty low, but the value of websites mostly derives from network effects. So a rising flood of micro CRUD SaaS products is unlikely to generate much added value. And since interoperability will drive complexity, and transformer-based LLMs are inherently limited at compositional tasks, any unforeseen value tapped by these extra websites will likely be offset by the maintainability and security breakages I mentioned. And because there is a delay in this signal, there is likely to be a bullwhip effect: an explosion of sites now and a burnout in a couple of years, in which a lot of people will get severely turned off by the whole experience.
If someone has the idea for the next Amazon, as well as everything else you need beyond the idea, and tools like Supabase and Lovable allow them to get it off the ground, those tools are incredibly valuable to that person.
If someone’s ideas are worthless, their websites will be worthless.
you can vibe code a fully working UI+backend
…is gonna bring a lot of houses crashing down sooner or later.
One thing I will agree on though is that LLMs make it easier to iterate or try ideas and see if they'll work. I've been doing that a ton in my projects where I'll ask an LLM to build an interface and then if I like it I'll clean it up and or rebuild it myself.
I doubt that I'll ever use Figma to design; it's just too foreign to me. But LLMs let me work in a medium that I understand (code) while iterating quickly and trying ideas that I would never attempt otherwise, because I wouldn't be sure they'd work out and they would take me a long time to implement visually.
Really, that's where LLMs shine for me: trying out an idea that you're fully capable of implementing yourself, but that would take you a long time. I can't tell you how many scripts I've asked ChatGPT or similar to write that I am fully capable of writing, but the return on investment just would not be there if I had to write them all by hand. Additionally, I will use them to write scripts to debug problems or analyze logs/files. Again, these are things that I am perfectly capable of doing but would never do in the middle of a production issue because they would take too long and wouldn't necessarily yield results. With an LLM, I feel comfortable trying it out because at worst I'd burn a minute or two of time, and at best I can save myself hours. The return on investment just isn't there if it would take me 30 minutes to write that script and only then find out whether it was useful.
E.g. Concur is mostly feature-complete and will only ever need to evolve gradually.
So the drawbacks of being brittle, kludged together, and incapable of rapid feature changes don't really matter.
In some other products, that matters a great deal.
So the tl;dr is, as always, optimize for the things that actually matter for your particular situation.
We technical people always underestimate how fast things change when non-technical users can finally get things done without opening the hood.
This is good and bad. Non-technical users throwing up a prototype quickly is good. Non-technical users pushing that prototype into production with its security holes and non-obvious bugs is bad. It's easy for non-technical users to get a false sense of confidence if the thing they make looks good. This has been true since the RAD days of Delphi and VisualBasic.
Non-technical users pushing that prototype into production with its security holes and non-obvious bugs is bad.
I beg to differ. Non-technical users pushing anything into production is GREAT!
For many, that's the only way they can get their internal tool done.
For many others, that's the only way they might get enough buyers and capital to hire a "real" developer to get rid of the security holes and non-obvious bugs.
I mean, it's not like every "senior developer" is immune from having obvious-in-retrospect security holes. Wasn't there a huge dating app recently with a glaring issue where you could list and access every photo and conversation ever shared, because nobody on their professional tech team secured the endpoints against enumeration of IDs?
I agree it is great that more people can build software, but let's not pretend there are zero downsides.
If a user is confident enough about a no name company that they give them enough info to make identity theft a possibility, it was only a matter of time before a spammer/phishing attack got them anyway.
Most of the apps in discussion see little to no use and go dead soon after launch.
That's not convincing. Of the apps that do get used, the vibe-coded ones will likely be unsafe.
If a user is confident enough about a no name company that they give them enough info to make identity theft a possibility
That's completely unrelated. You can give a company very little information. Any of it being leaked is unacceptable. You can find a lot from an email, or a phone number.
People are taught, by CNBC, by suits, by hacks, that you can trust the apps on your commercials and it will be fine. It likely won't be, and your response is exactly why. Many of you are apathetic to the idea of doing right by people.
So people are manipulated, and some of them are elderly and don't even understand how computers work. This is reason enough to care about what they are exposed to, not say "let's burn it all down with shitty vibe-coding because users are dumb anyway."
We're supposed to be better than this.
Of the apps that do get used, the vibe-coded ones will likely be unsafe.
What's the threat, though? As in, what's at risk? A leaked email address? Probably. Enough info to have your identity stolen, as the prior commenter mentioned? Probably not.
That's completely unrelated.
Umm, no, it's related: the prior commenter claimed identity theft was the risk in their contrived scenario.
Any of it being leaked is unacceptable. You can find a lot from an email, or a phone number.
Everyone's email has already been leaked somewhere. It's not private data. This is like saying your bank account number is confidential financial information and ignoring the fact it's printed on every check you write.
Many of you are apathetic to the idea of doing right by people. We're supposed to be better than this.
I object by simply saying I'm just being realistic. Data leaks somewhere, everywhere, sometimes, always. You're choosing to live in a fantasy land where this doesn't happen as if it wasn't the very true state of the world long before vibe coding came along. Sure, it's not my ideal state. But it is the actual state of things. Get real.
I agree with you on the downsides.
There was a reason the industry was regulated, and circumventing these reasons with an app has been a net negative to society.
I think there's going to be the same problems as there are fixing bad body shop code. The companies that pushed their "vibe code" for a few dollars worth of AI tokens will expect people to work for pennies and/or have unreasonable time demands. There's also no ability to interview the original authors to figure out what they were thinking.
Meanwhile their customers are getting screwed over with data leaks if not outright hacks (depending on the app).
It's not a whole new issue, shitty contractors have existed for decades, but AI is pushing down the value of actual expertise.
For nearly 50 years now, software causes disruption, demand drives labor costs up, enterprise responds with some silver bullet, haircuts in expensive suits collect bonuses, their masters pocket capital gains, and the chickens come home to roost with another cycle of disruption and rising labor costs. LLMs are being sold as disruption but are actually another generation of enterprise tech. Hence the confusion. Vibe coding is just PR. Karpathy knows what he's doing.
From looking at "vibe coding" tools their output is about the quality of bad body shop contractors.
Genuinely, it's a lot better.
Even us entrepreneurially minded technical devs cut corners on personal projects that we just want to throw a Stripe integration or Solana Wallet connect on.
And large companies with FTC and DOJ involved data breaches just wind up offering credits to users as compensation.
So for non-technical creators to get into the mix, this just expands how many projects there are that get big enough to need dedicated UX and engineers.
They are great products that cover 95% of what a CRUD API does without hacks. They’re great tools in the hands of engineers too.
To me it’s not about vibe coding or AI. It is that it's pointless to reinvent the wheel on every single CRUD backend once again.
Mike can edit his name and his bio. He could edit some karma metric that he's got view access to but no write access to. That's fine, I can introduce an RLS policy to control this. Now Mike wants to edit his e-mail.
Now I need to send a confirmation e-mail to make sure the e-mail is valid, but at this point I can't protect the integrity of the database with RLS because the e-mail/receipt/confirm loop lives outside the database entirely. I can attach webhooks for this and use pg_net, but I could quickly have a lot of triggers firing webhooks inside my database and now most of my business logic is trapped in SQL and is at the mercy of how far pg_net will scale the increasing amount of triggers on a growing database.
Even for simple CRUD apps, there's so much else happening outside of the database that makes this get really gnarly really fast.
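The name/bio vs. karma split described above can actually be expressed with plain Postgres permissions. Here is a minimal sketch, assuming a hypothetical `profiles` table with a `user_id` column and Supabase's `auth.uid()` helper plus its `authenticated` role (all table and column names here are made up for illustration):

```sql
alter table profiles enable row level security;

-- RLS: a user can only update their own row.
create policy "update own profile"
  on profiles for update
  using (auth.uid() = user_id);

-- Column-level grants keep karma visible but read-only:
-- users may update only display_name and bio.
revoke update on profiles from authenticated;
grant select on profiles to authenticated;
grant update (display_name, bio) on profiles to authenticated;
```

The e-mail confirmation loop, as the comment says, still has to live outside the database; this only covers the parts RLS and grants can reach.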
Congratulations: that's not basic CRUD anymore, so you ran into the 5% of cases not covered by an automatic CRUD API.
And I don't see what's the dilemma here. Just use a normal endpoint. Keep using PostgREST to save time.
You don't have to throw the baby away with the bathwater just because it doesn't cover 5% of cases the way you want.
It's a rite of passage to realize that "use the right tool for the job" means you can use two tools at the same time for the same project. There are nails and screws. You can use a hammer and a screwdriver at the same time.
You can use a hammer and a screwdriver at the same time
How do you balance the nail and screw? I'm serious, I'm trying to picture this, hammer in one hand, screwdriver in the other, and the problem I see here is the nail and screw need to be set first, which implies I can't completely use them both at the same time.
Perhaps my brain is too literal here, but I can't figure out how to do this without starting with one or the other first.
There are two parts to using Firebase: the client SDK and the admin SDK.
The client SDK is what's loaded in the front end and used for 95% of use cases like what u/whstl mentions.
The admin SDK can't be used in the browser. It's server-only, and it's what you can use inside a custom REST API. In your use case, the email verification loop has to happen on a backend somewhere. That backend could be a simple AWS Lambda that only spins up when it gets such a verification request.
You're now using a hammer for the front end and a screwdriver for the finer details.
Some projects require nails, other require screws, some might require both.
Instead of hammering screws (or in this case reinventing a screwdriver), just use an existing screwdriver. That's what I mean: don't reinvent the solved problem of CRUD endpoints where they apply. Nothing says you can't use two techs per project.
...an automatic CRUD API. And I don't see what's the dilemma here. Just use a normal endpoint. Keep using PostgREST to save time.
EDIT: What I’m advocating here is the opposite: use those tools for CRUD so that your frontend looks exactly the same as a frontend with a regular backend would. If the tool is not good for it (like the example), just use a regular endpoint in whatever backend language or framework. Don’t throw the baby (the 95%) with the bathwater (the 5%).
By “just use a normal endpoint” I mean “write a normal backend for the necessary cases”.
At least when used correctly, but honestly I can’t see a situation where it’s easy to do otherwise for queries.
The utility is in not having to write a lot of repetitive endpoints in a traditional backend, for a large amount of endpoints.
What exactly do you mean by “query logic in the client code”?
In PostgREST, if userMessages is a table in itself, you do get an endpoint called /userMessages.
If the table is called messages and you want to get messages from a user, you can just request something like /messages?user_id=123. And if user_id must be your own user_id, you can just skip passing the parameter, thanks to RLS.
If userMessages is a join between two tables and you don't want to let the frontend know about it, you can use a view, and PostgREST will expose the view as an endpoint.
Once again, there is no "need" to formulate joins in the frontend to reap the benefits of this tool.
I don't do anything close to "formulating a join in the client" with PostgREST and I still use it to its full extent, and it does save time.
EDIT: If one wants to formulate more complex joins in the frontend, then they probably want something like Hasura instead. Once again: complex queries in the frontend is BY NO MEANS mandatory, you can still use flat GraphQL queries and db views for complex queries. PostgREST OTOH is about keeping it simple.
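For the /messages?user_id=123 case, the RLS policy that lets you skip the parameter entirely could look something like this sketch (assuming a hypothetical `messages` table with a `user_id` column, and Supabase-style `auth.uid()`; with plain PostgREST you would read the JWT claim instead):

```sql
alter table messages enable row level security;

-- GET /messages now returns only the caller's rows;
-- no ?user_id= filter is needed, and no filter can widen the result.
create policy "own messages only"
  on messages for select
  using (user_id = auth.uid());
```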
I've only worked with Hasura, but I can say it's an insecure nightmare that forces anti-patterns. Your entire schema is exposed. Business logic gets pushed into your front end because where else do you run it unless you make an API wrapper. Likewise you can't easily customize your API without building an API on top of your API. You're doing weird extra network hops if you have other services that need the data but can't safely access it directly. You're pushed into fake open source where you can't always run the software independently. Who knows what will happen when the VC backers demand returns or the company deems the version you're on as not worth it to maintain compared to their radically different but more lucrative next version.
I think the people who write this off as "backend engineers feel threatened" aren't taking the time to understand the arguments they're hearing
Business logic gets pushed into your front end because where else do you run it unless you make an API wrapper.
Exactly. This is one of the things I never understood about Supabase's messaging: The highly-touted, auto-generated "RESTful API" to your database seems pointless. Why would I hard-code query logic into my client application? If my DB structure changes, I have to force new app versions on every platform because I didn't insulate back-end changes with an API.
Why would anyone do this?
Both those techs might make this look convenient, but engineering rules must still be followed.
Frontend should do validation and might have some logic that’s duplicate for avoiding round-trips… but anything involving security, or that must be tamper-proof, must stay in the server, or if possible be protected by permissions.
There are whole classes of applications that can be hosted almost entirely by Supabase or Hasura. If yours isn’t, it doesn’t mean you should force it.
I also didn't mention security, let alone promote moving it to the front end.
The answer is: you wouldn’t. That’s not the point of any of those tools.
What is the point of an auto-generated HTTP API to the database, if not to let clients formulate queries? And why would you do that?
If by "letting the client formulate queries" you mean "filter posts by DidYaWipe, sorted by date", that's also what traditional CRUD backends do.
If my DB structure changes, I have to force new app versions on every platform because I didn't insulate back-end changes with an API.
To avoid the above problem, it's a standard practice in PostgREST to only expose a schema consisting of views and functions. That allows you to shield the applications from table changes and achieve "logical data independence".
For more details, see https://docs.postgrest.org/en/v12/explanations/schema_isolat....
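A minimal sketch of that practice, using the `api` schema and `web_anon` role names from the PostgREST tutorials (the view definition itself is hypothetical):

```sql
-- PostgREST is pointed only at this schema, not at the tables' schema.
create schema api;

-- Clients see the view; the underlying tables can change freely,
-- as long as the view keeps exposing the same shape.
create view api.user_messages as
  select m.id, m.body, m.created_at, u.display_name
  from messages m
  join users u on u.id = m.user_id;

grant usage on schema api to web_anon;
grant select on api.user_messages to web_anon;
```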
Backends are far messier (especially when built over time by a team), more expensive, and less flexible than a GraphQL or PostgREST API.
> I've only worked with Hasura, but I can say it's an insecure nightmare that forces anti-patterns
Writing backend code without knowing what you're doing is also an insecure nightmare that forces anti-patterns. All good engineering practices still need to apply to Hasura.
Nothing says that "everything must go through it". Use it for the parts it fits well, use a normal backend for the non-CRUD parts. This makes securing tables easier for both Hasura and PostgREST.
> Business logic gets pushed into your front end because where else do you run it unless you make an API wrapper. You're doing weird extra network hops if you have other services that need the data but can't safely access it directly
I'm gonna disagree a bit with the sibling post here. If you think that going through Hasura for everything is not working: just don't.
This is 100% a self-imposed limitation. Hasura and PostgREST still allow you to have a separate backend that goes around it. There is nothing forbidding you from accessing the DB directly from another backend. This is not different from accessing the same database from two different classes. Keep the 100% CRUD part on Hasura/PostgREST, keep the fiddly bits in the backend.
The kind of dogma that says that everything must be built with those tools produces worse apps. You're describing it yourself.
> I think the people who write this off as "backend engineers feel threatened" aren't taking the time to understand the arguments they're hearing
I have heard the arguments, and all I hear is people complaining about how hard it is to shove round pieces into square holes. These tools can be used correctly, but just like anything else they have a sweet spot that you have to learn.
Once again: "use right tool for the job" doesn't mean you can only use a single tool in your project.
What sounds like the worst of both worlds to me is forcing Supabase/Hasura to do what it isn't good at, or forcing a traditional backend to do the same thing those tools can do while taking 10x the time and cost.
My experience was super positive and saved a lot of coding and testing time. The generated APIs are consistent and performant. When they don’t apply, I was still able to use a separate endpoint successfully.
As engineer #2 it's a mess
As a long-time Hasura stan, I can't agree with this in any way.
Your entire schema is exposed
In what sense? All queries to the DB go through Hasura's API; there is no direct DB access. Roles are incredibly easy to set up and limit access with. Auth is easy to configure.
If you're really upset about this direct access, you can just hide the GQL endpoint and put REST endpoints that execute GQL queries in front of Hasura.
Business logic gets pushed into your front end because where else do you run it unless you make an API wrapper. Likewise you can't easily customize your API without building an API on top of your API. You're doing weird extra network hops
... How is an API that queries Hasura via GQL any different than an API that queries PG via SQL? Put your business logic in an API. Separating direct data access from API endpoints is a long-since solved problem.
Colocating Hasura and PG or Hasura and your API makes these network hops trivial.
Since Hasura also manages roles and access control, these "extra hops" are big value adds.
You're pushed into fake open source where you can't always run the software independently
... Are you implying they will scrub the internet of their docker images? I always self-host Hasura. Have for years.
I think the people who write this off as "backend engineers feel threatened" aren't taking the time to understand the arguments they're hearing
I think your arguments pretty much sum up why people think it's just about backend engineers feeling threatened - your sole point with any merit is that there's one extra network leg, but in a microservices world that's generally completely inconsequential.
It's Postgres, bundled with some extensions and PostgREST, plus a database UI. It's hosted, but it also runs locally by pulling in the separate parts. Running it locally has issues, though: so much so that I found it easier to run a docker-compose of the separate parts from scratch, and at that point you might as well carry that through to deployment. At which point, is there still a reason to use Supabase rather than another hosted Postgres with the extensions?
It's a bit of a confusing product story.
The developer experience is first rate. It’s like they just read my mind and made everything I need really easy.
- Deals with login really nicely
- Databases for data
- Storage for files
- Both of those all nicely working with permissions
- Realtime is v cool
- Great docs
- Great SDK
- Great support peeps
Please never sell out.
Is it the PostgREST part? Are you using it for simple queries, or are you trying to use it for complex business logic?
Asking because PostgREST is great when you use it the way it’s intended but like any tool it will underperform when used in a way it’s not supposed to. It’s a screwdriver that you shouldn’t use to hammer nails.
So no... PostgREST wasn't a factor for me at all.
I'm sorry you had a bad experience with this kind of tool, but I hope that one day you choose to revisit it.
My server is going to provide an API that isolates the application from the DB structure.
The same can be achieved with "schema isolation". See https://docs.postgrest.org/en/v12/explanations/schema_isolat....
Realistically, 99% of the users would still be screwed if they ever shut down, regardless of whether it's open (see: Parse)... but it gives people some confidence to hear they're building on a platform they could (strictly in theory) spin up their own instance of, should a similar rug pull ever occur.
I agree you might prefer to choose the stack yourself, but for total n00bs and vibe coders supabase is a great start / boilerplate vs say the MEAN stack that was a hit 5y ago
In the end I jumped into it wholeheartedly, mainly because I wanted a canned solution for authorization and user-confirmation. But soon I came up against obstacles I had easily overcome with plain Deno already, but were seemingly insurmountable with Supabase.
When one basic use-case after another turned out to be almost wholly undocumented and unexplored by the Supabase docs and community, I concluded that Supabase is really only suited for people building Web back-ends that let people browse a database.
As an application back-end, its marquee features don't add value or are basically irrelevant... as far as I can see. The rest of it is incomplete and/or undocumented, with client libraries being an example.
- remote state
- authoritative logic that can't run solely on the user's device because you can't trust it
- authentication
each of which is annoying when you're focused on building the user-facing app experience. Supabase solves all three without you needing to touch any infrastructure. The self-hosting thing just provides insurance that users are not completely locked in to their platform, which is a big concern when you're outsourcing basically your entire backend stack.
But I had to abandon it after wasting weeks trying to do simple things. The biggest problem is the lack of documentation. Fundamental parts of the system are undocumented, like the User table. There's no doc on how the columns function, so I couldn't determine why a user is marked as "confirmed" (presumably through E-mail or other validation) immediately upon insertion to the table.
There's also no full documentation of client-library syntax. For example the Swift library: There are a few examples of queries, but no full documentation on how to do joins (for example).
And just try to use your own certificates; something that I've been doing for years during iPhone-app development was impossible with Supabase.
And why? Because these simple scenarios appear to be distant outliers for Supabase. It's as if nobody has ever brought them up before; and even if they have, nobody has been able to answer the first questions about them.
If you're not building a single-page Web app that just lets people browse a database, Supabase doesn't seem to envision your application.
So I went back to a plain Deno back-end, which is what I was building before trying Supabase. In the amount of time I wasted trying to scrounge up documentation and fruitlessly asking questions in forums and Discord, I was able to learn and implement authorization, and then get back to work building a product.
Maybe all this money will let the Supabase team hire some people to document their product.
Because these simple scenarios appear to be distant outliers for Supabase
You've only talked about two things: lack of documentation (which I somewhat agree with) and using custom certificates. Custom certificates are not a "simple scenario" and I don't blame Supabase for not spending time on them. In fact, I would prefer they work on other things (like documentation!).
Once you encounter a problem they either don't want to solve or haven't solved, your only choices are either:
- start layering on hacks (in which case you quickly get into case where no one and nothing else could help you)
- decide not to do that-thing
- do a rebuild to get rid of the batteries-included parts.
Personally I think something like Supabase is great for toy projects that have a defined scope and no future, or for a very early startup that intends to rebuild entirely. Just my opinion though; maybe others feel more comfortable with that level of lock-in.
Even something like Heroku is miles better because they keep everything separated where your auth, database, & infrastructure aren't tightly coupled with a library.
"Lack of documentation" speaks to several apparently routine use cases being outliers; otherwise, they'd be documented. I already talked about the User table Supabase provides (and populates in unexpected ways), and about the Swift library that you have no reference for formulating joins through... another critical and expected ability.
It’s been so long that new ideas are solving parts of the access spectrum, seemingly without being aware of it.
Supabase and others would have a smaller footprint to add an app layer and reporting layer to their tool, since data is the cornerstone rather than an afterthought.
The startup is built on Postgres, the most popular developer database system, and is an alternative to Google’s Firebase. Supabase’s goal: to be a one-stop backend for developers and "vibe coders."
With Vercel/Netlify, you're paying for ease of use. For a lot of people, that tradeoff is worth it. Not everything can be free.
https://money.usnews.com/careers/best-jobs/computer-programm...
Do you have a better source for your number?
As far as cost, $200/month is nothing, but those are not the numbers we hear about when things spiral out of control due to a DDoS or a sudden surge in popularity.
But the market rate for a freelance midlevel US-based engineer would be about double per hour what you'd pay a full-time employee of the same level, to account for taxes/PTO/health care/etc.
An engineer starts at $200/hr
Starts?!
I remember getting a sheet from an employer early in my career that fully broke down the cost of benefits and taxes and showed me the full cost of just my employment, not including overhead, profit, etc. It was rather eye-opening, because although I kind of knew it from accounting and finance, it never really hit me quite as hard as seeing the numbers did.
All this is to say: even if all progress on AI halted today, it would remain the case that, after the Internet, LLMs are the most impactful thing to happen to software development in my career. It would be weird if companies like Supabase weren't thinking about them in their product plans.
I have two main issues. First, the tooling is changing so rapidly that as I start to home in on a process, it changes out from under me. The second is verifying the output. I’m at like a 90% success rate on getting code generated correctly (if not always faster than I could do it myself), but boy does that final 10% bite when I don’t notice.
An aside, I think the cloud ought to make your (perhaps especially your) list. At least for me that changed the whole economy of building new software enterprises.
https://nicholas.carlini.com/writing/2024/how-i-use-ai.html
The Internet, LLMs, open source, high-level languages, the cloud --- that would be my top 5.
For “real work” done by a “real engineer”, I approach it almost exactly as you say.
For side projects/personal software that I most likely would have never started pre-llms? I’ll just go full vibe code and see how far I get. Sometimes I have to just delete it, but sometimes it works. That’s cool.
An unsuccessful project might be unsuccessful because it got eaten by costs before it became successful.
A wildly successful project is risky to migrate.
I think it’s rare that a project fails to show potential because of the underlying technology that’s chosen.
Sure, Vercel is relatively expensive. But I just don’t see how you’d throw in the towel because the costs are too high without first evaluating how to lower them.
If you’re saying that the evaluation is likely to show that you’re stuck - I have never seen that be the case personally.
Most startups fail. Optimizing for getting revenue is more important than optimizing cost in the beginning.
If you get revenue you can solve the cost problem. If you don’t, it doesn’t matter.
Anything that gives you more shots at the goal is a win in a startup.
I've seen many colleagues bootstrap something - even if they're not themselves very technical - because they've leveraged these well integrated low cost platforms.
Yes, "vibecoding" still has issues (and likely will for the foreseeable future). I'm sure the next decade will be an absolute boon for security researchers working with new companies. But you shouldn't dismiss people based on their use of these tools.
And other commenters are right that these expensive infra tools can be replaced later when the idea has actually been validated.
Based on the "vibe coders" crowd I see on X, they are a superset of indie hackers, with a lower barrier to entry when it comes to coding skills and less patience for mediocre success. They seem to have a "go big or go home" mindset.
As long as they have a popular product, they don't mind forking over some of their profit to OpenAI or a hosting provider. None of the Ghibli generator app creators complained about paying OpenAI… If the product is not popular, there are no outrageous costs, and the product will be abandoned very fast anyway.
Migrating from it is not that hard so far. I did it in an afternoon for a customer.
Also a couple friends are running the open source version in their own containers.
Maybe there are (or will be) cloud only features, but for the basic service there isn’t as much lock-in as something like AWS.
the cool thing you made is locked into an overpriced cloud stack that will bleed you dry
Not necessarily applicable to vibing with Supabase specifically, right?
There are several ways to host Supabase on your own computer, server, or cloud.
Making it easy for engineers, experienced OR aspiring, is huge.
I don't mean to demean "vibe coders" exactly either, but rather jumping on the hype train of using that term for your funding pitch. You're using AI to learn to become a software developer? Great! No problem with that.
But also — if you now have a database involved and you're handling people's data, you better learn what you're doing. A database provider pushing "vibe coding" is not a good look imo.
The problem we have now is that people who aren't engineers try to make an app and end up creating insecure, buggy messes, then struggle to figure out how to deploy, or destroy all their code with no way to recover because they didn't know anything about version control.
I used to pride myself on knowing all the little ins and outs of the tech stack, especially when it comes to ops-type stuff. This is still required; the difference is that you don't need to spend 4 hours writing code - you can use the experience to get to the same result in 4 minutes.
I can see how "ask it for what you want and hope for the best" might not end well but personally - I am very much enjoying the process of distilling what I know we need to do next into a voice dictated prompt and then watching the AI just solve it based on what I said.
Makes sense to me, vibe coding basically shifts your burden to specification and review, which are traditionally things a senior developer should be good at.
Supabase is currently used by two million developers who manage more than 3.5 million databases. The startup supports Postgres, the most popular developer database system that’s an alternative to Google’s Firebase. Supabase’s goal: To be a one-stop backend for developers and "vibe coders."
How many of those users are paying? You can sign up for free without a credit card.
It's cool, for certain use cases. I ended up trying it for a few months before switching to Django.
If you ONLY need to store data behind some authentication and handle everything else on the frontend, it's great. Once you need some server-side logic, it gets weird. I'm open to being wrong, but I found Firebase phenomenally more polished and easier to work with, particularly when you get to Firebase Functions compared to Edge Functions.
Self hosting requires magical tricks, it's clearly not a focus for them right now.
I hope they keep the free tier intact. While it's not perfect, if you're in a situation where you can spend absolutely no money, you can easily use it for learning (or for a portfolio piece).
Self hosting requires magical tricks
Has anything changed recently? ~1 year ago I installed a local instance (that I still use today for logging LLM stats) and IIRC all I had to do was `docker compose up`. All the containers still start at boot from that 1-year-old install, to this day. (I use it on 127.0 so no SSE & stuff; perhaps that's where the pain points are? Dunno, but for my local logging needs it's perfect.)
This isn't documented anywhere. Deep deep in their GitHub issues you'll find a script for generating this magic string which needs to be set as an environment variable.
See https://github.com/supabase/supabase/issues/17164#issuecomme...
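The "magic string" in question is just a JSON Web Token signed with the instance's `JWT_SECRET`; the linked issue contains the actual script. As a rough illustration only (the claim names `role: "anon"` / `"service_role"` and `iss: "supabase"` are assumptions based on the hosted product's defaults, not taken from that issue), here is a stdlib-only sketch of minting such an HS256 token:

```python
import base64
import hashlib
import hmac
import json
import time


def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def make_supabase_key(jwt_secret: str, role: str, years: int = 10) -> str:
    """Sign an HS256 JWT like the ANON_KEY / SERVICE_ROLE_KEY a
    self-hosted instance expects (claim names are assumptions)."""
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    payload = {
        "role": role,
        "iss": "supabase",
        "iat": now,
        "exp": now + years * 365 * 24 * 3600,
    }
    # signing input is base64url(header) + "." + base64url(payload)
    signing_input = ".".join(
        b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, payload)
    )
    sig = hmac.new(jwt_secret.encode(), signing_input.encode(),
                   hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"


anon_key = make_supabase_key("your-super-secret-jwt-token", "anon")
service_key = make_supabase_key("your-super-secret-jwt-token", "service_role")
```

In practice you would set these as the `ANON_KEY` / `SERVICE_ROLE_KEY` environment variables alongside the matching `JWT_SECRET`; defer to the issue's script for the exact claims your Supabase version requires.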
But if your use case involves Supabase Auth, using a service account to bypass RLS is kind of like hardcoding connection strings.
The service account should only be accessed on the service.
If using Auth+Server, you can check the verified user identity via Auth JWTs (or something, see the docs).
Yeah, don't use the server connection on the client, but they have many warnings against that.
I had done something similar in Firebase and it was easy. Supabase wasn't straightforward here. It got to a point where I'm sure I could eventually get it working, but I also think I'm outside the expected usecase.
Django is much more flexible in this regard.
What’s Supabase’s exit strategy? Are they sustainable long term as a standalone business?
You can also see how money is starting to chase “vibe coding” — as long as you say the magic words, even if your product is only tangentially related to it, you can get funding!
Not sure who would buy them though, as PostgreSQL vendors are kind of a dime-a-dozen these days...
Supabase definitely has much higher mindshare.
so how many are paying
This is like if Google Spanner were open sourced tomorrow morning: realistically how many people are going to learn how to deploy a thing that was built by Google for Google to serve an ultra-specific persona?
Maybe you might get some Amazon-sized whale peeking at it for bits to improve their own product, but the entire value prop is that it's a managed service: you're probably going to continue paying for it to be managed for you.
I always loved Vercel for their easy hosting of Next.js with included CI/CD, but I recently switched to self-hosting: their pricing switched from a flat, worry-free $20/month to an unpredictable whatever-it-may-cost, plus it sent me 10+ emails every single month about hitting some quotas they had introduced, and I couldn't find a good way to stop that.
They also offer so much more than just postgres. Though I use them only for postgres myself.
It's easier to just become familiar with a DB UI tool like Beekeeper or DataGrip and spin up your own things. I'm also not a huge fan of being "locked-in" to so many things (including their auth). I think most projects would be better off keeping these parts separated, even if they are using third-party services to handle them, as it would be way less overhead to migrate out.
Are they sustainable long term as a standalone business?
It's bananas to me that questions like these could be unanswered even 5 years after the business started. This can't possibly be the most efficient way of finding new solutions and "disrupting" stale industries?
It's bananas to me that questions like these could be unanswered even 5 years after the business started.
Those are rookie numbers, Discord is coming up on 10 years old and has made zero dollars to date, yet is supposedly considering an IPO soon.
This can't possibly be the most efficient way of finding new solutions and "disrupting" stale industries?
The thing is, the people with far more information than we have, and with actual money on the line, think this is a good use of their money. They're not always right, of course, but the industry as a whole is profitable and is innovative and "disruptive".
So, yes, this can be a good way for finding new solutions. The most efficient? IDK but it's the best we've come up with so far.
What’s Supabase’s exit strategy? Are they sustainable long term as a standalone business?
Acquisition best case, Private Equity worst case.
Do you see Supabase going public on the stock market? Unless they do what Cloudflare did and replicate AWS, it may be hard to see a stock-market debut.
Could be wrong though.
Also, they can't run on AWS Postgres with all their Postgres extensions, AFAIK.
The point of "cheaper to host everything yourself" is a lot higher than what most estimate.
My only concern is that if Supabase goes out of business or goes evil, you're going to have a bad time; however, everything is open source.
But once you know these things, you could of course be faster.
Are they sustainable long term as a standalone business?
Was Meteor? They are exactly the same thing. And I really liked Meteor!
To me, the more money pouring in, the better. That said:
https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcRCVKYR...
(The Silicon Valley Economy cartoon)
If they truly have 3.5 million databases, that's only ~$500 per database to recoup the investment, which doesn't seem too crazy. Companies like OpenAI or Twitter/X are never going to be profitable enough to cover what they've already spent. Supabase could, because the amount is so much lower and they have paying customers, but I'd like to emphasize the "could".
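For what it's worth, the back-of-the-envelope division works out against the numbers in the thread (this uses the $2B headline valuation; recouping only the money actually raised would give a smaller figure):

```python
# Numbers taken from the article/title; nothing here is official financials.
valuation = 2_000_000_000  # $2B headline valuation
databases = 3_500_000      # databases claimed in the article

per_database = valuation / databases
print(f"~${per_database:,.0f} of valuation per database")
```

So it is roughly $571 of valuation per database, which is in the same ballpark as the "~$500 per database" figure above.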
1. oxc (oxlint)
2. vercel
3. fly.io
probably more! and more every day
no one is vibe coding elixir
I did :) I made a browser-based MMO with Phoenix to test out liveview and learn the language: https://shopkeep.gg
And it was pretty annoying. Elixir doesn't really lend itself to vibe coding due to namespacing and aliasing of modules, pattern matching, all without static typing (I know, Dialyzer...). It also struggles to understand the difference between LiveComponents and LiveViews, where to send/handle messages between layers.
Without references to filenames, the agent perpetually decides "this doesn't exist, so I'll write it :)". I found it to be pretty challenging before figuring out I could force the agent to run `mix xref callers <Some.Module>` when trying to do cross-module refs.
(caveat: this was all with claude 3.5 sonnet)
https://github.com/dbos-inc/dbos-docs/blob/main/docs/python/...
AWS needs to get their act together and start prioritizing developer experience
Also, Supabase is looking like the go-to database for AI-created apps, which will be a major tailwind.
The major issue is cost. It is way more expensive than I realized, as they have so many little ways they charge you. It's almost like death by a thousand paper cuts. My bill for my app with just a few thousand users was $70 last month.
I do like the tooling and all, but the pricing has been very confusing.
just a few thousand users was $70 last month.
Few thousand!?! Sounds very reasonable to me. Monetize just two of those users at $35 per month and your server costs are covered. Or run it yourself; there are a lot of moving parts, but it's all open source.
Few thousand!?! Sounds very reasonable to me. Monetize just two of those users at $35 per month and your server costs are covered
That's one way to look at it, but compared to any other way to run a server, it's objectively terrible. You can serve that many users with a $5 box.
While the funding is impressive, I haven’t come across too many people touting Supabase or using it in production.
My experience with Supabase really demonstrates to me that the ideals of all of the Postgres-layer technologies (PostgREST, realtime via the WAL, JWT auth in the database) just don't make for an easy experience. It all works (mostly), but I find it more annoying than useful and have to work around it more often than I'd like. I suppose I'm old school, but just building the things that one needs is often more robust and less work than trying to plug into what they've provided.
I really don't know what they're going to do with a series D. It seems they now _have_ to go for a high-value exit, but I really don't see which company would provide that exit.
It is good to get started and no doubt useful for simple CRUD apps. But once you want to start doing more complicated stuff, a lot of the RLS primitives become very hard to maintain and test, for example. You could say that that's Postgres's fault, but Supabase strongly pushes you in that direction.
The tooling, while looking quite polished, just felt pretty half baked along with docs (at least a year ago when we pulled the plug). Try to implement even a halfway complicated permissions scheme with it and RLS and you are in for a world of hurt and seemingly unmaintainable code.
So we ditched Supabase Auth for AuthJS, and are using vanilla postgres with Prisma. That's worked well for us. All the tooling is relatively mature, it's easy to write tests, etc.
Maybe if AI is writing some of the code, it might get easier, but for now, I'm avoiding Supabase like the plague until I see a project that's relatively complex that's actually easy to maintain.
The whole growth of vibe coding really did help them, because I don't think actual developers use it: putting things like functions and authorization in the database is something we learned a few decades ago is a bad idea.
So I would guess they are used by massive amounts of developers who are new to coding or do not fully know how to code, but are becoming developers and who love the free databases Supabase provides.
Would love to know what their actual revenue is.
The whole growth of vibe coding really did help them, because I don't think actual developers use it: putting things like functions and authorization in the database is something we learned a few decades ago is a bad idea.
Why are those things bad ideas? You could be right, but if you insist on making value judgements without explanation or elaboration, you're going to sound like a whiny old crank who is scared of becoming obsolete.
When I see valuations like this, they are overvalued until they use that money to acquire another company for a total addressable market expansion.
It's a quick/convenient way to get a "full" backend up and running. The other option (Firebase) can't be self-hosted and has some absurd pricing footguns.
The startup supports Postgres, the most popular developer database system that’s an alternative to Google’s Firebase
I've always taken issue with branding Supabase as an alternative to Firebase. Firebase is a PaaS whereas Supabase is more of a BaaS.
To get the equivalent functionality of Firebase, you'd need to add something like Netlify or Vercel to pair it with your Supabase backend.
So if you assume their revenue is in that range, you're looking at a 66x to 133x ARR multiple. In today's market that's quite a big markup: standard SaaS right now is probably more like 5-15x. AI gets a lot more (but Supabase isn't AI). But they are a key leader in their market, so they probably get a meaningful bonus for that. And I'm sure a lot of big industry investors were competing against each other for the Supabase deal, so that definitely would have helped the valuation too. Also, at their maturity today, they are probably showing some great success signing big enterprise deals and telling a story about how that will grow.
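To make the implied figures concrete (this just reverses the multiples quoted above against the $2B valuation; Supabase's actual ARR isn't public):

```python
valuation_musd = 2000  # $2B valuation, in millions of dollars

# Back out the ARR each quoted multiple would imply.
implied_arr = {m: valuation_musd / m for m in (66, 133)}
for multiple, arr in implied_arr.items():
    print(f"{multiple}x multiple implies ~${arr:.0f}M ARR")
```

That is, a 66x-133x multiple at a $2B valuation corresponds to assuming roughly $15M-$30M of ARR.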
That being said, those factors alone don't answer 66-133x. Perhaps Supabase's strongest angle is their opportunity for product-led growth:
- They have a huge number of people on a free tier
- The growth rate of free tier users might be accelerating
- The conversion rate of free tier users to paid users might also be increasing
- They're adding more things that people can pay for, increasing the LTV of customers. E.g., for my business, we probably 20x'd our Supabase cost in the last 6 months - most of that is due to our growth, but there are also a lot of things we can buy from Supabase beyond compute.
So I would assume, in addition to the above, they're telling a story about how their actual revenue growth rate will accelerate meaningfully because of all of these factors working together.
Lots of assumptions in here, but you can start to see how a lot of different factors + a hype multiple could lead to such a valuation.
My prediction: they're banking on a big exit to OpenAI or Anthropic as the de facto backend for an AI IDE.
They're the only big alternative to Firebase, and Firebase just got pulled into Google AI Studio.
All the components are declarative HTML and update in realtime. Similar concept as HTMX but doesn't require any backend code. You can still implement complex UX, authentication, access control and filtered views (indexing and all).
I built this app with it over a few months as a weekend project: https://www.insnare.net/app/#/onboarding/country/All
"They ship buggy, insecure messes" "They don't know how to fix what AI gave them" etc etc etc
Right. Like that same thing hasn't been happening literally during the entire existence of programming. I, for one, welcome the vibe coders. I hope it grows their interest in the field and encourages them to go deeper and learn more. Will some be lazy and not even try? Of course! Will some get curious and learn the ins and outs? Absolutely.
Either that, or they need to add features and products alongside the DB to essentially replace the likes of Vercel.
Having said that, Supabase is probably the best "cloud DB" I've played around with, so I hope they succeed.
I was a speaker at a local Supabase event just a few weeks ago: https://shorturl.at/JwWMk. We had a local event in Abuja, Nigeria, where we promoted their Launch Week 14 series, highlighting new features from Supabase. In reality, it became an event showing people how to bootstrap a quick backend for their SME business in a weekend.
Nevertheless congrats to the Supabase team!