I switched from Htmx to Datastar
The article states that some of the basic (great) functionality of Datastar has been moved to the Datastar Pro product (?!).
I’m eager to support an open source product financially and think the framework author is great, but the precedent this establishes isn’t great.
That said, the attitude of the guy in the article is really messed up. Saying "fuck you" to someone who gave you something amazing for free, because he's not giving you as much as you want for free -- it's entitled to a toxic degree, and poisons the well for anyone else who may want to do something open-source.
You're of course free to do it, just as I'm free to continue using other products which do not do this.
Seems to fit in with your world view better and then I can just leave those people high and dry with much less concern!
You're not owed these people's time.
If the point is good stewardship of the product, deleting features that users clearly need only to replace with a for-pay version stinks possibly only slightly less than deprecating features users clearly need and then replacing with nothing. Both of these things mean your product sucks.
For a proper way this would work, you the user would contribute your time for those features so you don't overburden the maintainer, but people like you won't and so this is where we're at.
I'd rather avoid burnout (which will kill a project entirely) and lose a few folks like you.
For a proper way this would work, you the user would contribute your time for those features so you don't overburden the maintainer, but people like you won't and so this is where we're at.
Sure, if you're up front and honest from the beginning, then some users will do that, but the majority are likely to go for other offerings which don't suffer this problem. Vanishingly few users are going to be cool with features disappearing and then reappearing later with a price tag attached, which is the scenario we're talking about.
In reality, 99.9% of users are going to be using whatever free thing is available, and your project will live for just about as long as you personally care about it. Rightly or wrongly, the maintainer's work/life balance isn't at the forefront of your mind when you're looking at npm packages, and no amount of grandstanding will change that.
For such reasons, The Economist style guide advises against using fancy language when simpler language will suffice.
I don't really have much of a response beyond this.
Bit like Pydantic. It's a JSON parsing library at the end of the day, and now suddenly that's got a corporate backer and they've built a new thing.
Polars is similar. It's a faster Pandas and now suddenly it's no longer the same prospect.
FastAPI the same. That one I find even more egregious since it's effectively Starlette + Pydantic.
Edit: Add Plotly/Dash, SQLAlchemy, Streamlit to that list.
Saying "fuck you" to someone who gave you something amazing for free, because he's not giving you as much as you want for free
I don't have a problem, on principle, with paywalling new features. I don't like it, but I don't think it's bad behaviour.
Putting up a paywall around features that were previously free, however, I do take issue with. It's deceptive and it's not common practice. It tricks people into becoming invested and then holds hostage the features that they've become invested in using. Frankly, fuck that.
I'm not opposed to open source projects placing features that realistically only large/enterprise users would use behind a paywall, i.e. the open core model. When done fairly, I think this is the most sustainable way to build a business around OSS[1]. I even think that subscriptions to such features are a fair way of making the project viable long-term.
But if the project already had features that people relied on, removing them and forcing them to pay to get them back is a shitty move. The right approach would've been to keep every existing feature free, and only commercialize additional features that meet the above criteria.
Now, I can't say whether what they paywalled is a niche/pro feature or not. But I can understand why existing users wouldn't be happy about it.
Forking is always an option, of course, but not many people have the skills nor desire to maintain a piece of software they previously didn't need to. In some cases, this causes a rift in the community, as is the case for Redis/Valkey, Terraform/OpenTofu, etc., which is confusing and risky for users.
All of this could've been avoided by keeping all existing features freely available to everyone, and commercializing new value-add features for niche/enterprise users. Not doing that has understandably soured people's opinion of the project and tarnished their trust, as you can see from that blog post, and from comments here and on Reddit. It would be a mistake to ignore or dismiss them.
First, barely anyone used datastar at that point, and those features were particularly arcane. So, the impact was minimal.
Second, it's likely that even fewer of them contributed anything at all to the project in general, and to those features in particular. What claim do they have to anything - especially when it was just freely given to them, and not actually taken away (the code is still there)?
And to the extent that they can't or won't fix it themselves, what happens if the dev just says "I'm no longer maintaining Datastar"? You might say "well, at least he left them something usable", but how is that any different from considering the Pro changes to just be a fork? In essence, he forked his own project - why does anyone have any claim to any of it?
Finally, if they can't fix it themselves (especially when AI could almost certainly fix it rapidly), should they really be developing anything?
In the end, this really is a non-issue. Again, most of the furor is quite clearly performative. It's like when DHH removed TypeScript from one of the projects that he and his company maintain, and people who have nothing to do with Ruby came out of the woodwork to decry the change in his GitHub repo. And even if they do have something to do with Ruby, they have no say over how he writes his code.
a lot of what you said rests upon the notion that people were relying on these features.
They were, though. The blog post linked above, and several people in the Reddit thread linked in the blog post mentioned depending on these features.
We can disagree about whether it matters that a small percentage of people used them, but I would argue that even if a single person did, a rugpull is certainly a shitty experience for them. It also has a network effect: if other people see that the developers did this once, they are likely to believe something similar can happen again. Once trust is lost, it's very difficult to gain back.
Second, it's likely that even fewer of them contributed anything at all to the project in general, and to those features in particular. What claim do they have to anything - especially when it was just freely given to them, and not actually taken away (the code is still there)?
I think this is a very hostile mentality to have as an OSS developer. Delaney himself expressed something similar in that Reddit thread[1]:
I expect nothing from you and you in turn should expect nothing from me.
This is wrong on many levels.
When a software project is published, whether as open source or otherwise, a contract is established between developers and potential users. This is formalized by the chosen license, but even without one, there is an unwritten contract. At a fundamental level, it states that users can expect the software to do what it advertises. That is, that it solves a particular problem or serves a particular purpose, which is the point of all software. In turn, at the very least, the developer can expect the project's existence to serve as an advertisement of their brand. Whether they decide to monetize this or not, there's a reason they chose to publish it in the first place. It could be to boost their portfolio, which can help them land jobs, or to benefit them in other, more direct ways.
So when that contract is broken, which for OSS typically happens by the developer, you can understand why users would be upset.
Furthermore, the idea that because users are allowed to use the software without any financial obligations they should have no functional expectations of it is incredibly user-hostile. It's akin to the proverb "don't look a gift horse in the mouth", which boils down to "I can make this project as shitty as I want to, and you can't say anything about it". At that point, if you don't care about listening to your users, why even bother releasing software? Why choose to preserve user freedoms on one hand, but on the other completely alienate and ignore those users? It doesn't make sense.
As for your point about the code still being there, that may be technically true. But you're essentially asking users to stick with a specific version of the software that will be unmaintained moving forward, as you focus on the shiny new product (the one with the complete rewrite). That's unrealistic for many reasons.
And to the extent that they can't or won't fix it themselves, what happens if the dev just says "I'm no longer maintaining Datastar"?
That's an entirely separate scenario. If a project is not maintained anymore, it can be archived, or maintenance can be picked up by someone else. Software can be considered functionally complete and require little maintenance, but in the fast-moving world of web development, that is practically impossible. A web framework, no matter how simple, will break eventually, most likely in a matter of months.
Finally, if they can't fix it themselves (especially when AI could almost certainly fix it rapidly), should they really be developing anything?
Are you serious? You expect people who want to build a web site and move on with their lives to dig into a foreign code base and fix the web framework? It doesn't matter how simple or complex it is. The fact that you think this is a valid argument, and that you additionally insult their capability, is wild to me. Bringing up "AI" is laughable.
Again, most of the furor is quite clearly performative.
Again, it's really not. A few people (that we know of) were directly impacted by this, and the network effect of that has tarnished the trust other people had in the project. Doubling down on this, ignoring and dismissing such feedback as "performative", can only further harm the project. Which is a shame, as I truly do want it to gain traction, even if that is not the authors' goal.
Anyway, I wish you and the authors well. Your intentions seem to come from the right place, but I think this entire thing is a misstep.
[1] https://old.reddit.com/r/datastardev/comments/1lxhdp9/though...
a rugpull is certainly a shitty experience for them
It would certainly be a shitty experience if there had actually been a rugpull, which there was not. People who were using the version of Datastar that had all those features are still free to keep using that version. No one is taking it away. No rug was pulled.
a contract is established between developers and potential users
Sorry, but no. The license makes this quite clear–every open source license in the world very explicitly says 'NO WARRANTY' in very big letters. 'No warranty' means 'no expectations'. Please, don't be one of those people who try to peer-pressure open source developers into providing free software support. Don't be one of the people who says that 'exposure' is a kind of payment. I can't put food on my table with 'exposure'. If you think 'exposure' by itself can be monetized, I'm sorry but you are not being realistic. Go and actually work on monetizing an open source project before you make these kinds of claims.
why even bother releasing software?
Much research and study is not useful for many people. Why even bother doing research and development? Because there are some who might find it useful and convert it into something that works for themselves. Open source software is a gift. The giving of the gift does not place obligations on the giver. If you give someone a sweater, are you expected to keep patching it whenever it develops holes?
If a project is not maintained anymore, it can be archived, or maintenance picked up by someone else.
Then why can't the old free version be maintained by someone else?
A web framework, no matter how simple, will break eventually, most likely in a matter of months.
Sure, the ones that depend on a huge npm transitive dependency cone can. But libraries or frameworks like htmx and Datastar are not like that, they are single <script> files that you include directly in your HTML. There is no endless treadmill of npm packages that get obsoleted or have security advisories all the time.
You expect people who want to build a web site and move on with their lives to dig into a foreign code base, and fix the web framework?
Well...ultimately, if I use some open source software, I am actually responsible for it. Especially if it's for a commercial use case. I can't just leech off the free work of others to fix or maintain the software to my needs. I need to either fix my own issues or pay someone to do it. If the upstream project happens to do it for me, I'm in luck. But that's all it is. There is ultimately no expectation that open source maintainers will support me for free, perpetually, when I use their software.
A few people (that we know of) were directly impacted by this
What impact? One guy blogged that just because there are some paid features, it automatically kills the whole project for him. There's no clear articulation of why exactly he needs those exact paid features. Everything else we've seen in this thread is pile-ons.
Doubling down on this, ignoring and dismissing such feedback as "performative"
Aren't you doing the same thing? You have been ignoring and dismissing the feedback that this is actually not that big of a deal. Why do you think that your opinion carries more weight than that of the actual maintainers and users of the project?
People who were using the version of Datastar that had all those features are still free to keep using that version.
Why are you ignoring my previous comment that contradicts this opinion?
No one is taking it away. No rug was pulled.
When Redis changed licenses to SSPL/RSAL, users were also free to continue using the BSD-licensed version. Was that not a rug pull?
In practice, it doesn't matter whether the entire project was relicensed, or if parts of it were paywalled. Users were depending on a piece of software one day, and the next they were forced to abide by new terms if they want to continue receiving updates to it. That's the very definition of a rug pull. Of course nobody is claiming that developers physically took the software people were using away—that's ridiculous.
Sorry, but no. The license makes this quite clear
My argument was beyond any legal licensing terms. It's about not being an asshole to your users.
I can't put food on my table with 'exposure'.
That wasn't the core of my argument, but you sure can. Any public deed builds a brand and reputation, which in turn can lead to financial opportunities. I'm not saying the act of publishing OSS is enough to "put food on your table", but it can be monetized in many ways.
Open source software is a gift. The giving of the gift does not place obligations on the giver. If you give someone a sweater, are you expected to keep patching it whenever it develops holes?
Jesus. There are so many things wrong with these statements that I don't know where to start...
OSS is most certainly not a "gift". What a ridiculous thing to say. It's a philosophy and approach for making computers accessible and friendly to use for everyone. It's about building meaningful relationships between people, in ways that let us all collectively build a better future for everyone.
Seeing OSS as a plain transaction, where users should have absolutely no expectations beyond arbitrary license terms, is no better than publishing proprietary software. Using it to promote your brand while ignoring your users is a corruption of this philosophy.
Then why can't the old free version be maintained by someone else?
I addressed this in my previous comment.
Sure, the ones that depend on a huge npm transitive dependency cone can. But libraries or frameworks like htmx and Datastar are not like that
Eh, no. Libraries with fewer dependencies will naturally require less maintenance, but they are not maintenance-free. Browsers frequently change. SDK language ecosystems frequently change. Software doesn't exist in a vacuum, and it is incredibly difficult to maintain backwards compatibility over time. Ask Microsoft. In the web world, it's practically impossible.
What impact? One guy [...]
Yeah, fuck that guy.
Everything else we've seen in this thread is pile-ons.
Have you seen Reddit? But clearly, everyone who disagrees is "piling on".
Aren't you doing the same thing? You have been ignoring and dismissing the feedback that this is actually not that big of a deal. Why do you think that your opinion carries more weight than that of the actual maintainers and users of the project?
Huh? I'm pointing out why I think this was a bad move, and why the negative feedback is expected. You can disagree with it, if you want, but at no point did I claim that my opinion carries more weight than anyone else's.
Why are you ignoring my previous comment that contradicts this opinion?
Because it doesn't contradict it, it just disagrees with it. What actual argument did you have that people using an old version of the software can't keep using it? The one about things constantly breaking? On the web, the platform that's famously stable and backward-compatible? Sorry, I just don't find that believable for projects like htmx and Datastar, which are very self-contained and use basic features of the web platform, not crazy things like WebSQL, for example.
When Redis changed licenses to SSPL/RSAL, users were also free to continue using the BSD-licensed version. Was that not a rug pull?
Firstly, there are tons of people on old versions of Redis who didn't even upgrade through all that and weren't even impacted. Secondly, Redis forks sprang up almost immediately, which is exactly what you yourself said was a viable path forward in an earlier comment–someone new could take over maintaining it. That's effectively what happened with Valkey.
My argument was beyond any legal licensing terms.
And my argument is that there is no 'beyond' legal licensing terms, the terms are quite clear and you agree to them when you start using the software. In your opinion should it be standard practice for people to weasel their way out of agreed license terms after the fact?
Any public deed builds a brand and reputation, which in turn can lead to financial opportunities.
Notice that you're missing quite a lot of steps there, and even then you can only end with 'can lead' to financial opportunities. Why? Because there's no guarantee that anyone will be able to monetize exposure. No serious person would claim that that uncertain outcome constitutes any kind of 'contract'. Anyone who does should be rightly called out.
It's about building meaningful relationships between people in ways that we can all collectivelly build a better future for everyone.
Then by your own logic shouldn't everyone contribute to that effort? Why is it that only the one guy who creates the project must bear the burden of maintaining all of it in perpetuity?
Seeing OSS as a plain transaction
Isn't that what you are doing by claiming that OSS is about providing software in exchange for exposure?
Yeah, fuck that guy.
The guy who didn't even explain what exactly he lost by not being able to use the new paywalled features? The guy who likely was not impacted at all, and was just ranting on his blog because he didn't like someone monetizing their own project? You want us to take that guy seriously?
everyone who disagrees is "piling on".
Everyone who disagrees? Yeah. Anyone who provides a coherent argument about exactly what they are missing out on by not being able to afford the paid version? I would take them seriously. I haven't seen anyone like that here.
Are people being entitled in expecting it? Yes. Is there something stopping people from taking up this work and creating a repo? No. But it is illustrative of the attitude of the owners. The point is not to accuse them of a rug pull, but to ask how confident the community can be in taking a dependency on such a project. The fact that the lead dev had to write an article responding to misunderstandings reflects how the community feels about this.
On their Discord, the argument for the 'contact us for pricing' licensing for professional teams is that it depends on the number of employees in the company, including non-tech folks.
Here's the text of the mit license https://mit-license.org/
At no point does it say anything like "I am obliged to maintain this for you forever, or even at all, let alone to your liking"
despite your good intentions, you don't seem to have even the slightest understanding of open source
Please. Resorting to ad hominem when you don't have good arguments against someone's opinion is intellectually lazy.
At no point does it say anything like "I am obliged to maintain this for you forever, or even at all, let alone to your liking"
I'm well familiar with most OSS licenses. I never claimed they said this.
My point was about an unwritten social contract of not being an asshole. When you do a public deed, such as publishing OSS, and that project gains users, you have certain obligations to those users at a more fundamental level than the license you chose, whether you want to acknowledge this or not.
When you ignore and intentionally alienate users, you can't be surprised when you receive backlash for it. We can blame this on users and say that they're greedy, and that as a developer you're allowed to do whatever you want, because—hey, these people are leeching off your hard work!—but that's simply hostile.
The point of free software is to provide a good to the world. If your intention is to just throw something over the fence and not take users into consideration—who are ultimately the main reason we build and publish software in the first place—then you're simply abusing this relationship. You want to reap the benefits of exposure that free software provides, while having zero obligations. That's incredibly entitled, and it would've been better for everyone involved if you had kept the software private.
I'll go further this time - not only do you not understand open source licensing or ecosystem even slightly, but it's genuinely concerning that you think that someone sharing some code somehow creates "a relationship" with anyone who looks at it. The point of free software is free software, and the good to the world is whatever people make of that.
Again, the only people who seem to be truly bothered by any of this are people who don't use datastar.
Don't use it. In fact, I suspect that the datastar maintainers would prefer that you, specifically, don't use it. Use it to spite them! We don't care.
I also retract my statement about you having good intentions/communicating in good faith. I won't respond to you again.
the only people who seem to be truly bothered by any of this are people who don't use datastar.
Yeah, those silly people who were previously interested in Datastar, and are criticizing the hostility of how this was handled. Who cares what they think?
Don't use it. We don't care. In fact, I suspect that the datastar maintainers would prefer that you, specifically, don't use it.
Too bad. I'll use it to spite all of you!
I also retract my statement about you having good intentions/communicating in good faith.
Oh, no.
But, honestly, to the people who actually understand, like and use Datastar, none of this matters. Most of the outrage is performative, at best - as can be seen by the pathetically superficial quality of the vast majority of criticisms in threads like this.
Frankly, if people can't/won't see that the devs are very clearly not VC rugpull assholes, and that the vast majority of the functionality is available for free, then they're probably also the sorts of people who aren't a good fit for something that is doing a great rethink of web development. The devs very explicitly are not trying to get rich (nor can they, due to the 501c3!) nor do they want this to be something massive - they're building it for their own needs, first and foremost, and for those who understand that vision.
But I'm now here to defend Datastar.
It's their code, which, up to now, they built and literally gave away totally for free, under an MIT license. Everything (even what "they moved to the Pro tier") should still be free and under the MIT license that it was originally published under.
You just decided to rely on it and freeload (since, as far as I can tell, you never contributed to the project).
You decided to rely on a random third party that owns the framework. And now you're outraged because they've decided that from now on, future work will be paid.
You know the three magic words:
Just. Fork. It.
The software was released as a free version, with NO expectation for it to go commercial.
The fact that they switched to a paid version, stripping out features from the original free version, is called "bait and switch".
If OP had known in advance, he would have been informed about this and the potential $299 price tag, and he would have been able to make an informed decision BEFORE integrating the code.
You just decided to rely on it and freeload (since, as far as I can tell, you never contributed to the project).
But you complain about him being a freeloader for not contributing to the project. What a ridiculous response.
I feel like you never even read the post and are assuming OP is a full-time programmer.
Datastar can do whatever they want; it's their code. But calling out a *bait and switch* does not make OP the bad guy.
It's not bait and switch; it's that main has the features we are willing to continue to support, given we did a whole rewrite, and this is what we think you should use. Don't like it? Fork it, the code is still there. I hope your version is better!
It sounds like you are the dev of Datastar...
Let me give one piece of advice. Drop the attitude, because this is not how you interact in public as the developer of a paid piece of software.
You can get away with a lot when it's a free/hobby project, but the moment you request payment, there is a requirement for more professionalism. The reactions I am reading will trigger responses that will hurt your future paycheck. You're already off to a bad start with this "bait and switch"; do not make it worse.
I really question your future client interactions if they criticize your product(s) or practices.
I hope your version is better!
No need for Datastar; my HTMX "alternative" has been in production (through different rewrites) for over 20 years. So thank you for the offer, but no need.
I did read the post. I know OP is not a programmer. And that makes it even worse: OP has the audacity to say they "make no money from the project" while it is a scheduling tool for their presumably plenty money-making clinic.
It would in fact be less shocking if they were a programmer doing a side project for fun.
This piece is not a rational, well-tempered article. It's a rant by someone who took something that was free and is now outraged, saying fuck you to those who made their project possible in the first place, without even understanding how licenses work or being aware that the code they relied on is still there, on GitHub, fully intact, and available to them.
This sort of person not only wants to get it for free. They want the code to be maintained and improved for free, in perpetuity.
They deserve to be called freeloaders.
Just. Fork. It.
The “outrage” is literally just people saying they’ll use a different project instead. Why would they ever fork it? They don’t like the devs of Datastar, and they don’t want to use it going forward. Yes, the developers are allowed to do what they want with their code and time, but people are allowed to vote with their feet and go elsewhere, and they are allowed to be vocal about it.
The rest is plugins, which anyone can write or modify. There's no need for the plugins to get merged upstream - just use them in your project, and share them publicly if you want. You could even do the same with the pre-pro versions of the pro plugins - just make the (likely minor) modifications to make them compatible with the current datastar core.
They're also going to be releasing a formal public plugin api in the next release. Presumably it'll be even easier to do all of this then.
This is nonsensical. Someone did something for free. Fantastic. They used it, successfully, for a production system that enables scheduling for their job.
Nobody took that away from them. They didn't force them to rebuild their tool.
The code is even there, in the git history, available for them.
If OP doesn't like what the devs decided to do with the project, just move on or fork and pay someone to help you fix any outstanding bugs or missing features.
The modern view is the one OP and much of the younger generation agree upon: it should always be open source and continue to be supported by the community.
The old-school view is basically take it or leave it: fork it into your own and take on the maintenance burden too.
I had been tracking Datastar for months, waiting for the 1.0.0 release.
But my enthusiasm for Datastar has now evaporated. I've been bitten by the open-source-but-not-really bait and switch too many times before.
My current thoughts lean towards a fully functional open source product with a HashiCorp style BSL and commercial licensing for teams above a size threshold.
Many corporations wouldn't buy licenses and those that would pay for support wanted support for hardware that was 2 or 3 generations old.
Gentle reminder: please encourage your corporation to pay for open source support whenever possible.
if you want to promise open source software simply to attract the mindshare and users who habitually ignore anything that isn't open source, trying to capture financial value may well be infeasible unless some rare confluence of stars lines up for you. the key is in the word "capture" - capturing the value implies making sure it goes to you rather than to someone else, and that means imposing restrictions that will simply piss those same users off.
i am also the president of the local youth baseball program and helped get BigSkyDevCon over the hump
i think you'd be surprised at how little time i actually spend on twitter
HTMX is a single htmx.js file with like 4000 lines of pretty clearly written code.
It purports to add - and I think succeeds in adding - a couple of missing hypermedia features to HTML.
It's not a "framework" - good
It's not serverside - good
Need to add a feature? Just edit htmx.js
https://htmx.org/essays/future/
there are bugs, but we have to trade off fixes against potentially breaking users' sites that depend (sometimes implicitly) on the current behavior
this makes me very hesitant to make changes and accept PRs, but i also feel bad closing old issues without really diving deep into them
such is life in open source
It was a full rewrite. Use the beta release forever if it has all the tools you need. No one is stopping you. Open source doesn't owe you anything, and I expect the same back.
The author was constantly pushing it in the HTMX discord, telling anyone who would listen that if they liked HTMX how great Datastar would be for them
You know who else does that? THE DEVELOPER OF HTMX! https://htmx.org/essays/alternatives/
Some pretty classy comments from them on reddit too:
What is unclassy about those comments? Seem sensible to me...
We all know they are evil. But you know the most evil thing? That code that was previously released under a free license? Still sneakily on display in the git history like the crown jewels in the Tower of London. Except instead of armed guards defending the code that wants to be free once more, it's hidden behind arcane git commands. Name me a single person that knows how to navigate the git history. I'm waiting. Spoiler alert: I asked Claude and they don't exist.
And your rebuttal is, "Well you can always recover the code from the git history?"
I mean, this is true, but do you think this really addresses the spirit of the post's complaint? Does mentioning they're a non-profit change anything about the complaint?
The leadership and future of a software project is an important component in its use professionally. If someone believes that the project's leadership is acting in an unfair or unpredictable way then it's rational and prudent for them to first express displeasure, then disassociate with the project if they continue this course. But you've decided to write a post that suggests the poster is being irrational, unfair, and that they want the project to fail when clearly they don't.
If you'd like to critique the post's points, I suggest you do so rather than straw manning and well-poisoning. This post may look good to friends of the project, but to me as someone with only a passing familiarity with what's going on? It looks awful.
I was facing a situation where I either needed to stick with the beta or pay for a pro version, as I was using the replace-url function a lot.
Emotionally, I felt betrayed. I went to the datastar reddit thread to ask whether more features that I rely on in the free version would be stripped out and put behind the paywall. I was fine with converting my service to purely free-tier features, and once my service was stable and usable, I was very willing to buy a pro license.
But you know what? The datastar author jumped out and stated two points. He said the release version of datastar is a full rewrite; if I am not paying, I can stay on the beta or fork it. And in the open source world, he owed me nothing. Very legit points.
However, the real reason behind that fuck you statement is that I was attacked by the datastar discord members multiple times. In one of the humiliating replies I got, that guy said someone in the discord server told them to show support to datastar. Instead of supporting, they just mocked me and called me a troll, as if I was an obstacle to their potential success, multiple people, multiple times.
I noticed some comments in the thread saying that I don't know how to use version control, or that I am ignorant about software licenses. Well, I do use version control and occasionally contribute to open source projects. I am a doctor, I may not be as skillful as you all, but I do know some basics of programming.
I got personal attacks, publicly or by DMs. A guy told me that they were told to defend the project.
The only nice thing I got from them was an alternative method to imitate replace-url function using only free-tier features.
A guy told me that they were told to defend the project.
Might be nice if you can back that up. I see no drshapeless in our Discord logs.
The only nice thing I got from them was an alternative method to imitate the replace-url function using only free-tier features. So we did give you an alternative to going against how the framework is supposed to work.
Yeah that tracks with writing that blog. Good luck to ya, Datastar is definitely not a good fit.
I paid the one-off $299 for a pro license but have yet to find a reason to use any of the pro features.
I was hoping to need them for the google sheets clone[1] I was building but I seem to be able to do it without PRO features.
As was said by the commenter in another reply, the inspector is actually the bit that makes the Pro version much more appealing but most people wouldn't know from the sidelines.
[1] https://data-star.dev/ [2] https://data-star.dev/reference/datastar_pro#attributes
It’s not like your existing use cases stop working past 10 users or something.
They've said that the feature they put in the premium product are the features they don't want to build or maintain without being paid to do so.
Most people using Datastar will not necessarily be smart enough to fork it and add their own changes. And when Datastar makes a new release of the base/free code people will want to keep up to date. That means individuals have to figure out how to integrate their already done changes into the new code and keep that going. It's not a matter of if something breaks your custom code but when.
Finally, many people internalize time as money with projects like this. They're spending many hours learning to use the framework. They don't want to have the effort made useless when something (ex: costs or features) changes outside of their control. Their time learning to use the code is what they "paid" for the software. Doesn't matter if it's rational to you if it is to them.
I am confused.
Nothing wrong with people making money on their software, but you need to make it clear from the start that it will be paid software, and in what price range.
Bait and switch is often used to get people to use your software: you spend time on it, and then if you need a Pro feature, well, fork up or rework your code again. So you're paying with your time or money. This is why it's nasty and gets people riled up.
It's amazing how many people are defending this behavior.
Is the problem that one needs to fork / maintain the code from now on? Is the problem that one wants free support on top of the free library?
I had a running service written in htmx for some time. It is a clinic opening hour service to inform my patients when I will be available in which clinic. (Yes, I am not a programmer, but a healthcare professional.)
-> that was pretty freaking cool to read, loved it
also chuckled at the idea of a website-making health professional going all "What the fuck." in front of his codebase.
But no, the datastar dev somehow moved a portion of the freely available features behind a paywall. What the fuck.
Bait & Switch. They're within their rights to do it, but it's a bad move, and nobody should use their project^M^M^M^Mduct anymore.
They focus on the practical solutions much more than on the typical bikeshedding.
I like the communal aspect of open source, but I don’t like overly demanding and entitled freeloaders. I’ve had enough of that in my well-paid career over the last decade.
This way of getting paid may or may not resonate, but I applaud the attempt to make it work.
The replace-url thing should be a simple JS code using history API no?
[1] https://github.com/sudeep9/datastar-plugins?tab=readme-ov-fi...
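For the curious, yes: a minimal stand-in can be a few lines on the History API. This sketch is my own illustration (the helper names are invented, not Datastar's API):

```javascript
// Hypothetical replacement for a replace-url feature: rewrite the address
// bar in place, without triggering a navigation or reload.
function buildUrl(path, params) {
  const qs = new URLSearchParams(params).toString();
  return qs ? `${path}?${qs}` : path;
}

function replaceUrl(path, params) {
  const url = buildUrl(path, params);
  // history only exists in a browser; guarded so the helper is testable.
  if (typeof history !== "undefined" && history.replaceState) {
    history.replaceState(null, "", url);
  }
  return url;
}
```

For example, `replaceUrl("/board", { x: 123, y: 456 })` would set the URL to `/board?x=123&y=456` without a round trip to the server.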
I may be a little biased because I've been writing webapps with htmx for 4 years now, but here are my first thoughts:
- The examples given in this blogpost show what seems to be the main architectural difference between htmx and Datastar: htmx is HTML-driven, Datastar is server-driven. So yes, the API on the client side is simpler, but that's because the other side has to be more complex: in the first example, if the HTML element doesn't hold the information about where to inject the HTML fragment returned by the server, the server has to know it, so you have to write it somewhere on that side. I guess it's a matter of personal preference then, but from an architecture point of view both approaches still stand
- The argument of "less attributes" seems unfair when the htmx examples use optional attributes with their default value (yes you can remove the hx-trigger="click" on the first example, that's 20% less attributes, and the argument is now 20% less strong)
- Minor but still: the blogpost would gain credibility and its arguments would be stronger if HTML was used more properly: who wants to click on <span> elements? <button> exists just for that, please use it, it's accessible ;-)
- In the end I feel that the main Datastar selling point is its integration of client-side features, as if Alpine or Stimulus features were natively included in htmx. And that's a great point!
While morph will figure it out, it's unnecessary work done on the server to evaluate the entire body
Philosophically, I agree with you though.
Partly, because the minute you have a shared widget across users 50%+ of your connected users are going to get an update when anything changes. So the overhead of tracking who should update when you are under high load is just that, overhead.
Being able to make those updates coarse-grained and homogeneous makes them easy to throttle, so changes are effectively batched and you can easily set a max rate at which you push changes.
Same with diffing, the minute you need to update most of the page the work of diffing is pure overhead.
So in my mind, somewhat counterintuitively, a simpler coarse-grained system will actually perform better under heavy load in that worst-case scenario. At least that's my current reasoning.
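A rough sketch of that throttling idea (the names here are illustrative, not Datastar internals): collapse any number of changes into a single dirty flag, and push at most one coarse-grained re-render per tick.

```javascript
// Coarse-grained batching: N changes inside one interval collapse into a
// single push. In a real server, tick() would be driven by a timer
// (e.g. every 100ms); here it's explicit so the behavior is easy to see.
function makeBatcher(push) {
  let dirty = false;
  return {
    markDirty() { dirty = true; },       // called on every state change
    tick() {                             // called once per interval
      if (dirty) { dirty = false; push(); }
    },
  };
}
```

With this shape, a thousand user actions inside one interval still cost exactly one render and one push, which is what makes the worst case cheap.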
Edit - rather than spam with multiple thank you comments, I'll say here to current and potential future repliers: thanks!
This reduces a lot of accidental complexity. If done well, you only need to care about the programming language and some core libraries. Everything else becomes orthogonal to each other, so the cost of changes is greatly reduced.
I assume it had backend scaling issues, but usually backend scaling is over-stated and over-engineered, meanwhile news sites load 10+ MB of javascript.
Alpine or Stimulus features were natively included in htmx
I'm contemplating using HTMX in a personal project - do you know if there are any resources out there explaining why you might also need other libraries like Alpine or Stimulus?
if the HTML element doesn't hold the information about where to inject the HTML fragment returned by the server, the server has to know it, so you have to write it somewhere on that side
I'm not too strong in frontend, but wouldn't this make for a lighter, faster front end? Especially added up over very many elements?
Also, by this argument should we leave out the 'href' attribute from the '<a>' tag and let the server decide what page to serve? Of course not, the 'href' attribute is a critical part of the functionality of HTML.
Htmx makes the same argument for the other attributes.
htmx is HTML-driven, Datastar is server-driven
As far as I understand, the main difference between HTMX and datastar is that HTMX uses innerHTML-swap by default and datastar uses the morph-swap by default, which is available as an extension for HTMX[1].
Another difference is that datastar comes with SSE, which indeed makes it server driven, but you don't have to use SSE. Also datastar comes with client-side scripting by default. So you could say the datastar = integrated HTMX + idiomorph + SSE + Alpine.
And reading comments one would think this is some amazing piece of technology. Am I just old and cranky or something?
This feels... very hard to reason about. Disjoint.
You have a front-end with some hard-coded IDs on e.g. <div>s. A trigger on a <button> that black-box calls some endpoint. And then, on the backend, you use the SDK for your choice language to execute some methods like `patchElements()` on e.g. an SSE "framework" which translates your commands to some custom "event" headers and metadata in the open HTTP stream and then some "engine" on the front-end patches, on the fly, the DOM with whatever you sent through the pipe.
This feels to me like something that will very quickly become very hard to reason about globally.
Presentation logic scattered in small functions all over the backend. Plus whatever on-render logic through a classic template you may have, because of course you may want to have an on-load state.
I'm doing React 100% nowadays. I'm happy, I'm end-to-end type safe, I can create the fanciest shiny UIs I can imagine, I don't need an alternative. But if I needed it, if I had to go back to something lighter, I'd just go back to all in SSR with Rails or Laravel and just sprinkle some AlpineJS for the few dynamic widgets.
Anyway, I'm sure people will say that you can definitely make this work and organize your code well enough and surely there are tons of successful projects using Datastar but I just fail to understand why would I bother.
My dream was having a Go server churning out all this hypermedia and I could swerve using a frontend framework, but I quickly found the Go code I was writing was rigid and convoluted. It just wasn’t nice. In fact it’s the only time I’ve had an evening coding session and forgotten what the code was doing on the same evening I started.
I’m having a completely opposite experience with Elixir and Phoenix. That feels like an end to end fluid experience without excessive cognitive load.
Granted, I’ve only used it for smaller projects, but I can almost feel my brain relax as the JS fades out, and suddenly making web apps is super fun again.
I've also onboarded interns and juniors onto React codebases, and there are things about React that only really make sense if you're more old-school and know how different types behave, to understand why certain things are necessary.
I remember explaining to an intern why passing an inlined object as a prop was causing the component to rerender, and they asked whether that's a codebase smell... That question kinda shocked me because to me it was obvious why this happens, but it's not even a React issue directly. However, the fix is to write "un-javascripty" code in React. So this person's intro to JS is React, and their whole understanding of JS is weirdly anchored around React now.
So I totally understand the critique of hooks. They just don't seem to be in the spirit of the language, but do work really well in spite of the language.
As someone who survived the early JS wilderness, then found refuge in jQuery, and after trying a bunch of frameworks and libraries, finally settled on React: I think React is great, but objectively parts of it suck, and it's not entirely its fault
and they asked whether that's a codebase smell...
Something that's been an issue with our most junior dev: he's heard a lot of terminology but never really learned what some of those terms mean, so he'll use them in ways that don't really make sense. Your example here is just the kind of thing I'd expect from him, if he's heard the phrase "code smell" but assumed something incorrect about what it meant and never actually looked up what it means.
It is possible your co-worker was asking you this the other way around - that they'd just learned the term and were trying to understand it rather than apply it.
if I had to go back to something lighter, I'd just go back to all in SSR with Rails
FWIW, default config of Rails include Turbo nowadays, which seems quite similar to Datastar in concept.
e.g. Datastar prescribes a single long lived SSE endpoint that owns the state for the currently connected user's view of the world / app, while common practice in Turbo is to have many small endpoints that return a fragment of html when requested by the client.
I've a tutorial that demonstrates this with Nushell as the backend: https://datastar-todomvc.cross.stream
An interesting characteristic of Datastar: it's very opinionated about the shape of your backend but extremely unopinionated about how you implement that shape.
The term was coined in 1965 by Ted Nelson in: https://dl.acm.org/doi/10.1145/800197.806036
Here's the exact sentence: "The hyperfilm-- a browsable or vari-sequenced movie-- is only one of the possible hypermedia that require our attention."
But it's "just html", so it's all fine
Edit: Oh, don't forget that "Especially datastar, which doesnt add any non-html-spec attributes" in reality adds two custom DSLs. One in the form of HTML attributes, and the other in the form of a JS-like DSL:
<button data-on-click__window__debounce.500ms.leading="$foo = ''"></button>
But as long as it's superficially HTML-spec compliant, this is nothing. https://developer.mozilla.org/en-US/docs/Web/HTML/How_to/Use...
At least you're living up to your profile! "Opinions on things I know nothing about"
At least you're living up to your profile! "Opinions on things I know nothing about"
I've had this pinned on my twitter profile for a few years now, for people like you: https://x.com/dmitriid/status/1860589623321280995
I never argued that those attributes weren't compatible with HTML.
1. If the element is out-of-band, it MUST have `hx-swap-oob="true"` on it, or it may be discarded / cause unexpected results
2. If the element is not out-of-band, it MUST NOT have `hx-swap-oob="true"` on it, or it may be ignored.
This makes it hard to use the same server-side HTML rendering code for a component that may show up either OOB or not; you end up having to pass down "isOob" flags, which is ugly and annoying.
node +@ Hx.swap_oob "true"
And this adds the `hx-swap-oob=true` attribute to the given node. It makes it trivial to add on any defined markup in an oob swap. I get that many people prefer template-based rendering, but imho to extract the maximum power from htmx an HTML library that's embedded directly in your programming language is much more powerful.
https://github.com/yawaramin/dream-html/blob/f7928616b9ca1d6...
to extract the maximum power from htmx an HTML library that's embedded directly in your programming language is much more powerful.
I'm actually using gomponents, but the maintainer doesn't like the vibe of adding attributes to existing nodes.
https://github.com/maragudk/gomponents/issues/276
(I don't really understand his argument, but in general I'm in favor of maintainers doing what they think is the right thing; and in any case I'm using his work without paying, so not gonna complain.)
But even if I had an easy way to add the attribute, the fact that I need to think about that extra step is a bit of extra friction HTMX imposes, which datastar doesn't.
Interestingly, elements sent via the HTMX websocket extension[1] do use OOB by default.
As for Datastar, all the signal and state stuff seems to me like a step in the wrong direction.
Edit: right, as long as the element has x-sync on it, it will receive any OOB updates from any response.
For those of you who don't think Datastar is good enough for realtime/collaborative/multiplayer and/or think you need any of the PRO features.
These three demos each run on a $5 VPS and don't use any of the PRO features. They have all survived the front page of HN. Datastar is a fantastic piece of engineering.
- https://checkboxes.andersmurphy.com/
- https://cells.andersmurphy.com/
- https://example.andersmurphy.com/ (game of life multiplayer)
On both the checkboxes/cells examples there's adaptive view rendering so you can zoom out a fair bit. There's also back pressure on the virtual scroll.
Tbh that mental model seems so much simpler than any or all of the other datastar examples I see with convoluted client state tracking from the server.
Would you build complex apps this way as well? I'd assume this simple approach only works because the UI being rendered is also relatively simple. Is there any content I can read around doing this "immediate mode" approach when the user is navigating across very different pages with possibly complicated widget states needing to be tracked to rerender correctly?
Yes, we are building complex accounting software at work with Datastar and use the same model. "Real UI" is often more complex, but a lot less heavy: fewer divs, less data, fewer concurrent users, etc. compared to these demos. Checkboxes are a lot more div-dense than a list of rows, for example.
think you need any of the PRO features
Pro features? Now I see - it is open core, with a $299 license. I'll pass.
I don't use anything from pro and I use datastar at work. I do believe in making open source maintainable though, so I bought the license.
The pro stuff is mostly a collection of foot guns you shouldn't use and are a support burden for the core team. In some niche corporate context they are useful.
You can also implement your own plugins with the same functionality if you want it's just going to cost you time in instead of money.
I find devs complaining about paying for things never gets old. A one-off lifetime license? How scandalous! Sustainable open source? Disgusting. Oh, a proprietary AI model that is built on others' work without their consent and steals my data? Only $100 a month? Take my money!
On both the checkboxes/cells examples there's adaptive view rendering so you can zoom out a fair bit.
how do you zoom out?
Also, even with your examples, wouldn't data-replace-url be a nice-to-have to auto update the url with current coordinates, e.g. ?x=123&y=456
The billion items themselves are just on a server on the backend, stored in a SQLite database.
I have honestly yet to see an example where using something like React doesn't just look like it's adding unnecessary complexity.
[1] https://github.com/Xajax/Xajax
While htmx reminds me of Adobe Spry Data[2], enough that I did some research into htmx and realized that Spry Data's equivalent is an htmx plugin and htmx itself is more similar to Basecamp's Hotwire. I assume there should be a late-2000s-era AJAX library that does something similar to htmx, but I didn't use one, as jQuery was easy enough anyway.
[2] https://opensource.adobe.com/Spry/articles/spry_primer/index...
Anyway, as other commenters have said, the idea of htmx is basically that for some common use cases where you used jQuery, you might as well use no JavaScript at all to achieve the same tasks. But that is not possible today, so think of htmx as a polyfill for future HTML features.
Personally I still believe in progressive enhancements (a website should work 100% without JavaScript, but you'll lose all the syntactic sugar - for example Hashcash-style proof of work captcha may just give you the inputs and you'll have to do the exact same proof of work manually then submit the form), but I've yet to see any library that is able to offer that with complex interface, without code duplication at all. (Maybe ASP.NET can do that but I don't like the somewhat commercialized .NET ecosystem)
The UI I think would require React is a wizard-style form with client-side rendered widgets (e.g. tabs). If you can't download a library to implement that, it is a lot of work to implement it on the backend, especially in modern websites where your session is now a JWT instead of $_SESSION, which requires a shared global session storage engine. I'd imagine that if you don't use React, when the user goes back to the tabbed page you'd need to either implement tab-switching code on the backend side as well, or cheat and emit JS code to switch the active tab to whatever the backend wants.
The UI I think would require React is a wizard-style form with clientside rendered widgets (eg. tabs).
Can you think of any example sites/web apps which illustrate what you mean? I'm imagining something like VSCode, but AFAIK it's built with a custom JS framework and not React.
basically built for us old-skool types
Glad I'm not the only one. Ever since the first HTMX article, I felt like I was kidding myself. I had/have this thought in my head that "no way that we were that close to having all this right 25 years ago." I'm coming around and seeing that this tech gets the job done by doing one thing really well, and the whole API around it is dead-simple and bulletproof because of it. It's that good-old UNIX philosophy that's the enabling tech here.
While I can't say for certain that IE6 or early Firefox could have handled DOM swaps gracefully without real shadow DOM support, early Ajax provided the basic nuts-and-bolts to do all of this. So, why haven't we seen partial page updates as a formalism, sooner?
* Datastar sends all responses using SSE (Server-Sent Events). Usually SSE is employed to allow the server to push events to the client, and Datastar does this, but it also uses SSE encoding of events in response to client-initiated actions like clicking a button (clicking the button sends a GET request and the server responds with zero or more SSE events over a time period of the server's choosing).
* Whereas HTMX supports SSE as one of several extensions, and only for server-initiated events. It also supports Websockets for two-way interaction.
* Datastar has a concept of signals, which manages front-end state. HTMX doesn't do this and you'll need AlpineJS or something similar as well.
* HTMX supports something called OOB (out-of-band), where you can pick out fragments of the HTML response to be patched into various parts of the DOM, using the ID attribute. In Datastar this is the default behaviour.
* Datastar has a paid-for Pro edition, which is necessary if you want certain behaviours. HTMX is completely free.
I think the other differences are pretty minor:
* Datastar has smaller library footprint but both are tiny to begin with (11kb vs 14kb), which is splitting hairs.
* Datastar needs fewer attributes to achieve the same behaviours. I'm not sure about this, you might need to customise the behaviour which requires more and more attributes, but again, it's not a big deal.
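To make the SSE point concrete, here's the wire framing such responses ride on. The `event:`/`data:`/blank-line structure is just the standard SSE format; the specific event name and payload shape below are my guess at Datastar's convention, not something to rely on:

```javascript
// Serialize one Server-Sent Event. Per the SSE format, each "data:" line
// becomes part of the event payload and a blank line terminates the event.
function sseEvent(name, dataLines) {
  return [
    `event: ${name}`,
    ...dataLines.map((line) => `data: ${line}`),
    "", // blank line ends the event
  ].join("\n") + "\n";
}

// A hypothetical element patch (event name and payload are assumptions):
const frame = sseEvent("datastar-patch-elements", [
  'elements <div id="status">Rebuilt!</div>',
]);
```

The server can emit zero, one, or many such frames down the same open response, which is how one button click can drive a whole sequence of UI updates.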
D* doesn't only use SSE. It can do normal HTTP request-response as well. Though SSE can also do 0, 1, or infinite responses too.
Calling datastar's pro features "necessary" is a bit disingenuous - they literally tell people not to buy it because those features, themselves, are not actually necessary. They're just bells and whistles, and some are actually a bad idea (in their own words).
Datastar is 11kb and that includes all of the htmx plugins you mentioned (sse, idiomorph) and much more (all of alpine js, essentially).
Calling datastar's pro features "necessary" is a bit disingenuous
I didn't. I said:
* Datastar has a paid-for Pro edition, which is necessary if you want certain behaviours. HTMX is completely free.
I don't need to spell out why this means something very different to what you think it means.
I'll happily concede on the other two quibbles.
<div hx-get="{% url 'web-step-discussion-items-special-counters' object.bill_id object.pk %}?{{ request.GET.url...who knows how many characters long it is.
It's hard to tell whether they optimised the app, deleted a ton of noise, or just merged everything into those 300-character-long megalines.
of course it (should) lead to a lot less code! at the cost of completely foregoing most of the capabilities offered by having a well-defined API and a separate client-side application
... and of course this is happening as over the last ~2 decades we mostly figured out that most of that amazing freedom on the client-side is not worth it
... most clients are dumb devices (crawlers), most "interactions" are primitive read-only ones, and having a fast and simple site is a virtue (or at least it makes economic sense to shunt almost all complexity to the server-side, as we have fast and very capable PoPs close to users)
... and of course this is happening as over the last ~2 decades we mostly figured out that most of that amazing freedom on the client-side is not worth it
It's not that, at least in my opinion, it's that we love (what we perceive as) new and shiny things. For the last ten years with Angular, React, Vue et al., new waves of developers have forgotten that you can output stuff directly from the server to the browser outside of "APIs".
This implementation is "dumb" to me. Feels like the only innovation is using SSE; otherwise it's roughly `selector.addEventListener('click', async () => { selector.outerHTML = await (await fetch(endpoint)).text(); })`. That's most of the functionality right there. You can even use native HTML element attributes instead of wiring up the click listener yourself https://developer.mozilla.org/en-US/docs/Web/API/Element/cli....
I really don't see any benefit to using this.
saying nah, fuck this, let's just do a rerender is what happened, and going back to doing it on server-side is one way, but doing it on client-side is the "React way"
I have also no idea why people love template spaghetti! But of course it can work, and simply setting the bar lower might actually help projects against feature creep.
These "we cut 70% of our codebase" claims always make me laugh.
There's also a slide in my talk that presents how many JS dependencies we dropped, while not adding any new Python. Retrospectively, that is a much more impressive achievement.
<span hx-target="#rebuild-bundle-status-button" hx-select="#rebuild-bundle-status-button" hx-swap="outerHTML" hx-trigger="click" hx-get="/rebuild/status-button"></span>
Turn into:
<span data-on-click="@get('/rebuild/status-button')"></span>
The other examples are even more confusing. In the end, I don't understand why the author switched from HTMX to Datastar.
Datastar keeps the logic in the backend. Just like we used to do with basic html pages where you make a request, server returns html and your browser renders it.
With Datastar, you are essentially doing a kind of PWA where you load the page once and then, as you interact with it, it keeps making backend requests and rendering the desired changes, instead of reloading the entire page. But you are getting back snippets of HTML, so the browser does not have to do much except the rendering itself.
This also means the state is back in the backend as well, unlike with SPA for example.
So again, Datastar goes back to the old request-response HTML model, which is perfectly fine, valid and tried, but it also allows you to have dynamic rendering, like you would have with JavaScript.
In other words, the front-end is purely visual and all the logic is delegated back to the backend server.
This essentially is all about thin client vs smart client where we constantly move between these paradigms where we move logic from backend to the frontend and then we swing back and move the logic from the frontend to the backend.
We started with thin clients as computers did not have sufficient computing power back in the day, so backend servers did most of the heavy lifting while the thin clients did very little (essentially they just rendered the ready-made information). That changed over time; as computers got more capable, we moved more logic to the frontend, and it allowed us to provide faster interaction as we no longer had to wait for the server to return a response for every interaction. This is why there is so much JavaScript today, why we have SPAs and state on the client.
So Datastar essentially gives us a good alternative to choose whether we want to process more data on the backend or on the frontend, whilst also retaining the dynamic frontend; it is not just a basic request-response where every page has to re-render and where we have to wait for the request to finish. We can do this in parallel and still have the impression of a "live" page.
If you still don't get it, Datastar is essentially like server-side rendering in JS, for PWAs, but it allows you to use any language you want on the backend whilst having a micro-library (datastar itself) on the frontend, allowing you to decouple JS from the frontend and backend whilst still having all the benefits of it.
The Datastar code instead says: "when this span is clicked, fetch /rebuild/status-button and do whatever it says". Then, it's /rebuild/status-button's responsibility to provide the "swap the existing #rebuild-bundle-status-button element with this new one" instruction.
If /rebuild/status-button returns a bunch of elements with IDs, Datastar implicitly interprets that as a bunch of "swap the existing element with this new one" instructions.
This makes the resulting code look a bit simpler since you don't need to explicitly specify the "target", "select", or "swap" parts. You just need to put IDs on the elements and Datastar's default behavior does what you want (in this case).
This isn't really a criticism of Datastar, though: I think the popularity of OOB in HTMX indicates that the pure form of this is too idealistic for a lot of real-world cases. But it would be nice if we could come up with a design that gives the best of both worlds.
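A toy model of that id-matching default, with the "DOM" reduced to a Map from id to markup (the regex parsing is crude and purely for illustration):

```javascript
// Apply a response of rendered elements: each element carrying an id
// replaces whatever currently holds that id — no explicit target, select,
// or swap attributes needed on the client.
function applyPatch(dom, elements) {
  for (const el of elements) {
    const match = el.match(/id="([^"]+)"/);
    if (match) dom.set(match[1], el); // swap existing element for new one
  }
  return dom;
}
```

The trade-off the comment describes falls out of this: the server response alone decides what gets replaced, so the markup stays clean, but you can no longer tell from the HTML what a click will do.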
You send down the whole page on every change. The client just renders. It's immediate mode like in video games.
If one were to rerender the entire page every time, what's the advantage of any of these frameworks over just redirecting to another page (as form submissions do by default)?
1. It's much better in terms of compression and latency. With brotli/zstd you get compression over the entire duration of the connection, so you keep one connection open and push all updates down it; all requests return a 204 response. Because everything comes down the same connection, brotli/zstd can give you 1000-8000x compression ratios. In my demos, for example, one check is 13-20 bytes over the wire even though it's 140 KB of HTML uncompressed. Keeping the packet size around 1 KB or less is great for latency. A redirect also requires more round trips.
2. The server is in control. I can batch updates. The reason these demos easily survive HN is that the updates are batched every 100 ms: at most one new view gets pushed to you every 100 ms, regardless of the number of users interacting with your view. In the case of the GoL demo the render is actually shared between all users, so it only renders once per 100 ms regardless of the number of concurrent users.
3. The DX is nice and simple: good old View = f(state), like React, just over the network.
Because everything comes down the same connection brotli/zstd can give you 1000-8000x compression ratios.
Isn't this also the case by default for HTTP/2 (or even just HTTP/1.1 `Connection: keep-alive`)?
The server is in control. I can batch updates.
That's neat! So you keep a connection open at all times and just push an update down it when something changes?
The magic is that brotli/zstd are very good at streaming compression thanks to forward/backward references. What this effectively means is that the client and the server share a compression window for the duration of the HTTP connection. So rather than each message being compressed separately with a new context, each message is compressed with the context of all the messages sent before it. In practice, if you are sending 140 KB of divs on each frame but only one div changed between frames, the next frame will only be 13 bytes, because the compression algorithm basically says to the client "you know that message I sent you 100ms ago? Well, this one is almost identical apart from this one change." It's like a really performant byte-level diffing algorithm, except you as the programmer don't have to think about it. You just re-render the whole frame and let compression do the rest.
In these demos I push a frame to every connected client when something changes, at most every 100 ms. That means all the changes that happen in that window are effectively batched into a single frame. It also means the server stays in charge and controls the flow of data (including back pressure, if it's under too much load or the client is struggling to render frames).
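A minimal sketch of that batching loop (illustrative, not the actual demo code): collect everything that arrives within the interval, then push one frame.

```python
import asyncio
import time

async def batcher(queue: asyncio.Queue, push, interval: float = 0.1):
    """Coalesce all updates arriving within `interval` seconds into one frame."""
    while True:
        batch = [await queue.get()]             # block until something changes
        deadline = time.monotonic() + interval
        while (remaining := deadline - time.monotonic()) > 0:
            try:
                batch.append(await asyncio.wait_for(queue.get(), remaining))
            except asyncio.TimeoutError:
                break
        push(batch)                             # render + push a single frame

async def demo():
    frames = []
    queue = asyncio.Queue()
    task = asyncio.create_task(batcher(queue, frames.append, interval=0.05))
    for change in range(5):                     # five rapid-fire changes...
        queue.put_nowait(change)
    await asyncio.sleep(0.2)
    task.cancel()
    return frames                               # ...arrive as a single frame

frames = asyncio.run(demo())
print(frames)
```

The five changes land in one pushed frame, and because `push` runs at most once per interval, render cost stays bounded no matter how many updates arrive.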
The patch statements on the server injecting HTML seem absolutely awful in terms of separation of concerns, and it would undoubtedly become an unwieldy nightmare in an application of any size once more HTML is being injected from the server.
I've written customer-facing interfaces in HTMX and currently quite like it.
One comment: HTMX supports out-of-band swaps, which make it possible to update multiple targets in one request. There are also ways for the server to redirect the target to something else.
I use this a lot, as well as HTMX's support for SSE. I'd have to check what Datastar offers here, because SSE is one thing that makes dashboarding in HTMX a breeze.
created: 18 minutes ago
Maybe I'm cynical, but fresh new accounts praising a product that keeps only the core open source work as a negative ad.

They converted it from React to HTMX, cutting their codebase by almost 70% while significantly improving its capabilities.
Happy user of https://reflex.dev framework here.
I was tired of writing backend APIs with the only purpose that they get consumed by the same app's frontend (typically React). Leading to boilerplate code both backend side (provide APIs) and frontend side (consume APIs: fetch, cache, propagate, etc.).
Now I am running 3 different apps in production for which I no longer write APIs. I only define states and state updates in Python. The frontend code is written in Python too, and auto-transpiled into a React app, which keeps its state and views automagically in sync with the backend. I am only 6 months into Reflex, but so far it's been mostly a joy. Of course you've got to learn a few small but important details, such as state dependencies and proper state caching, but the upsides of Reflex are a big win for my team and me. We write less code and ship faster.
I was tired of writing backend APIs with the only purpose that they get consumed by the same app's frontend
PostgREST is great for this: https://postgrest.org
I run 6 React apps in prod, which used to consume APIs written with Falcon, Django and FastAPI. Since 2 years ago, they all consume APIs from PostgREST. I define SQL views for the tables I want to expose, and optionally a bunch of SQL grants and SQL policies on the tables if I have different roles/permissions in the app, and PostgREST automatically transforms the views into endpoints, adds all the CRUD + UPSERT capabilities, handles the authorization, filtering, grouping, ordering, insert returning, pagination, and so on.
The server deciding what to replace reminds me of some old (dangerous, I think) patterns like returning actual JS from the server which the client then executes.
But it was a nice pattern to work with: for example if you made code changes you often got hot-reloading ‘for free’ because the client can just query the server again. And it was by definition infinitely flexible.
I’d be interested to hear from anyone with experience of both Datastar and Hotwire. Hotwire always seemed very similar to HTMX to me, but on reflection it’s arguably closer to Datastar because the target is denoted by the server. I’ve only used Hotwire for anything significant, and I’m considering rewriting the messy React app I’ve inherited using one of these, so it’s always useful to hear from others about how things pan out working at scale.
Also, custom actions [https://turbo.hotwired.dev/handbook/streams#custom-actions] are super powerful; we use them to emit browser events, update DOM classes and attributes, and so on. Just be careful not to overuse them.
The server deciding what to replace reminds me of some old (dangerous, I think) patterns like returning actual JS from the server which the client then executes.
Basically every single web page on the modern web has the server returning JS that the client then executes. I think you should clarify what's dangerous about the specific pattern you're thinking of that isn't already intrinsic to the web as a whole.
Having just watched the Vite documentary, I'd say both HTMX and Datastar have a higher-order mission to challenge dominant incumbent JS frameworks like React/NextJS. HTMX is struggling, and in my opinion Datastar is DOA!
Win adoption, win the narrative, then figure out how to cash in. The people behind Vite won the JS bundling race; they now have a new company, Void(0), and have raised venture money. NextJS solved major React pain points, gave it away for free, and built a multi-billion-dollar infrastructure business to host it.
Sorry I care more about metrics and flamegraphs than what tech youtuber is faffing on about.
Htmx gives me bad vibes from having tons of logic _in_ your html. Datastar seems better in this respect but has limitations Hotwire long since solved.
Htmx gives me bad vibes from having tons of logic _in_ your html
Write some HTMX and you'll find that exactly the opposite is true
tons of logic _in_ your html
That is not at all what HTMX does. HTMX is "If the user clicks[1] here, fetch some html from the server and display it". HTMX doesn't put logic in your HTML.
[1] or hovers or scrolls.
At this point, why not just bite the bullet and go back to the old days of PHP serving HTML?
Going back to it is the point. HTMX lets you do that while still having that button refresh just a part of the page, instead of reloading the whole page. It's AJAX with a syntax that frees you from JS and manual DOM manipulation.
I fairly recently developed an app in PHP, in the classic style, without frameworks. It provided me with stuff I remembered, the $annoyance $of $variable $prefixes, the wonky syntax, and a type system that makes JS look amazing -- but it still didn't make me scream in pain and confusion like React. Getting the app done was way quicker than if any JS framework was involved.
Having two separate but tightly integrated apps is annoying. HTMX or any other classic web-dev approaches like PHP and Django make you have one app, the backend. The frontend is the result of executing the backend.
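That one-app model can be sketched with nothing but the standard library (routes and markup here are hypothetical): the same backend renders both the full page and the fragment that htmx later swaps in, so there is no separate frontend app to keep in sync.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import datetime

PAGE = """<!doctype html>
<script src="https://unpkg.com/htmx.org@2"></script>
<button hx-get="/fragment" hx-target="#out">Refresh part of the page</button>
<div id="out"></div>"""

def render(path: str) -> str:
    """One backend renders everything: the full page and its fragments."""
    if path == "/fragment":
        # Only this fragment is re-rendered and swapped in; no full reload.
        return f"<p>Server-rendered at {datetime.datetime.now():%H:%M:%S}</p>"
    return PAGE

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = render(self.path).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To try it: HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```

Clicking the button issues a GET to /fragment and swaps the response into the #out div; the "frontend" is nothing more than the backend's rendered output.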
Both are just small JavaScript libraries that let you do interactive stuff declaratively in your server-rendered HTML. But Datastar is smaller, simpler, more powerful, and closer to web standards.
I wrote /dev/push[1] with FastAPI + HTMX + Alpine.js, and I'm doing a fair bit with SSE (e.g. displaying logs in real time, updating the state of deployments across lists, etc.). Looking at the Datastar examples, I don't see where things would be easier than this[2]:
<div
hx-ext="sse"
sse-connect="{{ url_for('project_event', team_id=team.id, project_id=project.id) }}"
sse-close="sse:stream_closed"
>
<div
hx-trigger="sse:deployment_creation, sse:deployment_status_update, sse:deployment_rollback, sse:deployment_promotion"
hx-get="{{ url_for('project_deployments', team_slug=team.slug, project_name=project.name).include_query_params(fragment='deployments') }}"
hx-target="#deployments"
></div>
</div>
Also curious what others think of web components. I tried to use them when I was writing Basecoat[3] and ended up reverting to regular HTML + CSS + JS. Too brittle, too many issues (e.g. global styling), too many gaps (e.g. state).

[1] https://devpu.sh
[2] https://github.com/hunvreus/devpush/blob/main/app/templates/...
But what I’m most excited about are the possibilities that Datastar enables. The community is routinely creating projects that push well beyond the limits experienced by developers using other tools.
For example, when displaying the list of deployments, rather than trying to update each individual deployment as its state changes, it's simpler to just update the whole list. Your code is way simpler/lighter because you don't need to account for all the edge cases (e.g. the pager).
I am not saying it is wrong. It is just a bit funny, from my perspective, to watch the pendulum now swing the other way.
Let's say I'm intrigued and on the fence.
One of the amazing things from David Guillot's talk is how his app updated the count of favorited items even though that element was very far away from the component that changed the count.
This might not seem like a big deal, but it looks like Datastar dramatically reduces the overhead of a common use-case. The article shows how to update a component and a related count, elsewhere in the UI.
A more practical use-case might be to show a toast in tandem with navigating to another view. Or updating multiple fields on a form validation failure.
Everything old is new again.
Having the backend aware of the IDs in the HTML leads to pain. The HTMX way seems a lot simpler and Rails + Turbo has gone in that direction as well.
with a REST API, the front and back ends need to agree on the JSON field names
with an HTML API for Datastar, the front and back ends need to agree on the element IDs
Really not a huge difference
That's why the H in HTTP and in HTML stand for "Hypertext." Any time a webserver replies with something other than markup, _that's_ the extension/exception to that very old design.
Now, if you're talking about the separation of user-interface, data, logic, and where HTML fits in, that's a much bigger discussion.
I'm seriously keen on trying it out. It's not like HTMX is bad; I've built a couple of projects with it with great success, but they all required some JS glue logic (I ended up not liking Alpine.js for various reasons) to handle events.
If Datastar can minimize that as well, even better!
If you are looking to understand what's possible when you use datastar and you have some familiarity with Go, I hope this is a solid starting point for you.
I am okay with the open-core and pro model.
But, the maintainers are quite combative on HN and Reddit as well. This does not bode well for the otherwise great project.
Since then, teams everywhere have discovered the same thing: turning a single-page app into a multi-page hypermedia app often slashes lines of code by 60% or more while improving both developer and user experience.
Well, not at all. The only compelling reason for me to use server-side rendering for apps (not blogs, obviously; those should be HTML) is metadata tags. That's why I switched away from pure React, and everything has been harder, slower for the user, and more difficult to debug than client-side rendering.
Datastar developers are free to do what they want with their code, but as someone who releases open source software, I'm tired of projects using open source simply to create a moat or user base then switch to a proprietary model.
then I saw the fact that licensing changes are going to be frequent
What are you referring to here? Sounds important.
Edit: Looks like it's this: <https://drshapeless.com/blog/posts/htmx,-datastar,-greedy-de...>
Or the backwards-incompatible HTMX v2 will finish it off, leaving all the obsolete codebases behind. It's the circle of life.
...To accomplish this, most HTMX developers achieve updates either by “pulling” information from the server by polling every few seconds or by writing custom WebSocket code, which increases complexity.
This isn't true: HTMX has native support for "pushing" data to the browser with WebSockets or SSE, without custom code.
You have my sword!