The GitHub website is slow on Safari
It's a product of many cooks and their brilliant ideas and KPIs, a social network for devs and code being the most "brilliant" of them all. For day-to-day dev operations it's so mediocre that even GitLab looks like the gold standard by comparison.
And no, the problem is not "Rails" or [ insert any other tech BS to deflect the real problems ].
> And no, the problem is not "Rails"
The problem is they abandoned rails for react. The old SSR GitHub experience was very good. You could review massive PRs on any machine before they made the move.
> if they were forced to use slow machines, they would not be able to put out crap like that
The problem is developers having fast modern machines. If they were forced to use slow machines, they would not be able to put out crap like that.
Lots of developers are rather obsessed with writing good, performant code. The problem is rather that many project managers do not let them do these insane optimizations because they take time.
The only things that forcing developers to use slow machines will bring is developers quitting (and quite a lot of them would actually love to see the person responsible for this decision dead (I'm not joking) because he made the developers' job a hell on earth).
What you should rather do if you want performant software is to fire all the project managers who don't give the developers the necessary time (or don't encourage the developers) to write highly optimized code (i.e. those idiot project managers who argue with "pragmatism" concerning this point).
> because they take time
No they don't. It's literally just a skill issue.
To give just one simple example: to get the textbook complexity bound for Dijkstra's algorithm, you need some fancy mergeable-heap data structures which are much more complicated, and thus more time-intensive to implement, than the naive implementation.
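For scale, a minimal sketch (mine, not the commenter's; the graph and names are made up): even the short-to-write binary-heap version below runs in O((V + E) log V), while the textbook O(E + V log V) bound needs a Fibonacci or similar mergeable heap with far more implementation effort.

```python
import heapq

def dijkstra(graph, source):
    """Binary-heap Dijkstra: O((V + E) log V).

    `graph` maps a node to a list of (neighbor, weight) pairs.
    The textbook O(E + V log V) bound needs a Fibonacci heap,
    which is far more code for a speedup that rarely matters.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

The lazy-deletion trick (pushing duplicates and skipping stale entries) is what keeps this version simple compared to a decrease-key heap.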
Or you can get insane low-level optimizations by using the SIMD instructions that modern processors provide. Unfortunately, this takes a lot of time and leads to code that is not easy to understand (and thus not easy to write) for "classically trained" programmers.
Yes, you indeed need a lot of skill to write such very fast algorithms, but even for such ultra-smart programmers, finding and applying such optimizations needs a lot of development time, which is why this is often only done for code that is insanely computation-intensive and performance-critical, such as video (and sometimes audio) codecs.
Now you CAN do it so that this is not the case, but tbh I have never seen that in the wild.
Edit: here's a good investigation on a real-enough app https://www.developerway.com/posts/tailwind-vs-linaria-perfo...
When native software is slow, it's bad software. When web software is slow, react is bad software.
This is such a tired trope.
The reality is both can be slow, it depends on your data access patterns, network usage, and architecture.
But the other reality is that SPAs and REST APIs usually have less optimal network usage and much worse data access patterns than traditional DB-connected SSR monoliths. The same goes for microservices.
Like, you could design a highly scalable and optimal SPA. Who's doing it? Almost nobody.
No, instead they're making basically one endpoint per DB table, recreating SQL queries in client-side memory, duplicating complex business logic on the front and back end, and sending 50 requests to load a dashboard.
GitHub is big software, but not that big. Huge monorepos and big big diffs grind GitHub to a pulp.
Which, unfortunately, cannot be measured :( so no KPIs. Darn!
It's all fun and games until you cut quality over and over so much that your customers just leave. Ask Chrysler or GE. I mean, they must have saved, what, billions across decades? And for free!
Well... um... not free actually, because those companies have been run into the ground, dragged through hell, revived, and then damned again.
> Which, unfortunately, cannot be measured
This is such a subtle but important thing that so many people do not understand about data analysis. It's even at the heart of things like survivorship bias[0]. Your measure is always a proxy, and this proxy has varying degrees of alignment with what you want to measure.

I know everyone knows the cliche "the devil is in the details," but everyone seems to continually make these mistakes because nuance is hard. But then again, what is a cliche if not words of wisdom that everyone can recite but fail to follow?
> Its all fun and games until you cut quality over and over so much your customers just leave.
The alternative is you develop a Lemon Market, which is a terrible situation for all parties involved. Short-term profits might be up, but at the loss of much higher long-term rewards.

[0] You infer where the downed planes were shot through the measures you can make on recovered planes. But that is very different from measuring where downed planes were shot. You can't just take the inverse of the returned planes and know where to add plating from there.
Maybe it will make a significant enough cumulative impact 5 years later that it can actually be noticed and defended in a meeting against other priorities.
But I’ve never heard of anyone hiring someone on minimum wage and deferring a huge bonus to 5 years later.
Even if it does make a big impact, would anyone even take such a job?
Available data confirms that SPAs tend to perform worse than classic SSR.
I’m not a frontend dev, and have next to zero experience with anything beyond jQuery, but an analogy is shell. Bash (and zsh, though I find some of its syntactic sugar nicer, albeit still inscrutable) will happily let you do extremely stupid things, but it also lets you do extremely complicated things in a very concise manner. That doesn’t mean it’s inherently bad, it means you need to know what the hell you’re doing, and use linters, write tests, etc.
I’m sure you could make something work better as a SPA, but nobody does.
Github's code view page has been unreasonably slow for the last several years ever since they migrated away from Rails for no apparent reason.
> The problem is they abandoned rails for react.
Which, it seems, was a result of the M$ acquisition: https://muan.co/posts/javascript
> Writing on the internet can be a two-way thing, a learning experience guided by iteration and feedback. I’ve learned some bad habits from Hacker News. I added Caveats sections to articles to make sure that nobody would take my points too broadly. I edited away asides and comments that were fun but would make articles less focused. I came to expect pedantic, judgmental feedback on everything I wrote, regardless of what it was.
https://macwright.com/2022/09/15/hacker-news
Which is true. Pedantism is the lowest form of pseudo-intelligence.
> Pedantism is the lowest form of pseudo-intelligence.
You can’t just lay this bear trap of an opportunity and expect me to not pedantically state that the word is either “pedantry”, the activity performed by pedants, or “pedantic”, to describe such activities.
“Pedantism” would be a philosophy or viewpoint that extols pedantry. Pedantism would be to pedantry as deontology is to rule-following, a justification of an activity. As such, pedantism would be a slightly higher form of pseudo-intelligence than mere pedantry.
But only slightly.
A fun part of a retro at my company last year was me explaining to a team, “had all of your pods’ requests succeeded, the DB would have been pushing out well over 200 Gbps, which is generally reserved for top-of-rack switches.” Of course, someone else then had to translate that into “4K Blu-Rays per second,” because web devs aren’t typically familiar with networking, racks, data centers…
If github has a million users visiting it per day on a FRESH cache, and all of them have to download at least 10 megabytes of text data (both of these numbers are far too high), you are at ... 0.015 "4k blurays per second". Yeah I think MS's datacenters will survive.
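That back-of-envelope math, as a hedged sketch (the inputs are the comment's own, deliberately inflated numbers):

```python
# All inputs are the comment's deliberately inflated assumptions.
users_per_day = 1_000_000
bytes_per_user = 10 * 1024**2   # 10 MiB of text on a fresh cache
seconds_per_day = 86_400

throughput = users_per_day * bytes_per_user / seconds_per_day  # bytes/s
print(f"{throughput / 1024**2:.1f} MiB/s")   # ~115.7 MiB/s sustained
print(f"{throughput * 8 / 1e9:.2f} Gbit/s")  # just under 1 Gbit/s
```

Under 1 Gbit/s sustained, which is indeed a rounding error next to the 200 Gbps top-of-rack figure upthread.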
Their "solution" was to enable SSR for us ranters' accounts.
> Server-side rendering (SSR) flag has been enabled for each of you. Can you take a look, click around and let me know if this has resolved some of the usability issues that you've reported here?
The fact that they have this ability / awareness and haven't completely reverted by now is shocking to me.
Meanwhile, I opened a 100K line CSV in Neovim and while it took a couple of seconds to open and render highlighting, after that, it was fine.
There are of course performant react apps out there. What Steve did with tldraw is amazing.
However, the vast majority of the apps out there are garbage since the framework itself is terribly inefficient.
Too bad Phabricator is maintenance-only now https://en.m.wikipedia.org/wiki/Phabricator
My memory is fuzzy but I think it was on phab that I discovered and loved to use stacked merges. This is where you have a merge request into another open merge request etc. Super useful. Miss that in the git world.
I assume this is fallout from dealing with LLM content scrapers.
https://we.phorge.it/phame/post/view/8/anonymous_cloning_dis... https://we.phorge.it/phame/post/view/9/anonymous_cloning_has...
The Github website is slow everywhere.
Perhaps it depends what software one is using
For example, commandline search and tarball/zipball retrieval from the website, e.g., github.com, raw.githubusercontent.com and codeload.github.com, are not slow for me, certainly not any slower than Gitlab
I do not use a browser nor do I use the git software
I use the Github website as I would any software mirror/repository
I'm not interested in images (mascots or other garbage) or executing code (gratuitous Javascript) when using the Github website, I'm interested in reading and downloading source code
The servers at https://codeload.github.com and https://raw.githubusercontent.com are two examples
Each redirects to https://github.com
I still love it! Works great, makes sense, is fast...
Gitlab is anything but light; by default it tends to be slow, but it is surprisingly fast with a good server (nothing crazy, but big) and caching.
Gitea is an example I like because it stores the repository as a bare repository, the same as if I did git clone --bare. I bring it up because when I stopped running Gitea, I could easily go into the data, back up all the repositories, and easily reuse them somewhere else.
GitLab: https://docs.gitlab.com/administration/gitaly/praefect/
GitHub: https://github.blog/engineering/infrastructure/stretching-sp...
Never had any issues with it.
The page the person on the issue had loading for 10s takes almost 2s here.
I had to alter basically every aspect of how I interact with it because of how fucking slow it is! I still can't shake the sense that it's about to go down or that I've done something wrong every time I click something and nothing happens for several seconds.
At the very least, I wish they set it to auto.
Unrealistic timelines, implementing what should be backend logic in the frontend: there are a bunch of ways SPAs tend to be a trap. Was React a bad idea? Can anyone point to a single well-made React app?
Back in the day (I was a junior dev) this was easier than grappling with React hooks today:
BOOL CMainDialog::OnInitDialog()
{
    CDialogEx::OnInitDialog();

    // Create the tabbed property sheet as a child of the main dialog.
    m_pPropertySheet = new CMyPropertySheet(_T("My Tabbed Dialog"), this);
    m_pPropertySheet->Create(this, WS_CHILD | WS_VISIBLE, WS_EX_CONTROLPARENT);

    // Inset the sheet from the edges of the dialog's client area.
    CRect rectMainDialog;
    GetClientRect(&rectMainDialog);
    CRect rectPropertySheet(10, 10, rectMainDialog.Width() - 20, rectMainDialog.Height() - 20);
    m_pPropertySheet->MoveWindow(rectPropertySheet);

    return TRUE;  // let the dialog manager set the initial focus
}
> a single well made react app
What about Slack, the messenger?
Umm, Discord? SoundCloud? Trello? Bandcamp? Spotify?
If I keep going there are actually hundreds and thousands of well-made react apps.
Slack puts a nicer shade of lipstick on the pig than Teams does, but the lips still belong to the same thing.
Slack is the best of a bunch of trash options. That doesn't make it good
Well, that's a valid framework too, but by the practical standard of goodness, the best of trash is actually good, because you don't judge goodness against some abstract ideal but against the available choices. Both are valid frameworks, but only one is useful in practice.
As you point out, it's wildly successful and is the backbone of many groups' internal communication. Many companies would just stop working without Slack; that's a testament to the current team's efforts, but something that critical would also merit better performance.
I'd make the comparison with Figma, which went the extra mile to bring a level of smoothness that just wouldn't be there otherwise.
My IRC client is taking 60 MiB of memory and 0.01% CPU. My IRC client is responsive and faster, and it has more configurable notification settings. I like the IRC client more.
> Bandcamp
I just went to the bandcamp page and it indeed loaded very quickly. As far as I can tell, there's no react in use anywhere so I guess that's why.
What do you mean by bandcamp using react?
It's possible I'm wrong about Bandcamp using React, but your guess is far from reality as well: React itself does not prevent or discourage pages that load very quickly.
Spotify is also very slow with thousands of placeholder skeletons. Remember that Spotify once had a very fast native player.
Are you under the impression that the placeholder skeletons are there and slow because of React? How would a UI written in C++ get the data quicker from the back end to replace the skeleton with?
> Are you under the impression that the placeholder skeletons are there and slow because of React? How would a UI written in C++ get the data quicker from the back end to replace the skeleton with?
Regardless of how, the fact remains that the previous implementation of their UI did fetch and render the data from the backend significantly faster than the current React-based one does.
> What about Slack, the messenger?
You call it well made? I'm sorry for you, you must really live a really harsh life.
> you must really live a really harsh life.
I do. What are you calling well-made software in your nice enlightened life? Open my eyes.
I don’t think the culprit apps would have substantially better UX if they were rendered on the server, because these issues tend to be a consequence of devs being pressured to rapidly release new features without regard to quality.
As an aside, I was an employee around then and I vividly remember that the next half there was a topline goal to improve web speed. Hmmmm, I wonder what could have happened?
And to be fair, the problems that Facebook had when they introduced React are not common problems at all.
That’s one of my favorites. The exact bug they described during React launch presentation, that React was supposed to help fix with the unidirectional dataflow. You know the one where unread message badges were showing up inconsistently in the UI in different places. They never managed to fix that bug in the 10 years since React was announced and I eventually left Facebook for good.
React can have all the niceties and optimization in the world, but that fails when its users insist on using it incorrectly, building huge tangled messy components and then wondering why a click takes 1.3 seconds to deliver feedback.
IMO it's the MAIN thing to understand about React—how it renders.
Regardless, now I'm the one with egg on my face, since the new compiler promises to eventually remove the need for manual memoization almost entirely. The "almost" still fills me with fear.
https://alexsidorenko.com/blog/react-render-always-rerenders
It's a set of 6 short and sweet posts that breaks down rendering behavior, memoization, and relevant hooks
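For what it's worth, the core idea is language-agnostic; here's a tiny Python sketch (illustrative only, nothing React-specific): memoization is just caching keyed on a pure function's arguments.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def slow_square(n: int) -> int:
    # Stand-in for an expensive pure computation.
    return n * n

slow_square(12)   # computed and stored
slow_square(12)   # answered from the memo table
print(slow_square.cache_info().hits)  # 1 hit so far
```

React's `useMemo`/`React.memo` apply the same trick to render output, keyed on props and dependencies, which is why stale or over-broad keys cause the re-render bugs those posts walk through.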
In this very thread there's some asshole using the word "memoization" when "caching" would have been fine.
On react, it's funny that sites where the frontend part is really crucial tend to move away from generic frameworks and do really custom stuff to optimize. I'm thinking about Notion, or Google Sheets, or Figma, where the web interface is everything and pretty early on they just bypass the frontend stacks generally used by the industry.
The main problem is that it tries to do away with a view model layer so you can get the data and render it directly in the components, but that makes managing multiple components from a high level perspective literally impossible. Instead of one view model, you end up with 50 React-esque utilities to achieve the same result.
The problem isn't React. The problem is KPIs and unrealistic timelines. It is the same as it ever was. Not a fault of React at all.
Svelte is ok. It could have been great but the api for their version of observables is a disaster (which I hope they eventually fix). Sveltekit is half baked and convoluted and I strongly advise not touching it.
VDOM is also a good idea that simplifies the mental model tremendously. Of course these days we can do better than a VDOM. Svelte in fact doesn't use a VDOM. You can say that VDOM is a terrible idea in comparison with Svelte, but that's just anachronistic.
I've definitely managed to make a page that uses almost no JavaScript and is dog-slow on Firefox (until Mozilla updated the rendering engine) just by building a table out of flexboxes. There's plenty of places for browsers to chug and die in the increasingly-complicated standard they adhere to.
It's hard to know which member of the duopoly is more guilty for breaking GitHub for me, but I find that blaming both often guarantees success.
I could like, buy a new computer and stuff. But you know, the whole Turing complete thing feels like a lie in the age of planned obsolescence. So web standards are too.
on my 2011 Mac Mini which Apple stopped allowing upgrades on past macOS 10.13
I know some people feel like Apple is aggressive in this respect, but that's an 8 year old version of a browser. That's like taking off all of the locks on your house, leaving the doors and windows open all while expecting your house to never have uninvited guests.
I'd put it on the end user for not updating software on 15 y/o hardware and still expecting the outside world to interact cleanly.
hilariously probably Windows
That's probably true.
> 15 y/o
It's a matter of expectations; many laptops that old still work decently enough with a refreshed battery. Funnily enough, Win10 was released 10 years ago, and one can still get support for it for at least another 3 years, until 2028, even on the consumer license.
Should they be locking Safari to the OS? Definitely not. But users can just go download another browser if they are actually concerned.
That's like taking off all of the locks on your house, leaving the doors and windows open all while expecting your house to never have uninvited guests.
Depending on where you live (or what websites you visit) it's not unreasonable.
…on my 2011 Mac Mini which Apple stopped allowing upgrades on past macOS 10.13.
In case you're one of today's lucky 10,000: OpenCore Legacy Patcher supports Macs going back as far as 2007: https://github.com/dortania/OpenCore-Legacy-Patcher
Planned obsolescence is some of it, some of it is abstractions making it easier for more people to make software (at the cost of using significantly more compute) and Moore’s law being able to support those abstraction layers. Just imagine if every piece of software had to be written in C, the world would look a whole lot different.
I also think we’ve gone a bit too far into abstraction land, but hey, that’s where we are and it’s unlikely we are going back.
Turing completeness is almost an unrelated concept in all of this if you ask me, and if anything it's that very completeness that has driven higher and higher memory and compute requirements.
So GitHub is usable, but there are a number of UI layout issues, and searching within a file is sometimes a mess (e.g., highlighting the wrong text, rendering text incorrectly, etc.). Maybe that's true for all browsers; you're better off viewing a file as text in raw mode.
Firefox doesn't work on Windows 7 anymore but installing Firefox is still a hell of a lot better than sticking to IE.
For my sins I occasionally create large PRs (> 1,000 files) in GitHub, and teammates (who mostly all use Chrome) will sometimes say "I'll approve once it loads for me..."
Upgrade solution from .NET Framework 4.8 => .NET 8
Rename 'CustomerEmailAddress' to 'CustomerEmail'
Upgrade 3rd party API from v3 to v4
I genuinely don't get this notion of a "max # of files in a PR". It all comes off to me as post hoc justification of really shitty technology decisions at GitHub.
A computer will be able to tell, with 100% reliability, that the 497th file has a misspelled `CusomerEmail`, or that change 829 is a regexp failure that trimmed the boolean "CustomerEmailAddressed" to "CustomerEmailed"; humans, not so much.
Or that you had to Ctrl+F "CustomerEmail" and see whether you had 1000 matches, matching the number of changed files, or only 999 due to some typo.
Or using the web interface to filter by file type to batch your reviews.
Or...
Just that in none of those cases there is anything close to our memory/attention capacity.
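The mechanical check being described can be scripted in a few lines; a sketch (the identifier and the `.cs` glob are made up to match the rename example upthread):

```python
from pathlib import Path

OLD = "CustomerEmailAddress"  # hypothetical pre-rename identifier

def find_stragglers(root):
    """Return source files still containing the old identifier
    after a bulk rename — the check humans do badly at file 497."""
    return [p for p in Path(root).rglob("*.cs")
            if OLD in p.read_text(encoding="utf-8", errors="ignore")]
```

Run it after the rename: an empty list means the mechanical part of the review is done, and humans only need to look at anything the script can't judge.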
I work in a large C++ codebase and a rename like that will actually just crash my vscode instance straight-up.
(There are good automated tools that make it straightforward to script up a repository-wide mutation like this however. But they still generate PRs that require human review; in the case of the one I used, it'd break the PR up into tranches of 50-ish files per tranche and then hunt down individuals with authority to review the root directory of the tranche and assign it to them. Quite useful!)
Of course some languages... PHP... aren't so lucky. $customer->cusomerEmail? Good luck dealing with that critical in production, fuckheads!
The point is moreso that PHP won't stop you from doing that. It will run, and it will continue running, and then it will throw an error at some point. Maybe.
If the code is actually executed. If it's in a branch that's only executed like 1/1000 times... not good.
Sure 1000+ changes kills the soul, we're not good at that, but sometimes there's just no other decent choice.
The usual response is something like "if you're correct, wouldn't that mean there are hundreds of cases where this needs to be fixed to resolve this bug?". The answer obviously being yes. Incoming 100+ file PR to resolve this issue. I have no other ideas for how someone is supposed to resolve an issue in this scenario
I would rather just see the steps you ran to generate the diff and review that instead.
what kind of PR would involve that many files?
A very simple example: migrating from JavaEE to JakartaEE. Every single Java source file has to have the imports changed from "javax." to "jakarta.", which can easily be thousands of files. It's also easy to review (and any file which missed that change will fail when compiling on the CI).
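A sketch of what such a migration script looks like (deliberately oversimplified: the real javax-to-jakarta move only covers the Java EE packages, not JDK ones like javax.swing, and the package list here is partial):

```python
import re

# Partial list of Java EE packages that moved to the jakarta namespace.
# JDK packages such as javax.swing or javax.sql must NOT be rewritten.
EE_PACKAGES = ("servlet", "persistence", "ws", "annotation", "validation")

_PATTERN = re.compile(
    r"^import javax\.(%s)\." % "|".join(EE_PACKAGES), re.MULTILINE)

def migrate_imports(source: str) -> str:
    """Rewrite EE imports from javax.* to jakarta.* in one Java file."""
    return _PATTERN.sub(lambda m: "import jakarta.%s." % m.group(1), source)
```

Any file the script missed fails at compile time on CI, which is exactly why this kind of thousands-of-files PR is easy to review despite its size.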
But CSS has bit me with heavy pages (causing a few seconds of lag that even devtools debugging/logging didn't point towards). We know wildcard selectors can impact performance, but in my case there were many open-ended selectors like `:not(.what) .ever`, where the `:not()` not being attached to anything made it act like a wildcard with conditions. Using `:has()` will do the same with additional overhead. Safari was the worst at handling large pages and these types of selectors, and I noticed more sluggishness 2-3 years ago.
Normally, you should be able to debug selector-matching performance (and, in general, see how much style computation costs you), so it's a bit weird that you have phantom multi-second delays.
I actually have been trying to figure out how to get my React application (unreleased) to perform less laggy in Safari than it does in Firefox/Chrome, and it seems like it is related to all the damn DOM elements. This sucks. Virtualizing viewports adds loads of complexity and breaks some built-in browser features, so I generally prefer not to do it. But, at least in my case, Safari seems to struggle with doing certain layout operations with a shit load of elements more than Chrome and Firefox do.
You certainly can build slow apps with React, it doesn't make building slow things that hard.
By all means. It sometimes feels like React is more the symptom than the actual issue, though.
Personally I generally just like having less code; generally makes for fewer footguns. But that's an incredibly hard sell in general (and of course not the entire story).
It's just easier to blame the tools (or companies!) you already hate.
But there is also the Safari Technology Preview, which installs as a separate app, but is also a bit more unstable. Similar to Chrome Canary.
If you put a lot of momentum behind a product with that mentality, you get features piled on tech debt. No one gets enthusiastic about paying that down, because it was done by some prior team you have no understanding of, and it gets in the way of what management wants, which is more features so they can get bonuses.
Speaking up about it gets you shouted down and thrown on a performance improvement plan because you aren't aligned with your capitalist masters.
If a developer has to put up a fight in order to push back against the irresponsibility of a non-technical person, they by definition don't have ownership.
- Project managers putting constant pressure on developers to deliver as fast as possible. It doesn't even matter if velocity will be lost in the future, or if the company might lose customers, or even if it breaks the law.
- Developers pushing back on things that can backfire and burning political capital and causing constant burnout. And when things DO backfire, the developer is to blame for letting it happen and not having pushed it more in the first place.
- Developers who learned that the only way to win is by not giving a single fuck, and just trucking on through the tasks without much thought.
This might sound highly cynical, but unfortunately this is what it has become.
Developers are way too isolated from the end result, and accountability is non-existent for PMs who isolate devs from the result, because "isolating developers" is seen as their only job.
EDIT: This is a cultural problem that can't be solved by individual contributors or by middle management without raising hell and putting a target on their backs. Only cultural change enforced by C-Levels is able to change this, but this is *not* in the interest of most CEOs or CTOs.
But I guess the problem is that every single development position has been converging into this.
The only times in my career as a developer where I was 100% happy was when there was no career PM. Sales, customers, end-users, an engineering manager, another manager, a business owner, a random employee, some rando right out the street... All of those were way better product owners than career PMs in my 25 years of experience.
This is not exactly about the competence of the category; it's just about what fits and what doesn't. Software development ONLY works when there is a balance of power. PMs have leverage that developers rarely have.
I come from Electrical Engineering. Engineering requires responsibility, but responsibility requires the ability to say "no". PMs, when part of a multi-disciplinary team, make this borderline impossible, and make "being an engineer" synonymous with putting a target on your back.
It's these professional PMs who have done nothing other than project management or PMP certification, and who don't understand the long-term dev cost of features, that cause these systemic issues.
IMO "Knowing enough to do damage" is the worst possible situation.
A regular user who's a domain expert is 100x a better PO.
I'm still a big believer in "separation of powers" a la Scrum.
There should be a "Product Owner", who can be anyone really, and on the other side there is a self-managed development team that doesn't include this participant. This gives the team leverage to do things their way and act as a real engineering team.
The reason scrum was killed is because of PMs trying to get themselves into those teams and hijacking the process. Developers hated "PM-based scrum", which is not really scrum at all.
Today's version is: "You will get fired unless you use React".
So every site now uses React no matter if the end result is a dog slow Github.
Bad developers look at "what is everybody else using?"
Good developers look at "what is the best and simplest (KISS) tool for this?"
> Good developers look at "what is the best and simplest (KISS) tool for this?"
Good ol’ SSR - but eventually users and PMs start requesting features that can only be implemented with an SPA system, and I (begrudgingly) accept their arguments.
In my role (of many) as technical architect for my org, and as an act of resistance (and possibly to intentionally sabotage LLMs taking over), I opted for hybrid SSR + Svelte - it’s working well for us.
We had/have a similar problem, where things began with "a sprinkle of js here/there" and then over time those islands became much bigger and encompassed more and more functionality. Entire backend templates were ported to the JS framework, and then the page would load and stuff would pop in after the DOMReady event fired and the JS booted.
I've been working backwards to remove many of these changes and handle them server side if possible or at least give a better UX while the frontend is getting ready. It's not easy!
In a perfect world, we could run the output of the PHP backend through a JS SSR endpoint and hydrate the few necessary components into full HTML, but unfortunately, many of today's JS SSR tools are only available if you use the meta framework as well.
What's going to be fun over the next year is finally deciding if we should go "all-in" on a JS frontend (using Inertia.js for communication with the backend) or go back to PHP entirely and try to leverage more browser capabilities. There's not really a right/wrong answer, but if marketing wants to keep adding flashy features, having the flexibility of JS would be handy.
Don't listen to the opinions of the developers writing this code. Listen to the opinions of the people making these tech stack decisions.
Everything else is a distant second, which is why you get shitty performance and developers who cannot measure things. It also explains why, when you ask the developers about any of this, you get bizarre cognitive complexity for answers. The developers, in most cases, know what they need to do to be hired and cannot work outside those lanes, yet they simultaneously have an awareness of various limitations of what they release. They know the result is slow, likely has accessibility problems, scales poorly, and so on, but their primary concern is retaining employment.
The short answer is: no, they don't. Google Cloud relied upon some Googlers happening to be Firefox users. We definitely didn't have a "machine farm" of computers running relevant OS and browser versions to test the UI against (that exists in Google for some teams and some projects, but it's not an "every project must have one" kind of resource). When a major performance regression was introduced (in Firefox only) in a UI my team was responsible for once, we had a ticket filed that was about as low-priority as you can file a ticket. The solution? Mozilla patched their rendering engine two minor versions later and the problem went away.
I put more than zero effort into fixing it, but tl;dr I had to chase the problem all the way to debugging the browser rendering engine itself via a build-from-source, and since nobody had set one of those up for the team and it was the first time I was doing it myself, I didn't get very far; Google's own in-house security got in the way of installing the relevant components to make it happen, I had to understand how to build Firefox from source in the first place, my personal machine was slow for the task (most of Google's builds are farm-based; compilation happens on servers and is cached, not on local machines).
I simply ran out of time; Mozilla fixed the issue before I could. And, absolutely, I don't expect it would have been promotion-notable that I'd pursued the issue (especially since the solution of "procrastinating until the other company fixes it" would have cost the company 0 eng-hours).
I can't speak for GitHub / Microsoft, but Google nominally supports the N (I think N=2) most recent browser versions for Safari, Edge, Chrome, Firefox, but "supports" can, indeed, mean "if Firefox pushes a change that breaks our UI... Well, you've got three other browsers you could use instead. At least." And of course, yes, issues with Chrome performance end up high priority because they interfere with the average in-house developer experience.
Does anyone have concrete information?
[1] https://yoyo-code.com/why-is-github-ui-getting-so-much-slowe...
https://chromewebstore.google.com/detail/make-github-great-a...
Clean code argues that instead of total rewrites you should focus on gradual improvements: refactor code so that over time you reap the dividends, without re-living all the bugs you lived through five years ago whose resolutions you don't recall. On every rewrite project I've ever worked on, we ran into bugs that we, or the team before me, had already fixed years prior.
There are times when a total rewrite might be the best and only option, such as deprecated platforms (think Visual Basic 6 apps that will never get threading).
What frustrates me more is that GitHub used to be open to browse, and the search worked. Now, in their effort to force you to make an account (I HAVE LIKE TEN ALREADY) and log in, they include a few "dark patterns" where parts of search don't work at all.
I don’t know if that’s a good or realistic rule for most projects, but I imagine for performant types of applications, that’s exactly what it takes to prevent eventual slowdown.
If you actually load up a ~2015 version of Jira on today’s hardware it’s basically instant.
It was being hosted on another continent. It was written in PHP. It was rendering server-side with just some light JS on my end.
That used to be the norm.
It's really hard to fight the trend especially in larger orgs.
A lot of the time we just break the branch permissions on the repo we are using and run release branches without PRs and ignore the entire web interface.
publicly disseminate information regarding the performance of the Cloud Products
https://web.archive.org/web/20210624221204/https://www.atlas...
My CPU goes to 100% and fans roaring every time I load the dashboard and transactions. I can barely click on customers/subscriptions/etc. I can't be the only one...
Sourcehut is basically a really barebones web interface for a git server, so I don't think it's really comparable to GitHub.
For hosting your own projects that's sometimes not a viable solution either. Limiting your open source project to platforms other than GitHub hurts its discoverability, because GitHub is usually what most devs and non-devs associate with open source. I've heard a lot of "It's not open source if it's not on GitHub". You can mirror your project to GH, of course.
"Just migrate to X because it's faster" doesn't work that well in the real world
Then some charlatan thought to embrace the React hype and it became terrible to say the least.
Old GitHub was very light on features, whereas the new UIs are way more curated on the surface.
Unfortunately all of this brings in tons of complexity. It doesn't help that there are a lot of junior developers working on it, clearly.
I see loading spinners everywhere, and even page transitions take ages compared to before.
I am not sure what metric they are using to justify ditching the perfectly working SSR they had before.
From a random site: navigate to a GitHub repo, navigate to a file in the repo, hit back, and I'm on the random site; hit forward, and I'm on the file.
So annoying.
One of a large handful of issues I've encountered post-React conversion.
Any time I click a GitHub link, if I navigate beyond the readme, then my history is completely borked. Going “back” one page might go to the readme, might go back to HN, or might even go back to the readme and then back to the page I was trying to leave!
It’s infuriating and I always figured it was a bug they’d fix eventually but it’s been at least two years of this crap.
The solution is a test that fails when Chrome and Safari have substantially different render times.
The solution is a test that fails when Chrome and Safari have substantially different render times.
That test will be disabled for being flaky in under a week because the CI runners have contention with other jobs, causing them to randomly be slower and flake, and the frontend team does not want to waste time investigating flakes.
"Just have dedicated runners with guaranteed CPU performance", but that's the CI platform team's issue, the frontend and testing teams can't fix it, and the CI infra team won't prioritize it for a minimum of 5 years.
Good to know others are feeling it too; hopefully it can get resolved soon. In the meantime, I'll try my PR reviews on FF.
Update: Just tested my big PR (+8,661, -1,657) on FF and it worked like a charm!
You really can't escape the enshittification.
I have an ever-growing directory listing using SolidJS, and it's up to about 25,000 items. Safari on macOS and iOS handled it well two major versions ago. After the last major update, my phone rendered it faster than an M1 MacBook Pro.
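For lists of that size, the usual mitigation regardless of framework or browser is windowing: only mount the rows that intersect the viewport. A minimal sketch of the slice computation, assuming a fixed row height (the helper is hypothetical, not from the comment above):

```typescript
// Given scroll position, fixed row height, and viewport height, return
// the index range of rows to actually mount, plus a few overscan rows
// to avoid blank flashes while scrolling. `end` is exclusive.
function visibleRange(
  scrollTop: number,
  rowHeight: number,
  viewportHeight: number,
  total: number,
  overscan = 5
): { start: number; end: number } {
  const first = Math.floor(scrollTop / rowHeight);
  const count = Math.ceil(viewportHeight / rowHeight);
  return {
    start: Math.max(0, first - overscan),
    end: Math.min(total, first + count + overscan),
  };
}
```

A 25,000-item list then only ever mounts a few dozen DOM nodes, which keeps render cost independent of list length.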
Slow as hell and the Safari search function stopped working. I loaded the same url on Firefox and it was insta-fast.
It took The Cloud to make single-digit-second operations on a local Raspberry Pi 2 and home Internet take a few minutes.
What a time to be alive.