Hidden interface controls that affect usability
Because it can be trivially duplicated, this is minimally capable engineering. Yet automakers everywhere lack even this level of competence. By any reasonable measure, they are poor at their job.
Every control in the car is visible
No. And that would be horrible.
Every control _critically needed while driving_ is visible and accessible. Controls that matter less can be smaller and more convoluted, or hidden outright.
The levers to adjust seat height and position are hidden while still accessible. The latch to open the car hood can (should?) be less accessible and harder to find.
There are a myriad of subtle and opinionated choices to make the interface efficient. There's nothing trivial or really "simple" about that design process, and IMHO brushing over that is part of what leads us to the current situation where car makers just ignore these considerations.
They want to go to war with a simple design? Sure, they get a Lee-Enfield bolt action rifle. They fight against people with EF88 Austeyr.
They will die and lose the war.
Longer range, higher rate of fire, lighter, grenade launcher mount, scopes, more accurate, higher lethality, etc, etc.
A simple design that doesn't jam doesn't mean you've maximised all the other areas that are important for winning a war.
Also, bolt actions aren't exactly the definition of simple.
A bolt action is simple compared to any semi-automatic gun. Particularly if it is a single shot bolt action thus dispensing with the complexity of a magazine feed.
Technically, you never see "all" actions - you only see the actions that make sense for the selected units. However, because there is a predictable place where the actions will show up, and because you know those are all the actions that are there, it never feels confusing.
On the contrary, it lets you quickly learn what the different skills are for each unit.
There is also a "default" action that will happen when you right-click somewhere on the map. What this default action will do is highly context specific and irregular: e.g. right-clicking on an enemy unit will trigger an attack order, but only if your selected unit actually has a matching weapon, otherwise it will trigger a move order. Right-clicking a resource item will issue a "mine" order, but only if you have selected a worker, etc etc.
Instead of trying to teach you all those rules, or to let you guess what the action is doing, the UI has two simple rules:
- How the default action is chosen may be complicated, but it will always be one of the actions from the grid.
- If a unit is following an action, that action will be highlighted in the grid.
This means the grid doubles as a status display to show you not just what the unit could do but also what it is currently doing. It also lets you learn the specifics of the default action by yourself, because if you right-click somewhere, the grid will highlight the action that was issued.
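In code, those two rules might look something like this. This is a toy sketch, not how StarCraft is actually implemented: the unit data, action names, and target-matching logic are all invented for illustration.

```python
# Toy sketch of the two grid rules described above (all names invented).

def default_action(unit, target):
    """Rule 1: however the default is chosen, it is always one of the
    actions already shown in the unit's grid."""
    if target == "enemy" and "attack" in unit["grid"]:
        action = "attack"
    elif target == "resource" and "mine" in unit["grid"]:
        action = "mine"
    else:
        action = "move"
    assert action in unit["grid"]  # the invariant the UI guarantees
    return action

def grid_display(unit, current_action):
    """Rule 2: the action a unit is following is highlighted in the grid,
    so the grid doubles as a status display."""
    return [(action, action == current_action) for action in unit["grid"]]
```

Note how a right-click on an enemy falls through to "move" for a unit with no attack in its grid, which matches the context-specific behavior described above while still respecting rule 1.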
The irony is that in the actual game, you almost always use the default action and very rarely actually click the buttons in the grid. But I think the grid is still essential for those reasons: As a status display and to let you give an order explicitly if the default isn't doing what you want it to do.
The counterexample would be the C&C games: The UI there only has the right-click mechanic, without any buttons, with CTRL and ALT as modifier keys if you want to give different orders. But you're much more on your own to memorize what combination of CTRL, ALT, selected unit, target unit and click will issue which order.
(OK, just for fairness: StarCraft also has hidden features that are only reachable through modifier keys, like the entire grouping and command chaining systems - and C&C does have some feedback: They do indicate the action by changing the cursor icon. So there are flaws, but I still find Blizzard's system more consistent and information-rich.)
While it's technically hidden, it can consistently be called within a single swipe, whatever app you're using, whatever the circumstances. The icon positions are also consistent, to the point that muscle memory can be built.
To me it's more reliable than the home screen or any other mechanism on the phone (Android's double-click to open the camera would be on par); I wouldn't mind if more stuff acted that way.
I would gladly gladly keep my AC, heat, hazards, blinkers, wipers, maybe a few other buttons and that's it. I don't need back cameras, lane assist, etc.
I find it hard to believe it's cheaper to have all the cameras, chips, and other digital affordances rather than a small number of analog buttons and functions.
With the land tanks we call SUVs today, I can imagine it wasn't hard for politicians to decide that mirrors are no longer enough to navigate a car backwards.
Still, you don't need touch screens. Lane assist can be a little indicator on a dashboard with a toggle somewhere if you want to turn it off, it doesn't need a menu. A backup camera can be a screen tucked away in the dash that's off unless you've put your car in reverse. We may need processing to happen somewhere, but it doesn't need to happen in a media console with a touch screen.
I would gladly gladly keep my AC, heat, hazards, blinkers, wipers, maybe a few other buttons and that's it. I don't need back cameras, lane assist, etc.
I would pay more for decent physical switches and knobs, but I would give up AC before the backup camera. Getting this was life changing. I also wish all cars had some kind of blind spot monitoring.
If I had to guess, it’s because it’s so closely associated with the awful to use touch controlled center console. That and “new features” in general tend to take away from the ease of use and durability of the vehicle.
It may also have to do with now having an additional place to look during a stressful activity, which I’ve now fully adapted to.
I’m 100% on board with it now, if I had a vehicle without one I’d retrofit one. I also want side and front cameras.
I’ve got a big stupid truck (work provided) with a 140” wheelbase that I use for my agriculture job to transport my ATV (my real work vehicle) around. I absolutely hate the bloated, boxy, dangerous designs of modern pickups. Frankly they should be banned and forced to look stupid via visibility and child collision safety requirements.
I find it hard to believe it's cheaper to have all the cameras, chips, and other digital affordances rather than a small number of analog buttons and functions.
You should check how SW and HW are tested in the car.
A typical requirement is: the SW must drive a motor if the voltage reaches 5 V. A typical SW test is: increase the voltage to 5 V, see that the motor moves.
Now what happens at 20 V is left as an exercise for the user.
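To make the point concrete, here's a toy sketch of that style of testing — the function and names are invented, not from any real automotive codebase:

```python
# Invented example: a controller that drives the motor once the supply
# reaches 5 V, plus the kind of requirement-driven test that typically ships.

def motor_driven(voltage: float) -> bool:
    """Drive the motor when the supply voltage reaches 5 V."""
    return voltage >= 5.0

def test_requirement():
    # The typical test: raise the voltage to 5 V, see that the motor moves.
    assert motor_driven(5.0)

def test_overvoltage():
    # The test that is usually missing: what happens at 20 V? Here the
    # naive implementation happily drives the motor far outside any
    # plausible operating range -- and no requirement says otherwise.
    assert motor_driven(20.0)  # passes, but probably shouldn't
```

The requirement is satisfied, the test suite is green, and the out-of-range behavior was never specified or checked.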
Compare this to the databus that is used in today's cars, it really isn't even a fair comparison on cost (you don't have to have 100 wires running through different places in your car, just one bus to 100 things and signal is separated from power).
I'm pretty sure that simple switch is something directly in the circuit for the fog light, and there is a dedicated wire between the fog light, the switch, and the fuse box. And if it's an old Jag, those wires flake out and have to be redone at great expense.
I don't really want to get into a big debate about this as I haven't worked on Jags, but I don't believe that replacing parts of the loom would be that expensive. Remaking an entire loom, I will admit, would be expensive, as that would be a custom job with a lot of labour.
Compare this to the databus that is used in today's cars, it really isn't even a fair comparison on cost (you don't have to have 100 wires running through different places in your car, just one bus to 100 things and signal is separated from power).
Ok fine. But the discussion was button vs touch screens and there is nothing preventing buttons being used with the newer databus design. I am pretty sure older BMWs, Mercs etc worked this way.
In any event. I've never heard a good explanation of why I need all of this to turn the lights on or off in a car, when much simpler systems worked perfectly fine.
Reducing the copper content of cars and reducing the size of the wiring bundles that have to pass through grommets to doors, in body channels, etc. was the main driver. Offering greater interconnectedness and (eventually) reliability was a nice side effect.
It used to be a pain in the ass to get the parking lights to flash some kind of feedback for remote locking, remote start, etc. Now, it’s two signals on the CAN bus.
That sounds like an incredible bargain to me.
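For illustration, here is roughly what "two signals on the CAN bus" could look like as frames. The arbitration ID, command code, and payload layout below are invented — real ones are manufacturer-specific and generally not public.

```python
import struct

# Hypothetical sketch of a "flash the parking lights" request as a CAN
# payload. The ID and byte layout are invented for illustration only.

BODY_CONTROL_ID = 0x3B0      # invented arbitration ID for the body controller
CMD_FLASH_LIGHTS = 0x02      # invented command code

def flash_parking_lights_frame(flash_on: bool) -> bytes:
    """Build an 8-byte payload: byte 0 = command, byte 1 = on/off,
    remaining bytes padded to the classic CAN data length of 8."""
    return struct.pack("8B", CMD_FLASH_LIGHTS, int(flash_on), 0, 0, 0, 0, 0, 0)

# Remote-lock feedback becomes two such messages on the shared bus,
# instead of a dedicated wire run to the lamps:
lock_ack = [flash_parking_lights_frame(True), flash_parking_lights_frame(False)]
```

The point is architectural: any module already on the bus can request the flash, with no extra copper pulled through grommets.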
Why do you think you should pay near cost? What’s the incentive for all the people who had to make, test, box, pack, move, finance, unpack, inventory, pick, box, label, and send it to you? I can’t imagine a price between £10 and free that you wouldn’t think was a rip-off, for a part that probably sells well under 100 units per year worldwide.
As for it being a bit of a rip off yes it was a little bit. I found the same part for cheaper literally the next day.
In any event, it isn't the important part of what I was trying to communicate.
(another reason was because it still has a geared transmission instead of a CVT, but that's a separate discussion)
A friend got a Tesla on lease and it was quite cheap, 250/month. I've been driven in that car a few times and was able to study the driver using the controls, and it's hideously badly designed: the driver has to take their eyes off the road and deep-dive into menus. Plus that slapped-on tablet in the middle is busy to look at, tiring and distracting. The 3D view of other cars/pedestrians is a gimmick, or at least it looks like one to me. Does anyone actually like that? Perhaps I'm outdated or something, but I wouldn't accept such bad UX in a car.
In practice many drivers seem to be dealing fine with the touch screen because they've stopped paying attention to the road, trusting their car to keep distance and pay attention for them. Plus, most of the touch screen controls aren't strictly necessary while driving, they mostly control luxury features that you could set up after pulling over.
If you exclusively charged with completely free electricity and still managed to drive that 14K miles in a year, you’d save $187/mo.
If it moved you from 25 mpg to 40 mpge, it’d save you a little over $70/mo.
Our two cars are a BEV and a hybrid, so I’m no battery-hater, but neither is cheaper than a reasonable gas-only equivalent would be.
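A quick sanity check of those numbers — the 14K miles and the mpg figures are from the comment above, but the ~$4/gal gas price is my assumption:

```python
# Verify the claimed monthly savings, assuming ~$4/gal gasoline.

MILES_PER_YEAR = 14_000
GAS_PRICE = 4.00  # $/gal, assumed

def monthly_fuel_cost(mpg: float) -> float:
    """Monthly fuel spend for a given fuel economy."""
    return MILES_PER_YEAR / mpg * GAS_PRICE / 12

# Free electricity: you save the entire 25 mpg fuel bill.
savings_vs_free = monthly_fuel_cost(25)                              # ~ $187/mo
# Moving from a 25 mpg car to a 40 mpge one:
savings_vs_hybrid = monthly_fuel_cost(25) - monthly_fuel_cost(40)    # ~ $70/mo
```

Both figures line up with the comment, so the arithmetic holds at that gas price; a different price scales both savings proportionally.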
Still, cars don't last forever - my previous minivan needed a transmission rebuild, so we can cut the cost of the replacement by 10,000 since either way that money is spent; now the newer van is break-even on payments, and it should still work for a few years after it's paid off.
It's cost, not competence.
This implies it's a consequential cost. Building with tactile controls would take the (already considerable) purchase price and boost that high enough to impact sales.
If tactile controls were a meaningful cost difference, then budget cars with tactile controls shouldn't be common - in any market.
It's not just cost, though. The reality is that consumers like the futuristic look, in theory (i.e., at the time of the purchase). Knobs look dated. It's the same reason why ridiculously glossy laptop screens were commonplace. They weren't cheaper to make, they just looked cool.
knobs look dated
Not all. Knobs designed with dated designs and/or materials look dated. There's a million ways to make a knob, just use a modern or novel one.
It is the job (and in my opinion, an exciting challenge) for the UI designers to come up with a modern looking tactile design based on the principles of skeuomorphism, possibly amalgamated with the results of newer HCI research.
Most of the cost savings are in having a single bus to wire up through the car; then everything needs a little computer in it to send on that bus... so a screen wins out.
It allows UI designers to add nearly endless settings and controls where they were before limited by dash space
Except, they don't do it.
Just like your Windows PC is capable of drawing a raised or sunken 3D button, or a scrollbar, but they don't do it anymore.
My previous one lasted more than 20 years, from when my parents bought it for me when I went to study until some time in my 40s. It was still functional, but its dial had become loose and it didn't look that great anymore.
The one I bought after that follows the new pattern, it has buttons up the wazoo and who even knows what they do? To be honest I just need one power setting with a time and maybe a defrost option?
At first it was a bit annoying because frozen meals sometimes want you to run it at lower power and this microwave has no power setting. If that's a problem, I imagine there's some other similar model that does. But in practice, just running it at full power for shorter seems to work just as well.
It would look much nicer if it didn't have a cooking guide printed on it.
In Europe, I saw some consumer-grade microwaves with similarly minimalist designs, like these Gorenje microwaves[2] with two dials. I'd have gotten one of those, but I couldn't easily find them in the US. But I also did not look especially hard.
[1] https://www.amazon.com/dp/B00ZTVIPZ2?ref_=ppx_hzsearch_conn_...
[2] https://international.gorenje.com/products/cooking-and-bakin...
Most microwaves only have the magnetron (the part actually producing the microwaves) on one side. The rotation is needed to cook your food evenly.
This is why food in the middle of the tray often ends up undercooked. No matter how the tray rotates, that part is never particularly close to it.
For a visual aid, these are pictures of the replacement parts: https://www.partstown.com/panasonic/PANA010T8K10AP https://www.partstown.com/panasonic/PANF202K3700BP
I stab a potato and cover it in butter and salt, put it on a plate, press "potato" and it's cooked just perfect every time. Doesn't matter if it's big or small, it's just right.
When I have a plate of leftovers I just press reheat and it's perfect pretty much every time. Could be pork chops and Mac and cheese, could be a spaghetti with marinara sauce, could be whatever. Toss it in, lightly cover, press reheat, and it's good.
When I want to quickly thaw out some ground beef or ground sausage, I just toss it in, press defrost, put in a weight to a tenth of a pound, and it's defrosted without really being cooked yet.
Back when I microwaved popcorn, just pressing the popcorn button was spot on. Didn't matter what the bag size was, didn't matter the brand, the bag was always pretty much fully popped and not burned.
Despite being the same age it's still in excellent working order while yours with the dials fell apart.
a microwave only really needs one (and ideally it's just a dial instead of a button).
The 1967 Amana Radarange (https://media.npr.org/assets/img/2017/08/28/microwave_custom...) had two dials: short duration under 5 minutes and a long duration out to something like 30 minutes.
My parents still have theirs. It needs some resto love, but it’s still fully functional. I’ve already put my foot down in terms of who’s inheriting it.
Power, time, start, stop.
It turns out that, luckily, one like that is made: the Y4ZM25MMK. Also, as a bonus, no clock.
That said, I realized only very late that the function dial actually has a marker to show which function it selects. An extremely shallow colorless groove.
Because it can be trivially duplicated
While I agree with your sentiment, designing and manufacturing custom molds for each knob and function (including premium versions) instead of just slapping a screen on the dash does have a cost.

Why is this so expensive it can't even be put into a premium car today when it used to be ubiquitous in even the cheapest hardware a few decades ago?
Basically, if you remove the knobs you can save, say, 10 dollars on every vehicle. In return, you have made your car less attractive and will lose a small number of sales. You will never, ever be able to quantify that loss in sales. So, on paper, you've saved money for "free".
Typically, opportunity cost is impossible or close to impossible to measure. What these companies think they are doing is minimizing cost. Often, they are just maximizing opportunity cost of various decisions. Everyone is trying to subtly cut quality over time.
Going from A quality to B quality is pretty safe, it's likely close to zero consumers will notice. But then you say "well we went from A to B and nobody noticed, so nobody will notice B to C!". So you do it again. Then over and over. And, eventually, you go from a brand known for quality to cheap bargain-bin garbage. And it happened so slowly that leadership is left scratching their heads. Sometimes the company then implodes spontaneously, other times it slowly rots and loses to competitors. It's so common it feels almost inevitable.
Really, most companies don't have to do much to stay successful. For a lot of markets, they just have to keep doing what they're doing. Ah, but the taste of cost-cutting is much too seductive. They do not understand what they are risking.
Basically, if you remove the knobs you can save, say, 10 dollars on every vehicle. In return, you have made your car less attractive and will lose a small number of sales.
Is there evidence that fancy-looking screens don't show better in the showroom than legacy-looking knobs and buttons? While in use the knobs may be better, I am not sure they sell better.
All I know is personal anecdotes from people I talk to. I know a couple people who have a Mercedes EQS - they've all said the same thing: the big screen is cool for a little bit, then it's just annoying.
I think it will take a generation or two of cars before some consumers start holding back on purchases because of this. For now, they don't know better. But I'm sure after owning a car and being pissed off at it, they'll think a little bit harder on their next purchase. I think consumers are highly impacted by these types of things - small cuts that aren't bad, per se, but are annoying. Consumers are emotional, they hold grudges, they get pissed off.
I sort of feel the same way about fix-a-flat kits. Once people actually have the experience of trying to use a fix-a-flat kit, they'll start asking car salesmen if the car comes with a spare...
designing and manufacturing custom molds for each knob and function ... dash does have a cost.
Manufacturing car components already involves designing and custom molds, does it not? Compared to the final purchase price, the cost of adding knobs to that stack seems inconsequential.
Your average transmission will have an order of magnitude more parts that also needed to be designed and produced with much higher precision.
The interior knob controls are just a rounding error in the cost structure.
His perspective was that companies were "run" by engineers first, then a few decades later by managers, and then by marketing.
Who knows what's next, maybe nothing (as in all decisions are accidentally made by AI because everyone at all levels just asks AI). Could be better than our current marketing-driven universe.
Toyota 4Runner, Toyota Tacoma, Jeep Wrangler, Nissan Frontier, Ford Maverick, Ford Bronco, Jeep Gladiator, Mazda MX-5 Miata
I wonder what kind of cars you guys drive.
Stranger still, if someone comes up with an idea of how to improve that thing that sucks, frequently the reaction is very negative. Sadly, the whole thing more and more gets into “old man yelling at the cloud” territory.
I don't think you can make this assertion without knowing what they were tasked with doing. I very much doubt they were tasked with making the most user friendly cockpit possible. I suspect they were required to minimize moving parts (like switches and buttons) and to enable things like Sirius, iPhone and Android integration, etc.
So guess what Mr.Auto Manufacturer, you can keep your hifi $30K-70K touchscreen surveillance machine on your lot. I'll keep driving my 20+ year old Corolla until you learn to do better.
It’s a race to the bottom to be the least enshittified versus your market competitors. Usability takes a backseat to porcine beauty productization.
I have no idea why some interfaces hide elements and leave the space they'd taken up unused.
IntelliJ does this, for example, with the icons above the project tree. There is this little target disc that moves the selection in the project tree to the file currently open in the active editor tab. You have to know the secret spot on the screen where it is hidden and if you move your mouse pointer to the void there, it magically appears.
Why? What is the rationale behind going out of your way to implement something like this?
This stupidity seems to have spread across Windows. No title bars or menus... now you can't tell what application a Window belongs to.
And you can't even bring all of an application's windows to the foreground... Microsoft makes you hover of it in the task bar and choose between indiscernible thumbnails, one at a time. WTF? If you have two Explorer windows open to copy stuff, then switch to other apps to work during the copy... you can't give focus back to Explorer and see the two windows again. You have to hover, click on a thumbnail. Now go back and hover, and click on a thumbnail... hopefully not the same one, because of course you can't tell WTF the difference between two lists of files is in a thumbnail.
And Word... the Word UI is now a clinic on abject usability failure. They have a menu bar... except WAIT! Microsoft and some users claim that those are TABS... except that it's just a row of words, looking exactly like a menu.
So now there's NO menu and no actual tabs... just a row of words. And if you go under the File "menu" (yes, File), there are a bunch of VIEW settings. And in there you can add and remove these so-called "tabs," and when you do remove one, the functionality disappears from the entire application. You're not just customizing the toolbar; you're actually disabling entire swaths of features from the application.
It's an absolute shitshow of grotesque incompetence, in a once-great product. No amount of derision for this steaming pile is too much.
Windows and Unix GUIs had it right: Put an application's menu where it belongs, on the application's main frame.
But now on Windows... NO menu? Oh wait, no... partial menus buried under hamburger buttons in arbitrary locations, and then others buried under other buttons.
All you have to do to get to it is move your mouse up until you can't move it up any more.
This remains a very valuable aspect to it no matter what changes in the vogue of UIs have come and gone since.
The fact that you think that you've "minimized the application" when you minimized a window just shows that you are operating on a different (not better, not worse, just different) philosophy of how applications work than the macOS designers are.
The actual historical rationale for the top menu bar was different, as explained by Bill Atkinson in this video: https://news.ycombinator.com/item?id=44338182. The problem was that due to the small screen size, non-maximized windows often weren't wide enough to show all menus, and there often wasn't enough space vertically below the window's menu bar to show all menu items. That's why they moved the menus to the top of the screen, so that there always was enough space, and despite the drawback, as Atkinson notes, of having to move the mouse all the way to the top. This drawback was significant enough that it made them implement mouse pointer acceleration to compensate.
So targetability wasn't the motivation at all, that is a retconned explanation. And the actual motivation doesn't apply anymore on today's large and high-resolution screens.
With desktop monitor sizes since 20+ years ago, the distance you have to travel, together with the visual disconnect between application and the menu bar, negates the easier targetability.
Try it on a Mac; the way its mouse acceleration works makes it really, really easy to just flick either a mouse or a finger on a trackpad and get all the way across the screen.
Another side effect is the uselessness of the Help menu. What help am I looking at? The application owns the menu, so where's the OS help?
Oh right, it's just all mixed together. When I'm searching for information in some developer tool I'm using, I really enjoy all the arbitrary hits from the OS help about setting up printers, sending E-mail, whatever.
And we're talking about a GUI here, so when I minimize an application's GUI then yes, I expect that I've minimized the application. And again, I think you'll find that the vast majority of users work under this M.O.
But your observation raises another usability issue caused by the single menu: Instead of an "infinite" desktop, the Mac reduces the entire screen to a single application's client area... so, historically, Mac applications treated it that way...littering it with an armada of floating windows that you had to herd around.
The problem is that turning the whole screen into one application's client area fails because you can see all the other crap on your desktop and all other open applications' GUIs THROUGH the UI of the app you're trying to use. It's stupid.
So, to users' relief, the floating-window nonsense has been almost entirely abandoned over the last couple of decades and single-window applications have become the norm on Mac as they have been on Windows forever. Oh wait, hold on... here comes Apple regressing back to "transparent" UI with "liquid glass;" a failed idea from 20+ years ago.
Full circle, sadly.
But yeah... now I'm relieved when I go home from work and get back on my Mac. I waste so much time hunting for stuff on Windows now... it's just incredible.
Pompous pedants used to trot out "Fitts' Law" in defense of the Mac's dumb menu all the time, when in fact it contra-indicates it:
Fitts’ law states that the amount of time required for a person to move a pointer (e.g., mouse cursor) to a target area is a function of the distance to the target divided by the size of the target. Thus, the longer the distance and the smaller the target’s size, the longer it takes.
Right, so where should an application's menu go? ON ITS WINDOW. Not way up at the top of the screen. It's as if the people citing this "law" don't even read it.
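For what it's worth, the law as quoted can be computed either way. The index of difficulty is ID = log2(1 + D/W); the distances and widths below are illustrative guesses, not measurements, and the last line shows the usual counter-argument for the screen edge (the cursor stops at the edge, so the effective target depth is much larger than the menu's drawn height):

```python
import math

# Fitts' index of difficulty for two menu placements (illustrative numbers).

def fitts_id(distance: float, width: float) -> float:
    """ID = log2(1 + D/W): higher means slower to hit."""
    return math.log2(1 + distance / width)

# Menu on the window: close, but a small fixed-height target.
on_window = fitts_id(distance=200, width=20)

# Menu at the top of the screen, treated as the same small target: worse.
at_edge_naive = fitts_id(distance=800, width=20)

# Same menu, but crediting the edge with absorbing overshoot, so the
# effective target is very deep: better than the on-window menu.
at_edge_effective = fitts_id(distance=800, width=500)
```

So the law alone doesn't settle the argument; it depends entirely on whether you grant the screen edge its "infinite depth".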
I hate when applications stuff other controls (like browser tabs) into the title bar --- leaving you with no place to grab and move the window.
The irony is that we had title bars when monitors were only 640x480, yet now that they have multiplied many times in resolution, and become much bigger, UIs are somehow using the excuse of "saving space" to remove title bars and introducing even more useless whitespace.
I don't need to know that what I'm using is Edge/Chrome/Firefox any more than I need to know that what I'm using is Windows/etc.
now that they have multiplied many times in resolution
Did they though? Quite a few laptops barely have 720 pixels of (scaled) height. That's less than your CRT with 1024x768, back in the day.
Second, I want to give focus to the entire application at once. ALL of its windows need to be brought to the foreground at once.
macOS works like this though, IIRC, and no other way.
This stupidity seems to have spread across Windows. No title bars or menus... now you can't tell what application a Window belongs to.
I disable the title bars on almost everything I use. Except some custom applications that resist such attempts. I do not give a rat's ass what is open, it's already immediately obvious. Just wasting valuable screen real-estate.
Some people are like airliner pilots. They enjoy every indicator to be readily visible, and every control to be easily within reach. They can effortlessly switch their focus.
Of course, there is a full range between these extremes.
The default IDE configuration has to do a balancing act, trying to appeal to very different tastes. It's inevitably a compromise.
Some tools have explicit switches: "no distractions mode", "expert mode", etc, which offer pre-configured levels of detail.
For example, if the GUI can have more than one instance of the same view open, toggle buttons for view modes become specific to individual view instances. Putting those into a global toolbar is wrong.
An IDE, and the browser example given below, are tools I'll spend thousands of hours using in my life. The discoverability is only important for a small percentage of that, while viewing the content is important for all of it.
This is exactly when I will have the 'knowledge in the head'.
I get why you would hide interface elements to use the screen real estate for something else.
Except that screens on phones, tablets, laptops and desktops are larger than ever. Consider the original Macintosh from 1984 – large, visible controls took up a significant portion of its 9" display (smaller than a 10" iPad, monochrome, and low resolution.) Arguably this was partially due to users being unfamiliar with graphical interfaces, but Apple still chose to sacrifice precious and very limited resources (screen real estate, compute, memory, etc.) on a tiny, drastically underpowered (by modern standards) system in the 1980s for interface clarity, visibility, and discoverability. And once displays got larger the real estate costs became negligible.
There is this little target disc that moves the selection in the project tree to the file currently open in the active editor tab.
Don’t quote me on this, but I vaguely remember there being an option to toggle hiding it, if not in the settings it is in a context menu on the panel.
That thing is a massive time saver, and I agree—keeping it hidden means most people never learn it exists.
I have no idea why some interfaces hide elements hide and leave the space they'd taken up unused.
UI has been taken over by graphic designers, and human interaction experts have been pushed out. It happened as we started calling it "user experience" rather than "user interface", because people started to worry about the emotional state of the user rather than building a tool. It became form over function, and now we have to worry about holding it wrong, when in reality machines are here to serve humans, not the other way around.
It might seem counter intuitive that hiding your interface stops your users leaving. But it does it because it changes your basis of assumptions about what a device is and your relationship with it. It's not something you "use", but something you "know". They want you to feel inherently linked to it at an intuitive level such that leaving their ecosystem is like losing a part of yourself. Once you've been through the experience of discovering "wow, you have to swipe up from a corner in a totally unpredictable way to do an essential task on a phone", and you build into your world of assumptions that this is how phones are, the thought of moving to a new type of phone and learning all that again is terrifying. It's no surprise at all that all the major software vendors are doing this.
Consider that all the following are true (despite their contradictions):
- "Bloated busy interface" is a common complaint about some of Google, Apple, Microsoft, and Meta. People here share a blank VS Code canvas and complain about how busy the interface is compared to their zero-interface vim setup.
- flat design and minimalism are/were in fashion (have been for a few years now).
- /r/unixporn and most linux people online who "rice" their linux distros do so by hiding all controls from apps because minimalism is in fashion
- Have you tried GNOME recently?
A minimal interface where most controls are hidden is a certain look that some people prefer. Plenty of people prefer to "hide the noise", and if they need something, they are perfectly capable of looking it up. It's not like digging in manuals is the only option.
I do think it's likely more passive than active. People at Google aren't deviously plotting to hide buttons from the user. But what is happening is that when these designs get reviewed, nobody is pushing back; when someone says "but how will the user know to do that?", it doesn't get listened to. Instead the people responsible sign off on it, saying "it's OK, they will just learn that; once they get to know it, it will be OK". It's all passive, but it's based on an implicit assumption that users are staying around, and it optimises for the ones that do, making things harder for the ones that want to come and go or stop by temporarily.
Once three or four big companies start doing it, everybody else cargo cults it and before you know it, it looks like fashion and GNOME is doing it too.
I do think it's likely more passive than active. People at Google aren't deviously plotting to hide buttons from the user.
This is important, thank you for mentioning it: actions have consequences besides those that motivated the action. I don't like when people say "<actor> did <action>, and it leads to this nefarious outcome, therefore look how evil <actor> must be". Yes, there is always a chance that <actor> really is a scheming, cartoonish villain who intended that outcome all along. But how likely is it that <actor> is just naive, or careless, or overly optimistic?
Of course, the truth is almost certainly somewhere in the middle: familiarity with a hard-to-learn UI as a point of friction that promotes lock-in may not be a goal, but when it manifests, it doesn't hurt the business, so no one does anything about it. Does that mean the designers should be called out for it? If the effect is damaging enough to the collective interest, then maybe yes. But we needn't assume nefarious intentions to do so.
Then again, everyone thinks their own actions are justified within their own value system, and corporate values do tend toward the common denominator (usually involving profit-making). Maybe the world just has way more cartoonish villains than I give it credit for.
- Dribbble-driven development, where the goal is to make apps look good in screenshots with little bearing to their practical usability
- The massive influx of designers from other disciplines (print, etc) into UI design, who are great at making things look nice but don’t carry many of the skills necessary to design effective UIs
Being a good UI designer means seeking out existing usability research, conducting new research to fill in the gaps, and understanding the limits of the target platform, on top of having a good footing in the fundamentals. The role is part artist, part scientist, and part engineer. It’s knowing when to put ego aside and admit that the beautiful design you just came up with isn’t usable enough to ship. It’s not just a sense for aesthetics and the ability to wield Photoshop or Figma or whatever well.
This is not what hiring selects for, though, and that’s reflected in the precipitous fall in quality of software design in the past ~15 years.
Have you tried GNOME recently?
God, no. I switched to xfce when GNOME decided that they needed to compete with Unity by copying whatever it did, no matter how loudly their entire user base complained.
Why would I try GNOME again?
The tone of your post, and especially this phrase, is inappropriate imo. The GP's comment is plausible. You're welcome to make a counter-argument, but you seem to be claiming, without evidence, that there was no thinking behind their post.
Apple's interface shits me because it's all from that one button, and I can never remember how to get to settings because I use that interface so infrequently, so Android feels more natural. I.e. Android has done its lock-in job, but Apple has done itself a disservice.
(Not entirely fair, I also dislike Apple for all the other same old argument reasons).
Another comment elsewhere on this page informed me that the universal button no longer exists.
The home button was an important lifeline. It did one thing if you pressed it once. No matter what, it took you home. For my older relatives, that one button made using iOS incredibly easy and safe feeling. No matter what, no matter where you ended up, just hit the home button. There, back to start. Easy peasy.
Now with everything hidden behind stupid gestures that even I don't fully understand, my parents struggle not to inundate their iPads with extra windows and split-window "windows" that show up in the app switcher. My mom has had a no-home-button iPhone for years and still can't get the home screen gesture right most of the time, and she hates it. I had to scramble to buy up old 9th gen iPads for them and my grandmother once it was discontinued.
First thing I do on new Pixel phones is enable 3 button navigation, but lately that's also falling out of favor in UI terms, with apps assuming bottom navigation bar and not accounting for the larger spacing of 3 button nav and putting content or text behind it.
However, I think they do a decent job at resisting it in general, and specifically I disagree that removing the home button constitutes hiding a UI element. I see it as a change in interaction, after which the gesture is no longer “press” but “swipe” and the UI element is not a button but the edge of the screen itself. It is debatable whether it is intuitive or better in general, but I personally think it is rather similar to double-clicking an icon to launch an app, or right-clicking to invoke a context menu: neither has any visual cues, both are used all the time for some pretty key functions, but as soon as it becomes an intuition it does not add friction.
You may say Apple is way too liberal in forcing new intuitions like that, and I would agree in some cases (like address bar drag on Safari!), but would disagree in case of the home button (they went with it and they firmly stuck with it, and they kept around a model with the button for a few more years until 2025).
Regarding explaining the lack of home button: on iOS, there is an accessibility feature that puts on your screen a small draggable circle, which when pressed displays a configurable selection of shortcuts—with text labels—including the home button and a bunch of other pretty useful switches. Believe it or not, I know people who kept this circle around specifically when hardware home button was a thing, because they did not want to wear out the only thing they saw as a moving part!
Right, but while it's obvious to everyone that a button is a control, it's not obvious that an edge is a control. On top of that, swiping up from the bottom edge triggers two completely different actions depending on exactly when/where you lift your finger off the screen.
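To make the ambiguity concrete, here is a hypothetical sketch (invented thresholds and names, not Apple's actual logic) of how one screen edge can multiplex several actions depending on how the swipe ends:

```python
# Illustrative only: classify a swipe that started at the bottom edge.
# The thresholds below are made up for the example; the real gesture
# recognizer also considers velocity, direction, and timing.
def classify_bottom_swipe(distance_px: float, paused: bool) -> str:
    if distance_px < 40:
        return "none"          # too short to register as a gesture
    if paused or distance_px < 200:
        return "app-switcher"  # short swipe, or swipe-and-hold
    return "home"              # long, committed swipe

print(classify_bottom_swipe(300, paused=False))  # home
print(classify_bottom_swipe(120, paused=True))   # app-switcher
```

The point of the sketch is that the same visible "control" (the edge) hides several invisible decision boundaries the user has to internalize by trial and error.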
Why not move the physical home button to the back of the phone?
For what it’s worth, back tap is a feature of iOS to which you can assign an action, though it only triggers on double or triple tap.
I couldn't disagree more.
A big physical button on the surface of the device that is both visible and touchable is completely unmissable. More importantly, it's unmistakably a control. There is simply no other explanation for its existence than being a control.
The edge of the screen on the other hand exists because the screen has to end somewhere. There is no hint whatsoever that it doubles as a control when touched in a certain way or that it doubles as multiple different controls when touched slightly differently.
That said, I'm not a dogmatic "UI physicalist" (if that's a thing). I hate the physical mute switch for instance and I'm not a huge fan of the physical double click to authorise purchases. And I don't want scrollbars constantly in my face.
I do believe that new ways of interacting with hardware can be introduced over time even if hidden. There's a legitimate trade-off between discoverability and productivity once you're familiar with the way a device works.
The problem is that some people really struggle with gestures even when they know they exist. I watched people fail to answer calls on Android because it required them to swipe up an on-screen icon.
The number of things you can do swiping or just touching somewhere near the bottom of the screen is staggering and constantly changing.
You're welcome
Now that Pixel cameras outclass iPhone cameras, and even Samsung is on par, there is really no reason to ever switch to the Apple ecosystem anymore IMO.
there is really no reason to ever switch to the Apple ecosystem anymore IMO
Not having anything to do with Google is a pretty good reason I think.
As far as the Back button, on iOS the norm is for it to be present somewhere in the UI of the app in any context where there's a "back" to go to. For cross-app switching, there's an OS-supplied Back button in the status bar on top, again, showing only when it's relevant (admittedly it's very tiny and easy to miss). Having two might sound complicated but tbh I rather prefer it that way because in Android it can sometimes be confusing as to what the single global Back button will do in any given case (i.e. whether it'll navigate within the current app, or switch you back to the previous app).
[iPhone] Interactions are hidden, not intuitive, or just plain missing.
And they aren't even consistent from app to app. That's perhaps the most frustrating thing.
You see this under macOS, too. A lot of Electron apps for instance replace the window manager’s standard titlebar with some custom thing that doesn’t implement chunks of the standard titlebar’s functionality. It’s frustrating.
In Notes, to create a new Note, tap the pencil-in-a-square icon in the lower right corner.
In Calendar, to create a new appointment, tap the + icon in the upper right corner.
In Reminders, to create a new reminder, tap the + in a blue circle in the lower left corner. At least it offers a text label "New Reminder"
These are all Apple apps and they all do it differently. And that's not even getting into gestures and other actions that you just have to stumble upon to even know they exist.
The main one I end up missing most is the swipe to go back gesture within apps. It comes for “free” when using UIKit and SwiftUI navigation primitives (UINavigationController, UISplitViewController, and their SwiftUI counterparts) but it’s almost always missing from apps built with React Native and such.
Take a simple example: Open a read-only file in MS Word. There is no option to save? Where's it gone? Why can I edit but not save the file?
A much better user experience would be to enable and not hide the Save option. When the user tries to save, tell them "I cannot save this file because of blah" and then tell them what they can do to fix it.
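A minimal sketch of the suggested pattern (hypothetical names, not Word's actual API): keep the Save action visible and enabled, and explain the failure, plus a way out, when the user actually tries it:

```python
# Instead of hiding Save for a read-only file, let the attempt fail
# with an actionable explanation.
class ReadOnlyError(Exception):
    pass

def save(document: dict) -> str:
    """Save a document, or explain exactly why it can't be saved."""
    if document.get("read_only"):
        raise ReadOnlyError(
            "Cannot save: this file is read-only. "
            "Use 'Save As...' to save a copy, or clear the read-only flag."
        )
    return "saved"

try:
    save({"read_only": True})
except ReadOnlyError as e:
    print(e)  # the user learns why, and what to do next
```

The design choice here is that the error message carries the "knowledge in the world" that hiding the menu item throws away.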
Interesting article. Some points I didn't quite agree with entirely. There's a cost and a practical limitation to some things (like a physical knob in a car for zooming in and out on a map, although that was probably just an example of intuitive use).
I just recently switched a toggle on a newly installed app that did the opposite of what it was labelled - I thought the label represented the current state, but it represented the state it would switch to if toggled. It became obvious once changed, but that seems the least helpful execution.
If you have (next to your monitor on the left side) a narrow physical display with menu entries in it. You get 4 things for "free", the user will expect there to be menu entries, the developer will understand the expectations to have menu entries, there is limited room to go nuts with the layout or shape of the menu and last but most funny, you won't feel part of the screen has been taken away from you.
The physical scrollbar should be a transparent tube with a ball (or ideally a bubble) floating in it.
Usage could be moving the pointer out of the screen. The scrollbar LED goes on and you can hold the button to move the page. When using the menu the pointer [also] vanishes and the menu entry at that height is highlighted (much better usability). Moving the mouse up or down highlights the entries above or below; if there are a lot of entries it may also scroll. It may be a touch screen, but the most usable option would be a vertical row of 5 extra-wide (3 fingers) keyboard buttons on the left, with the top 4 corresponding to the 1st, 2nd, 3rd, and 4th menu entries and the 5th for page down (scrolling down 4 entries). Ideally these get some kind of texturing so that one can feel which button one is touching.
This way knowledge in the world can smoothly migrate to knowledge in the head, until eventually you can smash out combinations of M keys in fractions of a second without looking at the screen or the keyboard. The menu displayed is always in focus; you don't have to examine the viewport to use it. Having a row of horizontal F keys is a design fiasco. Instinctively bashing the full row of those might come naturally after learning to type, then learning to type numbers, then symbols, and only if you frequently use applications that have useful F-key functionality. I only really know F5 and F11, but I can't smash them blindly as I pretty much never use them. I just tried F1 in Firefox and no help documentation showed up... I think that was what it was supposed to do? Not even sure anymore.
Having the menu (file, edit, etc.) at the top of the viewport is also ugly. For example, smashing the second then the top M key could easily become second nature. CTRL+Z is fine of course, but it ain't knowledge in the world. Does anyone actually use ALT+E+U for undo? Try it on the CTRL+F input area. It's just funny. Type something in the address bar, then compare ALT+E+U with using the Edit menu.
A separate display would take many of these "design" privileges away from the clowns.
(Note: I think it is ALT+E+U, as the Dutch layout is forced on me by Windows. Edit is called Bewerken and the shortcut is ALT+W!?! ALT+E does nothing.)
The physical scrollbar should be a transparent tube with a ball (or ideally a bubble) floating in it.
Oh, god, the Touch Bar was already a frustrating enough piece of UI, don't give Apple more ideas.
If I was on the design team they would have fired me for screaming at everyone. Screaming is good UI tho.
If I was on the design team they would have fired me for screaming at everyone.
Oh man. I really do start screaming sometimes.
At user interfaces, too often. At unbelievably bad product choices of all kinds.
The simpler & dumber the issue the louder I get.
Someone creates a quality flat tine garden rake with about 40 metal tines, and charges accordingly. The person who manages stickers, because everything needs stickers, creates huge stickers they glue across all the tines. You try to peel it off and now you have over two dozen tines with long streaks of shredded paper glued hard to them.
Screaming is an appropriate place to put the high spin WTF-a-tons that might otherwise feed the universe’s dark energy.
And that, dear reader, is my theory of dark energy.
The analogy is probably to start with a 5G wifi HD camera doorbell with cloud hosting, night vision, and human-body motion detection zones, and you end with a heavy-duty cast iron door knocker that has one moving part and instantly reveals something personal about the person at the door. A small artwork depicting something about you. They come as dragons, goats, unicorns, vikings, snakes, bumblebees, longhorns, moose, Neptune, Bacchus, all kinds of tools and all kinds of symbols. Built to last many times longer than the house.
And then people can't find the bell...
It enabled a neat set of affordances, but not worth losing core functionality over.
I don't agree that scrollbars work fine; they used to work fine, but now they are too tiny to click on.
There also was/is the issue where the viewport width needs to be adjusted when page content grows beyond the screen height, and then word wrap makes the content shift down. Is the solution a scrollbar so tiny it is hard to use, or should one always display a scrollbar? The one outside the screen is always there :)
I like things that do only one thing, do it well and in a simple way.
You could also go the other direction and put everything on the screen. Huawei just made a horrifying laptop where the keyboard is also a screen.
If you are reading (top to bottom) the wheel is really useful. It scrolls in chunks tho. A touch pad is more accurate but I always attach a mouse to my laptops. Not sure why but it feels less convenient than the scroll wheel.
If you need to travel slightly further dragging the middle mouse "button" is great. I have no idea how to do that with a touch pad but one can also use page up and page down (or [shift+]space when available)
If pages or documents are really long it isn't fast enough. I might roughly know where the thing I'm looking for is. Holding down the mouse button on that offset. Is faster.
The scrollbar does both extremely fast and fine scrolling. I rarely start reading a page of code from the top. The larger sections are usually the most relevant. Here I hold the scroll bar and move it up and down quickly and accurately while reading.
But the real trick is that it doesn't affect the cursor position. You can scroll, then continue typing, and it will jump back to the cursor. [Say] inline CSS: I'm typing some HTML but need to look up what some class name was called. I can scroll to it, read, and continue typing. Same for the names of functions and variables defined elsewhere in the same file. Or [say] I don't recall how to spell someone's name even though I've already used it in the same article.
In the 90's I had this vision that the menu and the scrollbar should be physically separated from the screen.
Buttons alongside, above, or below screens appear now and then. Some early terminals had them. Now that seems to be confined to aircraft cockpits and gasoline dispensers.
Some ATMs have unmarked physical buttons next to the screen, and the text displayed on the screen next to those buttons defines what each key does.
TV remotes have A/B/C/D (red/blue/green/yellow) physical buttons whose function is dynamically defined by your context or which setting / function / menu you are currently inside.
I guess this goes back to video game controllers that have A/B X/Y buttons that can have different functions in different contexts.
About the scroll bars: Also stop making them so thin that I have to have FPS skills to hit them! Looking at you, Firefox! (And possibly what standard CSS allows?) Yeah, I can scroll, but horizontally the scrollbar would be more convenient than pressing shift with my other hand.
widget.gtk.overlay-scrollbars.enabled = false
layout.css.scrollbar-width-thin.disabled = true
widget.non-native-theme.scrollbar.style = 3
widget.non-native-theme.scrollbar.size.override = 30
It boggles my mind how bad many interfaces manage to be.
More seriously, my understanding is that the octopus retina does not have color receptors, just aggregate light, i.e. brightness.
But the octopus practically has a sub-brain behind each respective eye, and the eye brains can extract color from the slight lensing differences across frequencies.
They are amazing magical creatures.
Taking that approach, and some sort of ocular lathe, and we can fix this.
widget.non-native-theme.scrollbar.style = 4
widget.non-native-theme.scrollbar.size.override = 20
Though I'm not sure if I maybe want:
widget.non-native-theme.scrollbar.style = 3
widget.non-native-theme.scrollbar.size.override = 30
Plus of course that, which I had found earlier and did nothing on its own:
widget.gtk.overlay-scrollbars.enabled = false
layout.css.scrollbar-width-thin.disabled = true
It made for the quickest pee break ever.
(Most of the time I use the scroll gesture on the trackpad to get round this)
I thought the label represented the current state, but it represented the state it would switch to if toggled. It became obvious once changed, but that seems the least helpful execution.
Such ambiguous switches are often associated with "opt out" misfeatures.
The other day I was locked out of my car
the key fob button wouldn't work
Why didn't I just use my key to get in?
First, you need to know there is a hidden key inside the fob.
Second, because there doesn't appear to be a keyhole on the car door,
you also have to know that you need to disassemble a portion
of the car door handle to expose the keyhole.
Hiding critical car controls is hostile engineering. In this, it doesn't stand out much in the modern car experience.

Basic knowledge about the things you own isn't hard. My god, there is a lot of "old man shakes fist at cloud" in here.
How can I trust a driver to take things like safe maximum load into account when they don't even know they can open their car if their battery ever goes flat?
For example (Kia Carnival): Holding the lock button on the fob for 20 seconds will automatically close the sliding doors and any open windows.
I did know that there must be a physical key (unless Tesla?), and the only way I found the keyhole was because a previous renter had scratched the doorknob to shit trying to access the very same keyhole.
Contrast this with something like an airplane cockpit, which while full of controls and assuming expert knowledge, still has them all labeled.
Phones aren’t 747s, and guess what: every normal person who goes into an airplane cockpit and isn’t a pilot is so overwhelmed by all the controls that they wouldn’t know what anything did.
Interface designers know what they’re doing. They know what’s intuitive and what isn’t, and they’ve refined down to an art how to contain a complicated feature set in a relatively simple form factor.
The irony of people here with no design training claiming they could do a better job than any “so-called designer” shows incredible levels of egotism and disrespect toward a mature field of study.
Also demonstrably, people use their phones really quite well with very little training, that’s a modern miracle.
Stop shaking your fist at a cloud.
They know what’s intuitive and what isn’t
... and then they ignore it? It triggers me when someone calls hidden swipe gestures intuitive. It's the opposite of affordance, which these designers should be familiar with if they are worth their salaries.
No they don't. The article refutes your points entirely, as does everyone else here who has been confounded by puzzling interfaces.
“I’m smarter than every designer” is such a common programmer trope at this point that it’s hilarious. Speaking as a developer myself.
Win NT-Vista style, aka the way web browsers show tabs with an icon + label is peak desktop UX for context switching and nobody can convince me otherwise. GNOME can't even render taskbars that way.
The appification of UI is a necessary evil if you want people in their mid twenties or lower to use your OS. The world is moving to mobile-first, and UI is following suit, even in places it doesn't make sense.
Give a kid a UI from the 90s, styled after industrial control panels, and they'll be as confused as you are with touch screen designs. Back in the day, stereos used to provide radio buttons and sliders for tuning, but those devices aren't used anymore. I don't remember the last device I've used that had a physical toggle button, for instance.
UI is moving away from replicating the stereos from the 80s to replicating the electronics young people are actually using. That includes adding mobile paradigms in places that don't necessarily make sense, just like weird stereo controls were all over computers for no good reason.
If you prefer the traditional UX, you can set things up the way you want. Classic Shell will get you your NT-Vista task bar. Gnome Shell has a whole bunch of task bar options. The old approach may no longer be the default one, but it's still an option for those that want it.
Classic Shell, Gnome Shell task bar options
Yeah mods, hacks, and extensions don't really count for either. The more time passes the more this nonsense becomes mandatory. Luckily KDE still exists for now and has it all native.
The appification of UI is a necessary evil if you want people in their mid twenties or lower to use your OS.
If they're using it at work they're going to use it anyways because they probably want to keep the job.
The old desktop operating system UIs were designed for people with zero computer experience, yet now...they would be too hard to learn for someone with only Android experience?
Most people for most situations, using most phone apps, do not have that familiarity. Mobile design has to simultaneously provide a lot of power and progressively disclose it such that it keeps users at or just past their optimal level of comfort, and that involves tradeoffs to hide some things and expose others at different levels of depth.
So while I agree that a lot of mobile design, and OS design in particular, pulls back way too far on providing affordances for actions, I would not use an airplane cockpit as a good guide, unless you’re also talking about a specialist tool.
I recall learning that the four corners of the screen are the most valuable screen real estate, because it's easy to move the mouse to those locations quickly without fine control. So it's user-hostile that for Windows 11 Microsoft moved the default "Start" menu location to the center. And I don't think they can ascribe it to being mobile-first. Maybe it's "touch-first", where mouse motion doesn't apply.
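The principle being recalled here is Fitts's law: time to hit a target grows with distance and shrinks with target size, and screen edges and corners "pin" the cursor, making the target's effective size effectively huge. A sketch with invented constants (the a and b coefficients are per-user and per-device; these values are illustrative only):

```python
import math

# Fitts's law, Shannon formulation: MT = a + b * log2(D/W + 1),
# where D is distance to the target and W its width along the motion
# axis. The constants a and b here are made up for illustration.
def movement_time(d: float, w: float, a: float = 0.1, b: float = 0.15) -> float:
    return a + b * math.log2(d / w + 1)

# A mid-screen button must be aimed at precisely; an edge/corner target
# lets you overshoot, so its effective width is enormous.
center = movement_time(d=800, w=40)    # small target in the middle
corner = movement_time(d=800, w=4000)  # edge target, huge effective W

print(center > corner)  # True: edge/corner targets are faster to hit
```

That's why moving the Start button away from the corner gives up measurable pointing efficiency, whatever the aesthetic argument.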
I think they wanted the start menu to be front and center. And honestly, that just sounds like a good idea, because it is where you go to do stuff that's not on your desktop already. But clicking a button in the bottom left and having the menu open in the middle would look weird, so centering the icons would make sense.
I think there are better ways to do it and I'm sure they've been tried, but they would probably confuse existing Windows users even more.
My metaverse client normally presents a clean 3D view of the world. If you bring the cursor to the top or bottom of the screen, the menu bar and controls appear. They stay visible as long as the cursor is over some control, then, after a few seconds, they disappear.
This seems to be natural to users. I deliberately don't explain it, but everybody finds the controls, because they'll move the mouse and hit an edge.
My car has something like that, but thankfully I have only needed to adjust volume, which can be done from the steering wheel…
You want to mess with your equalizer, do it when stopped. IDGAF if it's dozens of physical buttons and knobs and sliders or hidden in menus; you're supposed to be driving not mastering an audio file.
So my iMac, among many other devices like the light I wear on my head camping, has a button which you long-press to turn on. It is a very common pattern which most people will have come across, and it’s reasonable to expect people to learn it. The buttons are even labelled with an ISO standard symbol which you are expected to know.
If it’s just a button, the user only has to know two things: turn the switch on at the wall socket when plugging in, which becomes habit since childhood; and press and hold the button on the fan to make it go, which I suspect most children in 2025 can manage. These two things don’t interact and can be known and learned separately.
As you said, the knob’s position tells you about the switch. But it’s the fan the user is interested in, not the switch.
(BTW, if the fan has a motion sensor you can’t tell it’s off by the fact the blades aren’t turning. There’s probably a telltale LED.)
A better example may be a solenoid button, used on industrial machinery which should remain off after a power failure, which stays held in when pushed, but pops out when the power is cut. They are not common outside of such machinery, because they're extremely expensive. In the first half of the 20th century, they also saw some use in elevators: https://news.ycombinator.com/item?id=37385826
Your average dev who's never used vim or vi will start out frustrated by default.
The other important thing is learning to fit into the conventions of the platform: for example, Cocoa apps on Mac all inherit a bunch of consistent behaviors.
The other way around is yeah, hostile. But of course it looks sleek and minimalistic!
On the early iPhones, they had to figure out how to move icons around. Their answer was, hold one of the icons down until they all start wiggling, that means you've entered the "rearrange icons" mode... Geezus christ, how intuitive. Having a button on screen, which when pressed offers a description of the mode you've entered would be user-friendly, but I get the lack of appeal, for me it would feel so clunky and like it's UI design from the 80's.
"Another example is the absurd application of icons. An icon is a symbol equally incomprehensible in all human languages."
Being a modal editor probably makes removing all persistent chrome more feasible.
Once you’ve learned the tool
I don’t have time to learn the tool. I want to use the tool immediately. Otherwise, I’m moving on.
Configurable options are certainly a good approach for those that know the tool well, but the default state shouldn’t require “learning.”
There is a tradeoff between efficiency and learnability, in some cases learning the tool pays off.
https://statetechmagazine.com/article/2013/08/visual-history...
Look at the image of 2.0. There is permanent screen space dedicated to:
- Open
- Print
- Save
- Cut
- Copy
- Paste
I'm guessing you know the shortcuts for these. You learned the tool. But by taking up so much space, these are given the same visual hierarchy as the entirety of the word 'Wikimedia'!
Configurable options are certainly a good approach for those that know the tool well, but the default state shouldn’t require “learning.”
In practice, IME, this just means there being combinatorially many more configurations of the software and anything outside the default ends up clashing with the rest of the software and its development.
I'm especially passionate about this because having ADHD makes one sensitive to irrelevant stimuli in the periphery, but as a power user of most software, the dumbification of software that has been happening since mobile apps drives me insane. I want software where a feature used once a month by the top 5 to 10% of power users is not ripped out, if that once-a-month use provides high value for that group.
(1) The "fast" path: Provide toolbars, keyboard shortcuts and context menus for quick access to the most important features. This path is for users who already have the "knowledge in the head" and just want to get there quickly, so speed takes priority over discoverability.
(2) The "main" path: Provide an exhaustive list of all features in the "title bar"/"top of the screen" menus and the settings dialogues. This path is mainly for users who don't have the "knowledge in the head" and need a consistent, predictable way to discover the application's features. But it's also a general-purpose way to provide "knowledge in the world" for anyone who needs it, which may also include power users. Therefore, for this path, discoverability and consistency is more important than speed.
Crucially, the "main" features are a superset of the "quick" features. This means, every "quick-access" feature actually has at least two different ways to activate it, either through 1 or through 2.
This sounds redundant, but it makes perfect sense if it allows people to first use the feature through 2 and later switch to 1 once they are more confident.
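The two-path structure described above can be sketched as a command registry in which every command must live in the discoverable menu tree, while shortcuts are an optional fast path layered on top (a hypothetical design, not any particular toolkit's API):

```python
# Sketch of the "superset" rule: path 2 (menus) covers everything;
# path 1 (shortcuts) is an optional accelerator for a subset.
class CommandRegistry:
    def __init__(self):
        self.menu = {}       # menu path -> command (discoverable "main" path)
        self.shortcuts = {}  # key combo -> command (fast path)

    def register(self, name, menu_path, shortcut=None):
        self.menu[menu_path] = name  # every command is always in a menu
        if shortcut:                 # only some also get a shortcut
            self.shortcuts[shortcut] = name

    def fast_is_subset_of_main(self):
        # The invariant the comment argues for: no shortcut-only features.
        return set(self.shortcuts.values()) <= set(self.menu.values())

reg = CommandRegistry()
reg.register("save", "File/Save", "Ctrl+S")
reg.register("export", "File/Export")  # discoverable, no shortcut
print(reg.fast_is_subset_of_main())    # True
```

Dropping path 2, as the following paragraphs describe, is exactly what breaks this invariant: features become reachable only through the fast path.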
My impression is that increasingly, UIs drop 2 and only provide 1, changing the "fast" into the "main" path. Then suddenly "discoverability" becomes a factor of its own that needs to be implemented separately for each feature - and in the eyes of designers seems to become an unliked todo-list bullet point like "accessibility".
Usually, then, it's implemented as an afterthought: either through random one-time "new feature" popups (if it popped up at an inappropriate time and you just closed it to continue with what you wanted to do, or if you want to reopen it later - well, sucks to be you) - or through unordered "everything" menus that just contain a dump of all features in an unordered list, but are themselves hidden behind some obscure shortcut or invisible button.
if you stop trying to build a compromise of a UI for both touch screens and desktops
Agree many of the problems have to do with this, yet it's barely mentioned by armchair designers. Temporarily hidden, narrow scrollbars? Makes perfect sense for scrolling on a touch screen (since you don't drag them directly), but very annoying on desktop.
Back in the pre-touch days we'd have a lot of hover menus. But with a phone today? Nobody likes the hamburger/three-dots menu, but there isn't a better alternative that doesn't lose context. And nobody uses hover for functional purposes anymore.
But, I also don’t think building entirely separate apps and especially web sites for different form factors is desirable. We probably should be better at responsive design, and develop better tooling and guidelines.
Touch grass people.
no one else seems to have any issues with most of this stuff
In my experience, 9 times out of 10 what this actually means is that they just don't know it's an issue! The type of person who would be confused by, say, the iOS Control Center is not necessarily the type of person who would easily identify and raise the issue of it being difficult to do something on their device. They would just be mildly annoyed that they can't figure it out, or that the device "can't do it", and move on to find some other way. You may not realize it if you don't interact with those types of people, but they fundamentally do not think like you or I do. What may be an obvious problem-solving process to you (e.g. identify a problem, figure out what tools are at your disposal and whether each could be helpful, check for functionality that could do what you want, ask for help from others if you can't figure it out on your own, etc.) may actually not be so obvious to them.
That's why the main way I find out people don't know how to do something is from them seeing me do it with my device and going "what!! I didn't know it could do that!!"
It makes it impossible to locate files later when I need to move or transfer them.
When people who are not thinking in that bigger-scale, zoomed-out, societal-level perspective conduct A/B testing or usability testing in a lab or focus group setting, they focus on the wrong metrics (the ones that make an immediate, short-term KPI go up) and then promote the resulting objectively worse UX designs as being evidence-based and data-driven.
It has been destroying software usability for the last 20 years and doing a deep disservice to subsequent generations who are growing up without having been exposed to TRULY thoughtful UX except very rarely.
I will die on this hill.
It's often more useful to share the directory it's in rather than the file itself. MS Office does have a way to get that information, but you have to look for it.
Essentially it's UI text in random places telling you what steps you should take to activate some other feature, instead of - you know - just providing a button to activate that feature.
A variant of this is buttons or menu items that don't do anything else than move focus onto another button, or open a menu in a different location, so you can then click on that one.
Increasingly seeing this in Microsoft products, especially in VS Code.
Game Helpin' Squad: World Quester 2
https://www.youtube.com/watch?v=0Gy9hJauXns
Every time I'm using Cursor and select "Cursor => Settings => Cursor Settings" I giggle and think of World Quester 2.
I love World Quester 2 so much, I implemented its most innovative feature, the "Space Inventory", in the WASM version of Micropolis (SimCity):
WARNING: DO NOT PRESS THE SPACE BAR!!!! (And if you accidentally do, then definitely DO NOT PRESS IT AGAIN!!!! Or AGAIN!!! Or AGAIN!!!)
SimCity Micropolis Tile Sets Space Inventory Cellular Automata To Jerry Martin's Chill Resolve:
In my opinion, hidden controls aren’t bad per se. But they are something you have to learn to use. That makes them generally worse for beginners and (hopefully) better for experts. It’s a trade off and sometimes getting users to learn your UI is the right decision. I’m glad my code editor puts so much power at my fingertips. I’m glad git is so powerful. I don’t want a simplified version of git if it means giving up some of its power.
That said, I think we have gone way too far toward custom per-app controls. If you're going to force users to learn your UI conventions, those learnings should apply to other applications on the same platform. Old platforms like the Palm were amazing for this - custom controls were incredibly rare. Once you learned to use a Palm Pilot, you could use all the apps on it.
One press turns the display on or off. A double press enables Apple Pay.
Quite often my timing is not perfect, or one press isn't hard enough, so I just shut off the display.
Then, paying with Apple Pay is a double press, but paying for transit is no press at all. Often I'm absent-minded, and as I'm walking through the transit gate my brain thinks "must pay", "pay = double press", so I subconsciously double press - and the gate screams, since it's no longer in transit mode but in Apple Pay mode.
Gradually, over decades, society has evolved a "shared language of touch-screen actions" for controlling touch-screen devices. Many actions are familiar to everyone here: tap to hide/show controls, press and hold to bring up contextual menus, pinch with two fingers to zoom out, etc.
It's OK for UI designers to assume familiarity with this common language to keep UIs clean, calm, and uncluttered. I like it.
If you want to lock the door, then the hidden control problem becomes evident... to lock the door, I must know that the hidden control to lock is the pound key. To make matters worse, it's not a simple press of the pound key. It's a press of the pound key for a full five seconds in order to activate the lock sequence. The combination of the long temporal window and the hidden control makes locking the door nearly impossible, unless you are well acquainted with the system and its operation.
Isn't that kind of the point? You don't want people accidentally locking the door, but if it's your door, it's easy enough to remember how to do it.
As an example:
I think hiding controls in favor of "knowledge in the head", as the author phrases it, is absolutely fine when the user is presumed to be aware of features, should be able to understand they exist and know how to use them, and can reasonably learn them. Especially fine if those controls aren't used all that often, and are behind a keyboard shortcut or other common and efficient route to reach them.
On the other hand - I think there's also been a drive to visibly reduce how much control and understanding basic users might have about how a machine works. Examples of this are things like
- Hiding the scheme/path in browser url bars
- Hiding the file path in file explorers and other relevant contexts
- Hiding desired options behind hoops (ex - installing windows without signing into an account, or disabling personalized ads in chrome)
Those latter options feel hostile. I need to know the file path to understand where the file is located. I can't simply memorize it - even if I see the same base filename, is it in "c:/users/me/onedrive/[file]" or "c:/users/me/backed_up_spot/[file]"? No way to know without seeing the damn path, and I can have multiple copies floating around. That's intentional (it drives users to Microsoft's paid tooling), and hostile.
Basically - knowledge that can be learned and memorized can benefit from workflows that give you the "blank canvas" that the author seems to hate. Command lines are a VERY powerful tool to use a computer, and the text interface is a big part of that. R is (despite my personal distaste for it as a language) a very powerful tool. Much more powerful and flexible than SPSS.
But there are also places where companies are subverting user goals to drive revenue, and that can rightfully fuck right off.
One of my biggest complaints with modern computing is that "The internet" has placed a lot of software into a gray zone where it's not clear if it's respecting my decisions/needs/wants or the publisher's decisions/needs/wants.
It used to be that the publisher only mattered until the moment of sale. Then it was me and the software vs the world - ride or die. Now far too much software is like judas. Happy to sell me out if there's a little extra silver in it.
Look at Google Meet, for example. How many times am I left trying to remember what the Share Screen icon looks like? Apple generally does this stuff far better: text labels, for example. Also, clicking some "+" icon to reveal more options - how does a "normal" person know what's buried inside all of those click-to-reveal options?
Diversity in tech has always been a concern — but one concern I have is that diversity has always meant race, gender, or sexual orientation stuff — but a 28 year old Hispanic LGBT person doesn’t react to a UI much differently than a 28 year old Black hetero person. But a 68 year old Hispanic woman with English as a second language absolutely has potentially different UI understandings than an 18 year old white woman from Palo Alto.
Real diversity (especially age and tech experience levels) should be embraced by the tech companies — that would have a strong impact on usability. Computers are everywhere and we shouldn’t be designing UI around “tech people” understanding and instead strive for more universal accessibility — especially for products we expect “everyone” to potentially use. (Some dev ops tool obviously would have more latitude than an email app, but even then, let’s stop assuming users understand your visual language just because you do.)
I want to see more UX designers who are “old” rather than some clever kid who lives on Behance. I also want to see more design that isn’t created by typical higher educated designers who think everyone should understand things they take for granted. The blue collar worker that works construction, the grandmother from Peru, the restaurant cook, or the literature professor — whatever. Usability should be clear and obvious. That’s really hard — but that’s the job.
One of the original genius aspects of iPad is that a toddler can immediately start using it. We need all usability to be in that vein.
Not too convenient to carry along with a pocket computer, though.
We need a viable third option in mobile operating systems. At least with cars, we have high-quality infotainment systems such as those from Tesla and Rivian. In the mobile phone space, we have two poor options and a few alternatives with vanishingly small market share.
As a user, you have no way to see whether a photo has been "scanned" with smart features and what was detected (e.g. found person X, found dog, blue sky, beach, etc.).
The Trips feature: has the algorithm finished scanning your library? You have no idea; it's just hidden.
Face detection: has it completely scanned your library? You don't know. For photos that don't seem to have any faces detected, did the scan run and fail, or has it not run yet?
The list is nearly endless - and, in line with the general direction of macOS, it's getting worse.
Then on the software side I find YouTube particularly annoying, especially the show-on-hover buttons on thumbnails. You want to click a video, so you stop thinking about it, move your mouse over it, and click - but as you hover, buttons spawn under the cursor, so there's a fair chance you won't launch the video as you intended but instead get redirected to YouTube's ad disclosure policy page, as if anyone wanted to read that.
A DOS command window. Without specific knowledge in the head, the user cannot perform a single action.
To be fair, even CLI environments provide some UI discovery. E.g. DOS had 'help' and it would list available commands and a short description.
Witness the navigation system in Apple Maps in CarPlay. The system developers obviously wanted to display as much map as possible, as shown in Figure 3 a). This makes sense, but to do that they relied on the use of hidden controls. If I want to enter a destination or zoom in on the map, I have to know to touch the bottom left-hand portion of the map
What? You don't have to touch any specific portion of the map. You tap anywhere and it brings up those controls.
I think this article largely has a point, and most of it seems true, but to me these bits of untruth are unamusing at best.
if I want to activate the flashlight on my iPhone, I have to know to swipe up from the bottom left-hand corner in order to bring up a control panel where the flashlight button exists.
The author even drives the point home by getting this wrong.
To get to the control panel it’s not a swipe up from the bottom left, it’s a swipe down from the top right.
And you don’t even need to do that. The flashlight and camera icons exist right on the Lock Screen for immediate use, without having to bring up the control panel.
It’s only the dangerously obsolete iPhones - iPhone 8 / 2016 and earlier - where you swipe from the bottom up. And from the bottom-anywhere straight up, no need to go from any corner. We’ve had 9 years of iPhones with the swipe-down action, and less than 2% of iPhones still in use are iPhone 8 and earlier.
Nowadays everything has to be clean and minimalist. No scrollbar, no buttons, just gestures. Hand a modern smartphone to someone who never used one in their life and see how they struggle to ever leave the first app they open. What are the odds they discover one of the gestures?
My wife's Pixel was hung and I was trying to reboot it. Long press, double tap, triple tap, up-down volume, then power was the answer.
We are at the point where our gadgets expect bespoke Konami codes before they respond to input.
It's maddening!
None of this is new. But this kind of dysfunctional product is what a dysfunctional organization ships, despite knowledge.
Why? Because leadership wants features. Leadership also wants a clean, marketable product. Leadership also wants both of those done on a dime, quickly and doesn't care about the details. The only way to satisfy all constraints at the same time is to implement features and hide them so they don't clutter the UI.
The problem isn't awareness. It goes deeper.
And some of their conferences are just downright awful UI
The golden age of computing is sadly long, long past.
Mark Weiser, Ben Shneiderman, Jack Callahan, and I published a paper at ACM CHI'88 about pie menus, which seamlessly support both relaxed "self revealing" browsing for novices, and accelerated gestural "mouse ahead" for experts: smoothly, seamlessly, and unconsciously training users to advance from novice to expert via "rehearsal".
Pie menus are much better than gesture recognition for several synergistic reasons: Most importantly, they are self revealing. Also, they support visual feedback, browsing, error recovery, and reselect. And all possible gestures have a valid and easily predictable and understandable meaning, while most gestures are syntax errors.
Plus, the distance can also be used as an additional parameter - like a "pull out" pie menu that selects font by direction and size by distance, with live interactive feedback both in the menu center and in the text document itself, which is great during "mouse ahead" before the menu has even been shown.
The exact same gesture that novices learn by being prompted by the pop-up pie is the action experts use, more quickly, to "mouse ahead" through even nested menus without looking at the screen or needing to pop up the pie menu. (By the principle of "Lead, follow, or get out of the way!")
Linear menus with keyboard accelerators do not have this "rehearsal" property, because pressing multiple keys down at once is a totally different (and more difficult to remember and perform) action than pointing and clicking at tiny little menu labels on the screen, each one further from the cursor and more difficult to hit than the next.
Our controlled experiment compared pie menus to linear menus, and proved that pie menus were 15% faster, and had a significantly lower error rate.
Fitts' Law unsurprisingly predicted that result: it essentially says that the bigger and closer a target is to the cursor, the faster and more reliably you can hit it. Pie menus optimize both the distance (all items directly adjacent in different directions) and the area (all items have huge wedge-shaped target areas that get wider as you move away from the center, so you gain more precise "leverage" as you move further, trading distance for angular precision).
https://en.wikipedia.org/wiki/Fitts%27s_law
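The prediction can be made concrete with the Shannon formulation of Fitts' law, T = a + b * log2(D/W + 1). The constants and target geometries below are purely illustrative, not measurements from the CHI'88 experiment:

```python
import math

def fitts_time(distance: float, width: float, a: float = 0.1, b: float = 0.15) -> float:
    """Shannon formulation of Fitts' law: T = a + b * log2(D/W + 1).
    a and b are device/user constants; the defaults here are illustrative only."""
    return a + b * math.log2(distance / width + 1)

# A pie menu slice: the target starts right next to the cursor, and the
# wedge widens as you move outward, so it behaves like a close, wide target.
pie = fitts_time(distance=30, width=60)

# A linear menu item several rows down: farther away, with a small fixed height.
linear = fitts_time(distance=120, width=20)

print(f"pie slice:   {pie:.3f} s")
print(f"linear item: {linear:.3f} s")
assert pie < linear  # closer + bigger target => predicted faster selection
```

The qualitative ordering (pie faster than linear) is what the law predicts regardless of the exact constants chosen, which is why the 15% experimental result was unsurprising.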
An Empirical Comparison of Pie vs. Linear Menus, Proceedings of CHI'88:
https://donhopkins.medium.com/an-empirical-comparison-of-pie...
Pie Menus: A 30 Year Retrospective (37 years now):
https://donhopkins.medium.com/pie-menus-936fed383ff1
The Design and Implementation of Pie Menus: They’re Fast, Easy, and Self-Revealing. Originally published in Dr. Dobb’s Journal, Dec. 1991, cover story, user interface issue:
https://donhopkins.medium.com/the-design-and-implementation-...
[...] Pie Menu Advantages: Pie menus are faster and more reliable than linear menus, because pointing at a slice requires very little cursor motion, and the large area and wedge shape make them easy targets.
For the novice, pie menus are easy because they are a self-revealing gestural interface: They show what you can do and direct you how to do it. By clicking and popping up a pie menu, looking at the labels, moving the cursor in the desired direction, then clicking to make a selection, you learn the menu and practice the gesture to “mark ahead” (“mouse ahead” in the case of a mouse, “wave ahead” in the case of a dataglove). With a little practice, it becomes quite easy to mark ahead even through nested pie menus.
For the expert, they’re efficient because — without even looking — you can move in any direction, and mark ahead so fast that the menu doesn’t even pop up. Only when used more slowly like a traditional menu, does a pie menu pop up on the screen, to reveal the available selections.
Most importantly, novices soon become experts, because every time you select from a pie menu, you practice the motion to mark ahead, so you naturally learn to do it by feel! As Jaron Lanier of VPL Research has remarked, “The mind may forget, but the body remembers.” Pie menus take advantage of the body’s ability to remember muscle motion and direction, even when the mind has forgotten the corresponding symbolic labels.
By moving further from the pie menu center, a more accurate selection is assured. This feature facilitates mark ahead. Our experience has been that the expert pie menu user can easily mark ahead on an eight-item menu. Linear menus don’t have this property, so it is difficult to mark ahead more than two items.
This property is especially important in mobile computing applications and other situations where the input data stream is noisy because of factors such as hand jitter, pen skipping, mouse slipping, or vehicular motion (not to mention tectonic activity).
There are particular applications, such as entering compass directions, time, angular degrees, and spatially related commands, which work particularly well with pie menus. However, as we’ll see further on, pies win over linear menus even for ordinary tasks.
Gesture Space:
https://donhopkins.medium.com/gesture-space-842e3cdc7102
[...] Excerpt About Gesture Space: I think it’s important to trigger pie menus on a mouse click (and control them by the instantaneous direction between clicks, but NOT the path taken, in order to allow re-selection and browsing), and to center them on the exact position of the mouse click. The user should have a crisp consistent mental model of how pie menus work (which is NOT the case for gesture recognition). Pie menus should completely cover all possible “gesture space” with well defined behavior (by basing the selection on the angle between clicks, and not the path taken). In contrast, gesture recognition does NOT cover all gesture space (because most gestures are syntax errors, and gestures should be far apart and distinct in gesture space to prevent errors), and they do not allow in-flight re-selection, and they are not “self revealing” like pie menus.
Pie menus are more predictable, reliable, forgiving, simpler and easier to learn than gesture recognition, because it’s impossible to make a syntax error, always possible to recover from a mistaken direction before releasing the button, they “self reveal” their directions by popping up a window with labels, and they “train” you to mouse ahead by “rehearsal”.
[...] Swiping gestures are essentially like invisible pie menus, but actual pie menus have the advantage of being “Self Revealing”[5] because they have a way to prompt and show you what the possible gestures are, and give you feedback as you make the selection.
They also provide the ability of “Reselection”[6], which means that as you’re making a gesture, you can change it in-flight and browse around to any of the items, in case you need to correct a mistake or change your mind, or just want to preview the effect or see the description of each item as you browse around the menu.
Compared to typical gesture recognition systems, like Palm’s Graffiti for example, you can think of the gesture space of all possible gestures between touching the screen, moving around through any possible path, then releasing: most gestures are invalid syntax errors, and only well-formed gestures are recognized.
There is no way to correct or abort a gesture once you start making it (other than scribbling, but that might be recognized as another undesired gesture!). Ideally each gesture should be as far away as possible from all other gestures in gesture space, to minimize the possibility of errors, but in practice they tend to be clumped (so “2” and “Z” are easily confused, while many other possible gestures are unused and wasted).
But with pie menus, only the direction between the touch and the release matters, not the path. All gestures are valid and distinct: there are no possible syntax errors, so none of gesture space is wasted. There’s a simple intuitive mapping of direction to selection that the user can understand (unlike the mysterious fuzzy black box of a handwriting recognizer), which gives you the ability to refine your selection by moving out further (to get more leverage), return to the center to cancel, and move around to correct and change the selection.
Pie menus also support “Rehearsal”[7] — the way a novice uses them is actually practice for the way an expert uses them, so they have a smooth learning curve. Contrast this with keyboard accelerators for linear menus: you pull down a linear menu with the mouse to learn the keyboard accelerators, but using the keyboard accelerators is a totally different action, so it’s not rehearsal.
Pie menu users tend to learn them in three stages: 1) novice pops up an unfamiliar menu, looks at all the items, moves in the direction of the desired item, and selects it. 2) intermediate remembers the direction of the item they want, pop up the menu and moves in that direction without hesitating (mousing ahead but not selecting), looks at the screen to make sure the desired item is selected, then clicks to select the item. 3) expert knows which direction the item they want is, and has confidence that they can reliably select it, so they just flick in the appropriate direction without even looking at the screen.
I am a fan of the conceptual clarity, but having to wait for my PC to shut down only to then flip a switch myself is not good UX. The absolute ideal would be the switch mechanically turning to off once the machine is off. Such switches exist, but they are expensive and require extra electronics to drive the electromagnetic part. A really good example of this UX principle is the motorized faders in digital audio mixers: you can move them by hand, but if you change to a different channel layout, the mixer can move the faders for you. The downside is mainly cost.
The cheap 80/20 solution for the PC is a momentary push-button and a green/red LED to display the current state. Holding for 5 s means power-off, because anything shorter risks accidental switch-offs - but this isn't obvious to the uninitiated.
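The hold-to-power-off scheme amounts to classifying a press by its duration. A minimal sketch, with illustrative thresholds (real firmware would additionally debounce the switch and drive the status LED from the resulting power state):

```python
def classify_press(hold_seconds: float,
                   tap_max: float = 0.5,
                   power_off_hold: float = 5.0) -> str:
    """Classify a momentary power-button press by how long it was held.
    Thresholds are hypothetical, for illustration only."""
    if hold_seconds >= power_off_hold:
        return "force-power-off"   # long hold: hard cut, guarded against accidents
    if hold_seconds <= tap_max:
        return "sleep-toggle"      # short tap: safe, reversible action
    return "ignored"               # in between: ambiguous, so do nothing

assert classify_press(0.2) == "sleep-toggle"
assert classify_press(6.0) == "force-power-off"
assert classify_press(2.0) == "ignored"
```

The dead band between the two thresholds is what makes accidental power-offs unlikely; the cost, as the comment notes, is that nothing in the physical interface reveals the 5-second rule to a new user.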
You don’t even know what features Bamboo has that would be nice to use - or ask someone else to use on your behalf - because if you don’t have permission it’s almost all hidden away.
Developer tools in particular and productivity tools in general need to leave everything out there for discoverability to function. Taking it away is a disservice to everyone.