The Unix-Haters Handbook (1994) [pdf]
Like:
1. You start and stop services with 'systemctl start/stop nginx'. But logs for that service can be read through an easy-to-remember 'journalctl -xeu nginx.service'. Why not 'systemctl logs nginx'? Nobody knows.
2. If you look at the built-in help for systemctl, the top-level options list things like `--firmware-setup` and `--image-policy`.
3. systemd unifies devices, mounts, and services into unit files with consistent syntax. Except where it doesn't. For example, there's a way to specify a retry policy for a regular service, but not for mount units. Why? Nobody knows.
(To be clear, I _like_ systemd. But it definitely follows the true Unix philosophy of being wildly internally inconsistent.)
1. `systemctl status nginx.service` suffices in many cases. journalctl is for when you need to dig deeper, and it demands many more options. You would have complained about "too noisy CLI arguments" if these were unified.
2. I am not sure how I should parse this. Do you mean there are too many arguments in total (2a), or that the man page or help message is not ordered correctly (2b)?
(2a). If you just care about services, you already know a handful of subcommands well (start, stop, enable, etc.) and just use those; the other args don't get in your way. For example, your everyday commands have safe, sane default options that you will not have to override 99% of the time.
Furthermore, this is much better than the alternative of having a dozen different utilities with non-trivial inter-utility interactions that have to be solved externally. Sometimes an application that does (just) one thing doesn't serve you well.
(2b). This is subjective (?). I have experienced a few week-long total internet outages (in Iran). I had to study the man pages and my offline resources in those contingencies, and have generally been (extremely) satisfied with the breadth, depth, and the organization of the systems docs. In the age of LLMs this is much less of a problem anyways. I think reading the man page of a well-known utility is not an everyday task, and for a one-off case you will grep the man page anyways.
3. Your point is ~valid. But automount exists for ephemeral resources. By default, we won't touch a failing drive without at least some precautions. So fail-fast and no retry is not always wrong. Perhaps it is virtue signaling ... On my PC I don't want to retry anything if a mount fails. In fact I might even want it to fail to boot so that the failure doesn't go undetected.
Also, for something as critical as mounting, I would probably want other "smart" behavior as well (exponential backoff for network, email, alert, DB fail-over, etc.) and these require specific application knowledge.
So ... they are trying to prevent a foot gun.
1. `systemctl status nginx.service` suffices in many cases. journalctl is for when you need to dig deeper, and it demands many more options. You would have complained about "too noisy CLI arguments" if these were unified.
I'm not at all a systemd hater (I think it was needed and it's nowadays a very solid piece of software) but the logs thing should be totally tweakable when viewing it from `systemctl status` and it is n.... [goes to check the man page]
-n, --lines=
When used with status, controls the number of journal lines to show, counting from the most recent ones. Takes a positive integer argument, or 0 to disable journal output. Defaults to 10.
Oooh, so TIL.
2. The options are not filtered, so useful options (like '--lines') get lost. E.g., what other options apply to "systemctl status"? The systemd documentation, in general, is a mess. It's good _reference_ documentation (like 'man') but not a good guide.
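In concrete terms (nginx.service is just an example unit; both flags are from the respective man pages, and these need a running systemd to actually try):

```
# show the unit's status with the last 50 journal lines inline
systemctl status --lines=50 nginx.service

# the deeper dive: the same lines via journalctl
journalctl -u nginx.service -n 50
```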
3. Network filesystems exist. And they can become unavailable for a time.
3. Network filesystems exist. And they can become unavailable for a time.
See[1]
the same applies to remote file system mounts. If you want them to be mounted only upon access, you will need to use the x-systemd.automount parameters. In addition, you can use the x-systemd.mount-timeout= option to specify how long systemd should wait for the mount command to finish. Also, the _netdev option ensures systemd understands that the mount is network dependent and orders it after the network is online. You may also specify an idle timeout for a mount with the x-systemd.idle-timeout flag.
[1] https://wiki.archlinux.org/title/Fstab#Automount_with_system...
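A sketch of such an entry in /etc/fstab, combining the options quoted above (the server name and paths are invented for illustration):

```
# hypothetical NFS mount: mounted on first access, ordered after the network,
# with a 30s mount timeout, unmounted again after 60s of inactivity
fileserver:/export/data  /mnt/data  nfs  _netdev,x-systemd.automount,x-systemd.mount-timeout=30,x-systemd.idle-timeout=60  0  0
```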
That is like opening the manual for your dishwasher and reading a section about how you may check the control board's conformal coating after the warranty has expired. Useful when you need it and have the repair skills, but a bad way to start a manual.
When I open a manual it's usually for: flags and argument ordering; argument format (for things like string formats or globbing). Some manuals are short enough to serve as a guide, but most assume domain knowledge.
What you want is a cheatsheet. And there are a lot on the internet, and even some tools that collect them. But most practitioners write shell aliases and functions.
To be fair, this could happen to any of us, especially early in career. But the real hubris is presuming that things are, as they are, without cause or reason. Along with never really knowing how things actually worked. Or why.
I envision a layperson (which is sort of understanding the author had of modern init systems, when starting on systemd). Said person walks up to a complex series of gears, and thinks a peg is just there for no reason, looks unused, and pulls it out. Only to have the whole mess go bananas. You can follow this logic with all of the half baked, partially implemented services like timekeeping, DNS, and others that barely work correctly, and go sideways if looked at funny.
I think if the author took their current knowledge, and this time wrote it from scratch, it could be far better.
However there still seems to be a chip on their shoulder, with an idea that "I'll fix Linux!" still, when in reality these fixes are just creating immense complication with very minimal upside. So any re-write would likely still be an over-complicated contraption.
Current areas include managing services on a server, managing a single-user laptop, and enterprise features for fleets of devices/users.
There is some overlap at the core where sharing code is useful, but it feels like way more complexity than needed gets shipped to my laptop. I wonder how much could be shaved off by focusing on only a single scenario.
That way you turn a very complex system into a set of much simpler artificial systems whose interactions you can control.
On your example, that would mean having different kinds of configuration options that go for each of those scenarios, but still all on the same software.
One can argue that systemd tries this (for example, there are many kinds of services). But in many cases, it does the complete opposite of reducing scope.
Still, I don't think init systems are a wicked problem (and so they don't need advanced solutions for managing complexity). The wickedness is caused by systemd's decision to do everything.
It evolved organically so it's a bit of a mess as a result, but it's the fate of most long-term projects (including Linux).
It's the least reliable init system I've ever used!
To give you some perspective, at that time, upstart was using ptrace() to detect the double-forking to allow services to be tracked.
Not to keep them running. Not to restart them. Not to track them.
I have logs, and monitoring software for that. I have loads of applications to do that, if I wish. But regardless of what you believe an init system is for, the reliability of it is separate from "keeping apps that are so crappy they crash, running".
The inconsistency comes from the author thinking "All this init stuff is ancient, and filled with absurd work arounds, hacks, and inconsistencies. I'll fix that!". Then as time passes discovering that "Oh wait, I should add a hack for this special case, and this one, and this one, guess these were really needed!" as bug reports come in over the years.
Don't forget the best one: "We don't support that uncommon use case, we will not accept nor maintain patches to support it, you shouldn't do it that way anyway, and we are going to make it impossible in the future" -- about something that's worked well for decades.
What I would like to see is something that is to systemd what PipeWire is to PulseAudio.
Before PulseAudio, getting audio to work properly was a struggle. PA introduced useful abstractions, but when it was rolled out it was a buggy mess. Eventually it got good over time. Then PipeWire came in, and it does more with less. The transition was so smooth, I did not even realize I had been running it for a while; one day I just noticed it in the update logs.
systemd now works well enough, but it would be nice to get rid of that accumulated cruft.
This is of course about tradeoffs and about the complexities of the problems you're solving, but his software is full of choices that only make sense if you prioritize elegant code over elegant software, only to then grow into something that is neither.
I think he's well suited for his new employer (Microsoft).
[1] (in German) https://cre.fm/cre209-das-linux-system
When people say "PulseAudio is not a broken mess anymore", what they really mean is "my audio driver is not a broken mess anymore".
I want to write a systemd haters handbook.
Why? systemd really fits the Unix-Haters Handbook. It is as anti-Unix as it can be (one command to rule them all, binary logs, etc.).
In the end it really seems that the mantra "GNU is not UNIX" is true. Just look at GNU/Linux: pulseaudio, systemd, polkit, wayland, the big, fat Linux kernel.
Opening up tens or hundreds of XML config files for resync was disgustingly slow. I've developed software on Maemo and Scratchbox; the I/O wait for on-device config changes was a real problem. So of course someone came up with a modified concept of the Windows registry: a single, binary-format config store with a suitably "easy" API. As a result you'd sacrifice write/update latency for the cases where you wanted to modify configurations and gain a much improved read/refresh latency when reading them back.
Of course that all broke down when reading a single config block required to read the entire freaking binary dump and the config storage itself was bigger than the block device cache. Turns out that if you give app developers a supposedly easy and low-friction mechanism to store app configs, their respective PMs would go wild and demand that everything is configurable. Multiply by tens, even low hundreds of apps, each registering an idle-loop callback to re-read their configs to guarantee they would always have the correct settings ready. A system intended to improve config load/read times ended up generating an increased demand for already constrained read I/O.
1. systemctl is the controller. Its job is to change and report on the state of units. journalctl is the query engine. Merging the query engine into the systemctl controller would make the controller bloated and complex, so a dedicated tool is the cleaner approach. I think you can also rip out the journal and use other tools if you so decide, making building logs into systemctl a bad idea.
2. systemd is a system manager, not just a service manager. It replaced not only the old init system but also a collection of other tools that managed the machine's core state.
3. A service runs a process, which can fail for many transient reasons. Trying again is a sensible and effective recovery strategy. A mount defines a state in the kernel. If it fails, it's almost always for a "hard" reason that an immediate retry won't fix. Retrying a failed mount would just waste time and spam logs during boot.
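The asymmetry is easy to see in unit-file form. A minimal sketch (the ExecStart path is made up); the [Service] directives below are real, but there is no [Mount] counterpart to them:

```ini
[Service]
ExecStart=/usr/local/bin/mydaemon
# retry policy: restart on failure, waiting 5s between attempts
Restart=on-failure
RestartSec=5s
```

A [Mount] section accepts What=, Where=, Type=, Options=, and so on, but no Restart=; a failed mount simply lands in the failed state.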
If you really want "everything is a file", that was fixed by the UNIX authors in Plan 9 and Inferno.
Object orientation is kind of "everything is a file" 2.0, in the form of "everything is an object".
That is why I love Plan 9. 9P serves you a tree of named objects that can be byte addressed. Those objects are on the other end of an RPC server that can run anywhere, on any machine, thanks to 9p being architecture agnostic. Those named objects could be memory, hardware devices, actual on-disk files, etc. Very flexible and simple architecture.
The real interesting magic behind Plan 9 is 9P and its VFS design, so that leaves Inferno with one thing going for it: Dis, its user-space VM. However, Dis does not protect memory, as it was developed for MMU-less embedded use. It implicitly trusts the programmer not to clobber other programs' memory. It is also hopelessly stuck in 32-bit land.
These days Inferno is not actively maintained by anyone. There are a few forks in various states and a few attempts to make inferno 64 bit but so far no one has succeeded. You can check: https://github.com/henesy/awesome-inferno
Alef was abandoned because they needed to build a compiler for each arch and they already had a full C compiler suite. So they took the ideas from Alef and made the thread(2) C library. If you're curious about the history of Alef and how it influenced thread(2), Limbo and Go: https://seh.dev/go-legacy/
These days Plan 9 is still alive and well in the form of 9front, an actively developed fork. I know a lot of the devs, and some of them daily drive their work via 9front running on actual hardware. I also daily drive 9front via drawterm to a physical CPU server that also serves DNS and DHCP, so my network is managed via ndb. Super simple to set up vs other clunky operating systems.
And lastly, I would like to see a better Inferno but it would be a lot of work. 64 bit support and memory protection would be key along with other languages. It would make a better drawterm and a good platform for web applications.
I would like to see a better Inferno but it would be a lot of work. 64 bit support and memory protection would be key along with other languages. It would make a better drawterm and a good platform for web applications.
Doesn't Wasm/WASI provide these same features already? That doesn't seem like "a lot of work", it's basically there already. Does dis add anything compelling when compared to that existing technology stack?
An Inferno built using WASM would be interesting. Though WASI would likely be supplanted by a Plan 9/Inferno interface, possibly with WASI compatibility. Instead of a hacked-up hypertext viewer you start with a real portable virtual OS that can run hosted or native. Then you build whatever you'd like on top, like HTML renderers, JS interpreters, media players/codecs, etc. Your profile is a user account, so you get security for free using the OS mechanisms. Would make a very interesting platform.
I know the history pretty well; I was around at the time, after all. Plan 9 gets more attention these days precisely because most UNIX heads usually ignore Inferno.
Unix, Plan 9 and the Lurking Smalltalk
https://www.humprog.org/~stephen/research/papers/kell19unix-...
Late binding is a bit out of fashion these days but it really brings a lot of cool benefits for composition.
UNIX Needs A True Integrated Environment: CASE Closed
http://www.bitsavers.org/pdf/xerox/parc/techReports/CSL-89-4...
For the TL;DR crowd:
"We 've painted a dim picture of what it takes to bring IPEs to UNIX. The problems of locating. user interfaces. system seamlessness. and incrementality are hard to solve for current UNIXes--but not impossible. One of the reasons so little attention has been paid to the needs of IPEs in UNIX is that UNIX had not had good examples of IPEs for inspiration. This is changing: for instance. one of this article's authors has helped to develop the Small talk IPE for UNIX (see the adjacent story). and two others of us are working to make the Cedar IPE available on UNIX.
What's more. new UNIX facilities. such as shared memory and lightweight processes (threads). go a long way toward enabling seamless integration. Of course. these features don't themselves deliver integration: that takes UNIX programmers shaping UNIX as they always have--in the context of a friendly and cooperative community. As more UNIX programmers come to know IPEs and their power. UNIX itself will inevitably evolve toward being a full IPE. And then UNIX programmers can have what Lisp and Small talk and Cedar programmers have had for many years: a truly comfortable place to program."
That was back in the mid-90s, but even today I still don't understand why network interfaces are treated differently than other devices.
Especially since there actually is a very useful thing that writing to /dev/eth0 would do: Put a raw frame on the wire, and reading from it would read raw frames.
Network packets don't need a destination address. Broadcast addresses exist. Also, packets to invalid/unknown destinations exist. You can send network packets with invalid source or destination addresses already anyway.
Taking a raw chunk of data and putting it on the wire as-is is the most logical interpretation of "writing to the ethernet device". Does it make sense to allow everyone to do that? Certainly not, that's why you restrict access to devices anyway.
The fact that not every chunk of data "makes sense" for every device in /dev is certainly nothing new, since that is the case for all other devices already (I mentioned a few in my post above).
Sometimes you don't even want TCP/IP on the wire. Heck, sometimes you maybe don't even want DIX Ethernet on the wire.
Anyway, this discussion is going nowhere. Handcrafting packets is possible (it's basically what the kernel does anyway), and sometimes it's useful; it would be helpful if a user-space program could just open /dev/eth0 and write its own handcrafted packets to that stream.
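For what it's worth, the "handcrafted packet" described here is just bytes in a fixed layout. A minimal sketch in Python (the MAC addresses and interface name are made up; actually sending requires CAP_NET_RAW, so the send itself is left commented out):

```python
import struct

def build_frame(dst_mac: bytes, src_mac: bytes, ethertype: int, payload: bytes) -> bytes:
    """Build a raw Ethernet II frame: 6-byte dst MAC, 6-byte src MAC,
    2-byte EtherType, then the payload."""
    return dst_mac + src_mac + struct.pack("!H", ethertype) + payload

frame = build_frame(
    b"\xff" * 6,                      # broadcast destination
    b"\x02\x00\x00\x00\x00\x01",      # locally administered source MAC (made up)
    0x88B5,                           # EtherType reserved for local experiments
    b"hello, wire",
)

# Putting this on the wire needs CAP_NET_RAW; conceptually it is the
# "write to /dev/eth0" described above:
#   s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
#   s.bind(("eth0", 0))
#   s.send(frame)
```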
Of course, this is meaningless, as you can't actually do any common operation, except maybe Close*, on all of them. So them being the same type is actually a hindrance, not a help - it makes it easier to accidentally pass a socket to a function that expects a file, and will fail badly when trying to, for example, seek() in it.
* to be fair, Windows actually has WaitForSingleObject / WaitForMultipleObjects as well, which I think does do something meaningful for any Handle. I don't think Linux has anything similar.
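The seek() pitfall is easy to demonstrate. A small sketch (Linux/POSIX semantics assumed): a socket is a perfectly good fd for read/write, but lseek() on it fails, and only at runtime, with ESPIPE:

```python
import errno
import os
import socket

# Two connected sockets; each one is "a file" in the fd sense.
a, b = socket.socketpair()
a.send(b"ping")
data = b.recv(4)          # read/write semantics work fine

# But a file-only operation like seek() fails only at runtime.
try:
    os.lseek(a.fileno(), 0, os.SEEK_SET)
    seek_errno = None
except OSError as e:
    seek_errno = e.errno   # ESPIPE: "Illegal seek"
```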
Of course, this is meaningless, as you can't actually do any common operation, except maybe Close*, on all of them.
You can write and read on anything on Unix that "is a file". You can't open or close all of them.
Annoyingly, files come in 2 flavors, and you are supposed to optimize your reads and writes differently.
It won't make sense to try to read from all things you can get a HANDLE to on Windows either, but it's up to what created the HANDLE/object as to what operations are valid.
https://learn.microsoft.com/en-us/windows/win32/sysinfo/kern...
The Unix-Haters Handbook (1994) [pdf] - https://news.ycombinator.com/item?id=40110729 - April 2024 (87 comments)
The Unix-Haters Handbook (1994) [pdf] - https://news.ycombinator.com/item?id=38464715 - Nov 2023 (139 comments)
The Unix-Haters Handbook (1994) [pdf] - https://news.ycombinator.com/item?id=31417690 - May 2022 (86 comments)
The Unix-Haters Handbook (1994) [pdf] - https://news.ycombinator.com/item?id=19416485 - March 2019 (157 comments)
The Unix-Haters Handbook (1994) [pdf] - https://news.ycombinator.com/item?id=13781815 - March 2017 (307 comments)
The Unix-Haters Handbook (1994) [pdf] - https://news.ycombinator.com/item?id=9976694 - July 2015 (5 comments)
The Unix Haters Handbook (1994) [pdf] - https://news.ycombinator.com/item?id=7726115 - May 2014 (50 comments)
Anti-foreword to the Unix haters handbook by dmr - https://news.ycombinator.com/item?id=3106271 - Oct 2011 (31 comments)
The Unix Haters Handbook - https://news.ycombinator.com/item?id=1272975 - April 2010 (28 comments)
The Unix Hater’s Handbook, Reconsidered - https://news.ycombinator.com/item?id=319773 - Sept 2008 (5 comments)
We often go to Germany, but this summer we went to Rügen. To get there we have to travel directly east; trips like that make you realize how close we were to the border, and thus to an authoritarian regime.
Many of the buildings still had bullet holes and it felt like you could touch history.
When you know where to look, you can still find the scars everywhere. Our church tower still has bullet holes from WWII.
e.g. It’s really interesting reading about LISP machines but no-one’s building a new one. Equally, all the criticism of sendmail and csh is valid but no-one uses them anymore either.
Most of the reliability criticisms have been addressed over the years but people are still trying to address the design of C, usually by replacing it. Equally, sh remains a problematic scripting language but at least it’s reliably there, unlike many of its many alternatives.
There are some people building new Lisp machines: https://opencores.org/projects/igor https://github.com/lisper/cpus-caddr https://interlisp.org/ http://pt.withington.org/publications/LispM.html http://pt.withington.org/publications/VLM.html https://github.com/dseagrav/ld http://www.aviduratas.de/lisp/lispmfpga/ https://groups.google.com/g/comp.lang.lisp/c/36_qKNErHAg https://frank-buss.de/lispcpu/
Also, Morello includes some Lisp-machine-like features. In my view knowing about the history of hardware architectures is far more important for designing new ones than for reproducing old ones.
I'm assuming you're using octal here. Myself, I haven't used octal since 03677.
:-)
I see you mentioned https://interlisp.org/ ; while it's not a Lisp machine, the Medley Interlisp Project aims to recreate the Interlisp environment that ran on Xerox D-machines up through the 1980s or so. Still very interesting.
It’s really interesting reading about LISP machines but no-one’s building a new one
There have been two open-source Lisp Machine OSes created in the last 10 or 15 years.
However, a big part of the power of the Symbolics/LMI machines was in the software itself (applications), and this is still proprietary code.
To reimplement the Lisp Machine applications would take quite a big effort.
Most of the benefit was pushing their interpreter into microcode, leaving more of the data bus free for actual data. Now we have ubiquitous icaches, which give you a pseudo-Harvard architecture when it comes to the core's external bandwidth.
Some of the benefit was having a separate core with its own microcode doing some of the garbage collection work. Now we have ubiquitous general multicore systems.
Etc.
Equally, sh remains a problematic scripting language but at least it’s reliably there
I too still have a hard copy of this from way back. This book was my introduction to Unix, as I shifted from programming for DOS/Windows/NT to SunOS, and later, Linux. Despite the many issues (humorously) exposed by this book, the one thing that hooked me is what that quote above implies: It was accessible, durable, and thus worth taking the time to learn, warts and all.
The EMACS hater handbook. Under a GFDL license, of course.
No multithreading, I/O locks under Gnus/eww, glacial slow email header parsing under Gnus, a huge badass file for RMAIL if you don't like Gnus (instead of parsing a Maildir), and so on.
No multithreading, I/O locks under Gnus/eww, glacial slow
All this would not happen if RMS had chosen Common Lisp to implement it...
If your spec is small but you have hundreds of megabytes of bloat, it means you're not even remotely documenting everything.
It's really well documented. But the standard, compared to Scheme's, is huge.
PS: 2.2MB as HTML text weighs nothing. You don't need images. It's 16MB uncompressed. More than 1500 items. People often forget how little plain text weighs.
Era-appropriate joking aside: There's no actual evidence that Cutler held the views on Unix, or even on DEC's Eunice, that have been ascribed to xem from anecdotes by Armando Stettner and edits to Wikipedia and writing by G. Pascal Zachary. I and others went into more detail on this years ago: https://news.ycombinator.com/item?id=22814012
https://retrocomputing.stackexchange.com/questions/14150/how...
[Cutler] expressed his low opinion of the Unix process input/output model by reciting "Get a byte, get a byte, get a byte byte byte" to the tune of the finale of Rossini's William Tell Overture.
- A well-defined HAL for portability
- An object manager for unified resource lifecycle and governance
- Asynchronous I/O by default
- User-facing APIs bundled into independent “personalities” and decoupled from the kernel
The only real black mark I’m aware of is the move of the graphics subsystem into the kernel for performance, which I don’t think was Cutler’s idea.
Here is my metaphor: your book is a pudding stuffed with apposite observations, many well-conceived. Like excrement, it contains enough undigested nuggets of nutrition to sustain life for some. But it is not a tasty pie: it reeks too much of contempt and of envy.
This was the best I could find as to its origins:
https://boards.straightdope.com/t/where-did-thats-mighty-whi...
I wonder why Dennis Ritchie was so infuriated, though. He criticizes them for wanting simple functionality, but not because language is a powerful tool for solving problems; rather, because it limits the potential of the platform to its functionality (which has been simplified and is in and of itself limited).
So this is confusing to me. Using language to solve problems is the advantage that Unix offers. But, neither the authors nor Dennis care about this? Or they do care in limited ways, but ultimately it's about something else?
A more likely candidate is the Kochan book but the original 1985 first edition. It had the scrappy sense of humor that characterized the Unix culture in the 80's.
https://www.goodreads.com/book/show/293206.UNIX_Shell_Progra...
It's my favorite OS.
And I like it for its fundamental process model.
That combined with stdin/out and pipes.
All stitched together with a process aware shell.
Lots (most) OSes had a process concept. But in Unix, they not only existed, they were everywhere, they were dynamic, and they were "cheap". They were user accessible. A process with its ubiquitous stdin/out interface gave us great composability. We can click the processes together like Legos.
For example, VMS had processes. But after 4 years of using it, I never tossed processes around using it like I did on Unix. I never "shelled out" of an editor. I never & something into the background. Just never came up. One terminal, one process.
On Unix, however, oh yea. Pipe constructs on the command line, bang out of the editor, :r! in vi. And the ecosystem that was created out of this simple concept. The "Unix Way(tm)".
And anything was a process. A C program. A shell script. At this level, everything was a "system language".
Then, they (those Unix wizard folks) made networking a well-behaved citizen in this process stdin/out world. `inetd` could turn ANYTHING (because everything had stdin/out) into a network server. This command is magic: `ls | cpio -ov | rsh otherhost cat > /dev/tape`
Does `ls` know anything about file archives? No. Does `cpio` know anything about networking, or tape drives? No. Heck, `cat` doesn't know anything about tape drives.
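The inetd glue is a single config line. A hypothetical /etc/inetd.conf entry (the service name and script path are invented for illustration; "myfilter" would also need a port assigned in /etc/services):

```
# service   type    proto  wait    user    server-program           args
myfilter    stream  tcp    nowait  nobody  /usr/local/bin/myfilter  myfilter
```

inetd runs the program with the TCP connection as its stdin/stdout, so any filter-style program becomes a network service unchanged.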
You just could not, back in the day, plumb processes and computers together trivially like you could with Unix. Humans could do this. They didn't need to be wizard status, "admins", guys in white coats locked in the raised floor rooms, huffing Halon on the side. Assuming you could grok the arcane syntaxes (which, absolutely, were legion), you could make Unix do amazing things.
This flexibility allowed me to make all sorts of Rube-Goldbergian constructs of data flows and processes. Unix has always been empowering, and not constraining, once you accept it for what it is.