Collapse OS
I feel these kinds of projects may be fun, but they're essentially just LARPing. Though this one seems to have a more reasonable concept than most (e.g. "My Raspberry Pi-based 'cyberdeck' with offline Wikipedia will help!" No, you need a fucking farming manual on paper.).
Some people doubt that computers will stay relevant after a civilizational collapse. I mostly agree. The drastic simplification of our society will mean that we have a lot less information to manage. We'll be a lot more busy with more pressing activities than processing data. However, the goal of Collapse OS is not to save computing, but electronics. Managing electricity will stay immensely useful.
I think that's a notable goal. We will conduct business with pen and paper, but given that even in a collapse scenario there will be lots of remnants of the current era lying around, we might as well try to make use of them. It's one of the reasons I really don't like the people collecting e-waste in bulk and then melting down the chips for tiny scraps of gold. So much effort went into making those chips; there has got to be a better way to preserve these things for use later.
Things supported by this OS, like the Z80, 8086, 6502, etc., use around 5-10 W. Using simple parts to control complicated machines is a standard operation, and even advanced electronics tend to use a lot of parts made with older manufacturing techniques because it's more efficient to keep the old processes running.
Here's a fun article with some context about old processors like this still in production: https://hackaday.com/2022/12/01/ask-hackaday-when-it-comes-t...
5 watts will drain a 100-amp-hour car battery in 10 days and is basically infeasible to get from improvised batteries made with common solid metals. Current mainstream microcontrollers like an ATSAMD20 are not only much nicer to program but can use under 20 μW, hundreds of thousands of times less. A CR2032 coin cell (220 mAh, 3 V) can provide 20 μW for about 4 years. But the most the coin cell can provide at all is about 500 μW, so to run a 5-watt computer you'd need 10,000 coin cells. Totally impractical.
And batteries are a huge source of unreliability. What if you make your computing device more reliable by eliminating the battery? Then you need a way to power it, perhaps winding a watchspring or charging a supercapacitor. Consider winding up such a device by pulling a cord like the one you'd use to start a chainsaw. That's about 100 newtons over about a meter, so 100 joules. That energy will run a 5W Z80 machine for 20 seconds, so you have to yank the cord three times a minute, or more because of friction. That yank will run a 20 μW microcontroller for two months.
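For the curious, here's a quick back-of-the-envelope check of those figures in Python (assuming a 12 V car battery and nominal coin-cell ratings; all numbers are rough):

```python
# Rough sanity check of the power-budget figures above.
# Assumptions: 12 V lead-acid car battery, CR2032 at 220 mAh / 3 V nominal.

car_battery_j = 100 * 3600 * 12            # 100 Ah at 12 V, in joules
print(car_battery_j / 5 / 86400)           # days a 5 W load runs: ~10

coin_cell_j = 0.220 * 3600 * 3             # 220 mAh at 3 V: ~2400 J
print(coin_cell_j / 20e-6 / 86400 / 365)   # years at 20 uW: ~3.8
print(5 / 500e-6)                          # coin cells needed for 5 W: 10,000
print(5 / 20e-6)                           # 5 W vs 20 uW: ~250,000x

cord_pull_j = 100 * 1.0                    # ~100 N over ~1 m of cord
print(cord_pull_j / 5)                     # seconds of 5 W runtime: 20
print(cord_pull_j / 20e-6 / 86400)         # days of 20 uW runtime: ~58 (two months)
```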
What cases are you thinking of when you say "Plenty of people are still using technology developed thousands of years ago even when there are modern alternatives that are thousands of times more efficient"? I considered hand sewing, cultivation with digging sticks instead of tractors, cooking over wood fires, walking, execution by stoning, handwriting, and several other possibilities, but none of them fit your description. In most cases the modern alternatives are less efficient but easier to use, but in every case I can think of where the efficiency ratio reaches a thousand or more in favor of the new technology, the thousands-of-years-old technology is abandoned, except by tiny minorities who are either impoverished or deliberately engaging in creative anachronisms.
I don't think "the relative simplicity of old technology" is a good argument for attempting to control your tractor with a Z80 instead of an ATSAMD20. You have to hook up the Z80 to external memory chips (both RAM and ROM) and an external clock crystal, supply it with 5 volts (regulated with, I think, ±2% precision), provide it with much more current (which means bypassing it with bigger capacitors, which pushes you towards scarcer, shorter-lived, less-reliable electrolytics), and program it in assembly language or Forth. The ATSAMD20 has RAM, ROM, and clock on chip and can run on anywhere from 1.62 to 3.63 volts, and you can program it in C or MicroPython. (C compilers for the Z80 do exist but for most tasks performance is prohibitively poor.) You can regulate the ATSAMD20's voltage adequately with a couple of LEDs and a resistor, or in many cases just a resistor divider consisting of a pencil lead or a potentiometer.
It would be pragmatically useful to use a Z80 if you have an existing Z80 codebase, or if you're familiar with the Z80 but not anything current, or if you have Z80 documentation but not documentation for anything current, or if you can get a Z80 but not anything current. (One particular case of this last is if the microcontrollers you have access to are all mask-programmed and don't have an "external access" pin like the 8048, 8051, and 80C196 family to force them to execute code from external memory. In that case the fact that the Z80 has no built-in code memory is an advantage instead of a disadvantage. But, if you can get Flash-programmed microcontrollers, you can generally reprogram their Flash.)
Incidentally, the Z80 itself "only" uses about 500 milliwatts, and there are Z80 clones that run on somewhat less power and require less extensive external supporting circuitry. (Boston Scientific's pacemakers run on a Z80 softcore in an FPGA, for example, so they don't have to take the risk of writing new firmware.) But the Z80's other drawbacks remain.
The types of things I had in mind are old techniques that people use for processing materials, like running a primitive forge or extracting energy from burning plant material or manual labor. What's the energy efficiency difference between generating electricity with a hand crank vs. a nuclear reactor? Even if you take into account all the inputs it takes to build and run the reactor, the overall output to input energy ratio is much higher, but it relies on a lot of infrastructure to get to that point. The type of efficiency I'm thinking of is precisely the energy required to maintain and run something vs. the work you get out of it.
In the same way, while old computers are much less efficient, models like these that have been manufactured for decades and exist all over might end up being a better fit in some cases, even with less efficiency. I can appreciate that the integration of components in newer machines like the ATSAMD20 can reduce complexity in many ways, but projects like CollapseOS are specifically meant to create code that can handle low-level complexity and make these things easier to use and maintain.
The Z80 voltage spec is 5 V ±5%, so right around what you were thinking. Considering the precision required for voltage regulation is smart, but if you were having to replace crystals, they are simple and low frequency, 2-16 MHz, and lots have been produced; once again, the fact that it uses parts that have been produced for decades and widely distributed may be an advantage.
Your point about documentation is a good one. It does require more complicated programming, but there are plenty of paper books out there (also digitally archived) that in many situations might be easier to locate because they have been so widely distributed over time. If I look at archive.org for ATSAMD20 I come up empty, but Z80 gives me tons of results like this: https://archive.org/details/Programming_the_Z-80_2nd_Edition...
Anyway, thank you again for taking so much time to respond so thoughtfully. You make great points, but I'm still convinced that it's worthwhile to make old hardware useful and resilient in situations where people have limited access to resources, people who may still want to deploy some forms of automation using what's available.
Projects like this one will hopefully never be used for their intended purpose, but they may form a basis for other interesting uses of technology and finding ways to take advantage of available computing resources even as machines become more complicated.
What's the energy efficiency difference between generating electricity with a hand crank vs. a nuclear reactor?
A hand crank is about 95% efficient. An electromechanical generator is about 90% efficient. Your muscles are about 25% efficient. Putting it together, the energy efficiency of generating electricity with a hand crank is about 21%. Nuclear reactors are about 40% efficient, though that goes down to about 4% if you include the energy cost of building the power plant, enriching the fuel, etc. The advantages of the nuclear reactor are that it's more convenient (requiring less human attention per joule) and that it can be fueled by uranium rather than potatoes.
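As a sketch, chaining those rough estimates (the percentages are the figures quoted above, not precise measurements):

```python
# Chaining the rough efficiency estimates: food energy -> electricity by hand crank.
muscle, crank, generator = 0.25, 0.95, 0.90
hand_crank = muscle * crank * generator
print(hand_crank)                    # ~0.21

nuclear_thermal = 0.40               # plant thermal-to-electric efficiency
print(nuclear_thermal / hand_crank)  # only about 2x better, as noted below
```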
Even if you take into account all the inputs it takes to build and run the reactor, the overall output to input energy ratio is much higher. (...) The type of efficiency I'm thinking of is precisely the energy required to maintain and run something vs. the work you get out of it.
The term for that ratio, which I guess is a sort of efficiency, is "ERoEI" or "EROI". https://en.wikipedia.org/wiki/Energy_return_on_investment#Nu... says nuclear power plants have ERoEI of 20–81 (that is, 20 to 81 joules of output for every joule of input, an "efficiency" of 2000% to 8100%). A hand crank is fueled by people eating biomass and doing work at energy efficiencies within about a factor of 2 of the best power plants. Biomass ERoEI varies but is generally estimated to be in the range of 3–30. So ERoEI might improve by a factor of 30 or so at best (≈81 ÷ 3) in going from hand crank to nuclear, and possibly get slightly worse. It definitely doesn't change by factors of a thousand or more.
Even if it were, I don't think hand-crank-generated electricity is used by "plenty of people".
projects like CollapseOS are specifically meant to create code that can handle low-level complexity and make these things easier to use and maintain.
I don't think CollapseOS really helps you with debugging the EMI on your RAM bus or reducing your power-supply ripple, and I don't think "ease of use" is one of its major goals. Anti-goals, maybe. Hopefully Virgil will correct me on that if he disagrees.
if you were having to replace crystals, they are simple and low frequency, 2-16 MHz, and lots have been produced, and once again the fact that it uses parts that have been produced for decades and widely distributed may be an advantage.
I don't think a widely-distributed crystal makes assembly or maintenance easier than using an on-chip RC oscillator instead of a crystal. It does have real advantages for timing precision, but you can use an external crystal with most modern microcontrollers just as easily as with a Z80, the only drawback being that the cheaper ones are rather short on pins. Sacrificing two pins of a 6-pin ATTiny13 to your clock really reduces its usefulness by a lot.
If I look at archive.org for ATSAMD20 I come up empty, but Z80 gives me tons of results like...
Oh, that's because you're looking for the part number rather than the CPU architecture. If you don't know that the ATSAMD20 is a Cortex-M0(+) running the ARM Thumb1 instruction set, you are going to have a difficult time programming it, because you won't know how to set up your C compiler.
There is in fact enormously more information available for how to program in 32-bit ARM assembly than in Z80 assembly, because it's the architecture used by the Acorn, the Newton, the Raspberry Pi, almost every Android phone ever made, and old iPhones. See my forthcoming sibling comment for information about ARM programming.
Aside from being a much better compilation target for high-level languages like C, ARM assembly is much, much easier than Z80 assembly. And embedded ARMs support a debugging interface called OCD which dramatically simplifies the task of debugging broken firmware.
models like [Z80s and 6502s] that have been manufactured for decades and exist all over might end up being a better fit
There are definitely situations where Z80s or 6502s, or entire computers already containing them, are more easily available than current ARM microcontrollers. (For example, if you're at my cousin's house—he's a collector of obsolete computers.) However, it's difficult to overstate how much more ubiquitous ARM microcontrollers are. The heyday of the Z80 and 6502 ended in about 01985, at which point a computer using one still cost about US$2000 and only a few million such computers were sold per year. The most popular 6502 machine was the Commodore 64, whose total lifetime production was 12 million units. The most popular 8080-family machine (supporting a few Z80 instructions) was probably the Gameboy, with 119 million units. We can probably round up the total of deployed 8080 and 6502 family machines to 1 billion, most of which are now in landfills.
By contrast, we find ARMs in things like not just the Gameboy Advance but the Anker PowerPort Atom PD 2 USB-C charger http://web.archive.org/web/20250101181745/https://forresthel... and disposable vapes https://ripitapart.com/2024/04/20/dispo-adventures-episode-1... https://old.reddit.com/r/embedded/comments/1e6iz4a/chinese_c... — and, as of 02021, ARM tells us 200 billion ARMs had been shipped https://newsroom.arm.com/blog/200bn-arm-chips and were then being produced at 900 ARMs per second.
That means about as many ARMs were being produced every two weeks as 8080 and 6502 machines in history, a speed of production which has probably only accelerated since then. Most of those are embedded microcontrollers, and I think that most of those microcontrollers are reflashable.
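A quick check of that rate, taking the 900-per-second figure at face value:

```python
# 900 ARMs per second, accumulated over two weeks, vs. the ~1 billion
# 8080- and 6502-family machines ever built (both figures from above).
arms = 900 * 14 * 86400
print(arms / 1e9)        # ~1.09 billion ARMs in two weeks
print(arms / 1e9 >= 1)   # i.e., roughly the whole historical 8-bit fleet
```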
Other microcontroller architectures like the AVR are also both more pleasant to program and more abundant than Z80s and 6502s. They also feature simpler and more consistent sets of peripherals than typical Z80 and 6502 machines, in part because the CPU itself is so fast that a lot of the work these obsolete chips need special-purpose hardware for can instead be done in software.
So, I think that, if you want something useful and resilient in situations where people have limited access to resources, people who may still want to deploy some forms of automation using what's available, you should focus on ARM microcontrollers. Z80s and 6502s are rarely available, much less useful, fragile rather than resilient, inflexible, and unnecessarily difficult to use.
About the return on investment, the methodology is interesting, and I’m surprised that a hand crank to nuclear would increase so little in efficiency. But although the direct comparison of EROI might be small, I wonder about this part from that article:
It is in part for these fully encompassed systems reasons, that in the conclusions of Murphy and Hall's paper in 2010, an EROI of 5 by their extended methodology is considered necessary to reach the minimum threshold of sustainability,[22] while a value of 12–13 by Hall's methodology is considered the minimum value necessary for technological progress and a society supporting high art.
So different values of EROI can yield vastly different civilizational results, the difference between base sustainability and a society with high art and technology. The direct energy outputs might not be thousands of times different, but the information output of different EROI levels could be considered thousands of times different. Without a massive efficiency increase, society over the last few thousand years got much more complex in its output. I’m not trying to change terms here just to win an argument but trying to qualify the final results of different capacities of harnessing energy and technology.
I think this gets to the heart of the different arguments we’re making. I’m not in any way arguing that these old architectures are more common in total quantity than ARM. That difference in production is only going to increase. I wouldn’t have known the specific difference, but your data is great for understanding the scope.
My argument is that projects meant to make technology that has been manufactured for a long period of time and has been widely distributed more useful and sustainable are worthwhile, even when we have more common and efficient alternatives. This doesn’t in any way contradict your point about ARM architecture being more common or useful, and I’d be fully in favor of someone extending this kind of project to ARM.
In response to some of the other points: using an external crystal is just an example of how you could use available parts to maintain the Z80 if it needed fixing but you had limited resources. In overall terms, it might be easier to throw away an ARM microcontroller and find 100 replacements for it than even trying to use an external crystal for either one, but again I’m not saying it’s a specific advantage to the Z80 that you could attach a common crystal, just something that might happen in a resource-constrained situation using available parts. Better than the kid in Snowpiercer sitting and spinning the broken train parts at least.
Also, let me clarify the archive.org part. I wasn’t trying to demonstrate the best process for getting info. I just picked that because they have lots of scanned books to simulate someone who needed to look up how to program a part they found. I know it’s using ARM, but the reason I mentioned that had to do with the distribution of paper books on the subject and how they’re organized. The book I linked to starts with very basic concepts for someone who has never programmed before and moves quickly into the Z80, all in one old book, because it was printed in a simpler time when no prior knowledge was assumed.
There are plenty of paper books on ARM too, and probably easier to find, but now that architectures are becoming more complicated, you’re more likely to find sources online that require access to a specific server and have specialized information requiring a certain familiarity with programming and the tools needed for it. More is assumed of the reader.
If you were able to find that one book, you could probably get pretty far in using the Z80 without any familiarity with complex tools. Again, ARM is of course popular and well-documented, but the old Z80 stuff is still out there and simple enough to understand and even analyze with your bare eyes in more detail than you could analyze an ARM microcontroller without some very specific tools.
So all that info about ARM is excellent, but this isn’t necessarily a competition. It’s someone’s passion project who chose a few old, simple, and still-in-production technologies to develop a resilient and translatable operating system for. It makes sense to start with the earlier technology because it’s simpler and less proprietary, but it would also make sense to extend it to modern architectures like ARM or RISC-V. I wouldn’t be surprised if sometime in the future some person or AI did just that. This project just serves as a nice starting point for an idea on resilient electronics.
though that goes down to about 4% if you include the energy cost of building the power plant, enriching the fuel, etc.
Rereading this, I don't know in what sense it could be true.
What I was thinking of was that the cost of energy from a nuclear power plant is on the order of ten times as many dollars as the cost of the fuel, largely as a result of the costs of building it, which represents a sort of inefficiency. However, what's being consumed inefficiently there isn't energy; it's things like concrete, steel, human attention, bulldozer time, human lives, etc., collectively "money".
If, as implied by my 4% figure, what was being consumed by the plant construction were actually 22.5x as much energy as comes out of the plant over its lifetime, rather than money, its ERoEI would be about 0.044. It would require the lifetime output of twenty or thirty 100-megawatt power plants to construct a single 100-megawatt nuclear power plant. That is not the case. In fact, as I explained later down in the same comment, the ERoEI of nuclear energy is generally accepted to be in the range of about 10 to 100.
Searching the Archive instead for [arm thumb programming] I find https://archive.org/details/armassemblylangu0000muha https://archive.org/details/digitaldesigncom0000harr_f4w3 https://archive.org/details/armassemblyforem0000lewi https://archive.org/details/SCE-ARMref-Jul1996 (freely available!) https://archive.org/details/armassemblylangu0000hohl https://archive.org/details/armsystemarchite0000furb https://archive.org/details/learningcomputer0000upto https://archive.org/details/raspberrypiuserg0000upto_i5z7 etc.
But the Archive isn't the best place to look. The most compact guide to ARM assembly language I've found is chapter 2 of "Archimedes Operating System: A Dabhand Guide" https://www.pagetable.com/docs/Archimedes%20Operating%20Syst..., which is 13 pages, though it doesn't cover Thumb and more recently introduced instructions. Also worth mentioning is the VLSI Inc. datasheet for the ARM3/VL86C020 https://www.chiark.greenend.org.uk/~theom/riscos/docs/ARM3-d... sections 1 to 3 (pp. 1-3 (7/56) to 3-67 (45/56)), though it doesn't cover Thumb and also includes some stuff that's not true of more recent processors. These are basically reference material like the ARM architectural reference manual I linked above from the Archive; learning how to program the CPU from them would be a great challenge.
There's a lovely short tutorial at https://www.coranac.com/tonc/text/asm.htm as well (43 pages), and another at https://www.mikrocontroller.net/articles/ARM-ASM-Tutorial (109 pages). And https://azeria-labs.com/writing-arm-assembly-part-1/ et seq. is probably the most popular ARM tutorial. None of these is as well written as Raymond Chen's introductory Thumb material: https://devblogs.microsoft.com/oldnewthing/20210615-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210616-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210617-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210625-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210624-46/?p=10... https://devblogs.microsoft.com/oldnewthing/20210531-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210601-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210602-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210603-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210604-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210607-00/?p=10... https://devblogs.microsoft.com/oldnewthing/20210608-00/?p=10... (I'd link an index page but I couldn't find one.) Chen covers most of the pragmatics of using the Thumb instruction set well.
There's an ARM Thumb assembler in μLisp (which can itself run on embedded ARMs) at https://github.com/technoblogy/lisp-arm-assembler, which of course explains all the instruction encodings, documented at http://forum.ulisp.com/t/an-arm-assembler-written-in-lisp/12.... Lots of free software already runs on the chip, including FreeRTOS.
https://mcuoneclipse.com/2016/08/14/arm-cortex-m-interrupts-... covers the Cortex-M interrupt system, and lcamtuf has written an excellent tutorial for getting the related ATSAMS70J21 up and running https://lcamtuf.substack.com/p/mcu-land-part-3-baby-steps-wi....
Stack Overflow has 12641 questions tagged [arm] https://stackoverflow.com/questions/tagged/arm, as opposed to 197 for [z80]. Most of these are included in the Kiwix ZIM files of SO like https://download.kiwix.org/zim/stack_exchange/stackoverflow.... (see https://library.kiwix.org/?lang=eng&q=&category=stack_exchan...).
There are a bazillion Z80s and 8051s, and many of them are in convenient packages like DIP. You can probably scavenge some from your nearest landfill using a butane torch to desolder them from some defunct electronics.
In contrast, there are a trillion flavours of modern MCUs, not all drop-in interchangeable. If your code and tooling are designed for an ATSAMD20, great, but I only have a bag of CH32V305s. Moreover, you're moving towards finer pitches and more complex mounting: going from DIP to TSSOP to BGA, I'd expect every level represents a significant dropoff in how many devices can be successfully removed and remounted by low-skill scavengers.
I suppose the calculus is different if you're designing for "scavenge parts from old games consoles" versus proactively preparing a hermetically sealed "care package" of parts pre-selected for maximum usability.
On the other hand, old microcontrollers are a lot more likely to be mask-programmed or OTP PROM programmed, and most of them don't have an EA pin. And they have a dizzying array of NIH instruction sets and weird debugging protocols, or, often, no debugging protocol ("buy an ICE, you cheapskate"). And they're likely to have really low speeds and tiny memory.
Most current microcontrollers use Flash, and most of them are ARMs supporting OCD. A lot of others support JTAG or UPDI. And SMD parts can usually be salvaged by either hot air or heating the board up on a hotplate and then banging it on a bucket of water. Some people use butane torches to heat the PCB but when I tried that my lungs were unhappy for the rest of the day.
I was excited to learn recently that current Lattice iCE40 FPGAs have the equivalent of the 8051's EA pin. If you hold the SPI_SS pin low at startup (or reset) it quietly waits for an SPI master to load a configuration into it over SPI, ignoring its nonvolatile configuration memory. And most other FPGAs always load their configuration from a serial Flash chip.
The biggest thing favoring recent chips for salvage, though, is just that they outnumber the obsolete ones by maybe 100 to 1. People are putting 48-megahertz reflashable 32-bit ARMs in disposable vapes and USB chargers. It's just unbelievable.
In terms of hoarding "care packages", there is probably a sweet spot of diversity. I don't think you gain much from architectural diversity, so you should probably standardize on either Thumb1 ARM or RISC-V. But there are some tradeoffs around things like power consumption, compute power, RAM size, available peripherals, floating point, GPIO count, physical size, and cost, that suggest that you probably want to stock at least a few different part numbers. But more part numbers means more pinouts, more errata, more board designs, etc.
Even if your objectives are humbler than "rebooting civilization" (an objective I think Virgil opposes), you might still want to, for example, predict the weather, communicate with faraway family members, automatically irrigate plants and build other automatic control systems, do engineering and surveying calculations, encrypt communications, learn prices in markets that are more than a day's travel away, hold and transmit cryptocurrencies, search databases, record and play back music and voice conversations, tell time, set an alarm, carry around photographs and books in a compact form, and duplicate them.
Even a washing-machine microcontroller is enormously more capable of these tasks than an unaided human, though, for tasks requiring bulk data storage, it would need some kind of storage medium such as an SD card.
Other kinds of very-low-bit-rate telecommunications messages that are still extremely valuable:
Lucretia gravely ill. Hurry.
I-44 mile 451: bandits.
Corn $55 at Salem.
Trump died.
Springfield captured.
General Taylor signed ceasefire.
Livingstone found alive.
The first of these inspired Morse to invent the telegraph; she died before the mail reached him. None of them are over 500 bits even in ASCII, and probably each could be encoded in under 100 bits with some attention to coding, some much less. 100 bits over, say, 2 hours, requires a channel capacity of 0.014 bits per second.
Even without advanced compression algorithms, you could easily imagine the corn message being, say, "<figs>!$05000<ltrs>ZCSXV" in ITA2 "Baudot" code: 14 5-bit characters, 70 bits.
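A small sketch of the bit-budget arithmetic (the actual ITA2 code table is omitted; only the character count matters here):

```python
# ITA2 uses 5 bits per character; the <figs>/<ltrs> shifts each cost one character.
n_chars = 1 + len("!$05000") + 1 + len("ZCSXV")   # figs + "!$05000" + ltrs + "ZCSXV"
bits = 5 * n_chars
print(n_chars, bits)        # 14 characters, 70 bits

seconds = 2 * 3600
print(100 / seconds)        # ~0.014 bit/s for a 100-bit message over 2 hours
print(bits / seconds)       # ~0.0097 bit/s for the 70-bit encoding
```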
Information theory shows that there's no such thing as being out of communication range; it's just a question of what the bit rate of the channel is. But reducing that to practice requires digital signal processing, which is many orders of magnitude more difficult if you are doing it with pencil and paper. It also benefits greatly from precise timekeeping, which quartz resonator crystals make cheap, reliable, robust, and lightweight.
Encryption is another case where an amount of computation that is small for a microcontroller can be very valuable, even if you have to transmit the encrypted message by carving it into wood with your stone knife.
The Bitcoin blockchain in its current form requires higher bandwidth than a weather station network, but still a tiny amount by current internet standards, about 12kbps originally, I think about 26kbps with segwit. Bitcoin (or an alternative with a longer block time) could potentially provide a way to transmit not just prices but actual payments under adverse circumstances. It does require that participants have enough computing power to sign transactions; I think it should be relatively resilient against imbalances of computation power among participants, as long as no 51% attack becomes feasible through collusion.
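A rough check of where those bandwidth numbers come from, assuming ~1 MB blocks every 10 minutes pre-segwit and roughly double the effective size with segwit (both assumptions on my part):

```python
# Blockchain bandwidth: block size spread over the 10-minute block interval.
block_bytes = 1_000_000
interval_s = 600
print(block_bytes * 8 / interval_s / 1000)      # ~13 kbit/s
print(2 * block_bytes * 8 / interval_s / 1000)  # ~27 kbit/s with larger blocks
```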
In an environment where there isn't a world hegemon to run something like the post-Bretton-Woods system, international payments, if they are to happen at all, need to be settled somehow. The approach used for the 3000 years up to and including the Bretton Woods period was shipping gold and silver across oceans in boats. Before that, the Mediterranean international economy was apparently a gift economy, while intranational trade in Mesopotamia used clay bills of deposit.
In a hypothetical post-collapse future without a Singularity, there may be much less international trade. But I hope it's obvious that international trade covers a spectrum from slightly advantageous to overwhelmingly advantageous, so it is unlikely to disappear altogether. And Bitcoin has overwhelming advantages over ocean shipping of precious metals. For example, it can't be stolen in transit or lost in a shipwreck, and the latency of a payment is about half an hour rather than six weeks.
And all the blockchain requires to stay alive is about 26 kilobits per second of bisection bandwidth.
Also, see https://news.ycombinator.com/item?id=43487785 for a list of other end-uses for which even a microcontroller would provide an enormous advantage over no electronics at all.
Did your grandpa do it at the same scale, speed, and with the same accuracy as modern day weather forecasting?
No. But if you're in a "post collapse" situation, building 8-bit computers from scavenged parts to run FORTH, you aren't either.
Our descendants will perhaps not be thrilled to learn about the "time bombs" we have left them, steadily inching into aquifers or up into surface water. That is, of course, if they are not too distracted locating water of even dubious potability to care.
Crude automated [solar powered] farm machines - would probably be useless. Animal power is probably the way to go. Or steam. Go to a threshing bee sometime: I've seen a tractor that runs on wood.
Solar powered ebook readers to store those farming manuals and other guides - the life span of batteries would be short, get that shit on paper fast.
Solar powered computers for spreadsheets and planning software - plan in your head or use a paper spreadsheet.
Computers might be everywhere today, and you personally might not know how to do anything without a computer, but practically no one had a computer 40-45 years ago, and literally no one had a computer when society was last at a "collapse" level of technology. COMPUTERS ARE NOT NECESSARY.
The point of having computers is simply that they perform certain tasks orders of magnitude faster than humans. They're a tool, no more and no less. Before computers, a "calculator" was a person with paper and a slide rule, and you needed hundreds of them to do something like compute artillery trajectories, army logistics, machine tool curves, explosive lensing, sending rockets into space, etc. Managing to keep just one solar-powered calculator working for 10 years after a collapse frees up all those people to do things like farming. Keeping a solar-powered electric tractor working frees up all those farmers, and frees up the animals for eating.
IMHO this project is at least operating under the right principles, i.e. make the software work on scavenged parts, control your dependencies, be efficient with your computations, focus on things you can't do with humans.
Just because computers weren't around 40-50 years ago doesn't mean that computers won't be very handy to have around in a post-collapse world.
They won't be handy to you or me, because we'd need to put our efforts into securing basic needs. You won't have the luxury to spend your time scavenging parts to build an 8-bit computer to do anything, then spending a bunch more time programming it. Even if you did, how would it give you an advantage in acquiring food, shelter, or fuel over much simpler solutions using more basic technologies like paper?
Computers are the kind of thing people with a lot of surplus food spend their time with.
The point of having computers is simply that they perform certain tasks orders of magnitude faster than humans. They're a tool, no more and no less. Before computers, a "calculator" was a person with paper and a slide rule, and you needed hundreds of them to do something like compute artillery trajectories, army logistics, machine tool curves, explosive lensing, sending rockets into space, etc.
Computers are useful for those tasks, but those are tasks only giant organizations like governments need to do. That's not you in a post-collapse world.
Managing to keep just one solar-powered calculator working for 10 years after a collapse frees up all those people to do things like farming.
I think you have that backwards. No one's going to skip needed farming work and starve so they can go compute artillery trajectories. If they need to farm, they'll go without the artillery computations.
Keeping a solar-powered electric tractor working frees up all those farmers, and frees up the animals for eating.
I address that up-thread, but solar-powered electric tractors are a fantasy. Even if such a thing existed, it would wear out, break down, and become irreparable long before technological civilization could be rebooted, so you might as well assume it doesn't exist in your planning.
Also, I don't think you're thinking things through: an animal can both be used to do work and (later) be eaten. If you're very poor, which you would be after some kind of civilization collapse, you don't eat animals very often.
The point, as with every capital investment, is to make more efficient the labor of the people who are securing those basic needs, so that you can free them up for work progressively higher on the value chain.
During the collapse itself, the way to do this is pretty easy: you kill the people who have food, shelter, or fuel but are not aligned with you, and give it to people who are aligned with you. And then once you have gotten everyone aligned with you, you increase the efficiency of the people who are doing the work. Saving even just one working tractor can cut the labor needed to farm enough to support a village from several hundred people down to one or two. You will not have petrol in a post-collapse world, so better hope it's an electric tractor, or drop a scavenged electric motor + EV battery into an existing tractor. Use scavenged solar panels for power, there's plenty of that where I am.
All this requires that you know how things work, so you can trace out what to connect to what and repurpose electronic controls and open up the innards of the stuff you find abandoned on the street, and that's where having a computer and a lot of downloaded datasheets and physical/electronic/mechanical/chemical principles available will help.
Having a government in a box when everyone around you is scrounging for food makes you king, particularly if you also managed to save a couple militarized drones through the collapse. That's a pretty enviable position to be in.
Come the fuck on. A fucking 8-bit computer (even a fucking 64-bit computer) is not a fucking "government in a box." And where the fuck are you going to get your "couple militarized drones"? Assuming they're not suicide drones (where "a couple" is not much), how long will they last? How useless would they be without spare parts, maintenance, and ammunition?
We live in the fucking real world, not some videogame where you can find a goddamn robot in a cave still functioning after 500 years and lethal enough for a boss-battle.
You will not have petrol in a post-collapse world, so better hope it's an electric tractor, or drop a scavenged electric motor + EV battery into an existing tractor. Use scavenged solar panels for power, there's plenty of that where I am.
Look: if they don't have petrol, they won't have battery factories either. Batteries wear out. Your fantasy electric tractor will be just as useless as a petrol one in short order.
I may not need automatic accounting and inventory via spreadsheets at a small scale, but being able to model the next 3 days of weather based on local conditions without any expectation of online communications could come in pretty handy
Alright. You have a computer in your possession that is vastly more powerful than an 8-bit machine built from scavenged parts.
1. Do you actually "model the next 3 days of weather based on local conditions without any expectation of online communications" with it?
2. If not, do you know how to build the required sensor suite and write the software to do that?
I feel like you're misunderstanding computers as magic boxes that can do some useful thing with little effort. But this is supposed to be a forum of software engineers; building a weather forecasting system would be hard to do even with full access to a university library, Digikey, and the money to work on it full time. But we're talking about doing it with scavenged components while you're hungry and looking for food.
Our modern institutions and infrastructure depend on impossibly-complex, precariously-fragile world-spanning supply chains that rely on untold quantities of highly-skilled labor whose own training and employment is dependent upon having enough pre-existing material prosperity that 90% of the population is exempt from needing to grow their own food.
Meanwhile, the supply chains for the pre-Roman and post-Roman worlds were not very different. They were producing tin in Britain in 2000 BC, and they were producing tin in Britain in 1000 AD. Crucially, the top of the production pyramid (finished goods) was still close to the bottom of it (raw materials harvestable with minimal material dependencies) without a hundred zillion intervening layers of middlemen.
Post-collapse society will look very different from modern Information-age society, and will definitely have a lot more people growing their own food. Knowing how to identify plants, and the care instructions (sun/soil/water/space requirements) for each variety you're growing, and how other people have handled problems like pests and rot, can save you several years of failed harvests. Several years of failed harvests is likely the difference between surviving and not surviving.
Meanwhile, the supply chains for the pre-Roman and post-Roman worlds were not very different.
This isn't true! We know of huge differences between who was producing what goods and where between Roman and post-Roman Britain. To give one example: ceramic production came to a complete halt, and people essentially had to make do with whatever pre-existing ceramics they had had beforehand. Sure, an agricultural worker living on their own land off in the countryside might not have noticed a huge difference -- but someone who had been living by a legionary fortress, or one of the primary imperial administrative centers, or in one of the burgeoning villas, certainly would have had to make significant changes across the period.
Considering how much potentially invaluable info is only/mostly/only easily available as a PDF, I'm thinking a working ereader would be of nontrivial value.
Now...if there was only a way to crunch a PDF on an 8-bit processor I recovered from my washing machine...
- wv/odt2txt and friends
- cp/m can read TXT files generated from gnuplot
- A Forth with Starting Forth is hugely valuable, ditto a math book like Spivak's Calculus.
Not a collapse, but a network attack on infra makes most modern OSes unusable; they need to be constantly updated. If you can salvage some older machine with DuskOS+networking, simple gopher and IRC clients/servers will work with really low bandwidth (2-3 kbps and less).
Based on xpdf. Probably not 8-bit capable.
- pdftotext from poppler-tools under Linux/BSD
From OpenOffice. Probably not 8-bit capable.
Point me to a PDF-to-something-useful converter that runs in 64K bytes and can handle the 'PDF as a wrapper around some non-text image of the document' case and we can talk. Seriously...I'd be fascinated.
- cp/m can read TXT files generated from gnuplot
Not sure how that helps. And can you port gnuplot to run in 8-bit/64k?
* a network attack on infra makes most modern OSes unusable, *
Ridiculous, unless your definition of 'usable' is 'unless I can get to TwitFaceTubeIn and watch cat videos ima gonna die!'. If civilization collapses and the network goes away tomorrow, my Debian 12, FreeBSD 14, and NetBSD 10 machines will work exactly as well as they do today until I can't power them and/or the hardware dies (sans email and web, of course). Yeah, the Windows 10/11 things will bitch and moan constantly, and I assume macOS too, but even with degraded functionality, it's far from 'unusable'. And I'll be able to load Linux or BSD on them, so no worries.
they need to be constantly updated
No, they don't. Updates come in 2 broad categories: security fixes and feature release. Post-collapse and no network makes security much less urgent, and no new features is the new normal...get used to it. I have gear that runs HP/UX 10 (last support in 2003); still runs fine and delivering significant value.
And that ignores DOS, Win3, XP and such, which are still (disturbingly) common.
will work with really low bandwidth
You mean....with a network?
On 'gnuplot' for CP/M... doing a simple chart from X and Y values paired in two TSV columns can be done from Forth or Pascal or whatever you have that can read two arrays.
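A minimal sketch of that idea in Python (the filename and TSV layout are just examples); a Forth or Pascal version reading the same two columns would look much the same:

```python
# Read paired X/Y values from a two-column TSV and print a crude bar chart.
rows = [line.split("\t") for line in open("data.tsv") if line.strip()]
xs = [float(r[0]) for r in rows]
ys = [float(r[1]) for r in rows]

lo, hi = min(ys), max(ys)
width = 60                                  # chart width in characters
for x, y in zip(xs, ys):
    n = int((y - lo) / ((hi - lo) or 1) * width)
    print(f"{x:10g} |{'*' * n}")
```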
I'm not stating a solution for reading future PDF files. Forget the future ones; what I mean is to 'convert' the ones we currently have.
I'm at a Spanish pubnix/tilde, a public Unix server. Here I have a script which, with the help of a cron job and sfeed, converts RSS feeds into plain text readable over gopher. These can be read even from the crustiest DOS machines in Latin America and Windows 95/98/XP machines. I even pointed to a working RetroZilla release, and it's spreading quickly.
They are at least able to read news too with a gopher/gemini client and News Waffle, with a client written in Tcl/Tk saving tons of bandwidth. The port with IronTcl will work on the spot on XP machines. Download, decompress, run 'launch.bat' ('lanzar.bat' in Spanish). The ZIP file weighs 15 MB. No SSE2 is required, nor tons of RAM.
Compare that to a Chrome install. And Chrome's requirements.
The 2nd/3rd world might not be under an apocalypse, but they don't have the reliability of the first world. And lots of folks adored the News Waffle service saving 95% of the bandwidth.
Instead of the apocalypse, think about 3rd-world folks or rural America. Something like https://lite.cnn.com and https://neuters.de will work even under natural disasters with really reduced bandwidth. Or https://telae.net for Google Maps searches.
gopher://magical.fish has news feeds, an English to French/Spanish (and so on) translator, good links to blogs, games, and even TPB search. These can be run on any machine, or Lagrange under Android. And, yes, it might work better under a potential earthquake/flood than the web.
But regardless of that, the ability to program microcontrollers is still a superpower and if you can have it, you have a hell of an edge.
I really disagree with that. It will give practically no edge. It's a specialist skill that's only really useful in the context of an already computerized society for mass production or big one-offs.
If you have a collapse, I think the assumption is there would be little to no mass production of advanced goods (hence, the scavenging concept). Then you're left with big one-offs, which are things large organizations like governments build, and not all the time.
I would think even at a small scale, having things like 3D printers, CNC machines, networked video surveillance and alarm systems, solar arrays, etc. would be very beneficial.
Absolutely, though the top priorities would be much simpler things, like food, clean water, and shelter.
> you need a fucking farming manual on paper.
With an adjustable hole punch, prong fasteners, (optionally) duct tape for the spine, and (optionally) sheet protectors for the front and back covers, you can crank out a survival library of (open-source) books as fast as pages come out of your Brother laser. I never bothered with fancy acid-free paper. Modern paper has good longevity, but to be safe I use non-recycled, non-whitened (91 brightness) paper.
Guide with prong fasteners (I prefer white tape, full-size pages, double/triple inside a sheet protector as covers): https://www.youtube.com/watch?v=KVnpnHWcE04
A different technique with brad fasteners: https://www.youtube.com/watch?v=vD3vWZ0I85g
Lots of fancy book-binding tutorials out there, but I suspect most people don't realize how simple a paperback can actually be.
I can't think of ANY situation where a print out of Wikipedia would be better than a digital version.
An EMP destroys your electronics and makes accessing the digital version impossible?
You could also say that in the middle of a house fire, with the paper on fire, the digital version would be better, but it's pointless to invent around the assumptions.
I'm still WAY better off with my solar panels and ALL the books on an external hard drive. I can't think of ANY situation where a print out of Wikipedia would be better than a digital version.
I just gave a scenario where a print out of Wikipedia would be better than a digital version.
> In this scenario we have the print version AND the digital version.
Except this is the exact scenario I've been suggesting all along. If you want access to both versions, you still employ the same DIY bookbinding techniques. Nothing changes. Of the bookbinding tips I gave, "obliterate your digital copy" was (and I didn't think I had to explain this) not one of the suggested steps. ;-)
Hence backup, not format shift.
I'm still WAY better off with my solar panels and ALL the books on an external hard drive.
No you're not. One component in your setup gets fried and you lose access to everything.
Paper is its own reader device: it's far more resilient, because it has far fewer dependencies for use. Just think about digital archiving vs paper archiving for a bit, and it becomes clear.
I can't think of ANY situation where a print out of Wikipedia would be better than a digital version.
Wikipedia would be of little practical value in a post collapse situation. And frankly, it's pretty terrible now for anything besides satisfying idle curiosity.
With an adjustable hole punch, prong fasteners, (optionally) duct tape for the spine, and (optionally) sheet protectors for the front and back covers, you can crank out a survival library of (open-source) books as fast as pages come out of your Brother laser.
This is actually one of the few "post apocalyptic" computer ideas that actually makes some sense to me. Though it would probably still make more sense to pre-print that library than wait until conditions are difficult to do the printing.
Unless your plan is to assume availability of printer paper, you'd still need to store the same volume of blank paper as your library, and you'd be stuck trying to run a printer when things like electricity are much less available.
> it would probably still make more sense to pre-print that library than wait until conditions are difficult to do the printing.
That's what I meant. Sorry I left it unclear. I should have explicitly called that out, thanks. Basically it's cliff-notes from my shot at an "MVP of home hardcopy backups." No surprise, these simpler techniques (often seen when corporations had internal printing for employee handbooks and such) are better suited to home processing with minimal equipment. All you need is a three-hole punch.
It's not about achieving a "perfect" bookbinding (a real term), or for people who do bookbinding as a hobby. Instead it's a fast/easy/cheap technique for people who just want a hardcopy backup, without needing a special bookbinding vice in the house.
Three ring binders were my obvious first choice, but they're surprisingly costly, somewhat awkward to use, prone to tearing pages, and usually take more space on the shelf.
Hope that explains it better. Cheers
In more plausible cases, we're talking about a population collapse where the deaths are either concentrated in cities, spread out over several years, or both. If they're concentrated in cities, maybe avoid the cities for three to six months until they finish rotting. If they're spread out over several years, those who die later will be able to bury those who die earlier; it only takes a few man-hours of labor to dig a grave.
510 million km², in the worst case, that's a sudden event producing 14 bodies per km², about 25 meters from one body to the next.
That's the area of the planet; you only get that distribution if the event also redistributes the bodies evenly over the entire globe, including oceans. (Though I'm not sure how you go from 14/km^2 to 25 meter separation?)
If they're spread out over several years, those who die later will be able to bury those who die earlier; it only takes a few man-hours of labor to dig a grave.
During the covid pandemic, which was around a single percentage point of the world population over a few years, there were reports in the UK and the USA of mass graves, normal death procedures being overwhelmed.
Global supply chain collapse is kinda the "mass starvation because farms can't get fertiliser to grow enough crops, nor ship them to cities" scenario. If you can't hunt or fish, you're probably one of the corpses (very few people will have the land to grow their own crops, especially without fertiliser).
Though I'm not sure how you go from 14/km^2 to 25 meter separation?
√14 ≈ 4 and I somehow managed to think that 1000m ÷ 4 = 25m. In fact that calculation should have given 250m, and √47 ≈ 7, and 1000m ÷ 7 ≈ 140m. So we're talking about on the order of a city block between corpses.
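In sketch form (density in bodies per km², average spacing on a square grid):

```python
import math
for density in (14, 47):                              # bodies per km^2
    print(density, round(1000 / math.sqrt(density)))  # ~267 m and ~146 m
```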
there were reports in the UK and the USA of mass graves, normal death procedures being overwhelmed.
Yeah, but it wasn't a question of corpses rotting in the streets despite all the survivors digging graves full-time; it was just a question of the usual number of gravediggers being unable to cope. If people had just dug graves themselves for their family members in their yards, the way they do for pets, it wouldn't have been a problem; but that was prohibited.
Anyway, I think people who worry about health risks from corpse pileups due to society collapsing are really worrying about the wrong thing. The corpses won't be piled up; at worst, they'll be so far apart that you could walk for days without seeing one unless you're someplace especially flat or with an especially high density of corpses, and in realistic scenarios, they'll just be buried like normal, mostly over years or decades, not left out to rot en masse.
Regarding hunting, if you're near a river and it isn't piling up with bodies from upstream, you can take up fishing. It's easier to learn from scratch on your own and requires less effort.
1. they rot on the face of the ground and the carrion eaters will take care of it
2. they are already cinder/charcoal and the plants will take care of it
The serious take: live in the countryside. This is an urban problem.
I'm not an optimist in today's societal climate, but the rationale[1] stems from peak oil and cultural bankruptcy. Peak oil is (was?) the concern of limited oil production in the face of static or increasing demand. The 2019 peak from big oil wasn't about declining production, but declining demand[2]. Which is good, if the ideal is long-lasting migration to renewables.
I won't try to predict the future w/r/t what current societal trends mean for the long-term success of global supply chains, but I would be greatly surprised if cultural bankruptcy alone causes the complete collapse of society in the next 5-10 years.
[1] http://collapseos.org/civ.html
[2] https://www.bp.com/content/dam/bp/business-sites/en/global/c...
- lightly protected and globally reachable attack surface
- increasing geopolitical tensions
- the bizarre tendency to put everything possible online, i.e. IoT devices that always need internet, resources that rely on CDNs to function, other weird dependencies.
I think we'll see more of it as the boomers continue to age. Some folks mistake their own looming end for the end of all things.
I assume there's some significant hard-wiring for this type of emotion, with some people having a different baseline for their sensitivity to it. I also suspect the environment might push regional genetic population to have different sensitivities, depending on how harsh/unstable the environment is. I say "unstable" because I see doom as anxiety about the possibility. For example, in a region with a consistent harsh winter, doom has less use because piling food up is just routine, everyone does it, it's obvious. But, in an unstable environment, with a winter that's only sometimes harsh, you need to fear that possibility of a harsh winter. You're driven by the anxiety of what might be. You stockpile food even though you don't need it now: you're irrational in the instantaneous context, but rational for the long term context. It's a neat future looking emotion that probably evolved closely with intelligence.
It also left me wanting more. It has pretty extensive references at the end, I wonder if anyone's put together a collection of all the referenced materials?
why is having some type of computer a priority?
Mechanization and Automation.
It creates time you would normally have to spend actively on labor. Age of energy and all.
The main inputs for this, though, would be quite difficult to bootstrap. I'm talking about magnet wire enamel and electrical insulation.
You need that to make motors and actuators. You need computers for the control systems.
In terms of computing, this means that any strategy looking at a before/after collapse framing is missing the entire interesting middle part - how do we transition from this high-tech to low-tech world over the course of decades. I seriously doubt it can happen by anticipating what computing we might need and just leapfrogging there.
Which means we might be living in collapse now, we just don't know it yet. "The Long Emergency" as James Howard Kunstler put it.
When a professional prognosticator's prognostications continue to be falsified by actual events, year after endless year, at some point don't people stop listening to them?
When a professional prognosticator's prognostications continue to be falsified by actual events, year after endless year, at some point don't people stop listening to them?
Funnily enough - not whatsoever! And the benefits of this amazing fact are reaped by economists* the world over, year after year.
* And, to be fair, journalists, politicians, various flavours of youtuber, etc, etc.
When a professional prognosticator's prognostications continue to be falsified by actual events, year after endless year, at some point don't people stop listening to them?
Yet people still listen to Musk about when Tesla will have autonomous driving.
I'm assuming you're talking about the internet, networked services, etc, because that's the only conceivable way a collapse could happen that quickly... except all modern infrastructure we rely on like that has backups and redundancy built in, and largely separated failure domains, so that such a failure would never be able to happen that quickly.
I think the vision here is more likely to play out, if any were to. Not many fabs that can make modern chips, and the biggest one is under the business end of Chinese artillery so. Gotta make do with whatever we have even if new supply dries up.
We really have no way to predict it. It is even likely to be different in different locales.
- Industrial food will gradually become extremely expensive and homesteading will become more popular. Crop failures and famines will be routine as food security and international trade wane.
- Unemployment will soar with automation and overall decreased purchasing power.
- Crime and theft will soar as police forces decline (Austin, TX is already understaffed by 300 officers and doesn't respond to property crimes for 48 hours).
- Civil/national/world wars.
- 100M's of climate refugees migrating across and between continents. If you think scapegoating of "illegal" immigrants is bad now, just wait until hate in the form of racism, xenophobia, and classism becomes turbocharged. Expect militarized borders guarded by killer robots.
I didn't design it for a post-collapse scenario, but one can salvage program parts from binary artifacts and create new programs with it, just like how a mechanic in a Mad Max world can scavenge car parts from a scrapyard and create new contraptions. It deconstructs the classical compile-assemble-link workflow in a manner that tends to cause migraines to sane persons.
I've even managed to slightly bend the rules around ABI and sling binary code across similar-ish platforms (PlayStation to Linux MIPS and Linux x86 to Windows x86), but to really take this to its logical conclusion I'd need a way to glue together incompatible ABIs or even ISAs, in order to create program chimeras from just about any random set of binaries.
Here, I'm actually stealing code and data from one or more executables and turning these back into object files for further use. Think Doctor Frankenstein, but with program parts instead of human parts.
I only operate on dead patients in a manner of speaking, but you could delink stuff from a running process too I suppose. I don't think it would be useful in the context of return-oriented programming, since all the code you can work with is already loaded in memory, there's no need to delink it back to relocatable code first.
Please consider writing up something about this binary mashup toolkit. You have taken an unusual path and shown how far one can go... it's worthy of sharing more widely.
Good luck!
I've mostly used a linker to build new programs with the parts I've exported. Some of my users use objdiff to compare the output of their decompilation efforts against recreated object files. Others use objcopy or homemade tools to further modify the object files (mostly tweaking the symbol table) prior to reuse. One can generate assembly listings with objdump, although Ghidra already gives you that.
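To give a concrete flavor of the reuse step, here's a minimal sketch with hypothetical names (assuming the exported object file is called pilfered.o and the recovered function uses an ordinary C calling convention): you write a prototype by hand from what you reverse-engineered, call it from new code, and let the linker do the stitching, e.g. cc main.c pilfered.o -o mashup.

    /* main.c -- new code calling a function pilfered from an old binary.
       The symbol itself lives in the recovered object file pilfered.o;
       the prototype below is written by hand from reverse-engineered
       knowledge of the original program (names are hypothetical). */
    #include <stdio.h>

    extern unsigned checksum(const void *buf, unsigned len);

    int main(void) {
        static const char msg[] = "hello from the wasteland";
        printf("checksum = %u\n", checksum(msg, sizeof msg - 1));
        return 0;
    }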
Ironically, the hardest part about using my delinker I think (besides figuring out errors when things go wrong) is just realizing and imagining what you can do with it. It's rather counter-intuitive at first because this goes against anything one might learn in CS 101, but once you get it then you can do all kinds of heretical things with it.
To take this to the next level, beyond adding support for more ISAs and object file exporters, I'd need to generate debugging symbols in order to make debugging the pilfered code a less painful experience. There are plenty of ways delinking can go wrong if the Ghidra database is incorrect or incomplete and troubleshooting undefined behavior from outer space can get tedious.
But in the context of a post-collapse scenario, what one would be really after is some kind of linker that can stitch together a working program from just about any random bunch of object files, regardless of ABI and ISA. My experiments on cross-delinking remained rather tame (same ISA, similar ABI) because gluing all that mess by hand gets really complicated really fast.
I've demonstrated a bunch of use-cases on my blog, ranging from porting software to pervasive binary patching and pilfering application code into libraries, all done on artifacts without source code available.
I have one user in particular who successfully used it on a huge (7 MiB) program in a span of a couple of weeks, most of them spent fixing bugs on my end. They then proceeded to crank up the insanity to 11 by splitting that link-time optimized artifact into pieces and swapping parts out willy-nilly. That is theoretically impossible because LTO turns the entire program into a single translation unit subject to inter-procedural optimizations, so functions can exhibit nonstandard calling conventions you can't replicate with freshly built source code, but I guess anything's possible when you binary patch __usercall into MSVC.
I should get back to it once work is less hectic. Delinking is an exquisitely intricate puzzle to solve algorithmically and I've managed to make that esoteric but powerful reverse-engineering technique not only scalable to megabytes worth of code, but also within reach of everyday reverse-engineers.
Guess I'd better unbrick my Chromebook first. I still don't know how changing its battery managed to bork an unmodded, non-developer mode Chromebook so thoroughly that the official recovery claims it doesn't know what model it is ("no defined device identities matched this device.").
Virtual machines/emulators are one extreme, recreating the environment it ran in so no human examination of the particular program is necessary. The approach you describe is at the other end, using bits of programs directly in other code to do basically black-box functions that aren't worth figuring out and coding properly.
Unless I'm mistaken, it doesn't seem to do anything in particular w.r.t. relocations, which is the tricky part about delinking. My educated guess is that repacking might be enough for purely self-contained position-independent code and data, but anything that contains an absolute address will lead to dangling references to the original program's address space.
My tooling recovers relocatable section bytes and recreates relocation tables in order to export object files that are actually relocatable, regardless of their contents.
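To make the relocation point concrete, here's a minimal illustration in C (my own example, not output from any delinker): even a trivial reference to a global forces the toolchain to record a relocation in the object file, and it's exactly that information a delinker has to reconstruct from a finished binary.

    /* The reference to `counter` below is emitted as a relocation entry in
       the .o file; the linker then bakes in a concrete address or offset.
       Delinking has to turn that baked-in reference back into a relocation
       before the code can be safely reused in a different program. */
    int counter;                /* ends up at a fixed location in the linked binary */

    int bump(void) {
        return ++counter;       /* load/store whose target needs relocating */
    }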
I wonder if ESP32s and Arduinos might be more commonly found, though I could see the argument that places with those newfangled chips may be more likely to become radioactive craters in some scenarios.
That's not as easy as just dropping a full computer on your desk, but having a low power processor that's easy to find replacements for would be useful. That is, of course, if you spent the time pre-collapse to learn how to make a useful system out of those components, which I suspect is the real goal of Collapse OS.
TI uses them in one of their calculator lines. I've seen gobs of them as embedded controllers in various industrial systems (I have family in manufacturing). I understand a lot of coin-op games (think pinball or pachinko) use them. I've seen them in appliance controller boards (don't recall the brand).
So? That's like saying "there are no 8051s because Intel doesn't make them", even though there are millions if not billions of clones made every year (and since you'll undoubtedly say "well, I haven't seen one": if your car tells you when your tire pressure is low, there are four 8051s in it).
(It also matters whether you need things like an external crystal or three power rails.)
I mean, it's convenient when you can use an existing compiler, or at least can find documentation for the instruction set; and 8-bit instruction sets and address buses are constraining enough that they can really make it hard to do things. But these are not nearly as important as being able to get your code running on the chip at all.
So, no, instruction-set-compatible clones (like the ones I mentioned I found in https://news.ycombinator.com/item?id=43488079, the day before your comment) are not interchangeable with Intel 8051s in the context of improvising computational capabilities out of garbage. Pinouts matter. Programming waveforms matter.
With respect to the Z80-clone TI calculators, in https://news.ycombinator.com/item?id=43488344 Virgil explained that they can in fact run CollapseOS, but can't yet self-host because CollapseOS can't yet access their Flash. If you want to use them to control motors or solenoids or something, you still need some pinout information, which I'm not sure we have.
Not a hugely compelling argument for "the z80 is still a popular microcontroller", mind.
[0] https://en.wikipedia.org/wiki/Comparison_of_Texas_Instrument...
[1] https://www.amazon.co.uk/Texas-Instruments-TI-84-CE-T-Python...
[2] Mild cheating going on though because it runs the Python on an ARM copro.
under what conditions do you believe the civilization you're living under will collapse? [...] Oh, you just haven't made that list yet? [...] Or you could take the question from the other side and list the conditions that are necessary for your civilization to continue to exist. That's an interesting list too.
I've always dismissed collapse talk as "batshit crazy", as the author says. But he raises good points here. Maybe I'm not batshit crazy enough.
In the Wired article the author says he thinks climate change will disrupt trade routes, but obviously society would route around the damage as long as it remained profitable to do so. The only scenario in which this hypothetical makes sense would be mass casualty events like pandemics or wars that prevent travel and trade.
So we're talking about a hypothetical situation in which global trade has halted almost completely for some reason, and has been stopped for a very, very long time. This means that the majority of the world's population are either dead or dying, which means that YOU will be either dead or dying as well.
Even if we accept the premise (a tall ask) AND we assume you will happen to be one of the survivors because you're "special", wouldn't it make more sense to prep by building a cargo ship so trade can resume sooner than it does to build apocalypse-proof microcontroller code?
If we forget about typical ARM CPUs for the moment, and just look at ARM CPUs in general, the ARM 2 was supposedly 27000 transistors according to https://en.wikipedia.org/wiki/Transistor_count. If you had to hand-solder 27000 SOT23 transistors onto PCBs at 10 seconds per transistor, it would take you a couple of weeks of full-time work to build one by hand, and probably another week or two to find and fix the assembly errors. It would be maybe a square meter or two of PCBs. At today's labor prices such a CPU would cost on the order of US$5000. At today's 1.3¢ per MOSFET (LBSS84LT1G and 2N7002 from JLCPCB's basic parts list a few years ago), we're talking about US$400 of transistors.
(Incidentally, Chuck Moore's MuP21 chip was 9000 transistors, so we know how to make an acceptably convenient-to-program chip in a lot less space than the ARM. It just does less computation per cycle. A big chunk of the ARM 2 was the multiplier, which Moore left out.)
It probably wouldn't run at more than 5 million instructions per second (maybe 2 VAX MIPS, slower than a 386), and because it's built out of discrete power transistors, it would use a lot more power than the original ARM. But it would run ARM code, and the supply chain only needs to provide two types of MOSFET, PCBs, and solder.
US$5400 for a 2-VAX-MIPS CPU is not a competitive price for computing power in today's world, and if you want to drive the cost down, you need to automate, specialize, and probably diversify the supply chain. If you were building it out of 74HC00-class chips, for example, you'd probably need a dozen or so SKUs, but each chip would be equivalent to about 20 transistors, so you'd only need about 1400 chips, currently costing about 10¢ each, cutting your parts price to about US$140 and your assembly time to probably a day or two of work, so maybe US$500 including debugging. And your clock rates would be higher and power usage lower, because a gate input built out of 2N7002 and similar power MOSFETs will have a gate capacitance around 60pF, while a 74HC08 is more like 6pF. We're down to US$640, which is still far from economically competitive but sure looks a lot better.
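As a back-of-the-envelope check on those numbers (every figure below is just the rounded guess from the paragraphs above, nothing measured), a quick sketch:

    /* Rough sanity check of the hand-assembly estimates above. */
    #include <stdio.h>

    int main(void) {
        double transistors = 27000;                  /* ARM 2 transistor count */
        double hours = transistors * 10 / 3600;      /* 10 s per part -> ~75 h of soldering */
        double mosfet_cost = transistors * 0.013;    /* ~US$350 at 1.3 cents per MOSFET */

        double chips = transistors / 20;             /* ~20 transistors per 74HC00-class chip */
        double chip_cost = chips * 0.10;             /* ~US$135 at 10 cents per chip */

        printf("discrete: %.0f h soldering, US$%.0f in MOSFETs\n", hours, mosfet_cost);
        printf("74HC-class: %.0f chips, US$%.0f in parts\n", chips, chip_cost);
        return 0;
    }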
The 74HC08 and family are CMOS clones of the SN7400 series launched by Texas Instruments in 01966, at a time when most of the world electronics supply chain (both providing their materials and buying the chips to put into products) was inside the US. It couldn't have happened in Cameroon or Paraguay for a variety of reasons, one of which was that they weren't sufficiently prosperous due to a lack of international trade. But that's somewhat incidental—what matters for the feasibility is that the supply chain had the money it needed, not where that money came from. Unlike the SR-71 project, they didn't have to import titanium from Russia; unlike the US ten years later, they didn't have to import energy from Saudi Arabia.
In his garage, using surplus machinery and wafers from the existing semiconductor supply chain, Sam Zeloof has reached what he says is the equivalent of Intel's 10μm process from 01971: http://sam.zeloof.xyz/category/semiconductor/
On this basis, it seems to me that making something like the 74HC08 from raw materials is something that a dozen or so people could manage, as long as they had existing equipment. It wouldn't even require a whole city, much less a worldwide supply chain.
So why don't we see this? Why is it not happening if it's possible? Well, we're still talking about building something with 80386-like performance for US$700 or so. This isn't currently a profitable product, because LCSC will sell you a WCH RISC-V microcontroller that's several times that fast for 14¢ in quantity 500 (specifically https://www.lcsc.com/product-detail/Microcontrollers-MCU-MPU...), and it includes RAM, Flash, and several peripherals.
If you want to build something like the actual ARM2 chip from 01986, you'll need to increase transistor density by another factor of 25 over what Zeloof has done and get to a 2μm process, slightly better than the process used for the 8086 and 68000: https://en.wikipedia.org/wiki/List_of_semiconductor_scale_ex...
Now, as it happens, typical ARM CPUs today are 48MHz, and Dennard scaling gets you to 25–50MHz at around 800nm, like the Intel 80486 from 01989. So to make a typical ARM CPU, you don't have to catch up to TSMC's 6nm process. You can get by with an 800nm process. So you might need the work of hundreds or even thousands of people to be able to make something like a typical ARM CPU, and it would probably take them a year or two of full-time work. This works out to an NRE cost on the order of US$50 million. Recouping that NRE cost at 14¢ per chip, assuming a 7¢ cost of goods sold, would require you to sell 700 million chips. And, using an antiquated process like that, you aren't going to be able to match WCH's power consumption numbers, so you probably aren't going to be able to dominate the market to such an extent, especially if you're paying more for your raw materials and machinery.
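(That recoup figure is just the assumed US$50 million NRE divided by the assumed 7¢ per-chip margin: 50,000,000 / 0.07 ≈ 714 million chips, rounded here to 700 million.)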
So it's possible, but it's unprofitable, because the worldwide supply chain can make a better product for a lower cost than this hypothetical Silicon River Rouge plant.
Make no mistake, though: if the worldwide supply chain were to vanish, typical ARM CPUs would be back in less than a decade. We're currently watching the PRC play through this in real time with SMIC. The USA kneecapped 天河-2, the top supercomputer on the TOP500 list, in 02015, and has been fervently attempting to cut off the PRC's semiconductor industry from the world supply chain ever since, on the theory that the US government should have jurisdiction over which companies inside the PRC are allowed to sell to the PRC's military forces. They haven't quite caught up, but with the HiSilicon CPUs used in Huawei's Mate60 cellphones, they've reached 7nm: https://www.youtube.com/watch?v=08myo1UdTZ8
Later there were community patches, but there was a violent litigious case until the '60s, when consumer rights were almost fully respected.
But now they've made a hostile bid against the company and they're enshittifying America like in the old times.
Then keep some laptops in a waterproof box.
The hard part is powering it, when the infrastructure to generate clean electricity is gone. What will you plug your transformer into? So solve that, create robust sources for electric power, and the rest can be solved with a few bulk laptop purchases off ebay.
If you have more than one survivor, you quickly need to learn to trade and cooperate to maximize your energy return, or else you all die, one by one.
Speaking from a North American perspective, kids are educated in how to succeed in a national/global economy, not how to build small communities and develop/share useful skills. TBH, the latter feels "obsolete" nowadays. Maybe that's a problem.
Plug it into the 'universal transformer' I was talking about, and you're in business. Known power output (that won't fry your precious electronics) and you don't have to care much what the input is.
But, if you absolutely needed to depend on it, I think that other technology might be better for the kinds of things we normally use microcontrollers for. If I need to control electronics where something does this for a minute, then waits for that so it can do something else until this other thing happens, it's hard to imagine that a would-be apocalypse-ee would want to use a microcontroller rather than a more easily improvised mechanical cam timer[0]
There are no one-time programmable versions of AVR as far as I know, so they can all have their internal Flash reprogrammed.
I can't think of a way to come into possession of MPUs like that without intentionally buying them ahead of time. And if I'm going to stockpile those, I might as well stockpile a more capable MCU or MPU instead and flash it with something else. 99.9% of what I'd want to do with minimalist computers in the apocalyptic wasteland would be just fine without an OS. Bare-bones MCUs work spectacularly for control systems, wireless cryptosystems, data logging, etc.
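For what it's worth, "without an OS" really is as simple as it sounds. Here's a minimal bare-metal sketch (assuming an ATmega328-class AVR and avr-gcc; the pin choice and clock are illustrative) that just toggles an output pin forever, which is the basic shape of most of those control tasks:

    /* Build with something like: avr-gcc -mmcu=atmega328p -Os blink.c -o blink.elf */
    #define F_CPU 16000000UL          /* assumed clock, needed by the delay macros */
    #include <avr/io.h>
    #include <util/delay.h>

    int main(void) {
        DDRB |= _BV(DDB5);            /* PB5 as output (the usual LED pin) */
        for (;;) {
            PORTB ^= _BV(PORTB5);     /* toggle the pin */
            _delay_ms(500);           /* busy-wait half a second */
        }
    }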
Maybe I didn't look hard enough in the README[1], but I don't see how I'd bootstrap a system like this without already having a much more capable system on standby. Which comes back to the issue of... why?
Yes, it's kind of a LARP situation, but imagine a future scenario where some hacker (who also is physically resilient and well-protected in the face of apocalypse) has to figure out how to boot or get some system operating that might control solar panels. Not knowing the architecture, can you boot it up? Can you analyze existing binaries, ports, and other features and get it crudely operating? This sounds like a helluva video game.
"But I do tons of hobbyist electronics with surface mount!", some could say. Yeah, sure, but how do you wire it? You order a PCB from OSH Park? That's not very scavenge-friendly. " - https://new.collapseos.org/why.html
Not that I totally disagree with this but see clay PCBs, another post-supply-chain-collapse electronics project https://media.ccc.de/v/38c3-clay-pcb#t=1689
For $0.10 I can buy an MCU that can bit-bang a keyboard, mouse, sound, and VGA, with 2x the memory and 96 times the processing power of my old 6502-based computer. An ESP32 is much, much more capable, better than an old Pentium machine, and has WiFi, USB, Bluetooth, etc., and costs $0.70–2 on a module. They can be found in home automation lightbulbs, among other things.
Espressif has shipped over a billion ESP32 chips since the platform launched.
Sure, we should have a 6502-based solution, as it has a lot of software out there and a minimal transistor count, making it possible to hand-build. But for a long time we will be digging up ESP32s, and they are much more useful.
Running CollapseOS on an Esp8266 - https://news.ycombinator.com/item?id=38645124 - Dec 2023 (1 comment)
DuskOS: Successor to CollapseOS - https://news.ycombinator.com/item?id=36688676 - July 2023 (4 comments)
Collapse OS – Why? - https://news.ycombinator.com/item?id=35672677 - April 2023 (1 comment)
Collapse OS: Winter is coming - https://news.ycombinator.com/item?id=33207852 - Oct 2022 (2 comments)
Collapse OS - https://news.ycombinator.com/item?id=31340518 - May 2022 (8 comments)
Collapse OS Status: Completed - https://news.ycombinator.com/item?id=26922146 - April 2021 (2 comments)
Collapse OS – bootstrap post-collapse technology - https://news.ycombinator.com/item?id=25910108 - Jan 2021 (116 comments)
Collapse OS Web Emulators - https://news.ycombinator.com/item?id=24138496 - Aug 2020 (1 comment)
Collapse OS, an OS for When the Unthinkable Happens - https://news.ycombinator.com/item?id=23535720 - June 2020 (2 comments)
Collapse OS - https://news.ycombinator.com/item?id=23453575 - June 2020 (15 comments)
Collapse OS – Why Forth? - https://news.ycombinator.com/item?id=23450287 - June 2020 (166 comments)
Collapse OS – Why? - https://news.ycombinator.com/item?id=22901002 - April 2020 (3 comments)
'Collapse OS' Is an Open Source Operating System for the Post-Apocalypse - https://news.ycombinator.com/item?id=21815588 - Dec 2019 (3 comments)
Collapse OS - https://news.ycombinator.com/item?id=21182628 - Oct 2019 (303 comments)
Been doing some retro hobby (16-bit, real-mode, 80286) DOS development lately. It is refreshing to look at a system and be able to at least almost understand all the things going on. It might not be the simplest possible system, not the most elegant CPU design, but compared to the bloated monsters we use today it is very nice to work with. DOS is already stretching the limits of what I can keep in my head and reason about without getting completely lost in over-engineered (and leaky) abstraction layers.
EDIT: oh, and in case the website goes over quota, try https://new.collapseos.org/ . I haven't thrown the DNS switch yet, but I expect my meager Fastmail hosting will bust today.
And from your website:
What made me turn to the "yup, we're fucked" camp was "Comment tout peut s'effondrer" by Pablo Servigne, Éditions du Seuil, 2015.
That'll make for some light reading next time I head up north. Thanks for the recommendation.
In case it's down for others: https://web.archive.org/web/20250221070009/http://magic-1.or...
This web page is being served by a completely home-built computer: Bill Buzbee's Magic-1 HomebrewCPU. Magic-1 doesn't use an off-the-shelf CPU. Instead, its custom CPU is built out of ~200 74 series TTL chips.
Magic-1 is running a new port of Minix 2.0.4, compiled with a retargeted LCC portable C compiler. The physical connection to the internet is done using a native interface based on Wiznet's w5300 TCP/IP stack.
While I hate to condone using TTL rather than CMOS, this is extremely cool!
The CPU is documented at https://homebrewcpu.com/. Unfortunately the TCP/IP information was only posted on Google Plus, which has now been memory-holed by Google.
There's a clone by Aidil Jazmi documented at https://www.aidilj.com/homemadecpu/.
Perhaps I should drag my Osborne out of the cellar and see if the floppies still work.
Funny to see how the comments haven't shifted (and have!) in the past 15 years.
Bandwidth Restricted: The page you have tried to access is not available because the owner of the file you are trying to access has exceeded our short term bandwidth limits. Please try again shortly.
It seems it collapsed!
I can't imagine a single request getting 302-redirected 10 times in a loop, up to the redirect limit, per visitor is good for bandwidth.