Their excellent backward compatibility and longevity are both their strongest point and their eventual weakness. Part of it is likely that the same familiar desktop environment can also act as a server environment, so they've had huge staying power.
I'd like to think that Linux as a platform for running such systems would have gotten a mention, but it seems the BBC is unaware it exists.
Is Linux better for backward compatibility than Windows?
Generally no, and that's a feature, not a bug. The main problem you run into is dynamically linked dependencies. If a program depends on some particular behavior in a particular version of a library that has been updated, it won't work on a modern system with modern libraries. You can work around it in most cases, but it's not particularly easy or straightforward.
Old programs with statically linked dependencies might work, but you run into issues where the GUI framework is broken or incompatible or your window manager doesn't like it. Lots of little random stuff like that.
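Not from the parent comment, but a minimal sketch of the failure mode it describes: probing whether a legacy dynamically linked dependency can still be loaded at all, using dlopen/dlerror. The library path here is purely hypothetical.

    /* Minimal sketch (illustration only): check whether a legacy shared
     * library that an old binary depends on can still be loaded.
     * The path below is hypothetical. Build: cc probe.c -o probe -ldl */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        const char *lib = "./legacy/libold-gui.so.1";   /* hypothetical dependency */

        void *handle = dlopen(lib, RTLD_NOW);
        if (!handle) {
            /* Typical failures: file not found, or an unresolved versioned
             * symbol in one of the library's own dependencies. */
            fprintf(stderr, "cannot load %s: %s\n", lib, dlerror());
            return 1;
        }
        printf("%s loaded fine\n", lib);
        dlclose(handle);
        return 0;
    }

When the probe fails, the workarounds are roughly what's described above: ship the old .so files alongside the binary, build in an environment with period-appropriate libraries, or fall back to a VM.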
Windows is best in class at backwards compatibility, though whether that's a good thing is up for debate.
Why wouldn't that be a good thing? I don't want my apps breaking just because the OS updated.
Not disputing the obvious advantages, but since you asked:
Being forced to maintain compatibility for all previously written APIs (and quite a large array of private details and undocumented features that applications ended up depending on) means Windows is quite restricted in how it can develop.
As a random example, any developer who has written significant cross-platform software can attest that the file system on Windows is painfully slow compared to other platforms (Microsoft actually had to add a virtual file system for Git after they transitioned to it, because they have a massive repo that would struggle on any OS but choked especially badly on Windows). The main cause (at least according to one Windows dev blog post I remember reading) is that Windows added APIs to make it easy to react to filesystem changes. That's an obviously useful feature, but in retrospect it was a major error: so much depends on the filesystem that giving anything the ability to delay filesystem interaction hurts everything. But now lots of software is built on that feature, so they're stuck with it.
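The comment doesn't say which API that blog post meant, so this is an assumption, but the best-known "react to filesystem changes" call on Windows is ReadDirectoryChangesW. A minimal synchronous sketch, with error handling trimmed and an example path:

    /* Sketch only: watch a directory tree for changes via ReadDirectoryChangesW.
     * Synchronous and simplified; build with a Windows toolchain (MSVC/MinGW). */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE dir = CreateFileW(L"C:\\watched\\dir",   /* example path */
                                 FILE_LIST_DIRECTORY,
                                 FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                                 NULL, OPEN_EXISTING,
                                 FILE_FLAG_BACKUP_SEMANTICS, NULL);
        if (dir == INVALID_HANDLE_VALUE)
            return 1;

        DWORD buf[1024];   /* buffer must be DWORD-aligned */
        DWORD bytes = 0;

        /* Each call blocks until something changes under the directory tree. */
        while (ReadDirectoryChangesW(dir, buf, sizeof(buf), TRUE,
                                     FILE_NOTIFY_CHANGE_FILE_NAME |
                                     FILE_NOTIFY_CHANGE_LAST_WRITE,
                                     &bytes, NULL, NULL)) {
            FILE_NOTIFY_INFORMATION *info = (FILE_NOTIFY_INFORMATION *)buf;
            for (;;) {
                wprintf(L"action %lu: %.*ls\n", (unsigned long)info->Action,
                        (int)(info->FileNameLength / sizeof(WCHAR)), info->FileName);
                if (info->NextEntryOffset == 0)
                    break;
                info = (FILE_NOTIFY_INFORMATION *)((BYTE *)info + info->NextEntryOffset);
            }
        }
        CloseHandle(dir);
        return 0;
    }

Treat it as an illustration of the kind of hook applications now depend on, not as the specific machinery the blog post blamed for the slowdown.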
On the other hand, I believe the Linux kernel has very strict compatibility requirements; they just don't extend to the rest of the OS, so it's not like there's one strict rule for how it's all handled.
Linux has the obvious advantage that almost all the software has source code available, meaning the cost of recompiling most of your apps for each update with adjusted APIs is much smaller.
And for old software that you need, there’s always VMs.
Kind of a bad example, firstly because you are comparing Windows with the Linux kernel. The Linux kernel has excellent backwards compatibility: every feature introduced is kept if removing it could break a userland application.
Linus is very adamant about "not breaking userspace"
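To make the kernel/userspace split concrete (my illustration, not the commenter's): the boundary Linus refuses to break is the syscall ABI, which a program can hit directly without going through glibc at all.

    /* Illustration: talk to the kernel through the raw syscall interface
     * rather than glibc's wrappers. These calls and their semantics are part
     * of the "don't break userspace" contract and have been stable for decades. */
    #define _GNU_SOURCE
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        const char msg[] = "hello from the syscall ABI\n";

        /* write(2) to stdout (fd 1), invoked by syscall number. */
        syscall(SYS_write, 1, msg, sizeof(msg) - 1);

        /* getpid(2), likewise by number. */
        long pid = syscall(SYS_getpid);
        printf("pid = %ld\n", pid);
        return 0;
    }

An old statically linked binary keeps working precisely because it only depends on this boundary; the libraries layered above it are where compatibility gets murkier, as the next comment points out.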
The main problem with backwards compatibility (imho) is glibc. You could always ship your software with all the dynamic libs you need, but glibc makes that hard because it likes to move fast and break things.
Glibc is one of the few userspace libraries with backwards compatibility in the form of symbol versioning. Any program compiled for glibc 2.1 (1999!) or later, using the publicly exposed parts of the ABI, will run on modern glibc.
The trouble is usually with other dynamically linked libraries not being available anymore on modern distributions.
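A small sketch of the symbol versioning described above (my example, assuming x86-64 glibc, where the baseline version tag is GLIBC_2.2.5): glibc ships several versions of some symbols side by side, and a build can even pin an older one explicitly with a .symver directive.

    /* Sketch, assuming x86-64 glibc: bind memcpy to its old versioned symbol
     * (memcpy@GLIBC_2.2.5) instead of the current default. This side-by-side
     * versioning is what keeps binaries built against 1999-era glibc running.
     * Build: cc -fno-builtin versym.c -o versym   (so the call isn't inlined) */
    #include <string.h>
    #include <stdio.h>

    /* Tell the assembler which versioned symbol this reference should bind to. */
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void)
    {
        char dst[32];
        memcpy(dst, "versioned symbols at work", 26);   /* 25 chars + NUL */
        puts(dst);
        return 0;
    }

You can list the versions your libc actually exports with objdump -T on the libc shared object; the set differs by architecture, which is why the version tag above is hedged to x86-64.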
Depends; you should still be able to run binaries from the 90s, but if a binary is dynamically linked and doesn't ship with its libraries, finding compatible libraries can be a pain and it won't run out of the box. If you have the source code, it should usually compile with minimal or no changes, unless it depends on very old libraries that have seen incompatible changes (which is often the case). One of the nicer things about Windows is that it's a much more comprehensive "batteries included" system.
We encountered this recently: we have some monitoring software for a ride that was written in-house by a guy who no longer works for us. It was running on a Windows XP machine that needed connectivity via two serial ports.
We ended up creating a disk image, then emulating the machine in Hyper-V and passing through two USB-based serial ports. Works like a charm!
This is such a weird take.
People are upset because their hardware is still working??
Is it better if it just stopped working one day?
Often these old systems are slow. They could get a big boost from an SSD or a newer CPU but the owners don't want to risk any incompatibilities.
I took a few old systems that 'only ran on XP' and upgraded them to Windows 10 and an SSD. They worked fine. I guess sometimes it is just that the manufacturer didn't want to take the risk.
If the hazards aren’t there then sure. But if you’re risking a CNC throwing a tool or a ride crashing then you may need to consider new failure modes.
Where I've seen these systems most in my work is connected to scientific instruments, where the manufacturer would rather you spend another half million dollars on a marginally improved model with more recent I/O and OS support than ship a patch for the machine you already paid a quarter million for 15 years ago.
The system being slow and old doesn't matter. It is running XP and air-gapped. Sometimes you access the data by USB stick or by burning a CD-ROM. The software stack it runs mainly dumps sensor data into a flat file, so it doesn't really need to be very robust. And sure, the ancient OptiPlex desktop idling all day drinks more electricity than a modern lightweight chip, but the couple of dollars a week, if that, in electricity costs is hardly a concern in a research setting.
Read the article. It's mostly to do with inertia in large organisations or multiple failed projects to replace old systems.
> For the people who use this old technology, life can get tedious. For four years, psychiatrist Eric Zabriskie would show up to his job at the US Department of Veterans Affairs (VA) and start the day waiting for a computer to boot up. "I had to get to the clinic early because sometimes it would take 15 minutes just to log into the computer," Zabriskie says. "Once you're in you try to never log out. I'd hold on for dear life. It was excruciatingly slow."
> ...
> Most VA medical facilities manage health records using a suite of tools launched by the US government in 1997 called the Computerized Patient Record System (CPRS). But it works on top of an even older system called VistA – not to be confused with the Windows Vista operating system – which first debuted in 1985 and was originally built on the operating system MS-DOS.
> The VA is now on its fourth attempt to overhaul this system after a series of fits and starts that dates back almost 25 years. The current plan is to replace it with a health record system used by the US Department of Defense by 2031. "VA remains steadfast in its commitment to implementing a modernised, interoperable Federal [electronic health record] system to improve health care delivery and positively impact patient care," says VA press secretary Pete Kasperowicz. He says the system is already live at six VA sites and will be deployed at 19 out of 170 facilities by 2026.
Some businesses are still using DEC PDP-11s, first released in the 1970s. Those are even more "ancient." Many of these are used for industrial controls, among other applications.
see: https://news.ycombinator.com/item?id=30505421
The article does not mention what OS is being used, but RT-11 was designed for "real-time" applications. That was released in 1973, so over 50 years ago.
IBM is still producing mainframe systems to this day. More modern, for sure, but still fundamentally an ancient system and architecture.
> A while back we looked into upgrading one of the computers to Windows Vista. By the time we added up the money it would take to buy new licenses for all the software, it was going to cost $50,000 or $60,000 [£38,000 to £45,000]
I wonder if, at some point, virtualising their current machines and adding a modern control layer on top is a viable path forward.
No it isn't. IT infrastructure gets the same treatment as regular infrastructure: we don't want to build a new one because the old one kinda still works and a new one would cost us money.
The reality is that you need to keep upgrading and building new infrastructure, because inevitably the old one will stop working or will no longer be enough to support the needs of its users. And when that happens, it will be even more painful and expensive to get it up and running again. And the best-case scenario is that no one loses their life over it.
This is what I see in a lot of systems. E.g. the Costco inventory manager you can see at their manager stations looks like old DOS software, but it's running in some sort of container on a modern i5 workstation. Some of my friends in sales use similar setups.
I've been using a Windows 95 Beta CD for a coffee coaster all these years. Still works just fine without patching or service packs :)
My most modern operating system is a current distro of Ubuntu.
Still in love with my Mac OS 10.6 (Snow Leopard) tax machine (offline).
I keep another machine of the same era (Intel Core 2 Duo) online with Windows 7 Pro, for official paperwork/logins. It doesn't seem to have been hacked or compromised yet (which is what people usually say).
Also rocking modern Apple Silicon (M2 Pro/M3/M4), which is impressive equipment, particularly considering its minuscule power usage. The current 15" MacBook Air will stream video for the majority of a day on a single charge, and can occasionally be bought from Costco for $849 (which includes an additional year of warranty).
A call to action for anyone with a 5 1/4" floppy drive:
> The only thing missing from Grigar's collection is a PC that reads five-and-a-quarter-inch floppy disks, she says. Despite their ubiquity, the machines are surprisingly hard to find. "I look on eBay, Craigslist, I have friends out looking for me, nothing. I've been looking for six years," she says. If you have one of these old computers lying around, and it still works, Grigar would love to hear from you.
'People privileged to use ancient computers'
I was happy with Windows XP. Windows Vista, 7, 8, 8.1, 10, and 11 added nothing to my quality of life that I can think of.
Hey, if it still works. . . .
Just don’t connect it to the internet.
I wonder how many nuclear power plants run on windows xp.
It's hilarious that people are claiming Windows 3.1 or 95 was super stable, when in my experience Windows didn't become super stable until Windows 7.
Something irks me about the BBC using "ancient" to refer to something a few mere decades old. I know we use it that way but the irony seems to be lost here.