What a blast from the past. I recall my own experimenting with DJGPP and DPMI in my own software. It felt futuristic at the time. I was blown away.
Another fond memory: I was playing Star Wars: Dark Forces (I think that was the one) and was frustrated with the speed of level loading. I think it used DOS/4GW, and I recall renaming it, copying in a new DOS extender (was it CWSDPMI? not sure), and renaming that to the one Star Wars used. I was shocked when it not only worked, but the level loading was MUCH faster (like 3-5x faster, I recall). My guess is that whatever one I had swapped in wasn't calling the interrupt in DOS (by swapping back to real mode), but perhaps was calling the IDE disk hardware directly from protected mode. Not sure, but it was a ton faster, and I was a very happy kid. The rest of the game had the same performance (which makes sense, I think) with the new extender.
I learned to solder as a pre-teen so I could make a nullmodem :) Then I learned that resistors were a thing when I made a parallel port sound card (this thing https://en.wikipedia.org/wiki/Covox_Speech_Thing). Fun times!
I wasn't allowed a soldering iron as a kid, so I ended up just chopping and splicing a regular serial cable and turned it into a null modem, all so that I could play OMF2097 with my friends without having to share the same keyboard. (We would always fight over the right side, which defaulted to the arrow keys for movement, so the person who got the right side generally had the advantage; back then arrow keys were the default movement keys, unlike these days where WASD is the default.)
Shared-keyboard OMF 2097 also had an overwhelming advantage for the first mover, since most keyboards had 2-3 key rollover--if you hit wd to jump forward, your opponent had to be fast to do anything before you hit your attack key.
This must have been around the same time (1993 or so) when many organisations were upgrading old coax 10Base2 network equipment to modern 10BaseT (and eventually 100BaseT). My friends and I, strongly motivated by the incentive of being able to play multiplayer DOOM, managed to source some free ISA 10Base2 Ethernet cards and coax cable and T-connectors from someone's Dad. The only thing we were missing was the terminators, which you could make yourself by cutting a coax cable and soldering a resistor between the conductors... a fun introduction to LAN technology for us!
Nice. I learned on a DE-9 cable making an HP-48 cable from an internal CD-ROM analog cable. I was such a poor student cliché that I used Scotch tape instead of electrical tape to ensure the RX, TX, and GND lines didn't short.
There wasn't one kind of null modem cable, per se, there were serial and parallel null modem cables.
Originally, there were null modem (serial) adapters that worked with straight-through cables, but that got expensive, awkward, and complicated. A universal serial null modem cable had a pair of "DB-9" (really DE-9) female and DB-25 female connectors on both ends, so it would work with either system having either type of connector.
A parallel null modem cable had DB-25 male connectors on both ends.
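For anyone curious what those cables actually crossed over: below is one common full-handshake DE-9 null modem pinout, written as a small Python table so the symmetry can be checked. The wiring is taken from widely published tables, not from the comments above, and the simplest 3-wire cables only cross pins 2/3 and share ground.

```python
# One common full-handshake DE-9 null modem pinout: each side's
# "outgoing" signals are wired to the other side's "incoming" ones.
NULL_MODEM = {
    3: [2],     # TXD -> RXD: your transmit feeds the peer's receive
    2: [3],     # RXD <- TXD
    7: [8],     # RTS -> CTS: hardware flow control, crossed over
    8: [7],     # CTS <- RTS
    4: [6, 1],  # DTR -> DSR + DCD: "I'm ready" doubles as carrier detect
    6: [4],
    1: [4],
    5: [5],     # signal ground, straight through
}

def is_symmetric(mapping):
    """Every wire must exist from both ends' point of view."""
    return all(a in mapping.get(b, [])
               for a, ends in mapping.items() for b in ends)
```

The symmetry check is the point: a null modem cable is the same cable viewed from either end, which is exactly why one cable served both machines.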
Really fun times. I “learned” to solder around that time and age also. Playing Mod files through a DIY version of that “thing” piped into a portable stereo speaker was awesome.
Years later I learned what flux was, and soldering became quite a bit better and easier.
That parallel port sound card was my primary sound card for a long time. I bought a bunch of full sized resistors from Maplin and soldered them all as janky as a kid can with huge blobs of solder, but it worked perfectly from day one.
> From the beginning of the development, id had requested from djgpp engineers that their DPMI client would be able to run on djgpp's DPMI server but also Windows 95 DPMI server.
I'm pretty sure that "DJGPP engineers" is just one guy, DJ Delorie. DJGPP was always open source so I bet he got some contributors, but if the rest of this sentence is true, that "id had requested from djgpp engineers", it just means they asked the maker of an open source tool they used to please add a feature. I wonder whether they paid him for it or whether DJ just hacked it all in at id's request for kicks. His "about me" page suggests he does contracting, so it might be the latter.
DJGPP was spectacularly good back in the day. I didn't appreciate at the time what a monumental effort it must have been to port the entire GCC toolchain and runtime to DOS/Windows. Hats off to DJ Delorie!
Back then, DJGPP was a much bigger group, and most of the Quake kudos go to Charles Sandmann, author of cwsdpmi, who worked directly with Id to help them optimize their code for our environment.
Thank you for this work. It was my first C compiler in 1998. Y'all helped me on the mailing list and you even replied to my e-mails! I was 11 and this was insane to me.
I too need to thank you for the very first C compiler I ever had access to in 1999, after 10 years of having a book on C in my possession that I couldn't use until then.
Just passing by to thank you. As many others have mentioned, DJGPP was pivotal for my life. I compiled my first C/Allegro games in DJGPP back in the mid/late 90s.
The DJGPP project and its contributors were 100% volunteer. I'm sure some of the contributors took advantage of their involvement to obtain consulting gigs on the side (I did ;) but DJGPP itself didn't involve any. The Quake work was a swap; we helped them with optimizing, and they helped us with near pointers. Win-win!
I think I remember there was some communication between id and Charles Sandmann about CWSDPMI, so even though it's worded a bit strangely for an open source project, there's probably some truth in it?
Also, it's a bit strange that the author is surprised about Quake running in a 'VM'; apparently they don't really know about VM86 mode in x86 processors...
Is the author surprised by that, or did you just misread it? The only relevant quote on that page that I see is “It is impressive to see Quake run at full speed knowing that Windows 95 runs DOS executable in a virtual machine.”
He is perhaps surprised that it runs _at speed_ in the VM, not that it runs in the VM which he already knows about.
Only if they never call DOS or the BIOS or execute a Real Mode Software Interrupt. When they do, they ask the DPMI server (which could be an OS like Windows 9x, or CWSDPMI) to make the call on their behalf. In doing so, the DPMI server will temporarily enter a VM86 Virtual Machine to execute the Real Mode code being requested.
So... Win32 runs in virtual mode. In 2025, we don't think of that as a Virtual Machine, but it totally is. Hardware access is trapped by the CPU and processed by the OS/DPMI server.
VM in this usage means Virtual Memory - i.e. with page tables enabled. Two "processes" can use the same memory addresses and they will point to different physical memory. In Real Address mode every program has to use different memory addresses. The VM86 mode lets you have several Real Mode programs running, but using Virtual Memory.
VM does not mean Virtual Memory in this context. VM does mean Virtual Machine. When an OS/DPMI Server/Supervisor/Monitor provides an OS or program a virtual interface to HW interrupts, IO ports, SW interrupts, we say that OS or program is being executed in a Virtual Machine.
For things like Windows 3.x, 9x, OS/2, CWSDPMI, DOS/4G (DPMI & VCPI), Paging & Virtual Memory were an optional feature. In fact, CWSDPMI/djgpp programs had flags (using `CWSDPR0` or `CWSDPMI -s-` or programmatic calls) to disable Paging & Virtual Memory. Also, djgpp’s first DPMI server (a DOS extender called `go32`) didn’t support Virtual Memory either but could sub-execute Real Mode DOS programs in VM86 mode.
I agree that my comment about VM was imprecise and inaccurate.
I do dispute your assertion that virtual memory was "disabled". It isn't possible to use V86 mode (what the Intel Docs called it) without having a TSS, GDT, LDT and IDT set up. Being in protected mode is required. Mappings of virtual to real memory have to be present. Switching in and out of V86 mode happens from protected mode. Something has to manage the mappings or have set it up.
Intel's use of "virtual" for V86 mode was cursory - it could fail to work for actual 8086 code. This impacted Digital Research. And I admit my experiences are mostly from that side of the OS aisle.
I did go back and re-read some of [0] to refresh some of my memory bitrot.
It's a bit surprising because this is the author of the DooM Black Book and they know the underpinnings pretty well.
However, the difference between a DOS VM under Windows 9x and a Windows command prompt and a w32 program started from DOS is all very esoteric and complicated (even Raymond Chen has been confused about implementation details at times).
I think if you're relatively young it's hard to know computing history. It's oddly older than one thinks, even for concepts that are seen as new. It's sometimes interesting to see people learn about BBSs, which flourished 40 years ago.
I remember back in the day using DJGPP (DJ Delorie) with the Allegro library (Shawn Hargreaves), building little games that compiled and ran on Windows and other OSes, and being part of the community.
Yes. DJGPP and Allegro was a great help, and a big step up from the old Borland Turbo Pascal I started out with. I remember trying to rotate an image pixel by pixel in Pascal. Allegro simply had a function to do it. And yes, the mailing list was great - Shawn Hargreaves and the couple of people in the inner circle (I seem to remember someone called George) were simply awesome, helpful people.
I eventually installed Red Hat, started at university and lost most of my free time to study projects.
I was quite active in the Allegro community around that time, mostly on the allegro.cc forums - but I was still a 14-year old learning the ropes back then. Missed out on DJGPP, it was already MinGW under Windows for me.
I took part in a few of the later Speedhacks, around 2004-2005, I think?
Allegro will always have a warm place in my heart, and it was a formative experience that still informs how I like to work on games.
EDIT: Hah, actually found my first Speedhack [1]! Second place, not bad! Interestingly, the person who took first place (Rodrigo Braz Monteiro) is also a professional game developer now.
I was, but my application was less fun: porting Perl code from Windows NT to MS-DOS to integrate with software that required direct hardware access to a particular model of SCSI card.
Worked great, and saved a bunch of time vs writing a VDD to enable direct hardware access from NTVDM or a miniport driver from scratch.
IIRC, the underlying problem was that none of the NT drivers for any of the cards we'd tested were able to reliably allocate enough sufficiently contiguous memory to read tapes with unusually large (multi-megabyte) block sizes.
So I just took a look at DJ’s website and he has a college transcript there. Something looked interesting.
Apparently he passed a marksmanship PE course in the first year. Is that a thing in the US? I don't know, maybe it's common and I have no idea. I'd love to have had a marksmanship course while studying computer science though.
US colleges have a very open curriculum, where you have wide leeway in what classes you actually take, especially in the early years of study. If you're coming from more European-style universities, this is vastly different to the relatively rigid course set you'd take (with a few electives here and there).
My college required its graduates to pass a minimal swimming test. Just enough swimming ability to give a potential rescuer some extra time to effect the rescue, rather than have us go straight to the bottom of the sea. We all took a test in the first week or so. Those who failed had to take a course and retake the test.
Yeah, in Russia, even though everything is decided for you once you've selected your major, PE classes are still yours to choose. Competition to get in was crazy too, none of that "first come, first served": swimming only accepted the top N students, and table tennis held a tournament-style competition (I went there with two friends and had to play against both of them).
I needed one PE credit to get a degree from my community college. My school didn't offer marksmanship, but I would imagine it would fit into PE, archery certainly would and there's synergy. I took Table Tennis to graduate. I don't think my engineering school where I got my BS required Physical Education though.
It's definitely not common. My US university required 2 physical education classes, but only if you were under 30 and hadn't served in the military. They may have offered marksmanship, but I just took running and soccer (aka football). The classes were graded pass-fail and didn't even count for academic credit.
My high school had some marksmanship trophies in its case dating back to the 70s. Responsible gun ownership was a real thing when a sizable portion of the male population were veterans.
We have myriad available "electives" that contribute towards our degrees. I have college credit for "bowling and billiards" and "canoeing and kayaking".
This worked in DOS, but was easily ported to Linux.
As far as DPMI: I used the CWSDPMI client fairly recently because it allows a 32-bit program to work in both DOS and Windows (it auto-disables its own DPMI functions when Windows is detected).
Or from the Seinfeld episode "The Pool Guy" (aired November 1995), which had a fictional movie called "Chunnel" -- probably based on the very same Channel Tunnel.
From Google (AI slop at top of search results): "Chunnel" is not a real movie but a fictional film from the TV show Seinfeld. It is depicted as a disaster movie about an explosion in the Channel Tunnel...
Weird hearing that name now though. Around that time, everybody referred to it as the "Chunnel", but I don't think I've heard it as anything but the "Channel Tunnel" since maybe 2000. I suspect even that usage is now limited to only taking cars on the train from Folkestone. Every time I've travelled on it as a regular passenger from London, it's just been referred to as the Eurostar without any mention of the tunnel at all.
I recall reading about TCP/IP-powered Internet multiplayer DOS Quake in TECHINFO.TXT that shipped with the retail version of the game, and I quote:
Beame & Whiteside TCP/IP
------------------------
This is the only DOS TCP/IP stack supported in the test release.
It is not shareware...it's what we use on our network (in case you
were wondering why this particular stack). This has been "tested"
extensively over ethernet and you should encounter no problems
with it. Their SLIP and PPP have not been tested. When connecting
to a server using TCP/IP (UDP actually), you specifiy it's "dot notation"
address (like 123.45.67.89). You only need to specify the unique portion
of the adress. For example, if your IP address is 123.45.12.34
and the server's is 123.45.56.78, you could use "connect 56.78".
I looked around a little and sure enough, a copy of the software was available in a subdirectory of idgames(2?) at ftp.cdrom.com. I knew nothing about TCP/IP networking at the time, so it was a struggle to get it all working, and in the end, the latency and overall performance was miserable and totally not worth it. Playing NetQuake with WinQuake was a much more appropriate scenario.
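The "unique portion" shorthand in that TECHINFO.TXT excerpt presumably works by borrowing however many leading octets the partial address omits from your own address. A guess at the logic in Python (the function name is mine; I haven't read Quake's actual parser):

```python
def expand_partial_ip(own_ip: str, partial: str) -> str:
    """Expand a "unique portion" address the way the TECHINFO.TXT
    example implies: keep however many leading octets of our own
    address the partial one omits. Hypothetical reconstruction."""
    own = own_ip.split(".")
    part = partial.split(".")
    # A full 4-octet partial replaces everything; shorter ones
    # inherit the missing prefix from our own address.
    return ".".join(own[:4 - len(part)] + part)
```

So with your own address being 123.45.12.34, `connect 56.78` resolves to 123.45.56.78, matching the quote's example.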
I feel he's stepping his way to that, but Quake is an entire other world of complexity from DooM (which is simple enough that a 400+ page book can "explain" it and the OS and the computers it ran on).
Andre Lamothe's "Tricks of the 3d Game Programming Gurus: Advanced 3d Graphics and Rasterization" is a great book for anyone interested in graphics programming from those days.
There's also the columns he wrote for Dr Dobbs' Journal during development. They're like an expanded, illustrated version of GPBB chapter's first half. https://www.bluesnews.com/abrash/contents.shtml
It's amusing to me that in the 90s you could easily play Quake or Doom with your friends by calling their phone number over the modem whereas now setting up any sort of multiplayer essentially requires a server unless you use some very user-unfriendly NAT busting.
Glad you mentioned DOOM! Sometimes people forget that DOOM supported multiplayer as early as December 1993, via a serial line and February 1994 for IPX networking. 4 player games on a LAN in 1994! On release, TCP/IP wasn't supported at all, but as the Internet took off, that was solved as well. I remember testing an early-ish version of the 3rd party iDOOM TCP setup driver from my dorm room (10 base T connection) when I was supposed to be in class, and it was a true game changer.
What was even more amazing is you could daisy chain serial ports on computers to get multiplayer Doom running. One or more of those links could even be a phone modem.
Downside is that your framerate was capped to the person with the slowest computer, and there was always that guy with the 486sx25 who got invited to play.
Hamachi and STUN were what I was thinking of when I referred to user-unfriendly NAT busting. It's true that these are not much harder to get working than a modem, but they don't match up with modern consumer expectations of ease-of-use and reliability on firewalled networks. It would be nice if Internet standards could keep up with industry so that these expectations could be met. It's totally understandable where we've landed due to modern security requirements, but I still feel something has been lost.
You usually just need to forward a port or two on your router. That gets through the NAT because you specify which destination IP to forward it to. You also need to open that port in your Windows firewall in most cases.
Some configuration, but you don't have to update the port forwarding as often as you would expect.
The reason you can't just play games with your friends anymore is that game companies make way too much money from skins and do not want you to be able to run a version of the server that does not check whether you paid your real money for those skins. Weirdly, despite literally inventing loot boxes, Valve sometimes does not suffer from this: TF2 had a robust custom server community that had dummied out the checks so you could wear and use whatever you want. Similar to how Minecraft still allows you to turn off authentication so you can play with friends who have a pirate copy.
Starcraft could only do internet play through battle.net, which required a legit copy. Pirated copies could still do LAN IPX play though, and with IPX over IP software you could theoretically do remote play with your internet buddies.
By the way, this is why bnetd is illegal to distribute and was ruled such in a court of law: authenticating with battle.net counts as an "effective" copy protection measure under the DMCA, and providing an alternate implementation that skips that therefore counts as "circumvention technology".
I was half expecting something about how to get tcp into windows, but this is win95 where they shipped it inside the os and put some company out of business that used to sell that.
The turbulent times and the breakneck speed of computer development need to be taken into account. Not long before that, computer networks were strictly corporate things installed by contractors, who chose the hardware, driver and software suppliers suitable for tasks performed by employees or students; someone who installed one at home was the same kind of nerd who would drag an engine home from work into his room to tinker with. Non-business-oriented software rarely cared about third-party network functions. Then the network card became a consumer device, and a bit later it became integrated and expected.
Also, Windows did not install TCP/IP components on computers without a network card (most of them until the Millennium era), it was an optional component. You could not “ping” anything, as there was no ping utility, nor libraries it could call. In that aspect, those network-less Windows systems were not much different from network-less DOS systems. The installer probably still works that way (or can be made to, by excluding some dependencies), but it's hard to find hardware without any network connectivity today. I wonder what Windows 11 installer does when there is no network card to phone home...
> I wonder what Windows 11 installer does when there is no network card to phone home...
One of "works fine", "needs a command line trick to continue" or "refuses to work completely" depending on which specific edition of win11 your computer has been afflicted with.
No, I'm fairly certain that Berkeley sockets were used as a foundation to integrate a full network stack under Winsock so people wouldn't have to go buy things like Trumpet (for Windows 3.1), and you could coax out messages saying as much from the command line, but Google is failing me (I'm sure most of this stuff is on Usenet, which no one seems to care about these days).
It's interesting how STREAMS pervaded everything for a short while (Apple's Open Transport networking stack for System 7.5 and up was also based on STREAMS) but everyone almost immediately wanted to get rid of it and just use Berkeley sockets interfaces.
I still don't quite get how you were supposed to communicate with the other systems over the network with STREAMS.
With IP you have an address and the means to route the data to that address and back; with TCP/UDP sockets you have the address:port endpoint, so the recipient doesn't need to pass a received packet to all the processes on the system, asking "is that yours".
So if there is already some network stack providing both the addressing and the messaging...
STREAMS isn’t a networking protocol, it’s an internal data routing thing some UNIXes use locally, and amongst other things to implement the network stack in it.
You’d still be talking of stuff like IP addresses and the like with it. Probably with the XTI API instead of BSD sockets, which is a bit more complex but you need the flexibility to handle different network stacks than just TCP/IP, like erm…
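The address:port endpoint mentioned above is literally the demultiplexing key carried in the packet headers. A minimal Python sketch pulling it out of a raw IPv4+UDP datagram with the struct module (field offsets per RFC 791/768; assumes a 20-byte IP header with no options, and 26000 appears only as sample data since it was Quake's default port):

```python
import struct

def udp_endpoints(frame: bytes):
    """Return ((src_ip, src_port), (dst_ip, dst_port)) from a raw
    IPv4 packet carrying UDP. Assumes a 20-byte IP header."""
    src_ip = ".".join(str(b) for b in frame[12:16])  # IPv4 source address
    dst_ip = ".".join(str(b) for b in frame[16:20])  # IPv4 destination
    src_port, dst_port = struct.unpack("!HH", frame[20:24])  # UDP ports
    return (src_ip, src_port), (dst_ip, dst_port)

# Fake sample packet: zeroed IP header patched with addresses,
# followed by a UDP header (src port, dst port, length, checksum).
hdr = bytearray(20)
hdr[12:16] = bytes([123, 45, 12, 34])
hdr[16:20] = bytes([123, 45, 56, 78])
packet = bytes(hdr) + struct.pack("!HHHH", 26000, 26000, 8, 0)
```

The kernel does essentially this lookup on every inbound datagram, delivering it straight to whichever socket is bound to that (address, port) pair.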
This article makes it seem like 1996 was ancient times. There was the internet then, and browsers; Macs had had a TCP stack for a while by then, and Quake was an extremely advanced game.
Yeah, the DOS-to-Windows transition was a big deal, but it was a pretty ripe time for innovation then.
Yeah, but dial-up was slow, laggy, and what 95% of people used to access the internet in those days. Real-time gaming was not fun with anything that used it. I grew up in a rural area in the 1990s and was no match for people that started to get cable modems as time went on.
Even when people had dial-up, a huge majority were using portal-dialers like AOL or Compuserve, and it took extra steps to use those to "get" the Internet directly as opposed to within the walled garden.
And even then they'd often just use the bundled browser.
I remember the first friend who got a cable modem, that shit was insane compared to dial-up.
If you're dialed up directly, you should be able to get a little bit better latency as you won't need IP, UDP, and PPP/SLIP headers; at modem bandwidth, header bytes add meaningful latency. But data transmission is still pretty slow, even with small packets.
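To put a rough number on that: an async serial link spends about 10 bit-times per byte (start + 8 data + stop), so protocol headers alone add measurable delay per packet. A back-of-envelope sketch in Python (the 5-byte PPP framing figure is an assumption; header compression and byte escaping change the real numbers):

```python
def serialization_ms(n_bytes: int, bps: int) -> float:
    """Time to clock n_bytes onto an async serial link at bps,
    assuming 10 bit-times per byte (start + 8 data + stop)."""
    return n_bytes * 10 / bps * 1000

# Per-packet header overhead; PPP framing size is a rough assumption.
IP_HDR, UDP_HDR, PPP_FRAMING = 20, 8, 5
overhead_ms = serialization_ms(IP_HDR + UDP_HDR + PPP_FRAMING, 28800)
# ~11.5 ms of headers at 28.8k, before a single payload byte moves.
```

Which is why a direct modem-to-modem game, with no IP/UDP/PPP layers, could feel noticeably snappier than the same modem carrying a SLIP or PPP session.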
You're using confusing terminology so you look very wrong. What you mean to say is direct modem-to-modem connections were not laggy because there was no packet switching. This is a true statement.
What the GP comment was talking about was dial-up Internet being most people's exposure to TCP/IP gaming in the 90s. That was most assuredly laggy. Even the best dial-up Internet connections had at least 100ms of latency just from buffering in the modems.
The QuakeWorld network stack was built to handle the high latency and jitter of dial-up Internet connections. The original Quake's network was fine on a LAN or fast Internet connection (e.g. a dorm ResNet) but was sub-par on dial-up.
> If you wanted to play a multiplayer game on the internet, either you needed to have explicit host & port information, or you needed to use an online multiplayer gaming service.
Technically true, although tools like Kali existed which could simulate IPX-style networks over the internet. I know this because I played a substantial amount of Mechwarrior 2 online when it was designed only for local network play!
> My guess is that, in full screen, memory writes and reads to the VGA are given direct access to the hardware to preserve performances. [sic]
When a DOS VM is in windowed mode, Windows must intercept VGA port I/O and redirect video framebuffer RAM accesses to a shadow framebuffer, then scale/bitblt that for display in its hosted window.
In full screen, exclusive VGA access is possible so it doesn't need to do anything special except restore state on task switching.
Quake would be even faster if it didn't have to support DPMI/Windows and ran in "unreal mode" with virtual memory and I/O port protections disabled.
Of course Quake had to support DOS but id developed Quake on NeXTSTEP which of course had TCP/IP and they had been supporting Linux and other commercial Unix versions like Solaris since Doom a few years earlier.
In an interview with Lex Fridman, John Carmack said that in retrospect, Quake was too ambitious in terms of development time, as it both introduced network play and a fully polygonal 3D engine written in assembly. So it would have been better to split the work in two and publish a "Network Doom" first and then build on that with a polygonal Quake.
Which seems to imply that the network stack was about as difficult to implement as the new 3D engine.
> And in 1995 there were only two: us, and Total Entertainment Network. You might think game creators would come to us and say "please put my game on your service!", but... nope! Not only did we have a licensing team that went out and got contracts to license games for our service, but we had to pay the vendor for the right to license their game, which was often an exclusive. So, we had Quake and Unreal; TEN got Duke Nukem 3D and NASCAR.
FWIW, the Total Entertainment Network (TEN) got Quake later, here's a press release from September 30 1996 [1]. Wikipedia says QTest was released Feb 24, 1996, and I can't find when MPath support launched, so I don't know how long the exclusive period was, but not very long. Disclosure: I was a volunteer support person (TENGuide) so I could get internet access as a teen; I was GuideToast and/or GuideName.
Ah I remember running a serial cable from my bedroom to the hallway so we could play 1v1 quake via direct connect. Good times! I think we used to play age of empires that way too.
> It is impressive to see Quake run at full speed knowing that Windows 95 runs DOS executable in a virtual machine. My guess is that, in full screen, memory writes and reads to the VGA are given direct access to the hardware to preserve performances
Virtual x86 mode had little to do with what we nowadays think of when someone says ”virtual machine”
Arguably it had a great deal to do with what we think of as a "virtual machine."
Virtual 8086 remapped all opcodes and memory accesses to let a program pretend to be on a single, real-mode 8086 when in reality it was one of many programs running in protected mode on a newer chip
AMD-V and Intel's counterpart (VT-x) do almost exactly the same thing: re-map all ia32 and amd64 instructions and memory accesses to let a program pretend to be doing ring 0 stuff on an x86 box, when in reality it is one of many programs running with fewer privileges on a newer chip.
There are a few more wrinkles in the latter case -- nested page tables, TLB, etc -- but it is the same idea from the viewpoint of the guest.
QuakeC was compiled into QuakeVM bytecode, which made all mods and logic portable between platforms without having to recompile things every time, unlike what had to be done for Quake 2 (which was 100% native code).
This hurt performance a bit but in the longer term benefited the modding scene massively.
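The portability described above comes from shipping one compiled bytecode and reimplementing only a small interpreter per platform. An illustrative toy dispatch loop in Python (nothing like the real QuakeC instruction set, which has its own typed opcodes; this just shows the shape of the idea):

```python
def run(program, stack=None):
    """Toy stack-machine interpreter over (opcode, operand) pairs.
    Only this loop would need porting per platform; the compiled
    programs themselves would ship unchanged."""
    stack = [] if stack is None else stack
    pc = 0
    while pc < len(program):
        op, arg = program[pc]
        pc += 1
        if op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "jnz":  # jump to arg if popped value is non-zero
            if stack.pop():
                pc = arg
        elif op == "halt":
            break
    return stack
```

The interpreted dispatch is the performance cost mentioned above: every bytecode instruction pays for a fetch and a branch that native code avoids.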
I wasn't allowed one either so I soldered with a screwdriver heated up on the gas stove when my parents weren't home...
"We need laws to keep children away from soldering irons"
Later that day...
That's pretty hardcore, respect :)
It also taught me valuable lessons about hardening.
Your parents: A soldering iron is dangerous!
You: I'll show you!
I am not in danger.
I am the danger.
[Sticks glowing hot screw driver in molten lead]
I wonder if OpenOMF has the same limits.
It's a keyboard thing and less of a software thing.
http://www.nullmodem.com/LapLink.htm
Both were used with LapLink and often interchangeably called "LapLink cables" too because the boxed version of LapLink included both cables.
Really fun times. I “learned” to solder around that time and age also. Playing Mod files through a DIY version of that “thing” piped into a portable stereo speaker was awesome.
Years later I learned what flux was, and soldering became quite a bit better and easier.
That parallel port sound card was my primary sound card for a long time. I bought a bunch of full sized resistors from Maplin and soldered them all as janky as a kid can with huge blobs of solder, but it worked perfectly from day one.
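For anyone curious what those resistors were actually doing: the Covox-style parallel-port "sound card" is just a passive resistor ladder hanging off the port's eight data bits, i.e. an 8-bit DAC. A minimal sketch of the ideal transfer function (the 5 V TTL reference level is an assumption, and real ladders deviate from this with resistor tolerance):

```python
def r2r_dac_output(sample: int, v_ref: float = 5.0) -> float:
    """Ideal output voltage of an 8-bit resistor-ladder DAC.

    Each data-port bit contributes a binary-weighted fraction of the
    reference voltage (TTL high, assumed ~5 V here).
    """
    assert 0 <= sample <= 255
    return v_ref * sample / 256

# A centred (silent) 8-bit sample sits at mid-scale:
print(r2r_dac_output(128))  # 2.5
```

With big solder blobs the main risk is shorting adjacent rungs, not the values being slightly off, which is why these things tended to either work on day one or not at all.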
Random drive-by nitpick:
> From the beginning of the development, id had requested from djgpp engineers that their DPMI client would be able to run on djgpp's DPMI server but also Windows 95 DPMI server.
I'm pretty sure that "DJGPP engineers" is just one guy, DJ Delorie. DJGPP was always open source so I bet he got some contributors, but if the rest of this sentence is true that "id has requested from djgpp engineers", it just means they asked the maker of an open source tool they used to please add a feature. I wonder whether they paid him for it or whether DJ just hacked it all in at id's request for kicks. His "about me" page suggests he does contracting so might be the latter.
DJGPP was spectacularly good back in the day. I didn't appreciate at the time what a monumental effort it must have been to port the entire GCC toolchain and runtime to DOS/Windows. Hats off to DJ Delorie!
Back then, DJGPP was a much bigger group, and most of the Quake kudos go to Charles Sandmann, author of cwsdpmi, who worked directly with Id to help them optimize their code for our environment.
Thank you for this work. It was my first C compiler in 1998. Y'all helped me on the mailing list and you even replied to my e-mails! I was 11 and this was insane to me.
You're welcome!
I too need to thank you for the very first C compiler I ever had access to in 1999, after 10 years of having a book on C in my possession that I couldn't use until then.
Just passing by to thank you. As many others have mentioned, DJGPP was pivotal for my life. I compiled my first C/Allegro games in DJGPP back in the mid/late 90s.
And here you are!!
+1 DJGPP/Allegro key life experience on my parents Windows machine, thankyou!
Omg I’m totally starstruck now!
Purely out of curiosity, was that all a volunteer open source effort or was the entire DJGPP group acting as a consulting organization?
The DJGPP project and its contributors were 100% volunteer. I'm sure some of the contributors took advantage of their involvement to obtain consulting gigs on the side (I did ;) but DJGPP itself didn't involve any. The Quake work was a swap; we helped them with optimizing, and they helped us with near pointers. Win-win!
Amen to that!
I think I remember there was some communication between id and Charles Sandmann about CWSDPMI, so even though it's worded a bit strangely for an open source project, there's probably some truth in it?
Also a bit strange how the author is surprised about Quake running in a 'VM', apparently they don't really know about VM86 mode in x86 processors...
Is the author surprised by that, or did you just misread it? The only relevant quote on that page that I see is “It is impressive to see Quake run at full speed knowing that Windows 95 runs DOS executable in a virtual machine.”
He is perhaps surprised that it runs _at speed_ in the VM, not that it runs in the VM which he already knows about.
DPMI clients don’t run in a VM, though. They’re just a normal task like any other task / process in Windows.
Only if they never call DOS or the BIOS or execute a Real Mode software interrupt. When they do, they ask the DPMI server (which could be an OS like Windows 9x, or CWSDPMI) to make the call on their behalf. In doing so, the DPMI server will temporarily enter a VM86 virtual machine to execute the Real Mode code being requested.
http://www.delorie.com/djgpp//doc/libc-2.02/libc_220.html
So... Win32 runs in virtual mode. In 2025, we don't think of that as a Virtual Machine, but it totally is. Hardware access is trapped by the CPU and processed by the OS/DPMI server.
No, in 386 enhanced mode Windows 3.x and 9x, the System VM and other DPMI clients run in protected mode.
Virtual 8086 mode, as its name somewhat suggests, only runs real mode code.
VM in this usage means Virtual Memory - i.e. with page tables enabled. Two "processes" can use the same memory addresses and they will point to different physical memory. In Real Address mode every program has to use different memory addresses. VM86 mode lets you have several Real Mode programs running, but using Virtual Memory.
VM does not mean Virtual Memory in this context. VM does mean Virtual Machine. When an OS/DPMI Server/Supervisor/Monitor provides an OS or program a virtual interface to HW interrupts, IO ports, SW interrupts, we say that OS or program is being executed in a Virtual Machine.
For things like Windows 3.x, 9x, OS/2, CWSDPMI, DOS/4G (DPMI & VCPI), Paging & Virtual Memory was an optional feature. In fact, CWSDPMI/djgpp programs had flags (using `CWSDPR0` or `CWSDPMI -s-` or programmatic calls) to disable Paging & Virtual Memory. Also, djgpp’s first DPMI server (a DOS extender called `go32`) didn’t support Virtual Memory either but could sub-execute Real Mode DOS programs in VM86 mode.
http://www.delorie.com/djgpp/v2faq/faq15_2.html
I agree that my comment about VM was imprecise and inaccurate.
I do dispute your assertion that virtual memory was "disabled". It isn't possible to use V86 mode (what the Intel Docs called it) without having a TSS, GDT, LDT and IDT set up. Being in protected mode is required. Mappings of virtual to real memory have to be present. Switching in and out of V86 mode happens from protected mode. Something has to manage the mappings or have set it up.
Intel's use of "virtual" for V86 mode was cursory - it could fail to work for actual 8086 code. This impacted Digital Research. And I admit my experiences are mostly from that side of the OS aisle.
I did go back and re-read some of [0] to refresh some of my memory bitrot.
[0] https://www.ardent-tool.com/CPU/docs/Intel/386/manuals/23098...
It's a bit surprising because this is the author of the DooM Black Book and they know the underpinnings pretty well.
However, the difference between a DOS VM under Windows 9x and a Windows command prompt and a w32 program started from DOS is all very esoteric and complicated (even Raymond Chen has been confused about implementation details at times).
I think if you're relatively young it's hard to know computing history. It's oddly older than one thinks, even for concepts that are seen as new. It's sometimes interesting to see people learn about BBSs, which flourished 40 years ago.
Would love to see some interviews etc with DJ if he's up for it
Same. I only have experience from M.U.G.E.N fighting engine with respect to DJGPP.
I remember back in the day using DJGPP (DJ Delorie) with the Allegro library (Shawn Hargreaves), building little games that compiled and ran on Windows and other OSes, and being part of the community.
You can still play the little game I made in under 10K for the Allegro SizeHack competition in 2000: https://web.archive.org/web/20250118231553/https://www.oocit...
Back then I was also writing a bunch of articles on game development: https://www.flipcode.com/archives/Theory_Practice-Issue_00_I...
Was anyone on HN active around that time? :) Fun time to be hacking!
Yes. DJGPP and Allegro was a great help, and a big step up from the old Borland Turbo Pascal I started out with. I remember trying to rotate an image pixel by pixel in Pascal. Allegro simply had a function to do it. And yes, the mailing list was great - Shawn Hargreaves and the couple of people in the inner circle (I seem to remember someone called George) were simply awesome, helpful people.
I eventually installed Red Hat, started at university and lost most of my free time to study projects.
George Foot :)
I was quite active in the Allegro community around that time, mostly on the allegro.cc forums - but I was still a 14-year old learning the ropes back then. Missed out on DJGPP, it was already MinGW under Windows for me.
I took part in a few of the later Speedhacks, around 2004-2005, I think?
Allegro will always have a warm place in my heart, and it was a formative experience that still informs how I like to work on games.
EDIT: Hah, actually found my first Speedhack [1]! Second place, not bad! Interestingly, the person who took first place (Rodrigo Braz Monteiro) is also a professional game developer now.
[1]: https://web.archive.org/web/20071101091657/http://speedhack....
I was, but my application was less fun: porting Perl code from Windows NT to MS-DOS to integrate with software that required direct hardware access to a particular model of SCSI card.
Worked great, and saved a bunch of time vs writing a VDD to enable direct hardware access from NTVDM or a miniport driver from scratch.
IIRC, the underlying problem was that none of the NT drivers for any of the cards we'd tested were able to reliably allocate enough sufficiently contiguous memory to read tapes with unusually large (multi-megabyte) block sizes.
Completely off topic;
So I just took a look at DJ’s website and he has a college transcript there. Something looked interesting.
Apparently he passed a marksmanship PE course in his first year. Is that a thing in the US? I don't know, maybe it's common and I have no idea. I'd love to have had a marksmanship course while studying computer science though.
US colleges have a very open curriculum, where you have wide leeway in what classes you actually take, especially in the early years of study. If you're coming from more European-style universities, this is vastly different to the relatively rigid course set you'd take (with a few electives here and there).
MIT offers a Pirate Certificate: https://physicaleducationandwellness.mit.edu/about/pirate-ce...
My college required its graduates to pass a minimal swimming test. Just enough swimming ability to give a potential rescuer some extra time to effect the rescue, rather than have us go straight to the bottom of the sea. We all took a test in the first week or so. Those who failed had to take a course and retake the test.
I wouldn't be surprised if it's a pretty normal thing in a few countries or regions in the world. Marksmanship and archery are also olympic sports.
Yeah, in Russia even though everything is decided for you once you've selected your major, PE classes are still yours to choose. Competition to get in was crazy too, none of that "first come, first served": swimming only accepted the top N students, and table tennis held a tournament-style competition (I went there with two friends and had to play against both of them).
US colleges still have far more options, though.
It would be an easy “A” for a lot of people in the US!
I needed one PE credit to get a degree from my community college. My school didn't offer marksmanship, but I would imagine it would fit into PE, archery certainly would and there's synergy. I took Table Tennis to graduate. I don't think my engineering school where I got my BS required Physical Education though.
It's definitely not common. My US university required 2 physical education classes, but only if you were under 30 and hadn't served in the military. They may have offered marksmanship, but I just took running and soccer (aka football). The classes were graded pass-fail and didn't even count for academic credit.
My high school had some marksmanship trophies in their case dating back to the 70s. Responsible gun ownership was a real thing when a sizable portion of the male population were veterans.
US colleges last one year longer, and the first year is more academically similar to the last year of high school in Europe.
We have myriad available "electives" that contribute towards our degrees. I have college credit for "bowling and billiards" and "canoeing and kayaking".
I took an 8-week, 1-credit badminton course to fulfill my PE requirements. I wouldn't be surprised to find a marksmanship course.
It was great indeed. DJGPP is how I learned to program.
In the early years of Linux, before it had networking, we used KA9Q for the TCP/IP stack:
https://www.ka9q.net/code/ka9qnos/
This worked in DOS, but was easily ported to Linux.
As far as DPMI: I used the CWSDPMI client fairly recently because it allows a 32-bit program to work in both DOS and Windows (it auto-disables its own DPMI functions when Windows is detected).
https://en.wikipedia.org/wiki/CWSDPMI
> I didn't work on the Chunnel. That was mainly a British guy named Henry
The British guy named Henry might have named it after another feat of engineering completed around the same time.
https://en.wikipedia.org/wiki/Channel_Tunnel
Or from the Seinfeld episode "The Pool Guy" (Aired November 1995) which had a fictional movie called "Chunnel" -- probably based on the very same channel tunnel.
From Google (AI slop at top of search results): "Chunnel" is not a real movie but a fictional film from the TV show Seinfeld. It is depicted as a disaster movie about an explosion in the Channel Tunnel...
Weird hearing that name now though. Around that time, everybody referred to it as the "Chunnel", but I don't think I've heard it as anything but the "Channel Tunnel" since maybe 2000. I suspect even that usage is now limited to only taking cars on the train from Folkestone. Every time I've travelled on it as a regular passenger from London, it's just been referred to as the Eurostar without any mention of the tunnel at all.
Yes, it's definitely a word from a 1990s geography textbook.
I recall reading about TCP/IP-powered Internet multiplayer DOS Quake in TECHINFO.TXT that shipped with the retail version of the game, and I quote:
I looked around a little and sure enough, a copy of the software was available in a subdirectory of idgames(2?) at ftp.cdrom.com. I knew nothing about TCP/IP networking at the time, so it was a struggle to get it all working, and in the end, the latency and overall performance was miserable and totally not worth it. Playing NetQuake with WinQuake was a much more appropriate scenario.
Is this the sign that Fabien is beginning his look and research at Quake (i.e., in slow preparation for another Game Engine Black Book)?
He most definitely is cooking something. Seen contributions from him to chocolate-quake: https://github.com/Henrique194/chocolate-quake/pull/60
As well as bug reports: https://github.com/Henrique194/chocolate-quake/issues/57
I personally would love to get some help with my chocolate Doom 3 BFG fork, specially with pesky OpenGL issues: https://github.com/klaussilveira/chocolate-doom3-bfg
He said he was working on something in a comment a couple of months ago on a thread about typst: https://news.ycombinator.com/item?id=44356883
I feel he's stepping his way to that, but Quake is an entire other world of complexity from DooM (which is simple enough that a 400+ page book can "explain" it and the OS and the computers it ran on).
Andre Lamothe's "Tricks of the 3d Game Programming Gurus: Advanced 3d Graphics and Rasterization" is a great book for anyone interested in graphics programming from those days.
Also "The Black Book"[0] of Michael Abrash who worked on Quake with Carmack.
[0]: https://www.amazon.fr/Michael-Abrashs-Graphics-Programming-S...
https://github.com/othieno/GPBB - the last chapter is a Quake retrospective.
There's also the columns he wrote for Dr Dobbs' Journal during development. They're like an expanded, illustrated version of GPBB chapter's first half. https://www.bluesnews.com/abrash/contents.shtml
Which itself is the inspiration for the DooM/Wolf3d Black Books.
It's amusing to me that in the 90s you could easily play Quake or Doom with your friends by calling their phone number over the modem whereas now setting up any sort of multiplayer essentially requires a server unless you use some very user-unfriendly NAT busting.
Glad you mentioned DOOM! Sometimes people forget that DOOM supported multiplayer as early as December 1993, via a serial line and February 1994 for IPX networking. 4 player games on a LAN in 1994! On release, TCP/IP wasn't supported at all, but as the Internet took off, that was solved as well. I remember testing an early-ish version of the 3rd party iDOOM TCP setup driver from my dorm room (10 base T connection) when I was supposed to be in class, and it was a true game changer.
What was even more amazing is you could daisy chain serial ports on computers to get multiplayer Doom running. One or more of those links could even be a phone modem.
Downside is that your framerate was capped to the person with the slowest computer, and there was always that guy with the 486sx25 who got invited to play.
Or slave two copies to yours and get "side view" which was only supported for a few releases IIRC - https://doomwiki.org/wiki/Three_screen_mode
Yes, Doom with multi-monitors! There's (at least) one video on Youtube showing it in action with 3 monitors plus a fourth one with the map: https://www.youtube.com/watch?v=q3NQQ7bPf6U#t=1798.333333
You can kinda solve that problem with STUN servers. Most games on Steam use Steam Datagram Relay, which also solves this: https://partner.steamgames.com/doc/features/multiplayer/stea...
It's like in-engine Hamachi. Works really well with P2P games.
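For the curious, the core of STUN is tiny: the client sends a Binding Request to a public server, which echoes back the public address:port it observed, so two peers behind NAT can learn endpoints to punch through. A minimal sketch of just the 20-byte Binding Request header from RFC 5389 (no network I/O here, and no particular server assumed):

```python
import os
import struct

STUN_BINDING_REQUEST = 0x0001
STUN_MAGIC_COOKIE = 0x2112A442  # fixed value defined by RFC 5389


def build_binding_request() -> bytes:
    """Build a 20-byte STUN Binding Request with no attributes."""
    transaction_id = os.urandom(12)  # random 96-bit transaction ID
    # Header: message type, message length (0: no attributes follow),
    # magic cookie, transaction ID.
    header = struct.pack("!HHI", STUN_BINDING_REQUEST, 0, STUN_MAGIC_COOKIE)
    return header + transaction_id


msg = build_binding_request()
assert len(msg) == 20
```

The server's reply carries your public endpoint in a XOR-MAPPED-ADDRESS attribute; each peer then tells the other its endpoint out of band (via a matchmaking server), which is the part services like Steam Datagram Relay automate.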
I wonder if there is a way to use tailscale to make it easy again?
Quite literally folks have done this for decades using Hamachi.
Hamachi and STUN were what I was thinking of when I referred to user-unfriendly NAT busting. It's true that these are not much harder to get working than a modem, but they don't match up with modern consumer expectations of ease-of-use and reliability on firewalled networks. It would be nice if Internet standards could keep up with industry so that these expectations could be met. It's totally understandable where we've landed due to modern security requirements, but I still feel something has been lost.
But how are you going to circumvent the user's firewall? They still have to open ports there, even using STUN or Steam Relay or Hamachi.
You usually just need to forward a port or two on your router. That gets through the NAT because you specify which destination IP to forward it to. You also need to open that port in your Windows firewall in most cases.
Some configuration, but you don't have to update the port forwarding as often as you would expect.
The reason you can't just play games with your friends anymore is that game companies make way too much money from skins and do not want you to be able to run a version of the server that does not check whether you paid real money for those skins. Weirdly, despite literally inventing loot boxes, Valve sometimes does not suffer from this. TF2 had a robust custom server community that had dummied out the checks so you could wear and use whatever you want. Similar to how Minecraft still allows you to turn off authentication so you can play with friends who have a pirate copy.
Starcraft could only do internet play through battle.net, which required a legit copy. Pirated copies could still do LAN IPX play though, and with IPX over IP software you could theoretically do remote play with your internet buddies.
By the way, this is why bnetd is illegal to distribute and was ruled such in a court of law: authenticating with battle.net counts as an "effective" copy protection measure under the DMCA, and providing an alternate implementation that skips that therefore counts as "circumvention technology".
I was half expecting something about how to get tcp into windows, but this is win95 where they shipped it inside the os and put some company out of business that used to sell that.
The turbulent times and the breakneck speed of computer development need to be taken into account. Not long before that, computer networks were strictly corporate things installed by contractors choosing hardware, driver, and software suppliers suitable for tasks performed by employees or students, and someone who installed one at home was the same kind of nerd who would drag an engine from work into his room to tinker. Non-business-oriented software rarely cared about third-party network functions. Then the network card became a consumer device, and a bit later it became integrated and expected.
Also, Windows did not install TCP/IP components on computers without a network card (most of them until the Millennium era), it was an optional component. You could not “ping” anything, as there was no ping utility, nor libraries it could call. In that aspect, those network-less Windows systems were not much different from network-less DOS systems. The installer probably still works that way (or can be made to, by excluding some dependencies), but it's hard to find hardware without any network connectivity today. I wonder what Windows 11 installer does when there is no network card to phone home...
> I wonder what Windows 11 installer does when there is no network card to phone home...
One of "works fine", "needs a command line trick to continue" or "refuses to work completely" depending on which specific edition of win11 your computer has been afflicted with.
I used to use Trumpet Winsock with Windows 3.1
https://en.wikipedia.org/wiki/Trumpet_Winsock
Didn't Win95 get tcp from FreeBSD?
That was Windows 2000.
The story with the Windows NT IP stack is nuanced, but it wasn't just lifted from BSD: https://web.archive.org/web/20051114154320/http://www.kuro5h...
No, I'm fairly certain that berkley sockets were used as a foundation to integrate a full network stack under winsockets, so people wouldn't have to go buy things like Trumpet (Windows 3.1). You could coax out messages saying as much from the command line, but Google is failing me (I'm sure most of this stuff is on usenet, which no one seems to care about these days).
The history of the Windows TCP/IP stack went most likely like this:
IBM (NetBEUI, no TCP/IP) -> Spider TCP/IP Stack + SysV STREAMS environment -> MS rewrite 1 (early NT, Winsock instead of STREAMS) -> MS rewrite 2 (make win2000 faster):
https://web.archive.org/web/20151229084950/http://www.kuro5h...
It's interesting how STREAMS pervaded everything for a short while (Apple's Open Transport networking stack for System 7.5 and up was also based on STREAMS) but everyone almost immediately wanted to get rid of it and just use Berkley sockets interfaces.
Berkeley, for disambiguation.
Oops, too late to edit my comment!
I still don't quite get how you were supposed to communicate with other systems over the network with STREAMS.
With IP you have an address and the means to route the data to that address and back; with TCP/UDP sockets you have the address:port endpoint, so the recipient doesn't need to pass a received packet to all processes on the system, asking "is this yours?".
So if there is already some network stack providing both the addressing and the messaging...
STREAMS isn’t a networking protocol; it’s an internal data-routing facility some UNIXes use locally, and among other things the network stack was implemented on top of it.
You’d still be talking of stuff like IP addresses and the like with it. Probably with the XTI API instead of BSD sockets, which is a bit more complex but you need the flexibility to handle different network stacks than just TCP/IP, like erm…
Win95 had its own stack, codenamed Wolverine
I still get Google hits on my 25 year old DJGPP/NASM tutorial...
This article makes it seem like 1996 was ancient times. There was the internet then, and browsers; Macs had had a TCP stack for a while by then; Quake was an extremely advanced game.
Yeah, the DOS to Windows transition was a big deal, but it was a pretty ripe time for innovation then.
Yeah, but dial-up was slow, laggy, and what 95% of people used to access the internet in those days. Real-time gaming was not fun with anything that used it. I grew up in a rural area in the 1990s and was no match for people that started to get cable modems as time went on.
Even when people had dial-up, a huge majority were using portal-dialers like AOL or Compuserve, and it took extra steps to use those to "get" the Internet directly as opposed to within the walled garden.
And even then they'd often just use the bundled browser.
I remember the first friend who got a cable modem, that shit was insane compared to dial-up.
Dial-up has better latency, since there is no packet switching. So it is slow, but not laggy.
> Dial-up has better latency, since there is no packet switching. So it is slow, but not laggy.
It was laggy as there was buffering and some compression (at least for later revisions of dial-up) that most definitely added latency.
Dialup has a ton of latency (100+ms), but little jitter.
If you're dialed up directly, you should be able to get a little bit better latency as you won't need IP, UDP, and PPP/SLIP headers; at modem bandwidth, header bytes add meaningful latency. But data transmission is still pretty slow, even with small packets.
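The header-overhead point is easy to put numbers on: at modem speeds, every byte costs real time on the wire. A back-of-envelope sketch (the 28-byte IP+UDP figure ignores PPP framing, which adds a few more bytes per frame):

```python
def serialization_delay_ms(payload_bytes: int, overhead_bytes: int, bps: int) -> float:
    """Time to clock one packet out over a serial link, in milliseconds."""
    return (payload_bytes + overhead_bytes) * 8 / bps * 1000


# Just the 20-byte IP + 8-byte UDP headers on a 28.8 kbit/s modem:
headers_only = serialization_delay_ms(0, 20 + 8, 28800)
# A small 64-byte game-update payload with those headers:
small_packet = serialization_delay_ms(64, 20 + 8, 28800)
```

That works out to roughly 7.8 ms per packet for the headers alone, before any modem buffering or compression latency, so stripping IP/UDP/PPP on a direct modem-to-modem link was a measurable win.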
Dialing-up a friend to play Quake, there essentially was no lag.
Dialing-up to the Internet to play Quake via TCP/IP...shit tons of lag (150+ ms).
You're using confusing terminology so you look very wrong. What you mean to say is direct modem-to-modem connections were not laggy because there was no packet switching. This is a true statement.
What the GP comment was talking about was dial-up Internet being most people's exposure to TCP/IP gaming in the 90s. That was most assuredly laggy. Even the best dial-up Internet connections had at least 100ms of latency just from buffering in the modems.
The QuakeWorld network stack was built to handle the high latency and jitter of dial-up Internet connections. The original Quake's network was fine on a LAN or fast Internet connection (e.g. a dorm ResNet) but was sub-par on dial-up.
And then there was also this: https://superuser.com/questions/419070/transatlantic-ping-fa...
> If you wanted to play a multiplayer game on the internet, either you needed to have explicit host & port information, or you needed to use an online multiplayer gaming service.
Technically true, although tools like Kali existed which could simulate IPX-style networks over the internet. I know this because I played a substantial amount of Mechwarrior 2 online when it was designed only for local network play!
> My guess is that, in full screen, memory writes and reads to the VGA are given direct access to the hardware to preserve performances. [sic]
When a DOS VM is in windowed mode, Windows must intercept VGA port I/O and redirect video framebuffer RAM accesses to a shadow framebuffer, then scale/bitblt it for display in its hosted window.
In full screen, exclusive VGA access is possible so it doesn't need to do anything special except restore state on task switching.
Quake would be even faster if it didn't have to support DPMI/Windows and ran in "unreal mode" with virtual memory and I/O port protections disabled.
Of course Quake had to support DOS but id developed Quake on NeXTSTEP which of course had TCP/IP and they had been supporting Linux and other commercial Unix versions like Solaris since Doom a few years earlier.
In an interview with Lex Fridman, John Carmack said that in retrospect, Quake was too ambitious in terms of development time, as it both introduced network play and a fully polygonal 3D engine written in assembly. So it would have been better to split the work in two and publish a "Network Doom" first and then build on that with a polygonal Quake.
Which seems to imply that the network stack was about as difficult to implement as the new 3D engine.
And then you had Romero saying that Quake wasn't ambitious enough...
That's the difference between an engine and game developer.
They theoretically had more than enough time for game design in the ~2 year development period, which was long for the time.
From that first graph: Who was using WinNT in 2005?!
Businesses. And I knew a few diehards who swore Windows 2000 was shit and Windows NT 4.0 was where it was at, even as a workstation.
Windows 95 and MS-DOS in 2004 worry me a bit more.
I'm guessing none of these diehards were running domain controllers.
> And in 1995 there were only two: us, and Total Entertainment Network. You might think game creators would come to us and say "please put my game on your service!", but... nope! Not only did we have a licensing team that went out and got contracts to license games for our service, but we had to pay the vendor for the right to license their game, which was often an exclusive. So, we had Quake and Unreal; TEN got Duke Nukem 3D and NASCAR.
FWIW, the Total Entertainment Network (TEN) got Quake later, here's a press release from September 30 1996 [1]. Wikipedia says QTest was released Feb 24, 1996, and I can't find when MPath support launched, so I don't know how long the exclusive period was, but not very long. Disclosure: I was a volunteer support person (TENGuide) so I could get internet access as a teen; I was GuideToast and/or GuideName.
[1] https://web.archive.org/web/20110520114948/http://www.thefre...
Ah I remember running a serial cable from my bedroom to the hallway so we could play 1v1 quake via direct connect. Good times! I think we used to play age of empires that way too.
On DOS running fast under W95: how come no one created an SVGALIB wrapper trapping i386 code and redirecting the calls to an SDL window?
> It is impressive to see Quake run at full speed knowing that Windows 95 runs DOS executable in a virtual machine. My guess is that, in full screen, memory writes and reads to the VGA are given direct access to the hardware to preserve performances
Virtual x86 mode had little to do with what we nowadays think of when someone says ”virtual machine”
Arguably it had a great deal to do with what we think of as a "virtual machine."
Virtual 8086 mode remapped all opcodes and memory accesses to let a program pretend it was on a single, real-mode 8086 when in reality it was one of many programs running in protected mode on a newer chip.
AMD-V and whatever the Intel counterpart is do almost exactly the same thing: re-map all ia32 and amd64 instructions and memory accesses to let a program pretend to be doing ring 0 stuff on an x86 box, when in reality it is one of many programs running with fewer privileges on a newer chip.
There are a few more wrinkles in the latter case -- nested page tables, TLB, etc -- but it is the same idea from the viewpoint of the guest.
Not entirely related but Quake had a VM though, executing scripts written in QuakeC[0] which would drive the AI, game events, etc.
[0]: https://en.wikipedia.org/wiki/QuakeC
But your link says that QuakeC was a compiled language
QuakeC was compiled into QuakeVM bytecode, which made all mods and logic portable between platforms without having to recompile everything every time, unlike what had to be done for Quake 2 (which was 100% native code).
This hurt performance a bit but in the longer term benefited the modding scene massively.
It's compiled into bytecode, so it still requires a VM / bytecode interpreter (whatever you want to call it).
Compiled to bytecode.
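The distinction the thread is drawing — compile once to portable bytecode, then interpret that bytecode with a small VM on every platform — can be sketched with a toy stack machine. The opcodes here are invented for illustration; they are not QuakeC's actual instruction set:

```python
# Hypothetical opcodes, not QuakeC's real instruction set.
PUSH, ADD, MUL, HALT = range(4)


def run(bytecode):
    """Interpret a list of (opcode, argument) pairs on a tiny stack machine."""
    stack = []
    for op, arg in bytecode:
        if op == PUSH:
            stack.append(arg)
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == HALT:
            break
    return stack[-1]


# "Compiled" form of (2 + 3) * 4. The same bytecode runs on any host
# that has the interpreter, which is the portability QuakeC bought.
program = [(PUSH, 2), (PUSH, 3), (ADD, None), (PUSH, 4), (MUL, None), (HALT, None)]
assert run(program) == 20
```

The trade-off the thread mentions falls out directly: every instruction goes through the interpreter's dispatch loop (slower than native code), but the compiled program itself never needs rebuilding per platform.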