The title reminds me of the 5th installment of The Hitchhiker's Guide to the Galaxy by Douglas Adams:
"Further investigation quickly established what it was that had happened. A meteorite had knocked a large hole in the ship. The ship had not previously detected this because the meteorite had neatly knocked out that part of the ship's processing equipment which was supposed to detect if the ship had been hit by a meteorite."
The book ("Mostly harmless") and especially the beginning of the first chapter is worth reading as it describes how the automated systems of the space ship try to resolve the situation.
https://www.penguinrandomhouse.ca/books/661/mostly-harmless-...
It's down detectors all the way down
Unfortunately this website relies on Tailwind's CDN for styling, which in turn is deployed on Vercel, which in turn is mostly hosted on AWS.
The page is 320KB in size. They could have made it a static page with some simple HTML; the whole thing would have been under 10KB and would not have needed a CDN.
Wasn't there a tech demo some time ago showing how to store a tiny webpage in DNS TXT records? I think this would be the use case for that :)
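There was indeed; the trick is that a TXT record can hold one or more strings of up to 255 bytes each, so a small page can be split across them. A minimal sketch in Python, assuming dnspython is installed; the record name and chunking scheme here are made up:

    import dns.resolver

    def fetch_page_from_txt(name: str) -> str:
        # NOTE: DNS gives no ordering guarantee across multiple TXT
        # records, so a real scheme would need sequence prefixes.
        answers = dns.resolver.resolve(name, "TXT")
        chunks = []
        for rdata in answers:
            # each TXT record carries one or more <=255-byte strings
            chunks.extend(s.decode() for s in rdata.strings)
        return "".join(chunks)

    if __name__ == "__main__":
        print(fetch_page_from_txt("page.example.com"))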
https://isitdns.com/ would like a word
Probably churned out using v0, which defaults to bloat.
The thing that worries me the most is that oftentimes nobody cares. That demotivates me a lot, as I tend to invest huge amounts of my time into optimising various things, and all of them are meaningless if you ‘just buy a faster computer.’ Most of my websites are served from low-powered computers, and I tend to optimise them to work well there. But buying just one beefy server cancels out all my optimisations. I have no idea what to do about that. I still care about these things, as I believe that’s what makes me a professional. But there are countless examples where you can just ignore all that and see no real difference.
The bottom turtle should be a raspberry pi in somebody’s closet. No dependencies.
Bad news about ISPs... Really, you want an RPi on solar power, attached to a longwave transmitter, with direct peering agreements with all the dominant global providers. The most well-connected RPi in existence.
Add that moon-bouncing thing that got popular last week. For redundancy.
Those responsible for sacking the people who have just been sacked, have been sacked
[0] https://youtu.be/79TVMn_d_Pk?t=117
This is becoming a good sign that it was AI-generated. For some reason the AIs really love using Tailwind CSS.
Human devs also love using Tailwind.
I'm not affiliated with this genius. I was just snooping around the other thread (https://news.ycombinator.com/item?id=45974012), took a chance at modifying the site's URL, and found myself pleasantly surprised.
It would be great to register this on Downdetector to make sure it is up.
And a page monitoring this one: https://onlineornot.com/website-down-checker?requestId=o398t...
This one looks like it's behind a CDN, at least
Duplicate: https://news.ycombinator.com/item?id=45974012
No, it's just one layer deeper.
Relevant: https://en.wikipedia.org/wiki/List_of_lists_of_lists (editing discussion is amusing).
Clearly, the proper solution is to have a p2p mesh of down detectors.
As per usual, everything new is just something old and well-forgotten.
Interestingly enough, the architecture of "a p2p mesh of down detectors" converges with the architecture of "not using a down detector".
Yes, downdetectorsdowndetectorsdowndetectorsdowndetector is available.
Well, that was fast.
https://downdetectorsdowndetectorsdowndetectorsdowndetector....
Is there a length limit for domain names? :)
Yes, according to RFC 1035 section 2.3.4 [0], it's 255 octets for the full name (with each label capped at 63 octets). Long answer written by a human: https://superuser.com/a/1843870
[0] https://www.rfc-editor.org/rfc/rfc1035#section-2.3.4
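For the curious, both limits are easy to check mechanically: each label is capped at 63 octets, and the whole name in wire format (a length octet per label plus the label bytes, plus the terminating root octet) is capped at 255. A rough sketch, assuming plain-ASCII labels:

    def fits_rfc1035(name: str) -> bool:
        labels = name.rstrip(".").split(".")
        # each label is limited to 63 octets
        if any(not 1 <= len(label) <= 63 for label in labels):
            return False
        # wire format: a length octet plus the label bytes per label,
        # plus one zero octet for the root; 255 octets max in total
        return sum(len(label) + 1 for label in labels) + 1 <= 255

    print(fits_rfc1035("downdetectorsdowndetectorsdowndetectorsdowndetector.com"))  # True
    print(fits_rfc1035("a" * 64 + ".example"))  # False: label too long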
I've reached semantic satiation.
Time for updetector.com! (On the plus side, it could detect whether it itself was up!)
In a similar fashion, Datadog just released: https://updog.ai
What's Updog?
Can it detect when it itself is down?
No, you will need another layer of down detector.
That'll be HN indeed.
Just thinking about it, wouldn't a distributed P2P "mesh" be a better fit for reliability probing? We could share results, see where it was inaccessible from. It's kind of an oxymoron to have a centralized down detector lol
Sure, a p2p network of people doing distributed pings on a wide range of services sounds like a good idea. Of course, you'd need people willing to run it. A small incentive might be needed... or just a default of "if you want to use this software, you agree to also have your client ping other websites to check if they're up from your location".
But apparently it's not a new idea; a quick search led to https://www.reddit.com/r/selfhosted/comments/1lv9flt/built_a... / https://synthmon.io/home
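A minimal sketch of what a single node in such a mesh might run; the aggregator endpoint and report format here are invented for illustration (the projects linked above define their own protocols):

    import json
    import time
    import urllib.request

    SITES = ["https://example.com", "https://news.ycombinator.com"]
    AGGREGATOR = "https://aggregator.invalid/report"  # hypothetical endpoint

    def probe(url: str, timeout: float = 5.0) -> dict:
        # Fetch the URL and record whether it answered, plus round-trip time.
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                up = 200 <= resp.status < 400
        except Exception:
            up = False
        rtt_ms = round((time.monotonic() - start) * 1000)
        return {"url": url, "up": up, "rtt_ms": rtt_ms}

    def report(results: list) -> None:
        # Share this node's observations with the (hypothetical) aggregator.
        req = urllib.request.Request(
            AGGREGATOR,
            data=json.dumps(results).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req, timeout=5.0)

    if __name__ == "__main__":
        report([probe(site) for site in SITES])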
How To Build a Botnet 101
Or—hear me out—we actually build services that leverage the native distributed infrastructure of the internet, so that we don't need down detectors. What a concept.
100% agree. But with the most-used services being pushed by corporations, it will remain centralized until the "distributed mesh" becomes at least as good/robust.
I think this is so important. In fact, with services now becoming utilities for daily life and the national/global economy, it's something that people like DARPA could get behind. We understand why a big peering corp's incentives might not align with true distribution (and hence how they may lobby to cripple certain useful p2p APIs to keep them from becoming widely 'distributed'), but it's something we should really push for and technically just do. And we'd probably find many allies in the system continuity and reliability space.
It’s down for me.
I'm almost wishing for the next major outage just so I can see this working :-)
It's not checking from South America; they need to deploy more capital.
FYI, there's also a 4x: https://downdetectorsdowndetectorsdowndetectorsdowndetector....
Didn't check past that.
Seems to be down...
Quick! Time to register downforeveryone-orjustdowndetector.com :D
Who watches the watchmen indeed
Seems like this madness is only going to end when we hit the 63-character limit for domain name labels.
Who detects the down detector's down detector's downs?
I'm really hoping downdetector.com does.
It's down detectors all the way down.
Hm, looks like this site is down.
Now let's get it onto Downdetector's site list and complete the loop!
UNLIMITED POWER!
This is down.
The ultimate down detector should also have a fixed IP address, in case other stuff fails as well.
Yes, the ultimate down detector should be hosted on a static IP, with no need to go through DNS.
Down detectors all the way down
I think we need to make a highly-available downdetector from a collection of SBCs hosted around the world. Each node gets its configuration via git-pull, which is self-hosted/republished. Simplest DNS configuration possible: each node gets a unique $n.isdowndetectordown.ultradowndetector.com, while all of them also happily serve a common hostname with simple DNS round-robin entries for it.isdowndetectordown.ultradowndetector.com. The common page attempts to load a check resource (perhaps just a tiny CSS output?) from all of the $n.i.u.c nodes, which just changes a div from gray to green/red.
It would be interesting to see just how small this whole thing could be; I bet it could be made into a <500MB SD-card image for a Raspberry Pi 4/2GB that simply updates a static CSS out of (say) cron and serves a surprising number of HN requests.
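A rough sketch of the cron side under those assumptions; the watch target and file path are placeholders:

    #!/usr/bin/env python3
    # Run from cron on each node: probe the watched service and
    # rewrite a one-rule CSS file. The common page loads this CSS
    # from every $n node, so a div with class "status" turns
    # green or red depending on what this node last saw.
    import urllib.request

    TARGET = "https://downdetector.com"    # placeholder watch target
    CSS_PATH = "/var/www/html/check.css"   # served as the check resource

    def is_up(url: str) -> bool:
        try:
            with urllib.request.urlopen(url, timeout=5.0) as resp:
                return 200 <= resp.status < 400
        except Exception:
            return False

    color = "green" if is_up(TARGET) else "red"
    with open(CSS_PATH, "w") as f:
        f.write(".status { background-color: %s; }\n" % color)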
With all of this redundancy, there is no way it could fail! /s