The Spinrite guy was the first to do this (I think). https://www.grc.com/dns/benchmark.htm
That said, more options are good. I'll give this one a go.
Thanks for pointing to Gibson’s DNS Benchmark — it’s definitely a classic and set the stage for this kind of testing. This project takes a different angle: it’s CLI‑first, scriptable, and designed for both quick “top” checks and deeper “benchmark” runs, plus a monitoring mode for ongoing resolver health. Glad you’re giving it a try; feedback is welcome.
Works on Linux with Wine too =)
Nice! Good to know the GRC benchmark runs under Wine on Linux as well. That makes it easier for folks on Linux who want to compare its results against this tool side by side.
If you want to run Python tools without installing them, you can use uvx: uvx --from dns-benchmark-tool dns-benchmark top
This pulls the package from PyPI and runs the top command right away.
Here's my take. Ads will happily eat 300ms per webpage if you allow them to load. A fast DNS is great, but an adblocking DNS will save you much more time if you're just browsing.
DNS is utilized for many things besides looking up web sites (and consequently ads on web sites). DNS was used for many things etcd was invented to solve, and still is by many. Adblocking is kidstuff; the bearded, motorcycle riding, gun-shooting, jumping out of airplanes and hanging off of rocks jackals use a "DNS firewall" (just posted this the other day): https://www.dnsrpz.info/ and Dnstap for application-level DNS logging.
Absolutely — DNS goes way beyond just resolving websites. It’s been used for service discovery and coordination long before tools like etcd came along, and still is in many systems today. Adblocking is one use case, but DNS firewalls (like RPZ) and logging frameworks such as Dnstap show how powerful DNS can be at the infrastructure level. Thanks for sharing the link — it’s a great reminder that benchmarking speed is only one piece of the bigger DNS picture.
That’s a good point — adblocking DNS can definitely save time by cutting requests before they even reach the browser. The focus here was on resolver speed and monitoring, but pairing it with an adblocking DNS is a smart way to get both performance and less clutter while browsing.
Fast DNS and adblocking DNS (or other methods, for that matter) are not mutually exclusive topics, even assuming your primary use case for DNS resolution on a given machine is web browsing.
Absolutely — fast DNS and adblocking DNS aren’t mutually exclusive. The tool here is focused on resolver speed and monitoring, but it can benchmark adblocking resolvers just as well. That way you can pick the one that balances performance with blocking, depending on your browsing needs.
I doubt that your conclusion is correct (because local DNS resolvers that consult blocklists are often surprisingly slow) but I think your theory of the matter is accurate. The raw speed of the DNS server is almost irrelevant because there are other much larger systemic performance issues at stake. For example Cloudflare does not forward EDNS to the origin, so the records it returns are suboptimal for services that use DNS-based service affinity. It doesn't make a difference to me if Cloudflare is a few microseconds faster — and by the way I sincerely doubt that this Python program is observing meaningful microsecond-scale differences — because overall it makes applications slower.
Fair points — blocklist‑based local resolvers can indeed be slower, and raw speed alone doesn’t capture the bigger systemic issues. The tool isn’t trying to measure microsecond‑scale differences, but rather provide a clear comparison of resolver behavior under load and over time. Things like EDNS handling and service affinity are exactly the kind of deeper characteristics that benchmarking can help surface, so users can decide which trade‑offs matter most for their environment.
is this similar to the GRC tool?
Yes, it’s in the same space as Gibson’s GRC DNS Benchmark — that tool has been around for years and set the standard for GUI‑based testing. This project takes a different angle: it’s CLI‑first, scriptable, and adds modes for quick checks, deeper benchmarks, and ongoing monitoring. So it’s more aimed at automation and sysadmin workflows than interactive GUI use.
https://github.com/farrokhi/dnsdiag is another great toolbox for looking into DNS problems.
Yes, dnsdiag is a solid toolbox — it’s great for digging into DNS issues at the packet level. This project is aimed more at benchmarking and monitoring resolvers over time, so they complement each other well: dnsdiag for diagnostics and troubleshooting, dns‑benchmark‑tool for comparative speed and health checks.
Very neat tool!
Thanks! Glad you find it neat. The goal was to make DNS benchmarking simple to run from the CLI, with quick checks, deeper benchmarks, and monitoring all in one place. Feedback is always welcome.
You are, presumably, already familiar with the ISC Looking Glass?
https://isc.sans.edu/api/dnslookup/google.com
Yes, the ISC Looking Glass is a great resource — it’s handy for quick DNS lookups and seeing how queries resolve from their vantage point. This project is aimed more at benchmarking and monitoring resolvers over time, so they complement each other: Looking Glass for snapshots, dns‑benchmark‑tool for comparative speed and ongoing health checks.
This GPTSlop isn't endearing me to your future service offering.
Things built with asyncio and dnspython are close to my heart. ;-)
So, my impression from the doc (and a quick browse of the code) is that this is a tool for monitoring DNS caching / recursing resolver (RD) performance, not authoritative. If performance really matters to you, you should be running your own resolver(s). [0] Granted, you will quickly realize that some outfits running auth servers seem to understand that they're dependent on caching / recursing resolvers, and some are oblivious. Large public servers (recursing and auth) tend to "spread the pain" and so most people don't feel the bumps; but when they fall over they fall over large, and they bring some principles (and thereby create "vulnerabilities") at odds with what the DNS was architected for and throw the mitigation on the other operators, including operators who never accepted these self-anointed principles to begin with.
I have a hard time understanding how DNS is adding 300ms to every one of your API requests... unless DNS is both the API and transport, or you're using negative TTLs /s.
Good doc, by the way.
[0] Actual resolvers. Not forwarders.
Thanks for the thoughtful read — and yes, the tool is focused on caching / recursing resolver performance, not authoritative. The asyncio + dnspython stack makes it easy to script and monitor those behaviors over time. Running your own resolver is definitely the gold standard if performance and control really matter, but benchmarking public ones helps surface the trade‑offs users face in practice. The 300ms example was more about illustrating how ads and systemic factors can dwarf raw resolver speed, not a claim about per‑request DNS overhead. Appreciate the detailed perspective and glad the doc came across clearly.