Blog post for people who prefer reading: https://hackaday.com/2026/04/11/implementing-pcie-over-fiber...
While at a higher level, thunderbolt and https://en.wikipedia.org/wiki/ExpEther can both of course work over fiber too!
(Q|O)SFP are basically just raw high-speed serial interfaces to whatever - you see this a lot in FPGAs, where you can use the QSFP interfaces for anything high speed - PCIe, SATA, HDMI…
> Although we can already buy commercial transceiver solutions that allow us to use PCIe devices like GPUs outside of a PC, these use an encapsulating protocol like Thunderbolt rather than straight PCIe.
> [snip]
> As explained in the intro, this doesn’t come without a host of compatibility issues, least of all PCIe device detection, side-channel clocking and for PCIe Gen 3 its equalization training feature that falls flat if you try to send it over an SFP link.
So, uh… what’s the benefit? How much overhead does Thunderbolt really introduce, given it solves these other issues?
The benefits are twofold: physical colocation and bandwidth.
Thunderbolt 5 offers 80Gbps of bidirectional bandwidth. PCIe 5.0 x16 offers 1024Gbps of bidirectional bandwidth. This matters.
TB5 cables can only get so long whereas fiber can go much farther more easily. This means that in a data center type environment, you could virtualize your GPUs and attach them as necessary, putting them in a separate bank (probably on the same rack).
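For reference, a rough sketch of where those numbers come from (my own back-of-the-envelope, assuming PCIe 5.0's 32 GT/s per lane with 128b/130b encoding, and counting both directions for the aggregate figure):

```python
# Back-of-the-envelope line rates, not usable throughput after protocol overhead.
PCIE5_GT_PER_LANE = 32           # GT/s per lane for PCIe 5.0
PCIE5_ENCODING = 128 / 130       # 128b/130b encoding efficiency
LANES = 16

per_direction = PCIE5_GT_PER_LANE * LANES * PCIE5_ENCODING   # ~504 Gbps each way
aggregate = 2 * per_direction                                 # ~1008 Gbps both ways combined

TB5_SYMMETRIC = 80               # Gbps each way in Thunderbolt 5's symmetric mode

print(f"PCIe 5.0 x16: ~{per_direction:.0f} Gbps/direction, ~{aggregate:.0f} Gbps aggregate")
print(f"Thunderbolt 5: {TB5_SYMMETRIC} Gbps/direction")
```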
"same rack" should still be fine for 1m passive TB5 cable though, right?
> 1024Gbps
Good luck getting a 1Tbit transceiver. Anydirectional. Also it's 512Gbitish per direction.
Easy, fs.com has 1.6Tbps OSFP for about 570€ - though only up to 1m length apparently.
The video is about a 2x1 link, which the author hopes to eventually scale up to 3x4 using 40 gig transceivers. I'd say thunderbolt is probably safe in the near future.
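Assuming "2x1" and "3x4" mean PCIe Gen 2 x1 and Gen 3 x4 respectively, the line rates work out roughly like this (my sketch, not from the video):

```python
# Rough PCIe line-rate check for the link widths mentioned above.
def pcie_gbps(gt_per_lane: float, lanes: int, encoding: float) -> float:
    """Raw usable line rate per direction in Gbps."""
    return gt_per_lane * lanes * encoding

gen2_x1 = pcie_gbps(5, 1, 8 / 10)       # Gen 2: 5 GT/s, 8b/10b   -> ~4 Gbps
gen3_x4 = pcie_gbps(8, 4, 128 / 130)    # Gen 3: 8 GT/s, 128b/130b -> ~31.5 Gbps

print(f"Gen 2 x1: {gen2_x1:.1f} Gbps, Gen 3 x4: {gen3_x4:.1f} Gbps")
# ~31.5 Gbps for Gen 3 x4 fits under a 40G transceiver's line rate,
# which would explain the choice of 40 gig optics for the scaled-up version.
```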
That's 64Gb per lane across x16 lanes. That doesn't sound daunting?
There are already 800Gb transceivers readily available; 1.6Tb is probably getting preview deployments to some hyperscalers and other early adopters as we speak.
Bidirectional is a lot like biweekly. Biweekly, depending on context, means either twice a week or once every two weeks, and bidirectional can mean either per direction or the total of both directions.
But yes I meant 512Gbps each way, to be clear.
I'm only a single datapoint but I've never encountered that usage. My understanding of a bidirectional link is that it meets the same spec in both directions simultaneously. It's important precisely because many links aren't bidirectional, sharing a single physical link between two logical links.
I love the Neon Genesis background, awesome project too.
The neon genesis background plus this awesome technical breakdown feels so early 2000s.
This was a super interesting video to watch. I honestly thought SFP required more setup, but this explains why AliExpress is so rife with USB3 and HDMI over SFP converters that are dirt cheap.
How does this compare to something like RDMA over Converged Ethernet (RoCE)?
A fun tangent - if someone wants to explore how Azure is performing RDMA over RoCEv2 - check this paper out - https://www.microsoft.com/en-us/research/wp-content/uploads/...
There is an interesting NSDI talk on the paper too - https://www.youtube.com/watch?v=kDJHA7TNtDk (2023)
It seems rather educational.
Cool project! PCIe itself, I think, is likely to end up doing something similar soon; there are provisions in the spec now for optical retimers.
There are a number of optical modules for TB3 and TB4; that might be an easier (but less fun) route, as TB3 and TB4 can carry PCIe.
So you're saying I can put a handful of 4090's out in the middle of snowy Michigan with a handful of OM4 cables snaking into my basement to run legit arctic cooling with no noise?
No part of Michigan is in the arctic, but sure, outside of mosquito season, that would work.
Might as well put your entire computer outside and use thunderbolt/usb-4 over fiber docks
A watercooling loop might be better; the radiator fins will still rust from condensation.
I mean yes, but you could also just place the entire computer out there as well