We built a technology which uses light to control light: Finchetto CEO on ditching electronics to make networks faster
Date:
Mon, 26 Jan 2026 23:35:00 +0000
Description:
Finchetto CEO tells us about switching data using light instead of
electronics in hyperscale networks.
FULL STORY ======================================================================
In August 2025, I wrote about Finchetto, a UK photonics startup working on
an optical packet switch that keeps data entirely in the optical domain
rather than bouncing between light and electronics.
The firm's breakthrough technology could make hyperscale networks dramatically faster, just as AI systems begin to strain today's infrastructure. The approach also aims to cut power use while remaining scalable as link speeds increase.
In a bid to find out more, I spoke to Finchetto CEO Mark Rushworth about how the technology works, why packet switching in optics matters, where the hard problems still are, and how this could fit into real hyperscale and AI networks.

What inspired Finchetto to focus on photonic packet switching, and how does it differ from traditional electronic switching?
With Finchetto, we looked at the way networks run today and saw that there
was a lot of unnecessary work going on.
A server or GPU often sends data as light, then that light gets converted
into electrons inside a switch so a processor can figure out where it should go. It's then turned back into light again to leave the box. This back and forth introduces a cost in the form of power and latency.
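That per-hop cost can be made concrete with a back-of-the-envelope model. The sketch below is purely illustrative: the latency figures are assumptions chosen to show the shape of the argument, not Finchetto measurements.

```python
# Illustrative model of per-hop switching cost: an O-E-O switch pays for the
# optical->electrical conversion, electronic processing, and the conversion
# back to light, while an all-optical switch does not. All numbers here are
# assumptions for illustration only.

OEO_HOP = {
    "oe_conversion_ns": 20,        # optical -> electrical at ingress (assumed)
    "serdes_and_lookup_ns": 400,   # deserialise, parse header, forward (assumed)
    "eo_conversion_ns": 20,        # electrical -> optical at egress (assumed)
}
OPTICAL_HOP = {
    "optical_switching_ns": 10,    # packet stays in the optical domain (assumed)
}

def path_latency_ns(hop_cost: dict, hops: int) -> int:
    """Total switching latency over a path, ignoring propagation delay."""
    return sum(hop_cost.values()) * hops

hops = 3  # e.g. leaf -> spine -> leaf
print("O-E-O path:", path_latency_ns(OEO_HOP, hops), "ns")
print("All-optical path:", path_latency_ns(OPTICAL_HOP, hops), "ns")
```

Even with generous assumptions, the electronic hop cost multiplies with every tier of the fabric, which is the "back and forth" cost Rushworth is describing.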
We then asked ourselves if we could do that without falling back into the electronic domain. To do that, we built a technology which uses light to control light, so the switching all happens in the optical domain.
Most of the photonics work you see elsewhere is still circuit switching,
which pins a path between two endpoints, using things like MEMS mirrors or thermo-optic devices to steer light.
The disadvantage there is a relatively slow reconfiguration, and it doesn't keep up with packet-by-packet decisions at 1.6 or 3.2 Tbps. It's in packet switching in optics that you get the real flexibility and performance, and that's the gap we set out to fill.

When you bring that into big networks, what advantages do you see in terms of speed, efficiency, and scalability?
I'd say speed is the most obvious advantage, but efficiency is just as important. When you keep the signal as light, rather than translating it from light to electrons and back, you don't burn as much power or experience as much of a delay.
In terms of scalability, all-optical packet switching allows you to build very large, very flexible networks. You can make routing decisions at the packet level, so you can spread workloads much more evenly across a big fabric.
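The even-spreading point can be illustrated with a toy comparison between per-flow hashing (each flow pinned to one path) and per-packet decisions (each packet free to take any path). This is a generic load-balancing illustration, not a description of Finchetto's actual scheduler.

```python
# Toy illustration: per-packet decisions spread load more evenly than
# per-flow hashing when one "elephant" flow dominates the traffic.
# Illustrative only; not Finchetto's actual scheduling logic.
import random
from collections import Counter

SPINES = 4
random.seed(0)

# 8 flows, one of which carries most of the packets
flows = [("flow0", 1000)] + [(f"flow{i}", 10) for i in range(1, 8)]

# Per-flow: every packet of a flow is pinned to one spine (hash of flow id)
per_flow = Counter()
for name, pkts in flows:
    per_flow[hash(name) % SPINES] += pkts

# Per-packet: each packet independently takes any available spine
per_packet = Counter()
for _, pkts in flows:
    for _ in range(pkts):
        per_packet[random.randrange(SPINES)] += 1

print("per-flow spine loads:", sorted(per_flow.values(), reverse=True))
print("per-packet spine loads:", sorted(per_packet.values(), reverse=True))
```

With per-flow hashing the elephant flow lands whole on a single spine; with per-packet decisions its load is spread across all of them, which is what keeps a big fabric from developing hot spots.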
Using standard concepts like spine-and-leaf, but implemented with our photonic switches, you can push to tens of thousands of nodes without the network itself becoming a choke point.

How does that translate into real-world impact for hyperscale data centers from a performance and energy perspective?
Energy is top of the agenda for any hyperscaler right now. Anything that reduces network power consumption without hurting performance is going to
have a positive impact on the bottom line and therefore on competitiveness.
Our approach removes a lot of the electro-optical conversions and many of the transceivers that fail most often, so you get a network that uses less power and is more resilient at the same time.
You can add Finchetto switches in phases, so you're improving performance and energy efficiency over time while still sweating existing assets. That's a much easier business case than ripping and replacing.

What does this mean specifically for emerging workloads like AI and other advanced compute?
AI is a perfect example of where the network can quietly kill your performance. These training clusters want to move huge volumes of data between GPUs with very tight timing. If the fabric can't keep up, you end up with expensive silicon sitting idle.
By doing packet switching in optics with extremely low latency, we remove a lot of those bottlenecks at the hardware level. It also opens up options that weren't practical before. Some of the more exotic topologies - torus, dragonfly-style architectures and so on - were historically hard to justify because the latency budget just didn't work with conventional switching.
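The latency-budget argument is easy to quantify for a torus, where the worst-case hop count is a standard formula and every hop pays the switch's latency. The per-hop figures and budget below are hypothetical, chosen only to show how per-hop latency decides feasibility.

```python
# Why per-hop switch latency decides whether a torus topology fits a latency
# budget. The hop-count formula is the standard one for a k-ary n-dimensional
# torus; the switch latencies and budget are illustrative assumptions.

def torus_max_hops(k: int, n: int) -> int:
    """Diameter of a k-ary n-dimensional torus (with wraparound links)."""
    return n * (k // 2)

def worst_case_ns(k: int, n: int, per_hop_ns: float) -> float:
    """Worst-case switching latency across the torus, ignoring fibre delay."""
    return torus_max_hops(k, n) * per_hop_ns

k, n = 8, 3          # an 8x8x8 torus: 512 nodes, diameter 12 hops
budget_ns = 2000     # hypothetical end-to-end fabric latency budget
for label, per_hop in [("electronic switch", 500), ("optical switch", 20)]:
    t = worst_case_ns(k, n, per_hop)
    verdict = "fits" if t <= budget_ns else "blows"
    print(f"{label}: {t:.0f} ns worst case, {verdict} a {budget_ns} ns budget")
```

With a high per-hop cost the multi-hop paths of a torus blow the budget, which is why such topologies were hard to justify; cut the per-hop cost and the same geometry becomes viable.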
When your switch isn't the limiting factor anymore, network architects can revisit those ideas and pick the topology that really suits the workload, rather than the one that works around the hardware.

How easily can data centers plug Finchetto into what they already have?
That's been one of our big design principles from day one. The reality is that hyperscale data centers are already operating at a level the market accepts, and a lot of capital has gone into getting them there.
No one is going to say, "Nice idea, we'll rebuild everything around it." We've spent a lot of time making sure our technology looks and feels like a good citizen in a modern network.
It interoperates with existing transceivers, NICs, GPUs and cabling, and it drops into familiar architectures rather than demanding you redesign the
whole thing. That means you can start with targeted deployments - a new AI pod or a performance-critical part of the fabric - and grow from there as you see the benefits.

Stepping back a bit, what trends in photonics and networking excite you most right now, and what are the main hurdles to wider adoption?
Photonics has moved from being an interesting research focus to being central to the roadmap for the biggest players in the industry. You can see that in the attention around co-packaged optics, and in major acquisitions of early-stage photonics companies.
When leaders like Nvidia say, "We need optics right next to the compute," the rest of the industry listens. The hard part is building a complete system
that operators trust. It must integrate cleanly with GPUs, NICs,
motherboards, and tools they already use; it must be reliable over its lifetime; and it must be straightforward to manage and upgrade.
Our answer is to make the optical core as passive and line-rate agnostic as possible. If you go from 800 Gbps to 1.6 Tbps, the switch in the middle doesn't need to change, which is a very different proposition from replacing whole tiers of electronic gear every time you move up a speed notch.

If your switch is entirely optical and doesn't have internal buffers, how do you stop packet loss and collisions in hot spots?
In a traditional electronic or hybrid switch, you lean on memory and
buffering to smooth things out. In a pure optical system, you don't get that, so you have to think differently.
What we've done is build collision avoidance and return-to-sender into the optical layer itself.
The switch can effectively tell whether a given path is free before it sends traffic down it. If it isn't, the packet doesn't go, so you avoid most collisions up front.
In the rare case where two packets clash, there's a mechanism to return one of the packets to sender to retry.
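The control flow of those two mechanisms can be sketched as a small software model. This is purely illustrative: the real behaviour happens in optics, not in code, and the class and method names here are invented for the sketch.

```python
# Toy model of the two mechanisms described above: check that the output path
# is free before launching a packet, and return the packet to its sender for a
# retry if the path is occupied. Illustrative only; Finchetto implements this
# in the optical layer, not in software.

class OpticalPort:
    def __init__(self):
        self.busy = False   # is a packet currently occupying this path?

class Switch:
    def __init__(self, n_ports: int):
        self.ports = [OpticalPort() for _ in range(n_ports)]
        self.returned = []  # packets sent back to their sender to retry

    def send(self, packet: str, out_port: int) -> bool:
        port = self.ports[out_port]
        if port.busy:
            # Path not free: the packet is not launched into the fabric,
            # and the sender is told to retry (return-to-sender).
            self.returned.append(packet)
            return False
        port.busy = True    # path occupied for the packet's duration
        return True

sw = Switch(n_ports=4)
assert sw.send("pkt-A", out_port=2)       # path free: packet goes through
assert not sw.send("pkt-B", out_port=2)   # same path busy: returned to sender
sw.ports[2].busy = False                  # path frees up
assert sw.send("pkt-B", out_port=2)       # the retry succeeds
```

The key property is that a packet is never dropped silently: it either finds a free path up front or comes back to the sender, which substitutes for the buffers an electronic switch would use.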
All of this happens in optics, which is the clever bit, and it means you keep the benefits of an all-optical fabric while still getting full packet-switching functionality in the network.

Zooming out to the UK specifically: as the country ramps up investment in AI and data centers, what should it be doing to make sure homegrown photonics and networking tech actually gets used?
Most of the real UK innovation in this area is coming from startups, simply because there aren't any big domestic switch vendors.
The risk is that we spend a lot of public money building AI infrastructure that's essentially a shop window for overseas suppliers, while the UK
companies doing the hard R&D never really get a foothold.
What would really help is proper support through the scale-up phase and into deployment: funded testbeds, like you see in quantum, where new technologies can be proven in realistic environments, and procurement frameworks that make it natural rather than exceptional to include UK-developed tech.
If we're serious about sovereign capability in data centers and AI, we have to move beyond just hosting other people's hardware.

Where else do you see photonics transforming networking?
It's easy to focus on the big data centers because that's where AI and cloud live today, but networking is much broader than that.
Think about intersatellite links in space, free-space optical links bringing connectivity to hard-to-reach areas, or secure, high-bandwidth connections between aircraft or autonomous vehicles in defense.
Those are all fundamentally networking problems, and they're all places where photonics can make a big impact.

Finally, how do you see Finchetto's architecture evolving to meet future needs like quantum networking, optical compute, or photonic memory?
The way we've structured our IP is quite intentional. At its heart, what we've patented is a method and apparatus for switching data using nonlinear optics. In other words, it's not tied to one very narrow implementation or use case.
That gives us a lot of headroom. The same underlying switching principle can be applied to different kinds of networks, whether that's classical high-speed packet networks, future quantum-adjacent architectures, or systems where compute and memory themselves are optical.
We're focused on solving today's problems around AI and hyperscale networking, but we're doing it with a technology base that can move with the industry rather than getting stranded as the next wave arrives.
======================================================================
Link to news story:
https://www.techradar.com/pro/we-built-a-technology-which-uses-light-to-control-light-finchetto-ceo-on-ditching-electronics-to-make-networks-faster
--- Mystic BBS v1.12 A49 (Linux/64)
* Origin: tqwNet Technology News (1337:1/100)