- Neurophos is developing the optical Tulkas T100 processor, claimed to deliver 470 petaFLOPS of AI compute
- Its optical transistors are 10,000 times smaller than today's conventional silicon photonics components
- A dual-reticle design integrates 768GB of HBM for memory-intensive workloads
Austin-based startup Neurophos has revealed that it is hard at work developing an optical processing device called the Tulkas T100, which promises huge leaps forward in computation.
Funded by Bill Gates’ Gates Frontier Fund, the company claims the chip can deliver 470 petaFLOPS of FP4 and INT4 calculations while consuming between 1 and 2kW under load.
Its optical tensor core measures approximately 1,000 x 1,000 elements, roughly 15 times larger than the 256 x 256 arrays typically used in current AI GPUs.
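As a quick sanity check on that ratio, here is a back-of-envelope comparison of element counts, assuming both figures refer to multiply-accumulate cells in a square array (Neurophos has not published the exact cell layout):

```python
# Rough element-count comparison between the claimed optical tensor core
# and a conventional 256 x 256 GPU tensor array.
optical_core = 1_000 * 1_000      # ~1,000,000 elements (claimed)
gpu_array = 256 * 256             # 65,536 elements (typical systolic array)

print(optical_core / gpu_array)   # ~15.3, matching the "roughly 15 times" figure
```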
Optical transistors and extreme speeds
Neurophos’ optical transistors aim to exceed traditional semiconductor limits by extending Moore’s Law through higher computational density without increasing power consumption or chip size.
Despite that scale, the startup says only a single core is required per chip, supported by extensive on-chip RAM and vector processing units to maintain throughput.
Its optical transistors are about 10,000 times smaller than current silicon photonics components, enabling a high-density array to fit within a single reticle.
“The equivalent of the optical transistor that you get from Silicon Photonics factories today is massive. It’s like 2mm long,” said Neurophos CEO Patrick Bowen.
“You just can’t fit enough of them on a chip to get a computational density that remotely rivals digital CMOS today.”
The Tulkas T100 runs at 56GHz, far exceeding typical CPU and GPU clock speeds.
On-chip SRAM feeds the tensor core to maintain efficiency, while SSD storage helps move large data sets during testing and simulation.
The chip uses a dual reticle design with 768GB of HBM to support memory-intensive AI workloads.
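To get a sense of how the 1,000 x 1,000 array and the 56GHz clock relate to the headline throughput figure, here is a rough back-of-envelope estimate assuming one multiply-accumulate (two operations) per cell per cycle; the shortfall against the quoted 470 petaFLOPS suggests additional parallelism, such as multiple wavelengths or operations per cycle, that Neurophos has not detailed publicly:

```python
# Back-of-envelope throughput for a 1,000 x 1,000 multiply-accumulate array
# clocked at 56 GHz, assuming one MAC (2 ops) per cell per cycle.
cells = 1_000 * 1_000
clock_hz = 56e9
ops_per_mac = 2

peta_ops = cells * clock_hz * ops_per_mac / 1e15
print(f"{peta_ops:.0f} petaOPS")  # ~112 petaOPS under these simple assumptions
```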
Neurophos says the first-generation Tulkas T100 will focus on the prefill stage of AI inference, handling input token processing for large language models.
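To illustrate why prefill is the compute-heavy target, the sketch below contrasts the two inference phases in simplified form: prefill pushes every prompt token through large, parallel matrix multiplications, while decode generates one token at a time and is typically limited by memory bandwidth rather than raw compute. The tensor shapes and model dimensions are illustrative, not Neurophos specifics:

```python
import numpy as np

# Simplified view of the two LLM inference phases.
d_model, prompt_len = 4096, 2048
weights = np.random.randn(d_model, d_model).astype(np.float32)

# Prefill: all prompt tokens go through the layer at once -> one large,
# compute-bound matmul, the kind of work an optical tensor core targets.
prompt_activations = np.random.randn(prompt_len, d_model).astype(np.float32)
prefill_out = prompt_activations @ weights        # (2048, 4096) @ (4096, 4096)

# Decode: one new token per step -> a thin matrix-vector product whose cost
# is dominated by streaming weights from memory, not by arithmetic.
token_activation = np.random.randn(1, d_model).astype(np.float32)
decode_out = token_activation @ weights           # (1, 4096) @ (4096, 4096)
```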
Bowen envisions pairing racks of Tulkas chips with existing AI GPU racks to accelerate computation.
However, the company doesn’t expect full production until mid-2028, with the first shipments numbering in the thousands.
Engineers are currently testing a proof-of-concept chip to validate the claimed computational density and power consumption.
Competitors such as Nvidia and AMD are also investing heavily in silicon photonics, signaling increasing competition in the field.
Software tooling and memory bandwidth limitations remain key considerations as optical accelerators seek to complement conventional GPUs.
While the Tulkas T100 shows potential to advance AI computing, its practical impact remains uncertain until the company achieves reliable production.
The optical approach remains experimental and faces challenges related to SRAM requirements, vector processing, and CMOS fabrication integration.
Optical transistors could speed up matrix multiplication and reduce energy per operation, but overall efficiency still depends on memory bandwidth, storage, and software integration.
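One common way to frame that dependence is a simple roofline-style estimate: attainable throughput is capped by either peak compute or by memory bandwidth multiplied by the workload's arithmetic intensity. The bandwidth and intensity numbers below are placeholders chosen only to show the shape of the trade-off, not measured or published T100 figures:

```python
# Roofline-style estimate: attainable throughput is limited by whichever is
# smaller, peak compute or (memory bandwidth x arithmetic intensity).
peak_compute_pflops = 470.0     # claimed FP4/INT4 peak, used as an upper bound
hbm_bandwidth_tb_s = 10.0       # assumed HBM bandwidth, not a published spec
ops_per_byte = 20.0             # illustrative arithmetic intensity of a workload

bandwidth_bound_pflops = hbm_bandwidth_tb_s * 1e12 * ops_per_byte / 1e15
attainable = min(peak_compute_pflops, bandwidth_bound_pflops)
print(f"attainable ~ {attainable:.1f} PFLOPS")  # memory-bound at this intensity
```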
Neurophos claims its chips are compatible with standard semiconductor fabrication processes, but mass production depends on solving these technical challenges.
Via The Register