A new approach to optical computing using metamaterials could result in power-efficient AI inference accelerators for the data center, Neurophos CEO Patrick Bowen told EE Times.
Several AI accelerator startups have tried to commercialize optical computing based on silicon photonics in recent years, but the technology has yet to take off.
“From my perspective, [other companies] were running toward a brick wall with optical compute, and that’s why most of them have either failed or pivoted,” Bowen said. “There’s a lot of disagreement about why they’ve failed or pivoted, but my take is really centered on the scalability of optical processors.”
Bowen pointed out that the components used to build optical compute chips are relatively large—a Mach-Zehnder interferometer (MZI) based on a standard foundry process design kit (PDK) might be 200 × 20 microns—which he says severely limits compute density.
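To put that figure in perspective, here is a rough back-of-envelope sketch. The N(N−1)/2 interferometer count assumes a standard Reck/Clements-style MZI mesh; that formula, and the matrix sizes chosen, are illustrative assumptions, not from Bowen's remarks.

```python
# Back-of-envelope: die area for an MZI mesh, using the ~200 x 20 micron
# PDK figure Bowen cites. The n*(n-1)/2 count assumes a standard
# Reck/Clements-style mesh (an assumption, not from the article).
MZI_AREA_UM2 = 200 * 20            # ~4,000 square microns per MZI

for n in (64, 512):
    n_mzis = n * (n - 1) // 2      # interferometers needed for an n x n matrix
    area_mm2 = n_mzis * MZI_AREA_UM2 / 1e6
    print(f"{n}x{n}: {n_mzis:,} MZIs -> ~{area_mm2:.0f} mm^2 of interferometers")

# 64x64:   2,016 MZIs   -> ~8 mm^2
# 512x512: 130,816 MZIs -> ~523 mm^2, a GPU-sized die's worth of silicon
# before routing, modulators, or detectors are even counted.
```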
Smaller compute arrays mean large matrices can't be processed in one go: they must be broken into chunks, with partial results shuttled back and forth to memory at intermediate stages, multiplying the number of memory accesses required. If an entire matrix can fit on one chip, that memory access bottleneck shrinks.
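A minimal sketch of that tiling effect, in Python (illustrative only, not Neurophos's architecture): it counts simulated tile transfers for a blocked matrix multiply, comparing a compute array smaller than the matrix against one large enough to hold it whole.

```python
import numpy as np

def tiled_matmul(A, B, tile):
    """Multiply A @ B in tile-sized blocks, tracking simulated memory traffic.

    Models the point above: when the compute array is smaller than the
    matrix, partial results must round-trip through memory.
    """
    n = A.shape[0]
    C = np.zeros((n, n))
    mem_accesses = 0
    for i in range(0, n, tile):
        for j in range(0, n, tile):
            for k in range(0, n, tile):
                # Load two input tiles and the partial-result tile,
                # accumulate, then store the partial result back.
                C[i:i+tile, j:j+tile] += (
                    A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]
                )
                mem_accesses += 4  # 3 tile loads + 1 tile store
    return C, mem_accesses

A = np.random.rand(8, 8)
B = np.random.rand(8, 8)

# A small 2x2 compute array forces many intermediate round-trips...
_, small_array = tiled_matmul(A, B, tile=2)
# ...while an array that holds the whole 8x8 matrix needs a single pass.
_, full_array = tiled_matmul(A, B, tile=8)
print(small_array, full_array)  # 256 vs. 4 simulated tile transfers
```

The 64x gap in simulated transfers is what a larger on-chip array buys: the same arithmetic with far fewer trips to memory.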