Industry-first Photonic AI Accelerator for energy-efficient High-Performance Computing and real-time AI applications, available in a 19″ rack-mountable server
The Q.ANT Native Processing Server (NPS) is the first commercial photonic processor for energy-efficient, accelerated AI and HPC workloads. By computing complex mathematical functions with the natural properties of light, we achieve significant gains in performance and energy efficiency without digital detours, while communicating seamlessly with existing computing infrastructure. This is why we call it Native Computing.
By removing on-chip heat and cooling demands, Q.ANT’s photonic technology enables a new class of high-performance, energy-efficient server rack solutions.
Seize the exclusive opportunity to experience Q.ANT’s first commercial Photonic AI Accelerator, promising to set new standards in energy efficiency and computational speed. Test it, push it and see what is possible when AI runs natively on light. Get hands-on access to a completely new way of computing and redefine the possibilities of AI processing.
The Native Processing Server (NPS) is a 19″ rack-mountable server with the Q.ANT photonic NPU PCIe card and is designed specifically for AI inference and advanced data processing. Its Plug & Play system design enables seamless integration into existing data centers and HPC environments, providing immediate access to photonic computing. The NPS is upgradable with additional NPU PCIe cards to increase processing power as workloads grow.
| System / Subsystem | Feature |
| --- | --- |
| System node | x86 processor architecture; 19″ 4U commercially available rack system |
| Operating System | Linux Debian/Ubuntu with Long-Term Support |
| Network interface | Ethernet, up to 10 Gbit/s |
| Software interface | C / C++ and Python API |
| API to subsystem | Q.ANT Toolkit (SDK) |
| Native Processing Unit (NPU) | |
| Power consumption of NPU | 45 W |
| Photonic integrated circuit (PIC) | Ultrafast photonic core based on z-cut Lithium Niobate on Insulator (LNoI) |
| Throughput of NPU | 100 MOps |
| Cooling of NPU | Passive |
| Operating temperature range | 15 to 35 °C |
Each milestone brings us closer to unlocking the full potential of Photonic Computing: dramatically faster performance at a fraction of the energy.
Operation speed
Our photonic processors compute mathematical functions natively in light. This enables unprecedented throughput, projected to accelerate from 0.1 GOps in 2024 to 100,000 GOps by 2028 – a million-fold increase within five years.
[Chart: Operation speed in GOps, 2024–2028]
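As a quick back-of-the-envelope check of that roadmap (our own arithmetic, not a published Q.ANT figure), the stated endpoints imply a constant growth factor of roughly 32x per year:

```python
# Roadmap endpoints stated above: 0.1 GOps (2024) to 100,000 GOps (2028).
start, end = 0.1, 100_000            # throughput in GOps
steps = 2028 - 2024                  # four year-over-year steps

total = end / start                  # the "million-fold" claim
per_year = total ** (1 / steps)      # implied constant yearly growth factor

print(f"total speed-up: {total:,.0f}x")           # 1,000,000x
print(f"implied yearly factor: {per_year:.1f}x")  # ~31.6x
```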
Energy efficiency
Unlike transistors, photonic processors do not generate on-chip heat and use fewer components and parameters to solve complex tasks. This allows up to 30x lower energy consumption for AI workloads compared to conventional CMOS hardware, cutting both power demand and cooling requirements.
[Chart: Energy consumption in fJ]
We provide complimentary access to the Gartner® Hype Cycle™ for Data Center Infrastructure Technologies 2025 report. Learn how Photonic Computing can enable energy-efficient data centers as AI and HPC demands continue to rise.
The Q.ANT Native Processing Unit (NPU) is an analog computing engine that solves complex, non-linear mathematical functions natively in light – especially workloads that are too energy-intensive for conventional processors. Initial applications focus on AI inference and training, paving the way for sustainable, high-performance AI computing.

Start programming the Q.ANT NPU with our custom Software Development Kit, the Q.ANT Toolkit. The toolkit enables AI models to be built, tested, and optimized for photonic computing. It gives developers chip-level control for working directly with the photonic core, optimized neural network operations (e.g. fully connected and convolutional layers), and reference applications that accelerate development or serve as the basis for custom implementations (see the table and the sketch below).
| Name | Description | Programming Language |
| --- | --- | --- |
| Matrix Multiplication | Multiplication of a matrix and a vector | Python / C++ |
| Image Classification | Classification of an image (e.g. based on the ImageNet data set) | Python (Jupyter) |
| Semantic Segmentation | Segmentation of an image (e.g. based on a brain MRI scan data set) | Python (Jupyter) |
| Attention-based AI models (coming soon) | e.g. speech recognition | Python (Jupyter) |
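For orientation, here is a minimal Python sketch of the matrix-multiplication reference application. The module `qant_toolkit` and the `NPU.matvec` call are hypothetical placeholders, not the documented Q.ANT Toolkit API; the NumPy line computes the digital reference result the photonic core would reproduce.

```python
import numpy as np

# Hypothetical import -- the real Q.ANT Toolkit module and class names may differ.
# from qant_toolkit import NPU

W = np.array([[0.5, -0.2],
              [0.1,  0.8]])        # weight matrix
x = np.array([1.0, 2.0])           # input vector

# Digital reference result for comparison.
reference = W @ x

# On the NPU, the same matrix-vector product would be dispatched to the
# photonic core (placeholder call, not the documented API):
# npu = NPU()
# result = npu.matvec(W, x)
# assert np.allclose(result, reference, rtol=1e-2)  # analog tolerance

print(reference)                   # [0.1, 1.7]
```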
U-Net for cancer detection in brain MRI scans running on a Native Processing Unit (NPU)
Q.ANT’s Native Processing Server executes demanding AI workloads, such as image recognition and image segmentation, directly in light. When running, for example, ResNet for object detection or U-Net for cancer-region analysis in MRI scans, the system delivers billions of operations whose results are 99% consistent with digital computation. The result: proof that photonic analog processing is ready for real-world AI.
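One way to make such a consistency figure concrete (an illustrative metric of our own, not Q.ANT's published methodology) is to count the fraction of outputs whose relative error against a digital reference stays within a tolerance:

```python
import numpy as np

def consistency(analog: np.ndarray, digital: np.ndarray, tol: float = 0.01) -> float:
    """Fraction of outputs whose relative error vs. the digital reference
    stays within `tol` (1% here). Illustrative metric only."""
    rel_err = np.abs(analog - digital) / np.maximum(np.abs(digital), 1e-12)
    return float(np.mean(rel_err <= tol))

digital = np.linspace(1.0, 2.0, 1000)                    # reference outputs
noise = np.random.default_rng(1).normal(0, 0.003, 1000)  # simulated analog noise
analog = digital * (1 + noise)
print(f"{consistency(analog, digital):.1%} of outputs within 1% of reference")
```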
Photonic Computing enables faster execution of matrix operations and non-linear functions directly in hardware. This allows for more efficient model architectures with fewer parameters. The result: higher throughput and lower power consumption, both for training large-scale models and for real-time inference.
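The building block behind this claim is the dense layer: one matrix multiplication followed by one non-linear activation, exactly the two operations described above as running directly in hardware. A minimal NumPy sketch:

```python
import numpy as np

def dense_layer(x, W, b):
    """One fully connected layer: the matrix product and the non-linearity
    are the two operations a photonic core computes natively."""
    z = W @ x + b          # linear part: matrix-vector multiplication
    return np.tanh(z)      # non-linear part: activation function

rng = np.random.default_rng(0)
x = rng.standard_normal(16)                      # input features
W, b = rng.standard_normal((8, 16)), np.zeros(8)
print(dense_layer(x, W, b).shape)                # (8,)
```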
Many image processing tasks are inherently mathematical, relying heavily on transforms such as the Fourier transform and on convolution operations. With photonic processors, these operations can be performed optically, in parallel, and at the speed of light, dramatically increasing frame rates while lowering energy usage.
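The convolution theorem shows why such workloads map so well onto optics: a convolution in the spatial domain is a pointwise product in the Fourier domain, and an optical system can perform the Fourier transform passively and in parallel. A NumPy sketch of the identity (standard mathematics, not Q.ANT code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x = rng.standard_normal(n)   # signal, e.g. one image row
k = rng.standard_normal(n)   # convolution kernel (zero-padded to length n)

# Direct circular convolution: O(n^2) multiply-accumulate operations.
direct = np.array([sum(x[j] * k[(i - j) % n] for j in range(n)) for i in range(n)])

# Convolution theorem: conv(x, k) = IFFT(FFT(x) * FFT(k)).
# An optical Fourier transform performs the FFT step passively, in parallel.
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)).real

assert np.allclose(direct, via_fft)   # identical results, very different cost
```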
Scientific simulations often depend on solving complex partial differential equations and large-scale matrix systems. Photonic hardware provides a powerful platform for these workloads by enabling high-bandwidth computing that scales with problem complexity, helping simulate physical phenomena faster and with greater efficiency.
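As a toy illustration of the pattern (our own example, not a Q.ANT workload): an explicit finite-difference solver for the 1D heat equation reduces each time step to a single matrix-vector product, precisely the operation a photonic NPU accelerates:

```python
import numpy as np

# 1D heat equation u_t = alpha * u_xx, explicit finite differences:
# each time step u <- A @ u is one matrix-vector multiplication.
n = 64
r = 0.25                      # diffusion number alpha*dt/dx^2 (stable for r <= 0.5)

# Tridiagonal update matrix.
A = ((1 - 2 * r) * np.eye(n)
     + r * np.eye(n, k=1)
     + r * np.eye(n, k=-1))

xs = np.linspace(0.0, 1.0, n)
u = np.exp(-((xs - 0.5) ** 2) / 0.01)   # initial Gaussian heat spike

for _ in range(200):
    u = A @ u                 # the workload a photonic NPU would accelerate

print(f"peak temperature after 200 steps: {u.max():.3f}")
```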
Q.ANT builds its photonic processors on a proprietary Thin-Film Lithium Niobate on Insulator (TFLNoI) platform. This material system enables the photonic integrated circuits (PICs) at the core of our NPUs. On a silicon wafer, a thin layer of lithium niobate is bonded to create optical waveguides, modulators, and other functional blocks — allowing high-speed, precise control of light on a single chip. We believe that TFLNoI is the key to the future of photonic computing.
PICs based on TFLNoI provide the physical foundation for scalable, efficient photonic computing, delivering unique advantages.
SVP Native Computing
I look forward to discussing the potential of Photonic Computing with you.