Q.ANT Native Processing Server

Photonic AI Accelerator for energy-efficient High-Performance Computing and real-time AI Applications available in a 19″ rack-mountable Server

Shifting paradigms in compute: The photonic approach for high energy efficiency and accelerated data processing

The Q.ANT Native Processing Unit (NPU), the first commercially available photonic processor, ushers in a new era of energy-efficient, accelerated AI and HPC. By exploiting the natural properties of light, we realize basic AI functions purely optically – without electronic detours. This is why we call it Native Computing. These are the potentials that set photonic computing apart from conventional computing:

  • Up to 30x higher energy efficiency at the hardware level and 90x lower power consumption per application, due to the absence of on-chip heat and reduced cooling requirements.
  • Up to 100x increase in data center capacity through greater computational density and the simultaneous execution of complex operations using multiple wavelengths of light on a single chip.
  • 40–50% fewer operations required for equivalent output, since photonic circuits operate at bandwidths of tens of GHz, compared with a few GHz in digital electronics.
  • 16-bit floating-point precision with close to 100% accuracy for all computational operations on the chip.
  • Delivered as a 19″ server system for seamless integration into existing infrastructure via a standard PCIe interface and x86 software compatibility.
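The 16-bit figure corresponds to IEEE 754 half precision, which carries roughly three significant decimal digits. As a quick illustration, using NumPy's `float16` as a stand-in for the chip's number format (an assumption on our part):

```python
import numpy as np

# IEEE 754 binary16: 1 sign bit, 5 exponent bits, 10 fraction bits.
# NumPy's float16 is used here as a stand-in for the NPU's 16-bit format.
x = np.float16(3.14159265)
rel_err = abs(float(x) - 3.14159265) / 3.14159265

print(float(x))        # 3.140625 -- nearest representable half-precision value
print(rel_err < 1e-3)  # True: relative error stays within ~0.1%
```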

Meet Q.ANT at Supercomputing 2025

The future of efficient computing is here. Will your data center be a part of it?

Skip the Queue and be part of a new era

Seize the exclusive opportunity to experience Q.ANT’s first commercial Photonic AI Accelerator, promising to set new standards in energy efficiency and computational speed. Test, innovate, and get hands-on with a technology that promises a sustainable and powerful future. Redefine the possibilities of AI processing – where cutting-edge efficiency meets the brilliance of light.

Technical specifications of the Native Processing Server NPS

The Native Processing Server (NPS) is a 19″ rack-mountable server with our photonic NPU PCIe card, designed specifically for AI inference and advanced data processing. Its plug-and-play system design makes it ready to integrate into data centers and HPC environments for immediate access to photonic computing. The NPS is upgradable with additional NPU PCIe cards for even more processing power in the future.

System / Subsystem | Feature
System node | x86 processor architecture; based on a commercially available 19″ 4U rack system
Operating system | Linux (Debian/Ubuntu) with Long-Term Support
Network interface | Ethernet with up to 10 Gbit/s
Software interface | C/C++ and Python API
API to subsystem | Linux device driver
Native Processing Unit (NPU) |
  • Full-length PCIe card, 3 slots high
  • PCIe Gen3 x8 interface, shared memory & I/O windows
  • Upgradable with enhanced photonic integrated circuits
  • Upgradable with enhanced logic functions for performance
Power consumption of NPU | 45 W
Photonic integrated circuit (PIC) | Ultrafast photonic core based on z-cut Lithium Niobate on Insulator (LNoI)
Throughput of NPU | 100 MOps
Cooling of NPU | Passive
Operating temperature range | 15 to 35 °C

Exploit the potential of Photonic Computing

Each milestone brings us closer to unlocking the full potential of Photonic Computing: dramatically faster performance at a fraction of the energy.

Operation speed

Our system offers the potential for exponential growth in operation speed, projected to accelerate from 0.1 GOps in 2025 to 100,000 GOps by 2028 – a million-fold increase in three years.
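The arithmetic behind that projection can be checked directly: 0.1 GOps to 100,000 GOps is a factor of one million, which over the 2025–2028 span works out to roughly two orders of magnitude per year:

```python
# Roadmap figures from the text: 0.1 GOps (2025) to 100,000 GOps (2028).
start, end = 0.1, 100_000
years = 2028 - 2025

total_factor = end / start
annual_factor = total_factor ** (1 / years)

print(round(total_factor))   # 1000000 -- a million-fold increase overall
print(round(annual_factor))  # 100 -- about 100x per year
```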

Chart: operation speed in GOps, 2023–2028 (logarithmic axis from 0.00001 to 100,000).

Energy efficiency

Photonics saves energy while computing. For AI, photonic processors promise 30x lower energy consumption at the hardware level than digital processors (transistors, CMOS).

Bar chart: energy per 8-bit operation – TFLN: 76 vs. CMOS: 2300 (roughly a 30x difference).
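The chart values are consistent with the 30x hardware-level claim; dividing the two energy figures (same units for both bars):

```python
# Energy per 8-bit operation read off the bar chart (common units).
tfln, cmos = 76, 2300

ratio = cmos / tfln
print(round(ratio, 1))  # 30.3 -- matches the ~30x efficiency figure
```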

Q.ANT is recognized as a Sample Vendor in three Gartner® Hype Cycle™ 2025 reports.  

We provide complimentary access to the Gartner® Hype Cycle™ for Data Center Infrastructure Technologies 2025 report. Learn how Photonic Computing can enable energy efficient data centers in a future of rising AI and HPC.

Light meets algorithms to redefine AI processing – The Q.ANT Toolkit

As an analog computing unit, the NPU enables the solution of complex, non-linear mathematical functions that would be too energy-intensive to calculate on conventional processors. Initial applications are in the field of AI inference and AI training, paving the way for efficient and sustainable AI computing. Start programming the Q.ANT NPU using our custom Software Development Kit, the Q.ANT Toolkit. This interface enables users to operate directly at the chip level or to leverage higher-level, optimized neural-network operations, such as fully connected or convolutional layers, to build your AI model. The Toolkit offers a comprehensive collection of example applications that illustrate how AI models can be programmed. These examples can be used directly or as a foundation for creating your own implementations.
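The Toolkit's exact API is not reproduced here, so the sketch below is a plain NumPy reference model (not Q.ANT code) of the NPU's core primitive, a matrix-vector multiply at 16-bit float precision; the function name `npu_matvec_reference` and the float32 accumulation are our own assumptions:

```python
import numpy as np

def npu_matvec_reference(matrix, vector):
    # Digital reference for the photonic matrix-vector multiply:
    # inputs rounded to float16, accumulation in float32 (an assumption),
    # result rounded back to float16.
    m = np.asarray(matrix, dtype=np.float16)
    v = np.asarray(vector, dtype=np.float16)
    return (m.astype(np.float32) @ v.astype(np.float32)).astype(np.float16)

W = np.array([[1.0, 2.0], [3.0, 4.0]])
x = np.array([0.5, 0.25])
print(npu_matvec_reference(W, x))  # matches the exact result [1.0, 2.5]
```

A digital model like this is useful for validating accelerator outputs against a known-good reference.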


Name | Description | Programming language
Matrix Multiplication | Multiplication of a matrix and a vector | Python / C++
Image Classification | Classification of an image (e.g. based on the ImageNet data set) | Python (Jupyter)
Semantic Segmentation | Segmentation of an image (e.g. based on a brain MRI scan data set) | Python (Jupyter)
Attention-based AI models (coming soon) | e.g. speech recognition | Python (Jupyter)

U-Net for cancer detection in brain MRI scans running on a Native Processing Unit (NPU)

Powering real-world AI Applications with Photonic Analog Computing

Q.ANT’s Native Processing Server, a photonic analog processor, solves complex, real-world AI computations such as image recognition and segmentation. From running ResNet for object recognition in images to applying U-Net to identify cancer regions in brain MRI scans, the Native Processing Server handles billions of operations with 99% consistency relative to conventional digital computation, demonstrating the viability of photonic analog computing.
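Q.ANT's exact consistency metric is not specified here; one plausible reading (our assumption) is relative agreement between analog and digital outputs, sketched below with simulated readout noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Exact digital result vs. a simulated analog result with small readout noise.
digital = rng.normal(size=1000)
analog = digital + rng.normal(scale=0.005, size=1000)

# Consistency as 1 minus the relative L2 error between the two outputs.
consistency = 1 - np.linalg.norm(analog - digital) / np.linalg.norm(digital)
print(consistency > 0.99)  # True at this noise level
```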

Unlocking new performance levels for essential applications in AI and HPC

AI inference and training

Photonic Computing enables faster execution of matrix operations and nonlinear functions directly in hardware. This reduces latency and allows for more efficient model architectures with fewer parameters. The result: higher throughput and lower power consumption for both training large-scale models and real-time inference.
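The pattern described above, a hardware matrix operation followed by a nonlinearity, is exactly the structure of a fully connected layer; a minimal NumPy sketch:

```python
import numpy as np

def dense_layer(x, weights, bias):
    # The weights @ x matrix multiply is the part a photonic core can
    # execute in hardware; ReLU supplies the nonlinearity.
    return np.maximum(0.0, weights @ x + bias)

rng = np.random.default_rng(1)
x = rng.normal(size=64)        # input activations
W = rng.normal(size=(32, 64))  # weight matrix
b = np.zeros(32)

y = dense_layer(x, W, b)
print(y.shape)         # (32,)
print((y >= 0).all())  # True: ReLU clamps negative pre-activations to zero
```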


Large Language Models (like GPT)


Reinforcement Learning

Advanced image processing

Many image processing tasks are inherently mathematical, relying heavily on transforms such as the Fourier transform or convolution operations. With photonic processors, these operations can be performed optically, in parallel and at the speed of light, dramatically increasing frame rates and lowering energy usage.
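The link between Fourier transforms and convolution is the convolution theorem: a convolution becomes a pointwise product in the frequency domain. A short NumPy sketch of the idea (the FFT step is what an optical system could perform physically):

```python
import numpy as np

def fft_convolve(signal, kernel):
    # Convolution theorem: conv(a, b) = IFFT(FFT(a) * FFT(b)),
    # zero-padded so the circular convolution equals the linear one.
    n = len(signal) + len(kernel) - 1
    spectrum = np.fft.fft(signal, n) * np.fft.fft(kernel, n)
    return np.real(np.fft.ifft(spectrum))

sig = np.array([1.0, 2.0, 3.0, 4.0])
ker = np.array([1.0, 1.0])  # simple 2-tap smoothing kernel

print(np.allclose(fft_convolve(sig, ker), np.convolve(sig, ker)))  # True
```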


Computer Vision


Real-time video analytics

Physics and scientific simulations

Scientific simulations often depend on solving complex partial differential equations and large-scale matrix systems. Photonic hardware provides a powerful platform for these workloads by enabling high-bandwidth, low-latency computing that scales with problem complexity, helping simulate physical phenomena faster and with greater efficiency.
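As a toy example of that workload class, here is a 1-D Laplace boundary-value problem solved by Jacobi iteration, where each sweep is effectively a sparse matrix-vector product:

```python
import numpy as np

def jacobi_laplace_1d(left, right, n=20, iterations=2000):
    # Solve u'' = 0 on a grid of n points with fixed boundary values.
    # Each Jacobi sweep averages neighbors -- a sparse matrix-vector product.
    u = np.zeros(n)
    u[0], u[-1] = left, right
    for _ in range(iterations):
        u[1:-1] = 0.5 * (u[:-2] + u[2:])
    return u

u = jacobi_laplace_1d(0.0, 1.0)
# The exact solution of u'' = 0 is a straight line between the boundaries.
print(np.allclose(u, np.linspace(0.0, 1.0, 20), atol=1e-6))  # True
```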


Computational fluid dynamics


Molecular Dynamics

The Game Changer in Photonic Computing: Thin Film Lithium Niobate on Insulator

Thin Film Lithium Niobate on Insulator (TFLNoI) – the optimal material choice for photonic integrated circuits (PICs). Q.ANT relies on our proprietary material platform for making the photonic chips inside the NPU. The central components in the PIC are optical waveguides, modulators and various other building blocks, which enable high-speed, precise control of light, all integrated in a single chip at the nanoscopic level. In this chip, a very thin layer of lithium niobate is bonded onto a silicon wafer, on which the photonic components are fabricated. We believe that TFLNoI is the key to the future of photonic computing.

PICs based on TFLNoI show several main advantages:

Your contact


Andreas Abt

SVP
Native Computing

I look forward to discussing the potentials of Photonic Computing with you.

Ready to take the leap?

Work in the field of photonic computing
