Q.ANT Native Processing Server

Industry-first Photonic AI Accelerator for energy-efficient High-Performance Computing and real-time AI applications, available in a 19″ rack-mountable server

Introducing the Next Generation of Photonic Processing

We unveil the second generation of our photonic processor to power the next wave of AI and HPC. Learn what is new in NPU Gen 2:

Join the path towards up to 30× higher energy efficiency, 50× faster computation, unprecedented compute density and bandwidth, and radical reductions in operational costs for data centers.

Shifting paradigms in compute: The photonic approach for energy efficient and accelerated data processing

The Q.ANT Native Processing Server (NPS) is the first commercial photonic processor for energy-efficient and accelerated AI and HPC workloads. By computing complex mathematical functions with the natural properties of light, we achieve significant performance and energy-efficiency gains without digital detours, while communicating seamlessly with existing computing infrastructure. This is the promise of Native Computing by Q.ANT.

Operational at LRZ and JSC

Photonic Computing is reality: The Leibniz Supercomputing Centre (LRZ) and the Juelich Supercomputing Centre (JSC) – two of Europe’s leading HPC data centers – integrate Q.ANT’s NPS into their operational HPC environment. These first deployments mark a major step towards redefining how data centers approach performance, footprint and energy-efficiency.

Meet Q.ANT at Supercomputing 2025

The future of efficient computing is here. Will your data center be a part of it?

Skip the Queue and be part of a new era

Seize the exclusive opportunity to experience Q.ANT’s first commercial Photonic AI Accelerator, promising to set new standards in energy efficiency and computational speed. Test it, push it and see what is possible when AI runs natively on light. Get hands-on access to a completely new way of computing and redefine the possibilities of AI processing.

Technical specifications of the Native Processing Server (NPS)

The Native Processing Server (NPS) is a 19″ rack-mountable server with the Q.ANT photonic NPU PCIe card, designed specifically for AI inference and advanced data processing. Its plug-and-play system design enables seamless integration into existing data centers and HPC environments, providing immediate access to photonic computing. The NPS is upgradable with additional NPU PCIe cards to increase processing power as workloads grow.

System / Subsystem | Feature
System node | x86 processor architecture; 19″ 4U commercially available rack system
Operating system | Linux Debian/Ubuntu with Long-Term Support
Network interface | 2× 10 Gbit Ethernet, 1× 1 Gbit service interface
Software interface | C/C++ and Python API
API to NPU subsystem | Linux device driver
Native Processing Unit (NPU) |
  • Full-length PCIe card, 3-slot height
  • PCIe Gen4 x8 interface, shared memory & I/O windows
  • Upgradable with enhanced photonic integrated circuits
  • Upgradable with enhanced logic functions for performance
Power consumption of NPU | 150 W
Photonic integrated circuit (PIC) | Ultrafast photonic core based on z-cut Lithium Niobate on Insulator (LNoI)
Throughput of NPU | 8 GOPS
Operating temperature range | 15 to 35 °C

Exploit the potential of Photonic Computing

Each milestone brings us closer to unlocking the full potential of Photonic Computing: dramatically faster performance at a fraction of the energy.

Operation speed

Our photonic processors compute mathematical functions natively in light. This enables unprecedented throughput, projected to accelerate from 0.1 GOps in 2024 to 100,000 GOps by 2028 – a million-fold increase within five years.

[Chart: Operation speed in GOps — 0.00001 (2023), 0.1 (2024), 10 (2025), 1,000 (2026), 10,000 (2027), 100,000 (2028)]

Energy efficiency

Unlike transistor-based processors, photonic processors do not generate on-chip heat and use fewer components and parameters to solve complex tasks. This allows up to 30× lower energy consumption for AI workloads compared to conventional CMOS hardware, cutting both power demand and cooling requirements.

[Bar chart: Energy consumption in fJ — 8-bit TFLN: 76 fJ vs. 8-bit CMOS: 2,300 fJ]

Q.ANT is recognized as a Sample Vendor in three Gartner® Hype Cycle™ 2025 reports.  

We provide complimentary access to the Gartner® Hype Cycle™ for Data Center Infrastructure Technologies 2025 report. Learn how Photonic Computing can enable energy efficient data centers in a future of rising AI and HPC.

The Q.ANT Photonic Algorithms Library (Q.PAL)

The Q.ANT Photonic Algorithms Library (Q.PAL) is the software interface to the NPS. It enables users to operate directly at the multiplication level or to leverage optimized neural network operations such as fully connected or convolutional layers. Q.PAL also offers a comprehensive collection of example applications that illustrate how AI workloads can be enhanced. These examples can be used directly or as a foundation for creating custom use cases.

 

Name | Description | Programming Language
Matrix Multiplication | Multiplication of a matrix and a vector | Python / C++
Image Classification | Classification of an image (e.g. based on the ImageNet data set) | Python (Jupyter)
Semantic Segmentation | Segmentation of an image (e.g. based on a brain MRI scan data set) | Python (Jupyter)
Complex Line Fitting | Fitting of a high-frequency line with a nonlinear network (e.g. based on simulated training data) | Python (Jupyter)
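As a minimal sketch of the matrix-multiplication example above, the snippet below offloads a matrix-vector product to the NPU when a Python binding is available and falls back to NumPy otherwise. The module name `qpal` and the `matvec` call are assumptions for illustration; the actual Q.PAL API may differ.

```python
import numpy as np

try:
    import qpal  # hypothetical Q.PAL Python binding; name assumed
    HAVE_NPU = True
except ImportError:
    HAVE_NPU = False

def matvec(matrix, vector):
    """Multiply a matrix by a vector, dispatching to the photonic NPU
    when the (assumed) qpal binding is present, otherwise to the CPU."""
    if HAVE_NPU:
        return qpal.matvec(matrix, vector)  # assumed API call
    return np.asarray(matrix) @ np.asarray(vector)  # CPU fallback

y = matvec([[1.0, 2.0], [3.0, 4.0]], [1.0, 1.0])
```

The fallback pattern keeps application code identical whether or not the accelerator is installed.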

Q.ANT integrates into the established compute landscape

Q.ANT’s photonic processing solution seamlessly integrates into the existing compute landscape. The Native Processing Unit at the heart of the NPS provides a PCIe interface housed in a standard 19” server, which makes the system plug-and-play. The NPU can be accessed via a software interface with C/C++ and Python APIs and will integrate into common AI frameworks such as PyTorch. Q.ANT supports customers in creating custom applications, providing the Q.ANT Photonic Algorithms Library and training resources.
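To illustrate how a fully connected layer could be wired against such an API, here is a CPU-only Python sketch. The `npu_matvec` dispatch hook is an assumption, not the actual Q.ANT driver interface; by default the layer simply runs on the CPU.

```python
import numpy as np

class PhotonicLinear:
    """Fully connected layer (y = W x + b) whose matrix-vector product
    could be dispatched to a photonic NPU. The npu_matvec hook is a
    placeholder for a real driver call; without it the CPU is used."""

    def __init__(self, in_features, out_features, npu_matvec=None, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((out_features, in_features)) * 0.1
        self.b = np.zeros(out_features)
        # Fall back to a plain CPU matmul when no accelerator hook is given.
        self.matvec = npu_matvec or (lambda W, x: W @ x)

    def __call__(self, x):
        return self.matvec(self.W, np.asarray(x)) + self.b

layer = PhotonicLinear(4, 2)
out = layer([1.0, 0.0, 0.0, 0.0])
```

A framework integration (e.g. a PyTorch module) would follow the same shape: the layer owns its weights and delegates only the multiply to the device.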

Q.ANT NPS – The choice for efficient nonlinear networks

Q.ANT’s Native Processing Server executes demanding AI workloads directly in light. While CMOS processors excel at linear, sequential processing, photonic processors are the natural hardware fit for large-scale nonlinear algorithms. Networks with nonlinear functions reduce the number of model parameters needed, allowing higher accuracy per parameter and per unit of training budget. In photonic processors, a single optical element performs one nonlinear operation, while CMOS requires 100–1,000 transistors and multiple cycles.

In this example, a network using learnable nonlinear functions on Q.ANT's NPS reconstructs complex image patterns more accurately than a linear network on a CPU, while needing 2× fewer parameters and 3× fewer operations.

Original image

Linear network on CPU: ~20k parameters, ~670M operations

Nonlinear network on NPU: ~10k parameters, ~250M operations
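The advantage of nonlinear features can be reproduced in miniature with NumPy: fitting a curved target with a purely linear model versus a small bank of fixed tanh features (a stand-in for learnable nonlinear functions). The target function and feature counts here are illustrative only, not Q.ANT's benchmark.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(3.0 * x)  # curved target a linear model cannot capture

# Linear model: y ~ a*x + b, fitted by least squares.
A_lin = np.stack([x, np.ones_like(x)], axis=1)
coef_lin, *_ = np.linalg.lstsq(A_lin, y, rcond=None)
mse_lin = np.mean((A_lin @ coef_lin - y) ** 2)

# Nonlinear model: y ~ sum_k c_k * tanh(w_k*x + p_k), same fit procedure
# but with 16 fixed nonlinear features instead of 2 linear ones.
w = rng.uniform(-4.0, 4.0, size=16)
p = rng.uniform(-2.0, 2.0, size=16)
A_nl = np.tanh(np.outer(x, w) + p)
coef_nl, *_ = np.linalg.lstsq(A_nl, y, rcond=None)
mse_nl = np.mean((A_nl @ coef_nl - y) ** 2)

print(f"linear MSE: {mse_lin:.4f}, nonlinear MSE: {mse_nl:.6f}")
```

The nonlinear fit reaches a far lower error with a modest number of features, mirroring the parameter-efficiency argument above.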

Photonic Computing unlocks new performance levels for essential applications in AI and HPC

AI inference and training

Photonic Computing enables faster execution of matrix operations and nonlinear functions directly in hardware. This allows for more efficient model architectures with fewer parameters. The result: higher throughput and lower power consumption for both training large-scale models and real-time inference.


Large Language Models (like GPT)


Reinforcement Learning

Advanced image processing

Many image processing tasks are inherently mathematical, relying heavily on transforms such as the Fourier transform and convolution operations. With photonic processors, these operations can be performed optically, in parallel and at the speed of light. This dramatically increases frame rates and lowers energy usage.
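The classic link between the two operations — convolution computed via the Fourier transform (the convolution theorem) — can be checked in a few lines of NumPy. This is a generic illustration of the mathematics, not Q.ANT code.

```python
import numpy as np

signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
kernel = np.array([0.25, 0.5, 0.25])  # simple smoothing kernel

# Direct (spatial-domain) full convolution.
direct = np.convolve(signal, kernel)

# Same result via the convolution theorem: zero-pad both inputs to the
# full output length, multiply their spectra, transform back.
n = len(signal) + len(kernel) - 1
via_fft = np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(kernel, n), n)

assert np.allclose(direct, via_fft)
```

An optical Fourier transform performs the frequency-domain step physically, which is why convolution-heavy imaging pipelines map so naturally onto photonic hardware.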


Computer Vision


Real-Time Video Analytics

Physics and scientific simulations

Scientific simulations often depend on solving complex partial differential equations and large-scale matrix systems. Photonic hardware provides a powerful platform for these workloads by enabling high-bandwidth computing that scales with problem complexity, helping simulate physical phenomena faster and with greater efficiency.
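As a toy example of how a PDE reduces to such a matrix system, a 1D Poisson equation discretized by finite differences becomes a tridiagonal linear solve; NumPy plays the role here that an accelerator would take at scale.

```python
import numpy as np

# Solve -u''(x) = pi^2 * sin(pi*x) on (0, 1) with u(0) = u(1) = 0.
# The exact solution is u(x) = sin(pi*x).
n = 100                      # number of interior grid points
h = 1.0 / (n + 1)            # grid spacing
x = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x)

# Second-order central differences turn the PDE into A u = h^2 f,
# where A is the standard tridiagonal [-1, 2, -1] matrix.
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
u = np.linalg.solve(A, h**2 * f)

err = np.max(np.abs(u - np.sin(np.pi * x)))
print(f"max error: {err:.2e}")
```

Refining the grid shrinks the error quadratically but grows the matrix, which is exactly where high-throughput linear algebra hardware pays off.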


Computational Fluid Dynamics


Molecular Dynamics

The Game Changer in Photonic Computing: Thin Film Lithium Niobate on Insulator

Q.ANT builds its photonic processors on a proprietary Thin-Film Lithium Niobate on Insulator (TFLNoI) platform. This material system enables the photonic integrated circuits (PICs) at the core of our NPUs. On a silicon wafer, a thin layer of lithium niobate is bonded to create optical waveguides, modulators, and other functional blocks — allowing high-speed, precise control of light on a single chip. We believe that TFLNoI is the key to the future of photonic computing.

PICs based on TFLNoI provide the physical foundation for scalable, efficient photonic computing, delivering unique advantages.

Your contact


Andreas Abt

SVP
Native Computing

I look forward to discussing the potential of Photonic Computing with you.

Ready to take the leap?

Work in the field of photonic computing