When you pick up your phone to make a call or send a text, you might not think too much about everything that has to take place to make that happen. But in the massive, nationwide wireless networks we have today, it takes an awful lot to keep things humming along smoothly. And as the number of people using these networks grows, and the wireless technologies that underlie them become more complex, a great deal of optimization will be needed to keep things running efficiently.
One of the hottest areas of telecom research these days is AI-RAN (artificial intelligence radio access networks). The hope is that by leveraging AI algorithms in real time, providers will be able to improve the performance, efficiency, and capabilities of their networks to keep up with demand. However, deploying AI algorithms in this environment is more challenging than in most other applications because of the tight latency and throughput requirements of wireless systems. Furthermore, realistic testbeds on which new ideas can be evaluated are not very accessible to developers and researchers, especially in academia.
The hardware used in the demo setup (📷: S. Cammerer et al.)
For these reasons, a team at NVIDIA has created the Sionna Research Kit, a GPU-accelerated research platform for AI-RAN testing and development. Built on the NVIDIA Jetson AGX Orin platform and the OpenAirInterface software-defined radio stack, the Sionna Research Kit provides a flexible, real-time environment for experimenting with 5G NR systems and AI algorithms. This gives both academic and industry researchers a platform for deploying AI-powered wireless components in a fully operational, real-world 5G network using commercial user equipment.
One of the key features of the platform is its ability to perform real-time signal processing and inference using GPU acceleration. This is made possible by the Jetson’s unified memory architecture, which minimizes data-transfer latency between the CPU and GPU, an important factor when testing AI models in time-sensitive wireless systems.
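To get an intuition for why avoiding copies matters, here is a minimal pure-Python sketch using `memoryview` as a loose analogy for unified memory: two pieces of code share one underlying buffer instead of exchanging copies. This is only an illustration of the concept, not how the Jetson or CUDA actually exposes unified memory.

```python
# Loose analogy for unified memory: producer and consumer share one buffer.
import array

samples = array.array("f", [0.1, 0.2, 0.3, 0.4])  # pretend these are IQ samples

# Copy-based hand-off: a second buffer must be allocated and filled,
# adding latency proportional to the buffer size.
copied = array.array("f", samples)

# Zero-copy hand-off: a memoryview aliases the same underlying memory,
# so a write through the view is immediately visible to the other side.
view = memoryview(samples)
view[0] = 9.0
print(samples[0])  # → 9.0, no copy was ever made
```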
The platform supports both “look-aside” and “inline” hardware acceleration approaches. While the former offloads tasks asynchronously to the GPU, the latter embeds acceleration directly in the signal-processing pipeline, resulting in more efficient performance. This flexibility makes the platform particularly well-suited for testing diverse AI applications under real-world latency constraints.
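The distinction between the two styles can be sketched in a few lines of Python, using a thread pool as a stand-in for the GPU. The function names here (`accelerate`, and the two pipeline helpers) are hypothetical, chosen only to illustrate the scheduling difference; they are not part of the Sionna Research Kit's API.

```python
# Conceptual sketch: inline vs. look-aside acceleration.
from concurrent.futures import ThreadPoolExecutor

def accelerate(block):
    # Stand-in for a GPU kernel (e.g., channel estimation or decoding).
    return [x * 2 for x in block]

def inline_pipeline(blocks):
    # Inline: the accelerator sits directly in the pipeline, and each
    # stage blocks until its result is ready before moving on.
    return [accelerate(b) for b in blocks]

def look_aside_pipeline(blocks):
    # Look-aside: work is handed off asynchronously and results are
    # collected later, so the caller can do other work in between.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(accelerate, b) for b in blocks]
        return [f.result() for f in futures]

blocks = [[1, 2], [3, 4]]
# Same math either way; what differs is when and where it runs.
assert inline_pipeline(blocks) == look_aside_pipeline(blocks)
```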
A schematic of the demo setup (📷: S. Cammerer et al.)
The team carried out two case studies to demonstrate the platform’s capabilities. The first featured a 5G NR-compliant neural receiver that replaced parts of the traditional signal processing chain with machine-learned models, trained using NVIDIA Sionna and executed via the TensorRT inference engine. The second showcased a CUDA-accelerated LDPC decoder, integrated directly into the software-defined stack for efficient wireless error correction.
The complete demo setup included a Jetson AGX Orin, an Ettus Research USRP B210 software-defined radio, and a Quectel RM520N-GL 5G modem — components that are both affordable and accessible. Tutorials and code examples are expected to be made publicly available in the future, offering a low-barrier path for researchers to collect data, train AI models, and validate them in real-time scenarios. If you have some ideas to improve today’s 5G networks, the Sionna Research Kit might be the most accessible option out there.