Future Cellular Networks Could Reach Top Speed with Help from NJIT Professor

Written by: Evan Koblentz
Published: Tuesday, December 9, 2025

Sixth-generation mobile networks arriving in the 2030s will connect your smart devices at speeds dozens or even hundreds of times faster than today's data rates, and those networks might employ important research from New Jersey Institute of Technology and the State University of New York at Buffalo to turn that promise into reality.

The upcoming 6G networks will in some cases use razor-thin terahertz radio signals, with wavelengths measured in micrometers (thousandths of a millimeter). The computing power needed to constantly realign those narrow beams will mean sacrificing some top speed, compared to current 5G networks, whose relatively stable gigahertz signals have wavelengths measured in centimeters.
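The centimeter-versus-micrometer contrast follows from the basic relation between wavelength and frequency, λ = c/f. A quick sketch, with illustrative carrier frequencies chosen for this example (they are not taken from the article):

```python
# Wavelength = speed of light / frequency.
C = 299_792_458.0  # speed of light, m/s

def wavelength_mm(freq_hz: float) -> float:
    """Return the free-space wavelength in millimeters for a frequency in hertz."""
    return C / freq_hz * 1000.0

# An illustrative 5G mid-band carrier: 3.5 GHz -> roughly 86 mm, centimeter scale.
print(f"3.5 GHz: {wavelength_mm(3.5e9):.1f} mm")
# An illustrative terahertz carrier: 1 THz -> roughly 0.3 mm, hundreds of micrometers.
print(f"1 THz:   {wavelength_mm(1e12):.3f} mm")
```

Shorter wavelengths allow thinner, more focused beams, which is exactly why they demand the constant realignment the article describes.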

Think of it as race car drivers backing off from their maximum possible speed so they can stay on track, providing users with fast-yet-stable connections for instant movie downloads and lifelike virtual reality experiences.

With conventional 5G techniques, this realignment task chops about 10 percent off a network's top speed. Reducing that overhead is the goal of NJIT Associate Prof. Jacob Chakareski, from Ying Wu College of Computing, and Buffalo peers Nicholas Mastronarde, Anjali Omer, Zijun Wang and Rui Zhang. They've got it down to 3 percent in simulations by taking advantage of 6G networks' ability to natively run artificial intelligence applications.

The team presented their paper, "Cross-Layer Design for Near-Field mmWave Beam Management and Scheduling under Delay-Sensitive Traffic," at the Neural Information Processing Systems conference in San Diego this month.

“If the beam misses you, you’re getting nothing,” Chakareski said. “This training to identify where the receivers are, it takes a little bit of time. So during that time, the network may be not offering service. This paper looks at how to do effective beam management while not penalizing the service, which typically supports latency-sensitive traffic. And we use AI to solve that optimization problem.”

He explained that an AI strategy based on deep reinforcement learning, specifically a variant called proximal policy optimization, is at the heart of the proposal. It runs a continuous loop in which neural networks consider which balance of serving queued data versus realigning the beam worked best in the past, make the necessary adjustments with the help of a technique called compressive sensing, and then check again.
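Proximal policy optimization is a published, general-purpose algorithm, and its core trade-off can be illustrated on a toy version of the transmit-or-realign decision. Everything below (the two-state beam model, the rewards, and the hyperparameters) is an invented stand-in for illustration, not the researchers' actual system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MDP: state 0 = beam misaligned, state 1 = beam aligned.
# Action 0 = transmit queued data (pays off only while aligned; beam may drift),
# action 1 = spend the slot realigning the beam (no data moved, beam restored).
def step(state, action):
    if action == 1:                              # realign: lose the slot, fix the beam
        return 1, 0.0
    if state == 1:                               # transmit while aligned: data delivered
        next_state = 0 if rng.random() < 0.2 else 1   # beam drifts 20% of slots
        return next_state, 1.0
    return 0, 0.0                                # transmit while misaligned: nothing arrives

logits = np.zeros((2, 2))                        # per-state action preferences
GAMMA, EPS, LR = 0.9, 0.2, 0.05                  # discount, PPO clip range, step size

def policy(state):
    z = np.exp(logits[state] - logits[state].max())
    return z / z.sum()

for _ in range(300):
    # 1) Collect a batch of experience under the current ("old") policy.
    states, actions, rewards, old_probs = [], [], [], []
    s = 1
    for _ in range(200):
        p = policy(s)
        a = int(rng.choice(2, p=p))
        s2, r = step(s, a)
        states.append(s); actions.append(a); rewards.append(r); old_probs.append(p[a])
        s = s2
    # 2) Discounted returns, computed backwards over the rollout.
    G, returns = 0.0, []
    for r in reversed(rewards):
        G = r + GAMMA * G
        returns.append(G)
    returns = np.array(returns[::-1])
    adv = returns - returns.mean()               # crude baseline: batch-mean return
    # 3) PPO-style clipped policy update (single pass for brevity).
    for s_, a_, A, p_old in zip(states, actions, adv, old_probs):
        p = policy(s_)
        ratio = p[a_] / p_old
        clipped = np.clip(ratio, 1 - EPS, 1 + EPS)
        if ratio * A <= clipped * A:             # unclipped term is active: follow its gradient
            logits[s_] += LR * A * ratio * (np.eye(2)[a_] - p)

print("P(transmit | aligned)  =", round(policy(1)[0], 2))
print("P(realign  | misaligned) =", round(policy(0)[1], 2))
```

After training, the policy should favor transmitting while the beam is aligned and realigning once it drifts, which is the queued-data-versus-realignment balance Chakareski describes, learned purely from experience.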

“We are extending this work further, to introduce more efficient AI methods that may reduce the time needed for training, and to reduce the complexity of the method so that it can be deployed more easily … You're basically using a data-driven approach to learn what your optimal decision policy is in a given context and for a specific problem.

“We are working on mathematical methods to improve the efficiency of converging to those optimal policies,” Chakareski continued. “Basically, machine learning works in batches. You take some data, you try to adjust the parameters of the neural network, aiming to minimize some objective or cost function.”
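The batch-based parameter adjustment Chakareski describes is generic to machine learning and can be sketched in a few lines. This is plain minibatch gradient descent on an invented toy cost function (fitting a line), not the team's model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data from y = 2x + 1 plus noise: a toy stand-in for any learning task.
x = rng.uniform(-1, 1, 1000)
y = 2.0 * x + 1.0 + rng.normal(0, 0.05, 1000)

w, b, lr = 0.0, 0.0, 0.1
for epoch in range(50):
    idx = rng.permutation(1000)
    for start in range(0, 1000, 32):             # take the data in batches of 32
        batch = idx[start:start + 32]
        err = (w * x[batch] + b) - y[batch]
        # Adjust parameters along the gradient of the mean-squared-error cost.
        w -= lr * 2.0 * np.mean(err * x[batch])
        b -= lr * 2.0 * np.mean(err)

print(f"learned w={w:.2f}, b={b:.2f}")           # should land near 2 and 1
```

Each batch nudges the parameters to shrink the cost function, exactly the "take some data, adjust the parameters, minimize some objective" loop in the quote; the research question is how to make that convergence faster and cheaper.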