- IEEE Access
The fifth generation (5G) of wireless communications has driven advancements in technologies such as large and distributed antenna arrays, ultra-dense networks, software-defined networks, and network virtualization. However, achieving the higher level of automation needed for ultra-low latency and ultra-high reliability in beyond-5G applications requires extensive research on machine learning applied to wireless communications. Consequently, learning techniques are expected to take center stage in the sixth generation of wireless communications to cope with stringent application requirements. This paper studies the practical limitations of these learning methods in the context of resource management in a non-stationary radio environment. Based on these limitations, we carefully design and propose supervised, unsupervised, and reinforcement learning models that support a rate-maximization objective under user mobility. We also study how practical system constraints, such as latency and reliability, affect rate maximization with deep learning models. For consistent evaluation in the non-stationary environment, we present a generic dataset generation method that benchmarks the learning models against traditional optimal resource management solutions. Our results indicate that the learning models face practical training-related challenges that limit their applicability: they require an environment-specific design to match the accuracy of an optimal algorithm. Such an approach is unrealistic in practice because of the high resource cost of frequent retraining.