Publication


A Study on Deep Learning for Latency Constraint Applications in Beyond 5G Wireless Systems

Authors:
Sritharan, S.; Sumedha Weligam, H.; Gacanin, H.
Journal:
IEEE Access
Volume:
8
Page(s):
218037-218061
Date:
Nov. 2020
ISSN:
2169-3536
DOI:
10.1109/ACCESS.2020.3040133
hsb:
RWTH-2021-00393
Language:
English

Abstract

The fifth generation (5G) of wireless communications has led to many advancements in technologies such as large and distributed antenna arrays, ultra-dense networks, software-based networks, and network virtualization. However, the need for a higher level of automation to establish hyper-low latency and hyper-high reliability for beyond-5G applications requires extensive research on machine learning with applications in wireless communications. Learning techniques will therefore take center stage in the sixth generation of wireless communications to cope with the stringent application requirements. This paper studies the practical limitations of these learning methods in the context of resource management in a non-stationary radio environment. Based on the practical limitations, we carefully design and propose supervised, unsupervised, and reinforcement learning models to support a rate maximization objective under user mobility. We study the effects of practical system constraints, such as latency and reliability, on rate maximization with deep learning models. For common testing in the non-stationary environment, we present a generic dataset generation method to benchmark the different learning models against traditional optimal resource management solutions. Our results indicate that learning models face practical challenges related to training that limit their applications. These models need an environment-specific design to reach the accuracy of an optimal algorithm. Such an approach is not realistic in practice due to the high resource requirements of frequent retraining.
