On Adjusting Data Throughput in IoT Networks: A Deep Reinforcement Learning-Based Game Approach
Abstract
In this work, we investigate the adjustment of nodes’ sending rates in IPv6 over Low-power Wireless Personal Area Networks (6LoWPAN). 6LoWPAN enables low-power devices to connect to the Internet through an Internet of Things (IoT) network. In such a network, nodes compete for the shared bandwidth in order to deliver their sensed data as quickly as possible to a central node (access point, cloud server, aggregator, etc.). In the absence of an optimal sharing policy, however, this competitive behavior can directly affect the network’s quality of service and degrade its performance in terms of node throughput (sending rate), network latency, and node energy consumption. To overcome this, we propose a new non-cooperative game-based scheme, called DeepGame, in which each IoT device acts as a player requesting a high data throughput. DeepGame adjusts each node’s throughput based on four main criteria: the node’s data-rate preference, its priority in the IoT network, the quality of its data, and its remaining energy. Moreover, a multi-agent deep reinforcement learning model is built in a federated way on top of our game model, enabling nodes (agents) to learn the optimal action at each step of the game and hence reach the Nash equilibrium. We use the Cooja emulator on top of Contiki OS to implement our game-based model, and we evaluate and validate DeepGame over two different medium access techniques, CSMA/CA and TDMA. Numerical results, with tight confidence intervals, show that when leveraging the TDMA access technique, our scheme not only converges quickly to the Nash equilibrium but also improves IoT network performance, including node energy consumption, node throughput, and network overhead, compared to other schemes.
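To make the game-and-learning loop described above concrete, the following is a minimal, self-contained Python sketch: each node is a player that picks a sending rate, receives a utility combining the four criteria named in the abstract (data-rate preference, priority, data quality, remaining energy), and updates a tabular Q-learning policy as a simplified stand-in for the paper’s federated multi-agent deep reinforcement learning model. The weighted-sum utility, all numeric values, and the tabular-learning substitution are illustrative assumptions, not the authors’ actual formulation.

```python
import random

# Candidate sending rates and shared channel capacity (kbps) -- assumed values.
RATES = [10, 20, 40, 80]
CAPACITY = 150

def utility(rate, others_sum, pref, priority, quality, energy):
    """Assumed weighted-sum payoff over the four criteria from the abstract."""
    congestion = max(0.0, rate + others_sum - CAPACITY)   # exceeding shared capacity
    return (0.4 * priority * quality * rate                # value of delivered data
            - 0.3 * abs(rate - pref)                       # deviation from preferred rate
            - 0.2 * (rate / max(energy, 1e-6))             # energy cost of transmitting
            - 0.1 * congestion)                            # congestion penalty

class NodeAgent:
    """Tabular Q-learning agent: a simple stand-in for the deep RL player."""
    def __init__(self, pref, priority, quality, energy,
                 alpha=0.1, gamma=0.9, eps=0.2):
        self.pref, self.priority, self.quality, self.energy = pref, priority, quality, energy
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.q = [0.0] * len(RATES)   # one Q-value per candidate rate
        self.action = 0

    def act(self):
        if random.random() < self.eps:                              # explore
            self.action = random.randrange(len(RATES))
        else:                                                       # exploit
            self.action = max(range(len(RATES)), key=lambda i: self.q[i])
        return RATES[self.action]

    def learn(self, reward):
        best_next = max(self.q)
        self.q[self.action] += self.alpha * (reward + self.gamma * best_next
                                             - self.q[self.action])

# Three heterogeneous nodes; parameters are illustrative.
agents = [NodeAgent(pref=40, priority=1.0, quality=0.9, energy=0.8),
          NodeAgent(pref=20, priority=0.5, quality=0.7, energy=0.5),
          NodeAgent(pref=80, priority=0.8, quality=0.6, energy=0.3)]

# Repeated play of the rate-selection game: each node learns from its own utility.
for step in range(2000):
    rates = [a.act() for a in agents]
    total = sum(rates)
    for a, r in zip(agents, rates):
        a.learn(utility(r, total - r, a.pref, a.priority, a.quality, a.energy))

print("Learned rates (kbps):",
      [RATES[max(range(len(RATES)), key=lambda i: a.q[i])] for a in agents])
```

In this toy setting the per-node policies typically settle on rates that balance individual preference against the shared-capacity penalty, which is the kind of stable operating point the paper characterizes as a Nash equilibrium; the actual scheme learns this with deep networks trained in a federated manner over CSMA/CA or TDMA access.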