Cooperative edge caching via multi-agent reinforcement learning in fog radio access networks

Abstract

In this paper, the cooperative edge caching problem in fog radio access networks (F-RANs) is investigated. To minimize the content transmission delay, we formulate the cooperative caching problem as an optimization problem whose goal is to find the globally optimal caching strategy. Considering that this problem is NP-hard, a multi-agent reinforcement learning (MARL)-based cooperative caching scheme is proposed. The proposed scheme deploys a double deep Q-network (DDQN) at every fog access point (F-AP) and introduces a communication process into the multi-agent system. Each F-AP records the historical caching strategies of its associated F-APs as the observations of the communication procedure. By exchanging these observations, the F-APs can cooperate to determine the globally optimal caching strategy. Simulation results show that the proposed MARL-based cooperative caching scheme achieves remarkable performance in reducing the content transmission delay compared with the benchmark schemes.
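The abstract gives no implementation details, so the following is only a minimal sketch of how one such per-F-AP agent could be organized: each agent runs a DDQN and concatenates its local cache state with the exchanged historical caching strategies of its neighbouring F-APs to form its observation. The class names (`QNetwork`, `FAPAgent`), the use of PyTorch, and all dimensions and hyper-parameters are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a per-F-AP DDQN agent whose observation includes the caching
# strategies exchanged with neighbouring F-APs (illustrative assumptions only).
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim


class QNetwork(nn.Module):
    """Maps (local state || neighbour observations) to Q-values over caching actions."""
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class FAPAgent:
    """One DDQN agent per F-AP; neighbour caching histories enter its observation."""
    def __init__(self, local_dim: int, neighbour_dim: int, num_actions: int,
                 gamma: float = 0.99, lr: float = 1e-3, buffer_size: int = 10_000):
        state_dim = local_dim + neighbour_dim
        self.num_actions = num_actions
        self.gamma = gamma
        self.online = QNetwork(state_dim, num_actions)
        self.target = QNetwork(state_dim, num_actions)
        self.target.load_state_dict(self.online.state_dict())
        self.optim = optim.Adam(self.online.parameters(), lr=lr)
        self.buffer: deque = deque(maxlen=buffer_size)

    def observe(self, local_state, neighbour_strategies) -> torch.Tensor:
        """Concatenate the local cache state with exchanged neighbour caching strategies."""
        return torch.cat([torch.as_tensor(local_state, dtype=torch.float32),
                          torch.as_tensor(neighbour_strategies, dtype=torch.float32)])

    def act(self, obs: torch.Tensor, epsilon: float = 0.1) -> int:
        """Epsilon-greedy choice of a caching action."""
        if random.random() < epsilon:
            return random.randrange(self.num_actions)
        with torch.no_grad():
            return int(self.online(obs).argmax().item())

    def store(self, obs, action, reward, next_obs, done):
        self.buffer.append((obs, action, reward, next_obs, done))

    def learn(self, batch_size: int = 32):
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        obs, act, rew, nxt, done = zip(*batch)
        obs = torch.stack(obs)
        nxt = torch.stack(nxt)
        act = torch.tensor(act).unsqueeze(1)
        rew = torch.tensor(rew, dtype=torch.float32)
        done = torch.tensor(done, dtype=torch.float32)

        q = self.online(obs).gather(1, act).squeeze(1)
        with torch.no_grad():
            # Double DQN: the online network selects the next action,
            # the target network evaluates it.
            best = self.online(nxt).argmax(dim=1, keepdim=True)
            q_next = self.target(nxt).gather(1, best).squeeze(1)
            target = rew + self.gamma * (1.0 - done) * q_next

        loss = nn.functional.mse_loss(q, target)
        self.optim.zero_grad()
        loss.backward()
        self.optim.step()

    def sync_target(self):
        """Periodically copy online weights into the target network."""
        self.target.load_state_dict(self.online.state_dict())
```

In this sketch the observation exchange is modelled simply as extra input features (the neighbours' recorded caching strategies); the reward would be defined from the resulting content transmission delay, which the abstract identifies as the quantity to be minimized.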
