A Dynamic Service Placement Based on Deep Reinforcement Learning in Mobile Edge Computing
dc.creator | Lu, Shuaibing | |
dc.creator | Wu, Jie | |
dc.creator | Shi, Jiamei | |
dc.creator | Lu, Pengfan | |
dc.creator | Fang, Juan | |
dc.creator | Liu, Haiming | |
dc.date.accessioned | 2023-12-21T19:37:06Z | |
dc.date.available | 2023-12-21T19:37:06Z | |
dc.date.issued | 2022-02-24 | |
dc.identifier.citation | Lu, S.; Wu, J.; Shi, J.; Lu, P.; Fang, J.; Liu, H. A Dynamic Service Placement Based on Deep Reinforcement Learning in Mobile Edge Computing. Network 2022, 2, 106-122. https://doi.org/10.3390/network2010008 | |
dc.identifier.issn | 2673-8732 | |
dc.identifier.uri | http://hdl.handle.net/20.500.12613/9393 | |
dc.description.abstract | Mobile edge computing is an emerging paradigm that supplies computation, storage, and networking resources between end devices and traditional cloud data centers. As more resources are invested, users demand a higher quality of service (QoS). However, it is nontrivial to maintain service performance under the erratic activity of end users. In this paper, we focus on the service placement problem under the continuous provisioning scenario in mobile edge computing for multiple mobile users. We propose a novel dynamic service placement framework based on deep reinforcement learning (DSP-DRL) to optimize the total delay without violating the constraints on physical resources and operational costs. We first formulate service placement under migration conflicts as a mixed-integer linear programming (MILP) problem. Then, we propose a new migration conflict resolution mechanism that avoids invalid states and approximates the policy in the decision module according to an introduced migration feasibility factor. Extensive evaluations demonstrate that the proposed dynamic service placement framework outperforms baselines in terms of efficiency and overall latency. | |
dc.format.extent | 17 pages | |
dc.language | English | |
dc.language.iso | eng | |
dc.relation.ispartof | Faculty/ Researcher Works | |
dc.relation.haspart | Network, Vol. 2, Iss. 1 | |
dc.relation.isreferencedby | MDPI | |
dc.rights | Attribution CC BY | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | Dynamic service placement | |
dc.subject | Delay optimization | |
dc.subject | Cost efficiency | |
dc.subject | Mobile edge computing | |
dc.title | A Dynamic Service Placement Based on Deep Reinforcement Learning in Mobile Edge Computing | |
dc.type | Text | |
dc.type.genre | Journal article | |
dc.contributor.group | Center for Networked Computing (Temple University) | |
dc.description.department | Computer and Information Sciences | |
dc.relation.doi | http://dx.doi.org/10.3390/network2010008 | |
dc.ada.note | For Americans with Disabilities Act (ADA) accommodation, including help with reading this content, please contact scholarshare@temple.edu | |
dc.description.schoolcollege | Temple University. College of Science and Technology | |
dc.temple.creator | Wu, Jie | |
refterms.dateFOA | 2023-12-21T19:37:06Z |