References
[1] H. Xie, Z. Qin, G. Y. Li, and B.-H. Juang, "Deep learning enabled semantic communication systems," IEEE Trans. Signal Process., vol. 69, pp. 2663–2675, 2021.
[2] J. Noh, J. Park, and S.-L. Kim, "Deep reinforcement learning for resource allocation in semantic communication networks," IEEE Commun. Lett., 2024.
[3] H. Xie, Z. Qin, and G. Y. Li, "Hybrid digital-analog semantic communication with deep learning," IEEE Trans. Commun., 2025.
[4] Y. Zhang, D. Li, and Y. Qiao, "Resource allocation for semantic communication: A survey and future directions," IEEE Commun. Surveys Tuts., 2026.
[5] R. Lowe, Y. Wu, A. Tamar, J. Harb, P. Abbeel, and I. Mordatch, "Multi-agent actor-critic for mixed cooperative-competitive environments," in Proc. Adv. Neural Inf. Process. Syst. (NeurIPS), 2017, pp. 6379–6390.
[6] A. M. Brandenburger and B. J. Nalebuff, Co-opetition. New York, NY, USA: Currency Doubleday, 1996.
[7] E. Parzy and B. Bogucka, "Coopetition in OFDMA-based cognitive radio networks," IEEE Commun. Lett., vol. 17, no. 7, pp. 1380–1383, Jul. 2013.
[8] C. E. Shannon, "A mathematical theory of communication," Bell Syst. Tech. J., vol. 27, no. 3, pp. 379–423, Jul. 1948.
[9] M. Wang, L. Chen, and J. Li, "SoLPO: Social reward-guided multi-agent reinforcement learning for cooperative autonomous driving," in Proc. IEEE Intell. Transp. Syst. Conf. (ITSC), 2023.
[10] Z. Yang, J. Hu, and Y. Chen, "Stackelberg-MADDPG: Hierarchical multi-agent reinforcement learning with Stackelberg game structure," in Proc. Int. Conf. Auton. Agents Multi-Agent Syst. (AAMAS), 2023.
[11] X. He, H. Jiang, and Y. Song, "Multi-agent deep reinforcement learning for wireless network resource management: A cooperative approach," IEEE Trans. Wireless Commun., 2024.