Learning to Play Robot Soccer from Partial Observations

Published in 23rd International Symposium on Measurement and Control in Robotics (ISMCR), 2020

Recommended citation: M. Szemenyei and P. Reizinger. (2020). "Learning to Play Robot Soccer from Partial Observations." 23rd International Symposium on Measurement and Control in Robotics (ISMCR).

Abstract

Reinforcement learning (RL) has undergone unprecedented evolution in the last few years, managing to surpass the human baseline in several high-profile board and computer games. Despite these successes, deep neural agents have remained absent from several challenging fields, such as robot soccer, a rather complex task involving simultaneous cooperation and competition between multiple agents. In this paper, we investigate the feasibility of playing robot soccer via deep reinforcement learning using an environment of our own making. This environment provides the agent with imperfect and incomplete observations to simulate the errors of a real vision pipeline. To improve the agent's performance on this task, an intrinsic reward module based on self-supervised learning is proposed. Our experiments show that the proposed extension significantly improves the agent.

Citation

M. Szemenyei and P. Reizinger, “Learning to Play Robot Soccer from Partial Observations,” 2020 23rd International Symposium on Measurement and Control in Robotics (ISMCR), 2020, pp. 1-6, doi: 10.1109/ISMCR51255.2020.9263715.

@INPROCEEDINGS{szemenyei2020partialobs,
    author={Szemenyei, Márton and Reizinger, Patrik},
    booktitle={2020 23rd International Symposium on Measurement and Control in Robotics (ISMCR)},
    title={Learning to Play Robot Soccer from Partial Observations},
    year={2020},
    volume={},
    number={},
    pages={1-6},
    doi={10.1109/ISMCR51255.2020.9263715}
}