Abstract
Imitation learning requires expert data to train agents on a task. Most often, this learning approach suffers from a lack of available data, which results in each technique being evaluated on its own dataset. Creating datasets is a cumbersome process that requires researchers to train expert agents from scratch, record their interactions, and test each benchmark method on the newly created data. Moreover, creating a new dataset for each new technique leads to inconsistency in the evaluation process, since datasets can vary drastically in state and action distributions. In response, this work addresses these issues by introducing Imitation Learning Datasets, a toolkit that provides: (i) curated expert policies with multithreaded support for faster dataset creation; (ii) readily available datasets and techniques with precise measurements; and (iii) shared implementations of common imitation learning techniques.
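The abstract's first contribution is multithreaded expert dataset creation. The sketch below is a minimal illustration of that general workflow, not the toolkit's actual API: several workers roll out an expert policy in parallel over different seeds and merge the transitions into one dataset. It assumes a Gymnasium environment (`CartPole-v1`) and a hypothetical `expert_policy` placeholder standing in for a curated expert.

```python
# Hedged sketch of parallel expert-trajectory collection.
# NOTE: this is an illustration only; it does not reproduce the
# Imitation Learning Datasets API.
from concurrent.futures import ThreadPoolExecutor

import gymnasium as gym


def expert_policy(observation):
    # Placeholder expert: a real toolkit would load a trained policy here.
    return 0  # trivial action, for illustration only


def collect_episode(seed: int, env_id: str = "CartPole-v1"):
    """Roll out one expert episode and return its transitions."""
    env = gym.make(env_id)
    observation, _ = env.reset(seed=seed)
    transitions = []
    done = False
    while not done:
        action = expert_policy(observation)
        next_observation, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        transitions.append((observation, action, reward, done))
        observation = next_observation
    env.close()
    return transitions


if __name__ == "__main__":
    # Each worker records episodes with a different seed, so the dataset
    # covers a wider slice of the state distribution.
    with ThreadPoolExecutor(max_workers=4) as pool:
        episodes = list(pool.map(collect_episode, range(16)))

    dataset = [t for episode in episodes for t in episode]
    print(f"Collected {len(dataset)} transitions from {len(episodes)} episodes")
```

Running several collectors concurrently is what makes dataset creation faster than training and recording each expert sequentially; the merged transitions can then be saved and reused across benchmarked techniques.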
Original language | English
---|---
Title of host publication | The 23rd International Conference on Autonomous Agents and Multi-Agent Systems
Publisher | International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
Number of pages | 3
Publication status | E-pub ahead of print - 22 Feb 2024
Event | AAMAS (Autonomous Agents and Multi-Agent Systems) Conference - Duration: 30 May 2012 → …
Conference
Conference | AAMAS (Autonomous Agents and Multi-Agent Systems) Conference
---|---
Period | 30/05/2012 → …
Keywords
- Imitation Learning
- Benchmarking
- Dataset