Curiosity-driven Exploration by Self-supervised Prediction



"In many real-world scenarios, rewards extrinsic to the agent are extremely sparse, or absent altogether. In such cases, curiosity can serve as an intrinsic reward signal to enable the agent to explore its environment and learn skills that might be useful later in its life. We formulate curiosity as the error in an agent’s ability to predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model. Our formulation scales to high-dimensional continuous state spaces like images, bypasses the difficulties of directly predicting pixels, and, critically, ignores the aspects of the environment that cannot affect the agent. The proposed approach is evaluated in two environments: VizDoom and Super Mario Bros. Three broad settings are investigated: 1) sparse extrinsic reward, where curiosity allows for far fewer interactions with the environment to reach the goal; 2) exploration with no extrinsic reward, where curiosity pushes the agent to explore more efficiently; and 3) generalization to unseen scenarios (e.g. new levels of the same game) where the knowledge gained from earlier experience helps the agent explore new places much faster than starting from scratch."


1 Answer


In many real-world scenarios, rewards extrinsic to the agent are extremely sparse or absent altogether; babies, for example, play and explore driven purely by intrinsic motivation or curiosity.

To allow agents to learn under similar conditions, this paper introduces the following modules (a code sketch of how they fit together follows the list):

  • Inverse Model: A neural net is trained to predict the action taken, given features of the current state and the next state. The features are learned as a by-product of this task, so they encode only the aspects of the environment that the agent's actions can affect.
  • Forward Model: A second neural net takes the features of the current state together with the chosen action and predicts the features of the next state.
  • RL Agent: An off-the-shelf RL agent receives the forward model's prediction error as an intrinsic reward and chooses actions that maximize it. This prediction error plays the role of curiosity: the agent is drawn toward states whose consequences it cannot yet predict.

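A minimal PyTorch sketch of such an intrinsic curiosity module is shown below. The layer sizes, the scaling factor eta, and all class/variable names are illustrative assumptions, not the paper's exact architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ICM(nn.Module):
        """Sketch of an intrinsic curiosity module (assumed architecture)."""
        def __init__(self, obs_dim, n_actions, feat_dim=64, eta=0.01):
            super().__init__()
            self.eta = eta
            self.n_actions = n_actions
            # Feature encoder phi(s), shared by the inverse and forward models.
            self.encoder = nn.Sequential(
                nn.Linear(obs_dim, 128), nn.ReLU(),
                nn.Linear(128, feat_dim), nn.ReLU(),
            )
            # Inverse model: predicts the action taken from phi(s_t) and phi(s_{t+1}).
            self.inverse_model = nn.Sequential(
                nn.Linear(2 * feat_dim, 128), nn.ReLU(),
                nn.Linear(128, n_actions),
            )
            # Forward model: predicts phi(s_{t+1}) from phi(s_t) and the action.
            self.forward_model = nn.Sequential(
                nn.Linear(feat_dim + n_actions, 128), nn.ReLU(),
                nn.Linear(128, feat_dim),
            )

        def forward(self, obs, next_obs, action):
            phi = self.encoder(obs)            # phi(s_t)
            phi_next = self.encoder(next_obs)  # phi(s_{t+1})
            action_onehot = F.one_hot(action, self.n_actions).float()

            # Inverse-model loss: training the encoder on this task keeps only
            # features that help predict the agent's own action.
            action_logits = self.inverse_model(torch.cat([phi, phi_next], dim=-1))
            inverse_loss = F.cross_entropy(action_logits, action)

            # Forward-model loss: prediction error in the learned feature space.
            phi_next_pred = self.forward_model(torch.cat([phi, action_onehot], dim=-1))
            forward_loss = F.mse_loss(phi_next_pred, phi_next.detach())

            # Intrinsic (curiosity) reward: scaled per-sample prediction error.
            intrinsic_reward = self.eta * 0.5 * (phi_next_pred - phi_next).pow(2).sum(dim=-1)
            return intrinsic_reward.detach(), inverse_loss, forward_loss
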
To make the agent perform a given task, a corresponding extrinsic reward can be combined with the intrinsic reward.
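For instance (the names and the loss weighting below are illustrative choices, not values taken from the paper), the two reward signals and the two module losses could be combined per step like this:

    # Hypothetical usage inside a rollout loop; `icm`, `obs`, `next_obs`,
    # `action`, and `extrinsic_reward` are assumed to exist already.
    r_int, inv_loss, fwd_loss = icm(obs, next_obs, action)
    total_reward = extrinsic_reward + r_int     # fed to the RL agent
    icm_loss = 0.8 * inv_loss + 0.2 * fwd_loss  # relative weighting is a tunable choice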
