ValueError: RolloutWorker has no input_reader object


I am using RLlib and I am trying to run APEX_DDPG with tune on a multi-agent environment with Ray v1.10 on Python 3.9.6. I get the following error:

ValueError: RolloutWorker has no input_reader object! Cannot call sample(). You can try setting create_env_on_driver to True.

I found the place where the error is raised in the docs, in the RolloutWorker class definition:

if self.fake_sampler and self.last_batch is not None:
    return self.last_batch
elif self.input_reader is None:
    raise ValueError(
        "RolloutWorker has no input_reader object! "
        "Cannot call sample(). You can try setting "
        "create_env_on_driver to True.")

But I do not know how to fix it, since I am still fairly new to RLlib.
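
From the error text, I assume the suggested workaround means adding a boolean flag to the trainer config, something like this (my guess, since I have not confirmed it fixes the problem):

config = {
    # Key name taken from the error message itself; lets the driver
    # (local worker) create its own copy of the env.
    "create_env_on_driver": True,
}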

1 Answer

oliverwang15:

I'm also new to Ray and RLlib, and I ran into this error today. In my case, I had forgotten to add my env to the config. Try adding your environment to your config before calling ApexDDPGTrainer(config=config) or tune.run("APEX_DDPG", config=config).

The following example is adapted from Ray's official docs:

import gym, ray
from ray.rllib.agents import ppo

class MyEnv(gym.Env):
    def __init__(self, env_config):
        # Placeholder spaces; replace with the ones your problem needs.
        self.action_space = gym.spaces.Discrete(2)
        self.observation_space = gym.spaces.Box(-1.0, 1.0, shape=(4,))

    def reset(self):
        return self.observation_space.sample()  # initial observation

    def step(self, action):
        obs = self.observation_space.sample()
        reward, done, info = 0.0, True, {}
        return obs, reward, done, info

ray.init()
trainer = ppo.PPOTrainer(env=MyEnv, config={
    "env_config": {},  # passed to the env class constructor
})
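
If you run through tune instead of building the trainer directly, the env goes into the config dict; a minimal sketch, assuming the MyEnv class above:

from ray import tune

# "env" inside the config plays the same role as the env= keyword above.
tune.run("PPO", config={"env": MyEnv, "env_config": {}})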

You may also register your custom environment first:

from ray.tune.registry import register_env

def env_creator(env_config):
    return MyEnv(env_config)  # return an env instance

register_env("my_env", env_creator)
trainer = ppo.PPOTrainer(env="my_env")
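
For the question's setup (APEX-DDPG with tune), the same idea would look roughly like this; a sketch assuming the "my_env" registration above and the Ray 1.x trainer name:

import ray
from ray import tune

ray.init()
tune.run(
    "APEX_DDPG",             # trainer name registered by RLlib
    config={
        "env": "my_env",     # the missing key that triggers the error
        "env_config": {},
        "num_workers": 2,
    },
    stop={"training_iteration": 1},
)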