I am using RLlib with Ray v1.10 on Python 3.9.6, and I am trying to run APEX_DDPG with Tune on a multi-agent environment. I get the following error:
```
ValueError: RolloutWorker has no input_reader object! Cannot call sample(). You can try setting create_env_on_driver to True.
```
I found the source of the error in the docs, in the `RolloutWorker` class definition:
```python
if self.fake_sampler and self.last_batch is not None:
    return self.last_batch
elif self.input_reader is None:
    raise ValueError("RolloutWorker has no input_reader object! "
                     "Cannot call sample(). You can try setting "
                     "create_env_on_driver to True.")
```
But I do not know how to solve it, since I am a little bit new to RLlib.
I'm also new to Ray and RLlib, and I ran into this error today. My problem was that I forgot to add my `env` to `config`. You may try adding your environment to your `config` before using `ApexDDPGTrainer(config=config)` or `tune.run("APEX_DDPG", config=config)`. The following is an example from Ray's official docs:
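(A minimal sketch along those lines for the Ray 1.x API; the environment id "Pendulum-v1", worker count, and stopping criterion are placeholders I chose, not necessarily the exact values from the docs.)

```python
import ray
from ray import tune

ray.init()

config = {
    # The key fix: tell RLlib which environment to train on.
    # "Pendulum-v1" is only a placeholder; use your own (multi-agent) env here.
    "env": "Pendulum-v1",
    "num_workers": 2,
    "framework": "torch",  # or "tf"
}

# Run through Tune ...
tune.run("APEX_DDPG", config=config, stop={"training_iteration": 10})

# ... or build the trainer directly (Ray 1.x API):
# from ray.rllib.agents.ddpg import ApexDDPGTrainer
# trainer = ApexDDPGTrainer(config=config)
# trainer.train()
```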
You may also register your custom environment first:
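Something like this should work if your environment is a custom class rather than a registered Gym id (a sketch; `MyMultiAgentEnv` and its import path are stand-ins for your own environment class):

```python
from ray import tune
from ray.tune.registry import register_env

# "MyMultiAgentEnv" is a stand-in for your own MultiAgentEnv subclass.
from my_project.envs import MyMultiAgentEnv  # hypothetical import


def env_creator(env_config):
    # env_config is whatever you put under config["env_config"].
    return MyMultiAgentEnv(env_config)


# Register the environment under a name, then refer to that name in the config.
register_env("my_multi_agent_env", env_creator)

config = {
    "env": "my_multi_agent_env",
    "env_config": {},  # forwarded to env_creator
}

tune.run("APEX_DDPG", config=config, stop={"training_iteration": 10})
```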