I'm using the TF-Agents library for reinforcement learning, and I would like to take into account that, for a given state, some actions are invalid.
How can this be implemented?
Should I define an "observation_and_action_constraint_splitter" function when creating the DqnAgent?
If yes: do you know any tutorial on this?
Yes, you need to define the function, pass it to the agent, and also change the environment output appropriately so that the function can work with it. I am not aware of any tutorials on this, but you can look at this repo I have been working on.
Note that it is very messy, a lot of the files in there are actually not being used, and the docstrings are terrible and often wrong (I forked this and didn't bother to sort everything out). However, it is definitely working correctly. The parts that are relevant to your question are:
- `rl_env.py`, in `HanabiEnv.__init__`, where `_observation_spec` is defined as a dictionary of `ArraySpec`s (here). You can ignore `game_obs`, `hand_obs` and `knowledge_obs`, which are only used to run the environment verbosely; they are not fed to the agent.
- `rl_env.py`, in `HanabiEnv._reset` at line 110, which gives an idea of how the timestep observations are constructed and returned from the environment. `legal_moves` are passed through an `np.logical_not` because my specific environment marks legal moves with 0 and illegal ones with -inf, whereas TF-Agents expects a 1/True for a legal move; my vector, cast to bool, would therefore be the exact opposite of what it should be for TF-Agents. (A self-contained sketch of such an environment follows after this list.)
- These observations are then fed to the `observation_and_action_constraint_splitter` in `utility.py` (here), which returns a tuple containing the observations and the action constraints. Note that `game_obs`, `hand_obs` and `knowledge_obs` are implicitly thrown away (and not fed to the agent, as previously mentioned).
- Finally, this `observation_and_action_constraint_splitter` is fed to the agent in `utility.py`, in the `create_agent` function at line 198 for example (see the second sketch below).
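Since the repo is hard to navigate, here is a minimal, self-contained sketch of the environment side. The class `MaskedToyEnv`, the constants `NUM_ACTIONS`/`STATE_SIZE`, and the step/reward logic are all made up for illustration; they are not the repo's actual code. The important part is that the observation spec is a dict containing both the real observation and a `legal_moves` mask, with 1 marking a legal action (so, unlike my environment, no `np.logical_not` is needed):

```python
import numpy as np
from tf_agents.environments import py_environment
from tf_agents.specs import array_spec
from tf_agents.trajectories import time_step as ts

NUM_ACTIONS = 4
STATE_SIZE = 8

class MaskedToyEnv(py_environment.PyEnvironment):
    """Toy environment whose observation is a dict of state + legal-move mask."""

    def __init__(self):
        super().__init__()
        self._action_spec = array_spec.BoundedArraySpec(
            shape=(), dtype=np.int32, minimum=0, maximum=NUM_ACTIONS - 1,
            name='action')
        self._observation_spec = {
            'observations': array_spec.ArraySpec(
                shape=(STATE_SIZE,), dtype=np.float32, name='observations'),
            # 1 marks a legal action, 0 an illegal one; this is the 1/True
            # convention TF-Agents expects, so no np.logical_not is needed.
            'legal_moves': array_spec.BoundedArraySpec(
                shape=(NUM_ACTIONS,), dtype=np.int32, minimum=0, maximum=1,
                name='legal_moves'),
        }
        self._state = np.zeros(STATE_SIZE, dtype=np.float32)
        self._steps = 0
        self._episode_ended = False

    def action_spec(self):
        return self._action_spec

    def observation_spec(self):
        return self._observation_spec

    def _legal_moves(self):
        # Toy rule: actions whose index has the same parity as the step
        # counter are legal; everything else is masked out.
        mask = np.zeros(NUM_ACTIONS, dtype=np.int32)
        mask[self._steps % 2::2] = 1
        return mask

    def _observe(self):
        return {'observations': self._state.copy(),
                'legal_moves': self._legal_moves()}

    def _reset(self):
        self._state = np.zeros(STATE_SIZE, dtype=np.float32)
        self._steps = 0
        self._episode_ended = False
        return ts.restart(self._observe())

    def _step(self, action):
        if self._episode_ended:
            return self.reset()
        self._steps += 1
        self._state[action % STATE_SIZE] += 1.0
        if self._steps >= 10:
            self._episode_ended = True
            return ts.termination(self._observe(), reward=1.0)
        return ts.transition(self._observe(), reward=0.0)
```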
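And here is a sketch of the splitter itself and of passing it to the `DqnAgent`, again using the hypothetical environment above rather than my `utility.py`. The splitter just takes the full observation and returns a `(network_input, mask)` tuple, and `DqnAgent` accepts it through its `observation_and_action_constraint_splitter` constructor argument:

```python
import tensorflow as tf
from tf_agents.agents.dqn import dqn_agent
from tf_agents.environments import tf_py_environment
from tf_agents.networks import q_network

def observation_and_action_constraint_splitter(observation):
    # Split the dict observation into (network input, action mask).
    # Any other entries in the dict are simply dropped here and never
    # reach the agent.
    return observation['observations'], observation['legal_moves']

tf_env = tf_py_environment.TFPyEnvironment(MaskedToyEnv())

# Build the Q-network from the 'observations' sub-spec only, since that is
# all the network will see once the splitter has been applied.
q_net = q_network.QNetwork(
    tf_env.observation_spec()['observations'],
    tf_env.action_spec(),
    fc_layer_params=(64, 64))

agent = dqn_agent.DqnAgent(
    tf_env.time_step_spec(),
    tf_env.action_spec(),
    q_network=q_net,
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    observation_and_action_constraint_splitter=(
        observation_and_action_constraint_splitter))
agent.initialize()
```

With this in place, the agent's policies only ever sample or take the argmax over actions whose mask entry is 1, which is exactly the "invalid actions" behaviour you are after.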