How to prevent the eligibility trace in SARSA(lambda) with lambda = 1 from exploding for state-action pairs that are visited a huge number of times?

I was testing SARSA(lambda) with lambda = 1 on Windy Grid World. Because the trace decay factor gamma * lambda is 1 in my setup, the traces never decay within an episode; if exploration causes the same state-action pair to be visited many times before the goal is reached, its eligibility trace is incremented on every visit, grows without bound, and eventually everything overflows. How can this be avoided?
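For reference, here is a minimal sketch of the kind of accumulating-trace update I mean (the table sizes, step size, and function name are just placeholders, not my actual code):

```python
import numpy as np

# Sketch of one tabular SARSA(lambda) update with *accumulating* traces.
# With lam = 1 and gamma = 1 the decay factor gamma * lam equals 1, so a
# state-action pair that keeps being revisited gains +1 on every visit
# and its trace never shrinks within the episode.

def sarsa_accumulating_step(Q, e, s, a, r, s_next, a_next,
                            alpha=0.5, gamma=1.0, lam=1.0):
    delta = r + gamma * Q[s_next, a_next] - Q[s, a]  # TD error
    e[s, a] += 1.0           # accumulating trace: incremented on each visit
    Q += alpha * delta * e   # every pair is updated in proportion to its trace
    e *= gamma * lam         # decay step; a no-op when gamma = lam = 1

# Illustrative usage with placeholder table sizes.
Q = np.zeros((70, 4))
e = np.zeros((70, 4))
sarsa_accumulating_step(Q, e, s=0, a=1, r=-1.0, s_next=7, a_next=1)
```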
If I've understood your question correctly, the problem is that the trace for a given state-action pair gets incremented too many times. In that case, a potential solution is to use replacing traces instead of the classic accumulating traces.
The idea behind replacing traces is to reset the trace to a fixed value (typically 1) each time the state-action pair is visited, instead of incrementing it, so the trace can never exceed 1 no matter how often the pair is revisited.
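For concreteness, here is a minimal sketch of a tabular SARSA(lambda) step with replacing traces (the table sizes and names are illustrative, not taken from your code):

```python
import numpy as np

# Sketch of a tabular SARSA(lambda) step with *replacing* traces:
# the visited pair is reset to 1 instead of incremented, so no trace
# can ever exceed 1, however many times the pair is visited per episode.

def sarsa_replacing_step(Q, e, s, a, r, s_next, a_next,
                         alpha=0.5, gamma=1.0, lam=1.0):
    delta = r + gamma * Q[s_next, a_next] - Q[s, a]  # TD error
    # Optional variant (also discussed by Sutton & Barto): clear the traces
    # of the other actions in state s first, e.g. e[s, :] = 0.0
    e[s, a] = 1.0            # replacing trace: reset, never accumulated
    Q += alpha * delta * e
    e *= gamma * lam         # even with gamma = lam = 1, e stays bounded by 1

Q = np.zeros((70, 4))
e = np.zeros((70, 4))
sarsa_replacing_step(Q, e, s=0, a=1, r=-1.0, s_next=7, a_next=1)
```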
You can find more information in the classic Sutton & Barto book Reinforcement Learning: An Introduction, specifically in Section 7.8 of the first edition.