Actor-critic-with-stability-guarantee

Important

You're currently viewing the master branch of my Actor-critic-with-stability-guarantee fork. If you're interested in my Master's thesis, Stability guarantees for learning-based effort control in rigid robotics manipulators, please navigate to the rstaa2024 branch.

Conda environment

To keep the packages of different projects from interfering with each other, it is a good idea to install this repository inside a conda environment.

To create a conda environment with Python 3.6, run:

conda create -n test python=3.6

To activate the environment:

conda activate test

Installation

git clone https://github.com/hithmh/Actor-critic-with-stability-guarantee
cd Actor-critic-with-stability-guarantee
pip install numpy==1.16.3
pip install tensorflow==1.13.1
pip install tensorflow-probability==0.6.0
pip install opencv-python
pip install cloudpickle
pip install gym
pip install matplotlib
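
Optionally, you can sanity-check the pinned stack from inside the activated environment. The snippet below is not part of the original instructions; it only imports the main dependencies and prints their versions, which should match the pins above:

# Optional check, e.g. in a Python shell inside the activated environment
import numpy, tensorflow, tensorflow_probability

print(numpy.__version__)                   # expected: 1.16.3
print(tensorflow.__version__)              # expected: 1.13.1
print(tensorflow_probability.__version__)  # expected: 0.6.0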

You can then run main.py to train agents. By default, the hyperparameters are configured to train LAC on the CartPole environment. To train in other environments or with other algorithms, open variant.py and set the corresponding 'env_name' and 'algorithm_name' (see the sketch below).
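
For reference, the kind of edit this describes might look like the sketch below. The dictionary name and the option strings are assumptions made for illustration; check variant.py itself for the exact names it defines.

# Illustrative sketch of the selection in variant.py (names are assumed, not verbatim)
VARIANT = {
    'env_name': 'CartPole',        # environment to train on (the default mentioned above)
    'algorithm_name': 'LAC',       # algorithm to train (the paper's method)
    # ... the remaining hyperparameters stay as configured in the repository
}

After saving the change, start training with python main.py as described above.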

About

Codebase of "Actor-Critic Reinforcement Learning for Control with Stability Guarantee" by Han et al. 2020.