██████╗ ██╗   ██╗██████╗  ██████╗ ██████╗ ██╗     
██╔══██╗╚██╗ ██╔╝██╔══██╗██╔═══██╗██╔══██╗██║     
██████╔╝ ╚████╔╝ ██████╔╝██║   ██║██████╔╝██║     
██╔═══╝   ╚██╔╝  ██╔══██╗██║   ██║██╔══██╗██║     
██║        ██║   ██║  ██║╚██████╔╝██║  ██║███████╗
╚═╝        ╚═╝   ╚═╝  ╚═╝ ╚═════╝ ╚═╝  ╚═╝╚══════╝


PyroRL is a new reinforcement learning environment built for the simulation of wildfire evacuation. Check out the docs and the demo.

How to Use

First, install our package. Note that PyroRL requires Python version 3.8:

pip install pyrorl

To use our wildfire evacuation environment, define the dimensions of your grid, the locations of the populated areas, the evacuation paths, and which populated areas can use which paths. See the example below.
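As an illustration, these parameters might be set up with NumPy as in the sketch below. The grid size, coordinates, and path layout are placeholder values invented for this sketch, and the exact expected formats should be confirmed against the docs.

import numpy as np

# Placeholder grid dimensions (illustrative values only)
num_rows, num_cols = 10, 10

# Populated areas as (row, col) cells (assumed format; see the docs)
populated_areas = np.array([[1, 2], [4, 8], [6, 4], [8, 7]])

# Each path is a sequence of (row, col) cells forming an evacuation route (assumed format)
paths = np.array([
    [[1, 0], [1, 1]],
    [[2, 2], [3, 2], [4, 2], [4, 1], [4, 0]],
], dtype=object)

# Map each path index to the populated areas that may evacuate along it (assumed format)
paths_to_pops = {0: [[1, 2]], 1: [[1, 2]]}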

import gymnasium
import pyrorl  # registers the 'pyrorl/PyroRL-v0' environment with Gymnasium

# Create environment
kwargs = {
    'num_rows': num_rows,
    'num_cols': num_cols,
    'populated_areas': populated_areas,
    'paths': paths,
    'paths_to_pops': paths_to_pops
}
env = gymnasium.make('pyrorl/PyroRL-v0', **kwargs)

# Run a simple loop of the environment
env.reset()
for _ in range(10):

    # Sample a random action and step the environment
    action = env.action_space.sample()
    observation, reward, terminated, truncated, info = env.step(action)

    # Render environment and print reward
    env.render()
    print("Reward: " + str(reward))

A visualization compiled from multiple iterations of the environment is shown below. For more examples, check out the examples/ folder.

Example Visualization of PyroRL

For a more comprehensive tutorial, check out the quickstart page on our docs website.

How to Contribute

For information on how to contribute, check out our contribution guide.