rldurham.bipedal_walker.BipedalWalker

class rldurham.bipedal_walker.BipedalWalker(render_mode=None, hardcore=False)[source]

Bases: Env, EzPickle

## Description

This is a simple 4-joint walker robot environment. There are two versions:

- Normal, with slightly uneven terrain.
- Hardcore, with ladders, stumps, and pitfalls.

To solve the normal version, you need to get 300 points in 1600 time steps. To solve the hardcore version, you need 300 points in 2000 time steps.

A heuristic is provided for testing; it is also useful for obtaining demonstrations to learn from. To run the heuristic:

```
python gymnasium/envs/box2d/bipedal_walker.py
```

## Action Space

Actions are motor speed values in the [-1, 1] range for each of the 4 joints, at both hips and both knees.
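For orientation, a minimal sketch of sampling a random action (this assumes the standard Gymnasium registration "BipedalWalker-v3"; the space shown in the comment is that registration's action space):

```python
import gymnasium as gym

env = gym.make("BipedalWalker-v3")
print(env.action_space)             # Box(-1.0, 1.0, (4,), float32)

action = env.action_space.sample()  # random motor speeds for hips and knees
```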

## Observation Space

State consists of hull angle speed, angular velocity, horizontal speed, vertical speed, joint positions and joint angular speeds, leg contact with the ground, and 10 lidar rangefinder measurements. There are no coordinates in the state vector.
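To make the layout concrete, a short sketch inspecting the observation vector (the 24-dimensional shape matches the standard registration; treating the last 10 entries as the lidar readings is an assumption based on the description above):

```python
import gymnasium as gym

env = gym.make("BipedalWalker-v3")
obs, info = env.reset(seed=0)
print(env.observation_space.shape)  # (24,)
lidar = obs[-10:]                   # assumed: the 10 lidar readings come last
```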

## Rewards

Reward is given for moving forward, totaling 300+ points up to the far end. If the robot falls, it gets -100. Applying motor torque costs a small number of points, so a more efficient agent gets a better score.

## Starting State

The walker starts standing at the left end of the terrain with the hull horizontal and both legs in the same position with a slight knee angle.

## Episode Termination

The episode terminates if the hull makes contact with the ground or if the walker moves beyond the right end of the terrain.
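Putting the pieces above together, a minimal random-rollout loop might look like this (ordinary Gymnasium usage, not an API specific to this class):

```python
import gymnasium as gym

env = gym.make("BipedalWalker-v3")
obs, info = env.reset(seed=0)
total_reward = 0.0
terminated = truncated = False
while not (terminated or truncated):
    action = env.action_space.sample()  # replace with a trained policy
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward              # falling yields -100 and terminates
env.close()
print(total_reward)
```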

## Arguments

To use the _hardcore_ environment, pass `hardcore=True`:

```python
>>> import gymnasium as gym
>>> env = gym.make("BipedalWalker-v3", hardcore=True, render_mode="rgb_array")
>>> env
<TimeLimit<OrderEnforcing<PassiveEnvChecker<BipedalWalker<BipedalWalker-v3>>>>>
```

## Version History

- v3: Returns the closest lidar trace instead of the furthest; faster video recording.
- v2: Count energy spent.
- v1: Legs now report contact with the ground; motors have higher torque and speed; ground has higher friction; lidar rendered less nervously.
- v0: Initial version.


## Credits

Created by Oleg Klimov.

Public Data Attributes:

metadata

terrain

hull

screen

render_mode

spec

action_space

observation_space

Inherited from Env

metadata

render_mode

spec

unwrapped

Returns the base non-wrapped environment.

np_random_seed

Returns the environment's internal _np_random_seed; if not set, it is first initialised with a random int as seed.

np_random

Returns the environment's internal _np_random; if not set, it is initialised with a random seed.

action_space

observation_space

Public Methods:

__init__([render_mode, hardcore])

reset(*[, seed, options])

Resets the environment to an initial internal state, returning an initial observation and info.

step(action)

Run one timestep of the environment's dynamics using the agent actions.

render()

Compute the render frames as specified by render_mode during the initialization of the environment.

close()

After the user has finished using the environment, close contains the code necessary to "clean up" the environment.

Inherited from Env

step(action)

Run one timestep of the environment's dynamics using the agent actions.

reset(*[, seed, options])

Resets the environment to an initial internal state, returning an initial observation and info.

render()

Compute the render frames as specified by render_mode during the initialization of the environment.

close()

After the user has finished using the environment, close contains the code necessary to "clean up" the environment.

__str__()

Returns a string of the environment with the spec id if spec is instantiated.

__enter__()

Support with-statement for the environment.

__exit__(*args)

Support with-statement for the environment and closes the environment.

has_wrapper_attr(name)

Checks if the attribute name exists in the environment.

get_wrapper_attr(name)

Gets the attribute name from the environment.

set_wrapper_attr(name, value)

Sets the attribute name on the environment with value.

Inherited from Generic

__class_getitem__

Parameterizes a generic class.

__init_subclass__

Function to initialize subclasses.

Inherited from EzPickle

__init__(*args, **kwargs)

Uses the args and kwargs from the object's constructor for pickling.

__getstate__()

Returns the object pickle state with args and kwargs.

__setstate__(d)

Sets the object pickle state using d.

Private Data Attributes:

_np_random

_np_random_seed

Inherited from Env

_np_random

_np_random_seed

Private Methods:

_destroy()

_generate_terrain(hardcore)

_generate_clouds()


__annotations__ = {}
classmethod __class_getitem__()

Parameterizes a generic class.

At least, parameterizing a generic class is the main thing this method does. For example, for some generic class Foo, this is called when we do Foo[int] - there, with cls=Foo and params=int.

However, note that this method is also called when defining generic classes in the first place with class Foo[T]: ….

__enter__()

Support with-statement for the environment.

__exit__(*args)

Support with-statement for the environment and closes the environment.

__getstate__()

Returns the object pickle state with args and kwargs.

__init__(render_mode=None, hardcore=False)[source]
classmethod __init_subclass__()

Function to initialize subclasses.

__module__ = 'rldurham.bipedal_walker'
__orig_bases__ = (typing.Generic[~ObsType, ~ActType],)
__parameters__ = ()
__setstate__(d)

Sets the object pickle state using d.

__str__()

Returns a string of the environment with the spec id if spec is instantiated.

Returns:

A string identifying the environment

__weakref__

list of weak references to the object

_destroy()[source]
_generate_clouds()[source]
_generate_terrain(hardcore)[source]
_np_random: np.random.Generator | None = None
_np_random_seed: int | None = None
action_space: spaces.Space[ActType]
close()[source]

After the user has finished using the environment, close contains the code necessary to “clean up” the environment.

This is critical for closing rendering windows, database or HTTP connections. Calling close on an already closed environment has no effect and won’t raise an error.
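Because the class supports the with-statement (see __enter__ and __exit__ above), cleanup can also happen automatically; a small sketch:

```python
import gymnasium as gym

# close() is called automatically when the block exits
with gym.make("BipedalWalker-v3") as env:
    obs, info = env.reset(seed=0)
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
```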

get_wrapper_attr(name)

Gets the attribute name from the environment.

Return type:

Any

has_wrapper_attr(name)

Checks if the attribute name exists in the environment.

Return type:

bool

metadata: dict[str, Any] = {'render_fps': 50, 'render_modes': ['human', 'rgb_array']}
property np_random: Generator

Returns the environment's internal _np_random; if not set, it is initialised with a random seed.

Returns:

Instances of np.random.Generator

property np_random_seed: int

Returns the environment's internal _np_random_seed; if not set, it is first initialised with a random int as seed.

If np_random_seed was set directly instead of through reset() or set_np_random_through_seed(), the seed will take the value -1.

Returns:

int: the seed of the current np_random, or -1 if the seed of the rng is unknown
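A short sketch of the usual seeding pattern (assuming a recent Gymnasium where np_random_seed is exposed on wrapped environments), showing how np_random and np_random_seed relate to reset():

```python
import gymnasium as gym

env = gym.make("BipedalWalker-v3")
obs, info = env.reset(seed=42)    # seeds the PRNG once
print(env.np_random_seed)         # 42
x = env.np_random.uniform(-1, 1)  # draws from the environment's Generator
obs, info = env.reset()           # seed=None: the existing PRNG is kept
```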

observation_space: spaces.Space[ObsType]
render()[source]

Compute the render frames as specified by render_mode during the initialization of the environment.

The environment's metadata render modes (env.metadata["render_modes"]) should contain the possible ways to implement the render modes. In addition, list versions of most render modes are available through gymnasium.make, which automatically applies a wrapper to collect rendered frames.

Note:

As the render_mode is known during __init__, the objects used to render the environment state should be initialised in __init__.

By convention, if the render_mode is:

  • None (default): no render is computed.

  • “human”: The environment is continuously rendered in the current display or terminal, usually for human consumption. This rendering should occur during step() and render() doesn’t need to be called. Returns None.

  • “rgb_array”: Return a single frame representing the current state of the environment. A frame is a np.ndarray with shape (x, y, 3) representing RGB values for an x-by-y pixel image.

  • “ansi”: Return a string (str) or StringIO.StringIO containing a terminal-style text representation for each time step. The text can include newlines and ANSI escape sequences (e.g. for colors).

  • “rgb_array_list” and “ansi_list”: List-based versions of the render modes are possible (except “human”) through the wrapper gymnasium.wrappers.RenderCollection, which is automatically applied during gymnasium.make(..., render_mode="rgb_array_list"). The collected frames are popped after render() or reset() is called.

Note:

Make sure that your class’s metadata "render_modes" key includes the list of supported modes.

Changed in version 0.25.0: The render function no longer accepts parameters; these should instead be specified when the environment is initialised, e.g., gymnasium.make("CartPole-v1", render_mode="human").
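For example, with render_mode="rgb_array" each render() call returns one frame; a sketch (the exact frame size depends on the environment):

```python
import gymnasium as gym
import numpy as np

env = gym.make("BipedalWalker-v3", render_mode="rgb_array")
env.reset(seed=0)
frame = env.render()  # np.ndarray with shape (H, W, 3) of RGB values
assert isinstance(frame, np.ndarray) and frame.ndim == 3
env.close()
```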

render_mode: str | None = None
reset(*, seed=None, options=None)[source]

Resets the environment to an initial internal state, returning an initial observation and info.

This method generates a new starting state, often with some randomness, to ensure that the agent explores the state space and learns a generalised policy about the environment. This randomness can be controlled with the seed parameter; otherwise, if the environment already has a random number generator and reset() is called with seed=None, the RNG is not reset.

Therefore, reset() should (in the typical use case) be called with a seed right after initialization and then never again.

For custom environments, the first line of reset() should be super().reset(seed=seed), which implements the seeding correctly (see the sketch after the argument list below).

Changed in version v0.25: The return_info parameter was removed and now info is expected to be returned.

Args:

seed (optional int): The seed that is used to initialize the environment's PRNG (np_random) and the read-only attribute np_random_seed. If the environment does not already have a PRNG and seed=None (the default option) is passed, a seed will be chosen from some source of entropy (e.g. timestamp or /dev/urandom). However, if the environment already has a PRNG and seed=None is passed, the PRNG will not be reset and the env's np_random_seed will not be altered. If you pass an integer, the PRNG will be reset even if it already exists. Usually, you want to pass an integer right after the environment has been initialized and then never again. Please refer to the minimal example above to see this paradigm in action.

options (optional dict): Additional information to specify how the environment is reset (optional, depending on the specific environment).

Returns:

observation (ObsType): Observation of the initial state. This will be an element of observation_space (typically a numpy array) and is analogous to the observation returned by step().

info (dictionary): This dictionary contains auxiliary information complementing observation. It should be analogous to the info returned by step().
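A minimal hypothetical subclass sketch of the super().reset(seed=seed) pattern described above (MyEnv and its spaces are illustrative, not part of this module):

```python
import gymnasium as gym
import numpy as np
from gymnasium import spaces

class MyEnv(gym.Env):  # hypothetical minimal environment
    def __init__(self):
        self.observation_space = spaces.Box(-1.0, 1.0, (3,), dtype=np.float32)
        self.action_space = spaces.Discrete(2)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)  # seeds self.np_random as described above
        obs = self.np_random.uniform(-1, 1, size=3).astype(np.float32)
        return obs, {}
```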

set_wrapper_attr(name, value)

Sets the attribute name on the environment with value.

spec: EnvSpec | None = None
step(action)[source]

Run one timestep of the environment’s dynamics using the agent actions.

When the end of an episode is reached (terminated or truncated), it is necessary to call reset() to reset this environment’s state for the next episode.

Changed in version 0.26: The Step API was changed, removing done in favor of terminated and truncated to make it clearer to users when the environment had terminated or truncated, which is critical for reinforcement learning bootstrapping algorithms (see the sketch after the return values below).

Args:

action (ActType): an action provided by the agent to update the environment state.

Returns:

observation (ObsType): An element of the environment's observation_space as the next observation due to the agent actions. An example is a numpy array containing the positions and velocities of the pole in CartPole.

reward (SupportsFloat): The reward as a result of taking the action.

terminated (bool): Whether the agent reaches the terminal state (as defined under the MDP of the task), which can be positive or negative. An example is reaching the goal state or moving into the lava from the Sutton and Barto Gridworld. If true, the user needs to call reset().

truncated (bool): Whether the truncation condition outside the scope of the MDP is satisfied. Typically, this is a timelimit, but it could also be used to indicate an agent physically going out of bounds. Can be used to end the episode prematurely before a terminal state is reached. If true, the user needs to call reset().

info (dict): Contains auxiliary diagnostic information (helpful for debugging, learning, and logging). This might, for instance, contain: metrics that describe the agent's performance state, variables that are hidden from observations, or individual reward terms that are combined to produce the total reward. In OpenAI Gym <v26, it contains "TimeLimit.truncated" to distinguish truncation and termination; however, this is deprecated in favour of returning terminated and truncated variables.

done (bool): (Deprecated) A boolean value for whether the episode has ended, in which case further step() calls will return undefined results. This was removed in OpenAI Gym v26 in favor of terminated and truncated attributes. A done signal may be emitted for different reasons: maybe the task underlying the environment was solved successfully, a certain timelimit was exceeded, or the physics simulation has entered an invalid state.
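The terminated/truncated distinction matters for bootstrapping: only a true terminal state should zero out the value target, while a time-limit truncation should not. A hedged sketch of a one-step TD target (value_fn is a placeholder for your value estimator):

```python
def td_target(reward, next_obs, terminated, value_fn, gamma=0.99):
    # Bootstrap from next_obs unless the MDP actually terminated;
    # truncation alone (e.g. a time limit) keeps the tail value.
    return reward + gamma * (0.0 if terminated else value_fn(next_obs))
```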

property unwrapped: Env[ObsType, ActType]

Returns the base non-wrapped environment.

Returns:

Env: The base non-wrapped gymnasium.Env instance