class Env(Generic[ObsType, ActType]):
    r"""The main Gymnasium class for implementing Reinforcement Learning environments.

    The class encapsulates an environment with arbitrary behind-the-scenes dynamics through the
    :meth:`step` and :meth:`reset` functions. An environment can be partially or fully observed by
    a single agent. For multi-agent environments, see PettingZoo.

    The main API methods that users of this class need to know are:

    - :meth:`step` - Updates the environment with an action, returning the next agent observation,
      the reward for taking that action, whether the environment has terminated or truncated due to
      the latest action, and information from the environment about the step, i.e. metrics, debug info.
    - :meth:`reset` - Resets the environment to an initial state, required before calling :meth:`step`.
      Returns the first agent observation for an episode and information, i.e. metrics, debug info.
    - :meth:`render` - Renders the environment to help visualise what the agent sees; example modes
      are "human", "rgb_array", and "ansi" for text.
    - :meth:`close` - Closes the environment, important when external software is used, e.g. pygame
      for rendering, or databases.

    Environments have additional attributes for users to understand the implementation:

    - :attr:`action_space` - The Space object corresponding to valid actions; all valid actions
      should be contained within the space.
    - :attr:`observation_space` - The Space object corresponding to valid observations; all valid
      observations should be contained within the space.
    - :attr:`reward_range` - A tuple corresponding to the minimum and maximum possible rewards for
      an agent over an episode. The default reward range is set to :math:`(-\infty, +\infty)`.
    - :attr:`spec` - An environment spec that contains the information used to initialize the
      environment from :meth:`gymnasium.make`.
    - :attr:`metadata` - The metadata of the environment, e.g. render modes, render fps.
    - :attr:`np_random` - The random number generator for the environment. This is automatically
      assigned during ``super().reset(seed=seed)`` and when accessing ``self.np_random``.

    .. seealso:: For modifying or extending environments, use the :py:class:`gymnasium.Wrapper` class.
    """
env = StockEnv()
env.reset()
for i in range(1000):
    env.render()
    observation, reward, terminated, truncated, info = env.step(env.action_space.sample())
    print(observation)
    print(reward)
    if terminated or truncated:
        break
env.close()
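To make the API methods above concrete, here is a minimal sketch of an environment class that follows the same ``reset``/``step``/``render``/``close`` protocol. ``GridWalkEnv``, its goal/position attributes, and the 1-D grid task are all hypothetical names invented for illustration; the sketch deliberately avoids importing ``gymnasium`` so it stays self-contained, and so it omits real ``Space`` objects such as ``action_space``.

```python
import random


class GridWalkEnv:
    """Hypothetical 1-D grid task mirroring the Gymnasium Env API.

    The agent starts at position 0 and tries to reach position ``goal``.
    Actions: 0 = move left, 1 = move right.
    """

    def __init__(self, goal=5, max_steps=20):
        self.goal = goal
        self.max_steps = max_steps
        self.pos = 0
        self.steps = 0
        self.np_random = random.Random()

    def reset(self, seed=None):
        # Re-seed the per-environment RNG, mirroring super().reset(seed=seed)
        if seed is not None:
            self.np_random = random.Random(seed)
        self.pos = 0
        self.steps = 0
        # Return the first agent observation and an info dict
        return self.pos, {}

    def step(self, action):
        self.pos += 1 if action == 1 else -1
        self.steps += 1
        terminated = self.pos == self.goal        # reached the goal state
        truncated = self.steps >= self.max_steps  # hit the episode length limit
        reward = 1.0 if terminated else 0.0
        # observation, reward, terminated, truncated, info
        return self.pos, reward, terminated, truncated, {}

    def render(self):
        # "ansi"-style text rendering
        return f"pos={self.pos}"

    def close(self):
        # Nothing external to release in this sketch
        pass


env = GridWalkEnv()
observation, info = env.reset(seed=42)
for _ in range(1000):
    observation, reward, terminated, truncated, info = env.step(1)  # always move right
    if terminated or truncated:
        break
env.close()
print(observation, reward, terminated)  # → 5 1.0 True
```

The loop at the bottom follows the same shape as the ``StockEnv`` example: reset first, step until ``terminated`` or ``truncated`` is true, then close.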