sinergym.utils.evaluation.evaluate_policy
- sinergym.utils.evaluation.evaluate_policy(model: stable_baselines3.common.type_aliases.PolicyPredictor, env: Env | stable_baselines3.common.vec_env.VecEnv, n_eval_episodes: int = 10, deterministic: bool = True, render: bool = False, callback: Callable[[Dict[str, Any], Dict[str, Any]], None] | None = None) → Dict[str, list]
Runs the policy for n_eval_episodes episodes and returns the average reward and other Sinergym metrics. A usage sketch is given after the Returns section below.

Note

If the environment has not been wrapped with the Monitor wrapper, rewards and episode lengths are counted as they appear in env.step calls. If the environment contains wrappers that modify rewards or episode lengths (e.g. reward scaling, early episode reset), these will affect the evaluation results as well. You can avoid this by wrapping the environment with the Monitor wrapper before anything else.

- Parameters:
model – The RL agent you want to evaluate. This can be any object that implements a predict method, such as an RL algorithm (BaseAlgorithm) or a policy (BasePolicy).
env – The gym environment or VecEnv environment.
n_eval_episodes – Number of episodes over which to evaluate the agent
deterministic – Whether to use deterministic or stochastic actions
render – Whether to render the environment or not
callback – Callback function for additional checks, called after each step with locals() and globals() passed as parameters; see the sketch below this parameter list
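A minimal sketch of a compatible callback, matching the Callable[[Dict[str, Any], Dict[str, Any]], None] signature above. The 'reward' key looked up in the passed locals() is an assumption used only for illustration; the variables actually exposed depend on Sinergym's evaluation loop.

    from typing import Any, Dict

    def print_reward_callback(locals_: Dict[str, Any], globals_: Dict[str, Any]) -> None:
        # Called after each evaluation step with the evaluation loop's
        # locals() and globals(). The 'reward' local is assumed here
        # purely for illustration.
        reward = locals_.get('reward')
        if reward is not None:
            print(f'Step reward: {reward}')

It can then be passed as evaluate_policy(model, env, callback=print_reward_callback).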
- Returns:
A dictionary of evaluation results, mapping each metric name to a list with one value per evaluated episode (episode rewards, episode lengths, and the other Sinergym metrics collected during evaluation).
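A minimal usage sketch, assuming stable-baselines3 and a working Sinergym/EnergyPlus installation; the environment ID 'Eplus-5zone-hot-continuous-v1' and the untrained PPO agent are assumptions for illustration, and any registered Sinergym environment or trained model can be substituted. Following the note above, the environment is wrapped with Monitor before evaluation.

    import gymnasium as gym
    import sinergym  # registers Sinergym environment IDs on import
    from stable_baselines3 import PPO
    from stable_baselines3.common.monitor import Monitor

    from sinergym.utils.evaluation import evaluate_policy

    # Environment ID is an assumption; use any Sinergym environment
    # registered in your installation.
    env = Monitor(gym.make('Eplus-5zone-hot-continuous-v1'))

    # Untrained agent, used only to keep the sketch self-contained;
    # in practice you would train it or load a saved model first.
    model = PPO('MlpPolicy', env)

    results = evaluate_policy(
        model,
        env,
        n_eval_episodes=2,
        deterministic=True,
    )

    # The result is a dictionary of lists, with one value per evaluated episode.
    for metric, values in results.items():
        print(metric, values)

    env.close()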