Optimizers
The default optimizer is OptimizerWrapper, which offers additional update-modifier options. So instead of using TFOptimizer directly, a customized Adam optimizer can be specified via:

Agent.create(
    ...
    optimizer=dict(
        optimizer='adam', learning_rate=1e-3, clipping_threshold=1e-2,
        multi_step=10, linesearch_iterations=5, subsampling_fraction=64
    ),
    ...
)
class tensorforce.core.optimizers.OptimizerWrapper(optimizer, *, learning_rate=0.001, clipping_threshold=None, multi_step=1, subsampling_fraction=1.0, linesearch_iterations=0, name=None, arguments_spec=None, optimizing_iterations=None, **kwargs)

Optimizer wrapper (specification key: optimizer_wrapper).

Parameters:
- optimizer (specification) – Optimizer (required).
- learning_rate (parameter, float >= 0.0) – Learning rate (default: 1e-3).
- clipping_threshold (parameter, float > 0.0) – Clipping threshold (default: no clipping).
- multi_step (parameter, int >= 1) – Number of optimization steps (default: single step).
- subsampling_fraction (parameter, int > 0 | 0.0 < float <= 1.0) – Absolute/relative fraction of batch timesteps to subsample (default: no subsampling).
- linesearch_iterations (parameter, int >= 0) – Maximum number of line search iterations, using a backtracking factor of 0.75 (default: no line search).
- name (string) – (internal use).
- arguments_spec (specification) – (internal use).
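The wrapper options correspond to the individual update modifiers documented below. As a sketch of the assumed correspondence (plain specification dicts, not taken verbatim from the library), a wrapped Adam configuration unrolls into explicitly nested modifiers:

```python
# Shorthand form, as accepted by OptimizerWrapper:
wrapped = dict(
    optimizer='adam', learning_rate=1e-3, clipping_threshold=1e-2,
    multi_step=10, subsampling_fraction=0.5,
)

# Assumed explicit equivalent: update modifiers nested around the core optimizer.
explicit = dict(
    type='multi_step', num_steps=10,
    optimizer=dict(
        type='subsampling_step', fraction=0.5,
        optimizer=dict(
            type='clipping_step', threshold=1e-2,
            optimizer=dict(type='adam', learning_rate=1e-3),
        ),
    ),
)
```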
class tensorforce.core.optimizers.TFOptimizer(*, optimizer, learning_rate=0.001, gradient_norm_clipping=1.0, name=None, arguments_spec=None, **kwargs)

TensorFlow optimizer (specification keys: tf_optimizer, adadelta, adagrad, adam, adamax, adamw, ftrl, lazyadam, nadam, radam, ranger, rmsprop, sgd, sgdw).

Parameters:
- optimizer (adadelta | adagrad | adam | adamax | adamw | ftrl | lazyadam | nadam | radam | ranger | rmsprop | sgd | sgdw) – TensorFlow optimizer name, see TensorFlow docs and TensorFlow Addons docs (required unless given by specification key).
- learning_rate (parameter, float >= 0.0) – Learning rate (default: 1e-3).
- gradient_norm_clipping (parameter, float >= 0.0) – Clip gradients by the ratio of the sum of their norms (default: 1.0).
- name (string) – (internal use).
- arguments_spec (specification) – (internal use).
- kwargs – Additional arguments for the TensorFlow optimizer; special values "decoupled_weight_decay", "lookahead" and "moving_average", see TensorFlow docs and TensorFlow Addons docs.
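A minimal sketch of a TFOptimizer specification using the adam key directly. The beta_1/beta_2 kwargs are an assumption here: they are simply forwarded to the underlying TensorFlow optimizer (tf.keras.optimizers.Adam in this case), so any of its constructor arguments should work the same way.

```python
# Hypothetical spec dict for Agent.create(..., optimizer=optimizer_spec, ...):
optimizer_spec = dict(
    type='adam',                 # one of the specification keys listed above
    learning_rate=3e-4,
    gradient_norm_clipping=0.5,  # clip gradients before applying the update
    beta_1=0.9, beta_2=0.999,    # forwarded to the TensorFlow optimizer
)
```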
class tensorforce.core.optimizers.NaturalGradient(*, learning_rate=0.01, cg_max_iterations=10, cg_damping=0.1, only_positive_updates=True, return_improvement_estimate=False, name=None, arguments_spec=None)

Natural gradient optimizer (specification key: natural_gradient).

Parameters:
- learning_rate (parameter, float >= 0.0) – Learning rate as KL-divergence of distributions between optimization steps (default: 0.01).
- cg_max_iterations (int >= 0) – Maximum number of conjugate gradient iterations (default: 10).
- cg_damping (0.0 <= float <= 1.0) – Conjugate gradient damping factor (default: 0.1).
- only_positive_updates (bool) – Only perform updates with positive improvement estimate (default: true, false if using line-search option in OptimizerWrapper).
- return_improvement_estimate (bool) – Return improvement estimate (default: false, true if using line-search option in OptimizerWrapper).
- name (string) – (internal use).
- arguments_spec (specification) – (internal use).
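A sketch of a natural-gradient specification dict. Note the different meaning of learning_rate here: it is a KL-divergence budget between successive optimization steps, not a plain step size.

```python
# Hypothetical spec, e.g. for a TRPO-style policy update:
natural_gradient = dict(
    type='natural_gradient',
    learning_rate=5e-3,    # per-update KL-divergence budget, not a step size
    cg_max_iterations=10,  # conjugate-gradient iterations for the Fisher solve
    cg_damping=0.1,
)
```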
class tensorforce.core.optimizers.Evolutionary(*, learning_rate, num_samples=1, name=None, arguments_spec=None)

Evolutionary optimizer, which samples random perturbations and applies each as either a positive or a negative update, depending on whether it improves the loss (specification key: evolutionary).

Parameters:
- learning_rate (parameter, float >= 0.0) – Learning rate (required).
- num_samples (parameter, int >= 0) – Number of sampled perturbations (default: 1).
- name (string) – (internal use).
- arguments_spec (specification) – (internal use).
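To illustrate the principle only (this is not the library's implementation): a sampled perturbation is kept if it lowers the loss, otherwise its sign is flipped.

```python
import random

def evolutionary_step(theta, loss, learning_rate=0.1):
    # Sample a random Gaussian perturbation, scaled by the learning rate.
    perturbation = [learning_rate * random.gauss(0.0, 1.0) for _ in theta]
    candidate = [t + p for t, p in zip(theta, perturbation)]
    if loss(candidate) < loss(theta):
        return candidate  # improvement: keep the perturbation direction
    # Otherwise apply the perturbation with a negative sign.
    return [t - p for t, p in zip(theta, perturbation)]

# Example with a simple quadratic loss:
loss = lambda xs: sum(x * x for x in xs)
theta = evolutionary_step([1.0, -2.0], loss)
```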
class tensorforce.core.optimizers.ClippingStep(*, optimizer, threshold, mode='global_norm', name=None, arguments_spec=None)

Clipping-step update modifier, which clips the updates of the given optimizer (specification key: clipping_step).

Parameters:
- optimizer (specification) – Optimizer configuration (required).
- threshold (parameter, float >= 0.0) – Clipping threshold (required).
- mode ('global_norm' | 'norm' | 'value') – Clipping mode (default: ‘global_norm’).
- name (string) – (internal use).
- arguments_spec (specification) – (internal use).
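A sketch of the assumed 'global_norm' clipping semantics, mirroring tf.clip_by_global_norm: the whole update is rescaled so its overall L2 norm does not exceed the threshold.

```python
import math

def clip_by_global_norm(deltas, threshold):
    # L2 norm over the concatenation of all update values.
    global_norm = math.sqrt(sum(d * d for d in deltas))
    if global_norm <= threshold:
        return deltas  # already within the threshold
    scale = threshold / global_norm
    return [d * scale for d in deltas]

# An update of norm 5.0 is rescaled to norm 1.0:
clipped = clip_by_global_norm([3.0, 4.0], threshold=1.0)
```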
class tensorforce.core.optimizers.MultiStep(*, optimizer, num_steps, name=None, arguments_spec=None)

Multi-step update modifier, which applies the given optimizer a number of times (specification key: multi_step).

Parameters:
- optimizer (specification) – Optimizer configuration (required).
- num_steps (parameter, int >= 0) – Number of optimization steps (required).
- name (string) – (internal use).
- arguments_spec (specification) – (internal use).
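Conceptually, multi_step just applies its inner optimizer repeatedly, so one outer update consists of num_steps inner steps. A minimal sketch:

```python
def multi_step(theta, inner_step, num_steps):
    # Apply the inner optimization step num_steps times in sequence.
    for _ in range(num_steps):
        theta = inner_step(theta)
    return theta

# Example: three gradient steps on f(x) = x^2 (gradient 2x, step size 0.1),
# so each step multiplies x by 0.8.
result = multi_step(1.0, lambda x: x - 0.1 * 2 * x, num_steps=3)
```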
class tensorforce.core.optimizers.LinesearchStep(*, optimizer, max_iterations=10, backtracking_factor=0.75, accept_ratio=0.9, name=None, arguments_spec=None)

Line-search-step update modifier, which applies line search to the given optimizer to find a more optimal step size (specification key: linesearch_step).

Parameters:
- optimizer (specification) – Optimizer configuration (required).
- max_iterations (parameter, int >= 0) – Maximum number of line search iterations (default: 10).
- backtracking_factor (parameter, 0.0 < float < 1.0) – Line search backtracking factor (default: 0.75).
- accept_ratio (parameter, 0.0 <= float <= 1.0) – Line search acceptance ratio, not applicable in most situations (default: 0.9).
- name (string) – (internal use).
- arguments_spec (specification) – (internal use).
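A sketch of the backtracking idea (assumed semantics, not the library's code): the proposed update is repeatedly shrunk by the backtracking factor until the observed improvement reaches accept_ratio times the (linearly scaled) improvement estimate, or the iteration budget runs out.

```python
def linesearch(theta, delta, loss, estimate, max_iterations=10,
               backtracking_factor=0.75, accept_ratio=0.9):
    base = loss(theta)
    scale = 1.0
    for _ in range(max_iterations):
        improvement = base - loss(theta + scale * delta)
        # Accept once the realized improvement matches the scaled estimate.
        if improvement >= accept_ratio * estimate * scale:
            break
        scale *= backtracking_factor  # otherwise shrink the step
    return theta + scale * delta

# Example: quadratic loss; the full step overshoots the acceptance test,
# so the update is scaled back before being applied.
loss = lambda x: x * x
result = linesearch(theta=1.0, delta=-0.5, loss=loss, estimate=1.0)
```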
class tensorforce.core.optimizers.SubsamplingStep(*, optimizer, fraction, name=None, arguments_spec=None)

Subsampling-step update modifier, which randomly samples a subset of batch instances before applying the given optimizer (specification key: subsampling_step).

Parameters:
- optimizer (specification) – Optimizer configuration (required).
- fraction (parameter, int > 0 | 0.0 < float <= 1.0) – Absolute/relative fraction of batch timesteps to subsample (required).
- name (string) – (internal use).
- arguments_spec (specification) – (internal use).
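A sketch of the assumed fraction semantics: an int is an absolute number of timesteps, a float in (0.0, 1.0] a relative fraction of the batch.

```python
import random

def subsample_indices(batch_size, fraction):
    if isinstance(fraction, int):
        num = min(fraction, batch_size)             # absolute count
    else:
        num = max(1, round(fraction * batch_size))  # relative fraction
    # Sample that many distinct timestep indices from the batch.
    return random.sample(range(batch_size), num)

# A relative fraction of 0.25 on a batch of 256 selects 64 timesteps:
indices = subsample_indices(batch_size=256, fraction=0.25)
```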
class tensorforce.core.optimizers.Synchronization(*, sync_frequency=1, update_weight=1.0, name=None, arguments_spec=None)

Synchronization optimizer, which periodically updates variables to the value of a corresponding set of source variables (specification key: synchronization).

Parameters:
- sync_frequency (parameter, int >= 1) – Interval between updates which also perform a synchronization step (default: every update).
- update_weight (parameter, 0.0 <= float <= 1.0) – Update weight (default: 1.0).
- name (string) – (internal use).
- arguments_spec (specification) – (internal use).
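A sketch of the synchronization rule, as typically used for target networks: every sync_frequency updates, each variable moves toward its source variable by update_weight, where 1.0 reproduces a hard copy (assumed semantics).

```python
def synchronize(variables, sources, update_weight=1.0):
    # Convex combination of current and source values per variable.
    return [(1.0 - update_weight) * v + update_weight * s
            for v, s in zip(variables, sources)]

hard = synchronize([0.0, 1.0], [2.0, 3.0], update_weight=1.0)  # hard copy
soft = synchronize([0.0, 1.0], [2.0, 3.0], update_weight=0.5)  # soft update
```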
class tensorforce.core.optimizers.Plus(*, optimizer1, optimizer2, name=None, arguments_spec=None)

Additive combination of two optimizers (specification key: plus).

Parameters:
- optimizer1 (specification) – First optimizer configuration (required).
- optimizer2 (specification) – Second optimizer configuration (required).
- name (string) – (internal use).
- arguments_spec (specification) – (internal use).
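A sketch of a plus specification combining two optimizers; conceptually, the resulting update is the sum of the deltas each wrapped optimizer computes (the particular pairing below is a hypothetical example, not a recommendation).

```python
# Hypothetical combined spec for Agent.create(..., optimizer=plus_spec, ...):
plus_spec = dict(
    type='plus',
    optimizer1=dict(type='adam', learning_rate=1e-3),
    optimizer2=dict(type='evolutionary', learning_rate=1e-3),
)
```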