Preprocessing
Example of how to specify state and reward preprocessing:
Agent.create(
    ...
    state_preprocessing=[
        dict(type='image', height=4, width=4, grayscale=True),
        dict(type='exponential_normalization')
    ],
    reward_preprocessing=dict(type='clipping', lower=-1.0, upper=1.0),
    ...
)
class tensorforce.core.layers.Clipping(*, lower=None, upper=None, name=None, input_spec=None)

Clipping layer (specification key: clipping).

Parameters:
- lower (parameter, float) – Lower clipping value (default: no lower bound).
- upper (parameter, float) – Upper clipping value (default: no upper bound).
- name (string) – Layer name (default: internally chosen).
- input_spec (specification) – internal use.
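The operation such a layer performs is elementwise clipping to the given bounds, with either bound optional. A minimal plain-Python sketch (not Tensorforce's implementation):

```python
def clip(x, lower=None, upper=None):
    # Clamp x to [lower, upper]; a missing bound (None) is simply not applied.
    if lower is not None:
        x = max(x, lower)
    if upper is not None:
        x = min(x, upper)
    return x

# Rewards outside [-1, 1] are mapped to the nearest bound:
rewards = [clip(r, lower=-1.0, upper=1.0) for r in (-3.0, 0.5, 2.0)]  # [-1.0, 0.5, 1.0]
```

This is what the reward_preprocessing clipping spec in the example above applies to every reward.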
class tensorforce.core.layers.Image(*, height=None, width=None, grayscale=False, name=None, input_spec=None)

Image preprocessing layer (specification key: image).

Parameters:
- height (int) – Height of resized image (default: no resizing, or relative to width).
- width (int) – Width of resized image (default: no resizing, or relative to height).
- grayscale (bool | iter[float]) – Turn into grayscale image, optionally using given weights (default: false).
- name (string) – Layer name (default: internally chosen).
- input_spec (specification) – internal use.
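When grayscale is given as an iterable of weights, the channels are combined as a weighted sum. A per-pixel sketch, using the common Rec. 601 luma weights as an illustrative (assumed, not Tensorforce's documented default) choice:

```python
def to_grayscale(pixel, weights=(0.299, 0.587, 0.114)):
    # Weighted sum over the channel axis; one weight per input channel.
    # The default triple here is the conventional RGB luma weighting,
    # chosen for illustration only.
    return sum(channel * weight for channel, weight in zip(pixel, weights))

gray = to_grayscale((1.0, 1.0, 1.0))  # weights sum to 1.0, so white stays 1.0
```

Passing grayscale=True instead of weights corresponds to an unweighted (uniform) combination of the channels.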
class tensorforce.core.layers.LinearNormalization(*, min_value=None, max_value=None, name=None, input_spec=None)

Linear normalization layer which scales and shifts the input to [-2.0, 2.0], for bounded states with min/max_value (specification key: linear_normalization).

Parameters:
- min_value (float | array[float]) – Lower bound of the value range (default: based on input_spec).
- max_value (float | array[float]) – Upper bound of the value range (default: based on input_spec).
- name (string) – Layer name (default: internally chosen).
- input_spec (specification) – internal use.
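Mapping a bounded range [min_value, max_value] onto [-2.0, 2.0] is a single affine transform. A sketch of the computation (plain Python, not the library's code):

```python
def linear_normalize(x, min_value, max_value):
    # Affine map: min_value -> -2.0, max_value -> 2.0, midpoint -> 0.0.
    return 4.0 * (x - min_value) / (max_value - min_value) - 2.0

low = linear_normalize(0.0, min_value=0.0, max_value=10.0)   # -2.0
mid = linear_normalize(5.0, min_value=0.0, max_value=10.0)   # 0.0
high = linear_normalize(10.0, min_value=0.0, max_value=10.0)  # 2.0
```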
class tensorforce.core.layers.ExponentialNormalization(*, decay=0.999, axes=None, name=None, input_spec=None)

Normalization layer based on the exponential moving average over the temporal sequence of inputs (specification key: exponential_normalization).

Parameters:
- decay (parameter, 0.0 <= float <= 1.0) – Decay rate (default: 0.999).
- axes (iter[int >= 0]) – Normalization axes, excluding batch axis (default: all but last input axes).
- name (string) – Layer name (default: internally chosen).
- input_spec (specification) – internal use.
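The idea is to keep running estimates of mean and variance, updated with the given decay at every step, and to standardize each input against them. A minimal scalar sketch under these assumptions (the initialization and epsilon are illustrative choices, not Tensorforce's exact behavior):

```python
import math

class RunningNormalizer:
    # Exponential-moving-average normalization sketch: running mean and
    # variance are blended with each new input, then used to standardize it.
    def __init__(self, decay=0.999, epsilon=1e-6):
        self.decay = decay
        self.epsilon = epsilon
        self.mean = 0.0   # assumed initial estimates, for illustration
        self.var = 1.0

    def __call__(self, x):
        self.mean = self.decay * self.mean + (1.0 - self.decay) * x
        self.var = self.decay * self.var + (1.0 - self.decay) * (x - self.mean) ** 2
        return (x - self.mean) / math.sqrt(self.var + self.epsilon)
```

With decay close to 1.0 the estimates move slowly, so the normalization statistics are stable across an episode; decay=0.0 would track each input exactly.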
class tensorforce.core.layers.InstanceNormalization(*, axes=None, name=None, input_spec=None)

Instance normalization layer (specification key: instance_normalization).

Parameters:
- axes (iter[int >= 0]) – Normalization axes, excluding batch axis (default: all input axes).
- name (string) – Layer name (default: internally chosen).
- input_spec (specification) – internal use.
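Unlike exponential normalization, instance normalization keeps no running state: each input is standardized against its own statistics over the given axes. A one-axis sketch:

```python
import math

def instance_normalize(values, epsilon=1e-6):
    # Standardize a single input to zero mean and (near-)unit variance,
    # using only that input's own statistics.
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return [(v - mean) / math.sqrt(var + epsilon) for v in values]
```

The epsilon term is the usual guard against division by zero for constant inputs; its exact value here is an assumption.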
class tensorforce.core.layers.Deltafier(*, concatenate=False, name=None, input_spec=None)

Deltafier layer computing the difference between the current and the previous input; can only be used as preprocessing layer (specification key: deltafier).

Parameters:
- concatenate (False | int >= 0) – Whether to concatenate the delta to the input instead of replacing it, and if so, the concatenation axis (default: false).
- name (string) – Layer name (default: internally chosen).
- input_spec (specification) – internal use.
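The layer is stateful: it remembers the previous input and emits the difference. A scalar sketch, assuming the first delta is zero (the handling of the very first input is an assumption here):

```python
class Deltafier:
    # Emit current input minus previous input; the first call, which has
    # no previous input, yields zero by assumption.
    def __init__(self):
        self.previous = None

    def __call__(self, x):
        delta = 0.0 if self.previous is None else x - self.previous
        self.previous = x
        return delta
```

With concatenate set to an axis, the delta would be appended to the input along that axis rather than replacing it.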
class tensorforce.core.layers.Sequence(*, length, axis=-1, concatenate=True, name=None, input_spec=None)

Sequence layer stacking the current and previous inputs; can only be used as preprocessing layer (specification key: sequence).

Parameters:
- length (int > 0) – Number of inputs to concatenate (required).
- axis (int >= 0) – Concatenation axis, excluding batch axis (default: last axis).
- concatenate (bool) – Whether to concatenate inputs at given axis, otherwise introduce new sequence axis (default: true).
- name (string) – Layer name (default: internally chosen).
- input_spec (specification) – internal use.
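Conceptually this is a sliding window over the input stream. A sketch with a fixed-size buffer, assuming the window is padded by repeating the first input until enough inputs have arrived (the padding strategy is an assumption, not documented behavior):

```python
from collections import deque

class Sequence:
    # Return the last `length` inputs as one stacked output.
    def __init__(self, length):
        self.length = length
        self.buffer = deque(maxlen=length)

    def __call__(self, x):
        if not self.buffer:
            # Assumed padding: fill the window with the first input.
            self.buffer.extend([x] * self.length)
        else:
            self.buffer.append(x)  # maxlen drops the oldest entry
        return list(self.buffer)
```

This is the classic frame-stacking trick (e.g. stacking the last few frames of an Atari screen so the agent can perceive motion).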
class tensorforce.core.layers.Activation(*, nonlinearity, name=None, input_spec=None)

Activation layer (specification key: activation).

Parameters:
- nonlinearity ('crelu' | 'elu' | 'leaky-relu' | 'none' | 'relu' | 'selu' | 'sigmoid' | 'softmax' | 'softplus' | 'softsign' | 'swish' | 'tanh') – Nonlinearity (required).
- name (string) – Layer name (default: internally chosen).
- input_spec (specification) – internal use.
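The specification key selects one function applied elementwise. A dispatch sketch covering a few of the listed nonlinearities (plain Python, not Tensorforce's TensorFlow ops):

```python
import math

# A small subset of the listed nonlinearities, as scalar functions.
NONLINEARITIES = {
    'none': lambda x: x,
    'relu': lambda x: max(0.0, x),
    'sigmoid': lambda x: 1.0 / (1.0 + math.exp(-x)),
    'tanh': math.tanh,
}

def activation(nonlinearity, x):
    # Look up the function by its specification key and apply it.
    return NONLINEARITIES[nonlinearity](x)
```

An unknown key raises a KeyError here; the real layer likewise rejects names outside the listed set.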
class tensorforce.core.layers.Dropout(*, rate, name=None, input_spec=None)

Dropout layer (specification key: dropout).

Parameters:
- rate (parameter, 0.0 <= float < 1.0) – Dropout rate (required).
- name (string) – Layer name (default: internally chosen).
- input_spec (specification) – internal use.
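Dropout only acts during training: each value is zeroed with probability rate and the survivors are rescaled by 1 / (1 - rate) so the expected value is unchanged (the standard "inverted dropout" formulation, assumed here); outside training, inputs pass through untouched. A sketch:

```python
import random

def dropout(values, rate, training=True, rng=random):
    # Inverted dropout: zero each value with probability `rate` and
    # rescale survivors so the expectation matches the input.
    if not training:
        return list(values)
    keep = 1.0 - rate
    return [v / keep if rng.random() < keep else 0.0 for v in values]
```

Because of the rescaling, every surviving value of 1.0 comes out as 1 / (1 - rate), e.g. 2.0 at rate=0.5.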