automl.components.controllers package

class Controller

Bases: abc.ABC

This class defines the abstract behavior required of an AutoML Controller.

The Controller implements the AutoML strategy that decides how training is conducted. It produces recommendations by selecting values from the search space with a search algorithm.

abstract initial_recommendation()

This method is called by the AutoML workflow engine to produce the initial set of recommendations. The controller must produce 1 or more recommendations. If no recommendation is produced, the AutoML workflow will stop immediately.

This method is called only once at the beginning of the AutoML process.

Parameters

ctx – the context that enables across-component data sharing and communication

Returns: a list of recommendations

abstract refine_recommendation()

This method is called by the AutoML workflow engine to produce a set of recommendations based on the result from a previous job.

The controller can produce 0 or more recommendations.

This method is called every time a job finishes executing a previous recommendation.

Parameters
  • outcome – the result of executing the previous recommendation

  • ctx – the context that enables across-component data sharing and communication

Returns: a list of recommendations, could be empty

abstract set_search_space(space: automl.defs.SearchSpace, ctx: automl.defs.Context)

Set the search space. This is the search space that the controller will search against to produce recommendations. The controller must keep it for later use.

Parameters
  • space – the search space

  • ctx – the context that enables across-component data sharing and communication

Returns:

NOTE: the controller should validate the search space and make sure it is acceptable. If the search space is not acceptable, the controller should either raise an exception or ask to stop the workflow by calling ctx.ask_to_stop().

shutdown(ctx: automl.defs.Context)

Called at the end of the AutoML workflow. This provides the opportunity for the controller to clean up if needed.

Parameters

ctx – the context that enables across-component data sharing and communication

Returns:
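To make the abstract contract above concrete, here is a minimal sketch of a controller that samples uniformly at random from float ranges. The SearchSpace and Context classes below are simplified stand-ins, since the real automl.defs definitions are not reproduced in this document; in real code the controller would subclass automl.components.controllers.controller.Controller.

```python
import random

# Stand-ins for automl.defs.SearchSpace / automl.defs.Context (assumed shapes).
class SearchSpace:
    def __init__(self, params):
        self.params = params  # e.g. {"lr": (0.001, 0.1)} -> (low, high) ranges

class Context:
    def __init__(self):
        self.stop_requested = False
    def ask_to_stop(self):
        self.stop_requested = True

class RandomSearchController:
    """Minimal random-search controller following the Controller contract."""

    def __init__(self, max_rounds=10):
        self.max_rounds = max_rounds
        self.rounds = 0
        self.space = None

    def set_search_space(self, space, ctx):
        # Validate before accepting: an empty space cannot be searched,
        # so ask the workflow to stop (raising an exception also works).
        if not space.params:
            ctx.ask_to_stop()
            return
        self.space = space  # keep it for later use

    def _sample(self):
        return {name: random.uniform(lo, hi)
                for name, (lo, hi) in self.space.params.items()}

    def initial_recommendation(self, ctx):
        # Must return at least one recommendation, or the workflow stops.
        self.rounds = 1
        return [self._sample()]

    def refine_recommendation(self, outcome, ctx):
        # Returning an empty list schedules no further jobs from this result.
        if self.rounds >= self.max_rounds:
            return []
        self.rounds += 1
        return [self._sample()]

    def shutdown(self, ctx):
        pass  # nothing to clean up in this sketch
```

Note that refine_recommendation deliberately returns an empty list once the round budget is spent, which is how a controller winds the workflow down.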

class FloatParameterSearchEngine(search_space: automl.defs.SearchSpace, ctx: automl.defs.Context)

Bases: object

Sets up the network for reinforcement learning to provide recommendations.

Parameters
  • search_space – search space definition

  • ctx – AutoMLContext

compute_settings(state: list)

Uses the given state and the engine's network to calculate the next valid action, advances the engine's state to that action, and returns it.

Returns: Next valid action based on engine’s state

initial_recommendation() → list

Calculates the initial configuration to use for the first training run.

Returns: Specific configuration to use for running training.

refine_recommendation(score: float, state: list) → list

Receives the validation score for a completed run, then calculates and returns the specific configuration to use for the next training run.

Parameters
  • score – validation score of the completed run

  • state – current engine state

Returns: Specific configuration to use for running training.
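Together, initial_recommendation and refine_recommendation form a propose-evaluate loop driven by the AutoML workflow engine. The sketch below shows that loop with a stand-in engine and a toy objective; GreedyEngine is not the real FloatParameterSearchEngine, only an illustration of the same two-method interface.

```python
class GreedyEngine:
    """Stand-in engine: proposes a single float and nudges it in whichever
    direction last improved the score (toy hill-climbing)."""
    def __init__(self, start=0.5, step=0.1):
        self.value, self.step, self.best = start, step, float("-inf")

    def initial_recommendation(self):
        return [self.value]

    def refine_recommendation(self, score, state):
        if score < self.best:        # last move hurt the score: reverse
            self.step = -self.step
        self.best = max(self.best, score)
        self.value = state[0] + self.step
        return [self.value]

def objective(config):
    return -(config[0] - 1.0) ** 2   # toy "validation score", peak at 1.0

# Propose-evaluate loop, as the AutoML workflow engine would drive it.
engine = GreedyEngine()
config = engine.initial_recommendation()
for _ in range(20):
    score = objective(config)        # run "training" with the config
    config = engine.refine_recommendation(score, config)
```

After a few iterations the proposed configuration settles near the objective's peak, which is the essential behavior the workflow engine relies on.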

reinforce

class PolicyNetwork(max_num_layers: int, num_params: int)

Bases: torch.nn.modules.module.Module

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(inputs: torch.Tensor) → torch.Tensor

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
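The note above is the standard PyTorch contract: calling the module instance runs registered hooks around forward(), while calling forward() directly skips them. A minimal pure-Python analogy (not the actual torch.nn.Module implementation) makes the difference visible without requiring torch:

```python
class MiniModule:
    """Toy analogue of torch.nn.Module's __call__/forward contract."""
    def __init__(self):
        self._forward_hooks = []

    def register_forward_hook(self, hook):
        self._forward_hooks.append(hook)

    def __call__(self, x):
        out = self.forward(x)
        for hook in self._forward_hooks:  # hooks run only via __call__
            hook(self, x, out)
        return out

    def forward(self, x):
        raise NotImplementedError

class Doubler(MiniModule):
    def forward(self, x):
        return 2 * x

calls = []
m = Doubler()
m.register_forward_hook(lambda mod, inp, out: calls.append(out))

m(3)          # runs forward AND the hook: calls == [6]
m.forward(3)  # silently skips the hook: calls unchanged
```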

init_hidden(batch_size: int = 1) → tuple

reset_parameters() → None

training: bool

class Reinforce(max_layers: int, num_params: int, global_step: int = 0, division_rate: float = 100.0, reg_param: float = 0.001, discount_factor: float = 0.99, exploration: float = 0.3)

Bases: object

get_action(state)

With probability self.exploration, returns a random action (random.sample(range(1, 35), 4 * self.max_layers)); otherwise returns the action predicted by the policy network for the given state.

storeRollout(state: list, reward: float) → None

train_step(steps_count: int) → None
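get_action above is an ε-greedy rule: explore with probability exploration, otherwise exploit the network's prediction. A self-contained sketch of the same policy, with the network prediction replaced by a stand-in predict function (the real version calls the policy network):

```python
import random

def get_action(state, predict, exploration=0.3, rng=random):
    """Epsilon-greedy action selection: with probability `exploration`
    sample a random action, otherwise return the policy's prediction."""
    if rng.random() < exploration:
        # Random action per slot, mirroring the range(1, 35) sampling above.
        return [rng.randint(1, 34) for _ in state]
    return predict(state)

# Deterministic demo: exploration=0.0 always exploits the prediction.
action = get_action([4, 8, 16], predict=lambda s: [v + 1 for v in s],
                    exploration=0.0)   # -> [5, 9, 17]
```

Annealing exploration toward 0 over training shifts the policy from exploring the search space to exploiting what the network has learned.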
class ReinforcementController(max_rounds=1000, num_workers=16)

Bases: automl.components.controllers.controller.Controller

initial_recommendation(ctx)

This method is called by the AutoML workflow engine to produce the initial set of recommendations. The controller must produce 1 or more recommendations. If no recommendation is produced, the AutoML workflow will stop immediately.

This method is called only once at the beginning of the AutoML process.

Parameters

ctx – the context that enables across-component data sharing and communication

Returns: a list of recommendations

refine_recommendation(outcome: automl.defs.Outcome, ctx: automl.defs.Context)

This method is called by the AutoML workflow engine to produce a set of recommendations based on the result from a previous job.

The controller can produce 0 or more recommendations.

This method is called every time a job finishes executing a previous recommendation.

Parameters
  • outcome – the result of executing the previous recommendation

  • ctx – the context that enables across-component data sharing and communication

Returns: a list of recommendations, could be empty

set_search_space(space, ctx)

Set the search space. This is the search space that the controller will search against to produce recommendations. The controller must keep it for later use.

Parameters
  • space – the search space

  • ctx – the context that enables across-component data sharing and communication

Returns:

NOTE: the controller should validate the search space and make sure it is acceptable. If the search space is not acceptable, the controller should either raise an exception or ask to stop the workflow by calling ctx.ask_to_stop().

© Copyright 2021, NVIDIA. Last updated on Feb 2, 2023.