Redesign Adapter subsystem #876

Merged (82 commits) on Oct 5, 2022

Commits
8d18637
WIP adapt registry
gkirgizov Jul 5, 2022
af5694a
WIP adapter
gkirgizov Jul 5, 2022
a4b5587
WIP adapter
gkirgizov Jul 6, 2022
577c506
WIP simplest singleton runtime adapter based on isinstance
gkirgizov Jul 6, 2022
2803fc3
WIP add initialisation of adapt_registry.py
gkirgizov Jul 6, 2022
31be976
WIP adapt registry: add registration mechanics for `restore`
gkirgizov Jul 6, 2022
8e8ee75
WIP adapt registry: add docs
gkirgizov Jul 6, 2022
39cbaa0
WIP drop usages of adapter in GraphVerifier
gkirgizov Jul 6, 2022
e194b25
WIP drop usages of adapter in verification rules
gkirgizov Jul 6, 2022
26b9e56
WIP Rearrange classes related to adaptation
gkirgizov Jul 6, 2022
8db24d5
WIP use new adapter in evaluation.py
gkirgizov Jul 7, 2022
0ca622b
WIP use new adapter in mutation.py
gkirgizov Jul 7, 2022
33b0e94
Fixes after rebase
gkirgizov Aug 21, 2022
c839b37
Modify docs of AdaptRegistry
gkirgizov Aug 21, 2022
5f292a9
WIP adapt_registry.py
gkirgizov Aug 21, 2022
be9c105
Remove unnecessary field `node_class` in Adapters
gkirgizov Aug 22, 2022
531e4c3
Add tests for adapt_registry.py
gkirgizov Aug 22, 2022
d7506a7
WIP adapt_registry.py
gkirgizov Aug 22, 2022
05fff40
Drop `init_adapter` function
gkirgizov Aug 22, 2022
8be2331
Extract parent operator update function in Mutation
gkirgizov Aug 22, 2022
ca8a2cd
minor fixes in Mutation
gkirgizov Aug 22, 2022
38c59a4
Use @register_native decorator for internally defined mutations
gkirgizov Aug 22, 2022
aa11eab
Refactor Mutation.apply_mutation
gkirgizov Aug 22, 2022
3e2ad55
Add adaptation in Mutation with new adapt_registry.py
gkirgizov Aug 22, 2022
e13eadb
Fix graph verifier
gkirgizov Aug 22, 2022
a0db020
Add tests for adapt_registry.py
gkirgizov Aug 23, 2022
6555637
Rewrite logic for is_native & register_native in adapt_registry.py
gkirgizov Aug 23, 2022
9032fa5
Test the failing behavior of adapt registry
gkirgizov Aug 24, 2022
fad4d54
Swap names for adapt & restore in adapt_registry.py for more intuitiv…
gkirgizov Aug 24, 2022
fdfd4d6
Rename one file
gkirgizov Aug 25, 2022
73785c6
Add adapter registration decorator to verification rules
gkirgizov Aug 25, 2022
b94ecf3
Add adapter tests for verification rules
gkirgizov Aug 25, 2022
3615c1b
Rename adapter tests file
gkirgizov Sep 1, 2022
b9d960f
Fix doc comments
gkirgizov Sep 1, 2022
a609f44
WIP refactoring in remote evaluator
gkirgizov Sep 1, 2022
c69933e
Add adapt/restore functions for populations
gkirgizov Sep 1, 2022
4755a3c
WIP remove usages of adapter (1)
gkirgizov Sep 1, 2022
746a1f4
Fixup usage of graph_growth
gkirgizov Sep 1, 2022
1dbb5ed
WIP fix example for using AdaptRegistry
gkirgizov Sep 6, 2022
768ee14
Refactor usages of `restore_as_template`, simplify
gkirgizov Sep 6, 2022
43456d7
Drop one usage of PipelineTemplate
gkirgizov Sep 6, 2022
8c24bf3
Fix init of dispatcher
gkirgizov Sep 6, 2022
e656c35
fixup fix mutation operator
gkirgizov Sep 6, 2022
3835842
change usages of adapter in crossover
gkirgizov Sep 6, 2022
c728f75
Fix tests using ObjectiveEvaluationDispatcher-s
gkirgizov Sep 6, 2022
3b6545e
Fix other usages of adapter for adapt registry
gkirgizov Sep 6, 2022
e4e05af
fix adapter tests initialization
gkirgizov Sep 6, 2022
1f75303
Rework: move duplicating functions from adapt registry to BaseOptimiz…
gkirgizov Sep 7, 2022
b16245f
Rework: move duplicating functions from adapt registry to BaseOptimiz…
gkirgizov Sep 7, 2022
2c4a665
Fix usages of adapter for explicit dependency
gkirgizov Sep 7, 2022
6aeccf8
Fix test with adapter registry
gkirgizov Sep 7, 2022
e061043
Revert usage of adapter in GraphVerifier
gkirgizov Sep 7, 2022
ca11199
Fix adapt tests for explicit adapter
gkirgizov Sep 7, 2022
f3bb2dc
minor: opt imports
gkirgizov Sep 7, 2022
1e1201e
Clean adapter functions
gkirgizov Sep 7, 2022
6a9d218
Generalize a bit Graph equality, drop the __eq__ method from Pipeline
gkirgizov Sep 7, 2022
208aa53
fixup! fixup! Fix usages of adapter for explicit dependency
gkirgizov Sep 7, 2022
2afc0aa
Test duplicated irrelevant test for verification rules
gkirgizov Sep 12, 2022
db01d6c
WIP test fix
gkirgizov Sep 12, 2022
8644de5
WIP tmp
gkirgizov Sep 13, 2022
1d03795
Fix accidental cleanup of AdaptRegistry singleton class
gkirgizov Sep 13, 2022
9a66a83
Drop duplicating adapt/maybe_adapt methods
gkirgizov Sep 13, 2022
28665e0
Drop extra metadata argument from adapter.restore
gkirgizov Sep 13, 2022
c2369c1
Fix test for tuning using logger
gkirgizov Sep 13, 2022
93f7567
Make strict type equality in Adapter for adaptation
gkirgizov Sep 13, 2022
11e17d6
Fix imports in one example
gkirgizov Sep 14, 2022
0b82a2b
Add proper tests for adapter
gkirgizov Sep 14, 2022
090ffa0
fix pep8 issues
gkirgizov Sep 14, 2022
23a0370
Fix without_tuning test for server
gkirgizov Sep 15, 2022
f41512e
Drop unneeded TODO
gkirgizov Sep 15, 2022
68519c5
refactor logging in api composer
gkirgizov Sep 15, 2022
f3afe2a
fixup! refactor logging in api composer
gkirgizov Sep 19, 2022
5643577
Unify restore/restore_ind methods in adapter
gkirgizov Sep 21, 2022
167c68a
Unify restore/restore_population methods in adapter
gkirgizov Sep 21, 2022
eaddfb7
Add `was_optimised` status flag to ApiComposer
gkirgizov Sep 21, 2022
3ad9db1
Fix return types of adapter: return OptGraph instead of Individual wh…
gkirgizov Sep 21, 2022
a508cc2
Fix test
gkirgizov Sep 21, 2022
66e4bdf
fix status flags in api composer
gkirgizov Oct 3, 2022
b4e9f1b
fixes after rebase
gkirgizov Oct 3, 2022
b63e0f1
fix rst docs build
gkirgizov Oct 4, 2022
033e464
remove dupl line in index.rst
gkirgizov Oct 4, 2022
66d1232
Extend docstrings for adapter classes
gkirgizov Oct 5, 2022
9 changes: 5 additions & 4 deletions examples/advanced/fedot_based_solutions/external_optimizer.py
@@ -35,12 +35,13 @@ def __init__(self,
     def optimise(self, objective: ObjectiveFunction):
 
         timer = OptimisationTimer(timeout=self.requirements.timeout)
-        dispatcher = SimpleDispatcher(self.graph_generation_params.adapter, timer)
+        dispatcher = SimpleDispatcher(timer)
         evaluator = dispatcher.dispatch(objective)
 
         num_iter = 0
-        initial_graph = self.graph_generation_params.adapter.adapt(choice(self.initial_graphs))
-        best = Individual(initial_graph)
+
+        initial_individuals = [Individual(graph) for graph in self.initial_graphs]
+        best = choice(initial_individuals)
         evaluator([best])
 
         with timer as t:
@@ -52,7 +53,7 @@ def optimise(self, objective: ObjectiveFunction):
                     best = new
                 num_iter += 1
 
-        return [self.graph_generation_params.adapter.restore(best.graph)]
+        return self.graph_generation_params.adapter.restore(best)
 
 
 def run_with_random_search_composer():
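The hunk above captures the heart of the redesign: the optimisation loop now deals only with Individual objects wrapping internal graphs, and the adapter is applied once, at the exit boundary, to the winning individual. A minimal sketch of that contract, assuming a pass-through DirectAdapter and FEDOT import paths of this era (both may differ between versions):

from random import choice

from fedot.core.adapter import DirectAdapter
from fedot.core.optimisers.gp_comp.individual import Individual  # import path may vary by version
from fedot.core.optimisers.graph import OptGraph, OptNode

# DirectAdapter is the pass-through case: the domain graph class is OptGraph itself.
adapter = DirectAdapter()
initial_graphs = [OptGraph(OptNode(content={'name': 'logit'})),
                  OptGraph(OptNode(content={'name': 'rf'}))]

# Inside the optimiser, raw graphs are wrapped into Individuals once, up front...
individuals = [Individual(graph) for graph in initial_graphs]
best = choice(individuals)

# ...and the adapter runs a single time at the boundary,
# turning the winning Individual back into a domain graph.
domain_result = adapter.restore(best)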
@@ -5,17 +5,15 @@
 import numpy as np
 import pandas as pd
 
+from fedot.core.adapter import DirectAdapter, register_native
 from fedot.core.dag.verification_rules import has_no_cycle, has_no_self_cycled_nodes
 from fedot.core.log import default_log
-from fedot.core.optimisers.adapters import DirectAdapter
-from fedot.core.optimisers.gp_comp.gp_optimizer import (
-    EvoGraphOptimizer,
-    GeneticSchemeTypesEnum
-)
+from fedot.core.optimisers.gp_comp.gp_optimizer import EvoGraphOptimizer
 from fedot.core.optimisers.gp_comp.gp_params import GPGraphOptimizerParameters
-from fedot.core.optimisers.gp_comp.pipeline_composer_requirements import PipelineComposerRequirements
 from fedot.core.optimisers.gp_comp.operators.crossover import CrossoverTypesEnum
+from fedot.core.optimisers.gp_comp.operators.inheritance import GeneticSchemeTypesEnum
 from fedot.core.optimisers.gp_comp.operators.regularization import RegularizationTypesEnum
+from fedot.core.optimisers.gp_comp.pipeline_composer_requirements import PipelineComposerRequirements
 from fedot.core.optimisers.graph import OptGraph, OptNode
 from fedot.core.optimisers.objective import Objective, ObjectiveEvaluate
 from fedot.core.optimisers.optimizer import GraphGenerationParams
@@ -52,6 +50,7 @@ def _has_no_duplicates(graph):
     return True
 
 
+@register_native
 def custom_mutation(graph: OptGraph, **kwargs):
     num_mut = 10
     try:
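The @register_native decorator shown above is the opt-out switch described in the new adapt_registry.py later in this diff: it marks a callable as operating on the optimiser's native OptGraph, so its arguments are not adapted before the call. A hedged sketch of the two kinds of user-supplied callables (both function bodies are illustrative):

from fedot.core.adapter import register_native
from fedot.core.optimisers.graph import OptGraph


@register_native
def native_mutation(graph: OptGraph, **kwargs) -> OptGraph:
    # Operates on the internal representation, so the optimiser calls it
    # directly, without adapting/restoring its arguments.
    return graph


def domain_verification_rule(graph) -> bool:
    # Left undecorated: external callables are treated as 'domain' by default,
    # so the adapter restores a domain graph before this rule sees it.
    return len(graph.nodes) > 0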
63 changes: 34 additions & 29 deletions fedot/api/api_utils/api_composer.py
@@ -15,7 +15,7 @@
 from fedot.core.composer.gp_composer.specific_operators import boosting_mutation, parameter_change_mutation
 from fedot.core.constants import DEFAULT_TUNING_ITERATIONS_NUMBER
 from fedot.core.data.data import InputData
-from fedot.core.log import LoggerAdapter
+from fedot.core.log import default_log
 from fedot.core.optimisers.adapters import PipelineAdapter
 from fedot.core.optimisers.gp_comp.evaluation import determine_n_jobs
 from fedot.core.optimisers.gp_comp.gp_params import GPGraphOptimizerParameters
@@ -39,11 +39,16 @@
 class ApiComposer:
 
     def __init__(self, problem: str):
+        self.log = default_log(self)
         self.metrics = ApiMetrics(problem)
         self.pipelines_cache: Optional[OperationsCache] = None
         self.preprocessing_cache: Optional[PreprocessingCache] = None
         self.preset_name = None
         self.timer = None
+        # status flag indicating that the composer step was applied
+        self.was_optimised = False
+        # status flag indicating that the tuner step was applied
+        self.was_tuned = False
 
     def obtain_metric(self, task: Task, metric: Union[str, Callable]) -> Sequence[MetricType]:
         """Chooses metric to use for quality assessment of pipeline during composition"""
@@ -169,7 +174,6 @@ def _init_graph_generation_params(task: Task, preset: str, available_operations:
     def compose_fedot_model(self, api_params: dict, composer_params: dict, tuning_params: dict) \
             -> Tuple[Pipeline, Sequence[Pipeline], OptHistory]:
         """ Function for composing FEDOT pipeline model """
-        log: LoggerAdapter = api_params['logger']
         task: Task = api_params['task']
         train_data = api_params['train_data']
         timeout = api_params['timeout']
@@ -189,7 +193,7 @@ def compose_fedot_model(self, api_params: dict, composer_params: dict, tuning_pa
         assumption_handler.fit_assumption_and_check_correctness(initial_assumption[0],
                                                                 pipelines_cache=self.pipelines_cache,
                                                                 preprocessing_cache=self.preprocessing_cache)
-        log.message(
+        self.log.message(
             f'Initial pipeline was fitted in {round(self.timer.assumption_fit_spend_time.total_seconds())} sec.')
 
         n_jobs = determine_n_jobs(api_params['n_jobs'])
@@ -205,28 +209,26 @@ def compose_fedot_model(self, api_params: dict, composer_params: dict, tuning_pa
             preset=preset,
             available_operations=composer_params.get('available_operations'),
             requirements=composer_requirements)
-        log.message(f"AutoML configured."
-                    f" Parameters tuning: {with_tuning}"
-                    f" Time limit: {timeout} min"
-                    f" Set of candidate models: {available_operations}")
+        self.log.message(f"AutoML configured."
+                         f" Parameters tuning: {with_tuning}"
+                         f" Time limit: {timeout} min"
+                         f" Set of candidate models: {available_operations}")
 
         best_pipeline, best_pipeline_candidates, gp_composer = self.compose_pipeline(task, train_data,
                                                                                      fitted_assumption,
                                                                                      metric_functions,
                                                                                      composer_requirements,
                                                                                      composer_params,
-                                                                                     graph_generation_params,
-                                                                                     log)
+                                                                                     graph_generation_params)
         if with_tuning:
-            self.tune_final_pipeline(task, train_data,
-                                     metric_functions[0],
-                                     composer_requirements,
-                                     best_pipeline,
-                                     log)
+            best_pipeline = self.tune_final_pipeline(task, train_data,
+                                                     metric_functions[0],
+                                                     composer_requirements,
+                                                     best_pipeline)
         # enforce memory cleaning
         gc.collect()
 
-        log.message('Model generation finished')
+        self.log.message('Model generation finished')
         return best_pipeline, best_pipeline_candidates, gp_composer.history
@@ -236,7 +238,7 @@ def compose_pipeline(self, task: Task,
                          composer_requirements: PipelineComposerRequirements,
                          composer_params: dict,
                          graph_generation_params: GraphGenerationParams,
-                         log: LoggerAdapter) -> Tuple[Pipeline, List[Pipeline], GPComposer]:
+                         ) -> Tuple[Pipeline, List[Pipeline], GPComposer]:
 
         multi_objective = len(metric_functions) > 1
         optimizer_params = ApiComposer._init_optimizer_parameters(composer_params,
@@ -259,18 +261,20 @@
         if self.timer.have_time_for_composing(composer_params['pop_size'], n_jobs):
             # Launch pipeline structure composition
             with self.timer.launch_composing():
-                log.message('Pipeline composition started.')
+                self.log.message('Pipeline composition started.')
+                self.was_optimised = False
                 best_pipelines = gp_composer.compose_pipeline(data=train_data)
                 best_pipeline_candidates = gp_composer.best_models
+                self.was_optimised = True
         else:
             # Use initial pipeline as final solution
-            log.message(f'Timeout is too small for composing and is skipped '
-                        f'because fit_time is {self.timer.assumption_fit_spend_time.total_seconds()} sec.')
+            self.log.message(f'Timeout is too small for composing and is skipped '
+                             f'because fit_time is {self.timer.assumption_fit_spend_time.total_seconds()} sec.')
             best_pipelines = fitted_assumption
             best_pipeline_candidates = [fitted_assumption]
 
         for pipeline in best_pipeline_candidates:
-            pipeline.log = log
+            pipeline.log = self.log
         best_pipeline = best_pipelines[0] if isinstance(best_pipelines, Sequence) else best_pipelines
         return best_pipeline, best_pipeline_candidates, gp_composer

@@ -279,7 +283,7 @@ def tune_final_pipeline(self, task: Task,
                             metric_function: Optional[MetricType],
                             composer_requirements: PipelineComposerRequirements,
                             pipeline_gp_composed: Pipeline,
-                            log: LoggerAdapter) -> Pipeline:
+                            ) -> Pipeline:
         """ Launch tuning procedure for obtained pipeline by composer """
         timeout_for_tuning = abs(self.timer.determine_resources_for_tuning()) / 60
         tuner = TunerBuilder(task) \
@@ -294,14 +298,16 @@
         if self.timer.have_time_for_tuning():
             # Tune all nodes in the pipeline
             with self.timer.launch_tuning():
-                log.message(f'Hyperparameters tuning started with {round(timeout_for_tuning)} sec. timeout')
+                self.log.message(f'Hyperparameters tuning started with {round(timeout_for_tuning)} sec. timeout')
+                self.was_tuned = False
                 tuned_pipeline = tuner.tune(pipeline_gp_composed)
-                log.message('Hyperparameters tuning finished')
+                self.was_tuned = True
+                self.log.message('Hyperparameters tuning finished')
         else:
-            log.message(f'Time for pipeline composing was {str(self.timer.composing_spend_time)}.\n'
-                        f'The remaining {max(0, timeout_for_tuning)} seconds are not enough '
-                        f'to tune the hyperparameters.')
-            log.message('Composed pipeline returned without tuning.')
+            self.log.message(f'Time for pipeline composing was {str(self.timer.composing_spend_time)}.\n'
+                             f'The remaining {max(0, timeout_for_tuning)} seconds are not enough '
+                             f'to tune the hyperparameters.')
+            self.log.message('Composed pipeline returned without tuning.')
             tuned_pipeline = pipeline_gp_composed
 
         return tuned_pipeline
@@ -312,8 +318,7 @@ def _divide_parameters(common_dict: dict) -> List[dict]:
 
         :param common_dict: dictionary with parameters for all AutoML modules
         """
-        api_params_dict = dict(train_data=None, task=Task, logger=LoggerAdapter, timeout=5, n_jobs=1,
-                               show_progress=True)
+        api_params_dict = dict(train_data=None, task=Task, timeout=5, n_jobs=1, show_progress=True, logger=None)
 
         composer_params_dict = dict(max_depth=None, max_arity=None, pop_size=None, num_of_generations=None,
                                     keep_n_best=None, available_operations=None, metric=None,
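The new was_optimised and was_tuned flags let callers distinguish a genuinely optimised and tuned pipeline from one returned early because the timer ran out. A schematic check (the api_composer variable and the surrounding call are hypothetical):

# Schematic: api_composer is an ApiComposer after compose_fedot_model(...) has run.
if not api_composer.was_optimised:
    print('Composition step was skipped; the initial assumption was returned.')
if not api_composer.was_tuned:
    print('Tuning step did not complete; the pipeline was returned without tuning.')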
2 changes: 2 additions & 0 deletions fedot/core/adapter/__init__.py
@@ -0,0 +1,2 @@
from .adapter import BaseOptimizationAdapter, DirectAdapter
from .adapt_registry import AdaptRegistry, register_native
140 changes: 140 additions & 0 deletions fedot/core/adapter/adapt_registry.py
@@ -0,0 +1,140 @@
from copy import copy
from functools import partial
from typing import Callable

from fedot.core.utilities.singleton_meta import SingletonMeta


class AdaptRegistry(metaclass=SingletonMeta):
    """Registry of callables that require adaptation of argument/return values.
    AdaptRegistry together with :class:`BaseOptimizationAdapter` enables
    automatic transformation between internal and domain graph representations.

    **Short description of the use-case.**

    Operators & verification rules that operate on the internal representation
    of graphs must be marked as native with the ``register_native`` decorator.

    This is usually the case when users of the framework provide custom
    operators for internal optimization graphs. When custom operators
    operate on domain graphs, nothing is required.

    **Extended description.**

    The optimiser operates on a generic graph representation.
    Because of this, any domain function requires adaptation
    of its graph arguments. The adapter can automatically adapt
    arguments to the generic form in such cases.

    Important notions:
    - 'Domain' functions operate with domain-specific graphs.
    - 'Native' functions operate with generic graphs used by the optimiser.
    - 'External' functions are defined by users of the optimiser
      (most notably, custom mutations and custom verifier rules).
    - 'Internal' functions are those defined by the graph optimiser
      (most notably, the default set of mutations and verifier rules).
      All internal functions are native.

    Adaptation registry usage and behavior:
    - Domain functions are adapted by default.
    - Native functions don't require adaptation of their arguments.
    - External functions are considered 'domain' functions by default.
      Hence, their arguments are adapted, unless users of the optimiser
      exclude them from automatic adaptation by registering them as 'native'.

    AdaptRegistry can be safely used with multiprocessing
    insofar as all relevant functions are registered as native
    in the main process before child processes are started.
    """

    _native_flag_attr_name_ = '_fedot_is_optimizer_native'

    def __init__(self):
        self._registered_native_callables = []

    def register_native(self, fun: Callable) -> Callable:
        """Registers a callable object as a native function
        that can work with the internal graph representation.
        Hence, it doesn't require adaptation when called by the optimiser.

        Implementation details:
        works by setting a special attribute on the object.
        This attribute is then checked by ``is_native``, which is used by adapters.

        Args:
            fun: function or callable to be registered as native

        Returns:
            Callable: same function with special private attribute set
        """
        original_function = AdaptRegistry._get_underlying_func(fun)
        setattr(original_function, AdaptRegistry._native_flag_attr_name_, True)
        self._registered_native_callables.append(original_function)
        return fun

    def unregister_native(self, fun: Callable) -> Callable:
        """Unregisters a callable object. See ``register_native``.

        Args:
            fun: function or callable to be unregistered as native

        Returns:
            Callable: same function with special private attribute unset
        """
        original_function = AdaptRegistry._get_underlying_func(fun)
        if hasattr(original_function, AdaptRegistry._native_flag_attr_name_):
            delattr(original_function, AdaptRegistry._native_flag_attr_name_)
            self._registered_native_callables.remove(original_function)
        return fun

    @staticmethod
    def is_native(fun: Callable) -> bool:
        """Tests a callable object for the presence of the special attribute
        that marks it as native, i.e. one that must not be restored with the Adapter.

        Args:
            fun: tested Callable (function, method, functools.partial, or any callable object)

        Returns:
            bool: True if the callable was registered as native, False otherwise.
        """
        original_function = AdaptRegistry._get_underlying_func(fun)
        is_native = getattr(original_function, AdaptRegistry._native_flag_attr_name_, False)
        return is_native

    def clear_registered_callables(self):
        """Unregisters all callables that were previously registered as native."""
        # copy is to avoid removing elements from the list while iterating over it
        for f in copy(self._registered_native_callables):
            self.unregister_native(f)

    @staticmethod
    def _get_underlying_func(obj: Callable) -> Callable:
        """Recursively unpacks 'partial' and 'method' objects to get the underlying function.

        Args:
            obj: callable to try unpacking

        Returns:
            Callable: unpacked function that underlies the callable, or the unchanged object itself
        """
        while True:
            if isinstance(obj, partial):  # if it is a 'partial'
                obj = obj.func
            elif hasattr(obj, '__func__'):  # if it is a 'method'
                obj = obj.__func__
            else:
                return obj  # return the unpacked underlying function or the original object


def register_native(fun: Callable) -> Callable:
    """Out-of-class version of ``register_native``
    that is intended to be used as a decorator.

    Args:
        fun: function or callable to be registered as native

    Returns:
        Callable: same function with special private attribute set
    """
    return AdaptRegistry().register_native(fun)
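Since adapt_registry.py is entirely new in this PR, a short usage sketch may help; it exercises only the behaviour defined above (the sample callables are placeholders):

from functools import partial

from fedot.core.adapter import AdaptRegistry, register_native


def plain_function(graph):
    return graph


@register_native
def native_function(graph):
    return graph


registry = AdaptRegistry()  # SingletonMeta: the same instance everywhere

assert not AdaptRegistry.is_native(plain_function)  # treated as 'domain' by default
assert AdaptRegistry.is_native(native_function)     # opted out of adaptation

# _get_underlying_func unpacks partials and bound methods,
# so wrapping a native callable does not hide its flag.
assert AdaptRegistry.is_native(partial(native_function, graph=None))

# Registration is reversible; clear_registered_callables() resets everything,
# which matters for tests and for reusing a process.
registry.unregister_native(native_function)
assert not AdaptRegistry.is_native(native_function)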