
Enable Optimizer state and Multi-Fidelity passthrough in SMAC #751

Closed
wants to merge 26 commits

Conversation

jsfreischuetz (Contributor) commented May 22, 2024

Enables changing the defaults in the SMAC optimizer to enable state passing (what SMAC internally calls context, but that doesn't match MLOS's notion of context) and multi-fidelity tuning.
This requires changes across the project to support the concept of state, as well as updating a few test cases.

This PR replaces the previous one (#741)
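As a rough sketch of how the state passthrough is meant to work (all names here are illustrative stand-ins, not the actual mlos_core API): `suggest` returns a `(config, metadata)` pair, and the caller hands the metadata back on `register` so the backend optimizer can match the result to its internal trial record:

```python
import pandas as pd


class SketchOptimizer:
    """Minimal stand-in illustrating optimizer state (metadata) passthrough."""

    def __init__(self) -> None:
        self._observations = []

    def suggest(self):
        config = pd.DataFrame({"x": [1.0]})
        # Opaque optimizer state (e.g., SMAC trial info) rides along as metadata.
        metadata = pd.DataFrame({"budget": [10]})
        return config, metadata

    def register(self, config, scores, metadata=None):
        # The caller returns the metadata unmodified so the backend optimizer
        # can associate the observed score with its internal trial record.
        self._observations.append((config, scores, metadata))


opt = SketchOptimizer()
config, metadata = opt.suggest()
opt.register(config, pd.DataFrame({"score": [0.5]}), metadata=metadata)
print(len(opt._observations))  # → 1
```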

@jsfreischuetz jsfreischuetz changed the title minimal implementation of mutli-fidelity Enable Context and Multi-Fidelity passthrough in SMAC May 22, 2024
.vscode/settings.json (outdated; resolved)
@jsfreischuetz jsfreischuetz marked this pull request as ready for review May 24, 2024 20:48
@jsfreischuetz jsfreischuetz requested a review from a team as a code owner May 24, 2024 20:48
bpkroth (Contributor) commented Jun 3, 2024

@jsfreischuetz I just merged #738 from @motus.
Can you please update your branch to resolve conflicts?
Thanks!


(all_configs, all_scores, all_contexts) = optimizer.get_observations()
assert isinstance(all_configs, pd.DataFrame)
assert isinstance(all_scores, pd.DataFrame)
assert all_contexts is None
Contributor:
Can we assert that it is something expected now instead of just removing the check?

Contributor Author:

No, because only SMAC uses context; the other optimizers do not.

Contributor:

Then check for that too.

Contributor Author:

I made the associated changes

@@ -90,7 +89,6 @@ def objective(point: pd.DataFrame) -> pd.DataFrame:
(all_configs, all_scores, all_contexts) = optimizer.get_observations()
assert isinstance(all_configs, pd.DataFrame)
assert isinstance(all_scores, pd.DataFrame)
assert all_contexts is None
Contributor:

Same here. Can we assert that the contexts are something useful?

Contributor Author:

Same as above, unless we split out the SMAC tests

Contributor:

Don't need to split them out, just add a branch in the test for them.
I want to make sure we're testing expected functionality in both directions.
It also makes it more obvious what needs to be done when we extend it elsewhere.

Contributor Author:

I made the associated changes

scores = pd.DataFrame({'score': [1]})
context = pd.DataFrame([["something"]])

with pytest.raises(UserWarning):
optimizer.register(suggestion, scores, context=context)

with pytest.raises(UserWarning):
optimizer.suggest(context=context)
Contributor:

Hmm, so we can register context, but not suggest with it?

Contributor Author:

It's more about the warning no longer being thrown.

Contributor:

I understand what the code in this test is doing. I'm asking why we aren't supporting suggestions with context yet when we're adding support for registering with context. It makes it seem like a black hole of information, or only partial support somehow.

Contributor Author:

What does context for a suggestion mean?

Contributor:

We discussed offline, and it is now documented in the README.md above.

"""
if context is not None:
# not sure how that works here?
warn(f"Not Implemented: Ignoring context {list(context.columns)}", UserWarning)
return pd.DataFrame(dict(self.optimizer_parameter_space.sample_configuration()), index=[0])
return pd.DataFrame(dict(self.optimizer_parameter_space.sample_configuration()), index=[0]), None
Contributor:

Why should this return None for context?

Contributor:

Wouldn't returning the context that was passed in make more sense?

Contributor Author:

I have interpreted the context that is passed in to suggest as different from the context that is returned. I am not exactly sure what the incoming context is supposed to mean.

Contributor:

Here's what I expect:

  • context is composed of some set of metrics describing the execution environment (e.g., vm ram, vm vcore count, workload descriptor, etc.)
  • upon registering, the optimizer is able to build a model of how different config+context pairs map to different output metrics
  • when requesting a new suggestion, possibly for a different execution environment (e.g., a larger VM size) described in the context, the optimizer should be able to use all of that information to provide a config suggestion for that new context

Now, it's possible that the optimizer stinks at that part initially because it doesn't know how to compute suggestions for unseen contexts (future research work probably).
One option could be to return a variable-length (user-specified) list of suggestions, each stemming from a different "known" context (ranked by confidence) rather than the new context, and let the caller sift through and decide what to do. Alternatively, we could return a random config (explore) plus the provided context, so the caller can start trying configs to help fill in that knowledge.
I think the exact specifics of that part are to be determined.

But high level, if someone asks for a suggestion for a given context, I think we should return something within context, not something for anywhere in the space.

Make sense?
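The steps above can be sketched as a toy example (purely illustrative names, not the mlos_core API): observations are keyed by (config, context), and a suggestion for a known context is drawn from data observed under that context rather than from anywhere in the space:

```python
import pandas as pd

# Observations keyed by context: the optimizer can learn how
# config+context pairs map to output metrics (minimizing "score" here).
observations = pd.DataFrame({
    "vm_ram_gb": [4, 4, 8],      # context column
    "x":         [1, 2, 2],      # config column
    "score":     [0.9, 0.5, 0.4],
})


def suggest_for_context(obs: pd.DataFrame, vm_ram_gb: int) -> pd.DataFrame:
    """Return the best-known config *within* the requested context,
    rather than the best config anywhere in the space."""
    in_ctx = obs[obs["vm_ram_gb"] == vm_ram_gb]
    if in_ctx.empty:
        # Unseen context: fall back to exploration (here, a random known config).
        return obs[["x"]].sample(1)
    return in_ctx.nsmallest(1, "score")[["x"]]


print(suggest_for_context(observations, 4))  # best config seen with 4GB RAM
```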

jsfreischuetz (Contributor Author) commented Jun 4, 2024:

context is composed of some set of metrics describing the execution environment (e.g., vm ram, vm vcore count, workload descriptor, etc.)

As far as I am aware, this is not supported by any of the optimizers when asking for a suggestion. SMAC supports the idea of these values being defined at optimizer initialization, but which context to evaluate is determined by the optimizer, and there is still no support for requiring a specific context.

But high level, if someone asks for a suggestion for a given context, I think we should return something within context, not something for anywhere in the space.

I think there is a problem with this. There are two ways to solve it:

  1. If we simply request configurations until we find a context that matches, we now have a long list of pending configurations that should be evaluated according to the optimizer. If you ignore them, the optimizer will not be evaluating in this region as it believes that they are still pending. If you process them late, they are stale.
  2. Alternatively, we can force the context to match, regardless of what the optimizer returns, but this is not actually sampling efficiently according to the optimizer.

I think realistically, we should remove the argument from suggest, but I tried to keep the changes as minimal as possible. Someone else before me also left a comment suggesting this.

Contributor:

We discussed offline about the discrepancy in terminology.

SMAC's notion of "context" is different from that in MLOS, which @jsfreischuetz rightly pointed out may or may not be supported directly yet.
SMAC's is much more about passing internal optimizer "state".

For now, we decided to

  1. document the difference in the README.md above.
  2. rename the SMAC required "state" to either metadata or state

.vscode/settings.json (outdated; resolved)
@bpkroth bpkroth changed the title Enable Context and Multi-Fidelity passthrough in SMAC Enable Optimizer state and Multi-Fidelity passthrough in SMAC Jun 6, 2024
bpkroth (Contributor) left a review comment:

Please restore context and add metadata as new optional parameters.


self.trial_info_df: pd.DataFrame = pd.DataFrame(
columns=["Configuration", "Context", "TrialInfo", "TrialValue"]
)
Contributor:

TODO: look into abstracting this out so multiple backend optimizers can reuse it (e.g., FLAML, LlamaTune)

@@ -224,7 +264,41 @@ def n_random_init(self) -> int:
return self.base_optimizer._initial_design._n_configs

@staticmethod
def _dummy_target_func(config: ConfigSpace.Configuration, seed: int = 0) -> None:
def _filter_kwargs(function: Callable, **kwargs: Any) -> Dict[str, Any]:
Contributor:

Move to util.py?
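For reference, one plausible implementation of such a kwargs-filtering helper (a sketch using `inspect.signature`; not necessarily the PR's exact version, and it does not handle functions that accept `**kwargs`):

```python
import inspect
from typing import Any, Callable, Dict


def filter_kwargs(function: Callable, **kwargs: Any) -> Dict[str, Any]:
    """Keep only the kwargs that the target function actually accepts."""
    sig = inspect.signature(function)
    return {name: value for name, value in kwargs.items() if name in sig.parameters}


def train(config: dict, seed: int = 0) -> str:
    return f"seed={seed}"


# 'budget' is silently dropped because train() does not accept it.
print(train(**filter_kwargs(train, config={}, seed=42, budget=100)))  # → seed=42
```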

mlos_core/mlos_core/optimizers/README.md (resolved)
mlos_core/mlos_core/optimizers/README.md (outdated; resolved)
mlos_core/mlos_core/optimizers/README.md (outdated; resolved)
mlos_core/mlos_core/optimizers/README.md (outdated; resolved)
@@ -86,7 +86,7 @@ def __init__(self, *, # pylint: disable=too-many-arguments
self._suggested_config: Optional[dict]

def _register(self, configurations: pd.DataFrame, scores: pd.DataFrame,
context: Optional[pd.DataFrame] = None) -> None:
metadata: Optional[pd.DataFrame] = None) -> None:
Contributor:

Please restore context.
The idea was to add metadata in addition to it, not replace it.

Contributor:

(requires associated changes throughout)

jsfreischuetz and others added 2 commits June 6, 2024 14:08
…zer.py

Co-authored-by: Brian Kroth <bpkroth@users.noreply.github.com>
Co-authored-by: Brian Kroth <bpkroth@users.noreply.github.com>
jsfreischuetz and others added 3 commits June 6, 2024 14:11
Co-authored-by: Brian Kroth <bpkroth@users.noreply.github.com>
This also involves modifying the test cases that interact with this interface
"""
# Do some input validation.
assert set(scores.columns) == set(self._optimization_targets), \
"Mismatched optimization targets."
if type(self._optimization_targets) is str:
Contributor:

Suggested change
if type(self._optimization_targets) is str:
assert self._optimization_targets, "Missing or invalid optimization targets"
if type(self._optimization_targets) is str:

Contributor:

Also assert not empty (see also comment above about accepting None)

Contributor Author:

I don't think this makes sense given my comment above

Contributor:

OK, but separate PR for that one please

motus and others added 3 commits June 12, 2024 10:08
Co-authored-by: Brian Kroth <bpkroth@users.noreply.github.com>
Co-authored-by: Brian Kroth <bpkroth@users.noreply.github.com>
mlos_core/mlos_core/optimizers/optimizer.py (outdated; resolved)
self._has_context = context is not None

if self._space_adapter:
configurations = self._space_adapter.inverse_transform(configurations)
assert configurations.shape[1] == len(self.optimizer_parameter_space.values()), \
"Mismatched configuration shape after inverse transform."
return self._register(configurations, scores, context)
return self._register(configurations, scores, metadata, context)
Contributor:

same here

self._has_context = context is not None

if self._space_adapter:
configurations = self._space_adapter.inverse_transform(configurations)
assert configurations.shape[1] == len(self.optimizer_parameter_space.values()), \
"Mismatched configuration shape after inverse transform."
return self._register(configurations, scores, context)
return self._register(configurations, scores, metadata, context)

@abstractmethod
def _register(self, configurations: pd.DataFrame, scores: pd.DataFrame,
Contributor:

Suggested change
def _register(self, configurations: pd.DataFrame, scores: pd.DataFrame,
def _register(self, *, configurations: pd.DataFrame, scores: pd.DataFrame,

Can force the args to be named to help avoid param ordering mistakes.

Contributor:

Same elsewhere (e.g., public methods and suggest), though this might be a larger API change that needs its own PR first in preparation for this one, since callers will also be affected.

Member:

Agree. Let's fix _register now and update the public register in the next PR

configuration: pd.DataFrame = config_to_dataframe(
self.parameter_space.get_default_configuration()
)
metadata = self.delayed_metadata
Contributor:

nit: when creating PRs - try to keep your changes smaller. It's easier to review and debug.
If the order of this one didn't really matter you could have left the first line alone and only added the two new ones

mlos_core/mlos_core/optimizers/optimizer.py (outdated; resolved)
mlos_core/mlos_core/optimizers/optimizer.py (outdated; resolved)
jsfreischuetz and others added 2 commits June 12, 2024 17:04
Co-authored-by: Brian Kroth <bpkroth@users.noreply.github.com>
Co-authored-by: Brian Kroth <bpkroth@users.noreply.github.com>
motus (Member) left a review comment:

Left a few comments. Let's have a meeting and discuss the changes. I am a bit confused about the purpose of the metadata and how it is different from the context

@@ -56,9 +56,11 @@ def __init__(self, *,
raise ValueError("Number of weights must match the number of optimization targets")

self._space_adapter: Optional[BaseSpaceAdapter] = space_adapter
self._observations: List[Tuple[pd.DataFrame, pd.DataFrame, Optional[pd.DataFrame]]] = []
self._observations: List[Tuple[pd.DataFrame, pd.DataFrame, Optional[pd.DataFrame], Optional[pd.DataFrame]]] = []
Member:

this gets a bit unwieldy. Should we have a named tuple for _pending_observations, e.g.,

class PendingObservation(NamedTuple):
    """A named tuple representing a pending observation."""

    configurations: pd.DataFrame
    context: Optional[pd.DataFrame]
    meta: Optional[pd.DataFrame]

and do the same for _observations? (not sure we can inherit NamedTuples - most likely, we can't)

Member:

or, maybe, have _observed_configs, _observed_scores, _observed_contexts etc. and concatenate the dataframes instead of having a list of dataframes. I am pretty sure the schemas are the same from one _register call to the next
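A sketch of that alternative (attribute names hypothetical): keep one growing DataFrame per field and `pd.concat` on each register call, assuming the schemas match from one call to the next:

```python
from typing import Optional

import pandas as pd


class ObservationStore:
    """Accumulate observations as concatenated frames instead of a list of tuples."""

    def __init__(self) -> None:
        self._observed_configs = pd.DataFrame()
        self._observed_scores = pd.DataFrame()
        self._observed_contexts = pd.DataFrame()

    def register(self, configs: pd.DataFrame, scores: pd.DataFrame,
                 context: Optional[pd.DataFrame] = None) -> None:
        # Schemas are assumed identical across calls, so concat is safe.
        self._observed_configs = pd.concat(
            [self._observed_configs, configs], ignore_index=True)
        self._observed_scores = pd.concat(
            [self._observed_scores, scores], ignore_index=True)
        if context is not None:
            self._observed_contexts = pd.concat(
                [self._observed_contexts, context], ignore_index=True)


store = ObservationStore()
store.register(pd.DataFrame({"x": [1]}), pd.DataFrame({"score": [0.9]}))
store.register(pd.DataFrame({"x": [2]}), pd.DataFrame({"score": [0.5]}))
print(len(store._observed_configs))  # → 2
```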

Comment on lines +62 to +63
self.delayed_config: Optional[pd.DataFrame] = None
self.delayed_metadata: Optional[pd.DataFrame] = None
Member:

Suggested change
self.delayed_config: Optional[pd.DataFrame] = None
self.delayed_metadata: Optional[pd.DataFrame] = None
self._delayed_config: Optional[pd.DataFrame] = None
self._delayed_metadata: Optional[pd.DataFrame] = None

probably should be private, right?


Returns
-------
observations : Tuple[pd.DataFrame, pd.DataFrame, Optional[pd.DataFrame]]
A triplet of (config, score, context) DataFrames of observations.
A triplet of (config, score, metadata) DataFrames of observations.
Member:

Suggested change
A triplet of (config, score, metadata) DataFrames of observations.
A 4-tuple of (config, score, context, metadata) DataFrames of observations.

(or, better yet, a NamedTuple)

configs = pd.concat([config for config, _, _ in self._observations]).reset_index(drop=True)
scores = pd.concat([score for _, score, _ in self._observations]).reset_index(drop=True)
configs = pd.concat([config for config, _, _, _ in self._observations]).reset_index(drop=True)
scores = pd.concat([score for _, score, _, _ in self._observations]).reset_index(drop=True)
Member:

Yeah, let's have a List[Observation] NamedTuples - or, better yet, concatenate the dataframes right there in _register and forget about List and NamedTuple

Member:

It looks like we don't use this module anywhere - maybe remove it from this PR?

assert best_context is None
assert set(best_config.columns) == {'x', 'y'}
assert set(best_score.columns) == {'main_score', 'other_score'}
assert best_config.shape == (1, 2)
assert best_score.shape == (1, 2)

(all_configs, all_scores, all_contexts) = optimizer.get_observations()
(all_configs, all_scores, all_metadata, best_context) = optimizer.get_observations()
Member:

why best_context?? shouldn't it be all_contexts instead?

assert isinstance(all_metadata, pd.DataFrame) or all_metadata is None
else:
assert all_metadata is None
assert best_context is None
Member:

Suggested change
assert best_context is None
assert all_contexts is None

?

@@ -48,7 +48,7 @@ def test_create_optimizer_and_suggest(configuration_space: CS.ConfigurationSpace

assert optimizer.parameter_space is not None

suggestion = optimizer.suggest()
suggestion, _ = optimizer.suggest()
Member:

here and in other places:

Suggested change
suggestion, _ = optimizer.suggest()
suggestion, _metadata = optimizer.suggest()

it's better to spell out things

(best_config, best_score, _context) = best_observation
(llamatune_best_config, llamatune_best_score, _context) = llamatune_best_observation
(best_config, best_score, _, _) = best_observation
(llamatune_best_config, llamatune_best_score, _metadata) = llamatune_best_observation
Member:

I am confused. shouldn't it be

Suggested change
(llamatune_best_config, llamatune_best_score, _metadata) = llamatune_best_observation
(llamatune_best_config, llamatune_best_score, _context, _metadata) = llamatune_best_observation

?
this should fail in tests

bpkroth (Contributor) commented Jun 26, 2024

Left a few comments. Let's have a meeting and discuss the changes. I am a bit confused about the purpose of the metadata and how it is different from the context

It should be in the README I asked for and also some internal chats we had, but essentially, metadata is for internal optimizer saved state (e.g., something SMAC specific) and context is more of an index into a subspace of a larger optimization problem (e.g., optimize for VMs with 8GB RAM & 4 vCores and a transactional workload by reusing data from VMs with 4GB RAM and 2 vCores). Internally we may wrap the latter in multiple SMAC invocations, for instance.
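To make the distinction concrete (values illustrative only; the metadata columns are a guess at SMAC-style trial state, not the actual schema):

```python
import pandas as pd

# context: a user-facing description of the execution environment,
# i.e., an index into a subspace of the larger optimization problem.
context = pd.DataFrame({
    "vm_ram_gb": [8],
    "vm_vcores": [4],
    "workload": ["transactional"],
})

# metadata: opaque backend-optimizer state (e.g., trial budget/seed) that
# callers pass back verbatim on register() without interpreting it.
metadata = pd.DataFrame({
    "budget": [10.0],
    "instance": [None],
    "seed": [1234],
})

print(list(context.columns), list(metadata.columns))
```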

bpkroth added a commit that referenced this pull request Jun 28, 2024
This is a simple PR that makes all arguments explicit for
optimizer-related function calls in preparation to add additional
arguments in #751 and make it easier to review.

---------

Co-authored-by: Brian Kroth <bpkroth@users.noreply.github.com>
Co-authored-by: Brian Kroth <bpkroth@microsoft.com>
bpkroth added a commit that referenced this pull request Jul 1, 2024
Adds metadata to respond from suggest, and be passable into register.

This is in support of adding multi-fidelity support (#751)

---------

Co-authored-by: Brian Kroth <bpkroth@users.noreply.github.com>
Co-authored-by: Brian Kroth <bpkroth@microsoft.com>