
DBF step optimization #1081

Merged
merged 18 commits into from
Nov 21, 2023
Conversation

MatteoRobbiati
Contributor

@MatteoRobbiati MatteoRobbiati commented Nov 7, 2023

This PR implements a first optimization of the flow step.
We use hyperopt to optimize the step size, considering a single flow execution.

An example of usage:

import matplotlib.pyplot as plt
import numpy as np

from qibo.models.double_bracket import DoubleBracketFlow, FlowGeneratorType
from qibo import set_backend
from qibo.hamiltonians.hamiltonians import Hamiltonian
from qibo.quantum_info import random_hermitian

set_backend("numpy")

NSTEPS=20
nqubits = 4
h0 = random_hermitian(2**nqubits)
DELTA=0.005

np.random.seed(42)

norms = []
# No hyperopt vs hyperopt
methods = [None, "opt"]
init_step = 0.01

for meth in methods:
    print(f"Testing opt: {meth}")
    norm_history = []

    dbf = DoubleBracketFlow(Hamiltonian(nqubits=nqubits, matrix=h0))
    norm_history.append(dbf.off_diagonal_norm)
    step = init_step
    if meth is not None:
        print("Big optimization for setting the initial step:")
        step = dbf.hyperopt_step(step_min=0.005, step_max=0.02, max_evals=1000, verbose=True)

    for _ in range(NSTEPS):
        # evolution under the canonical commutator
        if meth is not None:
            # a hyperparameter space and a hyperopt algorithm can be passed here;
            # the defaults are hp.uniform and tpe
            step = dbf.hyperopt_step(step_min=step - DELTA, step_max=step + DELTA, max_evals=100)
            print(f"New flow duration s={step}")
        # use the (possibly optimized) step; previously init_step was always used,
        # so the optimized step never affected the evolution
        dbf(step=step, mode=FlowGeneratorType.canonical)
        norm_history.append(dbf.off_diagonal_norm)
    norms.append(norm_history)


plt.figure(figsize=(5, 5*6/8))
plt.plot(norms[0], label="Fixed step", color="royalblue", alpha=0.7, lw=1.5)
plt.plot(norms[1], label="Optimized", color="red", alpha=0.7, lw=1.5)
plt.legend()
plt.xlabel("Steps")
plt.ylabel("Cost")
plt.savefig("test_fig.png")

And a benchmark comparing the fixed step size (blue) with the optimized step size (red):

[Figure test_fig.png: cost (off-diagonal norm) vs. steps for fixed and optimized step sizes]

Checklist:

  • Reviewers confirm new code works as expected.
  • Tests are passing.
  • Coverage does not decrease.
  • Documentation is updated.

@MatteoRobbiati MatteoRobbiati marked this pull request as draft November 7, 2023 19:07
@MatteoRobbiati MatteoRobbiati changed the base branch from master to dbf November 7, 2023 19:07
@MatteoRobbiati MatteoRobbiati marked this pull request as ready for review November 13, 2023 09:30

codecov bot commented Nov 13, 2023

Codecov Report

Attention: 1 line in your changes is missing coverage. Please review.

Comparison is base (dbc8a12) 100.00% compared to head (f7d5bf8) 99.98%.

❗ Current head f7d5bf8 differs from pull request most recent head e7bdf50. Consider uploading reports for the commit e7bdf50 to get more accurate results

Files Patch % Lines
src/qibo/models/double_bracket.py 95.23% 1 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##               dbf    #1081      +/-   ##
===========================================
- Coverage   100.00%   99.98%   -0.02%     
===========================================
  Files           51       51              
  Lines         7583     7601      +18     
===========================================
+ Hits          7583     7600      +17     
- Misses           0        1       +1     
Flag Coverage Δ
unittests 99.98% <95.23%> (-0.02%) ⬇️

Flags with carried forward coverage won't be shown.

Contributor

@andrea-pasquale andrea-pasquale left a comment


Thanks @MatteoRobbiati for implementing this.
I have a few suggestions down below.

pyproject.toml (outdated review thread, resolved)
src/qibo/models/double_bracket.py (outdated review thread, resolved)
src/qibo/models/double_bracket.py (outdated review thread, resolved)
tests/test_models_dbf.py (outdated review thread, resolved)
@MatteoRobbiati MatteoRobbiati added this to the Qibo 0.2.3 milestone Nov 16, 2023
Contributor

@andrea-pasquale andrea-pasquale left a comment


Thanks for all the updates @MatteoRobbiati.
Just a small comment regarding how to deal with hyperopt again.


import hyperopt
Contributor


Now that we removed the hyperopt dependency from the main deps, importing this module will raise a ModuleNotFoundError.
I think you should move this import directly inside hyperopt_step, possibly in a try-except block where you can tell the user to install hyperopt if they want to perform the optimization.
Perhaps at this point it is easier just to keep it as a dependency.
What do you recommend @scarrazza? Shall we add hyperopt to the main deps or keep it as an optional dep?
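A minimal sketch of the deferred-import pattern being suggested (the `loss` argument and exact signature are illustrative, not the actual qibo API):

```python
def hyperopt_step(loss, step_min=0.005, step_max=0.02, max_evals=100):
    """Optimize a scalar step via hyperopt, importing it lazily.

    `loss` stands in for the real objective (the off-diagonal norm after
    one flow step); the lazy-import pattern is the point of this sketch.
    """
    try:
        # deferred import: hyperopt is only needed when optimizing the step
        import hyperopt
    except ImportError:
        raise ImportError(
            "hyperopt is not installed; run `pip install hyperopt` "
            "to use step optimization"
        )

    # default space and algorithm: hp.uniform and tpe, as in the PR example
    space = hyperopt.hp.uniform("step", step_min, step_max)
    best = hyperopt.fmin(
        fn=loss,
        space=space,
        algo=hyperopt.tpe.suggest,
        max_evals=max_evals,
        show_progressbar=False,
    )
    return best["step"]
```

This way the module imports cleanly without hyperopt installed, and the user only pays for the dependency when calling the optimizer.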

Contributor Author


If we like this setup, in which the user can set the space and the algorithm directly in the function, I think it is necessary to import hyperopt at the beginning of the double_bracket.py file.

Moreover, we have never spent much time on hyperoptimization of VQCs (number of layers, learning rate of the optimizers, etc.).
I think adding a hyperoptimization tool as a dependency of Qibo can be useful in general.
If you prefer, we can choose between hyperopt and Optuna; I am quite indifferent on this.
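For reference, the same single-parameter search with Optuna would look roughly like this (a hypothetical sketch with an illustrative quadratic loss in place of the off-diagonal norm; not qibo code):

```python
def optimize_step_optuna(loss, step_min=0.005, step_max=0.02, n_trials=100):
    """Toy Optuna counterpart of hyperopt_step: minimize loss(step)
    over a uniform interval using Optuna's default TPE sampler."""
    try:
        # same lazy-import pattern as discussed for hyperopt
        import optuna
    except ImportError:
        raise ImportError(
            "optuna is not installed; run `pip install optuna` to use it"
        )

    optuna.logging.set_verbosity(optuna.logging.WARNING)  # silence per-trial logs
    study = optuna.create_study(direction="minimize")
    study.optimize(
        lambda trial: loss(trial.suggest_float("step", step_min, step_max)),
        n_trials=n_trials,
    )
    return study.best_params["step"]
```

Both libraries default to TPE for this kind of search, so the choice is mostly a matter of API taste and dependency weight.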

Contributor


I think that you should move this import directly inside hyperopt_step, possibly in a try-except block where you can tell the user to install hyperopt if they want to perform the optimization.

I agree with this option

@andrea-pasquale andrea-pasquale mentioned this pull request Nov 17, 2023
@MatteoRobbiati MatteoRobbiati merged commit 0055353 into dbf Nov 21, 2023
27 checks passed