Fix probabilities do not sum to 1 #528
Conversation
Codecov Report
@@ Coverage Diff @@
## master #528 +/- ##
=========================================
Coverage 100.00% 100.00%
=========================================
Files 85 85
Lines 12439 12439
=========================================
Hits 12439 12439
src/qibo/core/states.py
@@ -92,7 +92,7 @@ def wrapper(self, qubits=None, measurement_gate=None):
     def probabilities(self, qubits=None, measurement_gate=None):
         unmeasured_qubits = tuple(i for i in range(self.nqubits)
                                   if i not in qubits)
-        state = K.reshape(K.square(K.abs(K.cast(self.tensor))), self.nqubits * (2,))
+        state = K.reshape(K.square(K.abs(K.cast(self.tensor, dtype="complex128"))), self.nqubits * (2,))
Does it work if we make the cast on the float level, after the abs:

-state = K.reshape(K.square(K.abs(K.cast(self.tensor, dtype="complex128"))), self.nqubits * (2,))
+state = K.reshape(K.cast(K.square(K.abs(self.tensor)), dtype="float64"), self.nqubits * (2,))
This way we would save some memory and avoid some performance deterioration. I'm not sure whether it works, though. Alternatively, we could do the cast just before the sum, since it's only the sum that is causing the problem, right?
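The memory trade-off between the two cast orders can be sketched with plain NumPy (array sizes and variable names here are illustrative, not qibo's K backend):

```python
import numpy as np

nqubits = 10  # illustrative size
vec = np.full(2**nqubits, 1 / 2**(nqubits / 2), dtype=np.complex64)

# Cast on the complex level: the temporary complex128 copy costs
# 16 bytes per amplitude before abs/square reduce it to float64.
tmp_complex = vec.astype(np.complex128)
probs_a = np.abs(tmp_complex) ** 2

# Cast on the float level: the float32 abs-squared temporary costs
# only 4 bytes per amplitude; the final cast gives float64 (8 bytes).
tmp_float = np.square(np.abs(vec))
probs_b = tmp_float.astype(np.float64)

print(tmp_complex.nbytes, tmp_float.nbytes)  # 16 vs 4 bytes per amplitude
```

Both variants end with float64 probabilities; the difference is the size of the intermediate array that has to be materialized.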
I'm not sure; we may still need the first cast because self.tensor has to be cast from cupy to numpy in case of a fallback, but we could add an additional cast before the sum. I'll give it a try.
I tried a similar approach and it works, take a look at the latest commit.
Yes, that's exactly what I meant. Given that K.cast is properly implemented and does not create any copies when the same dtype is passed, it should now work fine, right? The sum issue is fixed since we convert to float64 beforehand, while avoiding the complex128 conversion.
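The agreed ordering can be sketched in plain NumPy (the function and argument names are illustrative, not qibo's actual API):

```python
import numpy as np

def probabilities(tensor, nqubits, qubits):
    # Illustrative sketch: square the amplitudes in the state's native
    # precision, cast the (real) result to float64 before any summation,
    # then marginalize over the unmeasured qubits.
    unmeasured = tuple(i for i in range(nqubits) if i not in qubits)
    probs = np.square(np.abs(tensor))   # float32 for a complex64 state
    probs = probs.astype(np.float64)    # cast before the sum
    probs = probs.reshape(nqubits * (2,))
    return probs.sum(axis=unmeasured)

state = np.full(8, 1 / np.sqrt(8), dtype=np.complex64)  # 3-qubit uniform state
marginal = probabilities(state, 3, qubits=(0,))  # distribution of qubit 0
```

Since the accumulation happens in double precision, the resulting probabilities sum to 1 within float64 tolerance even when the state itself is stored in single precision.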
Yes, it should work fine.
Can we merge this?
Yes, I believe so.
Closes #517. This PR fixes the issue by casting the state vector to double precision before computing the probabilities.
The memory overhead w.r.t. master should be around 33%.
In the meantime, I made a small change in the NumPy cast to ensure that no arrays are duplicated unless needed. Let me know your opinion.
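The no-duplication behaviour can be sketched with np.asarray, which returns its input unchanged when the requested dtype already matches (the helper name below is illustrative, not the actual qibo implementation):

```python
import numpy as np

def cast(x, dtype="complex128"):
    # np.asarray returns the input array itself when it already has the
    # requested dtype, so no copy is made in that case; otherwise a new
    # array with the target dtype is allocated.
    return np.asarray(x, dtype=dtype)

a = np.zeros(4, dtype=np.complex128)
b = np.zeros(4, dtype=np.complex64)
print(cast(a) is a, cast(b) is b)  # True False
```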