
Speed up vectorization of symmetric matrices #3349

Merged — 2 commits merged May 9, 2023
Conversation

@blegat (Member) commented May 9, 2023

Before #2107, we didn't do this conversion. I guess I just did it to make it similar to the SquareMatrixShape case, but this causes unnecessary allocations if the matrix is already Symmetric, and even more if the matrix is sparse, as in https://discourse.julialang.org/t/out-of-memory-when-constructing-large-sparse-sdp-in-jump/98332/2

Benchmark

Consider the following benchmark:

using JuMP, LinearAlgebra, SparseArrays

n = 1000
p = 0.01
C = sprandn(Float64, n, n, p)
model = Model()
@variable(model, y[1:n])
@time @constraint(model, Symmetric(C - spdiagm(y)) in PSDCone())

On master, this gives

0.582551 seconds (10.06 M allocations: 696.425 MiB, 53.21% gc time)

After this PR, this gives

0.131655 seconds (2.58 M allocations: 186.525 MiB, 33.82% gc time)

The MOI way:

@time begin
func = MOI.VectorAffineFunction(
    [MOI.VectorAffineTerm(i, MOI.ScalarAffineTerm(1.0, index(y[i]))) for i in 1:n],
    JuMP.vectorize(C, SymmetricMatrixShape(n)),
)
set = MOI.PositiveSemidefiniteConeTriangle(n)
MOI.add_constraint(backend(model), func, set)
end

This gives

0.006370 seconds (33 allocations: 9.093 MiB)

@codecov bot commented May 9, 2023

Codecov Report

Patch coverage: 100.00% and project coverage change: -0.01 ⚠️

Comparison is base (63b7648) 98.06% compared to head (145f6cb) 98.06%.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #3349      +/-   ##
==========================================
- Coverage   98.06%   98.06%   -0.01%     
==========================================
  Files          34       34              
  Lines        4921     4919       -2     
==========================================
- Hits         4826     4824       -2     
  Misses         95       95              
Impacted Files Coverage Δ
src/sd.jl 100.00% <100.00%> (ø)


@odow (Member) commented May 9, 2023

Could we add a test of some sort?

@odow (Member) commented May 9, 2023

Does this improve that benchmark?

src/sd.jl (review thread, resolved)
@blegat (Member, Author) commented May 9, 2023

Could we add a test of some sort?

Yes, I think we could have some sort of custom JuMP scalar that counts the number of times it is copied.
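A minimal sketch of what such a test helper might look like — all names here (CountingScalar, COPY_COUNT) are hypothetical, not part of JuMP; the idea is just a scalar wrapper whose copy increments a counter so a test can assert how many copies vectorize performs:

# Hypothetical test scalar: counts how often it is copied.
const COPY_COUNT = Ref(0)

struct CountingScalar
    value::Float64
end

function Base.copy(x::CountingScalar)
    COPY_COUNT[] += 1
    return CountingScalar(x.value)
end

# A test could then build a Symmetric matrix of CountingScalar,
# reset COPY_COUNT[] to 0, call vectorize on it, and assert that
# COPY_COUNT[] equals the expected number of copies (e.g. zero
# extra copies after this PR).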

Does this improve that benchmark?

Added in the first comment.

@odow (Member) commented May 9, 2023

After this PR, this gives
0.131655 seconds (2.58 M allocations: 186.525 MiB, 33.82% gc time)

[MOI] gives
0.006370 seconds (33 allocations: 9.093 MiB)

I guess the underlying problem is that JuMP doesn't have a way of passing sparse vectors to MOI.

More precisely, we do pass sparse VectorAffineFunction to MOI, but we still instantiate the dense vector at the JuMP level.

julia> model = Model();

julia> @variable(model, x)
x

julia> y = [1.0 * x, 0.0, 2.0]
3-element Vector{AffExpr}:
 x
 0
 2

julia> f = moi_function(y)
┌                              ┐
│0.0 + 1.0 MOI.VariableIndex(1)│
│0.0                           │
│2.0                           │
└                              ┘

julia> f.terms
1-element Vector{MathOptInterface.VectorAffineTerm{Float64}}:
 MathOptInterface.VectorAffineTerm{Float64}(1, MathOptInterface.ScalarAffineTerm{Float64}(1.0, MOI.VariableIndex(1)))

julia> f.constants
3-element Vector{Float64}:
 0.0
 0.0
 2.0

@odow odow merged commit dd66846 into master May 9, 2023
@odow odow deleted the bl/vectorize_sym branch May 9, 2023 22:46