
Add sparselib ILU for benchmarks #487

Merged: 8 commits into develop, Apr 6, 2020
Conversation

@upsj (Member) commented Mar 24, 2020

This PR adds a LinOp for cuSPARSE/hipSPARSE ILU factorizations and uses the new ILU preconditioner interface described in #472 to integrate it into the benchmarks.
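For illustration, a minimal sketch of how the new factorization could be used once merged; it mirrors the existing ParIlu factory interface, and the exact template parameters here are my assumption:

```cpp
#include <iostream>

#include <ginkgo/ginkgo.hpp>

int main()
{
    // Run on the CUDA executor so the cuSPARSE ILU kernels are used;
    // the HIP executor would pick hipSPARSE instead.
    auto exec = gko::CudaExecutor::create(0, gko::OmpExecutor::create());

    // Read the system matrix as CSR from stdin (matrix market format).
    auto mtx = gko::share(
        gko::read<gko::matrix::Csr<double, gko::int32>>(std::cin, exec));

    // Generate the sparselib ILU(0) factors; the result is a
    // Composition holding the L and U factors as CSR matrices.
    auto factors =
        gko::factorization::Ilu<double, gko::int32>::build().on(exec)
            ->generate(mtx);
}
```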

@upsj upsj added mod:cuda This is related to the CUDA module. reg:benchmarking This is related to benchmarking. type:preconditioner This is related to the preconditioners 1:ST:ready-for-review This PR is ready for review mod:hip This is related to the HIP module. labels Mar 24, 2020
@upsj upsj self-assigned this Mar 24, 2020
Resolved review threads on: benchmark/utils/cuda_linops.hpp, benchmark/utils/hip_linops.hip.hpp, benchmark/preconditioner/preconditioner.cpp
@upsj force-pushed the add_sparselib_ilu branch 3 times, most recently from b5544d0 to 905a089 on March 27, 2020 at 19:43
@codecov bot commented Mar 28, 2020

Codecov Report

Merging #487 into develop will decrease coverage by 0.30%.
The diff coverage is 76.31%.


@@             Coverage Diff             @@
##           develop     #487      +/-   ##
===========================================
- Coverage    88.77%   88.46%   -0.31%     
===========================================
  Files          256      262       +6     
  Lines        16563    16645      +82     
===========================================
+ Hits         14703    14725      +22     
- Misses        1860     1920      +60     
Impacted Files Coverage Δ
core/device_hooks/common_kernels.inc.cpp 0.00% <0.00%> (ø)
core/factorization/ilu.cpp 0.00% <0.00%> (ø)
include/ginkgo/core/factorization/ilu.hpp 0.00% <0.00%> (ø)
include/ginkgo/core/factorization/par_ilu.hpp 100.00% <ø> (ø)
omp/factorization/ilu_kernels.cpp 0.00% <0.00%> (ø)
omp/factorization/par_ilu_kernels.cpp 100.00% <ø> (ø)
reference/factorization/ilu_kernels.cpp 0.00% <0.00%> (ø)
reference/factorization/par_ilu_kernels.cpp 100.00% <ø> (ø)
include/ginkgo/core/preconditioner/ilu.hpp 64.86% <78.57%> (+4.30%) ⬆️
core/factorization/par_ilu.cpp 88.46% <80.00%> (+1.36%) ⬆️
... and 15 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 8da2da9...cc20d16

@upsj upsj removed the reg:benchmarking This is related to benchmarking. label Mar 29, 2020
@pratikvn (Member) commented Apr 1, 2020

Do you want to merge this before #400 or after?

@upsj (Member, Author) commented Apr 1, 2020

I would probably merge it before, since it is much easier to review.

@pratikvn (Member) left a comment

LGTM!

Resolved review threads on: core/factorization/ilu.cpp, include/ginkgo/core/factorization/ilu.hpp
@yhmtsai (Member) left a comment

LGTM. I would like to check something.

Resolved review threads on: core/factorization/ilu.cpp, hip/base/hipsparse_bindings.hip.hpp
@thoasm (Member) left a comment

LGTM!
I have some small comments, nothing major.

Resolved review threads on: core/factorization/ilu.cpp, cuda/factorization/ilu_kernels.cu, include/ginkgo/core/factorization/ilu.hpp
@tcojean (Member) left a comment

LGTM.

There is one issue I want to bring up, although I don't have a proper solution. Looking at the code for ILU generation in general, there is so much Ginkgo-specific code that the important parts are completely buried under allocations etc., which I think would create an unfair/irrelevant situation when benchmarking `generate` as the factorization time.

```cpp
        parameters_.u_strategy =
            std::make_shared<typename matrix_type::classical>();
    }
    generate_l_u(system_matrix)->move_to(this);
```
@tcojean (Member) commented Apr 2, 2020

Maybe you should put a comment somewhere noting that `this` will become the result of `generate_l_u`. All of this is a bit confusing until you see that line.
(I understand it's the same for ParIlu.)


```cpp
    return Composition<ValueType>::create(std::move(l_factor),
                                          std::move(u_factor));
}
```
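To make the "separation into L and U" discussed below concrete, here is a sequential reference sketch of the splitting step (my illustration of the semantics, not Ginkgo's actual kernel); it assumes the combined L\U ILU(0) result is stored in one CSR matrix with explicit diagonal entries:

```cpp
#include <vector>

// Minimal CSR container for the sketch.
struct Csr {
    std::vector<int> row_ptrs, col_idxs;
    std::vector<double> values;
};

// Split the combined factors into L (unit diagonal) and U.
void split_l_u(const Csr &lu, int num_rows, Csr &l, Csr &u)
{
    l.row_ptrs.assign(num_rows + 1, 0);
    u.row_ptrs.assign(num_rows + 1, 0);
    for (int row = 0; row < num_rows; ++row) {
        for (int nz = lu.row_ptrs[row]; nz < lu.row_ptrs[row + 1]; ++nz) {
            const int col = lu.col_idxs[nz];
            if (col < row) {  // strictly lower part belongs to L
                l.col_idxs.push_back(col);
                l.values.push_back(lu.values[nz]);
            } else {  // diagonal and upper part belong to U
                u.col_idxs.push_back(col);
                u.values.push_back(lu.values[nz]);
            }
        }
        // L additionally stores an explicit unit diagonal.
        l.col_idxs.push_back(row);
        l.values.push_back(1.0);
        l.row_ptrs[row + 1] = static_cast<int>(l.col_idxs.size());
        u.row_ptrs[row + 1] = static_cast<int>(u.col_idxs.size());
    }
}
```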
@tcojean (Member) commented:

Here I'm a bit dubious in terms of benchmark quality. While most of that is required for the Ginkgo specifics of the interface, if we are to benchmark the vendor libraries' ILU factorization, then pretty much all of the relevant time would be spent in `compute_ilu`, no? It's the same in par_ilut, with the `compute_l_u_factors` function being the important one, AFAIK. All of these allocations and so on to wrap the data into CSR matrices that we then put in a Composition are purely Ginkgo specifics. Therefore the generate time would have little relevance when benchmarking.

@upsj (Member, Author) commented:

The generate time would have little relevance, yes. But we can always just report the runtime of the `compute_ilu` operation (and I should note that the overall runtime is dominated by that and by the triangular solve analysis phases).

@tcojean (Member) commented Apr 2, 2020

Yes, of course, thanks to the loggers, which report the time of separate operations. I think at a minimum we should have an immense warning somewhere that the correct "generate" or factorization time should be taken from the specific compute kernels, and not from the global generate time. That is different from, say, Jacobi, where the generate time is a rather proper representation of the actual time.

Whereas here, because we want to separate the L and U factors into Ginkgo CSR matrices, among other interface issues and quality-check improvements, we can/could have a significant overhead which would blur the reality of the results.
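As a pointer for anyone reproducing such per-operation measurements: timings can be collected by attaching a logger to the executor. A minimal sketch (my illustration, not the benchmark code; the `Logger` hooks and `Operation::get_name()` are part of Ginkgo's public interface, but constructor details may differ between versions):

```cpp
#include <chrono>
#include <iostream>

#include <ginkgo/ginkgo.hpp>

// Prints the wall-clock time of every operation (e.g. the ILU compute
// kernel) launched on the executor this logger is attached to.
// Note: assumes operations do not overlap, which holds on a single stream.
class OperationTimer : public gko::log::Logger {
public:
    explicit OperationTimer(std::shared_ptr<const gko::Executor> exec)
        : gko::log::Logger(exec)
    {}

    void on_operation_launched(const gko::Executor *exec,
                               const gko::Operation *) const override
    {
        exec->synchronize();  // exclude previously queued work
        start_ = std::chrono::steady_clock::now();
    }

    void on_operation_completed(const gko::Executor *exec,
                                const gko::Operation *op) const override
    {
        exec->synchronize();  // make sure the kernel actually finished
        const std::chrono::duration<double, std::milli> elapsed =
            std::chrono::steady_clock::now() - start_;
        std::cerr << op->get_name() << ": " << elapsed.count() << " ms\n";
    }

private:
    mutable std::chrono::steady_clock::time_point start_;
};
```

It would be attached before calling `generate`, e.g. via `exec->add_logger(std::make_shared<OperationTimer>(exec));`.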

@upsj (Member, Author) commented:

I will add some performance numbers before we merge this PR; as far as I remember, the separation into L and U was really negligible.

@upsj (Member, Author) commented Apr 2, 2020

[attached plot: overhead]

The actual kernels (factorization::*) look to be around 5% of the total runtime; some of the allocations and the copy we would need to do anyway. The only thing that kind of surprises me is the total alloc/free overhead, but I am not sure how much we can do about that.

EDIT: I ran everything on the K20Xm, Radeon 7 and V100.
Did the last years bring any performance improvements? These results might shock you!

EDIT2: Removed old plots, since I copy-pasted the wrong data there

@tcojean (Member) commented:

Doesn't this actually just show that the hipSPARSE ILU generation is faster than the cuSPARSE one? It also shows that your compute_l_u is faster on the AMD GPUs. Maybe the applies are different? As you say, the apply on the K20X is much slower than on the V100. Is it similar for the Radeon 7?

@upsj (Member, Author) commented:

The apply runtimes on the V100 and Radeon 7 are comparable, around 4 ms. The astonishing thing is that apparently the analysis phases of ILU(0) and the triangular solves did not get any faster with at least 5 years of hardware development. (The compute_lu kernel actually contains two phases: the analysis and the ILU solve. With the current setup, it is however not possible to measure their runtimes separately. Separating them into two kernels would be complicated, as this would require something similar to the SolveStruct.)
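For context, those two phases map to separate calls in cuSPARSE's csrilu02 interface (hipSPARSE mirrors them). A condensed sketch of that call sequence, with error checking omitted and the handle, descriptor, and device-side CSR arrays assumed to be set up already:

```cpp
#include <cuda_runtime.h>
#include <cusparse.h>

// ILU(0) on an m x m CSR matrix with nnz nonzeros; d_values is
// overwritten with the combined L\U factors.
void compute_ilu0(cusparseHandle_t handle, cusparseMatDescr_t descr, int m,
                  int nnz, double *d_values, const int *d_row_ptrs,
                  const int *d_col_idxs)
{
    csrilu02Info_t info;
    cusparseCreateCsrilu02Info(&info);

    int buffer_size{};
    cusparseDcsrilu02_bufferSize(handle, m, nnz, descr, d_values, d_row_ptrs,
                                 d_col_idxs, info, &buffer_size);
    void *buffer{};
    cudaMalloc(&buffer, buffer_size);

    // Phase 1: analysis, builds the level-scheduling information.
    cusparseDcsrilu02_analysis(handle, m, nnz, descr, d_values, d_row_ptrs,
                               d_col_idxs, info,
                               CUSPARSE_SOLVE_POLICY_USE_LEVEL, buffer);

    // Phase 2: the numeric ILU(0) factorization itself.
    cusparseDcsrilu02(handle, m, nnz, descr, d_values, d_row_ptrs, d_col_idxs,
                      info, CUSPARSE_SOLVE_POLICY_USE_LEVEL, buffer);

    cudaFree(buffer);
    cusparseDestroyCsrilu02Info(info);
}
```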

@tcojean (Member) commented:

The compute_lu is a ParILU, isn't it? So by nature isn't that faster than the level-scheduled ILU(0) analysis and trisolve phases?

@upsj (Member, Author) commented:

No, this is still only the sparselib ILU; there is no ParILU in these plots.

@tcojean (Member) commented Apr 3, 2020

Another interesting thing: for ani4, compute_lu (and everything else) seems to have shrunk quite a bit from the K20 to the V100. Weird that ani7 doesn't behave like that.

@yhmtsai (Member) left a comment

LGTM in general.
Is the purpose of factorization_kernels to contain all shared kernels of the factorizations?
I think we will have SVD/eigen or other factorizations at some point.
Should we put their shared kernels in factorization_kernels?

```cpp
    IndexType)                                                              \
    void add_diagonal_elements(std::shared_ptr<const DefaultExecutor> exec, \
                               matrix::Csr<ValueType, IndexType> *mtx,      \
                               bool is_sorted)
```
@yhmtsai (Member) suggested a whitespace-only alignment fix to the `bool is_sorted)` line.

```cpp
    void initialize_row_ptrs_l_u(                                      \
        std::shared_ptr<const DefaultExecutor> exec,                   \
        const matrix::Csr<ValueType, IndexType> *system_matrix,       \
        IndexType *l_row_ptrs, IndexType *u_row_ptrs)
```
@yhmtsai (Member) suggested a whitespace-only alignment fix to the `IndexType *l_row_ptrs, IndexType *u_row_ptrs)` line.
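For readers unfamiliar with this kernel: it counts how many nonzeros each row of L and U will hold, so the factor matrices can be allocated before the values are split out. A sequential reference sketch of the semantics (my illustration, not the actual kernel; it assumes `add_diagonal_elements` has already ensured every row has a diagonal entry):

```cpp
#include <vector>

// Given the CSR row pointers/column indices of the system matrix, compute
// row pointers for the L factor (strictly lower part plus unit diagonal)
// and the U factor (diagonal plus upper part).
void initialize_row_ptrs_l_u_ref(int num_rows,
                                 const std::vector<int> &row_ptrs,
                                 const std::vector<int> &col_idxs,
                                 std::vector<int> &l_row_ptrs,
                                 std::vector<int> &u_row_ptrs)
{
    l_row_ptrs.assign(num_rows + 1, 0);
    u_row_ptrs.assign(num_rows + 1, 0);
    for (int row = 0; row < num_rows; ++row) {
        int l_nnz = 1;  // unit diagonal of L
        int u_nnz = 0;
        for (int nz = row_ptrs[row]; nz < row_ptrs[row + 1]; ++nz) {
            if (col_idxs[nz] < row) {
                ++l_nnz;
            } else {
                ++u_nnz;  // counts the diagonal entry as part of U
            }
        }
        l_row_ptrs[row + 1] = l_row_ptrs[row] + l_nnz;
        u_row_ptrs[row + 1] = u_row_ptrs[row] + u_nnz;
    }
}
```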

@upsj (Member, Author) commented Apr 6, 2020

@yhmtsai The main point of factorization_kernels is to move kernels out of ParILU into a more general file, since they are used in many different algorithms (ParILU(T), sparselib ILU, ...?).
I guess when we add other factorization types, we might need to move these kernels again.
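To illustrate the sharing @upsj describes: with the kernels declared once in factorization_kernels, each algorithm registers and runs them through the executor. A sketch following Ginkgo's `GKO_REGISTER_OPERATION` pattern (simplified; treat the exact namespaces as assumptions):

```cpp
namespace gko {
namespace factorization {
namespace {

// Register the shared kernels once; ParILU(T) and the sparselib ILU can
// then dispatch the same implementation on any backend via the executor.
GKO_REGISTER_OPERATION(add_diagonal_elements,
                       factorization::add_diagonal_elements);
GKO_REGISTER_OPERATION(initialize_row_ptrs_l_u,
                       factorization::initialize_row_ptrs_l_u);

}  // namespace

// ... later, inside a factorization's generate():
//     exec->run(make_add_diagonal_elements(csr_matrix.get(),
//                                          /* is_sorted */ false));

}  // namespace factorization
}  // namespace gko
```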

@upsj upsj added 1:ST:ready-to-merge This PR is ready to merge. and removed 1:ST:ready-for-review This PR is ready for review labels Apr 6, 2020
upsj and others added 7 commits on April 6, 2020 at 17:06, including:

* Move common factorization code to factorization_kernels
* Fix typos
* Add missing whitespace

Co-authored-by: Yuhsiang M. Tsai <yhmtsai@gmail.com>
Co-authored-by: Pratik Nayak <pratikvn@protonmail.com>
@upsj upsj merged commit 9b36813 into develop Apr 6, 2020
@upsj upsj deleted the add_sparselib_ilu branch April 6, 2020 22:22
@sonarcloud bot commented Apr 7, 2020

Kudos, SonarCloud Quality Gate passed!

Bugs: A (0 bugs)
Vulnerabilities: A (0 vulnerabilities, 0 security hotspots to review)
Code smells: A (0 code smells)

No coverage information
No duplication information

@tcojean tcojean mentioned this pull request Jun 23, 2020
tcojean added a commit that referenced this pull request Jul 7, 2020
The Ginkgo team is proud to announce the new minor release of Ginkgo version
1.2.0. This release brings full HIP support to Ginkgo, new preconditioners
(ParILUT, ISAI), conversion between double and float for all LinOps, and many
more features and fixes.

Supported systems and requirements:
+ For all platforms, cmake 3.9+
+ Linux and MacOS
  + gcc: 5.3+, 6.3+, 7.3+, all versions after 8.1+
  + clang: 3.9+
  + Intel compiler: 2017+
  + Apple LLVM: 8.0+
  + CUDA module: CUDA 9.0+
  + HIP module: ROCm 2.8+
+ Windows
  + MinGW and CygWin: gcc 5.3+, 6.3+, 7.3+, all versions after 8.1+
  + Microsoft Visual Studio: VS 2017 15.7+
  + CUDA module: CUDA 9.0+, Microsoft Visual Studio
  + OpenMP module: MinGW or CygWin.


The current known issues can be found in the [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues).


# Additions
Here are the main additions to the Ginkgo library. Other thematic additions are listed below.
+ Add full HIP support to Ginkgo [#344](#344), [#357](#357), [#384](#384), [#373](#373), [#391](#391), [#396](#396), [#395](#395), [#393](#393), [#404](#404), [#439](#439), [#443](#443), [#567](#567)
+ Add a new ISAI preconditioner [#489](#489), [#502](#502), [#512](#512), [#508](#508), [#520](#520)
+ Add support for ParILUT and ParICT factorization with ILU preconditioners [#400](#400)
+ Add a new BiCG solver [#438](#438)
+ Add a new permutation matrix format [#352](#352), [#469](#469)
+ Add CSR SpGEMM support [#386](#386), [#398](#398), [#418](#418), [#457](#457)
+ Add CSR SpGEAM support [#556](#556)
+ Make all solvers and preconditioners transposable [#535](#535)
+ Add CsrBuilder and CooBuilder for intrusive access to matrix arrays [#437](#437)
+ Add a standard-compliant allocator based on the Executors [#504](#504)
+ Support conversions for all LinOp between double and float [#521](#521)
+ Add a new boolean to the CUDA and HIP executors to control DeviceReset (default off) [#557](#557)
+ Add a relaxation factor to IR to represent Richardson Relaxation [#574](#574)
+ Add two new stopping criteria, for relative (to `norm(b)`) and absolute residual norm [#577](#577)

### Example additions
+ Templatize all examples to simplify changing the precision [#513](#513)
+ Add a new adaptive precision block-Jacobi example [#507](#507)
+ Add a new IR example [#522](#522)
+ Add a new Mixed Precision Iterative Refinement example [#525](#525)
+ Add a new example on iterative trisolves in ILU preconditioning [#526](#526), [#536](#536), [#550](#550)

### Compilation and library changes
+ Auto-detect compilation settings based on environment [#435](#435), [#537](#537)
+ Add SONAME to shared libraries [#524](#524)
+ Add clang-cuda support [#543](#543)

### Other additions
+ Add sorting, searching and merging kernels for GPUs [#403](#403), [#428](#428), [#417](#417), [#455](#455)
+ Add `gko::as` support for smart pointers [#493](#493)
+ Add setters and getters for criterion factories [#527](#527)
+ Add a new method to check whether a solver uses `x` as an initial guess [#531](#531)
+ Add contribution guidelines [#549](#549)

# Fixes
### Algorithms
+ Improve the classical CSR strategy's performance [#401](#401)
+ Improve the CSR automatic strategy [#407](#407), [#559](#559)
+ Memory and speed improvements to the ELL kernel [#411](#411)
+ Multiple improvements and fixes to ParILU [#419](#419), [#427](#427), [#429](#429), [#456](#456), [#544](#544)
+ Fix multiple issues with GMRES [#481](#481), [#523](#523), [#575](#575)
+ Optimize OpenMP matrix conversions [#505](#505)
+ Ensure the linearity of the ILU preconditioner [#506](#506)
+ Fix IR's use of the advanced apply [#522](#522)
+ Fix empty matrices conversions and add tests [#560](#560)

### Other core functionalities
+ Fix complex number support in our math header [#410](#410)
+ Fix CUDA compatibility of the main ginkgo header [#450](#450)
+ Fix isfinite issues [#465](#465)
+ Fix the Array::view memory leak and the array/view copy/move [#485](#485)
+ Fix typos preventing use of some interface functions [#496](#496)
+ Fix the `gko::dim` to abide to the C++ standard [#498](#498)
+ Simplify the executor copy interface [#516](#516)
+ Optimize intermediate storage for Composition [#540](#540)
+ Provide an initial guess for relevant Compositions [#561](#561)
+ Better management of nullptr as criterion [#562](#562)
+ Fix the norm calculations for complex support [#564](#564)

### CUDA and HIP specific
+ Use the return value of the atomic operations in our wrappers [#405](#405)
+ Improve the portability of warp lane masks [#422](#422)
+ Extract thread ID computation into a separate function [#464](#464)
+ Reorder kernel parameters for consistency [#474](#474)
+ Fix the use of `pragma unroll` in HIP [#492](#492)

### Other
+ Fix the Ginkgo CMake installation files [#414](#414), [#553](#553)
+ Fix the Windows compilation [#415](#415)
+ Always use demangled types in error messages [#434](#434), [#486](#486)
+ Add CUDA header dependency to appropriate tests [#452](#452)
+ Fix several sonarqube or compilation warnings [#453](#453), [#463](#463), [#532](#532), [#569](#569)
+ Add shuffle tests [#460](#460)
+ Fix MSVC C2398 error [#490](#490)
+ Fix missing interface tests in test install [#558](#558)

# Tools and ecosystem
### Benchmarks
+ Add better norm support in the benchmarks [#377](#377)
+ Add CUDA 10.1 generic SpMV support in benchmarks [#468](#468), [#473](#473)
+ Add sparse library ILU in benchmarks [#487](#487)
+ Add overhead benchmarking capacities [#501](#501)
+ Allow benchmarking from a matrix list file [#503](#503)
+ Fix benchmarking issue with JSON and non-finite numbers [#514](#514)
+ Fix benchmark logger crashes with OpenMP [#565](#565)

### CI related
+ Improvements to the CI setup with HIP compilation [#421](#421), [#466](#466)
+ Add MacOSX CI support [#470](#470), [#488](#488)
+ Add Windows CI support [#471](#471), [#488](#488), [#510](#510), [#566](#566)
+ Use sanitizers instead of valgrind [#476](#476)
+ Add automatic container generation and update facilities [#499](#499)
+ Fix the CI parallelism settings [#517](#517), [#538](#538), [#539](#539)
+ Make the codecov patch check informational [#519](#519)
+ Add support for LLVM sanitizers with improved thread sanitizer support [#578](#578)

### Test suite
+ Add an assertion for sparsity pattern equality [#416](#416)
+ Add core and reference multiprecision tests support [#448](#448)
+ Speed up GPU tests by avoiding device reset [#467](#467)
+ Change test matrix location string [#494](#494)

### Other
+ Add Ginkgo badges from our tools [#413](#413)
+ Update the `create_new_algorithm.sh` script [#420](#420)
+ Bump copyright and improve license management [#436](#436), [#433](#433)
+ Set clang-format minimum requirement [#441](#441), [#484](#484)
+ Update git-cmake-format [#446](#446), [#484](#484)
+ Disable the development tools by default [#442](#442)
+ Add a script for automatic header formatting [#447](#447)
+ Add GDB pretty printer for `gko::Array` [#509](#509)
+ Improve compilation speed [#533](#533)
+ Add editorconfig support [#546](#546)
+ Add a compile-time check for header self-sufficiency [#552](#552)


# Related PR: #583