Add sparselib ILU for benchmarks #487
Conversation
Force-pushed from b5544d0 to 905a089 (Compare)
Codecov Report
@@ Coverage Diff @@
## develop #487 +/- ##
===========================================
- Coverage 88.77% 88.46% -0.31%
===========================================
Files 256 262 +6
Lines 16563 16645 +82
===========================================
+ Hits 14703 14725 +22
- Misses 1860 1920 +60
Continue to review full report at Codecov.
Do you want to merge this before #400 or after?

I would probably merge it before, since it is much easier to review.
LGTM!
LGTM. I would like to check something.
LGTM!
I have some small comments, nothing major.
LGTM.
There is one issue I want to bring up, although I don't have a proper solution. Looking at the code for ILU generation in general, there is so much Ginkgo-specific work that the important parts are completely buried under allocations etc., which I think would create an unfair/irrelevant situation if we benchmark the generate time as the factorization time.
    parameters_.u_strategy =
        std::make_shared<typename matrix_type::classical>();
}
generate_l_u(system_matrix)->move_to(this);
Maybe you should put a comment somewhere that this will become the result of generate_l_u. All of this is a bit confusing until you see that line.
(I understand it's the same for ParIlu).
return Composition<ValueType>::create(std::move(l_factor),
                                      std::move(u_factor));
}
Here I'm a bit dubious in terms of benchmark quality. While most of that is required by the Ginkgo-specific interface, if we are to benchmark the vendor libraries' ILU factorization, then pretty much all of the relevant time would be contained in compute_ilu, no? It's the same in par_ilut, where compute_l_u_factors is the important function AFAIK. All of these allocations and so on to wrap the data into CSR matrices that we then put into a Composition are purely Ginkgo-specific. Therefore the generate time would have little relevance when benchmarking.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
The generate time would have little relevance, yes. But we can always just report the runtime of the compute_ilu operation (and I should note that the overall runtime is dominated by that and by the triangular solve analysis phases).
Yes, of course, thanks to the loggers, which report the time of the separate operations. I think at the minimum we should have a prominent warning somewhere that the correct "generate" or factorization time should be taken from the specific compute kernels, and not from the global generate time. That is different from, say, Jacobi, where the generate time is a rather accurate representation of the actual factorization time. Here, because we want to separate the L and U factors into Ginkgo CSR matrices, among other interface and quality-check concerns, we can have a significant overhead that blurs the reality of the results.
I will add some performance numbers before we merge this PR. As far as I remember, the cost of the separation into L and U was really negligible.
The actual kernels (factorization::*) look to be around 5% of the total runtime; some of the allocations and the copy we would need to do anyway. The only thing that kind of surprises me is the total alloc/free overhead, but I am not sure how much we can do about that.
EDIT: I ran everything on the K20Xm, Radeon 7 and V100.
EDIT2: Removed old plots, since I copy-pasted the wrong data there
Doesn't this actually just show that the hipSPARSE ILU generation is faster than the cuSPARSE one? Also, your compute_l_u is faster on the AMD GPUs. Maybe the applies are different? As you say, the apply on the K20Xm is much slower than on the V100. Is it similar on the Radeon 7?
The apply runtimes on the V100 and Radeon 7 are comparable, around 4 ms. The astonishing thing is that apparently the analysis phases of ILU(0) and the triangular solves did not get any faster with at least 5 years of hardware development. (The compute_lu kernel actually contains two phases: the analysis and the ILU solve. With the current setup, it is however not possible to measure their runtimes separately. Separating them into two kernels would be complicated, as it would require something similar to the SolveStruct.)
The compute_lu is a ParILU, isn't it? So by nature, isn't that faster than the level-scheduled ILU(0) analysis and trisolve phases?
No, this is still only the sparselib ILU; there is no ParILU in these plots.
Another interesting thing: for ani4, compute_lu (and everything else) seems to have shrunk quite a bit from the K20 to the V100. Weird that ani7 doesn't behave like that.
LGTM in general.
Is the purpose of factorization_kernels to contain all kernels shared between the factorizations?
I think we will have SVD/eigenvalue or other factorizations at some point.
Should we put their shared kernels in factorization_kernels as well?
    IndexType)                                                          \
    void add_diagonal_elements(std::shared_ptr<const DefaultExecutor> exec, \
                               matrix::Csr<ValueType, IndexType> *mtx,  \
                               bool is_sorted)
    void initialize_row_ptrs_l_u(                                       \
        std::shared_ptr<const DefaultExecutor> exec,                    \
        const matrix::Csr<ValueType, IndexType> *system_matrix,         \
        IndexType *l_row_ptrs, IndexType *u_row_ptrs)
@yhmtsai The main point of
* Move common factorization code to factorization_kernels
* Fix typos
* Add missing whitespace

Co-authored-by: Yuhsiang M. Tsai <yhmtsai@gmail.com>
Co-authored-by: Pratik Nayak <pratikvn@protonmail.com>
Kudos, SonarCloud Quality Gate passed! 0 Bugs
The Ginkgo team is proud to announce the new minor release of Ginkgo version 1.2.0. This release brings full HIP support to Ginkgo, new preconditioners (ParILUT, ISAI), conversion between double and float for all LinOps, and many more features and fixes.

Supported systems and requirements:
+ For all platforms, cmake 3.9+
+ Linux and MacOS
  + gcc: 5.3+, 6.3+, 7.3+, all versions after 8.1+
  + clang: 3.9+
  + Intel compiler: 2017+
  + Apple LLVM: 8.0+
  + CUDA module: CUDA 9.0+
  + HIP module: ROCm 2.8+
+ Windows
  + MinGW and CygWin: gcc 5.3+, 6.3+, 7.3+, all versions after 8.1+
  + Microsoft Visual Studio: VS 2017 15.7+
  + CUDA module: CUDA 9.0+, Microsoft Visual Studio
  + OpenMP module: MinGW or CygWin

The current known issues can be found in the [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues).

# Additions
Here are the main additions to the Ginkgo library. Other thematic additions are listed below.
+ Add full HIP support to Ginkgo [#344](#344), [#357](#357), [#384](#384), [#373](#373), [#391](#391), [#396](#396), [#395](#395), [#393](#393), [#404](#404), [#439](#439), [#443](#443), [#567](#567)
+ Add a new ISAI preconditioner [#489](#489), [#502](#502), [#512](#512), [#508](#508), [#520](#520)
+ Add support for ParILUT and ParICT factorization with ILU preconditioners [#400](#400)
+ Add a new BiCG solver [#438](#438)
+ Add a new permutation matrix format [#352](#352), [#469](#469)
+ Add CSR SpGEMM support [#386](#386), [#398](#398), [#418](#418), [#457](#457)
+ Add CSR SpGEAM support [#556](#556)
+ Make all solvers and preconditioners transposable [#535](#535)
+ Add CsrBuilder and CooBuilder for intrusive access to matrix arrays [#437](#437)
+ Add a standard-compliant allocator based on the Executors [#504](#504)
+ Support conversions for all LinOp between double and float [#521](#521)
+ Add a new boolean to the CUDA and HIP executors to control DeviceReset (default off) [#557](#557)
+ Add a relaxation factor to IR to represent Richardson Relaxation [#574](#574)
+ Add two new stopping criteria, for relative (to `norm(b)`) and absolute residual norm [#577](#577)

### Example additions
+ Templatize all examples to simplify changing the precision [#513](#513)
+ Add a new adaptive precision block-Jacobi example [#507](#507)
+ Add a new IR example [#522](#522)
+ Add a new Mixed Precision Iterative Refinement example [#525](#525)
+ Add a new example on iterative trisolves in ILU preconditioning [#526](#526), [#536](#536), [#550](#550)

### Compilation and library changes
+ Auto-detect compilation settings based on environment [#435](#435), [#537](#537)
+ Add SONAME to shared libraries [#524](#524)
+ Add clang-cuda support [#543](#543)

### Other additions
+ Add sorting, searching and merging kernels for GPUs [#403](#403), [#428](#428), [#417](#417), [#455](#455)
+ Add `gko::as` support for smart pointers [#493](#493)
+ Add setters and getters for criterion factories [#527](#527)
+ Add a new method to check whether a solver uses `x` as an initial guess [#531](#531)
+ Add contribution guidelines [#549](#549)

# Fixes
### Algorithms
+ Improve the classical CSR strategy's performance [#401](#401)
+ Improve the CSR automatical strategy [#407](#407), [#559](#559)
+ Memory, speed improvements to the ELL kernel [#411](#411)
+ Multiple improvements and fixes to ParILU [#419](#419), [#427](#427), [#429](#429), [#456](#456), [#544](#544)
+ Fix multiple issues with GMRES [#481](#481), [#523](#523), [#575](#575)
+ Optimize OpenMP matrix conversions [#505](#505)
+ Ensure the linearity of the ILU preconditioner [#506](#506)
+ Fix IR's use of the advanced apply [#522](#522)
+ Fix empty matrices conversions and add tests [#560](#560)

### Other core functionalities
+ Fix complex number support in our math header [#410](#410)
+ Fix CUDA compatibility of the main ginkgo header [#450](#450)
+ Fix isfinite issues [#465](#465)
+ Fix the Array::view memory leak and the array/view copy/move [#485](#485)
+ Fix typos preventing use of some interface functions [#496](#496)
+ Fix the `gko::dim` to abide to the C++ standard [#498](#498)
+ Simplify the executor copy interface [#516](#516)
+ Optimize intermediate storage for Composition [#540](#540)
+ Provide an initial guess for relevant Compositions [#561](#561)
+ Better management of nullptr as criterion [#562](#562)
+ Fix the norm calculations for complex support [#564](#564)

### CUDA and HIP specific
+ Use the return value of the atomic operations in our wrappers [#405](#405)
+ Improve the portability of warp lane masks [#422](#422)
+ Extract thread ID computation into a separate function [#464](#464)
+ Reorder kernel parameters for consistency [#474](#474)
+ Fix the use of `pragma unroll` in HIP [#492](#492)

### Other
+ Fix the Ginkgo CMake installation files [#414](#414), [#553](#553)
+ Fix the Windows compilation [#415](#415)
+ Always use demangled types in error messages [#434](#434), [#486](#486)
+ Add CUDA header dependency to appropriate tests [#452](#452)
+ Fix several sonarqube or compilation warnings [#453](#453), [#463](#463), [#532](#532), [#569](#569)
+ Add shuffle tests [#460](#460)
+ Fix MSVC C2398 error [#490](#490)
+ Fix missing interface tests in test install [#558](#558)

# Tools and ecosystem
### Benchmarks
+ Add better norm support in the benchmarks [#377](#377)
+ Add CUDA 10.1 generic SpMV support in benchmarks [#468](#468), [#473](#473)
+ Add sparse library ILU in benchmarks [#487](#487)
+ Add overhead benchmarking capacities [#501](#501)
+ Allow benchmarking from a matrix list file [#503](#503)
+ Fix benchmarking issue with JSON and non-finite numbers [#514](#514)
+ Fix benchmark logger crashers with OpenMP [#565](#565)

### CI related
+ Improvements to the CI setup with HIP compilation [#421](#421), [#466](#466)
+ Add MacOSX CI support [#470](#470), [#488](#488)
+ Add Windows CI support [#471](#471), [#488](#488), [#510](#510), [#566](#566)
+ Use sanitizers instead of valgrind [#476](#476)
+ Add automatic container generation and update facilities [#499](#499)
+ Fix the CI parallelism settings [#517](#517), [#538](#538), [#539](#539)
+ Make the codecov patch check informational [#519](#519)
+ Add support for LLVM sanitizers with improved thread sanitizer support [#578](#578)

### Test suite
+ Add an assertion for sparsity pattern equality [#416](#416)
+ Add core and reference multiprecision tests support [#448](#448)
+ Speed up GPU tests by avoiding device reset [#467](#467)
+ Change test matrix location string [#494](#494)

### Other
+ Add Ginkgo badges from our tools [#413](#413)
+ Update the `create_new_algorithm.sh` script [#420](#420)
+ Bump copyright and improve license management [#436](#436), [#433](#433)
+ Set clang-format minimum requirement [#441](#441), [#484](#484)
+ Update git-cmake-format [#446](#446), [#484](#484)
+ Disable the development tools by default [#442](#442)
+ Add a script for automatic header formatting [#447](#447)
+ Add GDB pretty printer for `gko::Array` [#509](#509)
+ Improve compilation speed [#533](#533)
+ Add editorconfig support [#546](#546)
+ Add a compile-time check for header self-sufficiency [#552](#552)

Related PR: #583
This PR adds a LinOp for cuSPARSE/hipSPARSE ILU factorizations and the new ILU preconditioner interface described in #472 for integrating it into the benchmarks.