
Improve ell performance #411

Merged
merged 5 commits into from
Dec 7, 2019

Conversation

yhmtsai
Member

@yhmtsai yhmtsai commented Dec 4, 2019

This PR improves the ELL SpMV performance.

To read memory contiguously, the number of rows per block is made a multiple of warp_size
(warp_size is maybe not the minimal choice).
The block size is set to 512.
In CUDA, the maximum number of threads per row in one thread block is 16 (512 = 32 × 16).
On AMD, the maximum number of threads per row in one thread block is 8 (512 = 64 × 8).

I do not do a warp reduction. I use shared memory to accumulate the results from different threads with atomicAdd on the same row, because the threads working on the same row are not in the same warp.
I also tried a non-square transpose followed by a warp reduction; its performance is similar to this PR's on ND/nd12.

I tested on 30 matrices. I sorted the speedup of hipsp_ell over ell and took 10 from the head, 10 from the tail, and 10 from the middle, i.e.:

  1. the 10 matrices where hipsp_ell has the largest speedup over the original ell
  2. the 10 matrices where the original ell has the largest speedup over hipsp_ell
  3. 10 matrices picked from the middle to check the implementation in general
  • this PR's ell vs. the original ell [plot: improvement]
  • original speedup of ell vs. hipsp_ell [plot: original_speedup]
  • this PR's speedup of ell vs. hipsp_ell [plot: thispr_speedup]

Note: I still need to run this PR on the whole dataset.

@yhmtsai yhmtsai added is:enhancement An improvement of an existing feature. mod:cuda This is related to the CUDA module. type:matrix-format This is related to the Matrix formats mod:hip This is related to the HIP module. labels Dec 4, 2019
@yhmtsai yhmtsai self-assigned this Dec 4, 2019
Member

@upsj upsj left a comment

LGTM!

    const auto x =
        static_cast<size_type>(blockDim.x) * blockIdx.x + threadIdx.x;
    if (x < num_rows) {
        const auto tile_block =
            group::tiled_partition<subwarp_size>(group::this_thread_block());
Member

nit:

Suggested change:

    - static_cast<size_type>(blockDim.x) * blockIdx.x + threadIdx.x;
    + size_type(blockDim.x) * blockIdx.x + threadIdx.x;

Member

I would actually prefer static_cast here, since it is in fact a cast.
Usually, I would use size_type(12) with actual numbers (to me, this is using a constructor vs. casting).

Since I don't think we have any guidelines for it, feel free to use what you want, @yhmtsai; both options should result in exactly the same behavior in this case.

Member

I would agree that this is a matter of taste. My reasons for proposing all these changes are

  1. they are semantically equivalent (except for composite type names like unsigned int, they can be used interchangeably) and
  2. the "constructor" solution is shorter.

I would, however, not agree that there is a difference between construction and conversion (i.e., casting) for primitive types. Maybe we can discuss this question in general and possibly find a common guideline before I start spamming all pull requests with these change suggestions ;)

Member

For primitive types, you are correct, there is no difference. However, for more complex types, it can make a difference.
I just like to have an explicit cast keyword, so everyone knows you are casting here.

I think adding either solution to the guidelines might be a bit too detailed, since the guideline page is already quite long. But we can discuss this tomorrow.

Member

@tcojean tcojean Dec 4, 2019

For more complex types, I'm not so sure there is even a difference. See Section 5.2.3 and the beginning of Section 5.4:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2011/n3242.pdf

If I understand all of this properly, this notation would even perform a dynamic_cast/reinterpret_cast transparently? I never tested this, but Section 5.2.3 refers to Section 5.4, which covers all casting operations.
We are going into the details of C++ here, though.

Member

Also, a minor detail: in C++ standard jargon, this type of cast is called "functional notation" rather than "constructor notation".

Member

@thoasm thoasm left a comment

I have one question, otherwise, it looks good.

hip/matrix/ell_kernels.hip.cpp (outdated, resolved)
Member

@tcojean tcojean left a comment

LGTM!

Member

@pratikvn pratikvn left a comment

LGTM!

Member

@thoasm thoasm left a comment

LGTM!

@yhmtsai
Member Author

yhmtsai commented Dec 5, 2019

For V100:
V100_ell_upd: this is the current setting (the number of threads per row in one block is <= 32)
V100_ell_fix: the number of threads per row in one block is <= 16

x is the max number of stored elements per row / #rows

  • vs. the original ell [plot: v100_ell]
  • V100_ell_32 vs. cusparse [plot: V100_ell_32]
  • V100_ell_fix vs. cusparse [plot: V100_ell_fix]

I chose 32 as the maximum number of threads per worker.

For the Radeon VII, I still need to do the full benchmark.
I tried different limits for the maximum number of threads per worker in one block: 8, 16, 32, 64.
There is a trade-off among global memory coalescing, the amount of work per thread, atomic collisions, etc.
In this test, I chose 32 as the maximum number of threads per row in one block, which is the same as CUDA's.
Please ignore the values where x is smaller than 1e-2; they should be similar.
I set the ratio to 0 to see the performance with these parameters below 1e-2; the ratio 1e-2 still seems to be a good choice.
[plot: ell_sel]

Also, I tried __builtin_nontemporal_load/__builtin_nontemporal_store, but the performance looks similar.
[plot: nont]

@yhmtsai
Member Author

yhmtsai commented Dec 6, 2019

It uses max_thread_per_worker = 32 on CUDA/AMD and specializes the kernel for num_thread_per_worker = 1, without __builtin_nontemporal_load/__builtin_nontemporal_store.

x is max_nnz_per_row/nrows

Radeon VII
I deleted the plot for max_nnz_per_row/nrows <= 1e-2 and max_nnz_per_row < 2, which uses num_thread_per_worker = 1, because that kernel was wrong during benchmarking. Fixed.

  • ell_all_upd: max_thread_per_worker = 32 with the num_thread_per_worker = 1 specialization
  • ell_all_fix: max_thread_per_worker = 8
  • radeonVII: original
  1. ell_all_upd, ell_all_fix, radeonVII ell [plot: r7_ell_vs_original]
  2. ell_all_upd, ell_all_fix [plot: r7_ell_vs_8]
  3. ell_all_upd, ell_all_fix, radeonVII hipsp_ell [plot: r7_ell_vs_hipsp]

V100
This version is correct.

  • V100_ell_upd: max_thread_per_worker = 32 with the num_thread_per_worker = 1 specialization
  • V100_ell_fix: max_thread_per_worker = 16
  • V100: original
  1. V100_ell_upd, V100_ell_fix, V100 ell [plot: V100_ell_cs_original]
  2. V100_ell_upd, V100_ell_fix [plot: V100_vs_16]
  3. V100_ell_upd, V100_ell_fix, V100 cusp_ell [plot: V100_ell_vs_cusp]

I changed some kernel code and variable names, so I re-requested the reviews.

Member

@thoasm thoasm left a comment

LGTM!

@yhmtsai yhmtsai added the 1:ST:ready-to-merge This PR is ready to merge. label Dec 6, 2019
@yhmtsai yhmtsai merged commit b4262c9 into develop Dec 7, 2019
@tcojean tcojean deleted the improve_ell branch January 5, 2020 09:33
@tcojean tcojean mentioned this pull request Jun 23, 2020
tcojean added a commit that referenced this pull request Jul 7, 2020
The Ginkgo team is proud to announce the new minor release of Ginkgo version
1.2.0. This release brings full HIP support to Ginkgo, new preconditioners
(ParILUT, ISAI), conversion between double and float for all LinOps, and many
more features and fixes.

Supported systems and requirements:
+ For all platforms, cmake 3.9+
+ Linux and MacOS
  + gcc: 5.3+, 6.3+, 7.3+, all versions after 8.1+
  + clang: 3.9+
  + Intel compiler: 2017+
  + Apple LLVM: 8.0+
  + CUDA module: CUDA 9.0+
  + HIP module: ROCm 2.8+
+ Windows
  + MinGW and CygWin: gcc 5.3+, 6.3+, 7.3+, all versions after 8.1+
  + Microsoft Visual Studio: VS 2017 15.7+
  + CUDA module: CUDA 9.0+, Microsoft Visual Studio
  + OpenMP module: MinGW or CygWin.


The current known issues can be found in the [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues).


# Additions
Here are the main additions to the Ginkgo library. Other thematic additions are listed below.
+ Add full HIP support to Ginkgo [#344](#344), [#357](#357), [#384](#384), [#373](#373), [#391](#391), [#396](#396), [#395](#395), [#393](#393), [#404](#404), [#439](#439), [#443](#443), [#567](#567)
+ Add a new ISAI preconditioner [#489](#489), [#502](#502), [#512](#512), [#508](#508), [#520](#520)
+ Add support for ParILUT and ParICT factorization with ILU preconditioners [#400](#400)
+ Add a new BiCG solver [#438](#438)
+ Add a new permutation matrix format [#352](#352), [#469](#469)
+ Add CSR SpGEMM support [#386](#386), [#398](#398), [#418](#418), [#457](#457)
+ Add CSR SpGEAM support [#556](#556)
+ Make all solvers and preconditioners transposable [#535](#535)
+ Add CsrBuilder and CooBuilder for intrusive access to matrix arrays [#437](#437)
+ Add a standard-compliant allocator based on the Executors [#504](#504)
+ Support conversions for all LinOp between double and float [#521](#521)
+ Add a new boolean to the CUDA and HIP executors to control DeviceReset (default off) [#557](#557)
+ Add a relaxation factor to IR to represent Richardson Relaxation [#574](#574)
+ Add two new stopping criteria, for relative (to `norm(b)`) and absolute residual norm [#577](#577)

### Example additions
+ Templatize all examples to simplify changing the precision [#513](#513)
+ Add a new adaptive precision block-Jacobi example [#507](#507)
+ Add a new IR example [#522](#522)
+ Add a new Mixed Precision Iterative Refinement example [#525](#525)
+ Add a new example on iterative trisolves in ILU preconditioning [#526](#526), [#536](#536), [#550](#550)

### Compilation and library changes
+ Auto-detect compilation settings based on environment [#435](#435), [#537](#537)
+ Add SONAME to shared libraries [#524](#524)
+ Add clang-cuda support [#543](#543)

### Other additions
+ Add sorting, searching and merging kernels for GPUs [#403](#403), [#428](#428), [#417](#417), [#455](#455)
+ Add `gko::as` support for smart pointers [#493](#493)
+ Add setters and getters for criterion factories [#527](#527)
+ Add a new method to check whether a solver uses `x` as an initial guess [#531](#531)
+ Add contribution guidelines [#549](#549)

# Fixes
### Algorithms
+ Improve the classical CSR strategy's performance [#401](#401)
+ Improve the CSR automatical strategy [#407](#407), [#559](#559)
+ Memory, speed improvements to the ELL kernel [#411](#411)
+ Multiple improvements and fixes to ParILU [#419](#419), [#427](#427), [#429](#429), [#456](#456), [#544](#544)
+ Fix multiple issues with GMRES [#481](#481), [#523](#523), [#575](#575)
+ Optimize OpenMP matrix conversions [#505](#505)
+ Ensure the linearity of the ILU preconditioner [#506](#506)
+ Fix IR's use of the advanced apply [#522](#522)
+ Fix empty matrices conversions and add tests [#560](#560)

### Other core functionalities
+ Fix complex number support in our math header [#410](#410)
+ Fix CUDA compatibility of the main ginkgo header [#450](#450)
+ Fix isfinite issues [#465](#465)
+ Fix the Array::view memory leak and the array/view copy/move [#485](#485)
+ Fix typos preventing use of some interface functions [#496](#496)
+ Fix the `gko::dim` to abide to the C++ standard [#498](#498)
+ Simplify the executor copy interface [#516](#516)
+ Optimize intermediate storage for Composition [#540](#540)
+ Provide an initial guess for relevant Compositions [#561](#561)
+ Better management of nullptr as criterion [#562](#562)
+ Fix the norm calculations for complex support [#564](#564)

### CUDA and HIP specific
+ Use the return value of the atomic operations in our wrappers [#405](#405)
+ Improve the portability of warp lane masks [#422](#422)
+ Extract thread ID computation into a separate function [#464](#464)
+ Reorder kernel parameters for consistency [#474](#474)
+ Fix the use of `pragma unroll` in HIP [#492](#492)

### Other
+ Fix the Ginkgo CMake installation files [#414](#414), [#553](#553)
+ Fix the Windows compilation [#415](#415)
+ Always use demangled types in error messages [#434](#434), [#486](#486)
+ Add CUDA header dependency to appropriate tests [#452](#452)
+ Fix several sonarqube or compilation warnings [#453](#453), [#463](#463), [#532](#532), [#569](#569)
+ Add shuffle tests [#460](#460)
+ Fix MSVC C2398 error [#490](#490)
+ Fix missing interface tests in test install [#558](#558)

# Tools and ecosystem
### Benchmarks
+ Add better norm support in the benchmarks [#377](#377)
+ Add CUDA 10.1 generic SpMV support in benchmarks [#468](#468), [#473](#473)
+ Add sparse library ILU in benchmarks [#487](#487)
+ Add overhead benchmarking capacities [#501](#501)
+ Allow benchmarking from a matrix list file [#503](#503)
+ Fix benchmarking issue with JSON and non-finite numbers [#514](#514)
+ Fix benchmark logger crashers with OpenMP [#565](#565)

### CI related
+ Improvements to the CI setup with HIP compilation [#421](#421), [#466](#466)
+ Add MacOSX CI support [#470](#470), [#488](#488)
+ Add Windows CI support [#471](#471), [#488](#488), [#510](#510), [#566](#566)
+ Use sanitizers instead of valgrind [#476](#476)
+ Add automatic container generation and update facilities [#499](#499)
+ Fix the CI parallelism settings [#517](#517), [#538](#538), [#539](#539)
+ Make the codecov patch check informational [#519](#519)
+ Add support for LLVM sanitizers with improved thread sanitizer support [#578](#578)

### Test suite
+ Add an assertion for sparsity pattern equality [#416](#416)
+ Add core and reference multiprecision tests support [#448](#448)
+ Speed up GPU tests by avoiding device reset [#467](#467)
+ Change test matrix location string [#494](#494)

### Other
+ Add Ginkgo badges from our tools [#413](#413)
+ Update the `create_new_algorithm.sh` script [#420](#420)
+ Bump copyright and improve license management [#436](#436), [#433](#433)
+ Set clang-format minimum requirement [#441](#441), [#484](#484)
+ Update git-cmake-format [#446](#446), [#484](#484)
+ Disable the development tools by default [#442](#442)
+ Add a script for automatic header formatting [#447](#447)
+ Add GDB pretty printer for `gko::Array` [#509](#509)
+ Improve compilation speed [#533](#533)
+ Add editorconfig support [#546](#546)
+ Add a compile-time check for header self-sufficiency [#552](#552)


# Related PR: #583
Labels
1:ST:ready-to-merge This PR is ready to merge. is:enhancement An improvement of an existing feature. mod:cuda This is related to the CUDA module. mod:hip This is related to the HIP module. type:matrix-format This is related to the Matrix formats
6 participants