
Isolate the device counter to fix the device_reset issue #810

Merged
merged 11 commits into from
Aug 2, 2021

Conversation


@yhmtsai yhmtsai commented Jul 1, 2021

This PR isolates the device counter so that each Executor uses the counter of its corresponding device.
If HIP is compiled for NVIDIA, HipExecutor and CudaExecutor use the same counter from NvidiaDevice.
Otherwise, HipExecutor uses AmdDevice.
Also, the mutex from Device is recursive, so we can lock the same mutex in the destructor and stay thread-safe.

The first commit (https://gitlab.com/ginkgo-project/ginkgo-public-ci/-/jobs/1332820514) checks that the test fails without the fix.

The nm result (after the second commit):

lib/libginkgo_device.a:device.cpp.o:0000000000000a00 D _ZN3gko12NvidiaDevice5mutexE
lib/libginkgo_device.a:device.cpp.o:0000000000000100 B _ZN3gko12NvidiaDevice9num_execsE
lib/libginkgo_hip.a:ginkgo_hip_generated_executor.hip.cpp.o:                 U _ZN3gko12NvidiaDevice5mutexE
lib/libginkgo_hip.a:ginkgo_hip_generated_executor.hip.cpp.o:                 U _ZN3gko12NvidiaDevice9num_execsE
lib/libginkgo_hip.a:executor.cpp.o:                 U _ZN3gko12NvidiaDevice5mutexE
lib/libginkgo_hip.a:executor.cpp.o:                 U _ZN3gko12NvidiaDevice9num_execsE
lib/libginkgo_cuda.a:executor.cpp.o:                 U _ZN3gko12NvidiaDevice5mutexE
lib/libginkgo_cuda.a:executor.cpp.o:                 U _ZN3gko12NvidiaDevice9num_execsE

MSVC is not happy about linking to definitions of static data members from a DLL.
It cannot resolve the data when compiling other targets against ginkgo_device (LNK2019).
It requires __declspec(dllexport) and __declspec(dllimport) on the class (or member).
Moreover, if the class contains an STL object, MSVC reports warning C4251; in this case, it reports the warning on the recursive_mutex member.
Thus, we use a static getter and declare the static variable inside the function to avoid the issue/warning.
Some references:
https://blog.kitware.com/create-dlls-on-windows-without-declspec-using-new-cmake-export-all-feature/
https://stackoverflow.com/questions/16419318/one-way-of-eliminating-c4251-warning-when-using-stl-classes-in-the-dll-interface

@yhmtsai yhmtsai added the 1:ST:ready-for-review This PR is ready for review label Jul 1, 2021
@yhmtsai yhmtsai self-assigned this Jul 1, 2021
@ginkgo-bot ginkgo-bot added mod:all This touches all Ginkgo modules. reg:build This is related to the build system. reg:ci-cd This is related to the continuous integration system. reg:testing This is related to testing. labels Jul 1, 2021

codecov bot commented Jul 1, 2021

Codecov Report

Merging #810 (78d0c00) into develop (404a316) will decrease coverage by 1.20%.
The diff coverage is 20.00%.

❗ Current head 78d0c00 differs from pull request most recent head c22fbb3. Consider uploading reports for the commit c22fbb3 to get more accurate results

@@             Coverage Diff             @@
##           develop     #810      +/-   ##
===========================================
- Coverage    94.58%   93.38%   -1.21%     
===========================================
  Files          410      411       +1     
  Lines        33097    33123      +26     
===========================================
- Hits         31306    30931     -375     
- Misses        1791     2192     +401     
Impacted Files Coverage Δ
devices/device.cpp 0.00% <0.00%> (ø)
devices/cuda/executor.cpp 53.84% <20.00%> (-21.16%) ⬇️
devices/hip/executor.cpp 50.00% <20.00%> (-21.43%) ⬇️
include/ginkgo/core/base/executor.hpp 77.84% <100.00%> (-1.64%) ⬇️
omp/factorization/par_ilut_kernels.cpp 0.00% <0.00%> (-100.00%) ⬇️
omp/test/factorization/par_ilut_kernels.cpp 0.00% <0.00%> (-98.91%) ⬇️
core/test/utils/assertions.hpp 71.49% <0.00%> (-1.36%) ⬇️
reference/factorization/par_ilut_kernels.cpp 99.50% <0.00%> (-0.50%) ⬇️
core/test/utils/value_generator.hpp 100.00% <0.00%> (ø)
... and 1 more

Continue to review full report at Codecov.

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 404a316...c22fbb3. Read the comment docs.

@yhmtsai yhmtsai force-pushed the device_reset_issue branch 3 times, most recently from 6ae9fd3 to a16602c Compare July 1, 2021 19:57
@yhmtsai yhmtsai force-pushed the device_reset_issue branch 3 times, most recently from cb4d480 to ad7746d Compare July 9, 2021 13:09
@upsj upsj left a comment

LGTM! We do need to seriously reevaluate how we deal with symbols, isolation and linking in general (see #715) though, and I expressed my dislike for the device_reset feature before already, since it introduces global state mainly for testing convenience.

@tcojean tcojean left a comment

LGTM. I have some small design comments.

include/ginkgo/core/base/executor.hpp
include/ginkgo/core/base/device.hpp
include/ginkgo/core/base/device.hpp
link thread in test

Co-authored-by: Terry Cojean <terry.cojean@kit.edu>
@yhmtsai yhmtsai added 1:ST:ready-to-merge This PR is ready to merge. and removed 1:ST:ready-for-review This PR is ready for review 1:ST:run-full-test labels Jul 29, 2021
@tcojean tcojean left a comment

LGTM

cuda/base/executor.cpp
hip/base/executor.hip.cpp
it makes us keep mutex not recursive_mutex

Co-authored-by: Terry Cojean <terry.cojean@kit.edu>
Comment on lines 1458 to 1462
#ifdef GINKGO_BUILD_CUDA
// increase the Cuda Device count only when ginkgo build cuda
std::lock_guard<std::mutex> guard(device_class::get_mutex(device_id));
device_class::get_num_execs(device_id)++;
#endif // GINKGO_BUILD_CUDA
@upsj upsj Jul 30, 2021

This strongly links the public interface to the implementation, which might break our assumptions about being able to just swap out shared library stubs for actual implementations. Can we move this function implementation into core?

Member Author

I see, but is the device_class choice okay?

Member

Can you elaborate on that? In the current setting, we may violate the ODR rule by providing different implementations for the function in a Ginkgo build with or without CUDA. It would be great if we could keep the public interface entirely agnostic of which backends we built.

Member Author

I use GINKGO_HIP_PLATFORM_NVCC to choose the device_class.
Does that violate the rule?

Member

That's not necessarily an ODR issue, since it is only related to friend declarations. But still, it would be great if we could have a mostly consistent interface across compilers and configurations.

Member Author

It would also lead to the issue when we swap the shared_memory, right?

Member

You mean the unified memory parameter for executors?

Member Author

Sorry, I mixed up two pull requests...
It would also lead to the issue when we swap the shared library, right?

@upsj upsj Jul 30, 2021

I looked in the wrong place. Yes, that might also cause issues, but it does not strictly violate the ODR rule, since it's only an internal type alias. Moving the implementation to core fixes half of it, but then you can't safely swap HIP-NVCC for HIP-ROCm.

Member Author

@upsj I moved all device_class-related functions to the cpp file, and NvidiaDevice now always has friend class Cuda/Hip while AmdDevice has Hip. The device_class is only selected in the cpp, not in the class.

include/ginkgo/config.hpp.in
include/ginkgo/config.hpp.in
Co-authored-by: Tobias Ribizel <ribizel@kit.edu>
@upsj upsj left a comment

LGTM!

Comment on lines 70 to 77
/* Should we compile cuda kernels for Ginkgo? */
#cmakedefine GINKGO_BUILD_CUDA


/* Should we compile hip kernels for Ginkgo? */
#cmakedefine GINKGO_BUILD_HIP


Member

Can this be removed now?

Member

I think these are still used?

Member

But not in the public interface, which is where I wanted to avoid configuration-dependent flags as much as possible. So they could be moved to defines at the compilation level.

Member Author

Ah, this part still affects the header.
I still use the flag in devices/cuda/executor and devices/hip/executor.
Do you have any ideas?
The only one I can think of is to use compile options to define this when compiling the cpp files that require the flag.

Member

We already have GKO_COMPILING_CUDA/HIP/OMP, which is used to select backends for the common kernels and is not defined in hooks.

Member Author

I see.
ginkgo_cuda built from cuda has GKO_COMPILING_CUDA, but built from the hook it does not have the definition.
ginkgo_cuda_device can use GINKGO_BUILD_CUDA to decide whether to define GKO_COMPILING_CUDA.
I am not sure whether that is good, because CMake would then contain two places that set GKO_COMPILING_CUDA (same name but different paths), so maybe I will use a different name first.
We can discuss it in more detail later.
For me, switching config.hpp may still be a good way, because users can explicitly see which flags are on or off.

Member

What is the issue with having these flags? What matters is more how we use them than whether they are present here, no? I think users could make use of such flags as well. Sure, mixing and matching Ginkgo headers and libraries will give awkward results for the user, but that's expected, and it will not break Ginkgo itself?

Member

Yes, they are okay for now. Maybe I should elaborate a bit more on my thoughts about this in one of the next meetings, basically I am trying to make the interface as consistent and independent of the backends as possible.

Member Author

For my understanding, swapping the library can act as a behavior selector?
When we use the libginkgo_cuda generated from cuda, we get the cuda functions.
When we use the libginkgo_cuda generated from the hook, we do not get the cuda functions, but everything else can use the same .so as before without any issue.
I will use other compile flags in devices first and tend to merge it, because they will live in CMake and cpp files, not the public interface, so we have more flexibility to change them later.

@pratikvn pratikvn left a comment

Thanks for adding the additional tests. LGTM!

#ifdef GINKGO_BUILD_HIP
// increase the HIP Device count only when ginkgo build hip
std::lock_guard<std::mutex> guard(hip_device_class::get_mutex(device_id));
hip_device_class::get_num_execs(device_id)++;
Member

Just curious: why snake_case here but CamelCase for NvidiaDevice and AmdDevice?

Member Author

NvidiaDevice and AmdDevice are CamelCase because they are classes.
I was thinking of hip_device_class as being like using value_type = ValueType, so I made it snake_case.
Is there a rule we can follow?

Member

Technically, the policy is the following: https://github.com/ginkgo-project/ginkgo/wiki/Contributing-guidelines#structures-and-classes

i.e., snake_case for all nonpolymorphic classes, CamelCase for polymorphic behavior. I don't think we are very consistent on that topic though.


@tcojean tcojean left a comment

LGTM.


Co-authored-by: Terry Cojean <terry.cojean@kit.edu>
Co-authored-by: Tobias Ribizel <ribizel@kit.edu>

sonarcloud bot commented Aug 2, 2021

Kudos, SonarCloud Quality Gate passed!

Bug A 0 Bugs
Vulnerability A 0 Vulnerabilities
Security Hotspot A 0 Security Hotspots
Code Smell A 10 Code Smells

18.2% Coverage
2.2% Duplication

@yhmtsai yhmtsai merged commit 2bfa1ef into develop Aug 2, 2021
@yhmtsai yhmtsai deleted the device_reset_issue branch August 2, 2021 23:29
tcojean added a commit that referenced this pull request Aug 20, 2021
Ginkgo release 1.4.0

The Ginkgo team is proud to announce the new Ginkgo minor release 1.4.0. This
release brings most of the Ginkgo functionality to the Intel DPC++ ecosystem
which enables Intel-GPU and CPU execution. The only Ginkgo features which have
not been ported yet are some preconditioners.

Ginkgo's mixed-precision support is greatly enhanced thanks to:
1. The new Accessor concept, which allows writing kernels featuring on-the-fly
memory compression, among other features. The accessor can be used as
header-only, see the [accessor BLAS benchmarks repository](https://github.com/ginkgo-project/accessor-BLAS/tree/develop) as a usage example.
2. All LinOps now transparently support mixed-precision execution. By default,
this is done through a temporary copy which may have a performance impact but
already allows mixed-precision research.

Native mixed-precision ELL kernels are implemented which do not see this cost.
The accessor is also leveraged in a new CB-GMRES solver which allows for
performance improvements by compressing the Krylov basis vectors. Many other
features have been added to Ginkgo, such as reordering support, a new IDR
solver, Incomplete Cholesky preconditioner, matrix assembly support (only CPU
for now), machine topology information, and more!

Supported systems and requirements:
+ For all platforms, cmake 3.13+
+ C++14 compliant compiler
+ Linux and MacOS
  + gcc: 5.3+, 6.3+, 7.3+, all versions after 8.1+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple LLVM: 8.0+
  + CUDA module: CUDA 9.0+
  + HIP module: ROCm 3.5+
  + DPC++ module: Intel OneAPI 2021.3. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW and Cygwin: gcc 5.3+, 6.3+, 7.3+, all versions after 8.1+
  + Microsoft Visual Studio: VS 2019
  + CUDA module: CUDA 9.0+, Microsoft Visual Studio
  + OpenMP module: MinGW or Cygwin.


Algorithm and important feature additions:
+ Add a new DPC++ Executor for SYCL execution and other base utilities
  [#648](#648), [#661](#661), [#757](#757), [#832](#832)
+ Port matrix formats, solvers and related kernels to DPC++. For some kernels,
  also make use of a shared kernel implementation for all executors (except
  Reference). [#710](#710), [#799](#799), [#779](#779), [#733](#733), [#844](#844), [#843](#843), [#789](#789), [#845](#845), [#849](#849), [#855](#855), [#856](#856)
+ Add accessors which allow multi-precision kernels, among other things.
  [#643](#643), [#708](#708)
+ Add support for mixed precision operations through apply in all LinOps. [#677](#677)
+ Add incomplete Cholesky factorizations and preconditioners as well as some
  improvements to ILU. [#672](#672), [#837](#837), [#846](#846)
+ Add an AMGX implementation and kernels on all devices but DPC++.
  [#528](#528), [#695](#695), [#860](#860)
+ Add a new mixed-precision capability solver, Compressed Basis GMRES
  (CB-GMRES). [#693](#693), [#763](#763)
+ Add the IDR(s) solver. [#620](#620)
+ Add a new fixed-size block CSR matrix format (for the Reference executor).
  [#671](#671), [#730](#730)
+ Add native mixed-precision support to the ELL format. [#717](#717), [#780](#780)
+ Add Reverse Cuthill-McKee reordering [#500](#500), [#649](#649)
+ Add matrix assembly support on CPUs. [#644](#644)
+ Extends ISAI from triangular to general and spd matrices. [#690](#690)

Other additions:
+ Add the possibility to apply real matrices to complex vectors.
  [#655](#655), [#658](#658)
+ Add functions to compute the absolute of a matrix format. [#636](#636)
+ Add symmetric permutation and improve existing permutations.
  [#684](#684), [#657](#657), [#663](#663)
+ Add a MachineTopology class with HWLOC support [#554](#554), [#697](#697)
+ Add an implicit residual norm criterion. [#702](#702), [#818](#818), [#850](#850)
+ Row-major accessor is generalized to more than 2 dimensions and a new
  "block column-major" accessor has been added. [#707](#707)
+ Add an heat equation example. [#698](#698), [#706](#706)
+ Add ccache support in CMake and CI. [#725](#725), [#739](#739)
+ Allow tuning and benchmarking variables non intrusively. [#692](#692)
+ Add triangular solver benchmark [#664](#664)
+ Add benchmarks for BLAS operations [#772](#772), [#829](#829)
+ Add support for different precisions and consistent index types in benchmarks.
  [#675](#675), [#828](#828)
+ Add a Github bot system to facilitate development and PR management.
  [#667](#667), [#674](#674), [#689](#689), [#853](#853)
+ Add Intel (DPC++) CI support and enable CI on HPC systems. [#736](#736), [#751](#751), [#781](#781)
+ Add ssh debugging for Github Actions CI. [#749](#749)
+ Add pipeline segmentation for better CI speed. [#737](#737)


Changes:
+ Add a Scalar Jacobi specialization and kernels. [#808](#808), [#834](#834), [#854](#854)
+ Add implicit residual log for solvers and benchmarks. [#714](#714)
+ Change handling of the conjugate in the dense dot product. [#755](#755)
+ Improved Dense stride handling. [#774](#774)
+ Multiple improvements to the OpenMP kernels performance, including COO,
an exclusive prefix sum, and more. [#703](#703), [#765](#765), [#740](#740)
+ Allow specialization of submatrix and other dense creation functions in solvers. [#718](#718)
+ Improved Identity constructor and treatment of rectangular matrices. [#646](#646)
+ Allow CUDA/HIP executors to select allocation mode. [#758](#758)
+ Check if executors share the same memory. [#670](#670)
+ Improve test install and smoke testing support. [#721](#721)
+ Update the JOSS paper citation and add publications in the documentation.
  [#629](#629), [#724](#724)
+ Improve the version output. [#806](#806)
+ Add some utilities for dim and span. [#821](#821)
+ Improved solver and preconditioner benchmarks. [#660](#660)
+ Improve benchmark timing and output. [#669](#669), [#791](#791), [#801](#801), [#812](#812)


Fixes:
+ Sorting fix for the Jacobi preconditioner. [#659](#659)
+ Also log the first residual norm in CGS [#735](#735)
+ Fix BiCG and HIP CSR to work with complex matrices. [#651](#651)
+ Fix Coo SpMV on strided vectors. [#807](#807)
+ Fix segfault of extract_diagonal, add short-and-fat test. [#769](#769)
+ Fix device_reset issue by moving counter/mutex to device. [#810](#810)
+ Fix `EnableLogging` superclass. [#841](#841)
+ Support ROCm 4.1.x and breaking HIP_PLATFORM changes. [#726](#726)
+ Decreased test size for a few device tests. [#742](#742)
+ Fix multiple issues with our CMake HIP and RPATH setup.
  [#712](#712), [#745](#745), [#709](#709)
+ Cleanup our CMake installation step. [#713](#713)
+ Various simplification and fixes to the Windows CMake setup. [#720](#720), [#785](#785)
+ Simplify third-party integration. [#786](#786)
+ Improve Ginkgo device arch flags management. [#696](#696)
+ Other fixes and improvements to the CMake setup.
  [#685](#685), [#792](#792), [#705](#705), [#836](#836)
+ Clarification of dense norm documentation [#784](#784)
+ Various development tools fixes and improvements [#738](#738), [#830](#830), [#840](#840)
+ Make multiple operators/constructors explicit. [#650](#650), [#761](#761)
+ Fix some issues, memory leaks and warnings found by MSVC.
  [#666](#666), [#731](#731)
+ Improved solver memory estimates and consistent iteration counts [#691](#691)
+ Various logger improvements and fixes [#728](#728), [#743](#743), [#754](#754)
+ Fix for ForwardIterator requirements in iterator_factory. [#665](#665)
+ Various benchmark fixes. [#647](#647), [#673](#673), [#722](#722)
+ Various CI fixes and improvements. [#642](#642), [#641](#641), [#795](#795), [#783](#783), [#793](#793), [#852](#852)


Related PR: #857
tcojean added a commit that referenced this pull request Aug 23, 2021
Release 1.4.0 to master

Related PR: #866
Labels
1:ST:ready-to-merge This PR is ready to merge. mod:all This touches all Ginkgo modules. reg:build This is related to the build system. reg:ci-cd This is related to the continuous integration system. reg:testing This is related to testing.