
Enable CTest Resources #1373

Merged: 44 commits into develop on Sep 21, 2023

Conversation

@MarcelKoch (Member) commented on Jul 24, 2023

This PR enables the use of CTest resources to better handle the available hardware for inter-test parallelism. Enabling this required some changes so that our tests can run on devices other than the one with device id 0.

What is not included: actually enabling the test parallelism. (The resources.json file is only temporary)

To get the full benefit, it is necessary to define a resource file that specifies the available resources. I see two options for handling this file:

  1. Generate the file from CMake (this is the approach used in rapids.ai).
  2. Create a file for each of our gitlab/github runners by hand.

Getting option 1 to work robustly across all kinds of different hardware might be difficult, while option 2 requires a lot of manual set-up and maintenance. A sketch of such a resource file is shown below.
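
For reference, this is roughly what a CTest resource-spec file looks like. The resource name `gpus` matches the generator snippet discussed later in this thread, while the device count and slot numbers here are illustrative:

```json
{
  "version": {"major": 1, "minor": 0},
  "local": [
    {
      "gpus": [
        {"id": "0", "slots": 4},
        {"id": "1", "slots": 4}
      ]
    }
  ]
}
```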

I've noticed some issues with MPI implementations and OpenMP (mostly MVAPICH). It is important to figure out the right environment variables; otherwise the MPI tests won't schedule their threads onto different cores. On Leconte I used MV2_ENABLE_AFFINITY=0 and a dummy value for OMP_NUM_THREADS.
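
Concretely, the environment set-up on such a system might look like this (a sketch assuming MVAPICH2; the thread count matches the defaults listed below):

```sh
# keep MVAPICH2 from pinning each rank's OpenMP threads to a single core
export MV2_ENABLE_AFFINITY=0
# make OpenMP spawn the intended number of threads per rank
export OMP_NUM_THREADS=4
```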

A timing comparison using ICL's Leconte (40 cores):

  • with resources: ~80s (timings vary more from run to run than without)
  • without resources: 400s

Default values:

  • Number of OMP threads: 4
  • Oversubscription of hardware threads: 20% (i.e. 20% more slots are specified than there are HW threads)
  • Concurrent GPU tests on a single device: 4 (excluding MPI tests)

@MarcelKoch MarcelKoch self-assigned this Jul 24, 2023
@ginkgo-bot added labels reg:build, reg:testing, mod:core, mod:cuda, mod:openmp, mod:hip, type:solver, type:preconditioner, type:matrix-format, type:reordering on Jul 24, 2023
@MarcelKoch (Member, Author) commented:

Should this allow for tests with multiple backends enabled at once? Currently it assumes only one type of GPU.

@MarcelKoch (Member, Author) commented:

Should CPU and GPU tests run completely independently? Or should a GPU test also occupy a single HW thread? Currently, it is assumed that they can run independently. Doing it otherwise would complicate the set-up again.

@codecov codecov bot commented on Jul 25, 2023

Codecov Report

Patch coverage is 30.84% of modified lines.

❗ Current head 9071cfb differs from pull request most recent head 6e4bc33. Consider uploading reports for the commit 6e4bc33 to get more accurate results.

| Files Changed | Coverage |
| --- | --- |
| include/ginkgo/core/base/executor.hpp | ø |
| include/ginkgo/core/base/memory.hpp | ø |
| omp/base/executor.cpp | 0.00% |
| test/tools/resource_file_generator.cpp | 0.00% |
| core/test/gtest/resources.cpp | 20.51% |
| core/test/gtest/environments.hpp | 88.88% |
| core/test/gtest/ginkgo_main.cpp | 100.00% |
| core/test/gtest/ginkgo_mpi_main.cpp | 100.00% |
| test/mpi/preconditioner/schwarz.cpp | 100.00% |
| test/utils/executor.hpp | 100.00% |
| ... and 1 more | |


@upsj (Member) commented on Jul 26, 2023

Different pipelines sometimes want to use different devices (e.g. SYCL CPU vs. GPU via SYCL_DEVICE_FILTER or ONEAPI_DEVICE_SELECTOR, or more generally CUDA_VISIBLE_DEVICES/ROCM_VISIBLE_DEVICES), so dynamically creating the resource file might be preferable. On the other hand, this is mostly relevant to SYCL, where we don't really have multiple devices available on a single system anyway, and we can't control how many CPUs the SYCL device uses (except maybe via the taskset OpenCL environment variable, but that is pretty advanced and doesn't work for Level Zero).

Since it is heavily influenced by the environment (variables and hardware), and might otherwise harm building the tests on non-GPU-enabled login nodes, we should make the entire resource configuration optional.

Collecting the information with the native APIs (cudaGetDeviceCount, omp_get_max_threads, etc.) should be straightforward. Alternatively, we could provide a separate tool that generates these resource files.

Finally, if there is nothing fundamental that would make it hard to implement, I would prefer to have one resource type per device executor. I think using a single CPU thread as the minimum resource for each test seems sensible.
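
A minimal sketch of what such a standalone generator could look like, assuming CUDA and OpenMP; the slot count is illustrative, and the PR's actual test/tools/resource_file_generator.cpp may differ:

```cpp
// hypothetical resource-file generator -- a sketch, not the PR's actual tool
#include <cstdio>
#include <cuda_runtime.h>
#include <omp.h>

int main()
{
    int num_gpus = 0;
    // leaves num_gpus at 0 on nodes without a usable CUDA device
    cudaGetDeviceCount(&num_gpus);
    std::printf("{\"version\": {\"major\": 1, \"minor\": 0}, \"local\": [{\n");
    // one CPU resource with one slot per hardware thread
    std::printf("  \"cpus\": [{\"id\": \"0\", \"slots\": %d}],\n",
                omp_get_max_threads());
    std::printf("  \"gpus\": [");
    for (int i = 0; i < num_gpus; ++i) {
        std::printf("%s{\"id\": \"%d\", \"slots\": 4}", i > 0 ? ", " : "", i);
    }
    std::printf("]\n}]}\n");
}
```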

@MarcelKoch (Member, Author) commented:

> Since it is heavily influenced by the environment (variables and hardware), and might otherwise harm building the tests on non-GPU-enabled login nodes, we should make the entire resource configuration optional.

I'm not sure I follow here. This changes only how the tests are run, not how they are built. But regarding non-GPU login nodes: these might be an issue for dynamically creating the resource file. I think it would not be a problem if cmake is run again on the compute nodes.

> Finally, if there is nothing fundamental that would make it hard to implement, I would prefer to have one resource type per device executor. I think using a single CPU thread as the minimum resource for each test seems sensible.

I'm not sure what this relates to. Do you mean that a device executor occupies the full device, i.e. removing the RESOURCE_PERCENTAGE stuff?

@upsj (Member) commented on Jul 28, 2023

> But mentioning non-GPU login nodes, these might be an issue for dynamically creating the resource file.

Exactly, that's what I meant - the test properties need to be captured at configure time if we want to do things dynamically. But mounting a resource file in the container and specifying it to CMake should also be easily doable; we just need a set of binaries that can produce the necessary information (number of devices of each type) and a script to combine them into a JSON file.

> I think it might not be a problem, if cmake is run again on the compute nodes.

I am a bit unhappy with that requirement, because IMO the configuration should only capture the external environment once and be stable afterwards (as much as possible).

> I'm not sure what this relates to

This relates to the question whether we should have a resource type for each GPU vendor. I would say yes, if it's not too complicated.

@MarcelKoch (Member, Author) commented:

> This relates to the question whether we should have a resource type for each GPU vendor. I would say yes, if it's not too complicated.

If I understand you correctly, that would mean using cuda-gpu, amd-gpu, etc., in the resource file. It would complicate things a bit, but should still be doable.

> Exactly, that's what I meant - the test properties need to be captured at configure time if we want to do things dynamically. But mounting a resource file in the container and specifying it to CMake should also be easily doable; we just need a set of binaries that can produce the necessary information (number of devices of each type) and a script to combine them into a JSON file.

One very simple approach would be to predefine the resource files for all of our supported systems and just put them into our repository.

@upsj (Member) commented on Jul 28, 2023

> One very simple approach would be to predefine the resource files for all of our supported systems and just put them into our repository.

That would be hard to maintain, as it requires rebuilding the containers every time we add a new CI system. Doing that on the administrative side is much cleaner and easier to scale.

@MarcelKoch changed the title from "WIP: Enable CTest Resources" to "Enable CTest Resources" on Jul 31, 2023
@MarcelKoch added the 1:ST:ready-for-review label on Jul 31, 2023
@MarcelKoch (Member, Author) commented:

If we continue with this, I think we have to revisit our gitlab-runners. We need to decide how many jobs we want to allow to run in parallel, and how to limit the resources accordingly.

@upsj (Member) commented on Jul 31, 2023

I can take care of setting up the test environments for the runners. I would suggest using an environment variable CTEST_EXTRA_PARAMETERS that we control from the runner, so we can keep things from breaking on systems without resource files.

@upsj (Member) left a review:

Even though part of the code was written by me, I'll add some review comments anyway.

LGTM. If we add a CTEST_EXTRA_ARGS environment variable to be used in ctest invocations, we can even do the resource configuration transparently with respect to the .gitlab-ci.yml.
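
A sketch of how that could be wired up on a runner (the variable name comes from the comments above; the file path and job count are placeholders):

```sh
# set per runner, e.g. in the GitLab runner's environment; runners without
# a resource file simply leave the variable empty
export CTEST_EXTRA_ARGS="--resource-spec-file /path/to/resources.json"

# the CI job forwards it unchanged to every ctest invocation
ctest ${CTEST_EXTRA_ARGS} -j 16 --output-on-failure
```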

@MarcelKoch force-pushed the ctest-resources branch 2 times, most recently from 0b6a5b8 to 051d12f on August 2, 2023 07:27
@MarcelKoch (Member, Author) commented:

format-rebase!

@ginkgo-bot commented:

Error: Rebase failed, see the related Action for details

@MarcelKoch (Member, Author) commented:

format!

upsj and others added 4 commits September 7, 2023 10:21
- formatting
- remove remaining occurrences of syclgpu
- rename to GINKGO_CI_TEST_OMP_PARALLELISM

Co-authored-by: Yuhsiang M. Tsai <yhmtsai@gmail.com>
@@ -120,12 +120,13 @@ struct CudaSolveStruct : gko::solver::SolveStruct {
const auto rows = matrix->get_size()[0];
// workaround suggested by NVIDIA engineers: for some reason
// cusparse needs non-nullptr input vectors even for analysis
// also make sure they are aligned by 16 bytes

Member: 16 bytes or 8 bytes?

Member: double complex needs to be aligned by 16 bytes, since thrust::complex<double> has higher alignment requirements to enable vectorized loads/stores.

Member: but 0xDEAD0 only uses a bit more than two bytes - does it only have four bytes?

Member: this is a pointer; the 0 in the least significant digit makes it divisible by 16.
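
To make the two points above concrete, a small check one could compile (assuming a Thrust version where `thrust::complex<double>` carries 16-byte alignment):

```cpp
#include <thrust/complex.h>

// thrust::complex<double> is over-aligned to enable vectorized loads/stores
static_assert(alignof(thrust::complex<double>) == 16,
              "thrust::complex<double> is 16-byte aligned");
// the trailing hex 0 in the dummy pointer value makes it divisible by 16
static_assert(0xDEAD0 % 16 == 0, "dummy address is 16-byte aligned");
```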

if (i > 0) {
    gpus.append(",\n");
}
gpus += R"( {"id": ")" + std::to_string(i) + R"(", "slots": 1})";

Member: when we need to oversubscribe it, we change the slot number here, right? In case developers use CTest resources to test MPI on a single-GPU node.

Member: yes, I manually changed the output for the CI runs to match our number of parallel jobs.
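
For illustration, an oversubscribed single-GPU entry would just raise the slot count (the value 8 here is arbitrary):

```json
"gpus": [{"id": "0", "slots": 8}]
```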

endif()
set_property(TEST ${test_name}
PROPERTY
RESOURCE_GROUPS "${add_rr_MPI_SIZE},${single_resource}")

Member: when using a GPU executor, we do not have a limitation on the CPU side. A GPU test which uses omp/ref for the reference answer will use the full CPU.

Member: We could add an additional CPU restriction, but since all tests compare against the sequential reference, which only uses a single core, and it's unlikely that a system has more GPUs than cores, I think it shouldn't make a difference.
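
For reference, for a 4-rank MPI test requesting one GPU slot per rank, the resulting property would look roughly like this (`some_mpi_test` is a placeholder name):

```cmake
# four resource groups, one per MPI rank, each requiring one "gpus" slot;
# CTest assigns each group a concrete device id at run time
set_property(TEST some_mpi_test PROPERTY RESOURCE_GROUPS "4,gpus:1")
```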

@yhmtsai (Member) left a review:

LGTM

Co-authored-by: Yu-Hsiang M. Tsai <19565938+yhmtsai@users.noreply.github.com>
@upsj added the 1:ST:ready-to-merge label and removed the 1:ST:ready-for-review label on Sep 15, 2023

@MarcelKoch (Member, Author) left a review:

Only some minor remarks.

@@ -72,15 +72,13 @@
image: ginkgohub/rocm:45-mvapich2-gnu8-llvm8
tags:
- private_ci
- amdci
- gpu
- amd-gpu

Member (Author): Are these changes from PR #1394?

Member: no, I just noticed we are barely using nla1, and this enables more jobs to be run there.


## Replaces / by _ to create valid target names from relative paths
function(ginkgo_build_test_name test_name target_name)
file(RELATIVE_PATH REL_BINARY_DIR
${PROJECT_BINARY_DIR} ${CMAKE_CURRENT_BINARY_DIR})
${PROJECT_BINARY_DIR} ${CMAKE_CURRENT_BINARY_DIR})

Member (Author): This file contains some formatting changes. We should really find a way to get consistent cmake formatting.

Member (Author): I think the renames are not necessary anymore.

Member: Can you elaborate? I changed the CMakeLists.txt to use host tests, so this is looking for the .cpp file.

Member (Author): I was just thinking less change is better, but it doesn't really matter.

@thoasm thoasm self-requested a review September 18, 2023 12:34
- make more tests host-compiled
- make GTest main library suffix more descriptive
- more consistent formatting
@upsj upsj merged commit 96d01cc into develop Sep 21, 2023
12 of 14 checks passed
@upsj upsj deleted the ctest-resources branch September 21, 2023 12:28
@sonarcloud sonarcloud bot commented on Sep 21, 2023

Kudos, SonarCloud Quality Gate passed!

Bug: A (0 Bugs)
Vulnerability: A (0 Vulnerabilities)
Security Hotspot: A (0 Security Hotspots)
Code Smell: A (1 Code Smell)

Coverage: 7.5%
Duplication: 0.0%

Warning: The version of Java (11.0.3) used to run this analysis is deprecated and we will stop accepting it soon. Please update to at least Java 17.

@tcojean tcojean mentioned this pull request Nov 6, 2023
tcojean added a commit that referenced this pull request Nov 10, 2023
Release 1.7.0 to master

The Ginkgo team is proud to announce the new Ginkgo minor release 1.7.0. This release brings new features such as:
- Complete GPU-resident sparse direct solvers feature set and interfaces,
- Improved Cholesky factorization performance,
- A new MC64 reordering,
- Batched iterative solver support with the BiCGSTAB solver with batched Dense and ELL matrix types,
- MPI support for the SYCL backend,
- Improved ParILU(T)/ParIC(T) preconditioner convergence,
and more!

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.16+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2019+
  + Apple Clang: 14.0 is tested. Earlier versions might also work.
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CMake 3.18+, and CUDA 10.1+ or NVHPC 22.7+
  + HIP module: ROCm 4.5+
  + DPC++ module: Intel oneAPI 2022.1+ with oneMKL and oneDPL. Set the CXX compiler to `dpcpp` or `icpx`.
  + MPI: standard version 3.1+, ideally GPU Aware, for best performance
+ Windows
  + MinGW: GCC 5.5+
  + Microsoft Visual Studio: VS 2019+
  + CUDA module: CUDA 10.1+, Microsoft Visual Studio
  + OpenMP module: MinGW.

### Version support changes

+ CUDA 9.2 is no longer supported and 10.0 is untested [#1382](#1382)
+ Ginkgo now requires CMake version 3.16 (and 3.18 for CUDA) [#1368](#1368)

### Interface changes

+ `const` Factory parameters can no longer be modified through `with_*` functions, as this breaks const-correctness [#1336](#1336) [#1439](#1439)

### New Deprecations

+ The `device_reset` parameter of CUDA and HIP executors no longer has an effect, and its `allocation_mode` parameters have been deprecated in favor of the `Allocator` interface. [#1315](#1315)
+ The CMake parameter `GINKGO_BUILD_DPCPP` has been deprecated in favor of `GINKGO_BUILD_SYCL`. [#1350](#1350)
+ The `gko::reorder::Rcm` interface has been deprecated in favor of `gko::experimental::reorder::Rcm` based on `Permutation`. [#1418](#1418)
+ The Permutation class' `permute_mask` functionality. [#1415](#1415)
+ Multiple functions with typos (`set_complex_subpsace()`, range functions such as `conj_operaton` etc). [#1348](#1348)

### Summary of previous deprecations
+ `gko::lend()` is not necessary anymore.
+ The classes `RelativeResidualNorm` and `AbsoluteResidualNorm` are deprecated in favor of `ResidualNorm`.
+ The class `AmgxPgm` is deprecated in favor of `Pgm`.
+ Default constructors for the CSR `load_balance` and `automatical` strategies
+ The PolymorphicObject's move-semantic `copy_from` variant
+ The templated `SolverBase` class.
+ The class `MachineTopology` is deprecated in favor of `machine_topology`.
+ Logger constructors and create functions with the `executor` parameter.
+ The virtual, protected, Dense functions `compute_norm1_impl`, `add_scaled_impl`, etc.
+ Logger events for solvers and criterion without the additional `implicit_tau_sq` parameter.
+ The global `gko::solver::default_krylov_dim`, use instead `gko::solver::gmres_default_krylov_dim`.

### Added features

+ Adds a batch::BatchLinOp class that forms a base class for batched linear operators such as batched matrix formats, solver and preconditioners [#1379](#1379)
+ Adds a batch::MultiVector class that enables operations such as dot, norm, scale on batched vectors [#1371](#1371)
+ Adds a batch::Dense matrix format that stores batched dense matrices and provides gemv operations for these dense matrices. [#1413](#1413)
+ Adds a batch::Ell matrix format that stores batched Ell matrices and provides spmv operations for these batched Ell matrices. [#1416](#1416) [#1437](#1437)
+ Add a batch::Bicgstab solver (class, core, and reference kernels) that enables iterative solution of batched linear systems [#1438](#1438).
+ Add device kernels (CUDA, HIP, and DPCPP) for batch::Bicgstab solver. [#1443](#1443).
+ New MC64 reordering algorithm which optimizes the diagonal product or sum of a matrix by permuting the rows, and computes additional scaling factors for equilibration [#1120](#1120)
+ New interface for (non-symmetric) permutation and scaled permutation of Dense and Csr matrices [#1415](#1415)
+ LU and Cholesky Factorizations can now be separated into their factors [#1432](#1432)
+ New symbolic LU factorization algorithm that is optimized for matrices with an almost-symmetric sparsity pattern [#1445](#1445)
+ Sorting kernels for SparsityCsr on all backends [#1343](#1343)
+ Allow passing pre-generated local solver as factory parameter for the distributed Schwarz preconditioner [#1426](#1426)
+ Add DPCPP kernels for Partition [#1034](#1034), and CSR's `check_diagonal_entries` and `add_scaled_identity` functionality [#1436](#1436)
+ Adds a helper function to create a partition based on either local sizes, or local ranges [#1227](#1227)
+ Add function to compute arithmetic mean of dense and distributed vectors [#1275](#1275)
+ Adds `icpx` compiler supports [#1350](#1350)
+ All backends can be built simultaneously [#1333](#1333)
+ Emits a CMake warning in downstream projects that use different compilers than the installed Ginkgo [#1372](#1372)
+ Reordering algorithms in sparse_blas benchmark [#1354](#1354)
+ Benchmarks gained an `-allocator` parameter to specify device allocators [#1385](#1385)
+ Benchmarks gained an `-input_matrix` parameter that initializes the input JSON based on the filename [#1387](#1387)
+ Benchmark inputs can now be reordered as a preprocessing step [#1408](#1408)


### Improvements

+ Significantly improve Cholesky factorization performance [#1366](#1366)
+ Improve parallel build performance [#1378](#1378)
+ Allow constrained parallel test execution using CTest resources [#1373](#1373)
+ Use arithmetic type more inside mixed precision ELL [#1414](#1414)
+ Most factory parameters of factory type no longer need to be constructed explicitly via `.on(exec)` [#1336](#1336) [#1439](#1439)
+ Improve ParILU(T)/ParIC(T) convergence by using more appropriate atomic operations [#1434](#1434)

### Fixes

+ Fix an over-allocation for OpenMP reductions [#1369](#1369)
+ Fix DPCPP's common-kernel reduction for empty input sizes [#1362](#1362)
+ Fix several typos in the API and documentation [#1348](#1348)
+ Fix inconsistent `Threads` between generations [#1388](#1388)
+ Fix benchmark median condition [#1398](#1398)
+ Fix HIP 5.6.0 compilation [#1411](#1411)
+ Fix missing destruction of rand_generator from cuda/hip [#1417](#1417)
+ Fix PAPI logger destruction order [#1419](#1419)
+ Fix TAU logger compilation [#1422](#1422)
+ Fix relative criterion to not iterate if the residual is already zero [#1079](#1079)
+ Fix memory_order invocations with C++20 changes [#1402](#1402)
+ Fix `check_diagonal_entries_exist` to report correctly when diagonal values are only missing in the last rows. [#1440](#1440)
+ Fix checking OpenMPI version in cross-compilation settings [#1446](#1446)
+ Fix false-positive deprecation warnings in Ginkgo, especially for the old Rcm (it doesn't emit deprecation warnings anymore as a result but is still considered deprecated) [#1444](#1444)


### Related PR: #1451
tcojean added a commit that referenced this pull request Nov 10, 2023
Release 1.7.0 to develop

(Release notes identical to the "Release 1.7.0 to master" commit above.)
### Related PR: #1454