
Allow passing matrix filename as benchmark input #1387

Merged
merged 6 commits into develop from benchmark_matrix_input on Aug 17, 2023

Conversation

@upsj (Member) commented Aug 14, 2023

When running benchmarks by hand, I always have to type out -input '[{"filename":"..."}]'. The MatrixMarket and Ginkgo binary formats can be distinguished from JSON files by their first character: % is MatrixMarket, G is binary, everything else is potentially JSON. So I had the idea of passing the matrix filename directly as input for the benchmark, either with the same flag (as in this PR) or with a different flag.

Not sure whether this overloads the -input flag too much, but I wanted to suggest it anyway.
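
For concreteness, a rough sketch of the difference (the `./spmv` binary name and the matrix path are just placeholders; per the discussion below, the flag was eventually merged as `-input_matrix` rather than overloading `-input`):

```sh
# current workflow: spell out the one-element input list as JSON by hand
./spmv -input '[{"filename": "data/my_matrix.mtx"}]'

# with this PR (flag name as eventually merged): pass the matrix file directly
# and let the benchmark construct the corresponding input JSON from the filename
./spmv -input_matrix data/my_matrix.mtx
```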

@upsj upsj added the 1:ST:ready-for-review (This PR is ready for review) label Aug 14, 2023
@upsj upsj requested a review from a team August 14, 2023 09:00
@upsj upsj self-assigned this Aug 14, 2023
@ginkgo-bot ginkgo-bot added the reg:benchmarking (This is related to benchmarking) and type:solver (This is related to the solvers) labels Aug 14, 2023
@MarcelKoch (Member) commented:

I don't know if `-input` was part of the last release, but perhaps just rename it to `-input_matrix` and expect it to be a filename. There may not be much benefit of `-input [{'filename': '...'}]` over `echo [{'filename': '...'}] | benchmark`.

@upsj (Member, Author) commented Aug 14, 2023

The main benefit of passing JSON input via a flag is that you don't need to work with pipes, which simplifies things like debugging with gdb (where you can't easily pass anything via stdin).

@MarcelKoch (Member) commented:

According to some Stack Overflow answer (https://stackoverflow.com/a/8422306), it seems like you can do this:

    gdb ./exec
    (gdb) run < input.json

Also, perhaps it's enough for debugging to have one input matrix.

@upsj (Member, Author) commented Aug 14, 2023

That doesn't allow changing the parameters within a debugging session, since you cannot do `echo ... | run`.

@MarcelKoch (Member) commented:

You can do:

    echo ... > input.json
    gdb ./exec
    (gdb) run < input.json

Or do you want to change stdin during the run?

@upsj (Member, Author) commented Aug 14, 2023

No, but having to switch to another terminal is always inconvenient. And gdb is only one example; automation in Python is another: having to specify stdin streams complicates the subprocess call and makes integration with other tools harder.

@MarcelKoch (Member) commented:

In newer Python versions (>= 3.7) you can just pass a string to `subprocess.run(args, input=some_input)` (together with `text=True`). But I might not understand your workflow anyway.
Still, I think it would be better to have this parameter for the very simple case of using exactly one input file.

@upsj (Member, Author) commented Aug 14, 2023

I simply don't see a reason to remove the functionality when it can make scripting much easier. Running benchmarks for multiple BLAS parameters on multiple GPUs using GNU parallel is another example, where `parallel command -input ... ::: parameters` is simpler than `parallel bash -c 'echo ... | command' ::: parameters`.
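
As an illustration of the same pattern with matrix files rather than BLAS parameters (a hypothetical sketch: GNU parallel, the `./benchmark` binary, and the `data/*.mtx` file list are assumptions for the example, not part of this PR):

```sh
# flag-based input: each parallel job is a single plain command
parallel ./benchmark -input '[{"filename": "{}"}]' ::: data/*.mtx

# stdin-based input: each job needs its own shell just to set up the pipe
parallel "echo '[{\"filename\": \"{}\"}]' | ./benchmark" ::: data/*.mtx
```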

@MarcelKoch (Member) commented:

TBH, I'm still not sure if it makes scripting easier. But anyway, perhaps just add this under a new parameter name, to not overload the old one.

@upsj (Member, Author) commented Aug 15, 2023

The reason I reused the name is purely practical: the parameter would either need to go into general.hpp (and thus be inapplicable to blas/multivector/matrix_generator), be duplicated into individual functions, or be put into an entirely separate header. I'll use `-input_matrix`.

"spmv": {}
}
Matrix is of size (36, 36)
Current state:
Review comment (Member):

Could you remind me about the JSON output? I remember something related to it: either the intermediate output goes only to stderr and not stdout, or stderr does not contain JSON.

@upsj (Member, Author) replied:

This is how it will be after #1323; at the moment, there is still JSON in both stderr and stdout.

benchmark/utils/general_matrix.hpp (review thread, resolved)
benchmark/utils/general_matrix.hpp (review thread, resolved)
@MarcelKoch (Member) left a comment:

LGTM.

benchmark/solver/distributed/solver.cpp (review thread, outdated, resolved)
benchmark/solver/solver.cpp (review thread, outdated, resolved)
benchmark/utils/general_matrix.hpp (review thread, outdated, resolved)
benchmark/utils/general_matrix.hpp (review thread, outdated, resolved)
@upsj upsj added the 1:ST:ready-to-merge (This PR is ready to merge) label and removed the 1:ST:ready-for-review (This PR is ready for review) label Aug 17, 2023
upsj and others added 6 commits August 17, 2023 17:30
* -input and -input_matrix are incompatible
* use R-strings for JSON

Co-authored-by: Marcel Koch <marcel.koch@kit.edu>
Co-authored-by: Yuhsiang M. Tsai <yhmtsai@gmail.com>
@upsj upsj merged commit dc5a120 into develop Aug 17, 2023
12 of 14 checks passed
@upsj upsj deleted the benchmark_matrix_input branch August 17, 2023 18:54
@sonarcloud sonarcloud bot commented Aug 17, 2023

Kudos, SonarCloud Quality Gate passed!

Bugs: 0 (rating A)
Vulnerabilities: 0 (rating A)
Security Hotspots: 0 (rating A)
Code Smells: 3 (rating A)

Coverage: 90.9%
Duplication: 0.0%

Warning: The version of Java (11.0.3) you have used to run this analysis is deprecated and we will stop accepting it soon. Please update to at least Java 17.

@tcojean tcojean mentioned this pull request Nov 6, 2023
tcojean added a commit that referenced this pull request Nov 10, 2023
Release 1.7.0 to master

The Ginkgo team is proud to announce the new Ginkgo minor release 1.7.0. This release brings new features such as:
- Complete GPU-resident sparse direct solvers feature set and interfaces,
- Improved Cholesky factorization performance,
- A new MC64 reordering,
- Batched iterative solver support with a BiCGSTAB solver for batched Dense and ELL matrix types,
- MPI support for the SYCL backend,
- Improved ParILU(T)/ParIC(T) preconditioner convergence,
and more!

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.16+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2019+
  + Apple Clang: 14.0 is tested. Earlier versions might also work.
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CMake 3.18+, and CUDA 10.1+ or NVHPC 22.7+
  + HIP module: ROCm 4.5+
  + DPC++ module: Intel oneAPI 2022.1+ with oneMKL and oneDPL. Set the CXX compiler to `dpcpp` or `icpx`.
  + MPI: standard version 3.1+, ideally GPU Aware, for best performance
+ Windows
  + MinGW: GCC 5.5+
  + Microsoft Visual Studio: VS 2019+
  + CUDA module: CUDA 10.1+, Microsoft Visual Studio
  + OpenMP module: MinGW.

### Version support changes

+ CUDA 9.2 is no longer supported and 10.0 is untested [#1382](#1382)
+ Ginkgo now requires CMake version 3.16 (and 3.18 for CUDA) [#1368](#1368)

### Interface changes

+ `const` Factory parameters can no longer be modified through `with_*` functions, as this breaks const-correctness [#1336](#1336) [#1439](#1439)

### New Deprecations

+ The `device_reset` parameter of CUDA and HIP executors no longer has an effect, and their `allocation_mode` parameters have been deprecated in favor of the `Allocator` interface. [#1315](#1315)
+ The CMake parameter `GINKGO_BUILD_DPCPP` has been deprecated in favor of `GINKGO_BUILD_SYCL`. [#1350](#1350)
+ The `gko::reorder::Rcm` interface has been deprecated in favor of `gko::experimental::reorder::Rcm` based on `Permutation`. [#1418](#1418)
+ The Permutation class' `permute_mask` functionality. [#1415](#1415)
+ Multiple functions with typos (`set_complex_subpsace()`, range functions such as `conj_operaton` etc). [#1348](#1348)

### Summary of previous deprecations
+ `gko::lend()` is not necessary anymore.
+ The classes `RelativeResidualNorm` and `AbsoluteResidualNorm` are deprecated in favor of `ResidualNorm`.
+ The class `AmgxPgm` is deprecated in favor of `Pgm`.
+ Default constructors for the CSR `load_balance` and `automatical` strategies
+ The PolymorphicObject's move-semantic `copy_from` variant
+ The templated `SolverBase` class.
+ The class `MachineTopology` is deprecated in favor of `machine_topology`.
+ Logger constructors and create functions with the `executor` parameter.
+ The virtual, protected, Dense functions `compute_norm1_impl`, `add_scaled_impl`, etc.
+ Logger events for solvers and criterion without the additional `implicit_tau_sq` parameter.
+ The global `gko::solver::default_krylov_dim`, use instead `gko::solver::gmres_default_krylov_dim`.

### Added features

+ Adds a batch::BatchLinOp class that forms a base class for batched linear operators such as batched matrix formats, solvers, and preconditioners [#1379](#1379)
+ Adds a batch::MultiVector class that enables operations such as dot, norm, scale on batched vectors [#1371](#1371)
+ Adds a batch::Dense matrix format that stores batched dense matrices and provides gemv operations for these dense matrices. [#1413](#1413)
+ Adds a batch::Ell matrix format that stores batched Ell matrices and provides spmv operations for these batched Ell matrices. [#1416](#1416) [#1437](#1437)
+ Add a batch::Bicgstab solver (class, core, and reference kernels) that enables iterative solution of batched linear systems [#1438](#1438).
+ Add device kernels (CUDA, HIP, and DPCPP) for batch::Bicgstab solver. [#1443](#1443).
+ New MC64 reordering algorithm which optimizes the diagonal product or sum of a matrix by permuting the rows, and computes additional scaling factors for equilibration [#1120](#1120)
+ New interface for (non-symmetric) permutation and scaled permutation of Dense and Csr matrices [#1415](#1415)
+ LU and Cholesky Factorizations can now be separated into their factors [#1432](#1432)
+ New symbolic LU factorization algorithm that is optimized for matrices with an almost-symmetric sparsity pattern [#1445](#1445)
+ Sorting kernels for SparsityCsr on all backends [#1343](#1343)
+ Allow passing pre-generated local solver as factory parameter for the distributed Schwarz preconditioner [#1426](#1426)
+ Add DPCPP kernels for Partition [#1034](#1034), and CSR's `check_diagonal_entries` and `add_scaled_identity` functionality [#1436](#1436)
+ Adds a helper function to create a partition based on either local sizes, or local ranges [#1227](#1227)
+ Add function to compute arithmetic mean of dense and distributed vectors [#1275](#1275)
+ Adds `icpx` compiler support [#1350](#1350)
+ All backends can be built simultaneously [#1333](#1333)
+ Emits a CMake warning in downstream projects that use different compilers than the installed Ginkgo [#1372](#1372)
+ Reordering algorithms in sparse_blas benchmark [#1354](#1354)
+ Benchmarks gained an `-allocator` parameter to specify device allocators [#1385](#1385)
+ Benchmarks gained an `-input_matrix` parameter that initializes the input JSON based on the filename [#1387](#1387)
+ Benchmark inputs can now be reordered as a preprocessing step [#1408](#1408)


### Improvements

+ Significantly improve Cholesky factorization performance [#1366](#1366)
+ Improve parallel build performance [#1378](#1378)
+ Allow constrained parallel test execution using CTest resources [#1373](#1373)
+ Use arithmetic type more inside mixed precision ELL [#1414](#1414)
+ Most factory parameters of factory type no longer need to be constructed explicitly via `.on(exec)` [#1336](#1336) [#1439](#1439)
+ Improve ParILU(T)/ParIC(T) convergence by using more appropriate atomic operations [#1434](#1434)

### Fixes

+ Fix an over-allocation for OpenMP reductions [#1369](#1369)
+ Fix DPCPP's common-kernel reduction for empty input sizes [#1362](#1362)
+ Fix several typos in the API and documentation [#1348](#1348)
+ Fix inconsistent `Threads` between generations [#1388](#1388)
+ Fix benchmark median condition [#1398](#1398)
+ Fix HIP 5.6.0 compilation [#1411](#1411)
+ Fix missing destruction of rand_generator from cuda/hip [#1417](#1417)
+ Fix PAPI logger destruction order [#1419](#1419)
+ Fix TAU logger compilation [#1422](#1422)
+ Fix relative criterion to not iterate if the residual is already zero [#1079](#1079)
+ Fix memory_order invocations with C++20 changes [#1402](#1402)
+ Fix `check_diagonal_entries_exist` to report correctly when diagonal values are missing only in the last rows. [#1440](#1440)
+ Fix checking OpenMPI version in cross-compilation settings [#1446](#1446)
+ Fix false-positive deprecation warnings in Ginkgo, especially for the old Rcm (it doesn't emit deprecation warnings anymore as a result but is still considered deprecated) [#1444](#1444)


### Related PR: #1451
tcojean added a commit that referenced this pull request Nov 10, 2023
Release 1.7.0 to develop

(The release notes are identical to the "Release 1.7.0 to master" commit above.)

### Related PR: #1454