build(tvm_utility): remove download logic from CMake and update documentation #4923

Merged 59 commits on Oct 26, 2023. The diff below shows changes from 11 of the 59 commits.

Commits:
- c90b228 add include tier4_autoware_utils and dependency (lexavtanke, Sep 6, 2023)
- e77245c remove downloading logic from Cmake, update documentation (lexavtanke, Sep 7, 2023)
- 59c590f build(tvm_utility): remove downloading logic from Cmake, update docum… (lexavtanke, Sep 7, 2023)
- deab5e7 Merge remote-tracking branch 'lexavtanke/remove_download_tvm_utility'… (lexavtanke, Sep 7, 2023)
- d4e0064 style(pre-commit): autofix (pre-commit-ci[bot], Sep 7, 2023)
- a098b48 build(tvm_utility): fix lint_cmake error (lexavtanke, Sep 8, 2023)
- dac7a20 build(tvm_utility): format warning message (lexavtanke, Sep 8, 2023)
- 789ba72 build(tvm_utility): add logic to work with autoware_data folder, add … (lexavtanke, Sep 13, 2023)
- 0fa9061 style(pre-commit): autofix (pre-commit-ci[bot], Sep 13, 2023)
- 4060e78 Merge branch 'autowarefoundation:main' into remove_download_tvm_utility (lexavtanke, Sep 14, 2023)
- 726e3a2 style(pre-commit): autofix (pre-commit-ci[bot], Sep 14, 2023)
- f9d23be Merge branch 'main' into remove_download_tvm_utility (lexavtanke, Sep 27, 2023)
- a602a34 build(tvm_utility): refactor, update InferenceEngineTVM constructor (lexavtanke, Sep 28, 2023)
- e2ad796 style(pre-commit): autofix (pre-commit-ci[bot], Sep 28, 2023)
- 1c46e02 Merge branch 'autowarefoundation:main' into remove_download_tvm_utility (lexavtanke, Oct 5, 2023)
- b218bbe build(tvm_utility): add lightweight model and test with it (lexavtanke, Oct 5, 2023)
- 6eea3e9 build(tvm_utility): make building yolo_v2_tiny disable by default (lexavtanke, Oct 5, 2023)
- dd09064 build(tvm_utility): remove test artifact for yolo_v2_tiny (lexavtanke, Oct 5, 2023)
- 3cf6108 build(tvm_utility): update docs (lexavtanke, Oct 6, 2023)
- e82b412 build(tvm_utility): update docs (lexavtanke, Oct 6, 2023)
- 5ed5724 style(pre-commit): autofix (pre-commit-ci[bot], Oct 6, 2023)
- 7ba6056 build(tvm_utility): update namespace in abs_model test (lexavtanke, Oct 6, 2023)
- c45ab80 build(tvm_utility): rewrite yolo_v2_tiny as example (lexavtanke, Oct 16, 2023)
- c91c0e2 build(tvm_utility): clean comments in yolo_v2_tiny example main.cpp (lexavtanke, Oct 16, 2023)
- f145dbc build(tvm_utility): add launch file for yolo_v2_tiny example (lexavtanke, Oct 17, 2023)
- 6d86664 build(tvm_utility): update yolo_v2_tiny example readme (lexavtanke, Oct 17, 2023)
- 24aee4b style(pre-commit): autofix (pre-commit-ci[bot], Oct 17, 2023)
- 3484369 build(tvm_utility): add model for arm based systems, need to be teste… (lexavtanke, Oct 17, 2023)
- 165255a style(pre-commit): autofix (pre-commit-ci[bot], Oct 17, 2023)
- 0c08dc3 Merge branch 'autowarefoundation:main' into remove_download_tvm_utility (lexavtanke, Oct 17, 2023)
- 603c163 style(pre-commit): autofix (pre-commit-ci[bot], Oct 17, 2023)
- dd8e24a build(tvm_utility): update config header for arm (lexavtanke, Oct 17, 2023)
- 971508a style(pre-commit): autofix (pre-commit-ci[bot], Oct 17, 2023)
- d6992b1 build(tvm_utility): remove debug output (lexavtanke, Oct 17, 2023)
- 8f2f263 Merge branch 'main' into remove_download_tvm_utility (lexavtanke, Oct 19, 2023)
- 107c939 build(tvm_utility): add find_package conditional section (lexavtanke, Oct 19, 2023)
- e360caf build(tvm_utility): fix lint_cmake errors (lexavtanke, Oct 19, 2023)
- 99caa44 build(tvm_utility): remove coping model files during build (lexavtanke, Oct 19, 2023)
- eb3b8bb build(tvm_utility): update readme with new data folder structure (lexavtanke, Oct 19, 2023)
- 63e932a build(tvm_utility): fix spell check warnings (lexavtanke, Oct 19, 2023)
- 1b22105 style(pre-commit): autofix (pre-commit-ci[bot], Oct 19, 2023)
- 4b8471d build(tvm_utility): add no model files guard to get_neural_network (lexavtanke, Oct 20, 2023)
- 3514fd9 style(pre-commit): autofix (pre-commit-ci[bot], Oct 20, 2023)
- c50d1df build(tvm_utility): set back default paths in config headers (lexavtanke, Oct 20, 2023)
- 770c400 build(tvm_utility): add param file, update launch file (lexavtanke, Oct 23, 2023)
- 0dd4431 build(tvm_utility): add schema file, update node name (lexavtanke, Oct 23, 2023)
- 9fced41 style(pre-commit): autofix (pre-commit-ci[bot], Oct 23, 2023)
- a8b65b2 build(tvm_utility): fix json-schema-check (lexavtanke, Oct 23, 2023)
- 742903f Merge remote-tracking branch 'lexavtanke/remove_download_tvm_utility'… (lexavtanke, Oct 23, 2023)
- 7999377 build(tvm_utility): fix json-schema-check (lexavtanke, Oct 23, 2023)
- ab86d1f style(pre-commit): autofix (pre-commit-ci[bot], Oct 23, 2023)
- 3300af7 build(tvm_utility): add parameter table to example readme (lexavtanke, Oct 26, 2023)
- 81f9aed build(tvm_utility): fix typo-error in description of schema.json (lexavtanke, Oct 26, 2023)
- 70cabe2 style(pre-commit): autofix (pre-commit-ci[bot], Oct 26, 2023)
- 0d81afb Merge branch 'main' into remove_download_tvm_utility (lexavtanke, Oct 26, 2023)
- d080050 buiild(tvm_utility): fix spell-check warning and typo (lexavtanke, Oct 26, 2023)
- 2f190c6 feat(spell-check): add dltype and tvmgen to local dict (lexavtanke, Oct 26, 2023)
- 4f7d1fe Merge branch 'feat-spell-check-add-tvmg-and-dltype' into remove_downl… (lexavtanke, Oct 26, 2023)
- 3c97359 style(pre-commit): autofix (pre-commit-ci[bot], Oct 26, 2023)
3 changes: 1 addition & 2 deletions common/tvm_utility/.gitignore
Contributor: nit: This file can be removed if it's empty.

@@ -1,2 +1 @@
-artifacts/**/*.jpg
-data/
+data/models
41 changes: 4 additions & 37 deletions common/tvm_utility/README.md
@@ -50,35 +50,17 @@ error description.

### Neural Networks Provider

This package also provides a utility to get pre-compiled neural networks to packages using them for their inference.

The neural networks are compiled as part of the
[Model Zoo](https://github.com/autowarefoundation/modelzoo/) CI pipeline and saved to an S3 bucket.
This package exports cmake variables and functions for ease of access to those neural networks.

The `get_neural_network` function creates an abstraction for the artifact management.
-The artifacts are saved under the source directory of the package making use of the function; under "data/".
-Priority is given to user-provided files, under "data/user/${MODEL_NAME}/".
-If there are no user-provided files, the function tries to reuse previously-downloaded artifacts.
-If there are no previously-downloaded artifacts, and if the `DOWNLOAD_ARTIFACTS` cmake variable is set, they will be downloaded from the bucket.
-Otherwise, nothing happens.
+Users should provide model files under "data/user/${MODEL_NAME}/". Otherwise, nothing happens and compilation of the package will be skipped.
Contributor:

If the artifacts are required for compilation, then they should be part of the source tree (via git-lfs for example if the files are too big), otherwise it won't be possible to build Debian packages.

Contributor (author):

Actually it is a bit confusing right now, because these packages don't seem to build at all: the DOWNLOAD_ARTIFACTS flag is disabled by default, and this PR kind of exposes that, since the last changes were made several months ago and the bug stayed hidden. So here I just follow the same logic.

And as I understand it, we are going to use Ansible to provide the artifacts. For packages with TensorRT support that is a more or less straightforward process, as the model conversion from ONNX to TRT happens later, on the first run, not at build time.

For TVM it is a bit more complicated, as it provides models in already-compiled form, and to use them in a package you need to provide some model files at build time.

But if I understand you correctly, you propose that there should be some default artifacts for all TVM packages as part of the source tree, and the only way to do that is git-lfs, because a deploy_param.params file usually weighs 10-20 MB. And if a user wants to provide their own models, wouldn't that be a bit messy?

I think there is a similar way to compile the model for TVM before the run, by the user, same as with TRT, but I'm not so familiar with TVM.

And what is the idea behind the Debian packages? Should users be able to use them off the shelf with a built-in model? I guess in many cases users will have to use a model trained on their own data.

Contributor:

> For TVM it is a bit more complicated, as it provides models in already-compiled form, and to use them in a package you need to provide some model files at build time.

As far as I know, we only need the headers for building the packages, not the full models.

> And what is the idea behind the Debian packages? Should users be able to use them off the shelf with a built-in model? I guess in many cases users will have to use a model trained on their own data.

The Debian packages are a way for users to install Autoware in production environments without having to build it themselves, much like how ROS is distributed. Eventually, Autoware can become part of the ROS distribution, but that will come at a later stage.


The structure inside the source directory of the package making use of the function is as follows:

```{text}
.
├── data
│   ├── downloads
│   │   ├── ${MODEL 1}-${ARCH 1}-{BACKEND 1}-{VERSION 1}.tar.gz
│   │   ├── ...
│   │   └── ${MODEL ...}-${ARCH ...}-{BACKEND ...}-{VERSION ...}.tar.gz
│   ├── models
│   │   ├── ${MODEL 1}
│   │   │   ├── ...
│   │   │   └── inference_engine_tvm_config.hpp
│   │   ├── ...
│   │   └── ${MODEL ...}
│   │       └── ...
│   └── user
│       ├── ${MODEL 1}
│       │   ├── deploy_graph.json
│       │   └── ...
│       └── ...
```
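
For instance, with the yolo_v2_tiny model added by this PR, a user-provided model would look like this (illustrative; the file names match the `inference_engine_tvm_config.hpp` shown later in the diff):

```{text}
.
└── data
    └── user
        └── yolo_v2_tiny
            ├── deploy_graph.json
            ├── deploy_lib.so
            ├── deploy_param.params
            └── inference_engine_tvm_config.hpp
```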

-The `inference_engine_tvm_config.hpp` file needed for compilation by dependent packages is made available under "data/models/${MODEL_NAME}/inference_engine_tvm_config.hpp".
+The `inference_engine_tvm_config.hpp` file needed for compilation by dependent packages should be available under "data/models/${MODEL_NAME}/inference_engine_tvm_config.hpp".
Dependent packages can use the cmake `add_dependencies` function with the name provided in the `DEPENDENCY` output parameter of `get_neural_network` to ensure this file is created before it gets used.

The other `deploy_*` files are installed to "models/${MODEL_NAME}/" under the `share` directory of the package.

-The target version to be downloaded can be overwritten by setting the `MODELZOO_VERSION` cmake variable.

#### Assumptions / Known limits

-If several packages make use of the same neural network, it will be downloaded once per package.

-In case a requested artifact doesn't exist in the S3 bucket, the error message from ExternalProject is not explicit enough for the user to understand what went wrong.

-In case the user manually sets `MODELZOO_VERSION` to "latest", the archive will not be re-downloaded when it gets updated in the S3 bucket (it is not a problem for tagged versions as they are not expected to be updated).

#### Inputs / Outputs

Inputs:

-- `DOWNLOAD_ARTIFACTS` cmake variable; needs to be set to enable downloading the artifacts
-- `MODELZOO_VERSION` cmake variable; can be used to overwrite the default target version of downloads

Outputs:

-- `get_neural_network` cmake function; can be used to get a neural network compiled for a specific backend
+- `get_neural_network` cmake function; creates the proper external dependency for a package, using the model provided by the user.

In/Out:

- The `DEPENDENCY` argument of `get_neural_network` can be checked for the outcome of the function.
-  It is an empty string when the neural network couldn't be made available.
+  It is an empty string when the neural network wasn't provided by the user.
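
To make the flow concrete, here is a minimal sketch of how a dependent package could consume the function after this change. The package name (`my_detector`), model name (`my_model`), and library sources are hypothetical; the call follows the `get_neural_network(MODEL_NAME MODEL_BACKEND DEPENDENCY)` signature shown in `tvm_utility-extras.cmake` further down.

```cmake
# Hypothetical consumer CMakeLists.txt (sketch, not part of this PR).
find_package(ament_cmake_auto REQUIRED)
find_package(tvm_utility REQUIRED)  # exports get_neural_network

# Look for user-provided files under data/user/my_model/.
get_neural_network(my_model llvm MODEL_DEP)

if(MODEL_DEP STREQUAL "")
  # No model files were provided by the user: skip this node instead of
  # failing the build, mirroring the behavior described above.
  message(WARNING "my_model not found under data/user/, skipping my_detector")
else()
  ament_auto_add_library(my_detector SHARED src/my_detector.cpp)
  # Ensure inference_engine_tvm_config.hpp is in place before it is used.
  add_dependencies(my_detector ${MODEL_DEP})
endif()
```

Checking the `DEPENDENCY` output for an empty string, as in this sketch, is exactly the contract described in the In/Out list above.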

## Security considerations

55 changes: 55 additions & 0 deletions common/tvm_utility/data/user/yolo_v2_tiny/inference_engine_tvm_config.hpp
@@ -0,0 +1,55 @@
// Copyright 2021 Arm Limited and Contributors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "tvm_utility/pipeline.hpp"

#ifndef COMMON__TVM_UTILITY__DATA__USER__YOLO_V2_TINY__INFERENCE_ENGINE_TVM_CONFIG_HPP_ // NOLINT
#define COMMON__TVM_UTILITY__DATA__USER__YOLO_V2_TINY__INFERENCE_ENGINE_TVM_CONFIG_HPP_

namespace model_zoo
{
namespace perception
{
namespace camera_obstacle_detection
{
namespace yolo_v2_tiny
{
namespace tensorflow_fp32_coco
{

static const tvm_utility::pipeline::InferenceEngineTVMConfig config{
{3, 0, 0}, // modelzoo_version

"yolo_v2_tiny", // network_name
"llvm", // network_backend

"deploy_lib.so", // network_module_path
"deploy_graph.json", // network_graph_path
"deploy_param.params", // network_params_path

kDLCPU, // tvm_device_type

0, // tvm_device_id

{{"input", kDLFloat, 32, 1, {-1, 416, 416, 3}}}, // network_inputs

{{"output", kDLFloat, 32, 1, {1, 13, 13, 425}}} // network_outputs
};

} // namespace tensorflow_fp32_coco
} // namespace yolo_v2_tiny
} // namespace camera_obstacle_detection
} // namespace perception
} // namespace model_zoo
#endif // COMMON__TVM_UTILITY__DATA__USER__YOLO_V2_TINY__INFERENCE_ENGINE_TVM_CONFIG_HPP_
// NOLINT
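
A note on the shapes above: they appear to follow TensorFlow's NHWC layout ({batch, height, width, channels}), with -1 leaving the input batch dimension unspecified, and the 425 output channels matching YOLOv2's 5 anchor boxes x (5 box parameters + 80 COCO classes). This reading is inferred from the config values, not stated in the PR.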
70 changes: 70 additions & 0 deletions common/tvm_utility/include/tvm_utility/pipeline.hpp
@@ -224,6 +224,76 @@ typedef struct
class InferenceEngineTVM : public InferenceEngine
{
public:
explicit InferenceEngineTVM(
const InferenceEngineTVMConfig & config, const std::string & pkg_name,
const std::string & autoware_data_path)
: config_(config)
{
// Get full network path
std::string network_prefix =
autoware_data_path + pkg_name + "/models/" + config.network_name + "/";
std::string network_module_path = network_prefix + config.network_module_path;
std::string network_graph_path = network_prefix + config.network_graph_path;
std::string network_params_path = network_prefix + config.network_params_path;

// Load compiled functions
std::ifstream module(network_module_path);
if (!module.good()) {
throw std::runtime_error(
"File " + network_module_path + " specified in inference_engine_tvm_config.hpp not found");
}
module.close();
tvm::runtime::Module mod = tvm::runtime::Module::LoadFromFile(network_module_path);

// Load json graph
std::ifstream json_in(network_graph_path, std::ios::in);
if (!json_in.good()) {
throw std::runtime_error(
"File " + network_graph_path + " specified in inference_engine_tvm_config.hpp not found");
}
std::string json_data(
(std::istreambuf_iterator<char>(json_in)), std::istreambuf_iterator<char>());
json_in.close();

// Load parameters from binary file
std::ifstream params_in(network_params_path, std::ios::binary);
if (!params_in.good()) {
throw std::runtime_error(
"File " + network_params_path + " specified in inference_engine_tvm_config.hpp not found");
}
std::string params_data(
(std::istreambuf_iterator<char>(params_in)), std::istreambuf_iterator<char>());
params_in.close();

// Parameters need to be in TVMByteArray format
TVMByteArray params_arr;
params_arr.data = params_data.c_str();
params_arr.size = params_data.length();

// Create tvm runtime module
tvm::runtime::Module runtime_mod = (*tvm::runtime::Registry::Get("tvm.graph_executor.create"))(
json_data, mod, static_cast<uint32_t>(config.tvm_device_type), config.tvm_device_id);

// Load parameters
auto load_params = runtime_mod.GetFunction("load_params");
load_params(params_arr);

// Get set_input function
set_input = runtime_mod.GetFunction("set_input");

// Get the function which executes the network
execute = runtime_mod.GetFunction("run");

// Get the function to get output data
get_output = runtime_mod.GetFunction("get_output");

for (auto & output_config : config.network_outputs) {
output_.push_back(TVMArrayContainer(
output_config.node_shape, output_config.tvm_dtype_code, output_config.tvm_dtype_bits,
output_config.tvm_dtype_lanes, config.tvm_device_type, config.tvm_device_id));
}
}

explicit InferenceEngineTVM(const InferenceEngineTVMConfig & config, const std::string & pkg_name)
: config_(config)
{
4 changes: 3 additions & 1 deletion common/tvm_utility/test/yolo_v2_tiny/main.cpp
@@ -247,7 +247,9 @@ TEST(PipelineExamples, SimplePipeline)
using PostPT = PostProcessorYoloV2Tiny;

PrePT PreP{config};
-  IET IE{config, "tvm_utility"};
+  std::string home_dir = getenv("HOME");
+  std::string autoware_data = "/autoware_data/";
+  IET IE{config, "tvm_utility", home_dir + autoware_data};
PostPT PostP{config};

tvm_utility::pipeline::Pipeline<PrePT, IET, PostPT> pipeline(PreP, IE, PostP);
6 changes: 3 additions & 3 deletions common/tvm_utility/tvm-utility-yolo-v2-tiny-tests.md
@@ -15,10 +15,10 @@ curl https://github.com/pjreddie/darknet/master/data/dog.jpg \
> artifacts/yolo_v2_tiny/test_image_0.jpg
```

-1. Build and test with the `DOWNLOAD_ARTIFACTS` flag set.
+1. Build and test.

```sh
-colcon build --packages-up-to tvm_utility --cmake-args -DDOWNLOAD_ARTIFACTS=ON
+colcon build --packages-up-to tvm_utility
colcon test --packages-select tvm_utility
```

@@ -28,5 +28,5 @@ Vulkan is supported by default by the tvm_vendor package.
It can be selected by setting the `tvm_utility_BACKEND` variable:

```sh
-colcon build --packages-up-to tvm_utility --cmake-args -DDOWNLOAD_ARTIFACTS=ON -Dtvm_utility_BACKEND=vulkan
+colcon build --packages-up-to tvm_utility --cmake-args -Dtvm_utility_BACKEND=vulkan
```
32 changes: 5 additions & 27 deletions common/tvm_utility/tvm_utility-extras.cmake
@@ -12,12 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-# Get user-provided variables
-set(DOWNLOAD_ARTIFACTS OFF CACHE BOOL "enable artifacts download")
-set(MODELZOO_VERSION "3.0.0-20221221" CACHE STRING "targeted ModelZoo version")
-
#
-# Download the selected neural network if it is not already present on disk.
# Make inference_engine_tvm_config.hpp available under "data/models/${MODEL_NAME}/".
# Install the TVM artifacts to "share/${PROJECT_NAME}/models/".
# Return the name of the custom target in the DEPENDENCY parameter.
@@ -34,7 +29,7 @@ function(get_neural_network MODEL_NAME MODEL_BACKEND DEPENDENCY)
set(EXTERNALPROJECT_NAME ${MODEL_NAME}_${MODEL_BACKEND})
set(PREPROCESSING "")

-  # Prioritize user-provided models.
+  # Use user-provided models.
# cspell: ignore COPYONLY
if(IS_DIRECTORY "${DATA_PATH}/user/${MODEL_NAME}")
message(STATUS "Using user-provided model from ${DATA_PATH}/user/${MODEL_NAME}")
@@ -54,27 +49,10 @@ function(get_neural_network MODEL_NAME MODEL_BACKEND DEPENDENCY)
set(SOURCE_DIR "${DATA_PATH}/user/${MODEL_NAME}")
set(INSTALL_DIRECTORY "${DATA_PATH}/user/${MODEL_NAME}")
else()
-    set(ARCHIVE_NAME "${MODEL_NAME}-${CMAKE_SYSTEM_PROCESSOR}-${MODEL_BACKEND}-${MODELZOO_VERSION}.tar.gz")
-
-    # Use previously-downloaded archives if available.
-    set(DOWNLOAD_DIR "${DATA_PATH}/downloads")
-    if(DOWNLOAD_ARTIFACTS)
-      message(STATUS "Downloading ${ARCHIVE_NAME} ...")
-      if(NOT EXISTS "${DATA_PATH}/downloads/${ARCHIVE_NAME}")
-        set(URL "https://autoware-modelzoo.s3.us-east-2.amazonaws.com/models/${MODELZOO_VERSION}/${ARCHIVE_NAME}")
-        file(DOWNLOAD ${URL} "${DOWNLOAD_DIR}/${ARCHIVE_NAME}")
-      endif()
-    else()
-      message(WARNING "Skipped download for ${MODEL_NAME} (enable by setting DOWNLOAD_ARTIFACTS)")
-      set(${DEPENDENCY} "" PARENT_SCOPE)
-      return()
-    endif()
-    set(SOURCE_DIR "${DATA_PATH}/models/${MODEL_NAME}")
-    set(INSTALL_DIRECTORY "${DATA_PATH}/models/${MODEL_NAME}")
-    file(ARCHIVE_EXTRACT INPUT "${DOWNLOAD_DIR}/${ARCHIVE_NAME}" DESTINATION "${SOURCE_DIR}")
-    if(EXISTS "${DATA_PATH}/models/${MODEL_NAME}/preprocessing_inference_engine_tvm_config.hpp")
-      set(PREPROCESSING "${DATA_PATH}/models/${MODEL_NAME}/preprocessing_inference_engine_tvm_config.hpp")
-    endif()
+    message(WARNING " NO ${MODEL_NAME} model provided by user, for more info check"
+      " https://autowarefoundation.github.io/autoware.universe/main/common/tvm_utility/")
+    set(${DEPENDENCY} "" PARENT_SCOPE)
+    return()

endif()
