refactor: Last trtorch references
Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>
narendasan committed Nov 9, 2021
1 parent c2bee87 commit 55c3bab
Showing 5 changed files with 26 additions and 27 deletions.
18 changes: 9 additions & 9 deletions examples/custom_converters/README.md
@@ -16,13 +16,13 @@ Note that the ELU converter is now supported in our library. If you want to get the above
error and run the example in this document, you can either:
1. get the source code, go to root directory, then run: <br />
`git apply ./examples/custom_converters/elu_converter/disable_core_elu.patch`
- 2. If you are using a pre-downloaded release of TRTorch, you need to make sure that
-    it doesn't support the elu operator by default. (TRTorch <= v0.1.0)
+ 2. If you are using a pre-downloaded release of Torch-TensorRT, you need to make sure that
+    it doesn't support the elu operator by default. (Torch-TensorRT <= v0.1.0)

## Writing Converter in C++
We can register a converter for this operator in our application. You can find more
information on all the details of writing converters in the contributors documentation
- ([Writing Converters](https://nvidia.github.io/TRTorch/contributors/writing_converters.html)).
+ ([Writing Converters](https://nvidia.github.io/Torch-TensorRT/contributors/writing_converters.html)).
Once we are clear about these rules and writing patterns, we can create a separate new C++ source file as:

```c++
@@ -66,7 +66,7 @@ from torch.utils import cpp_extension
# library_dirs should point to the libtorch_tensorrt.so, include_dirs should point to the dir that includes the headers
- # 1) download the latest package from https://github.com/NVIDIA/TRTorch/releases/
+ # 1) download the latest package from https://github.com/NVIDIA/Torch-TensorRT/releases/
# 2) Extract the files from the downloaded package; we will get the "torch_tensorrt" directory
# 3) Set torch_tensorrt_path to that directory
torch_tensorrt_path = <PATH TO TRTORCH>
@@ -87,7 +87,7 @@ setup(
```
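Pieced together from the fragments above, the build script amounts to something like the following minimal sketch. The source path `./csrc/elu_converter.cpp`, the placeholder release directory, and the linked library name are assumptions based on the comments in the diff, not the exact script from the repository:

```python
# Minimal sketch of the cpp_extension build script described above.
# Assumptions: the converter source lives in ./csrc/elu_converter.cpp and
# we link against libtorch_tensorrt.so from the extracted release.
from setuptools import setup
from torch.utils import cpp_extension

torch_tensorrt_path = "/path/to/torch_tensorrt"  # placeholder: the extracted release dir

ext_modules = [
    cpp_extension.CUDAExtension(
        "elu_converter",
        ["./csrc/elu_converter.cpp"],
        library_dirs=[torch_tensorrt_path + "/lib"],
        libraries=["torch_tensorrt"],  # i.e. libtorch_tensorrt.so, per the comment above
        include_dirs=[torch_tensorrt_path + "/include"],
    )
]

setup(
    name="elu_converter",
    ext_modules=ext_modules,
    cmdclass={"build_ext": cpp_extension.BuildExtension},
)
```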
Make sure to include the path for header files in `include_dirs` and the path
for dependent libraries in `library_dirs`. Generally speaking, you should download
- the latest package from [here](https://github.com/NVIDIA/TRTorch/releases), extract
+ the latest package from [here](https://github.com/NVIDIA/Torch-TensorRT/releases), extract
the files, and then set the `torch_tensorrt_path` to it. You can also add other compilation
flags in `cpp_extension` if needed. Then, run the above Python script as:
```shell
@@ -99,7 +99,7 @@ by the command above. In the build folder, you can find the generated `.so` library,
which could be loaded in our Python application.

## Load `.so` in Python Application
- With the newly generated library, TRTorch now supports the newly developed converter.
+ With the newly generated library, Torch-TensorRT now supports the newly developed converter.
We use `torch.ops.load_library` to load `.so`. For example, we could load the ELU
converter and use it in our application:
```python
@@ -124,7 +124,7 @@ def cal_max_diff(pytorch_out, torch_tensorrt_out):
diff = torch.sub(pytorch_out, torch_tensorrt_out)
abs_diff = torch.abs(diff)
max_diff = torch.max(abs_diff)
print("Maximum differnce between TRTorch and PyTorch: \n", max_diff)
print("Maximum differnce between Torch-TensorRT and PyTorch: \n", max_diff)


def main():
@@ -146,12 +146,12 @@ def main():

torch_tensorrt_out = trt_ts_module(input_data)
print('PyTorch output: \n', pytorch_out[0, :, :, 0])
- print('TRTorch output: \n', torch_tensorrt_out[0, :, :, 0])
+ print('Torch-TensorRT output: \n', torch_tensorrt_out[0, :, :, 0])
cal_max_diff(pytorch_out, torch_tensorrt_out)


if __name__ == "__main__":
main()

```
- Run this script, and we can compare the outputs from PyTorch and TRTorch.
+ Run this script, and we can compare the outputs from PyTorch and Torch-TensorRT.
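For reference, the load-and-compile flow this README walks through amounts to the following minimal sketch. The `.so` path and the model are placeholders; the sketch assumes a scripted module that exercises `aten::elu`:

```python
import torch
import torch_tensorrt

# Load the compiled converter library so its registration code runs
# before compilation (the build path below is a placeholder).
torch.ops.load_library("./build/lib/elu_converter.so")

class EluModel(torch.nn.Module):  # placeholder model using aten::elu
    def forward(self, x):
        return torch.nn.functional.elu(x)

model = EluModel().eval().cuda()
scripted = torch.jit.script(model)

input_data = torch.randn(1, 1, 2, 2).cuda()
pytorch_out = model(input_data)

# Compile the scripted module; the custom converter handles aten::elu.
trt_ts_module = torch_tensorrt.compile(
    scripted,
    inputs=[torch_tensorrt.Input((1, 1, 2, 2))],
    enabled_precisions={torch.float},
)
torch_tensorrt_out = trt_ts_module(input_data)
```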
2 changes: 1 addition & 1 deletion examples/int8/ptq/README.md
@@ -161,7 +161,7 @@ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/deps/torch_tensorrt/lib:$(pwd)/de

2) Build and run `ptq`

- We import header files `cifar10.h` and `benchmark.h` from `ROOT_DIR`. `ROOT_DIR` should point to the path where Torch-TensorRT is located `<path_to_TRTORCH>`.
+ We import header files `cifar10.h` and `benchmark.h` from `ROOT_DIR`. `ROOT_DIR` should point to the path where Torch-TensorRT is located `<path_to_torch_tensorrt>`.

By default it is set to `../../../`. If your Torch-TensorRT directory structure is different, please set `ROOT_DIR` accordingly.

7 changes: 3 additions & 4 deletions examples/int8/training/vgg16/test_qat.py
@@ -79,13 +79,12 @@ def test(model, dataloader, crit):
print("[JIT] Test Loss: {:.5f} Test Acc: {:.2f}%".format(test_loss, 100 * test_acc))

import torch_tensorrt as torchtrt
- # trtorch.logging.set_reportable_log_level(trtorch.logging.Level.Debug)
compile_settings = {
"inputs": [torchtrt.Input([1, 3, 32, 32])],
"enabled_precisions": {torch.float, torch.half, torch.int8} # Run with FP16
"inputs": [torchtrt.Input([1, 3, 32, 32])],
"enabled_precisions": {torch.float, torch.half, torch.int8} # Run with FP16
}
new_mod = torch.jit.load('trained_vgg16_qat.jit.pt')
- trt_ts_module = torchtrt.compile(new_mod, compile_settings)
+ trt_ts_module = torchtrt.compile(new_mod, **compile_settings)
testing_dataloader = torch.utils.data.DataLoader(testing_dataset, batch_size=1, shuffle=False, num_workers=2)
test_loss, test_acc = test(trt_ts_module, testing_dataloader, crit)
print("[TRTorch] Test Loss: {:.5f} Test Acc: {:.2f}%".format(test_loss, 100 * test_acc))
4 changes: 2 additions & 2 deletions examples/torchtrt_runtime_example/network.py
@@ -3,7 +3,7 @@
import torch_tensorrt as torchtrt

# create a simple norm layer.
- # This norm layer uses NormalizePlugin from TRTorch
+ # This norm layer uses NormalizePlugin from Torch-TensorRT
class Norm(torch.nn.Module):
def __init__(self):
super(Norm, self).__init__()
@@ -12,7 +12,7 @@ def forward(self, x):
return torch.norm(x, 2, None, False)

# Create a sample network with a conv and gelu node.
- # Gelu layer in TRTorch is converted to CustomGeluPluginDynamic from TensorRT plugin registry.
+ # Gelu layer in Torch-TensorRT is converted to CustomGeluPluginDynamic from TensorRT plugin registry.
class ConvGelu(torch.nn.Module):
def __init__(self):
super(ConvGelu, self).__init__()
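Filled out, the two modules referenced in this diff look roughly as follows; the convolution sizes are illustrative assumptions rather than values taken verbatim from the example:

```python
import torch

class Norm(torch.nn.Module):
    """Simple norm layer; torch.norm lowers to Torch-TensorRT's NormalizePlugin."""
    def forward(self, x):
        # L2 norm over the whole tensor, as in the forward() shown above
        return torch.norm(x, 2, None, False)

class ConvGelu(torch.nn.Module):
    """Conv + GELU; GELU maps to TensorRT's CustomGeluPluginDynamic."""
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 32, 3, 1)  # sizes are illustrative
        self.gelu = torch.nn.GELU()

    def forward(self, x):
        return self.gelu(self.conv(x))
```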
22 changes: 11 additions & 11 deletions py/BUILD
@@ -1,23 +1,23 @@
package(default_visibility = ["//visibility:public"])

load("@trtorch_py_deps//:requirements.bzl", "requirement")
load("@torch_tensorrt_py_deps//:requirements.bzl", "requirement")

# Exposes the library for testing
py_library(
name = "trtorch",
name = "torch_tensorrt",
srcs = [
"trtorch/__init__.py",
"trtorch/_compile_spec.py",
"trtorch/_compiler.py",
"trtorch/_types.py",
"trtorch/_version.py",
"trtorch/logging.py",
"trtorch/ptq.py",
"torch_tensorrt/__init__.py",
"torch_tensorrt/_compile_spec.py",
"torch_tensorrt/_compiler.py",
"torch_tensorrt/_types.py",
"torch_tensorrt/_version.py",
"torch_tensorrt/logging.py",
"torch_tensorrt/ptq.py",
],
data = [
"trtorch/lib/libtrtorch.so",
"torch_tensorrt/lib/libtrtorch.so",
] + glob([
"trtorch/_C.cpython*.so",
"torch_tensorrt/_C.cpython*.so",
]),
deps = [
requirement("torch"),
