Commit

Merge branch 'master' into integrate_sanity_tests_with_pytest
chauhang committed Aug 29, 2023
2 parents c558f95 + d3eeb07 commit e17d88f
Showing 28 changed files with 483 additions and 19 deletions.
2 changes: 2 additions & 0 deletions .github/workflows/ci_cpu.yml
@@ -7,6 +7,8 @@ on:
  pull_request:
    branches:
      - master
  merge_group:


concurrency:
  group: ci-cpu-${{ github.workflow }}-${{ github.ref == 'refs/heads/master' && github.run_number || github.ref }}
1 change: 1 addition & 0 deletions .github/workflows/ci_gpu.yml
@@ -7,6 +7,7 @@ on:
  pull_request:
    branches:
      - master
  merge_group:

concurrency:
  group: ci-gpu-${{ github.workflow }}-${{ github.ref == 'refs/heads/master' && github.run_number || github.ref }}
2 changes: 2 additions & 0 deletions .github/workflows/doc-automation.yml
@@ -2,6 +2,8 @@ on:
  push:
    branches:
      - master
  merge_group:

jobs:
  build_docs_job:
    runs-on: ubuntu-20.04
2 changes: 2 additions & 0 deletions .github/workflows/docker-ci.yaml
@@ -5,6 +5,8 @@ on:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]
  merge_group:


jobs:
  test-build-and-container:
2 changes: 2 additions & 0 deletions .github/workflows/lint.yml
@@ -7,6 +7,8 @@ on:
  pull_request:
    branches:
      - master
  merge_group:


jobs:
  mypy:
1 change: 1 addition & 0 deletions .github/workflows/regression_tests_cpu.yml
@@ -7,6 +7,7 @@ on:
  pull_request:
    branches:
      - master
  merge_group:

concurrency:
  group: ci-cpu-${{ github.workflow }}-${{ github.ref == 'refs/heads/master' && github.run_number || github.ref }}
1 change: 1 addition & 0 deletions .github/workflows/regression_tests_gpu.yml
@@ -7,6 +7,7 @@ on:
  pull_request:
    branches:
      - master
  merge_group:

concurrency:
  group: ci-cpu-${{ github.workflow }}-${{ github.ref == 'refs/heads/master' && github.run_number || github.ref }}
23 changes: 19 additions & 4 deletions docs/FAQs.md
@@ -1,6 +1,7 @@
# FAQ'S
Contents of this document.
* [General](#general)
* [Performance](#performance)
* [Deployment and config](#deployment-and-config)
* [API](#api)
* [Handler](#handler)
@@ -34,9 +35,23 @@ No, as of now only Python-based models are supported.
TorchServe is derived from Multi-Model-Server. However, TorchServe is specifically tuned for PyTorch models. It also has new features like Snapshot and model versioning.

### How to decode international language in inference response on client side?
By default, TorchServe uses utf-8 encoding when the inference response is a string, so the client can decode it with utf-8.

If a model converts an international-language string to bytes, the client needs to use the codec mechanism specified by the model, such as in https://github.com/pytorch/serve/blob/master/examples/nmt_transformer/model_handler_generalized.py#L55
If a model converts an international-language string to bytes, the client needs to use the codec mechanism specified by the model, such as in https://github.com/pytorch/serve/blob/master/examples/nmt_transformer/model_handler_generalized.py
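
A minimal client-side sketch of the utf-8 decode described above (the model name `my_model` and the input file are illustrative assumptions, not part of this FAQ):

```bash
# Query the inference API and decode the raw response bytes as UTF-8 explicitly.
curl -s http://localhost:8080/predictions/my_model -T sample_text.txt \
  | python3 -c "import sys; print(sys.stdin.buffer.read().decode('utf-8'))"
```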

## Performance

Relevant documents.
- [Performance Guide](performance_guide.md)

### How do I improve TorchServe performance on CPU?
CPU performance is heavily influenced by launcher core pinning. We recommend setting the following properties in your `config.properties`:

```bash
cpu_launcher_enable=true
cpu_launcher_args=--use_logical_core
```
More background on improving CPU performance can be found in this [blog post](https://pytorch.org/tutorials/intermediate/torchserve_with_ipex#grokking-pytorch-intel-cpu-performance-from-first-principles).
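
As a quick sketch of applying these settings (the model name `my_model.mar` and the model store path are illustrative assumptions):

```bash
# Append the launcher settings to config.properties and start TorchServe with them.
cat >> config.properties <<'EOF'
cpu_launcher_enable=true
cpu_launcher_args=--use_logical_core
EOF
torchserve --start --ncs --ts-config config.properties \
  --model-store model_store --models my_model.mar
```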

## Deployment and config
Relevant documents.
@@ -97,7 +112,7 @@ TorchServe looks for the config.property file according to the order listed in t

- [models](configuration.md): Defines a list of model configurations in config.properties. A model's configuration can be overridden by the [management API](management_api.md). It does not decide which models will be loaded during TorchServe start-up. There is no relationship between "models" and "load_models" (i.e., the TorchServe command-line option [--models](configuration.md)).

###

## API
Relevant documents
@@ -133,7 +148,7 @@ Refer to [default handlers](default_handlers.md) for more details.

### Is it possible to deploy Hugging Face models?
Yes, you can deploy Hugging Face models using a custom handler.
Refer to [HuggingFace_Transformers](https://github.com/pytorch/serve/blob/master/examples/Huggingface_Transformers/README.md#huggingface-transformers) for an example.

## Model-archiver
Relevant documents
2 changes: 1 addition & 1 deletion docs/README.md
@@ -52,4 +52,4 @@ TorchServe is a performant, flexible and easy to use tool for serving PyTorch ea
* [TorchServe on Kubernetes](https://github.com/pytorch/serve/blob/master/kubernetes/README.md#torchserve-on-kubernetes) - Demonstrates a Torchserve deployment in Kubernetes using Helm Chart supported in both Azure Kubernetes Service and Google Kubernetes service
* [mlflow-torchserve](https://github.com/mlflow/mlflow-torchserve) - Deploy mlflow pipeline models into TorchServe
* [Kubeflow pipelines](https://github.com/kubeflow/pipelines/tree/master/samples/contrib/pytorch-samples) - Kubeflow pipelines and Google Vertex AI Managed pipelines
* [NVIDIA MPS](mps.md) - Use NVIDIA MPS to optimize multi-worker deployment on a single GPU
* [NVIDIA MPS](nvidia_mps.md) - Use NVIDIA MPS to optimize multi-worker deployment on a single GPU
2 changes: 1 addition & 1 deletion docs/contents.rst
@@ -16,7 +16,7 @@
   model_zoo
   request_envelopes
   server
   mps
   nvidia_mps
   snapshot
   torchserve_on_win_native
   torchserve_on_wsl
7 changes: 7 additions & 0 deletions docs/index.rst
@@ -56,6 +56,13 @@ What's going on in TorchServe?
   :link: performance_guide.html
   :tags: Performance,Troubleshooting

.. customcarditem::
   :header: Large Model Inference
   :card_description: Serving Large Models with TorchServe
   :image: https://github.com/pytorch/serve/master/docs/images/ts-lmi-internal.png
   :link: large_model_inference.html
   :tags: Large-Models,Performance

.. customcarditem::
   :header: Troubleshooting
   :card_description: Various updates on TorchServe and use cases.
8 changes: 4 additions & 4 deletions docs/mps.md → docs/nvidia_mps.md
@@ -60,7 +60,7 @@ Please note that we set the concurrency level to 600 which will make sure that t
We first perform the single worker benchmark for the G4 instance.
In the figure below we see that up to a batch size of four we see a steady increase of the throughput over the batch size.

![G4 benchmark, single worker](images/mps_g4_single.png)
![G4 benchmark, single worker](https://github.com/pytorch/serve/master/docs/images/mps_g4_single.png)

Next, we increase the number of workers to two in order to compare the throughput with and without MPS running.
To enable MPS for the second set of runs we first set the exclusive processing mode for the GPU and then start the MPS daemon as shown above.
@@ -69,19 +69,19 @@ We select the batch size between one and eight according to our previous finding
In the figure we can see that the throughput can be better for batch sizes 1 and 8 (up to +18%), while it can be worse for others (-11%).
An interpretation of this result could be that the G4 instance does not have many resources to share when we run a BERT model in one of the workers.

![G4 benchmark, two workers](images/mps_g4_two_worker.png)
![G4 benchmark, two workers](https://github.com/pytorch/serve/master/docs/images/mps_g4_two_worker.png)

### P3 instance
Next, we will run the same experiment with the bigger p3.2xlarge instance.
With a single worker we get the following throughput values:

![P3 benchmark, single worker](images/mps_p3_single.png)
![P3 benchmark, single worker](https://github.com/pytorch/serve/master/docs/images/mps_p3_single.png)

We can see that the throughput steadily increases, but for batch sizes over eight we see diminishing returns.
Finally, we deploy two workers on the P3 instance and compare running them with and without MPS.
We can see that for batch sizes between 1 and 32 the throughput is consistently higher (up to +25%) with MPS enabled, with the exception of batch size 16.

![P3 benchmark, two workers](images/mps_p3_two_worker.png)
![P3 benchmark, two workers](https://github.com/pytorch/serve/master/docs/images/mps_p3_two_worker.png)

## Summary
In the previous section we saw that by enabling MPS for two workers running the same model we receive mixed results.
38 changes: 38 additions & 0 deletions docs/performance_checklist.md
@@ -0,0 +1,38 @@
# Model Inference Optimization Checklist

This checklist describes some steps that should be completed when diagnosing model inference performance issues. Some of these suggestions are only applicable to NLP models (e.g., ensuring the input is not over-padded and sequence bucketing), but the general principles are useful for other models too.

## General System Optimizations

- Check the versions of PyTorch, Nvidia driver, and other components and update to the latest compatible releases. Oftentimes known performance bugs have already been fixed.

- Collect system-level activity logs to understand the overall resource utilization (a minimal sketch follows this list). It’s useful to know how the model inference pipeline is using the system resources at a high level, as the first step of optimization. Even simple CLI tools such as nvidia-smi and htop would be helpful.

- Start with a target with the highest impact on performance. It should be obvious from the system activity logs where the biggest bottleneck is – look beyond model inference, as pre/post processing can be expensive and can affect the end-to-end throughput just as much.

- Quantify and mitigate the influence of slow I/O such as disk and network on end-to-end performance. While optimizing I/O is out of scope for this checklist, look for techniques that use async, concurrency, pipelining, etc. to effectively “hide” the cost of I/O.

- For model inference on input sequences of dynamic length (e.g., transformers for NLP), make sure the tokenizer is not over-padding the input. If a transformer was trained with padding to a constant length (e.g., 512) and deployed with the same padding, it would run unnecessarily slow (orders of magnitude) on short sequences.

- Vision models with input in JPEG format often benefit from faster JPEG decoding on CPU such as libjpeg-turbo and Pillow-SIMD, and on GPU such as torchvision.io.decode_jpeg and Nvidia DALI.
As this [example](https://colab.research.google.com/drive/1NMaLS8PG0eYhbd8IxQAajXgXNIZ_AvHo?usp=sharing) shows, Nvidia DALI is about 20% faster than torchvision, even on an old K80 GPU.
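
A minimal sketch of the kind of system-level activity logging mentioned above (the 5-second interval and query fields are illustrative choices, not prescriptions):

```bash
# Sample GPU utilization and memory every 5 seconds while the inference workload runs.
nvidia-smi --query-gpu=timestamp,utilization.gpu,memory.used,memory.total \
  --format=csv -l 5 > gpu_activity.csv &

# Capture coarse CPU/memory snapshots at the same interval for correlation (120 samples).
top -b -d 5 -n 120 > cpu_activity.log &
```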

## Model Inference Optimizations

Start model inference optimization only after other factors, the “low-hanging fruit”, have been extensively evaluated and addressed.

- Use fp16 for GPU inference. The speed will most likely more than double on newer GPUs with tensor cores, with negligible accuracy degradation. Technically fp16 is a type of quantization but since it seldom suffers from loss of accuracy for inference it should always be explored. As shown in this [article](https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html#abstract), use of fp16 offers speed up in large neural network applications.

- Use model quantization (i.e., int8) for CPU inference. Explore different quantization options: dynamic quantization, static quantization, and quantization-aware training, as well as tools such as Intel Neural Compressor that provide more sophisticated quantization methods. It is worth noting that quantization comes with some loss in accuracy and might not always offer a significant speed-up on some hardware, so it might not always be the right approach.

- Balance throughput and latency with smart batching. While meeting the latency SLA, try larger batch sizes to increase throughput.

- Try optimized inference engines such as onnxruntime, tensorRT, lightseq, ctranslate-2, etc. These engines often provide additional optimizations such as operator fusion, in addition to model quantization.

- Try model distillation. This is more involved and often requires training data, but the potential gain can be large. For example, MiniLM achieves 99% of the accuracy of the original BERT base model while being 2X faster.

- If working on CPU, you can try core pinning. You can find more information on how to work with this [in this blog post](https://pytorch.org/tutorials/intermediate/torchserve_with_ipex#grokking-pytorch-intel-cpu-performance-from-first-principles).

- For batch processing on sequences with different lengths, sequence bucketing could potentially improve the throughput by 2X. In this case, a simple implementation of sequence bucketing is to sort all input by sequence length before feeding them to the model, as this reduces unnecessary padding when batching the sequences.

While this checklist is not exhaustive, going through the items will likely help you squeeze more performance out of your model inference pipeline.
18 changes: 15 additions & 3 deletions docs/performance_guide.md
@@ -1,6 +1,8 @@
# [Performance Guide](#performance-guide)
In case you're interested in optimizing the memory usage, latency or throughput of a PyTorch model served with TorchServe, this is the guide for you.

We have also created a quick checklist of extra things to try outside of what is covered on this page. You can find the checklist [here](performance_checklist.md).

## Optimizing PyTorch

There are many tricks to optimize PyTorch models for production, including but not limited to distillation, quantization, fusion, pruning, and setting environment variables, and we encourage you to benchmark and see what works best for you.
@@ -42,11 +44,17 @@ TorchServe exposes configurations that allow the user to configure the number of

<h4>TorchServe On CPU </h4>

If working with TorchServe on a CPU here are some things to consider that could improve performance:
If working with TorchServe on a CPU you can improve performance by setting the following in your `config.properties`:

```bash
cpu_launcher_enable=true
cpu_launcher_args=--use_logical_core
```
These settings improve performance significantly through launcher core pinning.
The theory behind this improvement is discussed in [this blog post](https://pytorch.org/tutorials/intermediate/torchserve_with_ipex#grokking-pytorch-intel-cpu-performance-from-first-principles), which can be quickly summarized as:
* In a hyperthreading enabled system, avoid logical cores by setting thread affinity to physical cores only via core pinning.
* In a multi-socket system with NUMA, avoid cross-socket remote memory access by setting thread affinity to a specific socket via core pinning.

These principles can be automatically configured via an easy-to-use launch script, which has already been integrated into TorchServe. For more information, take a look at this [case study](https://pytorch.org/tutorials/intermediate/torchserve_with_ipex#grokking-pytorch-intel-cpu-performance-from-first-principles) which dives into these points further with examples and explanations from first principles.
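
As a small sketch for inspecting the physical-core and socket topology that core pinning works with (output fields vary by platform; this is illustrative, not part of the guide):

```bash
# Show sockets, physical cores per socket, threads per core (hyperthreading), and NUMA nodes.
lscpu | grep -E '^(Socket\(s\)|Core\(s\) per socket|Thread\(s\) per core|NUMA node\(s\))'
```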

<h4>TorchServe on GPU</h4>

@@ -61,7 +69,7 @@ While NVIDIA GPUs allow multiple processes to run on CUDA kernels, this comes wi
* The execution of the kernels is generally serialized
* Each process creates its own CUDA context, which occupies additional GPU memory

To get around these drawbacks, you can utilize the NVIDIA Multi-Process Service (MPS) to increase performance. You can find more information on how to utilize NVIDIA MPS with TorchServe [here](mps.md).
To get around these drawbacks, you can utilize the NVIDIA Multi-Process Service (MPS) to increase performance. You can find more information on how to utilize NVIDIA MPS with TorchServe [here](nvidia_mps.md).
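
As a brief sketch of enabling MPS before starting the workers (GPU index 0 is an illustrative assumption; both commands typically require root privileges):

```bash
# Put GPU 0 into exclusive-process compute mode, then start the MPS control daemon.
nvidia-smi -i 0 -c EXCLUSIVE_PROCESS
nvidia-cuda-mps-control -d
```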

<h6> NVIDIA DALI</h6>

@@ -92,3 +100,7 @@ Visit this [link]( https://github.com/pytorch/kineto/tree/main/tb_plugin) to lea
<h4>TorchServe on the Animated Drawings App</h4>

For some insight into fine-tuning TorchServe performance in an application, take a look at this [article](https://pytorch.org/blog/torchserve-performance-tuning/). The case study shown here uses the Animated Drawings App from Meta to improve TorchServe performance.

<h4>Performance Checklist</h4>

We have also created a quick checklist of extra things to try outside of what is covered on this page. You can find the checklist [here](performance_checklist.md).
60 changes: 60 additions & 0 deletions examples/large_models/Huggingface_accelerate/llama2/Readme.md
@@ -0,0 +1,60 @@
# Loading meta-llama/Llama-2-70b-chat-hf on AWS EC2 g5.24xlarge using accelerate

This document describes serving large Hugging Face models with limited resources using accelerate. This option can be activated with `low_cpu_mem_usage=True`. The model is first created on the meta device (with empty weights) and the state dict is then loaded into it (shard by shard in the case of a sharded checkpoint).

### Step 1: Obtain model download permission

Follow [these instructions](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) to get permission.

Log in with a Hugging Face account:
```
huggingface-cli login
# or using an environment variable
huggingface-cli login --token $HUGGINGFACE_TOKEN
```

```bash
python ../Download_model.py --model_path model --model_name meta-llama/Llama-2-70b-chat-hf
```
The model will be saved in the following path: `model/models--meta-llama--Llama-2-70b-chat-hf`.

### Step 2: Generate MAR file

Add the downloaded path to `model_path:` in `model-config.yaml` (one way to do this is sketched below), then run the `torch-model-archiver` command that follows.
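
A minimal sketch of pointing `model_path` at the directory from Step 1 (assumes GNU sed and that the key already exists in `model-config.yaml`; adjust the path if you saved the model elsewhere):

```bash
# Point model_path in model-config.yaml at the downloaded checkpoint directory.
sed -i 's|model_path:.*|model_path: "model/models--meta-llama--Llama-2-70b-chat-hf"|' model-config.yaml
```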

```bash
torch-model-archiver --model-name llama2-70b-chat --version 1.0 --handler custom_handler.py --config-file model-config.yaml -r requirements.txt --archive-format no-archive
```

If you are using conda and notice issues with mpi4py, you will need to install openmpi-mpicc using the following:

```
conda install -c conda-forge openmpi-mpicc
```

### Step 3: Add the MAR file to the model store

```bash
mkdir model_store
mv llama2-70b-chat model_store
mv model model_store/llama2-70b-chat
```

### Step 4: Start TorchServe

Update config.properties and start TorchServe:

```bash
torchserve --start --ncs --ts-config config.properties --model-store model_store --models llama2-70b-chat
```

### Step 5: Run inference

```bash
curl -v "http://localhost:8080/predictions/llama2-70b-chat" -T sample_text.txt
```

This results in the following output:
```
Mayonnaise is a thick, creamy condiment made from a mixture of egg yolks, oil, vinegar or lemon juice, and seasonings'
```
@@ -0,0 +1,6 @@
inference_address=http://0.0.0.0:8080
management_address=http://0.0.0.0:8081
metrics_address=http://0.0.0.0:8082
enable_envvars_config=true
install_py_dep_per_model=true
