From be7c60f7ccca09c6e7fbbd14282c8933a97863d7 Mon Sep 17 00:00:00 2001
From: "samarth.tripathi"
Date: Mon, 27 Jul 2020 15:20:29 -0700
Subject: [PATCH 1/2] Introducing Profiler

---
 README.md            |  4 ++++
 docs/prof_readme.rst | 10 +++++-----
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/README.md b/README.md
index 085018a..d4f60b0 100644
--- a/README.md
+++ b/README.md
@@ -29,7 +29,11 @@ neural architecture search. The table below shows a full list of currently suppo
 
 | ----------- | ----------- |
 | Random<br>Grid<br>[Hyperband](https://github.com/zygmuntz/hyperband)<br>[Hyperopt](https://github.com/hyperopt/hyperopt)<br>[Spearmint](https://github.com/JasperSnoek/spearmint)<br>[BOHB](https://github.com/automl/HpBandSter)<br>[EAS (experimental)](https://github.com/han-cai/EAS)<br>Passive | Multiple CPUs<br>Multiple GPUs<br>Multiple Machines (SSH)<br>AWS EC2 instances |
 
+## Auptimizer v1.3 has been released: Introducing Profiler
+**Profiler** is a simulator for profiling the performance of Machine Learning (ML) model scripts. Given the compute and memory constraints of a CPU-based edge device, Profiler estimates the compute and memory usage of model scripts on that device. These estimates can be used to choose the best-performing models or, in certain cases, to predict how much compute and memory a model will use on the target device.
+
+Because Profiler mimics the target device environment on the user's development machine, you can gain insight into the performance and resource needs of a model script without having to deploy it on the target device. Using Profiler can significantly accelerate the model development cycle and help you quickly find the right model-device fit. For more information, see [Profiler](https://github.com/LGE-ARC-AdvancedAI/auptimizer/tree/master/src/aup/profiler).
 ## Install
 
 **Auptimizer** currently is well tested on Linux systems, it may require some tweaks for Windows users.
diff --git a/docs/prof_readme.rst b/docs/prof_readme.rst
index fa47477..6755e89 100644
--- a/docs/prof_readme.rst
+++ b/docs/prof_readme.rst
@@ -62,7 +62,7 @@ How Profiler Works
 
 We have conducted over 300 experiments across multiple models, devices,
 and compute settings. Full results are available
-`here. <../Examples/profiler_examples/experiments>`__
+`here. `__
 
 
 How to Use Profiler
@@ -113,7 +113,7 @@ Set Up Profiler User Variables
 
 Profiler can accept two arguments as inputs - the environment file
 (necessary) and model name list or file (optional). Refer to
 ``env_mnist.template`` and ``env_benchmark.template`` in `Profiler
-Examples <../Examples/profiler_examples>`__ for examples.
+Examples `__ for examples.
 
 Create ``env.template``, and add the following variables as needed:
@@ -228,14 +228,14 @@ Examples
 
 We present some examples on how to use profiler in `Profiler
-Examples <../Examples/profiler_examples>`__ folder.
+Examples `__ folder.
 
 
 TensorFlow Lite Inference Benchmarking
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 To use Profiler on TensorFlow Lite Inference Benchmarking classification
-in the `benchmark <../Examples/profiler_examples/bench>`__ folder.
+in the `benchmark `__ folder.
 
 1. [Optional] Use the bench/download.sh script (wget must be installed on your system) to download mobilenet_v1_0.75_224 and mobilenet_v1_1.0_224 (Alternatively, you can download a different set of TensorFlow Lite models from
@@ -271,7 +271,7 @@ MNIST Training Benchmarking
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 You can also use Profiler to profile training. MNIST classification
-example can be found in the `mnist <../Examples/profiler_examples/mnist>`__ folder.
+example can be found in the `mnist `__ folder.
 
 1. [Optional] Download the MNIST dataset from (http://yann.lecun.com/exdb/mnist/).
    Add the ``.gz`` files to the

From 1ab76f128c0a8bef3820a33787fada24b26d337d Mon Sep 17 00:00:00 2001
From: "samarth.tripathi"
Date: Mon, 27 Jul 2020 15:29:15 -0700
Subject: [PATCH 2/2] Testing bug

---
 tests/EE/Resource/test_GPUResourceManager.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/EE/Resource/test_GPUResourceManager.py b/tests/EE/Resource/test_GPUResourceManager.py
index 98b89b8..8ee75ae 100644
--- a/tests/EE/Resource/test_GPUResourceManager.py
+++ b/tests/EE/Resource/test_GPUResourceManager.py
@@ -36,7 +36,7 @@ def callback(*args):
         self.val = -1
 
         self.rm.run(self.job, self.rm.get_available("test", "gpu"), {}, callback)
-        sleep(0.5)  # wait till subprocess finished - handled by rm.finish()
+        sleep(1.5)  # wait till subprocess finished - handled by rm.finish()
         self.assertEqual(self.val, -1)
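
The Profiler description added in PATCH 1/2 says that Profiler emulates a resource-constrained CPU device on the development machine and reports compute and memory usage for a model script. As a rough illustration of that idea only (not Profiler's actual interface or implementation), the Python sketch below runs a model script under explicit CPU and memory limits via Docker's --cpus and --memory flags and times the run; the image name, entry point, and limits are assumed placeholders.

    # Concept sketch only: emulate a small CPU device by capping CPU and memory
    # for a containerized model script, then measure wall-clock time.
    # IMAGE, SCRIPT, CPU_LIMIT, and MEM_LIMIT are illustrative assumptions.
    import subprocess
    import time

    IMAGE = "model-runtime:latest"               # assumed image with the model script
    SCRIPT = ["python3", "/workspace/infer.py"]  # assumed entry point inside the image
    CPU_LIMIT = "0.5"                            # half a CPU core
    MEM_LIMIT = "256m"                           # 256 MB memory cap

    cmd = ["docker", "run", "--rm",
           "--cpus", CPU_LIMIT,
           "--memory", MEM_LIMIT,
           IMAGE] + SCRIPT

    start = time.monotonic()
    result = subprocess.run(cmd, capture_output=True, text=True)
    elapsed = time.monotonic() - start

    print("exit code:", result.returncode)
    print("time under %s CPU / %s RAM: %.2f s" % (CPU_LIMIT, MEM_LIMIT, elapsed))

For the variables Profiler actually expects in ``env.template``, refer to the templates named in the docs above (``env_mnist.template`` and ``env_benchmark.template``).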
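
PATCH 2/2 widens a fixed sleep from 0.5 s to 1.5 s so the background subprocess has time to finish before the assertion runs. Where a test can observe a completion signal, polling with a timeout is a common, less timing-sensitive alternative; the helper below is only a hedged sketch (``wait_until`` and the predicate passed to it are hypothetical, not part of the Auptimizer test suite).

    # Hedged sketch: poll a completion condition with a timeout instead of a
    # fixed sleep(). The predicate is whatever the test can actually observe,
    # e.g. a flag set by the callback (hypothetical here).
    import time

    def wait_until(predicate, timeout=5.0, interval=0.1):
        """Return True once predicate() is truthy, or False after `timeout` seconds."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if predicate():
                return True
            time.sleep(interval)
        return False

    # Hypothetical usage in place of sleep(1.5):
    #     wait_until(lambda: job_finished_flag, timeout=5.0)

This keeps the test fast when the subprocess finishes early and only pays the full wait when it is genuinely slow.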