Commit f28885a

Merge pull request #190 from isl-org/thias15/update-links

update links

thias15 authored Dec 15, 2022
2 parents 6688299 + f21620b
Showing 11 changed files with 38 additions and 38 deletions.
Dockerfile (2 changes: 1 addition & 1 deletion)
@@ -22,7 +22,7 @@ COPY ./midas ./midas
COPY ./*.py ./

# download model weights so the docker image can be used offline
-RUN cd weights && {curl -OL https://github.com/AlexeyAB/MiDaS/releases/download/midas_dpt/dpt_hybrid-midas-501f0c75.pt; cd -; }
+RUN cd weights && {curl -OL https://github.com/isl-org/MiDaS/releases/download/v3/dpt_hybrid-midas-501f0c75.pt; cd -; }
RUN python3 run.py --model_type dpt_hybrid; exit 0

# entrypoint (dont forget to mount input and output directories)
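As an aside, the updated `RUN` line above still just pre-fetches the dpt_hybrid checkpoint so the image works offline. A minimal Python sketch of the same download step (illustrative only, not part of the commit; the URL and the `weights/` target directory are taken from the diff above):

```python
import os
import urllib.request

# Illustrative only: fetch the dpt_hybrid checkpoint into weights/, mirroring
# the curl call in the updated RUN instruction above.
URL = "https://github.com/isl-org/MiDaS/releases/download/v3/dpt_hybrid-midas-501f0c75.pt"
DEST = os.path.join("weights", os.path.basename(URL))

os.makedirs("weights", exist_ok=True)
if not os.path.exists(DEST):  # keep the step idempotent, like the cached Docker layer
    urllib.request.urlretrieve(URL, DEST)
print(DEST, os.path.getsize(DEST), "bytes")
```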
README.md (36 changes: 18 additions & 18 deletions)
@@ -14,31 +14,31 @@ and our [preprint](https://arxiv.org/abs/2103.13413):

MiDaS was trained on 10 datasets (ReDWeb, DIML, Movies, MegaDepth, WSVD, TartanAir, HRWSI, ApolloScape, BlendedMVS, IRS) with
multi-objective optimization.
-The original model that was trained on 5 datasets (`MIX 5` in the paper) can be found [here](https://github.com/intel-isl/MiDaS/releases/tag/v2).
+The original model that was trained on 5 datasets (`MIX 5` in the paper) can be found [here](https://github.com/isl-org/MiDaS/releases/tag/v2).


### Changelog
* [Sep 2021] Integrated to [Huggingface Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). See [Gradio Web Demo](https://huggingface.co/spaces/akhaliq/DPT-Large).
* [Apr 2021] Released MiDaS v3.0:
- New models based on [Dense Prediction Transformers](https://arxiv.org/abs/2103.13413) are on average [21% more accurate](#Accuracy) than MiDaS v2.1
-- Additional models can be found [here](https://github.com/intel-isl/DPT)
+- Additional models can be found [here](https://github.com/isl-org/DPT)
* [Nov 2020] Released MiDaS v2.1:
-- New model that was trained on 10 datasets and is on average about [10% more accurate](#Accuracy) than [MiDaS v2.0](https://github.com/intel-isl/MiDaS/releases/tag/v2)
-- New light-weight model that achieves [real-time performance](https://github.com/intel-isl/MiDaS/tree/master/mobile) on mobile platforms.
-- Sample applications for [iOS](https://github.com/intel-isl/MiDaS/tree/master/mobile/ios) and [Android](https://github.com/intel-isl/MiDaS/tree/master/mobile/android)
-- [ROS package](https://github.com/intel-isl/MiDaS/tree/master/ros) for easy deployment on robots
+- New model that was trained on 10 datasets and is on average about [10% more accurate](#Accuracy) than [MiDaS v2.0](https://github.com/isl-org/MiDaS/releases/tag/v2)
+- New light-weight model that achieves [real-time performance](https://github.com/isl-org/MiDaS/tree/master/mobile) on mobile platforms.
+- Sample applications for [iOS](https://github.com/isl-org/MiDaS/tree/master/mobile/ios) and [Android](https://github.com/isl-org/MiDaS/tree/master/mobile/android)
+- [ROS package](https://github.com/isl-org/MiDaS/tree/master/ros) for easy deployment on robots
* [Jul 2020] Added TensorFlow and ONNX code. Added [online demo](http://35.202.76.57/).
* [Dec 2019] Released new version of MiDaS - the new model is significantly more accurate and robust
-* [Jul 2019] Initial release of MiDaS ([Link](https://github.com/intel-isl/MiDaS/releases/tag/v1))
+* [Jul 2019] Initial release of MiDaS ([Link](https://github.com/isl-org/MiDaS/releases/tag/v1))

### Setup

1) Pick one or more models and download corresponding weights to the `weights` folder:

-- For highest quality: [dpt_large](https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt)
-- For moderately less quality, but better speed on CPU and slower GPUs: [dpt_hybrid](https://github.com/intel-isl/DPT/releases/download/1_0/dpt_hybrid-midas-501f0c75.pt)
-- For real-time applications on resource-constrained devices: [midas_v21_small](https://github.com/AlexeyAB/MiDaS/releases/download/midas_dpt/midas_v21_small-70d6b9c8.pt)
-- Legacy convolutional model: [midas_v21](https://github.com/AlexeyAB/MiDaS/releases/download/midas_dpt/midas_v21-f6b98070.pt)
+- For highest quality: [dpt_large](https://github.com/isl-org/MiDaS/releases/download/v3/dpt_large-midas-2f21e586.pt)
+- For moderately less quality, but better speed on CPU and slower GPUs: [dpt_hybrid](https://github.com/isl-org/MiDaS/releases/download/v3/dpt_hybrid-midas-501f0c75.pt)
+- For real-time applications on resource-constrained devices: [midas_v21_small](https://github.com/isl-org/MiDaS/releases/download/v2_1/midas_v21_small-70d6b9c8.pt)
+- Legacy convolutional model: [midas_v21](https://github.com/isl-org/MiDaS/releases/download/v2_1/midas_v21-f6b98070.pt)

2) Set up dependencies:

@@ -92,18 +92,18 @@ The pretrained model is also available on [PyTorch Hub](https://pytorch.org/hub/

#### via TensorFlow or ONNX

-See [README](https://github.com/intel-isl/MiDaS/tree/master/tf) in the `tf` subdirectory.
+See [README](https://github.com/isl-org/MiDaS/tree/master/tf) in the `tf` subdirectory.

Currently only supports MiDaS v2.1. DPT-based models to be added.


#### via Mobile (iOS / Android)

-See [README](https://github.com/intel-isl/MiDaS/tree/master/mobile) in the `mobile` subdirectory.
+See [README](https://github.com/isl-org/MiDaS/tree/master/mobile) in the `mobile` subdirectory.

#### via ROS1 (Robot Operating System)

-See [README](https://github.com/intel-isl/MiDaS/tree/master/ros) in the `ros` subdirectory.
+See [README](https://github.com/isl-org/MiDaS/tree/master/ros) in the `ros` subdirectory.

Currently only supports MiDaS v2.1. DPT-based models to be added.

@@ -119,10 +119,10 @@ Zero-shot error (the lower - the better) and speed (FPS):
| MiDaS v2.1 small [URL]() | 0.1344 | **0.1344** | 0.3370 | 29.27 | **13.43** | **14.53** | 30 |
| | | | | | | |
| **Big models:** | | | | | | | GPU RTX 3090 |
-| MiDaS v2 large [URL](https://github.com/intel-isl/MiDaS/releases/download/v2/model-f46da743.pt) | 0.1246 | 0.1290 | 0.3270 | 23.90 | 9.55 | 14.29 | 51 |
-| MiDaS v2.1 large [URL](https://github.com/AlexeyAB/MiDaS/releases/download/midas_dpt/midas_v21-f6b98070.pt) | 0.1295 | 0.1155 | 0.3285 | 16.08 | 8.71 | 12.51 | 51 |
-| MiDaS v3.0 DPT-Hybrid [URL](https://github.com/intel-isl/DPT/releases/download/1_0/dpt_hybrid-midas-501f0c75.pt) | 0.1106 | 0.0934 | 0.2741 | 11.56 | 8.69 | 10.89 | 46 |
-| MiDaS v3.0 DPT-Large [URL](https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt) | **0.1082** | **0.0888** | **0.2697** | **8.46** | **8.32** | **9.97** | 47 |
+| MiDaS v2 large [URL](https://github.com/isl-org/MiDaS/releases/download/v2/model-f46da743.pt) | 0.1246 | 0.1290 | 0.3270 | 23.90 | 9.55 | 14.29 | 51 |
+| MiDaS v2.1 large [URL](https://github.com/isl-org/MiDaS/releases/download/v2_1/midas_v21-f6b98070.pt) | 0.1295 | 0.1155 | 0.3285 | 16.08 | 8.71 | 12.51 | 51 |
+| MiDaS v3.0 DPT-Hybrid [URL](https://github.com/isl-org/MiDaS/releases/download/v3/dpt_hybrid-midas-501f0c75.pt) | 0.1106 | 0.0934 | 0.2741 | 11.56 | 8.69 | 10.89 | 46 |
+| MiDaS v3.0 DPT-Large [URL](https://github.com/isl-org/MiDaS/releases/download/v3/dpt_large-midas-2f21e586.pt) | **0.1082** | **0.0888** | **0.2697** | **8.46** | **8.32** | **9.97** | 47 |
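Since this commit is purely a link update, one quick way to review it is to check that the new release URLs actually resolve. A throwaway sketch of such a check (illustrative only, not part of the repository; the URLs are the ones introduced in the README diff above):

```python
import urllib.request

# New weight links introduced by this commit (from the README Setup section above).
NEW_LINKS = [
    "https://github.com/isl-org/MiDaS/releases/download/v3/dpt_large-midas-2f21e586.pt",
    "https://github.com/isl-org/MiDaS/releases/download/v3/dpt_hybrid-midas-501f0c75.pt",
    "https://github.com/isl-org/MiDaS/releases/download/v2_1/midas_v21_small-70d6b9c8.pt",
    "https://github.com/isl-org/MiDaS/releases/download/v2_1/midas_v21-f6b98070.pt",
]

for url in NEW_LINKS:
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:  # urllib follows the release redirect
        print(resp.status, url)
```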



hubconf.py (8 changes: 4 additions & 4 deletions)
@@ -20,7 +20,7 @@ def DPT_Large(pretrained=True, **kwargs):

if pretrained:
checkpoint = (
-"https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt"
+"https://github.com/isl-org/MiDaS/releases/download/v3/dpt_large-midas-2f21e586.pt"
)
state_dict = torch.hub.load_state_dict_from_url(
checkpoint, map_location=torch.device('cpu'), progress=True, check_hash=True
@@ -43,7 +43,7 @@ def DPT_Hybrid(pretrained=True, **kwargs):

if pretrained:
checkpoint = (
-"https://github.com/intel-isl/DPT/releases/download/1_0/dpt_hybrid-midas-501f0c75.pt"
+"https://github.com/isl-org/MiDaS/releases/download/v3/dpt_hybrid-midas-501f0c75.pt"
)
state_dict = torch.hub.load_state_dict_from_url(
checkpoint, map_location=torch.device('cpu'), progress=True, check_hash=True
@@ -62,7 +62,7 @@ def MiDaS(pretrained=True, **kwargs):

if pretrained:
checkpoint = (
-"https://github.com/intel-isl/MiDaS/releases/download/v2_1/model-f6b98070.pt"
+"https://github.com/isl-org/MiDaS/releases/download/v2_1/model-f6b98070.pt"
)
state_dict = torch.hub.load_state_dict_from_url(
checkpoint, map_location=torch.device('cpu'), progress=True, check_hash=True
@@ -81,7 +81,7 @@ def MiDaS_small(pretrained=True, **kwargs):

if pretrained:
checkpoint = (
-"https://github.com/intel-isl/MiDaS/releases/download/v2_1/model-small-70d6b9c8.pt"
+"https://github.com/isl-org/MiDaS/releases/download/v2_1/model-small-70d6b9c8.pt"
)
state_dict = torch.hub.load_state_dict_from_url(
checkpoint, map_location=torch.device('cpu'), progress=True, check_hash=True
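These are the entry points that `torch.hub` users call, so only the checkpoint URLs change and existing hub code keeps working. A minimal usage sketch (illustrative; assumes network access and the repo's dependencies such as timm; for real images the preprocessing transforms shipped with the repo should be applied):

```python
import torch

# Load one of the entry points defined in hubconf.py above.
model = torch.hub.load("isl-org/MiDaS", "DPT_Hybrid", pretrained=True)
model.eval()

# Dummy 384x384 NCHW input just to exercise the forward pass; real inputs
# need the repo's resize/normalization transforms.
with torch.no_grad():
    depth = model(torch.zeros(1, 3, 384, 384))
print(depth.shape)
```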
mobile/android/README.md (2 changes: 1 addition & 1 deletion)
@@ -18,4 +18,4 @@ To use another model, you should convert it to `model_opt.tflite` and place it t

----

-Original repository: https://github.com/intel-isl/MiDaS
+Original repository: https://github.com/isl-org/MiDaS
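The truncated context line above notes that another model should be converted to `model_opt.tflite`. A rough sketch of such a conversion (illustrative only; the SavedModel path is a placeholder, and the actual MiDaS mobile pipeline may use a different source format and optimization settings):

```python
import tensorflow as tf

# Illustrative conversion to the model_opt.tflite file the Android app expects.
# "saved_model_dir" is a placeholder for an exported TensorFlow SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_bytes = converter.convert()

with open("model_opt.tflite", "wb") as f:
    f.write(tflite_bytes)  # write the converted model bytes
```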
mobile/android/models/download.gradle (2 changes: 1 addition & 1 deletion)
@@ -1,4 +1,4 @@
-def modelFloatDownloadUrl = "https://github.com/intel-isl/MiDaS/releases/download/v2_1/model_opt.tflite"
+def modelFloatDownloadUrl = "https://github.com/isl-org/MiDaS/releases/download/v2_1/model_opt.tflite"
def modelFloatFile = "model_opt.tflite"

task downloadModelFloat(type: Download) {
mobile/ios/README.md (4 changes: 2 additions & 2 deletions)
@@ -33,7 +33,7 @@ pip install tensorflow

### Install TensorFlowLiteSwift via Cocoapods

-Set required TensorFlowLiteSwift version in the file (`0.0.1-nightly` is recommended): https://github.com/AlexeyAB/midas_tf_ios/blob/main/Podfile#L9
+Set required TensorFlowLiteSwift version in the file (`0.0.1-nightly` is recommended): https://github.com/isl-org/MiDaS/blob/master/mobile/ios/Podfile#L9

Install: brew, ruby, cocoapods

@@ -82,7 +82,7 @@ open(model_tflite_name, "wb").write("model.tflite")

----

-Original repository: https://github.com/intel-isl/MiDaS
+Original repository: https://github.com/isl-org/MiDaS


### Examples:
mobile/ios/RunScripts/download_models.sh (2 changes: 1 addition & 1 deletion)
@@ -3,7 +3,7 @@

TFLITE_MODEL="model_opt.tflite"
TFLITE_FILE="Midas/Model/${TFLITE_MODEL}"
-MODEL_SRC="https://github.com/intel-isl/MiDaS/releases/download/v2/${TFLITE_MODEL}"
+MODEL_SRC="https://github.com/isl-org/MiDaS/releases/download/v2/${TFLITE_MODEL}"

if test -f "${TFLITE_FILE}"; then
echo "INFO: TF Lite model already exists. Skip downloading and use the local model."
ros/README.md (6 changes: 3 additions & 3 deletions)
@@ -18,14 +18,14 @@ MiDaS is a neural network to compute depth from a single image.

* install ROS Melodic for Ubuntu 17.10 / 18.04:
```bash
-wget https://github.com/intel-isl/MiDaS/master/ros/additions/install_ros_melodic_ubuntu_17_18.sh
+wget https://github.com/isl-org/MiDaS/master/ros/additions/install_ros_melodic_ubuntu_17_18.sh
./install_ros_melodic_ubuntu_17_18.sh
```

or Noetic for Ubuntu 20.04:

```bash
-wget https://github.com/intel-isl/MiDaS/master/ros/additions/install_ros_noetic_ubuntu_20.sh
+wget https://github.com/isl-org/MiDaS/master/ros/additions/install_ros_noetic_ubuntu_20.sh
./install_ros_noetic_ubuntu_20.sh
```

@@ -61,7 +61,7 @@ source ~/.bashrc
cd ~/
mkdir catkin_ws
cd catkin_ws
-git clone https://github.com/intel-isl/MiDaS
+git clone https://github.com/isl-org/MiDaS
mkdir src
cp -r MiDaS/ros/* src

ros/additions/downloads.sh (2 changes: 1 addition & 1 deletion)
@@ -1,5 +1,5 @@
mkdir ~/.ros
-wget https://github.com/intel-isl/MiDaS/releases/download/v2_1/model-small-traced.pt
+wget https://github.com/isl-org/MiDaS/releases/download/v2_1/model-small-traced.pt
cp ./model-small-traced.pt ~/.ros/model-small-traced.pt
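The file fetched here, `model-small-traced.pt`, appears to be a TorchScript trace that the ROS nodes load directly. An illustrative sanity check that the copy in `~/.ros` deserializes (assumes the file is a TorchScript archive, as the `-traced` suffix suggests):

```python
import os
import torch

# Illustrative check only: verify the traced model downloaded by downloads.sh
# can be deserialized on CPU.
path = os.path.expanduser("~/.ros/model-small-traced.pt")
model = torch.jit.load(path, map_location="cpu")
model.eval()
print(type(model))
```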


ros/midas_cpp/package.xml (2 changes: 1 addition & 1 deletion)
@@ -6,7 +6,7 @@

<maintainer email="alexeyab84@gmail.com">Alexey Bochkovskiy</maintainer>
<license>MIT</license>
-<url type="website">https://github.com/AlexeyAB/midas_ros</url>
+<url type="website">https://github.com/isl-org/MiDaS/tree/master/ros</url>
<!-- <author email="alexeyab84@gmail.com">Alexey Bochkovskiy</author> -->


tf/README.md (10 changes: 5 additions & 5 deletions)
@@ -11,8 +11,8 @@

### Run inference on TensorFlow-model by using TensorFlow

-1) Download the model weights [model-f6b98070.pb](https://github.com/intel-isl/MiDaS/releases/download/v2_1/model-f6b98070.pb)
-and [model-small.pb](https://github.com/intel-isl/MiDaS/releases/download/v2_1/model-small.pb) and place the
+1) Download the model weights [model-f6b98070.pb](https://github.com/isl-org/MiDaS/releases/download/v2_1/model-f6b98070.pb)
+and [model-small.pb](https://github.com/isl-org/MiDaS/releases/download/v2_1/model-small.pb) and place the
file in the `/tf/` folder.

2) Set up dependencies:
@@ -47,8 +47,8 @@ pip install -I grpcio tensorflow==2.3.0 tensorflow-addons==0.11.2 numpy==1.18.0

### Run inference on ONNX-model by using ONNX-Runtime

-1) Download the model weights [model-f6b98070.onnx](https://github.com/intel-isl/MiDaS/releases/download/v2_1/model-f6b98070.onnx)
-and [model-small.onnx](https://github.com/intel-isl/MiDaS/releases/download/v2_1/model-small.onnx) and place the
+1) Download the model weights [model-f6b98070.onnx](https://github.com/isl-org/MiDaS/releases/download/v2_1/model-f6b98070.onnx)
+and [model-small.onnx](https://github.com/isl-org/MiDaS/releases/download/v2_1/model-small.onnx) and place the
file in the `/tf/` folder.

2) Set up dependencies:
@@ -87,7 +87,7 @@ pip install onnxruntime==1.5.2

### Make ONNX model from downloaded Pytorch model file

-1) Download the model weights [model-f6b98070.pt](https://github.com/intel-isl/MiDaS/releases/download/v2_1/model-f6b98070.pt) and place the
+1) Download the model weights [model-f6b98070.pt](https://github.com/isl-org/MiDaS/releases/download/v2_1/model-f6b98070.pt) and place the
file in the root folder.

2) Set up dependencies:
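For context, the ONNX weights referenced in this file are consumed with ONNX Runtime. A minimal inference sketch (illustrative only; the 384x384 input size and NCHW layout are assumptions based on the MiDaS preprocessing, and the repository's `tf` scripts remain the authoritative pipeline):

```python
import numpy as np
import onnxruntime as ort

# Illustrative ONNX Runtime inference with the downloaded model-f6b98070.onnx.
# The 1x3x384x384 float32 input is a placeholder; real images must be resized
# and normalized the same way as in the repository's tf/ scripts.
sess = ort.InferenceSession("model-f6b98070.onnx")
input_name = sess.get_inputs()[0].name

dummy = np.random.rand(1, 3, 384, 384).astype(np.float32)
inverse_depth = sess.run(None, {input_name: dummy})[0]
print(inverse_depth.shape)
```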
