
docs: update finetuner docs #843

Merged · 9 commits · Oct 21, 2022
11 changes: 10 additions & 1 deletion · docs/user-guides/finetuner.md
@@ -7,6 +7,8 @@ This guide will show you how to use [Finetuner](https://finetuner.jina.ai) to fi
For installation and basic usage of Finetuner, please refer to [Finetuner documentation](https://finetuner.jina.ai).
You can also [learn more details about fine-tuning CLIP](https://finetuner.jina.ai/tasks/text-to-image/).

We use `finetuner`==0.6.2, `clip-as-service`==0.8.0 and `docarray`==0.17.0 in this tutorial.

## Prepare Training Data

Finetuner accepts training data and evaluation data in the form of {class}`~docarray.array.document.DocumentArray`.
@@ -92,6 +94,7 @@ run = finetuner.fit(
learning_rate=1e-5,
loss='CLIPLoss',
cpu=False,
to_onnx=True,
> **Member:** As Finetuner supports open_clip, can we fine-tune `model='ViT-B-32::openai'` in this tutorial?
>
> **Member:** This model name does not match the one in Finetuner.
)
```

@@ -174,10 +177,16 @@ executors:
replicas: 1
```


```{warning}
Note that Finetuner currently supports only the ViT-B/32 CLIP model. The model name should match the fine-tuned model, or you will get incorrect output.
Note that `finetuner`==0.6.2 doesn't support these new CLIP models trained on Laion2B:
- ViT-B-32::laion2b-s34b-b79k
- ViT-L-14::laion2b-s32b-b82k
- ViT-H-14::laion2b-s32b-b79k
- ViT-g-14::laion2b-s12b-b42k
```
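To fail fast rather than discover this at training time, a small guard in plain Python (the set is taken verbatim from the warning above; the helper name is our own) can reject the unsupported names up front:

```python
# Laion2B-trained models that finetuner==0.6.2 cannot fine-tune,
# copied from the warning above.
UNSUPPORTED_LAION2B = {
    'ViT-B-32::laion2b-s34b-b79k',
    'ViT-L-14::laion2b-s32b-b82k',
    'ViT-H-14::laion2b-s32b-b79k',
    'ViT-g-14::laion2b-s12b-b42k',
}

def assert_supported(model_name: str) -> str:
    """Raise early if the chosen CLIP model cannot be fine-tuned."""
    if model_name in UNSUPPORTED_LAION2B:
        raise ValueError(
            f'{model_name} is trained on Laion2B and is not supported '
            f'by finetuner==0.6.2'
        )
    return model_name

assert_supported('ViT-B-32::openai')  # returns the name unchanged
```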
> **Member:** Can we provide the full list of models supported in Finetuner? This is not a release note, where partial information is fine; as the official documentation, we should provide complete information.
>
> **Member (Author):** Model names in Finetuner are not the same as in clip-as-service, which is confusing. Should we create a mapping for the model names?


You can now start the `clip_server` using the fine-tuned model to get a performance boost:

```bash
# (command truncated in the diff view)
```