Simple implementation of OpenAI CLIP model in PyTorch.
Updated Apr 17, 2024 · Jupyter Notebook
🎯 Task-oriented embedding tuning for BERT, CLIP, etc.
Object tracking implemented with the Roboflow Inference API, DeepSort, and OpenAI CLIP.
A CLI tool/Python module for generating images from text using guided diffusion and CLIP from OpenAI.
Just playing with getting CLIP Guided Diffusion running locally, rather than having to use Colab.
Official implementation for "Blended Diffusion for Text-driven Editing of Natural Images" [CVPR 2022]
Sort a folder of images according to their similarity with provided text in your browser (uses a browser-ported version of OpenAI's CLIP model and the web's new File System Access API)
Feed forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt
KoCLIP: Korean port of OpenAI CLIP, in Flax
CLIPfa: Connecting Farsi Text and Images
An easy-to-use, user-friendly, and efficient codebase for extracting OpenAI CLIP (global/grid) features from images and text, respectively.
Run CLIP inference on ImageNet, use the predictions as labels to train other models, then evaluate the trained models on the ImageNet validation set using either the original labels or the CLIP labels.
Code for studying the explainability of OpenAI's CLIP.
A dead-simple image search and image-text matching system for Bangla using CLIP
CLIP (Contrastive Language–Image Pre-training) for Bangla.
CLIP as a service - embed images and sentences for object recognition, visual reasoning, image classification, and reverse image search.
OpenAI CLIP + Faiss image semantic search
Recommendation system that searches for similar items
OpenAI's CLIP neural network
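Several of the projects above (e.g. the CLIP + Faiss entry) follow the same retrieval pattern: embed a text query and a set of images into CLIP's shared space, then rank images by cosine similarity. A minimal sketch of that ranking step, using brute-force NumPy in place of a Faiss index and random stand-in vectors in place of real CLIP embeddings (both are placeholder assumptions, not any specific repo's code):

```python
import numpy as np

def top_k_matches(text_emb, image_embs, k=3):
    """Return indices of the k image embeddings most similar to the text query.

    Cosine similarity via L2-normalized dot products; a Faiss IndexFlatIP
    over normalized vectors computes the same ranking at scale.
    """
    text_emb = text_emb / np.linalg.norm(text_emb)
    image_embs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    sims = image_embs @ text_emb          # one similarity score per image
    return np.argsort(-sims)[:k]          # indices of the best matches

# Toy example: random stand-in embeddings (real ones would come from CLIP's
# text and image encoders, e.g. 512-dim for ViT-B/32).
rng = np.random.default_rng(0)
image_embs = rng.normal(size=(100, 512))
text_emb = image_embs[42] + 0.01 * rng.normal(size=512)  # query close to image 42
print(top_k_matches(text_emb, image_embs, k=3)[0])       # prints 42
```

Swapping the NumPy search for `faiss.IndexFlatIP` (or an approximate index) is what makes this scale to millions of images.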