OpenAI's CLIP neural network
Updated Feb 5, 2021 - Python
Run CLIP inference on the ImageNet dataset, use those inferences as labels to train other models, and then evaluate the trained models on the ImageNet validation set against either the original labels or the CLIP labels.
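The pseudo-labeling step described above can be sketched as follows. This assumes the `openai/CLIP` package (`pip install git+https://github.com/openai/CLIP.git`) and a DataLoader that already applies CLIP's image preprocessing; the prompt template and function names are illustrative, not the repo's actual code.

```python
import torch

def pseudo_labels(image_features: torch.Tensor,
                  text_features: torch.Tensor) -> torch.Tensor:
    """Index of the most similar class prompt for each image."""
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    return (image_features @ text_features.T).argmax(dim=-1)

def label_imagenet(class_names, loader, device="cuda"):
    # Heavy dependencies are imported lazily so the similarity logic
    # above stays usable without the `clip` package installed.
    import clip  # pip install git+https://github.com/openai/CLIP.git
    model, _ = clip.load("ViT-B/32", device=device)
    prompts = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
    with torch.no_grad():
        text_features = model.encode_text(prompts)
        labels = []
        for images, _ in loader:  # images assumed CLIP-preprocessed
            labels.append(pseudo_labels(model.encode_image(images.to(device)),
                                        text_features))
    return torch.cat(labels)
```

The resulting label tensor can be saved and used as a drop-in replacement for the ground-truth targets when training a downstream classifier.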
A CLI tool / Python module for generating images from text using guided diffusion and CLIP from OpenAI.
Easy-to-use, efficient code for extracting OpenAI CLIP (global/grid) features from images and text.
Just playing with getting CLIP Guided Diffusion running locally, rather than having to use Colab.
CLIFS (CLIP-based Frame Selection) is a Python function that takes in a video file and a text prompt as input, and uses the CLIP (Contrastive Language-Image Pre-training) model to find the frame in the video that is most similar to the given text prompt.
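The frame-selection idea behind CLIFS can be sketched like this: sample frames from the video, embed them and the prompt with CLIP, and keep the frame with the highest cosine similarity. This assumes OpenCV for decoding and the `openai/CLIP` package; the function names are illustrative, not the CLIFS API.

```python
import torch

def best_index(frame_features: torch.Tensor, text_feature: torch.Tensor) -> int:
    """Index of the frame whose CLIP embedding is closest to the text."""
    frame_features = frame_features / frame_features.norm(dim=-1, keepdim=True)
    text_feature = text_feature / text_feature.norm(dim=-1, keepdim=True)
    return int((frame_features @ text_feature.T).squeeze(-1).argmax())

def find_frame(video_path: str, prompt: str, stride: int = 30, device="cpu"):
    import clip, cv2  # heavy dependencies imported lazily
    from PIL import Image
    model, preprocess = clip.load("ViT-B/32", device=device)
    cap, frames, idx = cv2.VideoCapture(video_path), [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:  # sample every `stride`-th frame
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            frames.append(preprocess(Image.fromarray(rgb)))
        idx += 1
    cap.release()
    with torch.no_grad():
        feats = model.encode_image(torch.stack(frames).to(device))
        text = model.encode_text(clip.tokenize([prompt]).to(device))
    return best_index(feats, text) * stride  # original frame number
```

Sampling every `stride`-th frame keeps the embedding cost manageable for long videos at the price of some temporal precision.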
Group images by provided labels using OpenAI/CLIP
A dead-simple image search and image-text matching system for Bangla using CLIP
GUI to explore large image collections with text queries
KoCLIP: Korean port of OpenAI CLIP, in Flax
Feed forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt
ChatSense - a Llama 2 + Code Llama + CLIP-based chatbot
🎯 Task-oriented embedding tuning for BERT, CLIP, etc.
Object tracking implemented with the Roboflow Inference API, DeepSort, and OpenAI CLIP.
Visual semantic search system | Search across products | Text Query --> Visual Retrieval
Computation-free personalization at test time for sEMG gesture classification. Fast (GPU/CPU) Ninapro API.
Search for images using text and images.
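Searching a collection with either a text or an image query works the same way, because CLIP embeds both into one space. A minimal ranking sketch, assuming `gallery_features` was precomputed with `model.encode_image` over the collection and the query was encoded with `encode_text` or `encode_image`:

```python
import torch

def search(gallery_features: torch.Tensor,
           query_feature: torch.Tensor, k: int = 5) -> list[int]:
    """Indices of the k gallery images most similar to the query."""
    g = gallery_features / gallery_features.norm(dim=-1, keepdim=True)
    q = query_feature / query_feature.norm(dim=-1, keepdim=True)
    scores = (g @ q.T).squeeze(-1)          # cosine similarity per image
    return scores.topk(min(k, len(scores))).indices.tolist()
```

For large galleries the normalized features are typically stored in an approximate-nearest-neighbor index instead of scored exhaustively.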
CLIP (Contrastive Language–Image Pre-training) for Bangla.
Fine-tune OpenAI's CLIP model for classification tasks
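One common recipe for adapting CLIP to a classification task is a linear probe: freeze the image encoder and train only a linear head on its features (the setup the CLIP paper uses for its linear-probe evaluations). The training loop below is a generic sketch under that assumption, not the linked repo's code.

```python
import torch
import torch.nn as nn

class ClipLinearProbe(nn.Module):
    def __init__(self, encoder: nn.Module, feature_dim: int, num_classes: int):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False          # keep CLIP weights frozen
        self.head = nn.Linear(feature_dim, num_classes)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():                # only the head is trained
            feats = self.encoder(images).float()
        return self.head(feats)

def train_step(model, images, labels, optimizer):
    logits = model(images)
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Unfreezing the encoder with a much smaller learning rate is the usual next step when the linear probe underfits.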