OpenAI's CLIP neural network
Updated Feb 5, 2021 - Python
Run CLIP inference on the ImageNet dataset and use the resulting predictions as labels to train other models; then evaluate the trained models on the ImageNet validation set against either the original labels or the CLIP-generated labels.
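The pseudo-labeling step described above boils down to zero-shot classification: embed one text prompt per class, embed each image, and assign the class whose prompt is most similar. The sketch below illustrates that step with toy random embeddings standing in for the outputs of CLIP's `encode_image`/`encode_text`, so it runs without the model; names like `zero_shot_labels` are illustrative, not from the repository.

```python
import numpy as np

def cosine_similarity(a, b):
    # L2-normalize the rows of each matrix, then take pairwise dot products.
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def zero_shot_labels(image_embeddings, class_embeddings):
    """Assign each image the index of its most similar class prompt,
    mimicking CLIP's zero-shot classification step."""
    sims = cosine_similarity(image_embeddings, class_embeddings)
    return sims.argmax(axis=1)

# Toy stand-ins for real CLIP embeddings (512-dim, as ViT-B/32 produces).
rng = np.random.default_rng(0)
class_embeddings = rng.normal(size=(1000, 512))  # one prompt per ImageNet class
image_embeddings = rng.normal(size=(4, 512))     # a small batch of images

# These pseudo-labels would then serve as training targets for another model.
pseudo_labels = zero_shot_labels(image_embeddings, class_embeddings)
```

With real CLIP, the two embedding matrices would come from `model.encode_text` over prompts like "a photo of a {class}" and `model.encode_image` over the dataset.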
Code for studying OpenAI's CLIP explainability
A CLI tool/Python module for generating images from text using guided diffusion and CLIP from OpenAI.
Generative models for architecture prose and schematics
An easy-to-use, efficient tool for extracting OpenAI CLIP (global/grid) features from images and text.
Just playing with getting CLIP Guided Diffusion running locally, rather than having to use Colab.
Visual Search with OpenAI Clip
Search images by text input with CLIP
SpaceVector is a platform for semantic search over satellite images using state-of-the-art AI, aiming to make satellite imagery easier to use.
CLIPfa: Connecting Farsi Text and Images
Visual and Vision-Language Representation Pre-Training with Contrastive Learning
Search relevant images using text/image query.
CLIFS (CLIP-based Frame Selection) is a Python function that takes a video file and a text prompt, and uses the CLIP (Contrastive Language-Image Pre-training) model to find the video frame most similar to the prompt.
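The frame-selection idea behind CLIFS reduces to an argmax over cosine similarities between per-frame image embeddings and a single text embedding. A minimal sketch, assuming the embeddings have already been produced by CLIP (here supplied as plain arrays so the example is self-contained; `best_frame` is an illustrative name, not the repository's API):

```python
import numpy as np

def best_frame(frame_embeddings, text_embedding):
    """Return the index of the frame whose embedding has the highest
    cosine similarity to the text embedding."""
    # Normalize each frame embedding and the text embedding to unit length.
    frames = frame_embeddings / np.linalg.norm(frame_embeddings, axis=1, keepdims=True)
    text = text_embedding / np.linalg.norm(text_embedding)
    # Dot products of unit vectors are cosine similarities; pick the max.
    return int(np.argmax(frames @ text))
```

In the full pipeline, frames would be decoded from the video, embedded with `model.encode_image`, and the prompt embedded with `model.encode_text` before calling a function like this.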
Recommendation system that searches similar items
Group images by provided labels using OpenAI/CLIP
A dead-simple image search and image-text matching system for Bangla using CLIP
GUI to explore large image collections with text queries
KoCLIP: Korean port of OpenAI CLIP, in Flax