| | CLIP | tiktoken |
|---|---|---|
| Mentions | 104 | 30 |
| Stars | 22,316 | 9,884 |
| Growth | 3.0% | 5.4% |
| Activity | 1.2 | 6.7 |
| Latest commit | 7 days ago | 27 days ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
CLIP
-
Anomaly Detection with FiftyOne and Anomalib
pip install -U huggingface_hub umap-learn git+https://github.com/openai/CLIP.git
-
How to Cluster Images
We will also need two more libraries: OpenAI’s CLIP GitHub repo, enabling us to generate image features with the CLIP model, and the umap-learn library, which will let us apply a dimensionality reduction technique called Uniform Manifold Approximation and Projection (UMAP) to those features to visualize them in 2D:
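A minimal sketch of that pipeline, assuming a local folder of JPEGs ("images/") and the ViT-B/32 checkpoint; both are illustrative choices, not details from the article:

```python
# Embed images with CLIP, then project the features to 2D with UMAP.
import glob

import clip    # installed from git+https://github.com/openai/CLIP.git
import torch
import umap    # installed via the umap-learn package
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

features = []
for path in glob.glob("images/*.jpg"):   # illustrative image folder
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        features.append(model.encode_image(image).cpu())

embeddings = torch.cat(features).float().numpy()
coords_2d = umap.UMAP(n_components=2).fit_transform(embeddings)  # (N, 2) points for plotting
```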
-
Show HN: Memories, FOSS Google Photos alternative built for high performance
The biggest missing feature in all these self-hosted photo hosting apps is the lack of real search. Being able to search for things like "beach at night" is a time saver compared to browsing through hundreds or thousands of photos. There are trained neural networks out there like https://github.com/openai/CLIP which are quite good.
-
Zero-Shot Prediction Plugin for FiftyOne
In computer vision, this is known as zero-shot learning, or zero-shot prediction, because the goal is to generate predictions without explicitly being given any example predictions to learn from. With the advent of high quality multimodal models like CLIP and foundation models like Segment Anything, it is now possible to generate remarkably good zero-shot predictions for a variety of computer vision tasks, including:
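The classic example is zero-shot image classification with CLIP, where the candidate labels are supplied as text prompts at inference time. A minimal sketch (the image path and label strings are illustrative assumptions):

```python
# Zero-shot classification: score one image against arbitrary text labels.
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

labels = ["a photo of a dog", "a photo of a cat", "a photo of a bird"]
image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)   # probabilities over the candidate labels

print(dict(zip(labels, probs[0].tolist())))
```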
-
A History of CLIP Model Training Data Advances
(Github Repo | Most Popular Model | Paper | Project Page)
-
NLP Algorithms for Clustering AI Content Search Keywords
the first thing that comes to mind is CLIP: https://github.com/openai/CLIP
-
How to Build a Semantic Search Engine for Emojis
Whenever I’m working on semantic search applications that connect images and text, I start with a family of models known as contrastive language image pre-training (CLIP). These models are trained on image-text pairs to generate similar vector representations or embeddings for images and their captions, and dissimilar vectors when images are paired with other text strings. There are multiple CLIP-style models, including OpenCLIP and MetaCLIP, but for simplicity we’ll focus on the original CLIP model from OpenAI. No model is perfect, and at a fundamental level there is no right way to compare images and text, but CLIP certainly provides a good starting point.
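As a rough illustration of that contrastive behaviour, the snippet below embeds one image and a few candidate captions with the original OpenAI CLIP model and compares them by cosine similarity; the file name and captions are made-up examples:

```python
# Compare an image against candidate captions via cosine similarity of CLIP embeddings.
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("query.jpg")).unsqueeze(0).to(device)
captions = ["a smiling face emoji", "a rocket launching", "a slice of pizza"]
text = clip.tokenize(captions).to(device)

with torch.no_grad():
    image_emb = model.encode_image(image)
    text_emb = model.encode_text(text)

# Normalise, then dot product = cosine similarity; the matching caption should score highest.
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
scores = (image_emb @ text_emb.T).squeeze(0)
print(dict(zip(captions, scores.tolist())))
```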
-
COMFYUI SDXL WORKFLOW INBOUND! Q&A NOW OPEN! (WIP EARLY ACCESS WORKFLOW INCLUDED!)
In the model card it says: pretrained text encoders (OpenCLIP-ViT/G and CLIP-ViT/L).
-
Stability Matrix v1.1.0 - Portable mode, Automatic updates, Revamped console, and more
Command: "C:\StabilityMatrix\Packages\stable-diffusion-webui\venv\Scripts\python.exe" -m pip install https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip --prefer-binary
-
[D] LLM or model that does image -> prompt?
CLIP might work for your needs.
tiktoken
-
FLaNK AI - 01 April 2024
-
GPT-3.5 crashes when it thinks about useRalativeImagePath too much
Their tokenizer is open source: https://github.com/openai/tiktoken
Data files that contain vocabulary are listed here: https://github.com/openai/tiktoken/blob/9e79899bc248d5313c7d...
-
How fast is JS tiktoken?
OpenAI's reference tokeniser - https://github.com/openai/tiktoken
-
Anthropic announces Claude 2.1 – 200k context, less refusals
ChatGPT presumably adds them as special tokens to the cl100k_base tokenizer, as they demo in the tiktoken documentation: https://github.com/openai/tiktoken#extending-tiktoken
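The README's "Extending tiktoken" example builds a new Encoding on top of cl100k_base with extra special tokens, roughly along these lines (the chat-marker tokens and ids mirror the documentation's example):

```python
import tiktoken

cl100k_base = tiktoken.get_encoding("cl100k_base")

# New encoding that reuses cl100k_base's regex and merge ranks, plus two special tokens.
enc = tiktoken.Encoding(
    name="cl100k_im",
    pat_str=cl100k_base._pat_str,
    mergeable_ranks=cl100k_base._mergeable_ranks,
    special_tokens={
        **cl100k_base._special_tokens,
        "<|im_start|>": 100264,
        "<|im_end|>": 100265,
    },
)

print(enc.encode("<|im_start|>hello<|im_end|>", allowed_special="all"))
```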
-
What is the best way to get an approximate number of tokens for a piece of text?
I want to measure the approximate number of tokens in a piece of text to understand if I will need to modify it before passing it into the context of an OpenAI API call. Tiktoken can do this, but I'm not sure if it's overkill to use that library just for this simple task. I don't need to actually tokenize the text, I just need an approximate count (e.g. within like 1% of the text's actual token length for text that represents the visible text on a webpage).
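For what it's worth, an exact count with tiktoken is only a few lines (the model name here is an illustrative assumption), so it may be simpler than approximating:

```python
import tiktoken

# Look up the tokenizer used by a given OpenAI model, then count tokens exactly.
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
text = "How many tokens does this sentence use?"
print(len(enc.encode(text)))
```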
-
Show HN: LLaMA tokenizer that runs in browser
https://platform.openai.com/tokenizer or the official python library tiktoken https://github.com/openai/tiktoken or this JS port of tiktoken https://github.com/dqbd/tiktoken
-
Made a GPT-3.5-Turbo and GPT-4 Tokenizer
It's built on top of the tiktoken library and is basically just a lambda function in the backend.
-
AiPrice - an API for calculating OpenAI tokens and pricing
-
Anyone able to explain what happened here?
"All" is a single token in OpenAI's tiktoken Tokenizer, unrelated to the token for capital "A". Even lowercase "all" is a distinct token from "All" or "ALL."
-
Which lib is the tokenizer page using to calculate the tokens?
check tiktoken
What are some alternatives?
open_clip - An open source implementation of CLIP.
tokenizer - Pure Go implementation of OpenAI's tiktoken tokenizer
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
daath-ai-parser - Daath AI Parser is an open-source application that uses OpenAI to parse visible text of HTML elements.
latent-diffusion - High-Resolution Image Synthesis with Latent Diffusion Models
skypilot - SkyPilot: Run LLMs, AI, and Batch jobs on any cloud. Get maximum savings, highest GPU availability, and managed execution—all with a simple interface.
disco-diffusion
bricks - Open-source natural language enrichments at your fingertips.
DALLE2-pytorch - Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch
terminal-copilot - A smart terminal assistant that helps you find the right command.
BLIP - PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
jupyter-scheduler - Run Jupyter notebooks as jobs