Sort/search images using OpenAI's CLIP in your browser
This web app sorts and searches through images in a directory on your computer using OpenAI's CLIP model and the File System Access API. Here's the GitHub repo for this web app, and here's the GitHub repo for the web-ported CLIP models. Feel free to open an issue if you have any questions about this demo, but note that I'm not actively adding more features right now.
All processing happens in your browser, on your device; your images are never uploaded to a server.
Heads up: Your browser is missing one or more features (the File System Access API and/or credentialless COEP) that this demo needs. At the time of writing, the demo works in Chromium-based browsers like Chrome and Edge; other browsers like Firefox and Safari are often slower to implement cutting-edge features.
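For reference, here is a minimal sketch (in TypeScript) of how a page can detect these two requirements; the exact checks this demo performs may differ, and the function name is illustrative:

```ts
// Minimal feature-detection sketch; the exact checks this demo runs may differ.
function checkBrowserSupport(): string[] {
  const missing: string[] = [];
  // File System Access API: needed to read the image directory and write the embeddings file.
  if (!("showDirectoryPicker" in window)) {
    missing.push("File System Access API");
  }
  // Cross-origin isolation (e.g. via credentialless COEP) enables SharedArrayBuffer,
  // which multi-threaded WebAssembly inference relies on.
  if (!window.crossOriginIsolated) {
    missing.push("cross-origin isolation (COEP)");
  }
  return missing;
}

const missing = checkBrowserSupport();
if (missing.length > 0) {
  console.warn(`Missing features: ${missing.join(", ")}`);
}
```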
Step 1: Choose a model:
Step 2: Download and initialize the models.
Download image model:
Download text model:
Initialize workers:
Number of image embedding workers/threads:
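The worker count above controls how many images are embedded in parallel. Below is a hedged sketch of what initializing such a pool could look like; the worker script name "image-embedder.worker.js" and the message format are assumptions, not the demo's actual files or protocol:

```ts
// Hedged sketch of a pool of embedding workers. "image-embedder.worker.js" and
// the init message shape are hypothetical, not the demo's actual file or protocol.
function createWorkerPool(numWorkers: number): Worker[] {
  const workers: Worker[] = [];
  for (let i = 0; i < numWorkers; i++) {
    const worker = new Worker("image-embedder.worker.js");
    // Ask each worker to load the image model before any embedding work is sent.
    worker.postMessage({ type: "init", model: "clip-image" });
    workers.push(worker);
  }
  return workers;
}

// One worker per logical core is a reasonable default.
const pool = createWorkerPool(navigator.hardwareConcurrency ?? 4);
```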
Step 3: Pick a directory of images (images in subdirectories will be included).
or (remove NSFW:)
Download progress:
Loading existing embeddings: none
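Step 3 relies on the File System Access API's directory picker. A sketch of how a directory can be traversed recursively to collect image files follows; the list of file extensions is an assumption:

```ts
// Sketch: recursively collect image files from a user-picked directory.
// The extension list is an assumption about which formats count as images.
const IMAGE_EXTENSIONS = [".jpg", ".jpeg", ".png", ".webp", ".gif"];

async function collectImageFiles(
  dir: FileSystemDirectoryHandle,
): Promise<FileSystemFileHandle[]> {
  const files: FileSystemFileHandle[] = [];
  for await (const entry of dir.values()) {
    if (entry.kind === "directory") {
      // Images in subdirectories are included, as Step 3 says.
      files.push(...(await collectImageFiles(entry)));
    } else if (IMAGE_EXTENSIONS.some((ext) => entry.name.toLowerCase().endsWith(ext))) {
      files.push(entry);
    }
  }
  return files;
}

// showDirectoryPicker() must be called from a user gesture, e.g. a button click.
const dirHandle = await window.showDirectoryPicker();
const imageFiles = await collectImageFiles(dirHandle);
```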
Step 4: Compute image embeddings. (They will be saved as <ModelName>_embeddings.tsv in the selected directory.)
0 images embedded (? ms per image)
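The embeddings file mentioned in Step 4 can be written back into the chosen directory using the same File System Access API handle. A minimal sketch, assuming a simple filename-plus-vector TSV layout (the demo's real column format may differ):

```ts
// Sketch: persist embeddings as <ModelName>_embeddings.tsv in the picked directory.
// The "filename<TAB>vector components" layout is an assumption; the demo's real
// file format may differ.
async function saveEmbeddings(
  dirHandle: FileSystemDirectoryHandle,
  modelName: string,
  embeddings: Map<string, Float32Array>,
): Promise<void> {
  const fileHandle = await dirHandle.getFileHandle(`${modelName}_embeddings.tsv`, { create: true });
  const writable = await fileHandle.createWritable();
  const lines: string[] = [];
  for (const [filename, vector] of embeddings) {
    lines.push(filename + "\t" + Array.from(vector).join("\t"));
  }
  await writable.write(lines.join("\n") + "\n");
  await writable.close();
}
```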
Step 5: Existing embeddings found.
Only needed if you've added or changed images:
Only new images?
Step 6: Enter a search term or
Max results: / Skip first: / Score-based visibility:
Results (hover over images for cosine similarities)
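Search itself reduces to ranking image embeddings by cosine similarity against the query's text embedding, which is what the hover scores show. A minimal sketch, assuming embeddings are kept in a Map keyed by filename; the function names and the maxResults/skipFirst parameters (mirroring the controls above) are illustrative:

```ts
// Sketch: rank images by cosine similarity between the text query embedding and
// each image embedding. maxResults and skipFirst mirror the controls above;
// the function names are illustrative.
function cosineSimilarity(a: Float32Array, b: Float32Array): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function rankImages(
  queryEmbedding: Float32Array,
  imageEmbeddings: Map<string, Float32Array>,
  maxResults: number,
  skipFirst: number,
): { filename: string; score: number }[] {
  return [...imageEmbeddings]
    .map(([filename, vector]) => ({ filename, score: cosineSimilarity(queryEmbedding, vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(skipFirst, skipFirst + maxResults);
}
```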