Embedding models trained on very large sentence-level datasets.
embedding · 22m · 33m
200K Pulls · Updated 7 months ago
4f5da3bd944d · 67MB
model
  arch bert · parameters 33.2M · quantization F16 · 67MB

params
  {
    "num_ctx": 256
  }
  16B

license
  Apache License, Version 2.0, January 2004 · 11kB
Readme
Note: this model requires Ollama 0.1.26 or later (available from the Ollama downloads page). It can only be used to generate embeddings.
This project trains sentence-embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective.
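As a rough illustration of the idea (not the project's actual training code), a contrastive objective scores each sentence against all candidates in a batch and pushes the similarity of the true pair above the rest. A minimal NumPy sketch of such an InfoNCE-style loss, with made-up random vectors standing in for encoder outputs:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.05):
    """Toy InfoNCE-style contrastive loss: each anchor should be most
    similar to its own positive among all positives in the batch."""
    # L2-normalize rows so dot products become cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature  # (batch, batch) similarity matrix
    # Cross-entropy with the diagonal (the matching pairs) as the targets
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
batch = rng.normal(size=(4, 8))
# Perfectly matched pairs should score a lower loss than random pairings
loss_matched = info_nce_loss(batch, batch)
loss_random = info_nce_loss(batch, rng.normal(size=(4, 8)))
```

The temperature value here is an arbitrary placeholder; real training setups tune it along with the batch size.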
Usage
REST API
curl http://localhost:11434/api/embeddings -d '{
"model": "all-minilm",
"prompt": "The sky is blue because of Rayleigh scattering"
}'
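The endpoint responds with a JSON object whose `embedding` field holds the vector. A small parsing sketch; the numbers below are made-up placeholders, and the real vector for this model has far more dimensions:

```python
import json

# Made-up, truncated stand-in for a real /api/embeddings response body
body = '{"embedding": [0.57, -0.23, 0.17, 0.08]}'

vector = json.loads(body)["embedding"]
dim = len(vector)  # dimensionality of the (truncated) embedding
```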
Python library
ollama.embeddings(model='all-minilm', prompt='The sky is blue because of Rayleigh scattering')
JavaScript library
ollama.embeddings({ model: 'all-minilm', prompt: 'The sky is blue because of Rayleigh scattering' })
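Once you have two embedding vectors (e.g. the `embedding` field of the results above), semantically related sentences can be compared by cosine similarity. A self-contained sketch using placeholder vectors in place of real model output:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Placeholder vectors standing in for real model output, e.g. the
# 'embedding' field returned by ollama.embeddings(...)
sky = [0.9, 0.1, 0.2]
ocean = [0.8, 0.2, 0.3]
cooking = [0.1, 0.9, 0.1]

sky_vs_ocean = cosine_similarity(sky, ocean)
sky_vs_cooking = cosine_similarity(sky, cooking)
```

Scores closer to 1 indicate more similar sentences, which is the typical way embeddings from a model like this are used for search and clustering.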