A LLaVA model fine-tuned from Llama 3 Instruct, achieving higher scores on several benchmarks.
vision · 8b
233.6K Pulls · Updated 7 months ago
7d4b165b1c5e · 17GB
model · arch llama · parameters 8.03B · quantization F16 · 16GB
projector · arch clip · parameters 312M · quantization F16 · 624MB
params · 124B
{
  "num_ctx": 4096,
  "num_keep": 4,
  "stop": [
    "<|start_header_id|>",
    "<|en…
template · 254B
{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .P…
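The params and template blobs above correspond to directives in an Ollama Modelfile. A minimal sketch of how these settings would be declared when customizing the model; the FROM line is an assumption, and the stop list is shown only as far as the truncated preview above reveals it:

```
# Hypothetical Modelfile sketch; the base model name and any elided
# stop tokens are assumptions, not the exact contents of the blobs.
FROM llava-llama3

# Parameters from the params blob: a 4096-token context window, and
# keeping the first 4 tokens when the context is truncated.
PARAMETER num_ctx 4096
PARAMETER num_keep 4
PARAMETER stop "<|start_header_id|>"
```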
Readme

llava-llama3 is a LLaVA model fine-tuned from Llama 3 Instruct and CLIP-ViT-Large-patch14-336 with ShareGPT4V-PT and InternVL-SFT by XTuner.
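Since the card lists a CLIP vision projector alongside the language model, a typical invocation passes a local image path in the prompt. A sketch assuming a working Ollama installation; the image file name is hypothetical:

```
# Pull the ~17GB F16 weights, then query the model with a local image.
ollama pull llava-llama3
ollama run llava-llama3 "Describe this image: ./photo.jpg"
```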