
Jina-VLM: Small Multilingual Vision Language Model

Published Dec 3, 2025 · Version 1 · arXiv:2512.04032

Authors

Andreas Koukounas, Georgios Mastrapas, Florian Hönicke, Sedigheh Eslami, Guillaume Roncari, Scott Martens, Han Xiao

Categories

cs.CL, cs.AI, cs.CV

Abstract

We present Jina-VLM, a 2.4B parameter vision-language model that achieves state-of-the-art multilingual visual question answering among open 2B-scale VLMs. The model couples a SigLIP2 vision encoder with a Qwen3 language backbone through an attention-pooling connector that enables token-efficient processing of arbitrary-resolution images. Across standard VQA benchmarks and multilingual evaluations, Jina-VLM outperforms comparable models while preserving competitive text-only performance.
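To make the architecture concrete, below is a minimal sketch of what an attention-pooling connector between a vision encoder and a language backbone might look like. This is not the authors' implementation; the class name, hidden dimensions, query count, and head count are all assumptions for illustration. The key idea it demonstrates is the one stated in the abstract: a variable number of patch embeddings (from an arbitrary-resolution image) is pooled into a fixed, smaller set of visual tokens before entering the language model.

```python
# Hypothetical sketch of an attention-pooling connector.
# All names, dimensions, and the pooling ratio are assumptions,
# not details taken from the Jina-VLM paper.
import torch
import torch.nn as nn

class AttentionPoolingConnector(nn.Module):
    """Pools a variable-length sequence of vision-encoder patch embeddings
    into a fixed number of tokens via cross-attention, then projects them
    into the language model's embedding space."""

    def __init__(self, vision_dim=1152, llm_dim=2048, num_queries=64, num_heads=8):
        super().__init__()
        # Learned query tokens that attend over the image patch embeddings.
        self.queries = nn.Parameter(torch.randn(num_queries, vision_dim) * 0.02)
        self.attn = nn.MultiheadAttention(vision_dim, num_heads, batch_first=True)
        self.proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, patch_embeddings: torch.Tensor) -> torch.Tensor:
        # patch_embeddings: (batch, num_patches, vision_dim).
        # num_patches varies with image resolution; the output length is fixed.
        batch = patch_embeddings.shape[0]
        q = self.queries.unsqueeze(0).expand(batch, -1, -1)
        pooled, _ = self.attn(q, patch_embeddings, patch_embeddings)
        return self.proj(pooled)  # (batch, num_queries, llm_dim)

# Usage: 1024 patch embeddings from a high-resolution image are reduced to
# 64 visual tokens suitable for the language model's input sequence.
connector = AttentionPoolingConnector()
visual_tokens = connector(torch.randn(1, 1024, 1152))
print(visual_tokens.shape)  # torch.Size([1, 64, 2048])
```

The token-efficiency claim follows from this design choice: because the connector emits a fixed number of visual tokens regardless of how many patches the vision encoder produces, higher-resolution images do not inflate the language model's context length.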
