English | 简体中文
Despite remarkable progress, multimodal foundation models still exhibit surprising deficiencies in spatial intelligence. In this work, we explore scaling up multimodal foundation models to cultivate spatial intelligence within the SenseNova-SI family, built upon established multimodal foundations including visual understanding models (i.e., Qwen3-VL and InternVL3) and unified understanding and generation models (i.e., Bagel). We take a principled approach to constructing high-performing and robust spatial intelligence by systematically curating SenseNova-SI-8M: eight million diverse data samples under a rigorous taxonomy of spatial capabilities. SenseNova-SI demonstrates unprecedented performance across a broad range of spatial intelligence benchmarks, while maintaining strong general multimodal understanding. More importantly, we analyze the impact of data scaling, discuss early signs of emergent generalization capabilities enabled by diverse data training, examine the risk of overfitting and language shortcuts, present a preliminary study on spatial chain-of-thought reasoning, and validate potential downstream applications. SenseNova-SI is an ongoing project, and this report will be updated continuously. All newly trained multimodal foundation models are publicly released to facilitate further research in this direction. In the future, SenseNova-SI will be integrated with larger-scale in-house models.
- [2025-12-06] As a first step, we have released a highly effective data subset, SenseNova-SI-800K, as well as SenseNova-SI-1.1-InternVL3-8B-800K, a model trained exclusively on the SenseNova-SI-800K subset.
- [2025-12-06] We present models built from additional base models, namely SenseNova-SI-1.2-InternVL3-8B, SenseNova-SI-1.1-Qwen2.5-VL-3B, SenseNova-SI-1.1-Qwen2.5-VL-7B, and SenseNova-SI-1.1-Qwen3-VL-8B. SenseNova-SI-1.2-InternVL3-8B achieves SOTA across eight recent spatial intelligence benchmarks.
- [2025-11-15] We have released SenseNova-SI-1.1-InternVL3-2B and SenseNova-SI-1.1-InternVL3-8B, which achieve state-of-the-art (SOTA) performance among open-source models of comparable size across five recent spatial intelligence benchmarks: VSI, MMSI, MindCube, ViewSpatial, and SITE.
| Model | Base Architecture | SI Dataset Scale | Other Remarks |
|---|---|---|---|
| SenseNova-SI-1.2-InternVL3-8B | InternVL3 | 10M | Best Model |
| SenseNova-SI-1.1-InternVL3-8B | InternVL3 | 8M | - |
| SenseNova-SI-1.1-InternVL3-2B | InternVL3 | 8M | - |
| SenseNova-SI-1.1-Qwen3-VL-8B | Qwen3-VL | 8M | - |
| SenseNova-SI-1.1-Qwen2.5-VL-7B | Qwen2.5-VL | 8M | - |
| SenseNova-SI-1.1-Qwen2.5-VL-3B | Qwen2.5-VL | 8M | - |
| SenseNova-SI-1.1-BAGEL-7B-MoT | BAGEL | 8M | unified understanding and generation model |
Currently, we build SenseNova-SI upon popular open-source foundation models to maximize compatibility with existing research pipelines. In this release, we present SenseNova-SI-1.2-InternVL3-8B, SenseNova-SI-1.1-InternVL3-8B, SenseNova-SI-1.1-Qwen3-VL-8B, SenseNova-SI-1.1-Qwen2.5-VL-7B, SenseNova-SI-1.1-Qwen2.5-VL-3B, and SenseNova-SI-1.1-InternVL3-2B, of which SenseNova-SI-1.2-InternVL3-8B achieves state-of-the-art performance among open-source models of comparable size across eight recent spatial intelligence benchmarks: VSI, MMSI, MindCube, ViewSpatial, SITE, BLINK, 3DSRBench, and EmbSpatial-Bench.
| Model | VSI | MMSI | MindCube-Tiny | ViewSpatial | SITE | BLINK | 3DSRBench | EmbSpatial-Bench |
|---|---|---|---|---|---|---|---|---|
| Open-source Models (~2B) | ||||||||
| InternVL3-2B | 32.9 | 26.5 | 37.5 | 32.5 | 30.0 | 50.8 | 47.7 | 60.1 |
| Qwen3-VL-2B-Instruct | 50.3 | 28.9 | 34.5 | 36.9 | 35.6 | 53.2 | 47.5 | 70.1 |
| MindCube-3B-RawQA-SFT | 17.2 | 1.7 | 51.7 | 24.1 | 6.3 | 35.1 | 2.8 | 37.0 |
| SpatialLadder-3B | 44.8 | 27.4 | 43.4 | 39.8 | 27.9 | 43.0 | 42.8 | 58.2 |
| SpatialMLLM-4B | 46.3 | 26.1 | 33.4 | 34.6 | 18.0 | 40.5 | 36.2 | 50.0 |
| VST-3B-SFT | 57.9 | 30.2 | 35.9 | 52.8 | 35.8 | 58.8 | 54.1 | 69.0 |
| Cambrian-S-3B | 57.3 | 25.2 | 32.5 | 39.0 | 28.3 | 37.7 | 50.9 | 63.5 |
| Open-source Models (~8B) | ||||||||
| InternVL3-8B | 42.1 | 28.0 | 41.5 | 38.6 | 41.1 | 53.5 | 44.3 | 76.4 |
| Qwen3-VL-8B-Instruct | 57.9 | 31.1 | 29.4 | 42.2 | 45.8 | 66.7 | 53.9 | 77.7 |
| BAGEL-7B-MoT | 31.4 | 31.0 | 34.7 | 41.3 | 37.0 | 63.7 | 50.2 | 73.1 |
| SpaceR-7B | 41.5 | 27.4 | 37.9 | 35.8 | 34.2 | 49.6 | 40.5 | 66.9 |
| ViLaSR-7B | 44.6 | 30.2 | 35.1 | 35.7 | 38.7 | 51.4 | 46.6 | 67.3 |
| VST-7B-SFT | 60.6 | 32.0 | 39.7 | 50.5 | 39.6 | 61.9 | 54.6 | 73.7 |
| Cambrian-S-7B | 67.5 | 25.8 | 39.6 | 40.9 | 33.0 | 37.9 | 54.8 | 72.8 |
| SenseNova-SI-1.2-InternVL3-8B | 69.6 | 42.6 | 89.0 | 58.8 | 49.0 | 69.4 | 60.1 | 77.7 |
| Proprietary Models | ||||||||
| Gemini-2.5-pro-2025-06 | 53.5 | 38.0 | 57.6 | 46.0 | 57.0 | 73.5 | 59.3 | 78.9 |
| Grok-4-2025-07-09 | 47.9 | 37.8 | 63.5 | 43.2 | 47.0 | 56.4 | 54.9 | 75.7 |
| GPT-5-2025-08-07 | 55.0 | 41.8 | 56.3 | 45.5 | 61.8 | 68.0 | 60.3 | 81.6 |
To further facilitate research in spatial intelligence, we have released a highly effective subset, SenseNova-SI-800K. Although the full SenseNova-SI effort is designed to study data scaling, we observe that this initial release already captures a substantial portion of the gains.
| Model | SI Dataset | VSI | MMSI | MindCube-Tiny | ViewSpatial | SITE |
|---|---|---|---|---|---|---|
| InternVL3-8B | - | 42.1 | 28.0 | 41.5 | 38.6 | 41.1 |
| VST-7B-SFT | VST-P-4.1M | 60.6 | 32.0 | 39.7 | 50.5 | 39.6 |
| Cambrian-S-7B | VSI-590K | 67.5 | 25.8 | 39.6 | 40.9 | 33.0 |
| *SenseNova-SI-1.1-InternVL3-8B-800K | SenseNova-SI-800K | 60.9 | 36.4 | 56.9 | 52.5 | 47.7 |
| SenseNova-SI-1.1-InternVL3-8B | SenseNova-SI-8M | 68.7 | 43.3 | 85.6 | 54.6 | 47.7 |
Note that *SenseNova-SI-1.1-InternVL3-8B-800K is trained on the SenseNova-SI-800K subset to provide a reference for researchers working with the 800K-scale dataset. It is released exclusively for scaling-law analysis and research validation, and is not intended to serve as a primary recommended model of the SenseNova-SI series.
Our data is stored in the `SenseNova-SI-800K.jsonl` file using the JSONL (JSON Lines) format, where each line represents an independent data entry. Each entry is a dictionary organized in the following format, containing three main fields: `id`, `conversations`, and `image`.
- The `id` field serves as a unique identifier for each data sample.
- The `image` field is a list of strings specifying image paths, all given as paths relative to the root data directory.
- The `conversations` field is a list of dialogue turns, where each turn is a dictionary with two key-value pairs: `from`, indicating the speaker identity (e.g., `human` or `gpt`), and `value`, containing the textual content. Within `value`, the `<image>` placeholder marks where images are inserted, and the number of `<image>` placeholders matches the number of images listed in the `image` field.
```json
{
  "id": 0,
  "conversations": [
    {"from": "human", "value": "<image>\nuser input <image>\nuser input"},
    {"from": "gpt", "value": "assistant output"},
    {"from": "human", "value": "<image>\nuser input"},
    {"from": "gpt", "value": "assistant output"}
  ],
  "image": ["path/to/image1.jpg", "path/to/image2.jpg", "path/to/image3.jpg"]
}
```
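As a minimal sketch of how the released file can be consumed and sanity-checked (a sketch only, assuming the referenced images are extracted under an illustrative `data/` root next to `SenseNova-SI-800K.jsonl`; adjust the paths to your setup):

```python
import json
from pathlib import Path

DATA_ROOT = Path("data")  # illustrative root data directory for the images (assumption)
JSONL_PATH = Path("SenseNova-SI-800K.jsonl")

with JSONL_PATH.open("r", encoding="utf-8") as f:
    for line in f:
        entry = json.loads(line)  # each line is an independent data entry
        # The number of <image> placeholders across the conversation should
        # match the number of images listed in the "image" field.
        n_placeholders = sum(
            turn["value"].count("<image>") for turn in entry["conversations"]
        )
        assert n_placeholders == len(entry["image"]), f"mismatch in entry {entry['id']}"
        # Image paths are given relative to the root data directory.
        image_paths = [DATA_ROOT / p for p in entry["image"]]
```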
We recommend using uv to manage the environment.
uv installation guide: https://docs.astral.sh/uv/getting-started/installation/#installing-uv
```bash
git clone git@github.com:OpenSenseNova/SenseNova-SI.git
cd SenseNova-SI/
uv sync --extra cu124  # or one of [cu118|cu121|cu124|cu126|cu128|cu129], depending on your CUDA version
source .venv/bin/activate
```
A simple image-free test to verify environment setup and download the model:
```bash
python example.py \
    --question "Hello" \
    --model_path sensenova/SenseNova-SI-1.2-InternVL3-8B
```
We fully support multiple model architectures. To use a different model, simply change the value of the `--model_path` argument; no other code changes are required.
To use BAGEL-MoT:
```bash
--model_path sensenova/SenseNova-SI-1.1-BAGEL-7B-MoT
```
To use Qwen3-VL:
```bash
--model_path sensenova/SenseNova-SI-1.1-Qwen3-VL-8B
```
To run the image generation example specifically for the BAGEL-7B-MoT architecture, use the following command:
```bash
python example_bagel.py \
    --model_path sensenova/SenseNova-SI-1.1-BAGEL-7B-MoT \
    --mode generate
```
This example is from SITE-Bench:
```bash
python example.py \
    --image_paths examples/Q1_1.png \
    --question "<image>\nConsider the real-world 3D locations of the objects. Which is closer to the sink, the toilet paper or the towel?\nOptions: \nA. toilet paper\nB. towel\nGive me the answer letter directly. The best answer is:" \
    --model_path sensenova/SenseNova-SI-1.2-InternVL3-8B
    # --model_path sensenova/SenseNova-SI-1.1-Qwen3-VL-8B
```
Details of Example 1
Q: Consider the real-world 3D locations of the objects. Which is closer to the sink, the toilet paper or the towel?\nOptions: \nA. toilet paper\nB. towel\nGive me the answer letter directly. The best answer is:
GT: A
This example is from MMSI-Bench:
```bash
python example.py \
    --image_paths examples/Q2_1.png examples/Q2_2.png \
    --question "<image><image>\nIf the landscape painting is on the east side of the bedroom, where is the window located in the bedroom?\nOptions: A. North side, B. South side, C. West side, D. East side\nAnswer with the option's letter from the given choices directly. Enclose the option's letter within ``." \
    --model_path sensenova/SenseNova-SI-1.2-InternVL3-8B
    # --model_path sensenova/SenseNova-SI-1.1-Qwen3-VL-8B
```
Details of Example 2
Q: If the landscape painting is on the east side of the bedroom, where is the window located in the bedroom?\nOptions: A. North side, B. South side, C. West side, D. East side\nAnswer with the option's letter from the given choices directly. Enclose the option's letter within ``.
GT: C
Prepare a file similar to examples/examples.jsonl, where each line represents a single question.
The model is loaded once and processes questions sequentially. The questions remain independent of each other.
For more details on the `jsonl` format, refer to the documentation for Single-Image Data and Multi-Image Data.
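For illustration only, such a file could be assembled with a short script like the one below, assuming the question file follows the same `id`/`conversations`/`image` schema documented above (the question text, image path, and output filename are placeholders; `examples/examples.jsonl` in the repository remains the authoritative reference):

```python
import json

# A hypothetical single-image question following the documented schema.
questions = [
    {
        "id": 0,
        "conversations": [
            {
                "from": "human",
                "value": "<image>\nWhich object is closer to the camera?\n"
                         "Options:\nA. chair\nB. table\nThe best answer is:",
            }
        ],
        "image": ["examples/Q1_1.png"],
    }
]

# Write one JSON object per line (JSONL), i.e., one question per line.
with open("my_questions.jsonl", "w", encoding="utf-8") as f:
    for q in questions:
        f.write(json.dumps(q, ensure_ascii=False) + "\n")
```

The resulting file can then be passed to `example.py` via `--jsonl_path`, as in the command below.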
```bash
python example.py \
    --jsonl_path examples/examples.jsonl \
    --model_path sensenova/SenseNova-SI-1.2-InternVL3-8B
    # --model_path sensenova/SenseNova-SI-1.1-Qwen3-VL-8B
```
To reproduce the benchmark results above, please refer to EASI to evaluate SenseNova-SI on mainstream spatial intelligence benchmarks.
EASI supports over 20 spatial intelligence models and more than 10 spatial benchmarks, and offers Docker support for one-click spatial intelligence evaluation.
```bibtex
@article{sensenova-si,
  title   = {Scaling Spatial Intelligence with Multimodal Foundation Models},
  author  = {Cai, Zhongang and Wang, Ruisi and Gu, Chenyang and Pu, Fanyi and Xu, Junxiang and Wang, Yubo and Yin, Wanqi and Yang, Zhitao and Wei, Chen and Sun, Qingping and Zhou, Tongxi and Li, Jiaqi and Pang, Hui En and Qian, Oscar and Wei, Yukun and Lin, Zhiqian and Shi, Xuanke and Deng, Kewang and Han, Xiaoyang and Chen, Zukai and Fan, Xiangyu and Deng, Hanming and Lu, Lewei and Pan, Liang and Li, Bo and Liu, Ziwei and Wang, Quan and Lin, Dahua and Yang, Lei},
  journal = {arXiv preprint arXiv:2511.13719},
  year    = {2025}
}
```

