
Mobile-VideoGPT

Fast and Accurate Video Understanding Language Model

Abdelrahman Shaker, Muhammad Maaz, Chenhui Gou, Hamid Rezatofighi, Salman Khan, and Fahad Khan




Mobile-VideoGPT Overview

Overall Architecture of Mobile-VideoGPT

We propose Mobile-VideoGPT, an efficient multimodal framework designed to operate with fewer than a billion parameters. Unlike traditional video large multimodal models, Mobile-VideoGPT consists of lightweight dual visual encoders, efficient projectors, and a small language model (SLM) with real-time throughput. To further improve efficiency, we present an Attention-Based Frame Scoring mechanism to select key-frames, along with an efficient token projector that prunes redundant visual tokens and preserves essential contextual cues. We evaluate our model across six well-established video understanding benchmarks (e.g., MVBench, EgoSchema, NextQA, and PerceptionTest), and our results show that Mobile-VideoGPT-0.5B can generate up to 46 tokens per second while outperforming existing state-of-the-art 0.5B-parameter competitors.
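
For intuition, the sketch below shows one way attention-based frame scoring could be realized: each frame is scored from the encoder's CLS-to-patch attention map and only the top-k frames are kept as key-frames. The function name, tensor shapes, and the mean-pooling of attention weights are illustrative assumptions rather than the exact mechanism used in Mobile-VideoGPT.

import torch

def select_keyframes(cls_to_patch_attn: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Score frames by the attention their patch tokens receive from the CLS token.

    cls_to_patch_attn: (num_frames, num_patches) attention weights, assumed to be
    exposed by the image encoder. Returns the indices of the k highest-scoring
    frames, kept in temporal order.
    """
    frame_scores = cls_to_patch_attn.mean(dim=-1)                            # (num_frames,)
    topk = torch.topk(frame_scores, k=min(k, frame_scores.numel())).indices  # best frames
    return torch.sort(topk).values                                           # restore temporal order

# Example with random attention maps for 16 frames of 196 patches each.
print(select_keyframes(torch.rand(16, 196), k=8))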

Mobile-VideoGPT Architectural Overview


📊 Evaluation Summary on 6 benchmarks:

Contributions

🛠️ Installation

We recommend setting up a conda environment for the project:

conda create --name=mobile_videogpt python=3.11
conda activate mobile_videogpt

git clone https://github.com/Amshaker/Mobile-VideoGPT
cd Mobile-VideoGPT

pip install torch==2.1.2 torchvision==0.16.2 --index-url https://download.pytorch.org/whl/cu118
pip install transformers==4.41.0

pip install -r requirements.txt

export PYTHONPATH="./:$PYTHONPATH"

Install VideoMamba, the efficient video encoder used in our architecture. Clone and install it using the following commands:

git clone https://github.com/OpenGVLab/VideoMamba
cd VideoMamba
pip install -e causal-conv1d
pip install -e mamba

Additionally, install FlashAttention for training:

pip install ninja

git clone https://github.com/HazyResearch/flash-attention.git
cd flash-attention
python setup.py install
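
After these steps, a quick sanity check (optional, not part of the official setup) can confirm that the CUDA build of PyTorch and the key packages import correctly:

import torch
import transformers

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("transformers:", transformers.__version__)

try:
    import flash_attn  # only required for training
    print("flash-attn:", flash_attn.__version__)
except ImportError:
    print("flash-attn not installed (only needed for training)")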

🔥 Running Inference

The following script demonstrates how to load Mobile-VideoGPT-1.5B, tokenize the input prompt, and generate a response:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig

# Mobile-VideoGPT Imports
from mobilevideogpt.utils import preprocess_input

# Model and tokenizer paths
pretrained_path = "Amshaker/Mobile-VideoGPT-1.5B" 
video_path = "sample_videos/v_JspVuT6rsLA.mp4"
prompt = "Can you describe what is happening in the video in detail?"

# Load model and tokenizer
config = AutoConfig.from_pretrained(pretrained_path)
tokenizer = AutoTokenizer.from_pretrained(pretrained_path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    pretrained_path,
    config=config,
    torch_dtype=torch.float16
).cuda()

# Preprocess input
input_ids, video_frames, context_frames, stop_str = preprocess_input(
    model, tokenizer, video_path, prompt
)

# Run inference
with torch.inference_mode():
    output_ids = model.generate(
        input_ids,
        images=torch.stack(video_frames, dim=0).half().cuda(),
        context_images=torch.stack(context_frames, dim=0).half().cuda(),
        do_sample=False,
        temperature=0,
        top_p=1,
        num_beams=1,
        max_new_tokens=1024,
        use_cache=True,
    )
# Decode output
outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
if outputs.endswith(stop_str):
    outputs = outputs[:-len(stop_str)].strip()

print("🤖 Mobile-VideoGPT Output:", outputs)

✅ Expected Output:

🤖 Mobile-VideoGPT Output: In the video, a young boy is playing the violin in front of an adult who is playing the piano. The boy appears to be focused on his performance and is wearing a blue shirt with white stripes down the sleeves. He plays the violin with great concentration and skill, moving his fingers along the strings with precision. Meanwhile, the adult pianist sits behind him and plays the piano alongside him. Both musicians are dressed casually, suggesting that they may be practicing or performing for friends or family. The setting seems to be a cozy room with wooden flooring, giving it a warm and inviting atmosphere. Overall, this video showcases two talented musicians sharing a moment of musical collaboration, creating beautiful music together.
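
The reported throughput (up to 46 tokens per second for the 0.5B model) can be estimated by timing the generate call from the script above. The sketch below is illustrative and reuses model, input_ids, video_frames, and context_frames from that script; it also assumes, as the decoding step above does, that generate returns only the newly generated tokens:

import time

torch.cuda.synchronize()
start = time.perf_counter()
with torch.inference_mode():
    output_ids = model.generate(
        input_ids,
        images=torch.stack(video_frames, dim=0).half().cuda(),
        context_images=torch.stack(context_frames, dim=0).half().cuda(),
        do_sample=False,
        max_new_tokens=256,
        use_cache=True,
    )
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

# Assumes generate returns only newly generated tokens (as in the decoding step above).
num_tokens = output_ids.shape[1]
print(f"{num_tokens} tokens in {elapsed:.2f}s -> {num_tokens / elapsed:.1f} tokens/s")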

📊 Quantitative Evaluation:

We provide instructions for reproducing the Mobile-VideoGPT-0.5B and Mobile-VideoGPT-1.5B results on MVBench, PerceptionTest, NextQA, MLVU, EgoSchema, and ActivityNet-QA. Please follow the instructions at eval/README.md.

Benchmark Evaluation:

All_benchmarks


Detailed Evaluation on MVBench:

MVBench_quantitative


Training:

We provide unified scripts for pretraining and finetuning of Mobile-VideoGPT. Please follow the instructions at scripts/README.md.


Qualitative Examples:

Qualitative comparison between the proposed Mobile-VideoGPT-0.5B, LLaVA-OneVision-0.5B, and LLaVA-Mini-8B. The outputs highlight both video comprehension quality and speed, in terms of latency and throughput (tokens per second):

Contributions


Multi-turn conversation for Mobile-VideoGPT:

Contributions
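
A multi-turn exchange can in principle be driven with the single-turn API from the inference script by folding earlier questions and answers back into the prompt. Whether preprocess_input is intended to be used this way is an assumption, so treat the sketch below as illustrative only:

# Hypothetical multi-turn loop built on the single-turn inference script above.
history = ""
for question in [
    "Can you describe what is happening in the video in detail?",
    "What instrument is the boy playing?",
]:
    # Assumption: prepend the running dialogue history to the new question.
    turn_prompt = (history + question).strip()
    input_ids, video_frames, context_frames, stop_str = preprocess_input(
        model, tokenizer, video_path, turn_prompt
    )
    with torch.inference_mode():
        output_ids = model.generate(
            input_ids,
            images=torch.stack(video_frames, dim=0).half().cuda(),
            context_images=torch.stack(context_frames, dim=0).half().cuda(),
            do_sample=False,
            max_new_tokens=512,
            use_cache=True,
        )
    answer = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
    if answer.endswith(stop_str):
        answer = answer[: -len(stop_str)].strip()
    history += f"{question}\n{answer}\n"
    print(f"Q: {question}\nA: {answer}\n")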


📜 Citations:

If you’re using Mobile-VideoGPT in your research or applications, please cite using this BibTeX:

@article{Shaker2025MobileVideoGPT,
    title={Mobile-VideoGPT: Fast and Accurate Video Understanding Language Model},
    author={Shaker, Abdelrahman and Maaz, Muhammad and Rezatofighi, Hamid and Khan, Salman and Khan, Fahad Shahbaz},
    journal={arXiv},
    year={2025},
    url={https://arxiv.org/abs/2503.21782}
}

Contact

Should you have any questions, please create an issue on our GitHub repository or contact me at abdelrahman.youssief@mbzuai.ac.ae.