-
Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey,
arXiv, 2412.18619
, arxiv, pdf, cication: -1Liang Chen, Zekun Wang, Shuhuai Ren, ..., Tianyu Liu, Baobao Chang · (Awesome-Multimodal-Next-Token-Prediction - LMM101)
-
A Survey of Mathematical Reasoning in the Era of Multimodal Large Language Model: Benchmark, Method & Challenges,
arXiv, 2412.11936
, arxiv, pdf, cication: -1Yibo Yan, Jiamin Su, Jianxiang He, ..., Qingsong Wen, Xuming Hu
-
Personalized Multimodal Large Language Models: A Survey,
arXiv, 2412.02142
, arxiv, pdf, cication: -1Junda Wu, Hanjia Lyu, Yu Xia, ..., Jiebo Luo, Julian McAuley
-
Explainable and Interpretable Multimodal Large Language Models: A Comprehensive Survey,
arXiv, 2412.02104
, arxiv, pdf, cication: -1Yunkai Dang, Kaichen Huang, Jiahao Huo, ..., Hui Xiong, Xuming Hu
-
Papers I've read this week: vision language models
· (𝕏)
-
A short survey of trends in VLMs since LLaVA 1.0 came out 𝕏
· (huggingface) · (youtube)
-
Towards Unifying Understanding and Generation in the Era of Vision Foundation Models: A Survey from the Autoregression Perspective,
arXiv, 2410.22217
, arxiv, pdf, cication: -1Shenghao Xie, Wenqiang Zu, Mingyang Zhao, ..., Shanghang Zhang, Lei Ma
-
A Survey of Hallucination in Large Visual Language Models,
arXiv, 2410.15359
, arxiv, pdf, cication: -1Wei Lan, Wenyi Chen, Qingfeng Chen, ..., Huiyu Zhou, Yi Pan
-
Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models,
arXiv, 2501.05767
, arxiv, pdf, cication: -1You Li, Heyu Huang, Chi Chen, ..., Ruixuan Li, Maosong Sun · (migician-vg.github) · (arxiv) · (Migician - thunlp)
-
Moondream 2025-01-09 Release: Structured Text, Enhanced OCR, Gaze Detection
· (𝕏) · (docs.moondream)
-
The Illusion-Illusion: Vision Language Models See Illusions Where There are None,
arXiv, 2412.18613
, arxiv, pdf, cication: -1Tomer Ullman
· (𝕏)
-
Are Vision-Language Models Truly Understanding Multi-vision Sensor?,
arXiv, 2412.20750
, arxiv, pdf, cication: -1Sangyun Chung, Youngjoon Yu, Youngchae Chee, ..., Byung-Kwan Lee, Yong Man Ro
-
SynerGen-VL: Towards Synergistic Image Understanding and Generation with Vision Experts and Token Folding,
arXiv, 2412.09604
, arxiv, pdf, cication: -1Hao Li, Changyao Tian, Jie Shao, ..., Lewei Lu, Jifeng Dai
-
POINTS1.5: Building a Vision-Language Model towards Real World Applications,
arXiv, 2412.08443
, arxiv, pdf, cication: -1Yuan Liu, Le Tian, Xiao Zhou, ..., Yang Yu, Jie Zhou
-
OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation,
arXiv, 2412.09585
, arxiv, pdf, cication: -1Jitesh Jain, Zhengyuan Yang, Humphrey Shi, ..., Jianfeng Gao, Jianwei Yang · (OLA-VLM - SHI-Labs) · (praeclarumjj3.github)
-
🌟 NVILA: Efficient Frontier Visual Language Models,
arXiv, 2412.04468
, arxiv, pdf, cication: -1Zhijian Liu, Ligeng Zhu, Baifeng Shi, ..., Song Han, Yao Lu
-
moondream is a small vision language model designed to run efficiently on edge devices. 🤗
-
🌟 Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling,
arXiv, 2412.05271
, arxiv, pdf, cication: -1Zhe Chen, Weiyun Wang, Yue Cao, ..., Jifeng Dai, Wenhai Wang
-
🌟 MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale,
arXiv, 2412.05237
, arxiv, pdf, cication: -1Jarvis Guo, Tuney Zheng, Yuelin Bai, ..., Wenhu Chen, Xiang Yue · (𝕏)
-
🌟 Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Vision-Language Models,
arXiv, 2409.17146
, arxiv, pdf, cication: -1Matt Deitke, Christopher Clark, Sangho Lee, ..., Ali Farhadi, Aniruddha Kembhavi · (molmo - allenai)
-
🌟 Florence-VL: Enhancing Vision-Language Models with Generative Vision Encoder and Depth-Breadth Fusion,
arXiv, 2412.04424
, arxiv, pdf, cication: -1Jiuhai Chen, Jianwei Yang, Haiping Wu, ..., Tianyi Zhou, Bin Xiao · (huggingface)
-
Discriminative Fine-tuning of LVLMs,
arXiv, 2412.04378
, arxiv, pdf, cication: -1Yassine Ouali, Adrian Bulat, Alexandros Xenos, ..., Brais Martinez, Georgios Tzimiropoulos
-
CompCap: Improving Multimodal Large Language Models with Composite Captions,
arXiv, 2412.05243
, arxiv, pdf, cication: -1Xiaohui Chen, Satya Narayan Shukla, Mahmoud Azab, ..., Xuewen Zhang, Baosheng He
-
Maya: An Instruction Finetuned Multilingual Multimodal Model,
arXiv, 2412.07112
, arxiv, pdf, cication: -1Nahid Alam, Karthik Reddy Kanjula, Surya Guthikonda, ..., Snehanshu Mukherjee, Alham Fikri Aji · (maya - nahidalam)
-
Welcome PaliGemma 2 – New vision language models by Google 🤗
· (𝕏)
-
FINECAPTION: Compositional Image Captioning Focusing on Wherever You Want at Any Granularity,
arXiv, 2411.15411
, arxiv, pdf, cication: -1Hang Hua, Qing Liu, Lingzhi Zhang, ..., Jianming Zhang, Jiebo Luo
-
Llama Guard 3 Vision: Safeguarding Human-AI Image Understanding Conversations,
arXiv, 2411.10414
, arxiv, pdf, cication: -1Jianfeng Chi, Ujjwal Karn, Hongyuan Zhan, ..., Kartikeya Upasani, Mahesh Pasupuleti · (llama) · (llama-recipes - meta-llama)
-
Unified Generative and Discriminative Training for Multi-modal Large Language Models,
arXiv, 2411.00304
, arxiv, pdf, cication: -1Wei Chow, Juncheng Li, Qifan Yu, ..., Hanwang Zhang, Qianru Sun
-
🌟 CLEAR: Character Unlearning in Textual and Visual Modalities,
arXiv, 2410.18057
, arxiv, pdf, cication: -1Alexey Dontsov, Dmitrii Korzh, Alexey Zhavoronkin, ..., Ivan Oseledets, Elena Tutubalina · (huggingface) · (multimodal_unlearning - somvy)
-
Improve Vision Language Model Chain-of-thought Reasoning,
arXiv, 2410.16198
, arxiv, pdf, cication: -1Ruohong Zhang, Bowen Zhang, Yanghao Li, ..., Ruoming Pang, Yiming Yang
· (LLaVA-Reasoner-DPO - RifleZhang)
-
Mitigating Object Hallucination via Concentric Causal Attention,
arXiv, 2410.15926
, arxiv, pdf, cication: -1Yun Xing, Yiheng Li, Ivan Laptev, ..., Shijian Lu
-
ChatRex: Taming Multimodal LLM for Joint Perception and Understanding,
arXiv, 2411.18363
, arxiv, pdf, cication: -1Qing Jiang, Gen Luo, Yuqin Yang, ..., Tianhe Ren, Lei Zhang · (chatrex - idea-research)
-
DINO-X: A Unified Vision Model for Open-World Object Detection and Understanding,
arXiv, 2411.14347
, arxiv, pdf, cication: -1Tianhe Ren, Yihao Chen, Qing Jiang, ..., Kent Yu, Lei Zhang
-
Teach Multimodal LLMs to Comprehend Electrocardiographic Images,
arXiv, 2410.19008
, arxiv, pdf, cication: -1Ruoqi Liu, Yuelin Bai, Xiang Yue, ..., Ping Zhang
-
An Empirical Study of Autoregressive Pre-training from Videos,
arXiv, 2501.05453
, arxiv, pdf, cication: -1Jathushan Rajasegaran, Ilija Radosavovic, Rahul Ravishankar, ..., Christoph Feichtenhofer, Jitendra Malik · (brjathu.github)
-
Dispider: Enabling Video LLMs with Active Real-Time Interaction via Disentangled Perception, Decision, and Reaction,
arXiv, 2501.03218
, arxiv, pdf, cication: -1Rui Qian, Shuangrui Ding, Xiaoyi Dong, ..., Dahua Lin, Jiaqi Wang · (Dispider - Mark12Ding)
-
MotionBench: Benchmarking and Improving Fine-grained Video Motion Understanding for Vision Language Models,
arXiv, 2501.02955
, arxiv, pdf, cication: -1Wenyi Hong, Yean Cheng, Zhuoyi Yang, ..., Yuxiao Dong, Jie Tang · (motion-bench.github)
-
Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos,
arXiv, 2501.04001
, arxiv, pdf, cication: -1Haobo Yuan, Xiangtai Li, Tao Zhang, ..., Jiashi Feng, Ming-Hsuan Yang · (Sa2VA - magic-research) · (arxiv) · (huggingface) · (lxtgh.github)
-
VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM,
arXiv, 2501.00599
, arxiv, pdf, cication: -1Yuqian Yuan, Hang Zhang, Wentong Li, ..., Jianke Zhu, Lidong Bing
-
Video-Panda: Parameter-efficient Alignment for Encoder-free Video-Language Models,
arXiv, 2412.18609
, arxiv, pdf, cication: -1Jinhui Yi, Syed Talal Wasim, Yanan Luo, ..., Muzammal Naseer, Juergen Gall · (Video-Panda - jh-yi)
-
🌟 Apollo: An Exploration of Video Understanding in Large Multimodal Models,
arXiv, 2412.10360
, arxiv, pdf, cication: -1Orr Zohar, Xiaohan Wang, Yann Dubois, ..., Serena Yeung-Levy, Xide Xia · (apollo-lmms.github) · (huggingface) · (Apollo - Apollo-LMMs) · (huggingface)
-
StreamChat: Chatting with Streaming Video,
arXiv, 2412.08646
, arxiv, pdf, cication: -1Jihao Liu, Zhiding Yu, Shiyi Lan, ..., Hongsheng Li, Jose M. Alvarez · (jihaonew.github)
-
VideoICL: Confidence-based Iterative In-context Learning for Out-of-Distribution Video Understanding,
arXiv, 2412.02186
, arxiv, pdf, cication: -1Kangsan Kim, Geon Park, Youngwan Lee, ..., Woongyeong Yeo, Sung Ju Hwang
-
Towards Universal Soccer Video Understanding,
arXiv, 2412.01820
, arxiv, pdf, cication: -1Jiayuan Rao, Haoning Wu, Hao Jiang, ..., Yanfeng Wang, Weidi Xie · (jyrao.github) · (arxiv) · (UniSoccer - jyrao)
-
VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection,
arXiv, 2411.14794
, arxiv, pdf, cication: -1Songhao Han, Wei Huang, Hairong Shi, ..., Yue Liao, Si Liu · (VideoEspresso - hshjerry)
-
TimeMarker: A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability,
arXiv, 2411.18211
, arxiv, pdf, cication: -1Shimin Chen, Xiaohan Lan, Yitian Yuan, ..., Zequn Jie, Lin Ma · (TimeMarker - TimeMarker-LLM)
-
Number it: Temporal Grounding Videos like Flipping Manga,
arXiv, 2411.10332
, arxiv, pdf, cication: -1Yongliang Wu, Xinting Hu, Yuyang Sun, ..., Bernt Schiele, Xu Yang · (NumPro - yongliang-wu)
-
Video-RAG: Visually-aligned Retrieval-Augmented Long Video Comprehension,
arXiv, 2411.13093
, arxiv, pdf, cication: -1Yongdong Luo, Xiawu Zheng, Xiao Yang, ..., Jiebo Luo, Rongrong Ji
-
PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance,
arXiv, 2411.02327
, arxiv, pdf, cication: -1Ruyang Liu, Haoran Tang, Haibo Liu, ..., Chen Li, Jiankun Yang · (PPLLaVA - farewellthree)
-
VideoGLaMM: A Large Multimodal Model for Pixel-Level Visual Grounding in Videos,
arXiv, 2411.04923
, arxiv, pdf, cication: -1Shehan Munasinghe, Hanan Gani, Wenqi Zhu, ..., Fahad Shahbaz Khan, Salman Khan · (mbzuai-oryx.github)
-
xGen-MM-Vid (BLIP-3-Video): You Only Need 32 Tokens to Represent a Video Even in VLMs,
arXiv, 2410.16267
, arxiv, pdf, cication: -1Michael S. Ryoo, Honglu Zhou, Shrikant Kendre, ..., Caiming Xiong, Juan Carlos Niebles
-
LongVU: Spatiotemporal Adaptive Compression for Long Video-Language Understanding,
arXiv, 2410.17434
, arxiv, pdf, cication: -1Xiaoqian Shen, Yunyang Xiong, Changsheng Zhao, ..., Mohamed Elhoseiny, Vikas Chandra
· (vision-cair.github) · (LongVU - Vision-CAIR) · (huggingface) · (huggingface)
-
VidEgoThink: Assessing Egocentric Video Understanding Capabilities for Embodied AI,
arXiv, 2410.11623
, arxiv, pdf, cication: -1Sijie Cheng, Kechen Fang, Yangyang Yu, ..., Lei Han, Yang Liu
-
OMCAT: Omni Context Aware Transformer,
arXiv, 2410.12109
, arxiv, pdf, cication: -1Arushi Goel, Karan Sapra, Matthieu Le, ..., Andrew Tao, Bryan Catanzaro · (om-cat.github)
-
Unifying Specialized Visual Encoders for Video Language Models,
arXiv, 2501.01426
, arxiv, pdf, cication: -1Jihoon Chung, Tyler Zhu, Max Gonzalez Saez-Diez, ..., Honglu Zhou, Olga Russakovsky · (tylerzhu) · (merv - princetonvisualai)
-
LLaVA-UHD v2: an MLLM Integrating High-Resolution Feature Pyramid via Hierarchical Window Transformer,
arXiv, 2412.13871
, arxiv, pdf, cication: -1Yipeng Zhang, Yifan Liu, Zonghao Guo, ..., Tat-Seng Chua, Maosong Sun · (LLaVA-UHD - thunlp)
-
FastVLM: Efficient Vision Encoding for Vision Language Models,
arXiv, 2412.13303
, arxiv, pdf, cication: -1Pavan Kumar Anasosalu Vasu, Fartash Faghri, Chun-Liang Li, ..., Oncel Tuzel, Hadi Pouransari
-
TRecViT: A Recurrent Video Transformer,
arXiv, 2412.14294
, arxiv, pdf, cication: -1Viorica Pătrăucean, Xu Owen He, Joseph Heyward, ..., João Carreira, Razvan Pascanu · (trecvit - google-deepmind)
-
PruneVid: Visual Token Pruning for Efficient Video Large Language Models,
arXiv, 2412.16117
, arxiv, pdf, cication: -1Xiaohu Huang, Hao Zhou, Kai Han · (PruneVid - Visual-AI)
-
Large Motion Video Autoencoding with Cross-modal Video VAE,
arXiv, 2412.17805
, arxiv, pdf, cication: -1Yazhou Xing, Yang Fei, Yingqing He, ..., Xiaowei Chi, Qifeng Chen · (VideoVAEPlus - VideoVerses)
-
🌟 VisionZip: Longer is Better but Not Necessary in Vision Language Models,
arXiv, 2412.04467
, arxiv, pdf, cication: -1Senqiao Yang, Yukang Chen, Zhuotao Tian, ..., Bei Yu, Jiaya Jia · (202.104.135) · (youtu) · (VisionZip - dvlab-research)
-
[CLS] Token Tells Everything Needed for Training-free Efficient MLLMs,
arXiv, 2412.05819
, arxiv, pdf, cication: -1Ao Wang, Fengyuan Sun, Hui Chen, ..., Jungong Han, Guiguang Ding · (VTC-CLS - THU-MIG)
-
FocusLLaVA: A Coarse-to-Fine Approach for Efficient and Effective Visual Token Compression,
arXiv, 2411.14228
, arxiv, pdf, cication: -1Yuke Zhu, Chi Xie, Shuang Liang, ..., Bo Zheng, Sheng Guo
-
DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models,
arXiv, 2411.15024
, arxiv, pdf, cication: -1Keda Tao, Can Qin, Haoxuan You, ..., Yang Sui, Huan Wang
-
Factorized Visual Tokenization and Generation,
arXiv, 2411.16681
, arxiv, pdf, cication: -1Zechen Bai, Jianxiong Gao, Ziteng Gao, ..., Tong He, Mike Zheng Shou · (showlab.github)
-
REDUCIO! Generating 1024×1024 Video within 16 Seconds using Extremely Compressed Motion Latents,
arXiv, 2411.13552
, arxiv, pdf, cication: -1Rui Tian, Qi Dai, Jianmin Bao, ..., Zuxuan Wu, Yu-Gang Jiang · (Reducio-VAE - microsoft)
-
Multimodal Autoregressive Pre-training of Large Vision Encoders,
arXiv, 2411.14402
, arxiv, pdf, cication: -1Enrico Fini, Mustafa Shukor, Xiujun Li, ..., Joshua M. Susskind, Alaaeldin El-Nouby · (ml-aim - apple) · (huggingface)
-
Don't Look Twice: Faster Video Transformers with Run-Length Tokenization,
arXiv, 2411.05222
, arxiv, pdf, cication: -1Rohan Choudhury, Guanglei Zhu, Sihan Liu, ..., Kris M. Kitani, László Jeni · (rccchoudhury.github) · (rlt - rccchoudhury) · (mp.weixin.qq)
-
🌟 LLM2CLIP: Powerful Language Model Unlocks Richer Visual Representation,
arXiv, 2411.04997
, arxiv, pdf, cication: -1Weiquan Huang, Aoqi Wu, Yifan Yang, ..., Chong Luo, Lili Qiu · (aka) · (LLM2CLIP - microsoft)
-
In Search of Forgotten Domain Generalization,
arXiv, 2410.08258
, arxiv, pdf, cication: -1Prasanna Mayilvahanan, Roland S. Zimmermann, Thaddäus Wiedemer, ..., Matthias Bethge, Wieland Brendel · (𝕏)
-
Adaptive Length Image Tokenization via Recurrent Allocation,
arXiv, 2411.02393
, arxiv, pdf, cication: -1Shivam Duggal, Phillip Isola, Antonio Torralba, ..., William T. Freeman
-
LARP: Tokenizing Videos with a Learned Autoregressive Generative Prior,
arXiv, 2410.21264
, arxiv, pdf, cication: -1Hanyu Wang, Saksham Suri, Yixuan Ren, ..., Hao Chen, Abhinav Shrivastava · (hywang66.github)
-
Breaking the Memory Barrier: Near Infinite Batch Size Scaling for Contrastive Loss,
arXiv, 2410.17243
, arxiv, pdf, cication: -1Zesen Cheng, Hang Zhang, Kehan Li, ..., Xin Li, Lidong Bing
-
Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment,
arXiv, 2412.19326
, arxiv, pdf, cication: -1Ziang Yan, Zhilin Li, Yinan He, ..., Limin Wang, Yi Wang · (TPO - OpenGVLab) · (huggingface)
-
Preference Optimization for Vision Language Models with TRL 🤗
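  A minimal sketch of what the TRL recipe linked above can look like in code, assuming a recent `trl` release with vision DPO support; the model id, dataset id, data formatting, and hyperparameters here are illustrative assumptions, not the blog's exact setup.

  ```python
  # Hedged sketch: DPO fine-tuning of a VLM with TRL, loosely following the
  # linked blog post. Names below (model id, dataset id, kwargs) are
  # assumptions; exact DPOTrainer arguments differ across trl versions
  # (e.g. tokenizer= vs. processing_class=).
  import torch
  from datasets import load_dataset
  from transformers import AutoModelForVision2Seq, AutoProcessor
  from trl import DPOConfig, DPOTrainer

  model_id = "HuggingFaceM4/idefics2-8b"  # assumed example VLM
  model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.bfloat16)
  processor = AutoProcessor.from_pretrained(model_id)

  # Preference data is assumed to provide "images", "prompt", "chosen", and
  # "rejected" columns; a real run would first map the raw columns of a
  # dataset like RLAIF-V into that format.
  train_dataset = load_dataset("openbmb/RLAIF-V-Dataset", split="train[:1%]")

  args = DPOConfig(
      output_dir="vlm-dpo",
      per_device_train_batch_size=1,
      gradient_accumulation_steps=8,
      bf16=True,
  )

  trainer = DPOTrainer(
      model=model,
      ref_model=None,              # with None, TRL keeps a frozen copy of the policy as the reference
      args=args,
      train_dataset=train_dataset,
      processing_class=processor,  # older trl releases take tokenizer=processor instead
  )
  trainer.train()
  ```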
-
On Domain-Specific Post-Training for Multimodal Large Language Models,
arXiv, 2411.19930
, arxiv, pdf, cication: -1Daixuan Cheng, Shaohan Huang, Ziyu Zhu, ..., Bo Dai, Zhenliang Zhang · (huggingface)
-
🌟 Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization,
arXiv, 2411.10442
, arxiv, pdf, cication: -1Weiyun Wang, Zhe Chen, Wenhai Wang, ..., Yu Qiao, Jifeng Dai · (internvl.github)
-
SymDPO: Boosting In-Context Learning of Large Multimodal Models with Symbol Demonstration Direct Preference Optimization,
arXiv, 2411.11909
, arxiv, pdf, cication: -1Hongrui Jia, Chaoya Jiang, Haiyang Xu, ..., Fei Huang, Shikun Zhang
-
V-DPO: Mitigating Hallucination in Large Vision Language Models via Vision-Guided Direct Preference Optimization,
arXiv, 2411.02712
, arxiv, pdf, cication: -1Yuxi Xie, Guanzhen Li, Xiao Xu, ..., Min-Yen Kan · (V-DPO - YuxiXie)
-
MIA-DPO: Multi-Image Augmented Direct Preference Optimization For Large Vision-Language Models,
arXiv, 2410.17637
, arxiv, pdf, cication: -1Ziyu Liu, Yuhang Zang, Xiaoyi Dong, ..., Dahua Lin, Jiaqi Wang
-
🌟 LlamaV-o1: Rethinking Step-by-step Visual Reasoning in LLMs,
arXiv, 2501.06186
, arxiv, pdf, cication: -1Omkar Thawakar, Dinura Dissanayake, Ketan More, ..., Fahad Shahbaz Khan, Salman Khan
-
Multimodal Reasoning and its Applications to Computer Use and Robotics 🎬
-
🌟 Virgo: A Preliminary Exploration on Reproducing o1-like MLLM,
arXiv, 2501.01904
, arxiv, pdf, cication: -1Yifan Du, Zikang Liu, Yifan Li, ..., Zhongyuan Wang, Ji-Rong Wen · (Virgo - RUCAIBox)
-
URSA: Understanding and Verifying Chain-of-thought Reasoning in Multimodal Mathematics,
arXiv, 2501.04686
, arxiv, pdf, cication: -1Ruilin Luo, Zhuofan Zheng, Yifan Wang, ..., Jin Zeng, Yujiu Yang · (ursa-math.github)
-
🌟 Mulberry: Empowering MLLM with o1-like Reasoning and Reflection via Collective Monte Carlo Tree Search,
arXiv, 2412.18319
, arxiv, pdf, cication: -1Huanjin Yao, Jiaxing Huang, Wenhao Wu, ..., Li Shen, Dacheng Tao · (Mulberry - HJYao00)
-
🌟 Thinking in Space: How Multimodal Large Language Models See, Remember, and Recall Spaces,
arXiv, 2412.14171
, arxiv, pdf, cication: -1Jihan Yang, Shusheng Yang, Anjali W. Gupta, ..., Li Fei-Fei, Saining Xie · (vision-x-nyu.github) · (thinking-in-space.git - vision-x-nyu) · (huggingface)
-
🌟 Progressive Multimodal Reasoning via Active Retrieval,
arXiv, 2412.14835
, arxiv, pdf, cication: -1Guanting Dong, Chenghao Zhang, Mengjie Deng, ..., Zhicheng Dou, Ji-Rong Wen
-
🌟 Diving into Self-Evolving Training for Multimodal Reasoning,
arXiv, 2412.17451
, arxiv, pdf, cication: -1Wei Liu, Junlong Li, Xiwen Zhang, ..., Yu Cheng, Junxian He · (mstar-lmm.github)
-
TACO: Learning Multi-modal Action Models with Synthetic Chains-of-Thought-and-Action,
arXiv, 2412.05479
, arxiv, pdf, cication: -1Zixian Ma, Jianguo Zhang, Zhiwei Liu, ..., Ranjay Krishna, Silvio Savarese · (taco-project.github) · (TACO - SalesforceAIResearch)
-
🌟 LLaVA-o1: Let Vision Language Models Reason Step-by-Step,
arXiv, 2411.10440
, arxiv, pdf, cication: -1Guowei Xu, Peng Jin, Li Hao, ..., Lichao Sun, Li Yuan · (LLaVA-o1 - PKU-YuanGroup)
-
Vision-Language Models Can Self-Improve Reasoning via Reflection,
arXiv, 2411.00855
, arxiv, pdf, cication: -1Kanzhi Cheng, Yantao Li, Fangzhi Xu, ..., Hao Zhou, Yang Liu
-
OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding?,
arXiv, 2501.05510
, arxiv, pdf, cication: -1Yifei Li, Junbo Niu, Ziyang Miao, ..., Conghui He, Jiaqi Wang · (OVO-Bench - JoeLeelyf)
-
Multi-Dimensional Insights: Benchmarking Real-World Personalization in Large Multimodal Models,
arXiv, 2412.12606
, arxiv, pdf, cication: -1YiFan Zhang, Shanglin Lei, Runqi Qiao, ..., Xiaofei Wang, Honggang Zhang · (mdi-benchmark.github)
-
VisionArena: 230K Real World User-VLM Conversations with Preference Labels,
arXiv, 2412.08687
, arxiv, pdf, cication: -1Christopher Chou, Lisa Dunlap, Koki Mashita, ..., Joseph E. Gonzalez, Wei-Lin Chiang · (huggingface)
-
VideoAutoArena: An Automated Arena for Evaluating Large Multimodal Models in Video Analysis through User Simulation,
arXiv, 2411.13281
, arxiv, pdf, cication: -1Ziyang Luo, Haoning Wu, Dongxu Li, ..., Mohan Kankanhalli, Junnan Li · (videoautoarena.github)
-
ViBe: A Text-to-Video Benchmark for Evaluating Hallucination in Large Multimodal Models,
arXiv, 2411.10867
, arxiv, pdf, cication: -1Vipula Rawte, Sarthak Jain, Aarush Sinha, ..., Amit P. Sheth, Amitava Das
-
M-Longdoc: A Benchmark For Multimodal Super-Long Document Understanding And A Retrieval-Aware Tuning Framework,
arXiv, 2411.06176
, arxiv, pdf, cication: -1Yew Ken Chia, Liying Cheng, Hou Pong Chan, ..., Soujanya Poria, Lidong Bing · (multimodal-documents.github)
-
DynaMath: A Dynamic Visual Benchmark for Evaluating Mathematical Reasoning Robustness of Vision Language Models,
arXiv, 2411.00836
, arxiv, pdf, cication: -1Chengke Zou, Xingang Guo, Rui Yang, ..., Bin Hu, Huan Zhang · (DynaMath - DynaMath) · (huggingface)
-
🌟 Both Text and Images Leaked! A Systematic Analysis of Multimodal LLM Data Contamination,
arXiv, 2411.03823
, arxiv, pdf, cication: -1Dingjie Song, Sicheng Lai, Shunian Chen, ..., Lichao Sun, Benyou Wang · (MM-Detect - MLLM-Data-Contamination)
-
StreamingBench: Assessing the Gap for MLLMs to Achieve Streaming Video Understanding,
arXiv, 2411.03628
, arxiv, pdf, cication: -1Junming Lin, Zheng Fang, Chi Chen, ..., Yang Liu, Maosong Sun
-
TOMATO: Assessing Visual Temporal Reasoning Capabilities in Multimodal Foundation Models,
arXiv, 2410.23266
, arxiv, pdf, cication: -1Ziyao Shangguan, Chuhan Li, Yuxuan Ding, ..., Tesca Fitzgerald, Arman Cohan · (TOMATO - yale-nlp)
-
Image2Struct: Benchmarking Structure Extraction for Vision-Language Models,
arXiv, 2410.22456
, arxiv, pdf, cication: -1Josselin Somerville Roberts, Tony Lee, Chi Heem Wong, ..., Yifan Mai, Percy Liang · (crfm.stanford) · (x)
-
AVHBench: A Cross-Modal Hallucination Benchmark for Audio-Visual Large Language Models,
arXiv, 2410.18325
, arxiv, pdf, cication: -1Kim Sung-Bin, Oh Hyun-Bin, JungMok Lee, ..., Joon Son Chung, Tae-Hyun Oh
· (AVHBench - AVHBench)
-
MMIE: Massive Multimodal Interleaved Comprehension Benchmark for Large Vision-Language Models,
arXiv, 2410.10139
, arxiv, pdf, cication: -1Peng Xia, Siwei Han, Shi Qiu, ..., Lijuan Wang, Huaxiu Yao · (mmie-bench.github) · (MMIE - Lillianwei-h) · (huggingface)
-
MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks,
arXiv, 2410.10563
, arxiv, pdf, cication: -1Jiacheng Chen, Tianhao Liang, Sherman Siu, ..., Xiang Yue, Wenhu Chen · (tiger-ai-lab.github)
-
LiveXiv -- A Multi-Modal Live Benchmark Based on Arxiv Papers Content,
arXiv, 2410.10783
, arxiv, pdf, cication: -1Nimrod Shabtay, Felipe Maia Polo, Sivan Doveh, ..., Leonid Karlinsky, Raja Giryes
-
TemporalBench: Benchmarking Fine-grained Temporal Understanding for Multimodal Video Models,
arXiv, 2410.10818
, arxiv, pdf, cication: -1Mu Cai, Reuben Tan, Jianrui Zhang, ..., Yong Jae Lee, Jianwei Yang
-
NaturalBench: Evaluating Vision-Language Models on Natural Adversarial Samples,
arXiv, 2410.14669
, arxiv, pdf, cication: -1Baiqi Li, Zhiqiu Lin, Wenxuan Peng, ..., Graham Neubig, Deva Ramanan · (arxiv) · (huggingface) · (linzhiqiu.github)
-
WorldCuisines: A Massive-Scale Benchmark for Multilingual and Multicultural Visual Question Answering on Global Cuisines,
arXiv, 2410.12705
, arxiv, pdf, cication: -1Genta Indra Winata, Frederikus Hudi, Patrick Amadeus Irawan, ..., Alice Oh, Chong-Wah Ngo
-
HumanEval-V: Evaluating Visual Understanding and Reasoning Abilities of Large Multimodal Models Through Coding Tasks,
arXiv, 2410.12381
, arxiv, pdf, cication: -1Fengji Zhang, Linquan Wu, Huiyu Bai, ..., Bei Chen, Jacky Keung
-
LLaVA-Mini: Efficient Image and Video Large Multimodal Models with One Vision Token,
arXiv, 2501.03895
, arxiv, pdf, cication: -1Shaolei Zhang, Qingkai Fang, Zhe Yang, ..., Yang Feng
-
Feather the Throttle: Revisiting Visual Token Pruning for Vision-Language Model Acceleration,
arXiv, 2412.13180
, arxiv, pdf, cication: -1Mark Endo, Xiaohan Wang, Serena Yeung-Levy · (web.stanford)
-
🌟 SmolVLM - small yet mighty Vision Language Model 🤗
· (𝕏) · (huggingface)
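  A minimal inference sketch for SmolVLM with `transformers`, assuming the `HuggingFaceTB/SmolVLM-Instruct` checkpoint and its chat-template interface; treat the exact checkpoint name and prompt format as assumptions rather than the canonical recipe from the linked post.

  ```python
  # Hedged sketch: single-image chat with SmolVLM via transformers.
  import torch
  from PIL import Image
  from transformers import AutoModelForVision2Seq, AutoProcessor

  model_id = "HuggingFaceTB/SmolVLM-Instruct"  # assumed checkpoint name
  processor = AutoProcessor.from_pretrained(model_id)
  model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.bfloat16)

  image = Image.open("example.jpg")  # placeholder path
  messages = [
      {"role": "user",
       "content": [{"type": "image"}, {"type": "text", "text": "Describe this image."}]},
  ]
  prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
  inputs = processor(text=prompt, images=[image], return_tensors="pt")

  with torch.no_grad():
      generated = model.generate(**inputs, max_new_tokens=128)
  print(processor.batch_decode(generated, skip_special_tokens=True)[0])
  ```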
-
Treat Visual Tokens as Text? But Your MLLM Only Needs Fewer Efforts to See,
arXiv, 2410.06169
, arxiv, pdf, cication: -1Zeliang Zhang, Phu Pham, Wentian Zhao, ..., Ajinkya Kale, Chenliang Xu · (YOPO_MLLM_Pruning - ZhangAIPI)
-
🌟 BlueLM-V-3B: Algorithm and System Co-Design for Multimodal Large Language Models on Mobile Devices,
arXiv, 2411.10640
, arxiv, pdf, cication: -1Xudong Lu, Yinghao Chen, Cheng Chen, ..., Shuai Ren, Hongsheng Li
-
Inference Optimal VLMs Need Only One Visual Token but Larger Models,
arXiv, 2411.03312
, arxiv, pdf, cication: -1Kevin Y. Li, Sachin Goyal, Joao D. Semedo, ..., J. Zico Kolter · (llava-token-compression - locuslab)
-
PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction,
arXiv, 2410.17247
, arxiv, pdf, cication: -1Long Xing, Qidong Huang, Xiaoyi Dong, ..., Feng Wu, Dahua Lin
· (PyramidDrop - Cooperx521)
-
Flowing from Words to Pixels: A Framework for Cross-Modality Evolution,
arXiv, 2412.15213
, arxiv, pdf, cication: -1Qihao Liu, Xi Yin, Alan Yuille, ..., Andrew Brown, Mannat Singh · (cross-flow.github)
-
MetaMorph: Multimodal Understanding and Generation via Instruction Tuning,
arXiv, 2412.14164
, arxiv, pdf, cication: -1Shengbang Tong, David Fan, Jiachen Zhu, ..., Saining Xie, Zhuang Liu · (𝕏)
-
LlamaFusion: Adapting Pretrained Language Models for Multimodal Generation,
arXiv, 2412.15188
, arxiv, pdf, cication: -1Weijia Shi, Xiaochuang Han, Chunting Zhou, ..., Luke Zettlemoyer, Lili Yu
-
DiffSensei: Bridging Multi-Modal LLMs and Diffusion Models for Customized Manga Generation,
arXiv, 2412.07589
, arxiv, pdf, cication: -1Jianzong Wu, Chao Tang, Jingbo Wang, ..., Xiangtai Li, Yunhai Tong · (jianzongwu.github) · (arxiv) · (DiffSensei - jianzongwu)
-
Multimodal Latent Language Modeling with Next-Token Diffusion,
arXiv, 2412.08635
, arxiv, pdf, cication: -1Yutao Sun, Hangbo Bao, Wenhui Wang, ..., Jianyong Wang, Furu Wei
-
EasyRef: Omni-Generalized Group Image Reference for Diffusion Models via Multimodal LLM,
arXiv, 2412.09618
, arxiv, pdf, cication: -1Zhuofan Zong, Dongzhi Jiang, Bingqi Ma, ..., Yu Liu, Hongsheng Li · (easyref-gen.github)
-
TokenFlow: Unified Image Tokenizer for Multimodal Understanding and Generation,
arXiv, 2412.03069
, arxiv, pdf, cication: -1Liao Qu, Huichao Zhang, Yiheng Liu, ..., Zehuan Yuan, Xinglong Wu · (byteflow-ai.github) · (TokenFlow - ByteFlow-AI)
-
Divot: Diffusion Powers Video Tokenizer for Comprehension and Generation,
arXiv, 2412.04432
, arxiv, pdf, cication: -1Yuying Ge, Yizhuo Li, Yixiao Ge, ..., Ying Shan · (huggingface) · (Divot - TencentARC)
-
ILLUME: Illuminating Your LLMs to See, Draw, and Self-Enhance,
arXiv, 2412.06673
, arxiv, pdf, cication: -1Chunwei Wang, Guansong Lu, Junwei Yang, ..., Wei Zhang, Hang Xu
-
qwen2vl-flux - erwold
Unifying Image and Text Guidance for Controllable Image Generation · (qwen2vl-flux - erwold)
-
🌟 JanusFlow: Harmonizing Autoregression and Rectified Flow for Unified Multimodal Understanding and Generation,
arXiv, 2411.07975
, arxiv, pdf, cication: -1Yiyang Ma, Xingchao Liu, Xiaokang Chen, ..., Jiaying Liu, Chong Ruan · (Janus - deepseek-ai)
-
🌟 Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models,
arXiv, 2411.04996
, arxiv, pdf, cication: -1Weixin Liang, Lili Yu, Liang Luo, ..., Luke Zettlemoyer, Xi Victoria Lin
-
VITRON: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing
· (Vitron - SkyworkAI)
-
Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation,
arXiv, 2410.13848
, arxiv, pdf, cication: -1Chengyue Wu, Xiaokang Chen, Zhiyu Wu, ..., Chong Ruan, Ping Luo
-
PUMA: Empowering Unified MLLM with Multi-granular Visual Generation,
arXiv, 2410.13861
, arxiv, pdf, cication: -1Rongyao Fang, Chengqi Duan, Kun Wang, ..., Hongsheng Li, Xihui Liu
-
🌟 2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining,
arXiv, 2501.00958
, arxiv, pdf, cication: -1Wenqi Zhang, Hang Zhang, Xin Li, ..., Yueting Zhuang, Lidong Bing · (multimodal-interleaved-textbook.github) · (multimodal_textbook - DAMO-NLP-SG) · (huggingface) · (zhuanlan.zhihu)
-
MegaPairs: Massive Data Synthesis For Universal Multimodal Retrieval,
arXiv, 2412.14475
, arxiv, pdf, cication: -1Junjie Zhou, Zheng Liu, Ze Liu, ..., Defu Lian, Yongping Xiong
-
🌟 LAION-SG: An Enhanced Large-Scale Dataset for Training Complex Image-Text Models with Structural Annotations,
arXiv, 2412.08580
, arxiv, pdf, cication: -1Zejian Li, Chenye Meng, Yize Li, ..., Jinxiong Chang, Lingyun Sun · (LAION-SG - mengcye)
-
Euclid: Supercharging Multimodal LLMs with Synthetic High-Fidelity Visual Descriptions,
arXiv, 2412.08737
, arxiv, pdf, cication: -1Jiarui Zhang, Ollie Liu, Tianyu Yu, ..., Jinyi Hu, Willie Neiswanger
-
BigDocs: An Open and Permissively-Licensed Dataset for Training Multimodal Models on Document and Code Tasks,
arXiv, 2412.04626
, arxiv, pdf, cication: -1Juan Rodriguez, Xiangru Jian, Siba Smarak Panigrahi, ..., David Vazquez, Sai Rajeswar · (bigdocs.github)
-
BLIP3-KALE: Knowledge Augmented Large-Scale Dense Captions,
arXiv, 2411.07461
, arxiv, pdf, cication: -1Anas Awadalla, Le Xue, Manli Shu, ..., Caiming Xiong, Ran Xu · (huggingface)
-
HumanVLM: Foundation for Human-Scene Vision-Language Model,
arXiv, 2411.03034
, arxiv, pdf, cication: -1Dawei Dai, Xu Long, Li Yutang, ..., Zhang Yuanhui, Shuyin Xia
-
HourVideo: 1-Hour Video-Language Understanding,
arXiv, 2411.04998
, arxiv, pdf, cication: -1Keshigeyan Chandrasegaran, Agrim Gupta, Lea M. Hadzic, ..., Jiajun Wu, Li Fei-Fei · (hourvideo.stanford)
-
Infinity-MM: Scaling Multimodal Performance with Large-Scale and High-Quality Instruction Data,
arXiv, 2410.18558
, arxiv, pdf, cication: -1Shuhao Gu, Jialing Zhang, Siyuan Zhou, ..., Fangxiang Feng, Guang Liu
-
Marqo-GS-10M: a multimodal, fine-grained ranking dataset built from Google Shopping data 🤗
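  A small sketch for peeking at this dataset from the Hugging Face Hub; the hub id `Marqo/marqo-GS-10M` and the column layout are assumptions based on the entry above.

  ```python
  # Hedged sketch: stream a few rows of the (assumed) Marqo/marqo-GS-10M dataset
  # instead of downloading all ~10M examples.
  from datasets import load_dataset

  ds = load_dataset("Marqo/marqo-GS-10M", split="train", streaming=True)
  for example in ds.take(3):
      # Each row is expected to pair a shopping image with query/ranking metadata.
      print({k: type(v).__name__ for k, v in example.items()})
  ```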
-
LVD-2M: A Long-take Video Dataset with Temporally Dense Captions,
arXiv, 2410.10816
, arxiv, pdf, cication: -1Tianwei Xiong, Yuqing Wang, Daquan Zhou, ..., Jiashi Feng, Xihui Liu
· (LVD-2M - SilentView) · (silentview.github)
-
Harnessing Webpage UIs for Text-Rich Visual Understanding,
arXiv, 2410.13824
, arxiv, pdf, cication: -1Junpeng Liu, Tianyue Ou, Yifan Song, ..., Graham Neubig, Xiang Yue
-
🌟 DeepSeek-VL2 - deepseek-ai
· (𝕏) · (huggingface)
-
IPLoc - SivanDoveh
-
OmniVision-968M: World's Smallest Vision Language Model
· (huggingface)
-
neptune - google-deepmind
· (storage.googleapis) · (research)