OpenRouter
Meta: Llama 3.2 11B Vision Instruct

meta-llama/llama-3.2-11b-vision-instruct

Llama 3.2 11B Vision is a multimodal model with 11 billion parameters, designed for tasks that combine visual and textual data. It excels at image captioning and visual question answering, bridging language generation and visual reasoning. Pre-trained on a large dataset of image-text pairs, it handles complex image analysis where high accuracy matters.

Its ability to integrate visual understanding with language processing makes it an ideal solution for industries requiring comprehensive visual-linguistic AI applications, such as content creation, AI-driven customer service, and research.

See the original model card for full details.

Usage of this model is subject to Meta's Acceptable Use Policy.
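The model can be called through OpenRouter's OpenAI-compatible chat completions endpoint, with the slug shown above. A minimal sketch follows; the helper names and example image URL are illustrative, while the endpoint path and the text/image_url message format follow OpenRouter's documented API:

```python
import json
import os
import urllib.request

# Model slug from this page and OpenRouter's chat completions endpoint.
MODEL = "meta-llama/llama-3.2-11b-vision-instruct"
API_URL = "https://openrouter.ai/api/v1/chat/completions"


def build_vision_request(prompt: str, image_url: str) -> dict:
    """Combine a text prompt and an image URL into one user message."""
    return {
        "model": MODEL,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }


def send(payload: dict, api_key: str) -> dict:
    """POST the payload with a bearer token and return the parsed response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    payload = build_vision_request(
        "Describe this image.",
        "https://example.com/photo.jpg",  # placeholder image URL
    )
    key = os.environ.get("OPENROUTER_API_KEY")
    if key:  # only hit the network when a key is configured
        print(send(payload, key)["choices"][0]["message"]["content"])
```

Billing is per token at the rates listed below; image inputs are tokenized and counted against the input price.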

Modalities: Text + Image → Text
In / Out Price: $0.245 / $0.245 per 1M tokens
Context: 131K tokens
Weekly Rank: #233 on OpenRouter
Knowledge Cutoff: Dec 31, 2023
