Brace yourself: AWS just expanded Bedrock with a hefty 18 new open-weight models, marking one of its most significant AI rollouts to date. The move injects powerful multimodal capabilities into the platform and showcases Mistral’s latest models as the flagship entrants.
In a post on the AWS News Blog, Channy Yun described the update as broad in scope, bringing Bedrock closer to a nearly 100-model lineup with fresh entries from Mistral, Google, Nvidia, OpenAI, MiniMax, Moonshot, and Qwen. The goal is clear: give customers more choices without forcing changes to existing infrastructure.
The latest expansion adds a diverse roster of models from top AI developers. Notable additions include:
- Google Gemma 3: Lightweight multimodal models designed for local text-and-image tasks on laptops and workstations.
- Nvidia Nemotron Nano 2: Efficiency-forward models aimed at reasoning, coding, and video understanding.
- OpenAI gpt-oss-safeguard: New safety classifiers for policy enforcement and high-volume content moderation.
- MiniMax M2: Coding-automation models capable of multi-file edits and extended tool-calling sequences.
- Moonshot Kimi K2 Thinking: A deep reasoning model tailored for research-heavy, multi-step workflows.
- Qwen3-Next and Qwen3-VL: Long-context and vision models supporting document extraction, code generation, and video analysis.
These offerings span language, vision, audio, and safety workloads, collectively widening Bedrock’s catalog as it moves toward a triple-digit collection of models.
Mistral models take center stage in this refresh. The new Mistral Large 3 serves as a long-context, multimodal workhorse designed for dense enterprise tasks—from document-heavy workflows and multilingual analysis to advanced coding and tool-using agents. It emphasizes reliability with lengthy prompts and cross-domain reasoning across text and vision.
In addition, AWS is releasing the full Ministral 3 family—3B, 8B, and 14B variants—optimized to run efficiently on a single GPU. The 3B model targets lightweight vision and language tasks on edge devices, the 8B strikes a balance between footprint and performance for chat interfaces and embedded systems, and the 14B delivers private, on-premises capabilities with state-of-the-art text and vision features for hardware-constrained environments.
Bedrock is now the first platform to host these Mistral models, giving customers early access to the newest long-context and edge-optimized releases.
Enhancements in audio, vision, and reasoning capability broaden the use cases. New audio models improve speech processing—from fast transcription to multilingual voice commands that can operate with limited cloud connectivity. Vision-focused models enhance document interpretation, turn screenshots into usable code, and analyze video sequences with richer context. Other models emphasize problem-solving, enabling long-form planning, multi-step tool use, and retrieval workflows that handle sprawling inputs. The result is a versatile set of models suitable for on-device processing, moderation queues, automation pipelines, and industry workflows that demand richer multimodal understanding.
Testing and deployment are becoming more straightforward. AWS is delivering a unified AI model experience that lets teams try and compare options without rewriting applications. The Bedrock console’s playground supports experimentation, while the AWS SDKs enable direct integration into existing systems. For agent-building teams, Bedrock AgentCore and Strands Agents are ready to work with the new releases.
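To make the "no rewriting" point concrete, here is a minimal sketch of calling a Bedrock model through the AWS SDK for Python (boto3) using the Converse API, which works the same way across models so swapping in a new release is a one-line model-ID change. The model ID shown is hypothetical—look up the exact identifier in the Bedrock console for whichever model you choose.

```python
def build_converse_request(model_id: str, prompt: str, max_tokens: int = 512) -> dict:
    """Assemble keyword arguments for the Bedrock Converse API.

    The same request shape is used for every Converse-compatible model,
    which is what lets you compare models without rewriting call sites.
    """
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": 0.3},
    }


def invoke(model_id: str, prompt: str, region: str = "us-east-1") -> str:
    """Send the request to Bedrock and return the model's text reply."""
    # boto3 is imported lazily so the request-building logic above can be
    # used (and tested) without AWS credentials or the SDK installed.
    import boto3

    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.converse(**build_converse_request(model_id, prompt))
    return response["output"]["message"]["content"][0]["text"]


# Example (hypothetical model ID -- verify the real one in the console):
# print(invoke("mistral.mistral-large-3-v1:0", "Summarize this contract clause."))
```

Because the request format is model-agnostic, trying a Ministral variant against Mistral Large 3 is just a matter of changing the `modelId` argument.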
Safety and evaluation tooling are expanding as well. Guardrails can be applied to any of the new models, and the built-in evaluation suite helps teams benchmark candidates before committing to a model for a given workload.
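Attaching a Guardrail to a request is a small sketch of what "applied to any model" looks like in practice: the Converse API accepts a `guardrailConfig` alongside the model invocation, so the same guardrail can wrap any of the new models. The guardrail ID below is a placeholder; use the identifier and version from your own Guardrails configuration.

```python
def with_guardrail(request: dict, guardrail_id: str, version: str = "DRAFT") -> dict:
    """Return a copy of a Converse API request with a Bedrock Guardrail attached.

    The guardrail is referenced by ID and version, independent of which
    model the request targets -- the same policy wraps any model.
    """
    guarded = dict(request)  # shallow copy; leave the original untouched
    guarded["guardrailConfig"] = {
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": version,
    }
    return guarded


# Example with a placeholder guardrail ID:
# request = {"modelId": "mistral.mistral-large-3-v1:0", "messages": [...]}
# request = with_guardrail(request, "gr-0123456789", version="1")
# client.converse(**request)
```

Keeping the guardrail as a decorator over the request, rather than baked into each call site, makes it easy to apply one moderation policy while evaluating several candidate models.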
With this update, AWS moves into the next phase: increasing access as customers begin evaluating the new models. The collaboration between Lyft, Anthropic, and AWS on agentic AI for large-scale customer support provides a glimpse into how these capabilities could scale in real-world deployments.
If you’re exploring Bedrock for your organization, now is a pivotal moment to assess how these new models—especially the Mistral lineup and the broad safety and evaluation tools—could fit into your AI strategy and infrastructure plans.