Documentation Index

Fetch the complete documentation index at: https://documentation.deepmask.io/llms.txt

Use this file to discover all available pages before exploring further.
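As a minimal sketch, the index could be fetched and split into entries with Python's standard library. The URL is the one given above; the `parse_index` helper and its behavior are illustrative assumptions, not part of the DeepMask API.

```python
from urllib.request import urlopen

# Documentation index URL from this page.
INDEX_URL = "https://documentation.deepmask.io/llms.txt"

def parse_index(text: str) -> list[str]:
    """Return the non-empty lines of an llms.txt index (hypothetical helper)."""
    return [line.strip() for line in text.splitlines() if line.strip()]

def fetch_index(url: str = INDEX_URL) -> list[str]:
    """Download the index and return its entries as a list of lines."""
    with urlopen(url, timeout=10) as resp:
        return parse_index(resp.read().decode("utf-8"))
```

Calling `fetch_index()` would then give you the list of documented pages to explore before drilling into individual topics.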

DeepMask gives you access to more than 25 AI models from 11 leading providers in a single workspace. You can switch between models at any time without losing your conversation context, and several models run entirely on EU-hosted infrastructure so your data never leaves European soil.

OpenAI

GPT-5.2 · GPT-5.3 · GPT-5.4 — Most capable models for chat, document and image analysis, and tool use with strong reasoning.
GPT-4o — Low-latency multimodal model optimized for real-time voice and vision tasks.
GPT-4.1 — Well-suited for long-context tasks, spreadsheet analysis, and tool use.
GPT-o3 Mini — Focused reasoning model for document analysis and tool use.
GPT-OSS 120B (StackIT) · GPT-OSS 120B (Infercom) — Open-weight model for document analysis and research, EU-hosted.

Anthropic

Opus 4.5 · Opus 4.6 — Anthropic’s most capable tier; best for demanding chat, complex documents, image analysis, and tool use.
Sonnet 4.5 · Sonnet 4.6 — Balanced performance for autonomous coding, agentic workflows, and long-horizon tasks. Context window up to 1M tokens.
Haiku 4.5 — Fastest Anthropic model; designed for high-volume support, real-time data, and sub-agent workloads.

Google

Gemini 2.5 Pro — High-capability model for chat, document and image analysis, and tool use with a large context window.
Gemini 2.5 Flash — Industry-leading throughput at 185 tokens/sec with a 1M token context window. Optimized for large-scale document processing and real-time summarization.
Gemma 3 27B (StackIT) — Lightweight open model for chat and document/image analysis, EU-hosted via StackIT (Schwarz Group).

DeepSeek

DeepSeek V3 — 671B MoE model delivering frontier-level coding and math performance. Strong for complex questions, writing, and document analysis. Context window up to 164K tokens.
DeepSeek V3.1 (Infercom) — Same capability as V3 with EU-hosted endpoints via Infercom for strict data residency requirements.

MoonshotAI (Kimi)

Kimi K2 (DeepMask) — 1T parameter MoE model with native Agent Swarm Mode, supporting up to 100 parallel sub-agents. Handles 2M token context. EU-hosted via DeepMask infrastructure.
Kimi K2.5 — Open-source multimodal model that converts text, images, and video into production-ready code, built for large-scale agent swarm workflows.

Mistral AI

Mistral Large 3 — Elite reasoning, multimodal understanding, and best-in-class multilingual performance across 40+ languages.
Mistral Medium 3 — Frontier-level performance at significantly lower cost; designed for fast, scalable enterprise AI deployments across cloud, hybrid, and on-premises environments.

Alibaba (Qwen)

Qwen (DeepMask) — Versatile model with reasoning and tool use, strong at document and image analysis and multilingual chat. EU-hosted via DeepMask infrastructure.
Qwen3 (StackIT) — Same capability profile as Qwen (DeepMask) with EU-hosting via StackIT (German sovereign cloud by Schwarz Group).

MiniMax

MiniMax M2 · MiniMax M2.1 — Built for elite multi-language coding, advanced agent workflows, and high-quality reasoning across development and office tasks.
MiniMax M2.5 (Infercom) — Strong for document analysis, coding, and tool use with a 164K token context. EU-hosted via Infercom.

Z.ai (GLM)

GLM-4.7 — Advanced reasoning and coding model featuring interleaved thinking, elite agent workflows, and high-fidelity UI generation for complex real-world tasks.
GLM-4.7 Flash — High-performance lightweight MoE variant delivering strong reasoning and coding accuracy with exceptional speed.

xAI (Grok)

Grok 3 Mini — Compact reasoning model for fast, cost-effective chat and analysis tasks.
Grok 4 Fast Non-Reasoning — High-speed model optimized for rapid response without extended thinking overhead.

Models marked with (StackIT) run on StackIT, the German sovereign cloud operated by the Schwarz Group. Models marked with (Infercom) use EU-hosted endpoints via Infercom with strict data residency controls. Models marked with (DeepMask) are hosted directly on DeepMask’s own EU infrastructure. If your organization requires that all data processing stays within the European Union, filter for EU-hosted models in the model selector.
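For automation, the hosting tags above can be matched mechanically. A minimal sketch, assuming model names carry their hosting tag as a trailing suffix exactly as listed on this page; the `MODELS` list and `eu_hosted` helper are illustrative, not a DeepMask API:

```python
# Hypothetical sample of model names, copied from the catalog above.
MODELS = [
    "GPT-5.4",
    "GPT-OSS 120B (StackIT)",
    "DeepSeek V3.1 (Infercom)",
    "Kimi K2 (DeepMask)",
    "Grok 3 Mini",
]

# Hosting tags that indicate EU-hosted infrastructure per this page.
EU_TAGS = ("(StackIT)", "(Infercom)", "(DeepMask)")

def eu_hosted(models: list[str]) -> list[str]:
    """Keep only models whose name ends with an EU hosting tag."""
    return [m for m in models if m.endswith(EU_TAGS)]
```

Here `eu_hosted(MODELS)` would keep the StackIT-, Infercom-, and DeepMask-tagged entries, mirroring what the EU-hosted filter in the model selector does in the UI.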