# DeepMask

## Docs

- [Set up your DeepMask account](https://documentation.deepmask.io/account-setup.md): Create your DeepMask account, select a plan, configure your team, and review the security and compliance settings that protect all your data.
- [DeepMask Security and Compliance Overview](https://documentation.deepmask.io/enterprise/security-compliance.md): DeepMask runs on EU sovereign cloud infrastructure with GDPR compliance, enterprise-grade encryption, EU data residency, and a strict no-training policy.
- [Manage Your Team in DeepMask Enterprise](https://documentation.deepmask.io/enterprise/team-management.md): Invite team members, assign admin and member roles, share projects, and oversee AI token usage across your entire organization from one workspace.
- [Track AI Token Usage Across Your Team](https://documentation.deepmask.io/enterprise/usage-analytics.md): Monitor total token consumption, input and output breakdowns, and per-model usage to control AI spend and demonstrate ROI across your organization.
- [Use the DeepMask AI Chat Workspace](https://documentation.deepmask.io/features/chat-workspace.md): Start conversations, switch between 25+ AI models mid-chat, upload files, and use extended thinking — all from one unified interface.
- [Analyze Data and Create Charts with DeepMask](https://documentation.deepmask.io/features/data-visualization.md): Upload spreadsheets, CSVs, or documents and let AI generate charts, insights, and polished reports instantly — no coding or BI tools required.
- [Connect Tools to DeepMask via MCP](https://documentation.deepmask.io/features/mcp-connectors.md): Link Google Drive, Gmail, Salesforce, SharePoint, and more to your AI workspace using the open MCP framework — directly from the chat interface.
- [Organize Work with DeepMask Projects](https://documentation.deepmask.io/features/projects.md): Create persistent AI workspaces in DeepMask that retain custom instructions, uploaded files, and multiple chat threads across every session for ongoing work.
- [Control AI Response Style in DeepMask](https://documentation.deepmask.io/features/response-styles.md): Choose from five response styles — Normal, Concise, Explanatory, Learning, or Formal — to match the AI's tone and depth to your exact needs.
- [Real-Time Web Search in DeepMask](https://documentation.deepmask.io/features/web-search.md): Enable Perplexity-powered web search in any chat to get cited, real-time answers grounded in current information without leaving the conversation.
- [Welcome to DeepMask](https://documentation.deepmask.io/introduction.md): DeepMask unifies 25+ leading AI models in one GDPR-compliant EU workspace. No vendor lock-in, no model training on your data, built for enterprise teams.
- [How to Choose the Right AI Model in DeepMask](https://documentation.deepmask.io/models/choosing-a-model.md): Match your task to the right AI model in DeepMask. Compare models by use case — coding, research, writing, speed, reasoning, and EU-only data residency.
- [DeepSeek V3 & V3.1 — Efficient reasoning and coding models](https://documentation.deepmask.io/models/deepseek.md): DeepSeek V3 and V3.1 (Infercom) on DeepMask. 671B MoE architecture delivering frontier-class coding, math, and document analysis without image support.
- [Gemini 2.5 Flash & Pro — Google's multimodal AI models](https://documentation.deepmask.io/models/gemini.md): Explore Gemini 2.5 Flash and 2.5 Pro on DeepMask. From high-throughput multimodal processing to deep reasoning, both models offer a 1M+ token context window.
- [GLM-4.7 & GLM-4.7 Flash — Z.ai's bilingual AI models](https://documentation.deepmask.io/models/glm.md): GLM-4.7 and GLM-4.7 Flash on DeepMask. Z.ai's bilingual models for advanced reasoning, agentic coding, UI generation, and high-speed automation workflows.
- [GPT-4.1 — OpenAI's precision model for large documents](https://documentation.deepmask.io/models/gpt-4-1.md): OpenAI's precision and context specialist. 1M token window, GPQA 66.6%, 0.62s TTFT, 91 tok/s. Best for large document processing and spreadsheets.
- [GPT-4o — Real-time audio, vision, and text AI model](https://documentation.deepmask.io/models/gpt-4o.md): OpenAI's high-frequency multimodal model for real-time voice, vision, and text interactions. 128K context, 0.12s TTFT, 112 tok/s, GPQA 74.0%.
- [GPT-5 Series — OpenAI's most capable reasoning models](https://documentation.deepmask.io/models/gpt-5.md): GPT-5.2, 5.3, and 5.4 cover the full spectrum from production workhorse to autonomous agent with computer use. GPQA scores up to 92.4%.
- [GPT-o3 Mini — Fast STEM reasoning model with tool support](https://documentation.deepmask.io/models/gpt-o3-mini.md): OpenAI's compact reasoning model for STEM and coding tasks. 200K context window, GPQA 79.7%, 0.25s TTFT, 141 tok/s. No image input support.
- [GPT-OSS 120B — EU-hosted open-weight reasoning model](https://documentation.deepmask.io/models/gpt-oss-120b.md): OpenAI's open-source 120B model, EU-hosted via StackIT and Infercom. GPQA 80.9%, transparent chain-of-thought, adaptive reasoning effort levels.
- [Haiku 4.5 — Anthropic's fastest high-volume AI model](https://documentation.deepmask.io/models/haiku.md): Claude Haiku 4.5 by Anthropic — fastest Anthropic model. 200K context, GPQA 73.2%, 0.20s TTFT, 180+ tok/s. Ideal for high-volume tasks and real-time agents.
- [Kimi K2 & K2.5 — MoonshotAI's agent swarm models](https://documentation.deepmask.io/models/kimi-k2.md): Kimi K2 and Kimi K2.5 on DeepMask. 1-trillion parameter MoE models with Agent Swarm Mode, multimodal reasoning, and 2M token context for complex agentic tasks.
- [MiniMax M2, M2.1 & M2.5 — MoE models for coding and agents](https://documentation.deepmask.io/models/minimax.md): MiniMax M2, M2.1, and M2.5 on DeepMask. Expert MoE models for full-stack coding, mobile development, and EU-hosted agentic workflows with interleaved thinking.
- [Mistral Large 3 & Medium 3 — Enterprise open-weight models](https://documentation.deepmask.io/models/mistral.md): Mistral Large 3 and Mistral Medium 3 on DeepMask. State-of-the-art open-weight models for multilingual reasoning, coding, and scalable enterprise workflows.
- [Opus — Anthropic's most capable models for complex tasks](https://documentation.deepmask.io/models/opus.md): Claude Opus 4.5 and 4.6 from Anthropic. Up to 91.3% GPQA, 1M context, adaptive reasoning. Best for demanding coding, research, and agents.
- [AI Models Available in DeepMask](https://documentation.deepmask.io/models/overview.md): Browse the full catalog of 25+ AI models in DeepMask, organized by provider — from OpenAI and Anthropic to EU-hosted options via StackIT and Infercom.
- [Qwen (DeepMask) & Qwen3 (StackIT) — Alibaba's AI models](https://documentation.deepmask.io/models/qwen.md): Qwen (DeepMask) and Qwen3 (StackIT) on DeepMask. Alibaba's flagship models with dual-mode reasoning, 1M token context, and EU-hosted infrastructure via StackIT.
- [Sonnet — Anthropic's balanced models for agents and coding](https://documentation.deepmask.io/models/sonnet.md): Claude Sonnet 4.5 and 4.6 by Anthropic. 1M token context, GPQA up to 84.4%, native computer use, and 30-hour autonomous coding. Best for engineering agents.
- [Get started with DeepMask](https://documentation.deepmask.io/quickstart.md): Sign up, select an AI model, and send your first message in minutes. This guide walks you through every step to get productive with DeepMask immediately.
- [Contact DeepMask Support and Sales](https://documentation.deepmask.io/support/contact.md): Reach the DeepMask team by email or book a call for support, sales questions, enterprise plans, and partnership inquiries. Response within 42 hours.
- [Frequently Asked Questions — DeepMask](https://documentation.deepmask.io/support/faq.md): Answers to common questions about DeepMask's AI models, data privacy, GDPR compliance, Projects, MCP connectors, response styles, and enterprise pricing.

## OpenAPI Specs

- [openapi](https://documentation.deepmask.io/api-reference/openapi.json)