
DeepMask’s usage analytics dashboard gives enterprise admins a clear view of how your team consumes AI across the platform. You can see total token usage, how tokens split between input and output, and a per-model breakdown showing which models your team relies on most. This data helps you manage AI spend, justify costs to stakeholders, and make informed decisions about which models to route different workloads to.

What the usage dashboard shows

The dashboard surfaces three core metrics for your organization:

Total tokens

The combined count of all tokens processed across every conversation and project in your workspace during the selected period.

Input tokens

Tokens sent to the model—your messages, uploaded file contents, system instructions, and project context. Input tokens typically represent the majority of consumption.

Output tokens

Tokens generated by the model in its responses. Output tokens are smaller in volume but often priced higher by underlying model providers.
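This input/output price asymmetry is easy to check with quick arithmetic. A minimal sketch, using hypothetical per-million-token prices (substitute your providers' actual rates):

```python
# Hypothetical per-million-token prices; check your provider's actual rates.
IN_PRICE = 3.00    # $ per 1M input tokens
OUT_PRICE = 15.00  # $ per 1M output tokens

def month_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate one month of spend at the prices above."""
    return input_tokens / 1e6 * IN_PRICE + output_tokens / 1e6 * OUT_PRICE

# Input tokens dominate volume, but pricier output tokens still drive real cost:
cost = month_cost(input_tokens=40_000_000, output_tokens=5_000_000)
print(f"${cost:.2f}")  # 40M input -> $120, 5M output -> $75
```

Even with eight times as many input tokens as output tokens, output still accounts for well over a third of the bill at these sample rates.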

Model breakdown

A per-model view showing how token consumption distributes across the 25+ models available in DeepMask—for example, Mistral Large, Opus 4.5, GPT-4, DeepSeek V3, and others your team has used.

Reading the model breakdown

The model breakdown is the most actionable part of the usage dashboard. It shows you which models consumed the most tokens during a given period, so you can answer questions like:
  • Are teams defaulting to the most expensive models for every task, including simple ones?
  • Which departments or projects are the heaviest users of premium reasoning models?
  • Has usage shifted since you onboarded a new team or started a new project?
A high input token count usually means your team is uploading large documents, using long system prompts, or working within projects with extensive context. Review project instructions and file uploads to ensure you’re only passing context that the model actually needs.

If the breakdown shows most tokens concentrated on a single high-cost model, consider whether all those tasks require that capability. Switching routine writing or summarization tasks to a faster model reduces cost without degrading output quality for those use cases.

Spikes in output tokens can indicate conversations asking for very long responses—full reports, detailed code, or extended analysis. Check whether those outputs are being used or whether response length can be constrained with project instructions.
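If you export the breakdown data, the same questions can be answered programmatically. A minimal sketch, assuming a hypothetical export of (model, input_tokens, output_tokens) rows; DeepMask's actual export format may differ, so adjust the field names to match:

```python
from collections import defaultdict

# Hypothetical usage rows: (model, input_tokens, output_tokens).
rows = [
    ("opus-4.5", 12_000_000, 1_500_000),
    ("gpt-4", 3_000_000, 400_000),
    ("deepseek-v3", 1_000_000, 100_000),
]

# Sum input + output per model, then report each model's share.
totals = defaultdict(int)
for model, inp, out in rows:
    totals[model] += inp + out

grand_total = sum(totals.values())
for model, tokens in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{model:<12} {tokens:>12,} tokens  {tokens / grand_total:6.1%}")
```

Sorting by total tokens puts the models driving the most consumption at the top, which is usually where cost-optimization effort pays off first.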

Optimizing AI spend with usage data

Token usage data translates directly into cost. Here is how to use it to get better ROI from your DeepMask enterprise plan:
1. Establish a baseline

Review your first month of usage to understand your team’s natural consumption patterns. Note which models dominate the breakdown and what total token volumes look like week over week.
2. Match models to task types

DeepMask gives you access to 25+ models. Use the breakdown to identify tasks running on premium reasoning models that could be handled by faster, lower-cost alternatives. Routine drafting, translation, and summarization rarely need extended thinking models.
3. Audit project context

Input tokens often grow as projects accumulate files and instructions. Periodically review project files and instructions to remove outdated context. Lean project context means lower input token counts on every conversation.
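The input-token impact of project context is easy to estimate, since context counts toward input on each exchange. A rough back-of-envelope sketch with hypothetical numbers, assuming the full project context is resent with every turn:

```python
# Hypothetical numbers; substitute your own project and usage figures.
context_tokens = 30_000   # tokens of project files + instructions
conversations = 400       # conversations per month in the project
turns_per_conv = 8        # user turns per conversation

# Input-token overhead from project context alone, per month.
overhead = context_tokens * conversations * turns_per_conv
print(f"{overhead:,} input tokens/month from context alone")  # 96,000,000
```

At these sample figures, trimming the project context by a third would save tens of millions of input tokens per month before anyone changes how they work.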
4. Review usage with team leads

Share usage data with department leads quarterly. Usage patterns often reflect workflow inefficiencies—teams that use AI heavily for tasks that could be templatized or automated with MCP connectors can reduce token usage while getting faster results.

Models like Haiku 4.5 and Gemini 2.0 Flash are well-suited for high-volume, lower-complexity tasks. Reserving extended thinking models like Opus 4.6 for genuinely complex reasoning tasks keeps your model breakdown balanced and your costs predictable.
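The savings from rerouting routine work can be estimated before changing anything. A minimal sketch with hypothetical per-million-token prices for a premium and a fast model (substitute real provider rates):

```python
# Hypothetical prices per 1M tokens; substitute real provider rates.
PRICES = {"opus-4.6": 20.00, "haiku-4.5": 1.00}

def blended_cost(tokens: int, routine_share: float) -> float:
    """Monthly cost if `routine_share` of tokens moves to the cheaper model."""
    routine = tokens * routine_share
    complex_work = tokens - routine
    return (routine * PRICES["haiku-4.5"]
            + complex_work * PRICES["opus-4.6"]) / 1e6

before = blended_cost(50_000_000, routine_share=0.0)
after = blended_cost(50_000_000, routine_share=0.6)
print(f"before ${before:,.0f}, after ${after:,.0f}")  # before $1,000, after $430
```

At these sample rates, moving 60% of token volume to the fast model cuts the bill by more than half while leaving complex reasoning work on the premium model.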

ROI context for enterprise teams

DeepMask is designed so that AI scales with your company, not just individuals. Usage analytics give you the organizational visibility to measure the return on your AI investment—not just whether people are using it, but how, and at what cost.

Token usage data is most useful when connected to business outcomes. Consider tracking usage alongside output metrics for your teams: volume of reports produced, code shipped, campaigns drafted, or research completed. This gives finance and leadership teams a concrete basis for evaluating the value DeepMask delivers against what it costs.

For custom usage reporting, consolidated billing, or help interpreting your analytics data, contact the sales team at contact@deepmask.io or book a 30-minute call.