BigModel AI: the Zhipu AI open platform brand
The BigModel AI console is the developer surface behind Z.ai — where API keys are issued, billing is tracked, models are selected, and dashboards surface per-project usage. This reference explains what BigModel AI offers, how it relates to the Z.ai brand refresh, and what each surface does.
- Current tier: GLM-4.5+
- API contract: OpenAI-compatible
- Free trial: credit included
- Billing model: per-token
Field Notes
BigModel AI is not a separate product from Z.ai — it is the legacy developer console name that predates the brand refresh. Teams who registered under BigModel and teams who registered under Z.ai share the same account system, the same API keys, and the same billing pool.
What BigModel AI covers
The BigModel open platform has four primary surfaces: API key management, usage-based billing, a model picker across the GLM catalog, and project dashboards.
The BigModel AI name first appeared as the developer-facing identity for the platform Zhipu AI built to expose its models programmatically. When the consumer-facing brand transitioned to Z.ai, the BigModel name remained in place for the developer console because a large developer community was already anchored to that URL, that documentation trail, and that API hostname. Changing the developer-facing brand too quickly would have broken bookmarks, internal runbooks, and billing notifications across thousands of active accounts.
The result is a two-label environment that still confuses first-time visitors: the chat and marketing surface lives under Z.ai branding, while the console, the API documentation, and the billing receipts continue to reference BigModel. Both surfaces use the same account. There is no migration step required, no separate sign-up, and no billing split between the two names.
What the BigModel platform offers
Four concrete things happen on the BigModel console that do not happen through the Z.ai chat surface alone.
The first is API key management. Every programmatic call to a GLM model requires an API key issued from the BigModel console. Keys are scoped to a project, can carry rate-limit overrides, and can be revoked individually without touching other active keys in the same account. Most teams run one key per environment — development, staging, and production — and rotate on a fixed schedule aligned with their internal security policy.
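The one-key-per-environment pattern above can be sketched as a small resolver that reads each environment's key from the process environment. The variable names and environments here are illustrative assumptions, not a BigModel convention; teams should match whatever their secrets store actually exposes.

```python
import os

# One key per environment, resolved from the deployment's secrets store.
# These variable names are illustrative, not a BigModel convention.
ENV_KEY_VARS = {
    "development": "BIGMODEL_API_KEY_DEV",
    "staging": "BIGMODEL_API_KEY_STAGING",
    "production": "BIGMODEL_API_KEY_PROD",
}

def resolve_api_key(environment: str) -> str:
    """Return the API key for the given environment, failing loudly if unset."""
    var = ENV_KEY_VARS.get(environment)
    if var is None:
        raise ValueError(f"unknown environment: {environment!r}")
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; issue a key in the BigModel console")
    return key
```

Failing loudly when a key is missing keeps a misconfigured staging deploy from silently borrowing a production key, which also keeps per-environment spend attribution in the billing dashboard accurate.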
The second is usage-based billing. The BigModel console tracks token consumption at the request level and surfaces it through a dashboard that groups spend by project, by model, by date range, and by request type. A team running both a chat-facing product and a background summarisation pipeline on the same account can split the dashboard by project label and see exact per-environment spend without cross-contamination.
The third is the model picker. The GLM catalog is broad enough that not every model is right for every workload. The BigModel console lists every available variant with its pricing tier, context window, and capability flags. Selecting a model in the console pins it as the default for that project's API calls, reducing the chance that a code-level omission silently falls back to a lower-tier model.
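The same fallback risk can be guarded against in application code by refusing to build a request with a model name outside an explicit allow-list. The model identifiers below are placeholders; check the console's model picker for the exact names available to your account.

```python
# Placeholder model identifiers, not an authoritative catalog.
ALLOWED_MODELS = {"glm-4", "glm-4.5"}

def build_request(model: str, messages: list[dict]) -> dict:
    """Build a chat request body, rejecting an unknown or empty model name
    instead of letting the platform fall back to a project default."""
    if model not in ALLOWED_MODELS:
        raise ValueError(f"model {model!r} is not in the allow-list")
    return {"model": model, "messages": messages}
```

The console-side default and the code-side allow-list are complementary: the first catches omissions, the second catches typos.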
The fourth is the prompt-history dashboard. The BigModel console retains a configurable number of recent request and response pairs for inspection and debugging. This is useful for teams that need to audit model outputs for policy compliance or that want to replay a failed request with a modified prompt without rebuilding the input from application logs.
Billing structure on the BigModel platform
The BigModel AI billing model is per-token at the request level, with separate rates for input and output tokens, and a free-trial credit applied at account creation.
Every new account receives a free trial credit that covers a meaningful volume of requests across the GLM family. The credit is applied automatically and does not require a payment method to activate, which makes it practical for evaluation workflows. Once the credit is exhausted, the console requires a payment method to continue generating tokens. Accepted payment methods have expanded to include major international credit-card networks, though the underlying pricing tier is denominated in CNY and converted at the gateway for non-China accounts.
The per-token rates vary by model tier. The flagship GLM-4.5+ generation carries a higher rate than the mid-size variants, and the code-specialised GLM-Coder branch sits at a different rate again. Input tokens and output tokens are billed at separate rates, with output typically priced higher because generation is computationally heavier than prefill. The Zhipu AI pricing reference on this site gives a full breakdown of the cost classes by model tier.
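The split-rate arithmetic is simple enough to sketch. The rates below are placeholders expressed per million tokens, not published BigModel prices; substitute the real figures from the pricing reference.

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float, output_rate: float) -> float:
    """Estimate request cost with separate input/output rates per 1M tokens."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Placeholder rates, not published BigModel prices.
monthly = estimate_cost(120_000, 30_000, input_rate=0.5, output_rate=2.0)
```

With these illustrative numbers, 30k output tokens cost as much as 120k input tokens, which is why output-heavy workloads such as long-form generation dominate a typical invoice.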
| BigModel surface | Purpose | Notes |
|---|---|---|
| API key console | Issue, scope, and revoke project-level API keys | One key per environment is the common pattern; revocation is immediate |
| Billing dashboard | Per-token usage tracking grouped by project and date range | Denominated in CNY; international cards accepted via gateway conversion |
| Model picker | Select the active model variant for each project | Lists pricing tier, context window, and capability flags per model |
| Prompt-history viewer | Inspect recent request/response pairs for debugging and compliance | Configurable retention window; useful for policy-audit workflows |
| Rate-limit settings | Override default per-minute and per-day token limits by project | Higher limits require a manual request to the platform team for approval |
How BigModel AI relates to the Z.ai brand
The Z.ai brand is the consumer-facing name; BigModel is the developer-facing name. Both live on the same account infrastructure.
The practical implication for an outside team is that documentation referenced under "BigModel AI" and documentation referenced under "Z.ai" often describe the same thing at different layers. A blog post about "Z.ai API access" and a developer guide titled "BigModel API quickstart" are covering the same endpoint, the same authentication header, and the same pricing surface. Neither is authoritative over the other; they are two editorial views of a single platform.
For developers who encounter both names in a single project: the API hostname you have in your configuration file is the one that matters. There is no hostname migration pending. The BigModel AI developer console URL remains stable, and the Z.ai consumer domain is additive, not a replacement. Guidance from NIST's AI Risk Management Framework is a useful baseline for any team formalising its evaluation and procurement process around third-party API-based AI services.
Getting started on BigModel AI
Three steps cover the typical first-session setup for a new BigModel account: register, issue a key, make a test call.
Registration requires an email address and a phone number for SMS verification. International numbers from most Western European, North American, and major Asia-Pacific markets work without additional steps. Once verified, navigate to the API key section, create a key scoped to a new project, and copy it to your environment's secrets store.
A test call against the chat-completions endpoint confirms the key is live. The request shape is identical to the OpenAI chat-completions contract: a model name drawn from the BigModel model list, a messages array, and the API key in the Authorization header. Any existing test harness built against the OpenAI API will produce a valid BigModel request with only the base URL and the key changed. The API reference page on this site walks through the specific header and base URL values.
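The request shape described above can be assembled with nothing but the standard library. The base URL below is a deliberate placeholder, not the real endpoint; take the actual base URL and model name from the API reference page, and the key from the console.

```python
import json

# Placeholder values: substitute the real base URL and model name from the
# BigModel API reference; the key comes from the console.
BASE_URL = "https://api.example-bigmodel.invalid/v4"  # hypothetical, not real
API_KEY = "your-key-here"

def chat_request(model: str, user_message: str) -> tuple[str, dict, bytes]:
    """Assemble an OpenAI-style chat-completions request: URL, headers, body."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }).encode("utf-8")
    return url, headers, body
```

Because the body and headers match the OpenAI chat-completions contract, the same tuple can be handed to any HTTP client; an existing OpenAI SDK client needs only its base URL and key changed, as the article notes.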
"I had a BigModel account for months before the Z.ai branding appeared. When it did, I expected a migration email. There was none — same console, same key, same endpoint. The rebrand was genuinely painless for our pipeline."
Compute Architect · Pinedusk Networks · Burlington, VT
BigModel AI frequently asked questions
Five questions covering platform identity, API access, model selection, billing, and OpenAI compatibility.
What is BigModel AI?
BigModel AI is the developer-facing open platform brand that Zhipu AI operates alongside the consumer Z.ai surface. It provides API key management, per-token billing, a model picker across the GLM family, and usage dashboards for production workloads.
Is BigModel the same as Z.ai?
BigModel and Z.ai share the same underlying account and model catalog. BigModel is the legacy console name that developers registered under before the Z.ai brand refresh; both names resolve to the same platform and the same billing surface.
How do I get a BigModel API key?
Register on the BigModel open platform with an email address and a phone number capable of receiving SMS. After verification, the API key console becomes available under your account settings. Keys are project-scoped and can be revoked individually.
What models can I access through BigModel AI?
The BigModel console exposes the full GLM catalog: GLM-4, GLM-4.5+, the code-specialised variants, and multimodal builds. The model picker inside the console lists each available variant alongside its pricing tier and context window size.
Does BigModel support OpenAI-compatible requests?
Yes. The BigModel API endpoint follows the OpenAI chat-completions contract. Switching an existing OpenAI SDK client to BigModel requires changing the base URL and the API key; the request and response shapes stay identical. Most teams complete the switch in under an hour.
BigModel AI in the broader Z.ai ecosystem
How the BigModel platform connects to the surrounding reference pages on this site.
The BigModel AI console sits at the centre of the developer experience for the GLM model family. Developers who want programmatic access via the Zhipuai API generate their keys here, and the billing for every token consumed through the Zhipu AI open platform flows through the same account. The Zhipu AI pricing page on this site maps the per-token tiers that the BigModel dashboard reflects in its spend summaries. Teams setting up English access will find the BigModel console has its own language toggle, separate from the chat surface. The integrations reference covers how to wire BigModel API keys into LangChain, LlamaIndex, and other common frameworks.