Z.ai resource hub: where to find help and what to ask
A practical guide to the support channels, community forums, and documentation sources for questions about the GLM model family, ChatGLM, and the BigModel platform.
Question channels at a glance
- GitHub: best for reproducible bugs
- Hugging Face: best for usage questions
- BigModel platform: best for billing and account issues
Essentials Recap
Five distinct channels exist for getting Z.ai and GLM questions answered, and the right one depends on question type. GitHub issues work best for reproducible bugs. Hugging Face discussions work best for usage patterns and integration questions. The BigModel support portal handles billing and account issues. Community Discord servers are useful for quick orientation. Academic methodology questions benefit from external research guidance.
Understanding the support landscape
The Z.ai and Zhipu AI ecosystem has no single support desk — questions route differently depending on whether they concern a model behaviour, a platform account, an API contract, or a community integration.
Developers and researchers new to the Z.ai ecosystem quickly find that there is no single inbox that handles all questions. The platform behind Z.ai grew from a research lab, and its support infrastructure reflects that history: technically detailed questions about model behaviour are answered through GitHub and Hugging Face, while account and billing questions go through the platform portal, and broad integration discussions happen in community Discord servers or on forums like Reddit's machine learning communities.
Knowing which channel to use before you ask saves significant time. A bug report filed in Hugging Face discussions may sit unnoticed while the same issue raised as a GitHub issue with a minimal reproducible example gets triaged within a day. A billing question posted in a community Discord wastes everyone's time when it should be a platform support ticket. This page maps the terrain so you know where to start.
GitHub issues for model bugs and inference problems
The Zhipu AI GitHub organisation is the canonical place to report reproducible issues with model behaviour, inference code, and fine-tuning tooling.
The lab maintains an active GitHub organisation hosting the model inference code, fine-tuning recipes, evaluation harnesses, and the model cards that accompany each release. For a developer who finds that a specific input produces unexpected output, or that the inference code fails under a particular configuration, this is the right starting point. The issue tracker on each repository is monitored by the engineering team and by community contributors who may have solved the same problem.
The quality of the response depends almost entirely on the quality of the question. Issues that include the exact model version, the hardware and software configuration, a minimal reproducible example, and the actual versus expected behaviour get resolved faster than vague reports. Before filing a new issue, search the closed issues for the same symptom — the most common inference problems have usually been reported and resolved in a previous release cycle, and the closed issue will link to the fix or the workaround.
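When a report needs a minimal reproducible example, the sketch below shows one shape it can take for a transformers-based setup. The model id, prompt, and generation settings are placeholders rather than a prescription; substitute whichever release and input actually trigger the behaviour you are reporting.

```python
# Minimal reproducible example to attach to a GitHub issue.
# Model id and prompt are placeholders for the release and input being reported.
import torch
import transformers
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "THUDM/chatglm3-6b"  # illustrative only

# Record the exact environment alongside the repro.
print(f"transformers={transformers.__version__} torch={torch.__version__} cuda={torch.version.cuda}")
if torch.cuda.is_available():
    print(f"gpu={torch.cuda.get_device_name(0)}")

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(MODEL_ID, trust_remote_code=True).half().cuda().eval()

# The smallest input that still triggers the behaviour you are reporting.
prompt = "Translate to French: hello"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Stating the observed output and the expected output directly under a snippet like this gives maintainers everything they need to triage without a follow-up round trip.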
Hugging Face discussions for usage and integration questions
The Hugging Face discussion threads attached to each model's page host the broadest community of practitioners who have used the model in real-world integrations.
Every major GLM and ChatGLM release on Hugging Face has a Discussions tab that accumulates practical questions from developers who have tried the model in their own setups. These threads are searchable and often contain exactly the integration pattern you are trying to replicate — a specific quantisation configuration, a prompt format that works well for a particular task, or a workaround for a known limitation on a specific hardware class.
When the answer is not already in the thread archive, posting a new question with context about your use case and what you have already tried tends to attract helpful responses from the broader community within a day or two. Hugging Face discussions are particularly good for questions that are not quite bugs — cases where the model behaves as designed but the default behaviour does not match your expectations, and you want to understand how others have adjusted it.
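One example of the integration detail these threads carry is a quantised-load configuration. The sketch below uses transformers' BitsAndBytesConfig for a 4-bit load; it assumes bitsandbytes and accelerate are installed, and the repo id and settings are illustrative rather than a recommended configuration for any particular release.

```python
# Sketch of a 4-bit quantised load, the kind of configuration often shared
# in Hugging Face discussion threads. Repo id and settings are illustrative.
import torch
from transformers import AutoModel, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "THUDM/chatglm3-6b"  # placeholder; use the release you are working with

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",          # requires accelerate
    trust_remote_code=True,
)
```

If a configuration like this behaves unexpectedly on your hardware, searching the discussion thread for the quantisation keyword before posting usually shows whether others have already hit the same limit.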
BigModel platform support for account and billing
Account registration problems, billing discrepancies, and API key issues are platform matters and belong in the BigModel support portal — not in GitHub or community channels.
The BigModel console has a support portal accessible after login that handles account-level issues: registration problems, payment failures, key provisioning failures, usage billing discrepancies, and rate-limit increase requests. These are not appropriate questions for GitHub or Hugging Face — they involve your specific account state, which community channels cannot access.
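Before filing a ticket for a suspected key provisioning failure, a quick smoke test can separate an account-side problem from a client-side one. The sketch below is a minimal check assuming the OpenAI-style v4 chat endpoint; the URL, model identifier, and payload shape should be confirmed against the current API reference rather than taken from here.

```python
# Smoke test for a freshly provisioned BigModel API key.
# Endpoint URL, model name, and payload shape are assumptions --
# verify them against the current API reference.
import requests

API_KEY = "YOUR_BIGMODEL_API_KEY"  # placeholder
URL = "https://open.bigmodel.cn/api/paas/v4/chat/completions"  # assumed v4 endpoint

resp = requests.post(
    URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "glm-4",  # assumed model identifier
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=30,
)

# A 401/403 points at the key or account state (a support-portal matter);
# a 200 means the key works and the problem is likely in your client code.
print(resp.status_code)
print(resp.text[:500])
```

Including the status code and response body (with the key redacted) in the support ticket gives the platform team a concrete starting point.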
Response times from the platform support portal are faster during Chinese business hours. Outside those hours, the most effective approach is to file a detailed ticket and check the platform's status page for any ongoing incidents that may explain the problem. For teams outside China, academic benchmarking methodology resources from Stanford CRFM on AI platform evaluation are useful background when preparing a formal vendor assessment alongside a support escalation.
Where to ask: question-type matrix
Five question types mapped to the channel most likely to produce a timely, accurate answer.
| Question type | Where to ask | Typical response time |
|---|---|---|
| Reproducible model bug or inference failure | GitHub issues on the relevant repository (THUDM org) | 1–3 business days for a well-formed issue |
| Integration pattern or usage question | Hugging Face Discussions on the model's page | Hours to 2 days depending on topic activity |
| Account, billing, or key management issue | BigModel platform support portal (after login) | 1–2 days; faster during Chinese business hours |
| Quick orientation or community discovery | Community Discord servers or Reddit ML communities | Minutes to hours for popular topics |
| Formal evaluation or enterprise procurement | BigModel business contact through the platform console | 3–5 business days for initial response |
Resource and support questions
Five common questions cover where to ask different types of Z.ai questions and how to get faster, better answers.
Where is the best place to ask GLM model questions?
The GitHub issue tracker on the official Zhipu AI organisation is the best channel for reproducible bugs and model-behaviour questions tied to specific releases. For general usage questions and integration patterns, Hugging Face Discussions on the model's page reaches a broader community of practitioners who have solved similar problems. Platform account questions belong in the BigModel support portal rather than in either of those community channels.
Where do I report a BigModel API billing or account issue?
Account and billing issues for the BigModel platform are handled through the platform's own support portal, accessible after login. The editorial team at this reference site cannot assist with account or billing matters — we are an independent reference, not a support proxy for the upstream platform. Filing a detailed ticket through the official portal is the only path that has access to your account state.
Is there an English-language community for Z.ai users?
The Hugging Face model page discussions and GitHub issues are both primarily English-language channels with substantial activity from developers outside China. The upstream platform's Discord server, when active, has an English-language channel. Most practical integration questions from outside developers get answered fastest through GitHub issues on the relevant repository, because the engineering team monitors those channels in addition to community contributors.
Where can I find the latest GLM model cards?
Model cards for GLM and ChatGLM releases are published on Hugging Face under the THUDM organisation profile. Each card includes architecture notes, license terms, training data summary, benchmark results, and usage examples. These are the canonical source for release-specific information, and they are updated when the upstream team revises a model or corrects a documentation error.
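For scripted workflows, a model card can also be fetched programmatically through the huggingface_hub client rather than read in the browser. The repo id below is illustrative; substitute the release you are evaluating.

```python
# Fetch a model card programmatically with huggingface_hub.
# The repo id is illustrative; substitute the release you are evaluating.
from huggingface_hub import ModelCard

card = ModelCard.load("THUDM/chatglm3-6b")
print(card.data)           # structured metadata: license, tags, language, etc.
print(card.text[:1000])    # the card body itself
```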
What should I ask before opening a GitHub issue on a GLM repository?
Before opening an issue: confirm you are running the current release, reproduce the problem with the smallest possible input, note the exact model version and inference setup (GPU type, framework version, quantisation level if any), and search existing closed issues for the same symptom. A minimal reproducible example is the single thing that most accelerates resolution. Vague reports without reproduction steps rarely get actionable responses.
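The environment details listed above can be collected in one short snippet and pasted into the issue body alongside the reproducible example. This is a minimal sketch for a PyTorch-plus-transformers setup; adjust it for whatever stack you actually run.

```python
# Environment snapshot to paste into the issue body alongside the repro.
import platform
import torch
import transformers

print("python      :", platform.python_version())
print("transformers:", transformers.__version__)
print("torch       :", torch.__version__)
print("cuda        :", torch.version.cuda)
if torch.cuda.is_available():
    print("gpu         :", torch.cuda.get_device_name(0))
```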
Connected reference pages for the Z.ai developer journey
From the resource hub, most readers continue to either the API reference for integration detail or the access walkthrough for account setup guidance.
Once you know where to ask questions, the next step is usually building something. The API reference covers the BigModel endpoint contract and authentication pattern in detail. For new accounts, the access walkthrough explains account setup, password recovery, and what login unlocks on the platform. Teams evaluating whether the GLM model family is the right fit will want the model reference and the comparison page for a direct side-by-side. The GitHub reference maps the full Zhipu AI repository structure. For questions about the BigModel open platform console itself, the platform reference covers key management and billing in depth. The security page covers risk categories relevant to production use, and the editorial contact page is the right destination for feedback about this reference site specifically.