# Opus-GLM

An orchestration system for Claude Code that turns Opus into a lead architect delegating work to parallel GLM-5.1 worker agents via the Z.ai GLM API.

Give Opus a task. It breaks it into subtasks, spawns specialized agents (code reviewers, security auditors, language experts), verifies their output, and delivers the result — all autonomously.
## Quick Start

### macOS / Linux

```shell
git clone https://github.com/itohnobue/opus-glm
cd opus-glm
./install.sh /path/to/your/project
```
### Windows (PowerShell)

```powershell
git clone https://github.com/itohnobue/opus-glm
cd opus-glm
.\install.ps1 C:\path\to\your\project
```
The installer copies everything into your project and optionally sets up the `claude-glm` wrapper for Z.ai API access.
After installation, open your project with Claude Code — Opus-GLM activates automatically.
## How It Works

```
You ──► Opus (lead) ──► Plan ──► Spawn agents ──► Verify ──► Deliver
                                       │
                                 ┌─────┼─────┐
                                 ▼     ▼     ▼
                              Agent  Agent  Agent    (parallel GLM-5.1 workers, max 3)
                                 │     │     │
                                 ▼     ▼     ▼
                             Report Report Report    (tmp/{name}-report.md)
                                 │     │     │
                                 └─────┼─────┘
                                       ▼
                                Lead verifies
                                every finding
                                       │
                                       ▼
                                 Final result
```
Opus is the orchestrator. It reads your task, plans the workflow, writes detailed prompts for each agent, spawns them in parallel, waits for completion, verifies every claim against the actual code, fixes issues, and delivers.

GLM-5.1 agents are workers. Each gets a focused prompt with an agent persona (e.g. `code-reviewer`, `python-pro`, `security-reviewer`), specific files to examine, questions to answer, and an explicit list of writable files. They write their findings to `tmp/{name}-report.md`.

Agents are spawned via `claude-glm` — a wrapper that redirects Claude Code to the Z.ai GLM API, where agents run on `glm-5.1`.
## Components

### Orchestration (Opus-GLM Core)

The workflow is defined in `CLAUDE.md` and activates automatically when Opus receives a non-trivial task. The lead:
- Plans — scopes the task, identifies files, picks agents, builds the dependency graph
- Prepares — writes prompts with agent persona + key files + must-answer questions + writable-files list + quality rules
- Spawns — runs agents in batches (max 3 parallel) via `spawn-glm.sh`
- Waits — monitors progress and detects stalled agents via `wait-glm.sh`
- Verifies — reads every finding, checks cited files, labels VERIFIED/REJECTED/DOWNGRADED/UNABLE TO VERIFY
- Delivers — synthesizes results, fixes issues, writes a summary
Multi-stage workflows are supported — later stages use verified results from earlier stages. Stages can be iterative (mandatory for production checks, final audits) — agents run repeatedly with varied approaches until convergence (2 consecutive iterations with no new actionable findings). Agents have abort conditions — they stop and report blockers instead of retrying endlessly.
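The convergence rule for iterative stages (stop after 2 consecutive iterations with no new actionable findings) can be sketched in plain shell. Note that `run_iteration` here is a hypothetical stub simulating agent passes whose finding counts taper off; the real work is done by spawned agents.

```shell
#!/bin/sh
# Sketch of the iterative-stage loop: re-run an agent pass until two
# consecutive iterations produce no new actionable findings.
# run_iteration is a stand-in that prints the number of new findings.
run_iteration() {
  case "$1" in
    0) echo 3 ;;   # first pass: several findings
    1) echo 1 ;;   # second pass: one new finding
    *) echo 0 ;;   # later passes: nothing new
  esac
}

quiet_streak=0
iteration=0
while [ "$quiet_streak" -lt 2 ]; do
  new_findings=$(run_iteration "$iteration")
  if [ "$new_findings" -eq 0 ]; then
    quiet_streak=$((quiet_streak + 1))   # one more quiet iteration
  else
    quiet_streak=0                       # new findings reset the streak
  fi
  iteration=$((iteration + 1))
done
echo "Converged after $iteration iterations"
```

With the stubbed counts above the loop runs four passes (findings 3, 1, 0, 0) before the two-quiet-iterations condition is met.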
### Agents (110 Specialists)

Each agent is a `.md` file with a persona, focus area, approach, and safety rules. Categories:
| Category | Agents | Examples |
|---|---|---|
| Languages | 25+ | python-pro, typescript-pro, golang-pro, rust-pro, java-pro, c-pro, cpp-pro |
| Review | 8 | code-reviewer, security-reviewer, go-reviewer, python-reviewer, database-reviewer |
| Architecture | 11 | backend-architect, cloud-architect, database-architect, microservices-architect |
| DevOps | 10 | deployment-engineer, kubernetes-architect, terraform-pro, sre-engineer, devops-troubleshooter |
| Frontend | 8 | react-pro, nextjs-pro, vue-pro, frontend-developer, ui-designer, ux-designer |
| Data | 6 | data-scientist, data-engineer, ml-engineer, database-optimizer, sql-pro, postgres-pro |
| Mobile | 5 | ios-pro, kotlin-pro, flutter-pro, swift-pro, mobile-developer |
| Security | 5 | penetration-tester, threat-modeling-pro, backend-security-coder, frontend-security-coder |
| Docs & Planning | 6 | technical-writer, documentation-pro, planner, product-manager, tutorial-engineer |
| Other | 25+ | debugger, build-error-resolver, refactor-cleaner, mcp-developer, prompt-engineer |
### Memory System

Persistent knowledge that survives across sessions:
```shell
# Save a discovery
.claude/tools/memory.sh add gotcha "psycopg2 needs libpq-dev on Ubuntu" --tags postgres,ubuntu

# Recall context before starting work
.claude/tools/memory.sh context "postgres connection"

# Track session progress
.claude/tools/memory.sh session add todo "Implement auth middleware" --status pending
```
Two tiers:

- Knowledge (`knowledge.md`) — permanent facts, patterns, gotchas
- Session (`session.md`) — current task progress, checkpoints, plans
### Web Search

Deep web search with 50+ results per query (vs. the typical 10-20):

```shell
.claude/tools/web_search.sh "React server components best practices" --tech
.claude/tools/web_search.sh "CRISPR delivery methods" --sci --med
```
Features: DuckDuckGo + Brave fallback, anti-bot bypass, smart content extraction, sentence-level BM25 compression, cross-page dedup, domain-specific bonus sources (arXiv, PubMed, Hacker News, Stack Overflow).
### Claude-GLM Wrapper

Redirects Claude Code to the Z.ai GLM API. Required for spawning agents.

```shell
# Install separately
cd claude-glm
./install.sh    # macOS/Linux
.\install.ps1   # Windows
```

See `claude-glm/docs/TROUBLESHOOTING.md` for common issues.
## Requirements
- Claude Code — Download
- Z.ai API key — Get one (required for agent spawning)
- Z.ai GLM Coding Plan — Subscribe
- uv — Auto-installed by tools if missing (handles Python dependencies)
## Models & Plans

### Default Setup: Opus + GLM-5.1
Out of the box, Opus-GLM uses Opus as the lead orchestrator and GLM-5.1 for all spawned agents. Opus plans and verifies while GLM-5.1 workers do the heavy lifting in parallel. Max 3 agents per stage — if more coverage is needed, add stages, not agents.
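The batching rule above can be sketched in plain shell. Here `run_agent` is a hypothetical stub standing in for a real spawn (in Opus-GLM this is handled by `spawn-glm.sh`); the pattern is background jobs plus `wait` after every batch of three.

```shell
#!/bin/sh
# Sketch of "max 3 parallel agents": launch jobs in the background and
# wait for each batch of 3 to finish before starting the next.
run_agent() {
  sleep 0.1                         # simulate agent work
  echo "agent $1 done" >> batch.log # each agent records its completion
}

rm -f batch.log
batch=0
for agent in code-reviewer security-reviewer python-pro debugger planner; do
  run_agent "$agent" &
  batch=$((batch + 1))
  if [ "$batch" -eq 3 ]; then
    wait        # block until the current batch of 3 finishes
    batch=0
  fi
done
wait            # flush the final partial batch
echo "all agents finished"
```

With five agent names, the first three run concurrently, then the remaining two, mirroring the "add stages, not agents" guidance.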
### GLM Coding Plans
The installer asks which Z.ai GLM Coding Plan you have. Pro and Max plans support GLM-5.1 agents:
| Plan | Lead Model | Agent Model | Max Parallel Agents |
|---|---|---|---|
| Max | Opus (native) | GLM-5.1 (Z.ai) | 3 |
| Pro | Opus (native) | GLM-5.1 (Z.ai) | 3 |
| Lite | Opus (native) | GLM-4.7 (Z.ai) | 1 |
The lead always runs as your native Claude Code instance (Opus). Only the spawned agents go through the Z.ai GLM API.
## Configuration

### Using with the Anthropic API (No Z.ai)

The orchestration instructions also work with the native Anthropic API. In `spawn-glm.sh`, change the `GLM_WRAPPER` variable (near the top) from `claude-glm` to `claude` to use your Anthropic subscription directly.
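As a hypothetical illustration (the exact surrounding contents of `spawn-glm.sh` may differ), the change is a one-line edit near the top of the script:

```shell
# Near the top of spawn-glm.sh — the default routes agents through Z.ai:
GLM_WRAPPER="claude-glm"

# To run agents on your Anthropic subscription instead, point the
# variable at the plain claude binary:
GLM_WRAPPER="claude"
```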
### Custom Agents

Add your own agent definitions to `.claude/agents/`:

```markdown
---
name: my-agent
description: What this agent does
tools: Read, Write, Edit, Bash, Grep, Glob
---

You are a specialist in [domain].

## Approach
[How to handle tasks]

## Common Pitfalls
[What to watch out for]
```
### Adjusting Quality Rules

Edit the files in `.claude/templates/` to change the boilerplate appended to agent prompts. For example, relax the severity guide for internal tools or tighten it for production codebases.
## Manual Installation

If you prefer not to use the installer:

- Copy the `.claude/` directory to your project
- Copy `CLAUDE.md` to your project root (or append to an existing one)
- Create a `tmp/` directory
- Install the claude-glm wrapper
- Add `tmp/` and `.claude/knowledge.md` to `.gitignore`
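The steps above can be sketched as a short shell session. The `opus-glm` and `my-project` paths are assumptions for illustration, and the fixture lines exist only so the sketch runs end-to-end:

```shell
#!/bin/sh
# Demo fixture — in real use the repo clone and project already exist:
mkdir -p opus-glm/.claude/tools my-project
echo "# Opus-GLM workflow" > opus-glm/CLAUDE.md

OPUS_GLM="opus-glm"    # clone of the opus-glm repo (assumed path)
PROJECT="my-project"   # your project root (assumed path)

cp -r "$OPUS_GLM/.claude" "$PROJECT/"               # copy the .claude/ directory
cat "$OPUS_GLM/CLAUDE.md" >> "$PROJECT/CLAUDE.md"   # copy or append CLAUDE.md
mkdir -p "$PROJECT/tmp"                             # create tmp/
# Install the claude-glm wrapper separately (see the Claude-GLM Wrapper section).
printf 'tmp/\n.claude/knowledge.md\n' >> "$PROJECT/.gitignore"  # ignore runtime files
```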
## License
MIT