Claude Agent Skill · by Affaan M

Skill Comply

Install Skill Comply skill for Claude Code from affaan-m/everything-claude-code.

Works with Paperclip

How Skill Comply fits into a Paperclip company.

Skill Comply drops into any Paperclip agent that handles this kind of work. Assign it to a specialist inside a pre-configured PaperclipOrg company and the skill becomes available on every heartbeat — no prompt engineering, no tool wiring.

SaaS Factory (Paired)

Pre-configured AI company — 18 agents, 18 skills, one-time purchase.

$27 (reduced from $59)
Source file
SKILL.md (58 lines)
---
name: skill-comply
description: Visualize whether skills, rules, and agent definitions are actually followed — auto-generates scenarios at 3 prompt strictness levels, runs agents, classifies behavioral sequences, and reports compliance rates with full tool call timelines
origin: ECC
tools: Read, Bash
---

# skill-comply: Automated Compliance Measurement

Measures whether coding agents actually follow skills, rules, or agent definitions by:

1. Auto-generating expected behavioral sequences (specs) from any .md file
2. Auto-generating scenarios with decreasing prompt strictness (supportive → neutral → competing)
3. Running `claude -p` and capturing tool call traces via stream-json
4. Classifying tool calls against spec steps using an LLM (not regex)
5. Checking temporal ordering deterministically
6. Generating self-contained reports with spec, prompts, and timelines

## Supported Targets

- **Skills** (`skills/*/SKILL.md`): Workflow skills like search-first, TDD guides
- **Rules** (`rules/common/*.md`): Mandatory rules like testing.md, security.md, git-workflow.md
- **Agent definitions** (`agents/*.md`): Whether an agent gets invoked when expected (internal workflow verification not yet supported)

## When to Activate

- User runs `/skill-comply <path>`
- User asks "is this rule actually being followed?"
- After adding new rules/skills, to verify agent compliance
- Periodically as part of quality maintenance

## Usage

```bash
# Full run
uv run python -m scripts.run ~/.claude/rules/common/testing.md

# Dry run (no cost, spec + scenarios only)
uv run python -m scripts.run --dry-run ~/.claude/skills/search-first/SKILL.md

# Custom models
uv run python -m scripts.run --gen-model haiku --model sonnet <path>
```

## Key Concept: Prompt Independence

Measures whether a skill/rule is followed even when the prompt doesn't explicitly support it.

## Report Contents

Reports are self-contained and include:

1. Expected behavioral sequence (auto-generated spec)
2. Scenario prompts (what was asked at each strictness level)
3. Compliance scores per scenario
4. Tool call timelines with LLM classification labels

### Advanced (optional)

For users familiar with hooks, reports also include hook promotion recommendations for steps with low compliance. This is informational — the main value is the compliance visibility itself.
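To make steps 3 and 5 of the workflow concrete, here is a minimal Python sketch of extracting tool calls from captured stream-json output and checking temporal ordering deterministically. The event shape (assistant messages carrying `content` blocks with `"type": "tool_use"`) is an assumption about the stream-json format, not the skill's actual implementation in `scripts.run`:

```python
import json

def extract_tool_calls(stream_json_lines):
    """Pull tool names, in trace order, from captured stream-json output.

    Assumes one JSON event per line, where assistant messages carry
    content blocks of type "tool_use" (hypothetical event shape).
    """
    calls = []
    for line in stream_json_lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        for block in event.get("message", {}).get("content", []):
            if isinstance(block, dict) and block.get("type") == "tool_use":
                calls.append(block["name"])
    return calls

def check_ordering(spec_steps, tool_calls):
    """Deterministic temporal-ordering check: every spec step must
    appear in the trace, in the same relative order (subsequence match)."""
    it = iter(tool_calls)  # `in` on an iterator consumes it, enforcing order
    return all(step in it for step in spec_steps)
```

In this sketch the LLM handles only the fuzzy part (labeling which spec step each tool call satisfies), while the ordering check stays deterministic, matching the split the spec describes.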