How Minimal Run And Audit fits into a Paperclip company.
Minimal Run And Audit drops into any Paperclip agent that handles this kind of work. Assign it to a specialist inside a pre-configured PaperclipOrg company and the skill becomes available on every heartbeat — no prompt engineering, no tool wiring.
Pre-configured AI company — 18 agents, 18 skills, one-time purchase.
SKILL.md (47 lines)
---
name: minimal-run-and-audit
description: Trusted-lane execution and reporting skill for README-first AI repo reproduction. Use when the task is specifically to capture or normalize evidence from the selected smoke test or documented inference or evaluation command and write standardized `repro_outputs/` files, including patch notes when repository files changed. Do not use for training execution, initial repo intake, generic environment setup, paper lookup, target selection, or end-to-end orchestration by itself.
---

# minimal-run-and-audit

## When to apply

- After a reproduction target and setup plan exist.
- When the main skill needs execution evidence and normalized outputs.
- When a smoke test, documented inference run, documented evaluation run, or other short non-training verification is appropriate.
- When the user already knows what command should be attempted and wants execution plus reporting only.

## When not to apply

- During initial repo scanning.
- When environment or assets are still undefined enough to make execution meaningless.
- When the task is a literature lookup rather than repository execution.
- When the user is still deciding which reproduction target should count as the main run.

## Clear boundaries

- This skill owns normalized reporting for an attempted command.
- It may receive execution evidence from the main skill or a thin helper.
- It does not choose the overall target on its own.
- It does not perform broad paper analysis.
- It does not own training startup, resume, or long-running training state.
- It should not normalize risky code edits into acceptable practice.
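The execution-plus-reporting contract described above can be sketched in Python. This is a minimal illustration only, not the actual `scripts/run_command.py` or `scripts/write_outputs.py` shipped with the skill; the function name, the file layout under `repro_outputs/`, and the exact mapping to verified/partial/blocked states are assumptions based on the skill description:

```python
import json
import subprocess
from pathlib import Path


def run_and_report(command: list[str], out_dir: str = "repro_outputs",
                   timeout_s: int = 600) -> str:
    """Run one documented command and write normalized evidence files.

    Returns one of three states (an assumed mapping):
      "verified" -- command ran and exited 0
      "partial"  -- command ran but exited non-zero
      "blocked"  -- command could not run at all (missing binary, timeout)
    """
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    try:
        proc = subprocess.run(command, capture_output=True, text=True,
                              timeout=timeout_s)
        state = "verified" if proc.returncode == 0 else "partial"
        stdout, stderr, code = proc.stdout, proc.stderr, proc.returncode
    except (subprocess.TimeoutExpired, FileNotFoundError) as exc:
        state, stdout, stderr, code = "blocked", "", str(exc), None

    # Normalize evidence into standardized repro_outputs/ files.
    (out / "stdout.log").write_text(stdout)
    (out / "stderr.log").write_text(stderr)
    (out / "result.json").write_text(json.dumps({
        "command": command,
        "exit_code": code,
        "state": state,
    }, indent=2))
    return state
```

For example, `run_and_report([sys.executable, "-c", "print('ok')"])` would write the three evidence files and return `"verified"`, while a nonexistent binary would be reported as `"blocked"` rather than raising.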
## Input expectations

- selected reproduction goal
- runnable commands or smoke commands
- environment and asset assumptions
- optional patch metadata

## Output expectations

- execution result summary
- standardized `repro_outputs/` files
- clear distinction between verified, partial, and blocked states
- `PATCHES.md` when repo files changed

## Notes

Use `references/reporting-policy.md`, `scripts/run_command.py`, and `scripts/write_outputs.py`.

Env And Assets Bootstrap
When you're trying to reproduce an AI research repo and need to set up the environment before running anything, this handles the tedious bootstrap work.
Paper Context Resolver
When you're reproducing an AI paper from a GitHub repo and hit a specific gap the README can't fill, this resolves narrow technical details from the original paper.
Repo Intake And Plan
Takes a fresh repo and does the boring first pass: reads the README, scans for setup scripts and documented commands, then categorizes what looks like inference…