Install

```bash
npx skills add https://github.com/affaan-m/everything-claude-code --skill tdd-workflow
```

Works with Paperclip
How TDD Workflow fits into a Paperclip company.

TDD Workflow drops into any Paperclip agent that handles this kind of work. Assign it to a specialist inside a pre-configured PaperclipOrg company and the skill becomes available on every heartbeat — no prompt engineering, no tool wiring.
SaaS Factory (paired pack): pre-configured AI company — 18 agents, 18 skills, one-time purchase. $27 (regular price $59).
Source file: SKILL.md (463 lines)
---
name: tdd-workflow
description: Use this skill when writing new features, fixing bugs, or refactoring code. Enforces test-driven development with 80%+ coverage including unit, integration, and E2E tests.
origin: ECC
---

# Test-Driven Development Workflow

This skill ensures all code development follows TDD principles with comprehensive test coverage.

## When to Activate

- Writing new features or functionality
- Fixing bugs or issues
- Refactoring existing code
- Adding API endpoints
- Creating new components

## Core Principles

### 1. Tests BEFORE Code

ALWAYS write tests first, then implement code to make tests pass.

### 2. Coverage Requirements

- Minimum 80% coverage (unit + integration + E2E)
- All edge cases covered
- Error scenarios tested
- Boundary conditions verified

### 3. Test Types

#### Unit Tests

- Individual functions and utilities
- Component logic
- Pure functions
- Helpers and utilities

#### Integration Tests

- API endpoints
- Database operations
- Service interactions
- External API calls

#### E2E Tests (Playwright)

- Critical user flows
- Complete workflows
- Browser automation
- UI interactions

### 4. Git Checkpoints

- If the repository is under Git, create a checkpoint commit after each TDD stage
- Do not squash or rewrite these checkpoint commits until the workflow is complete
- Each checkpoint commit message must describe the stage and the exact evidence captured
- Count only commits created on the current active branch for the current task
- Do not treat commits from other branches, earlier unrelated work, or distant branch history as valid checkpoint evidence
- Before treating a checkpoint as satisfied, verify that the commit is reachable from the current `HEAD` on the active branch and belongs to the current task sequence
- The preferred compact workflow is:
  - one commit for failing test added and RED validated
  - one commit for minimal fix applied and GREEN validated
  - one optional commit for refactor complete
- Separate evidence-only commits are not required if the test commit clearly corresponds to RED and the fix commit clearly corresponds to GREEN

## TDD Workflow Steps

### Step 1: Write User Journeys

```
As a [role], I want to [action], so that [benefit]

Example:
As a user, I want to search for markets semantically,
so that I can find relevant markets even without exact keywords.
```

### Step 2: Generate Test Cases

For each user journey, create comprehensive test cases:

```typescript
describe('Semantic Search', () => {
  it('returns relevant markets for query', async () => {
    // Test implementation
  })

  it('handles empty query gracefully', async () => {
    // Test edge case
  })

  it('falls back to substring search when Redis unavailable', async () => {
    // Test fallback behavior
  })

  it('sorts results by similarity score', async () => {
    // Test sorting logic
  })
})
```

### Step 3: Run Tests (They Should Fail)

```bash
npm test
# Tests should fail - we haven't implemented yet
```

This step is mandatory and is the RED gate for all production changes.
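To make the RED gate concrete, here is a minimal sketch of a deliberate stub (the module name `searchMarkets` and its signature are hypothetical, borrowed from the Step 2 examples). It guarantees the new tests fail for the intended reason, a missing implementation, rather than because of broken test setup:

```typescript
// searchMarkets.ts (hypothetical module from the Step 2 examples)
// A deliberate stub: every Step 2 test should fail here with a clear
// "not implemented" error, proving the RED state comes from the
// missing implementation, not from unrelated setup problems.
export interface Market {
  slug: string
  similarityScore: number
}

export async function searchMarkets(query: string): Promise<Market[]> {
  throw new Error(`searchMarkets not implemented (query: ${query})`)
}
```

Running `npm test` against a stub like this compiles and executes the new tests, producing the RED evidence to capture in the `test:` checkpoint commit.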
Before modifying business logic or other production code, you must verify a valid RED state via one of these paths:

- Runtime RED:
  - The relevant test target compiles successfully
  - The new or changed test is actually executed
  - The result is RED
- Compile-time RED:
  - The new test newly instantiates, references, or exercises the buggy code path
  - The compile failure is itself the intended RED signal
- In either case, the failure is caused by the intended business-logic bug, undefined behavior, or missing implementation
- The failure is not caused only by unrelated syntax errors, broken test setup, missing dependencies, or unrelated regressions

A test that was only written but not compiled and executed does not count as RED. Do not edit production code until this RED state is confirmed.

If the repository is under Git, create a checkpoint commit immediately after this stage is validated. Recommended commit message format:

- `test: add reproducer for <feature or bug>`
- This commit may also serve as the RED validation checkpoint if the reproducer was compiled and executed and failed for the intended reason
- Verify that this checkpoint commit is on the current active branch before continuing

### Step 4: Implement Code

Write minimal code to make tests pass:

```typescript
// Implementation guided by tests
export async function searchMarkets(query: string) {
  // Implementation here
}
```

If the repository is under Git, stage the minimal fix now but defer the checkpoint commit until GREEN is validated in Step 5.

### Step 5: Run Tests Again

```bash
npm test
# Tests should now pass
```

Rerun the same relevant test target after the fix and confirm the previously failing test is now GREEN. Only after a valid GREEN result may you proceed to refactor.
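As an illustration of a GREEN-stage result, here is a minimal sketch of an implementation the Step 2 test cases could drive. All names are hypothetical, and the vector backend is injected as a parameter so the Redis-unavailable fallback can be exercised without a live service:

```typescript
// Minimal implementation driven by the Step 2 test cases:
// semantic search with a substring fallback and score-sorted results.
// All names here are hypothetical illustrations, not a real API.
export interface ScoredMarket {
  slug: string
  name: string
  similarityScore: number
}

type VectorSearch = (query: string) => Promise<ScoredMarket[]>

export async function searchMarkets(
  query: string,
  allMarkets: ScoredMarket[],
  vectorSearch?: VectorSearch,
): Promise<ScoredMarket[]> {
  // 'handles empty query gracefully': no results for blank input.
  if (!query.trim()) return []

  if (vectorSearch) {
    try {
      // Happy path: semantic search via the injected vector backend.
      const hits = await vectorSearch(query)
      // 'sorts results by similarity score': highest score first.
      return [...hits].sort((a, b) => b.similarityScore - a.similarityScore)
    } catch {
      // 'falls back to substring search when Redis unavailable':
      // swallow the backend error and fall through to the fallback.
    }
  }

  const q = query.toLowerCase()
  return allMarkets.filter(m => m.name.toLowerCase().includes(q))
}
```

Rerunning the same test target against code like this should flip every Step 2 case from RED to GREEN, which is the evidence captured in the `fix:` checkpoint commit.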
If the repository is under Git, create a checkpoint commit immediately after GREEN is validated. Recommended commit message format:

- `fix: <feature or bug>`
- The fix commit may also serve as the GREEN validation checkpoint if the same relevant test target was rerun and passed
- Verify that this checkpoint commit is on the current active branch before continuing

### Step 6: Refactor

Improve code quality while keeping tests green:

- Remove duplication
- Improve naming
- Optimize performance
- Enhance readability

If the repository is under Git, create a checkpoint commit immediately after refactoring is complete and tests remain green. Recommended commit message format:

- `refactor: clean up after <feature or bug> implementation`
- Verify that this checkpoint commit is on the current active branch before considering the TDD cycle complete

### Step 7: Verify Coverage

```bash
npm run test:coverage
# Verify 80%+ coverage achieved
```

## Testing Patterns

### Unit Test Pattern (Jest/Vitest)

```typescript
import { render, screen, fireEvent } from '@testing-library/react'
import { Button } from './Button'

describe('Button Component', () => {
  it('renders with correct text', () => {
    render(<Button>Click me</Button>)
    expect(screen.getByText('Click me')).toBeInTheDocument()
  })

  it('calls onClick when clicked', () => {
    const handleClick = jest.fn()
    render(<Button onClick={handleClick}>Click</Button>)
    fireEvent.click(screen.getByRole('button'))
    expect(handleClick).toHaveBeenCalledTimes(1)
  })

  it('is disabled when disabled prop is true', () => {
    render(<Button disabled>Click</Button>)
    expect(screen.getByRole('button')).toBeDisabled()
  })
})
```

### API Integration Test Pattern

```typescript
import { NextRequest } from 'next/server'
import { GET } from './route'

describe('GET /api/markets', () => {
  it('returns markets successfully', async () => {
    const request = new NextRequest('http://localhost/api/markets')
    const response = await GET(request)
    const data = await response.json()

    expect(response.status).toBe(200)
    expect(data.success).toBe(true)
    expect(Array.isArray(data.data)).toBe(true)
  })

  it('validates query parameters', async () => {
    const request = new NextRequest('http://localhost/api/markets?limit=invalid')
    const response = await GET(request)
    expect(response.status).toBe(400)
  })

  it('handles database errors gracefully', async () => {
    // Mock database failure
    const request = new NextRequest('http://localhost/api/markets')
    // Test error handling
  })
})
```

### E2E Test Pattern (Playwright)

```typescript
import { test, expect } from '@playwright/test'

test('user can search and filter markets', async ({ page }) => {
  // Navigate to markets page
  await page.goto('/')
  await page.click('a[href="/markets"]')

  // Verify page loaded
  await expect(page.locator('h1')).toContainText('Markets')

  // Search for markets
  await page.fill('input[placeholder="Search markets"]', 'election')

  // Wait for debounce and results
  await page.waitForTimeout(600)

  // Verify search results displayed
  const results = page.locator('[data-testid="market-card"]')
  await expect(results).toHaveCount(5, { timeout: 5000 })

  // Verify results contain search term
  const firstResult = results.first()
  await expect(firstResult).toContainText('election', { ignoreCase: true })

  // Filter by status
  await page.click('button:has-text("Active")')

  // Verify filtered results
  await expect(results).toHaveCount(3)
})

test('user can create a new market', async ({ page }) => {
  // Login first
  await page.goto('/creator-dashboard')

  // Fill market creation form
  await page.fill('input[name="name"]', 'Test Market')
  await page.fill('textarea[name="description"]', 'Test description')
  await page.fill('input[name="endDate"]', '2025-12-31')

  // Submit form
  await page.click('button[type="submit"]')

  // Verify success message
  await expect(page.locator('text=Market created successfully')).toBeVisible()

  // Verify redirect to market page
  await expect(page).toHaveURL(/\/markets\/test-market/)
})
```

## Test File Organization
```
src/
├── components/
│   ├── Button/
│   │   ├── Button.tsx
│   │   ├── Button.test.tsx      # Unit tests
│   │   └── Button.stories.tsx   # Storybook
│   └── MarketCard/
│       ├── MarketCard.tsx
│       └── MarketCard.test.tsx
├── app/
│   └── api/
│       └── markets/
│           ├── route.ts
│           └── route.test.ts    # Integration tests
└── e2e/
    ├── markets.spec.ts          # E2E tests
    ├── trading.spec.ts
    └── auth.spec.ts
```

## Mocking External Services

### Supabase Mock

```typescript
jest.mock('@/lib/supabase', () => ({
  supabase: {
    from: jest.fn(() => ({
      select: jest.fn(() => ({
        eq: jest.fn(() => Promise.resolve({
          data: [{ id: 1, name: 'Test Market' }],
          error: null
        }))
      }))
    }))
  }
}))
```

### Redis Mock

```typescript
jest.mock('@/lib/redis', () => ({
  searchMarketsByVector: jest.fn(() => Promise.resolve([
    { slug: 'test-market', similarity_score: 0.95 }
  ])),
  checkRedisHealth: jest.fn(() => Promise.resolve({ connected: true }))
}))
```

### OpenAI Mock

```typescript
jest.mock('@/lib/openai', () => ({
  generateEmbedding: jest.fn(() => Promise.resolve(
    new Array(1536).fill(0.1) // Mock 1536-dim embedding
  ))
}))
```

## Test Coverage Verification

### Run Coverage Report

```bash
npm run test:coverage
```

### Coverage Thresholds

```json
{
  "jest": {
    "coverageThreshold": {
      "global": {
        "branches": 80,
        "functions": 80,
        "lines": 80,
        "statements": 80
      }
    }
  }
}
```

## Common Testing Mistakes to Avoid

### WRONG: Testing Implementation Details

```typescript
// Don't test internal state
expect(component.state.count).toBe(5)
```

### CORRECT: Test User-Visible Behavior

```typescript
// Test what users see
expect(screen.getByText('Count: 5')).toBeInTheDocument()
```

### WRONG: Brittle Selectors

```typescript
// Breaks easily
await page.click('.css-class-xyz')
```

### CORRECT: Semantic Selectors

```typescript
// Resilient to changes
await page.click('button:has-text("Submit")')
await page.click('[data-testid="submit-button"]')
```

### WRONG: No Test Isolation

```typescript
// Tests depend on each other
test('creates user', () => { /* ... */ })
test('updates same user', () => { /* depends on previous test */ })
```

### CORRECT: Independent Tests

```typescript
// Each test sets up its own data
test('creates user', () => {
  const user = createTestUser()
  // Test logic
})

test('updates user', () => {
  const user = createTestUser()
  // Update logic
})
```

## Continuous Testing

### Watch Mode During Development

```bash
npm test -- --watch
# Tests run automatically on file changes
```

### Pre-Commit Hook

```bash
# Runs before every commit
npm test && npm run lint
```

### CI/CD Integration

```yaml
# GitHub Actions
- name: Run Tests
  run: npm test -- --coverage
- name: Upload Coverage
  uses: codecov/codecov-action@v3
```

## Best Practices

1. **Write Tests First** - Always TDD
2. **One Assert Per Test** - Focus on single behavior
3. **Descriptive Test Names** - Explain what's tested
4. **Arrange-Act-Assert** - Clear test structure
5. **Mock External Dependencies** - Isolate unit tests
6. **Test Edge Cases** - Null, undefined, empty, large
7. **Test Error Paths** - Not just happy paths
8. **Keep Tests Fast** - Unit tests < 50ms each
9. **Clean Up After Tests** - No side effects
10. **Review Coverage Reports** - Identify gaps

## Success Metrics

- 80%+ code coverage achieved
- All tests passing (green)
- No skipped or disabled tests
- Fast test execution (< 30s for unit tests)
- E2E tests cover critical user flows
- Tests catch bugs before production

---

**Remember**: Tests are not optional. They are the safety net that enables confident refactoring, rapid development, and production reliability.

## Related skills
- Agent Eval: install this skill for Claude Code from affaan-m/everything-claude-code
- Agent Harness Construction: install this skill for Claude Code from affaan-m/everything-claude-code
- Agent Payment X402: install this skill for Claude Code from affaan-m/everything-claude-code