Claude Agent Skill · by Affaan M

Python Testing

The python-testing skill teaches developers how to implement comprehensive testing strategies for Python applications using pytest, test-driven development (TDD), fixtures, mocking, parametrization, and coverage requirements.

Install
Terminal · npx
$ npx skills add https://github.com/affaan-m/everything-claude-code --skill python-testing
Works with Paperclip

How Python Testing fits into a Paperclip company.

Python Testing drops into any Paperclip agent that handles this kind of work. Assign it to a specialist inside a pre-configured PaperclipOrg company and the skill becomes available on every heartbeat — no prompt engineering, no tool wiring.

SaaS Factory · Paired

Pre-configured AI company — 18 agents, 18 skills, one-time purchase.

$27 (was $59)
Explore pack
Source file
SKILL.md · 816 lines
---
name: python-testing
description: Python testing strategies using pytest, TDD methodology, fixtures, mocking, parametrization, and coverage requirements.
origin: ECC
---

# Python Testing Patterns

Comprehensive testing strategies for Python applications using pytest, TDD methodology, and best practices.

## When to Activate

- Writing new Python code (follow TDD: red, green, refactor)
- Designing test suites for Python projects
- Reviewing Python test coverage
- Setting up testing infrastructure

## Core Testing Philosophy

### Test-Driven Development (TDD)

Always follow the TDD cycle:

1. **RED**: Write a failing test for the desired behavior
2. **GREEN**: Write minimal code to make the test pass
3. **REFACTOR**: Improve code while keeping tests green

```python
# Step 1: Write failing test (RED)
def test_add_numbers():
    result = add(2, 3)
    assert result == 5

# Step 2: Write minimal implementation (GREEN)
def add(a, b):
    return a + b

# Step 3: Refactor if needed (REFACTOR)
```

### Coverage Requirements

- **Target**: 80%+ code coverage
- **Critical paths**: 100% coverage required
- Use `pytest --cov` to measure coverage

```bash
pytest --cov=mypackage --cov-report=term-missing --cov-report=html
```

## pytest Fundamentals

### Basic Test Structure

```python
import pytest

def test_addition():
    """Test basic addition."""
    assert 2 + 2 == 4

def test_string_uppercase():
    """Test string uppercasing."""
    text = "hello"
    assert text.upper() == "HELLO"

def test_list_append():
    """Test list append."""
    items = [1, 2, 3]
    items.append(4)
    assert 4 in items
    assert len(items) == 4
```

### Assertions

```python
# Equality
assert result == expected

# Inequality
assert result != unexpected

# Truthiness
assert result  # Truthy
assert not result  # Falsy
assert result is True  # Exactly True
assert result is False  # Exactly False
assert result is None  # Exactly None

# Membership
assert item in collection
assert item not in collection

# Comparisons
assert result > 0
assert 0 <= result <= 100

# Type checking
assert isinstance(result, str)

# Exception testing (preferred approach)
with pytest.raises(ValueError):
    raise ValueError("error message")

# Check exception message
with pytest.raises(ValueError, match="invalid input"):
    raise ValueError("invalid input provided")

# Check exception attributes
with pytest.raises(ValueError) as exc_info:
    raise ValueError("error message")
assert str(exc_info.value) == "error message"
```

## Fixtures

### Basic Fixture Usage

```python
import pytest

@pytest.fixture
def sample_data():
    """Fixture providing sample data."""
    return {"name": "Alice", "age": 30}

def test_sample_data(sample_data):
    """Test using the fixture."""
    assert sample_data["name"] == "Alice"
    assert sample_data["age"] == 30
```

### Fixture with Setup/Teardown

```python
@pytest.fixture
def database():
    """Fixture with setup and teardown."""
    # Setup
    db = Database(":memory:")
    db.create_tables()
    db.insert_test_data()

    yield db  # Provide to test

    # Teardown
    db.close()

def test_database_query(database):
    """Test database operations."""
    result = database.query("SELECT * FROM users")
    assert len(result) > 0
```

### Fixture Scopes

```python
import os

# Function scope (default) - runs for each test
@pytest.fixture
def temp_file():
    with open("temp.txt", "w") as f:
        yield f
    os.remove("temp.txt")

# Module scope - runs once per module
@pytest.fixture(scope="module")
def module_db():
    db = Database(":memory:")
    db.create_tables()
    yield db
    db.close()

# Session scope - runs once per test session
@pytest.fixture(scope="session")
def shared_resource():
    resource = ExpensiveResource()
    yield resource
    resource.cleanup()
```

### Fixture with Parameters

```python
@pytest.fixture(params=[1, 2, 3])
def number(request):
    """Parameterized fixture."""
    return request.param

def test_numbers(number):
    """Test runs 3 times, once for each parameter."""
    assert number > 0
```

### Using Multiple Fixtures

```python
@pytest.fixture
def user():
    return User(id=1, name="Alice")

@pytest.fixture
def admin():
    return User(id=2, name="Admin", role="admin")

def test_user_admin_interaction(user, admin):
    """Test using multiple fixtures."""
    assert admin.can_manage(user)
```

### Autouse Fixtures

```python
@pytest.fixture(autouse=True)
def reset_config():
    """Automatically runs before every test."""
    Config.reset()
    yield
    Config.cleanup()

def test_without_fixture_call():
    # reset_config runs automatically
    assert Config.get_setting("debug") is False
```

### Conftest.py for Shared Fixtures

```python
# tests/conftest.py
import pytest

@pytest.fixture
def client():
    """Shared fixture for all tests."""
    app = create_app(testing=True)
    with app.test_client() as client:
        yield client

@pytest.fixture
def auth_headers(client):
    """Generate auth headers for API testing."""
    response = client.post("/api/login", json={
        "username": "test",
        "password": "test"
    })
    token = response.json["token"]
    return {"Authorization": f"Bearer {token}"}
```

## Parametrization

### Basic Parametrization

```python
@pytest.mark.parametrize("input,expected", [
    ("hello", "HELLO"),
    ("world", "WORLD"),
    ("PyThOn", "PYTHON"),
])
def test_uppercase(input, expected):
    """Test runs 3 times with different inputs."""
    assert input.upper() == expected
```

### Multiple Parameters

```python
@pytest.mark.parametrize("a,b,expected", [
    (2, 3, 5),
    (0, 0, 0),
    (-1, 1, 0),
    (100, 200, 300),
])
def test_add(a, b, expected):
    """Test addition with multiple inputs."""
    assert add(a, b) == expected
```

### Parametrize with IDs

```python
@pytest.mark.parametrize("input,expected", [
    ("valid@email.com", True),
    ("invalid", False),
    ("@no-domain.com", False),
], ids=["valid-email", "missing-at", "missing-domain"])
def test_email_validation(input, expected):
    """Test email validation with readable test IDs."""
    assert is_valid_email(input) is expected
```

### Parametrized Fixtures
```python
@pytest.fixture(params=["sqlite", "postgresql", "mysql"])
def db(request):
    """Test against multiple database backends."""
    if request.param == "sqlite":
        return Database(":memory:")
    elif request.param == "postgresql":
        return Database("postgresql://localhost/test")
    elif request.param == "mysql":
        return Database("mysql://localhost/test")

def test_database_operations(db):
    """Test runs 3 times, once for each database."""
    result = db.query("SELECT 1")
    assert result is not None
```

## Markers and Test Selection

### Custom Markers

```python
# Mark slow tests
@pytest.mark.slow
def test_slow_operation():
    time.sleep(5)

# Mark integration tests
@pytest.mark.integration
def test_api_integration():
    response = requests.get("https://api.example.com")
    assert response.status_code == 200

# Mark unit tests
@pytest.mark.unit
def test_unit_logic():
    assert calculate(2, 3) == 5
```

### Run Specific Tests

```bash
# Run only fast tests
pytest -m "not slow"

# Run only integration tests
pytest -m integration

# Run integration or slow tests
pytest -m "integration or slow"

# Run tests marked as unit but not slow
pytest -m "unit and not slow"
```

### Configure Markers in pytest.ini

```ini
[pytest]
markers =
    slow: marks tests as slow
    integration: marks tests as integration tests
    unit: marks tests as unit tests
    django: marks tests as requiring Django
```

## Mocking and Patching

### Mocking Functions

```python
from unittest.mock import patch, Mock

@patch("mypackage.external_api_call")
def test_with_mock(api_call_mock):
    """Test with mocked external API."""
    api_call_mock.return_value = {"status": "success"}

    result = my_function()

    api_call_mock.assert_called_once()
    assert result["status"] == "success"
```

### Mocking Return Values

```python
@patch("mypackage.Database.connect")
def test_database_connection(connect_mock):
    """Test with mocked database connection."""
    connect_mock.return_value = MockConnection()

    db = Database()
    db.connect("localhost")

    connect_mock.assert_called_once_with("localhost")
```

### Mocking Exceptions

```python
@patch("mypackage.api_call")
def test_api_error_handling(api_call_mock):
    """Test error handling with mocked exception."""
    api_call_mock.side_effect = ConnectionError("Network error")

    with pytest.raises(ConnectionError):
        mypackage.api_call()

    api_call_mock.assert_called_once()
```

### Mocking Context Managers

```python
from unittest.mock import mock_open, patch

@patch("builtins.open", new_callable=mock_open)
def test_file_reading(mock_file):
    """Test file reading with mocked open."""
    mock_file.return_value.read.return_value = "file content"

    result = read_file("test.txt")

    mock_file.assert_called_once_with("test.txt", "r")
    assert result == "file content"
```

### Using Autospec

```python
@patch("mypackage.DBConnection", autospec=True)
def test_autospec(db_mock):
    """Test with autospec to catch API misuse."""
    db = db_mock.return_value
    db.query("SELECT * FROM users")

    # With autospec, the call above would raise if DBConnection had no query method
    db.query.assert_called_once_with("SELECT * FROM users")
```

### Mock Class Instances

```python
class TestUserService:
    @patch("mypackage.UserRepository")
    def test_create_user(self, repo_mock):
        """Test user creation with mocked repository."""
        repo_mock.return_value.save.return_value = User(id=1, name="Alice")

        service = UserService(repo_mock.return_value)
        user = service.create_user(name="Alice")

        assert user.name == "Alice"
        repo_mock.return_value.save.assert_called_once()
```

### Mock Property

```python
from unittest.mock import Mock, PropertyMock

@pytest.fixture
def mock_config():
    """Create a mock with a property."""
    config = Mock()
    type(config).debug = PropertyMock(return_value=True)
    type(config).api_key = PropertyMock(return_value="test-key")
    return config

def test_with_mock_config(mock_config):
    """Test with mocked config properties."""
    assert mock_config.debug is True
    assert mock_config.api_key == "test-key"
```

## Testing Async Code

### Async Tests with pytest-asyncio

```python
import pytest

@pytest.mark.asyncio
async def test_async_function():
    """Test async function."""
    result = await async_add(2, 3)
    assert result == 5

@pytest.mark.asyncio
async def test_async_with_fixture(async_client):
    """Test async with async fixture."""
    response = await async_client.get("/api/users")
    assert response.status_code == 200
```

### Async Fixture

```python
@pytest.fixture
async def async_client():
    """Async fixture providing async test client."""
    app = create_app()
    async with app.test_client() as client:
        yield client

@pytest.mark.asyncio
async def test_api_endpoint(async_client):
    """Test using async fixture."""
    response = await async_client.get("/api/data")
    assert response.status_code == 200
```

### Mocking Async Functions

```python
@pytest.mark.asyncio
@patch("mypackage.async_api_call")
async def test_async_mock(api_call_mock):
    """Test async function with mock."""
    api_call_mock.return_value = {"status": "ok"}

    result = await my_async_function()

    api_call_mock.assert_awaited_once()
    assert result["status"] == "ok"
```

## Testing Exceptions

### Testing Expected Exceptions

```python
def test_divide_by_zero():
    """Test that dividing by zero raises ZeroDivisionError."""
    with pytest.raises(ZeroDivisionError):
        divide(10, 0)

def test_custom_exception():
    """Test custom exception with message."""
    with pytest.raises(ValueError, match="invalid input"):
        validate_input("invalid")
```

### Testing Exception Attributes

```python
def test_exception_with_details():
    """Test exception with custom attributes."""
    with pytest.raises(CustomError) as exc_info:
        raise CustomError("error", code=400)

    assert exc_info.value.code == 400
    assert "error" in str(exc_info.value)
```

## Testing Side Effects

### Testing File Operations

```python
import tempfile
import os

def test_file_processing():
    """Test file processing with temp file."""
    with tempfile.NamedTemporaryFile(mode='w', delete=False, suffix='.txt') as f:
        f.write("test content")
        temp_path = f.name

    try:
        result = process_file(temp_path)
        assert result == "processed: test content"
    finally:
        os.unlink(temp_path)
```

### Testing with pytest's tmp_path Fixture

```python
def test_with_tmp_path(tmp_path):
    """Test using pytest's built-in temp path fixture."""
    test_file = tmp_path / "test.txt"
    test_file.write_text("hello world")

    result = process_file(str(test_file))
    assert result == "hello world"
    # tmp_path is automatically cleaned up
```

### Testing with tmpdir Fixture

```python
def test_with_tmpdir(tmpdir):
    """Test using pytest's tmpdir fixture."""
    test_file = tmpdir.join("test.txt")
    test_file.write("data")

    result = process_file(str(test_file))
    assert result == "data"
```

## Test Organization

### Directory Structure

```
tests/
├── conftest.py                 # Shared fixtures
├── __init__.py
├── unit/                       # Unit tests
│   ├── __init__.py
│   ├── test_models.py
│   ├── test_utils.py
│   └── test_services.py
├── integration/                # Integration tests
│   ├── __init__.py
│   ├── test_api.py
│   └── test_database.py
└── e2e/                        # End-to-end tests
    ├── __init__.py
    └── test_user_flow.py
```

### Test Classes

```python
class TestUserService:
    """Group related tests in a class."""

    @pytest.fixture(autouse=True)
    def setup(self):
        """Setup runs before each test in this class."""
        self.service = UserService()

    def test_create_user(self):
        """Test user creation."""
        user = self.service.create_user("Alice")
        assert user.name == "Alice"

    def test_delete_user(self):
        """Test user deletion."""
        user = User(id=1, name="Bob")
        self.service.delete_user(user)
        assert not self.service.user_exists(1)
```

## Best Practices

### DO

- **Follow TDD**: Write tests before code (red-green-refactor)
- **Test one thing**: Each test should verify a single behavior
- **Use descriptive names**: `test_user_login_with_invalid_credentials_fails`
- **Use fixtures**: Eliminate duplication with fixtures
- **Mock external dependencies**: Don't depend on external services
- **Test edge cases**: Empty inputs, None values, boundary conditions
- **Aim for 80%+ coverage**: Focus on critical paths
- **Keep tests fast**: Use marks to separate slow tests

### DON'T

- **Don't test implementation**: Test behavior, not internals
- **Don't use complex conditionals in tests**: Keep tests simple
- **Don't ignore test failures**: All tests must pass
- **Don't test third-party code**: Trust libraries to work
- **Don't share state between tests**: Tests should be independent
- **Don't catch exceptions in tests**: Use `pytest.raises`
- **Don't use print statements**: Use assertions and pytest output
- **Don't write tests that are too brittle**: Avoid over-specific mocks

## Common Patterns

### Testing API Endpoints (FastAPI/Flask)

```python
@pytest.fixture
def client():
    app = create_app(testing=True)
    return app.test_client()

def test_get_user(client):
    response = client.get("/api/users/1")
    assert response.status_code == 200
    assert response.json["id"] == 1

def test_create_user(client):
    response = client.post("/api/users", json={
        "name": "Alice",
        "email": "alice@example.com"
    })
    assert response.status_code == 201
    assert response.json["name"] == "Alice"
```

### Testing Database Operations

```python
@pytest.fixture
def db_session():
    """Create a test database session."""
    session = Session(bind=engine)
    session.begin_nested()
    yield session
    session.rollback()
    session.close()

def test_create_user(db_session):
    user = User(name="Alice", email="alice@example.com")
    db_session.add(user)
    db_session.commit()

    retrieved = db_session.query(User).filter_by(name="Alice").first()
    assert retrieved.email == "alice@example.com"
```

### Testing Class Methods

```python
class TestCalculator:
    @pytest.fixture
    def calculator(self):
        return Calculator()

    def test_add(self, calculator):
        assert calculator.add(2, 3) == 5

    def test_divide_by_zero(self, calculator):
        with pytest.raises(ZeroDivisionError):
            calculator.divide(10, 0)
```

## pytest Configuration

### pytest.ini

```ini
[pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
addopts =
    --strict-markers
    --disable-warnings
    --cov=mypackage
    --cov-report=term-missing
    --cov-report=html
markers =
    slow: marks tests as slow
    integration: marks tests as integration tests
    unit: marks tests as unit tests
```

### pyproject.toml

```toml
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
python_classes = ["Test*"]
python_functions = ["test_*"]
addopts = [
    "--strict-markers",
    "--cov=mypackage",
    "--cov-report=term-missing",
    "--cov-report=html",
]
markers = [
    "slow: marks tests as slow",
    "integration: marks tests as integration tests",
    "unit: marks tests as unit tests",
]
```

## Running Tests

```bash
# Run all tests
pytest

# Run specific file
pytest tests/test_utils.py

# Run specific test
pytest tests/test_utils.py::test_function

# Run with verbose output
pytest -v

# Run with coverage
pytest --cov=mypackage --cov-report=html

# Run only fast tests
pytest -m "not slow"

# Run until first failure
pytest -x

# Stop after N failures
pytest --maxfail=3

# Run last failed tests
pytest --lf

# Run tests matching a pattern
pytest -k "test_user"

# Drop into the debugger on failure
pytest --pdb
```

## Quick Reference

| Pattern | Usage |
|---------|-------|
| `pytest.raises()` | Test expected exceptions |
| `@pytest.fixture()` | Create reusable test fixtures |
| `@pytest.mark.parametrize()` | Run tests with multiple inputs |
| `@pytest.mark.slow` | Mark slow tests |
| `pytest -m "not slow"` | Skip slow tests |
| `@patch()` | Mock functions and classes |
| `tmp_path` fixture | Automatic temp directory |
| `pytest --cov` | Generate coverage report |
| `assert` | Simple and readable assertions |

**Remember**: Tests are code too. Keep them clean, readable, and maintainable. Good tests catch bugs; great tests prevent them.
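One mocking detail the skill's examples don't show: `side_effect` also accepts an iterable, producing a different outcome on each call — exception instances in the list are raised, other values are returned. A minimal runnable sketch, using a hypothetical `fetch_with_retry` helper (the name and retry logic are illustrative, not part of the skill above):

```python
from unittest.mock import Mock, call

def fetch_with_retry(client, url, attempts=3):
    """Illustrative helper: retry a flaky dependency a fixed number of times."""
    for _ in range(attempts):
        try:
            return client.get(url)
        except ConnectionError:
            continue
    raise ConnectionError("all attempts failed")

# side_effect as an iterable: one item consumed per call.
# Exception instances are raised; plain values are returned.
client = Mock()
client.get.side_effect = [
    ConnectionError("down"),
    ConnectionError("down"),
    {"status": "ok"},
]

result = fetch_with_retry(client, "https://example.com/api")
assert result == {"status": "ok"}
assert client.get.call_count == 3
assert client.get.call_args_list == [call("https://example.com/api")] * 3
```

Once the iterable is exhausted, further calls raise `StopIteration`, so size the list to the number of calls the code under test is expected to make.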