Claude Agent Skill · by Jeffallan

Database Optimizer

Install the Database Optimizer skill for Claude Code from jeffallan/claude-skills.

Install
Terminal · npx
$ npx skills add https://github.com/jeffallan/claude-skills --skill database-optimizer
Works with Paperclip

How Database Optimizer fits into a Paperclip company.

Database Optimizer drops into any Paperclip agent that handles this kind of work. Assign it to a specialist inside a pre-configured PaperclipOrg company and the skill becomes available on every heartbeat — no prompt engineering, no tool wiring.

Paired pack: SaaS Factory, a pre-configured AI company with 18 agents and 18 skills. One-time purchase: $27 (regularly $59).
Source file
SKILL.md · 147 lines
---
name: database-optimizer
description: Optimizes database queries and improves performance across PostgreSQL and MySQL systems. Use when investigating slow queries, analyzing execution plans, or optimizing database performance. Invoke for index design, query rewrites, configuration tuning, partitioning strategies, lock contention resolution.
license: MIT
metadata:
  author: https://github.com/Jeffallan
  version: "1.1.1"
  domain: infrastructure
  triggers: database optimization, slow query, query performance, database tuning, index optimization, execution plan, EXPLAIN ANALYZE, database performance, PostgreSQL optimization, MySQL optimization
  role: specialist
  scope: optimization
  output-format: analysis-and-code
  related-skills: devops-engineer, postgres-pro, graphql-architect
---

# Database Optimizer

Senior database optimizer with expertise in performance tuning, query optimization, and scalability across multiple database systems.

## When to Use This Skill

- Analyzing slow queries and execution plans
- Designing optimal index strategies
- Tuning database configuration parameters
- Optimizing schema design and partitioning
- Reducing lock contention and deadlocks
- Improving cache hit rates and memory usage

## Core Workflow

1. **Analyze Performance** — Capture baseline metrics and run `EXPLAIN ANALYZE` before any changes
2. **Identify Bottlenecks** — Find inefficient queries, missing indexes, config issues
3. **Design Solutions** — Create index strategies, query rewrites, schema improvements
4. **Implement Changes** — Apply optimizations incrementally with monitoring; validate each change before proceeding to the next
5. **Validate Results** — Re-run `EXPLAIN ANALYZE`, compare costs, measure wall-clock improvement, document changes

> ⚠️ Always test changes in non-production first. Revert immediately if write performance degrades or replication lag increases.
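Step 1's baseline capture is cleanest when the measurement window starts empty. As a hedged sketch for PostgreSQL, assuming the `pg_stat_statements` extension is installed (resetting may require superuser or equivalent monitoring privileges, depending on the server version):

```sql
-- Start the measurement window from a clean slate before capturing a baseline.
-- Assumes pg_stat_statements is installed; privilege requirements vary by
-- PostgreSQL version.
SELECT pg_stat_statements_reset();
```

After the representative workload has run for a while, the slow-query listing in the examples below reflects only that window, which makes before/after comparisons attributable to a single change.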
## Reference Guide

Load detailed guidance based on context:

| Topic | Reference | Load When |
|-------|-----------|-----------|
| Query Optimization | `references/query-optimization.md` | Analyzing slow queries, execution plans |
| Index Strategies | `references/index-strategies.md` | Designing indexes, covering indexes |
| PostgreSQL Tuning | `references/postgresql-tuning.md` | PostgreSQL-specific optimizations |
| MySQL Tuning | `references/mysql-tuning.md` | MySQL-specific optimizations |
| Monitoring & Analysis | `references/monitoring-analysis.md` | Performance metrics, diagnostics |

## Common Operations & Examples

### Identify Top Slow Queries (PostgreSQL)

```sql
-- Requires pg_stat_statements extension
SELECT query,
       calls,
       round(total_exec_time::numeric, 2)  AS total_ms,
       round(mean_exec_time::numeric, 2)   AS mean_ms,
       round(stddev_exec_time::numeric, 2) AS stddev_ms,
       rows
FROM   pg_stat_statements
ORDER  BY mean_exec_time DESC
LIMIT  20;
```

### Capture an Execution Plan

```sql
-- Use BUFFERS to expose cache hit vs. disk read ratio
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT o.id, c.name
FROM   orders o
JOIN   customers c ON c.id = o.customer_id
WHERE  o.status = 'pending'
  AND  o.created_at > now() - interval '7 days';
```

### Reading EXPLAIN Output — Key Patterns to Find

| Pattern | Symptom | Typical Remedy |
|---------|---------|----------------|
| `Seq Scan` on large table | High row estimate, no filter selectivity | Add B-tree index on filter column |
| `Nested Loop` with large outer set | Exponential row growth in inner loop | Consider Hash Join; index inner join key |
| `cost=... rows=1` but actual rows=50000 | Stale statistics | Run `ANALYZE <table>;` |
| `Buffers: hit=10 read=90000` | Low buffer cache hit rate | Increase `shared_buffers`; add covering index |
| `Sort Method: external merge` | Sort spilling to disk | Increase `work_mem` for the session |

### Create a Covering Index

```sql
-- Covers the filter AND the projected columns, eliminating a heap fetch
CREATE INDEX CONCURRENTLY idx_orders_status_created_covering
    ON orders (status, created_at)
    INCLUDE (customer_id, total_amount);
```

### Validate Improvement

```sql
-- Before optimization: save plan & timing
EXPLAIN (ANALYZE, BUFFERS) <query>;   -- note "Execution Time: X ms"

-- After optimization: compare
EXPLAIN (ANALYZE, BUFFERS) <query>;   -- target meaningful reduction in cost & time

-- Confirm index is actually used
SELECT indexname, idx_scan, idx_tup_read, idx_tup_fetch
FROM   pg_stat_user_indexes
WHERE  relname = 'orders';
```

### MySQL: Find Slow Queries

```sql
-- Inspect slow query log candidates
SELECT * FROM performance_schema.events_statements_summary_by_digest
ORDER  BY SUM_TIMER_WAIT DESC
LIMIT  20;

-- Execution plan
EXPLAIN FORMAT=JSON
SELECT * FROM orders WHERE status = 'pending' AND created_at > NOW() - INTERVAL 7 DAY;
```

## Constraints

### MUST DO

- Capture `EXPLAIN (ANALYZE, BUFFERS)` output **before** optimizing — this is the baseline
- Measure performance before and after every change
- Create indexes with `CONCURRENTLY` (PostgreSQL) to avoid table locks
- Test in non-production; roll back if write performance or replication lag worsens
- Document all optimization decisions with before/after metrics
- Run `ANALYZE` after bulk data changes to refresh statistics

### MUST NOT DO

- Apply optimizations without a measured baseline
- Create redundant or unused indexes
- Make multiple changes simultaneously (impossible to attribute impact)
- Ignore write amplification caused by new indexes
- Neglect `VACUUM` / statistics maintenance

## Output Templates

When optimizing database performance, provide:

1. Performance analysis with baseline metrics (query time, cost, buffer hit ratio)
2. Identified bottlenecks and root causes (with EXPLAIN evidence)
3. Optimization strategy with specific changes
4. Implementation SQL / config changes
5. Validation queries to measure improvement
6. Monitoring recommendations
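The skill's trigger list also covers lock contention resolution, which the examples above do not illustrate. As a hedged sketch for PostgreSQL 9.6 or later, blocked and blocking sessions can be paired with `pg_blocking_pids` (the `orders` workload is assumed purely for context):

```sql
-- Pair each blocked session with the session(s) holding the conflicting lock.
-- pg_blocking_pids() is available from PostgreSQL 9.6 onward.
SELECT blocked.pid                  AS blocked_pid,
       blocked.query                AS blocked_query,
       blocking.pid                 AS blocking_pid,
       blocking.query               AS blocking_query,
       now() - blocked.query_start  AS blocked_for
FROM   pg_stat_activity blocked
JOIN   pg_stat_activity blocking
  ON   blocking.pid = ANY (pg_blocking_pids(blocked.pid))
WHERE  cardinality(pg_blocking_pids(blocked.pid)) > 0;
```

A long `blocked_for` interval pointing at an idle-in-transaction blocker is a common root cause; terminating or fixing the blocking session is usually safer than retrying the blocked one.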