Claude Agent Skill · by Affaan M

Clickhouse Io

Install the Clickhouse Io skill for Claude Code from affaan-m/everything-claude-code.

Install
Terminal · npx
$ npx skills add https://github.com/affaan-m/everything-claude-code --skill clickhouse-io
Works with Paperclip

How Clickhouse Io fits into a Paperclip company.

Clickhouse Io drops into any Paperclip agent that handles this kind of work. Assign it to a specialist inside a pre-configured PaperclipOrg company and the skill becomes available on every heartbeat — no prompt engineering, no tool wiring.

SaaS Factory · Paired

Pre-configured AI company — 18 agents, 18 skills, one-time purchase.

$27 (regularly $59)
Explore pack
Source file
SKILL.md · 439 lines
---
name: clickhouse-io
description: ClickHouse database patterns, query optimization, analytics, and data engineering best practices for high-performance analytical workloads.
origin: ECC
---

# ClickHouse Analytics Patterns

ClickHouse-specific patterns for high-performance analytics and data engineering.

## When to Activate

- Designing ClickHouse table schemas (MergeTree engine selection)
- Writing analytical queries (aggregations, window functions, joins)
- Optimizing query performance (partition pruning, projections, materialized views)
- Ingesting large volumes of data (batch inserts, Kafka integration)
- Migrating from PostgreSQL/MySQL to ClickHouse for analytics
- Implementing real-time dashboards or time-series analytics

## Overview

ClickHouse is a column-oriented database management system (DBMS) for online analytical processing (OLAP). It's optimized for fast analytical queries on large datasets.

**Key Features:**
- Column-oriented storage
- Data compression
- Parallel query execution
- Distributed queries
- Real-time analytics

## Table Design Patterns

### MergeTree Engine (Most Common)

```sql
CREATE TABLE markets_analytics (
    date Date,
    market_id String,
    market_name String,
    volume UInt64,
    trades UInt32,
    unique_traders UInt32,
    avg_trade_size Float64,
    created_at DateTime
) ENGINE = MergeTree()
PARTITION BY toYYYYMM(date)
ORDER BY (date, market_id)
SETTINGS index_granularity = 8192;
```

### ReplacingMergeTree (Deduplication)

```sql
-- For data that may have duplicates (e.g., from multiple sources)
CREATE TABLE user_events (
    event_id String,
    user_id String,
    event_type String,
    timestamp DateTime,
    properties String
) ENGINE = ReplacingMergeTree()
PARTITION BY toYYYYMM(timestamp)
ORDER BY (user_id, event_id, timestamp)
PRIMARY KEY (user_id, event_id);
```

### AggregatingMergeTree (Pre-aggregation)

```sql
-- For maintaining aggregated metrics
CREATE TABLE market_stats_hourly (
    hour DateTime,
    market_id String,
    total_volume AggregateFunction(sum, UInt64),
    total_trades AggregateFunction(count),
    unique_users AggregateFunction(uniq, String)
) ENGINE = AggregatingMergeTree()
PARTITION BY toYYYYMM(hour)
ORDER BY (hour, market_id);

-- Query aggregated data
SELECT
    hour,
    market_id,
    sumMerge(total_volume) AS volume,
    countMerge(total_trades) AS trades,
    uniqMerge(unique_users) AS users
FROM market_stats_hourly
WHERE hour >= toStartOfHour(now() - INTERVAL 24 HOUR)
GROUP BY hour, market_id
ORDER BY hour DESC;
```

## Query Optimization Patterns

### Efficient Filtering

```sql
-- GOOD: Filter on indexed columns first
SELECT *
FROM markets_analytics
WHERE date >= '2025-01-01'
  AND market_id = 'market-123'
  AND volume > 1000
ORDER BY date DESC
LIMIT 100;

-- BAD: Filter on non-indexed columns first
SELECT *
FROM markets_analytics
WHERE volume > 1000
  AND market_name LIKE '%election%'
  AND date >= '2025-01-01';
```

### Aggregations

```sql
-- GOOD: Use ClickHouse-specific aggregation functions
SELECT
    toStartOfDay(created_at) AS day,
    market_id,
    sum(volume) AS total_volume,
    count() AS total_trades,
    uniq(trader_id) AS unique_traders,
    avg(trade_size) AS avg_size
FROM trades
WHERE created_at >= today() - INTERVAL 7 DAY
GROUP BY day, market_id
ORDER BY day DESC, total_volume DESC;

-- Use quantile() for percentiles (approximate but fast)
SELECT
    quantile(0.50)(trade_size) AS median,
    quantile(0.95)(trade_size) AS p95,
    quantile(0.99)(trade_size) AS p99
FROM trades
WHERE created_at >= now() - INTERVAL 1 HOUR;
```
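### Verifying Index Usage

Whether a filter actually prunes partitions and uses the primary index can be checked with `EXPLAIN` before a query ships. A minimal sketch against the `markets_analytics` table defined above (output format varies by ClickHouse version):

```sql
-- Show which partitions and index granules the filter touches
EXPLAIN indexes = 1
SELECT market_id, sum(volume) AS total_volume
FROM markets_analytics
WHERE date >= '2025-01-01'
  AND market_id = 'market-123'
GROUP BY market_id;
```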
### Window Functions

```sql
-- Calculate running totals
SELECT
    date,
    market_id,
    volume,
    sum(volume) OVER (
        PARTITION BY market_id
        ORDER BY date
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
    ) AS cumulative_volume
FROM markets_analytics
WHERE date >= today() - INTERVAL 30 DAY
ORDER BY market_id, date;
```

## Data Insertion Patterns

### Bulk Insert (Recommended)

```typescript
import { ClickHouse } from 'clickhouse'

const clickhouse = new ClickHouse({
  url: process.env.CLICKHOUSE_URL,
  port: 8123,
  basicAuth: {
    username: process.env.CLICKHOUSE_USER,
    password: process.env.CLICKHOUSE_PASSWORD
  }
})

// GOOD: Batch insert (efficient)
async function bulkInsertTrades(trades: Trade[]) {
  const values = trades.map(trade => `(
    '${trade.id}',
    '${trade.market_id}',
    '${trade.user_id}',
    ${trade.amount},
    '${trade.timestamp.toISOString()}'
  )`).join(',')

  await clickhouse.query(`
    INSERT INTO trades (id, market_id, user_id, amount, timestamp)
    VALUES ${values}
  `).toPromise()
}

// BAD: Individual inserts (slow)
async function insertTrade(trade: Trade) {
  // Don't do this in a loop!
  await clickhouse.query(`
    INSERT INTO trades VALUES ('${trade.id}', ...)
  `).toPromise()
}
```

### Streaming Insert

```typescript
// For continuous data ingestion
async function streamInserts() {
  // dataSource is assumed to be an async iterable of row arrays
  const stream = clickhouse.insert('INSERT INTO trades').stream()

  for await (const row of dataSource) {
    // Stream rows one at a time; exec() flushes the insert
    await stream.writeRow(row)
  }

  await stream.exec()
}
```

## Materialized Views

### Real-time Aggregations

```sql
-- Create materialized view for hourly stats
CREATE MATERIALIZED VIEW market_stats_hourly_mv
TO market_stats_hourly
AS SELECT
    toStartOfHour(timestamp) AS hour,
    market_id,
    sumState(amount) AS total_volume,
    countState() AS total_trades,
    uniqState(user_id) AS unique_users
FROM trades
GROUP BY hour, market_id;

-- Query the materialized view
SELECT
    hour,
    market_id,
    sumMerge(total_volume) AS volume,
    countMerge(total_trades) AS trades,
    uniqMerge(unique_users) AS users
FROM market_stats_hourly
WHERE hour >= now() - INTERVAL 24 HOUR
GROUP BY hour, market_id;
```

## Performance Monitoring

### Query Performance

```sql
-- Check slow queries
SELECT
    query_id,
    user,
    query,
    query_duration_ms,
    read_rows,
    read_bytes,
    memory_usage
FROM system.query_log
WHERE type = 'QueryFinish'
  AND query_duration_ms > 1000
  AND event_time >= now() - INTERVAL 1 HOUR
ORDER BY query_duration_ms DESC
LIMIT 10;
```

### Table Statistics

```sql
-- Check table sizes
SELECT
    database,
    table,
    formatReadableSize(sum(bytes)) AS size,
    sum(rows) AS rows,
    max(modification_time) AS latest_modification
FROM system.parts
WHERE active
GROUP BY database, table
ORDER BY sum(bytes) DESC;
```

## Common Analytics Queries

### Time Series Analysis

```sql
-- Daily active users
SELECT
    toDate(timestamp) AS date,
    uniq(user_id) AS daily_active_users
FROM events
WHERE timestamp >= today() - INTERVAL 30 DAY
GROUP BY date
ORDER BY date;

-- Retention analysis
SELECT
    signup_date,
    uniqIf(user_id, days_since_signup = 0) AS day_0,
    uniqIf(user_id, days_since_signup = 1) AS day_1,
    uniqIf(user_id, days_since_signup = 7) AS day_7,
    uniqIf(user_id, days_since_signup = 30) AS day_30
FROM (
    SELECT
        user_id,
        min(toDate(timestamp)) OVER (PARTITION BY user_id) AS signup_date,
        toDate(timestamp) AS activity_date,
        dateDiff('day', signup_date, activity_date) AS days_since_signup
    FROM events
)
GROUP BY signup_date
ORDER BY signup_date DESC;
```
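Time-series results like the daily-active-users query above skip days with no events; `WITH FILL` can pad the date axis directly in SQL. A minimal sketch, assuming the same `events` table (missing days come back with zero counts):

```sql
-- Fill missing days so dashboards get a continuous date axis
SELECT
    toDate(timestamp) AS date,
    uniq(user_id) AS daily_active_users
FROM events
WHERE timestamp >= today() - INTERVAL 30 DAY
GROUP BY date
ORDER BY date WITH FILL FROM today() - 30 TO today() + 1;
```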
### Funnel Analysis

```sql
-- Conversion funnel
SELECT
    countIf(step = 'viewed_market') AS viewed,
    countIf(step = 'clicked_trade') AS clicked,
    countIf(step = 'completed_trade') AS completed,
    round(clicked / viewed * 100, 2) AS view_to_click_rate,
    round(completed / clicked * 100, 2) AS click_to_completion_rate
FROM (
    SELECT
        user_id,
        session_id,
        event_type AS step
    FROM events
    WHERE event_date = today()
);
```

### Cohort Analysis

```sql
-- User cohorts by signup month
SELECT
    toStartOfMonth(signup_date) AS cohort,
    toStartOfMonth(activity_date) AS month,
    dateDiff('month', cohort, month) AS months_since_signup,
    count(DISTINCT user_id) AS active_users
FROM (
    SELECT
        user_id,
        min(toDate(timestamp)) OVER (PARTITION BY user_id) AS signup_date,
        toDate(timestamp) AS activity_date
    FROM events
)
GROUP BY cohort, month, months_since_signup
ORDER BY cohort, months_since_signup;
```

## Data Pipeline Patterns

### ETL Pattern

```typescript
// Extract, Transform, Load
async function etlPipeline() {
  // 1. Extract from source
  const rawData = await extractFromPostgres()

  // 2. Transform
  const transformed = rawData.map(row => ({
    date: new Date(row.created_at).toISOString().split('T')[0],
    market_id: row.market_slug,
    volume: parseFloat(row.total_volume),
    trades: parseInt(row.trade_count, 10)
  }))

  // 3. Load to ClickHouse
  await bulkInsertToClickHouse(transformed)
}

// Run periodically
setInterval(etlPipeline, 60 * 60 * 1000)  // Every hour
```

### Change Data Capture (CDC)

```typescript
// Listen to PostgreSQL changes and sync to ClickHouse
import { Client } from 'pg'

const pgClient = new Client({ connectionString: process.env.DATABASE_URL })

await pgClient.connect()
await pgClient.query('LISTEN market_updates')

pgClient.on('notification', async (msg) => {
  const update = JSON.parse(msg.payload ?? '{}')

  // clickhouse client from the bulk-insert example above
  await clickhouse.insert(
    'INSERT INTO market_updates (market_id, event_type, timestamp, data)',
    [[
      update.id,
      update.operation,  // INSERT, UPDATE, DELETE
      new Date(),
      JSON.stringify(update.new_data)
    ]]
  ).toPromise()
})
```

## Best Practices

### 1. Partitioning Strategy
- Partition by time (usually month or day)
- Avoid too many partitions (performance impact)
- Use a Date type for the partition key

### 2. Ordering Key
- Put the most frequently filtered columns first
- Consider cardinality (low cardinality first)
- Order impacts compression

### 3. Data Types
- Use the smallest appropriate type (UInt32 vs UInt64)
- Use LowCardinality for repeated strings
- Use Enum for categorical data

### 4. Avoid
- SELECT * (specify columns)
- FINAL (merge data before querying instead)
- Too many JOINs (denormalize for analytics)
- Small, frequent inserts (batch instead)

### 5. Monitoring
- Track query performance
- Monitor disk usage
- Check merge operations
- Review the slow query log

**Remember**: ClickHouse excels at analytical workloads. Design tables for your query patterns, batch your inserts, and leverage materialized views for real-time aggregations.
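As a closing sketch that ties the partitioning, ordering-key, and data-type guidance together (the table and column names here are hypothetical, not part of the skill's examples):

```sql
-- Month partitions, low-cardinality columns first in ORDER BY,
-- LowCardinality/Enum for repeated and categorical values
CREATE TABLE events_optimized (
    event_date Date,
    event_type Enum8('viewed_market' = 1, 'clicked_trade' = 2, 'completed_trade' = 3),
    market_id LowCardinality(String),
    user_id String,
    amount UInt32,
    timestamp DateTime
) ENGINE = MergeTree()
PARTITION BY toYYYYMM(event_date)
ORDER BY (event_type, market_id, event_date, user_id);
```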