Install
Terminal:

npx skills add https://github.com/obra/superpowers --skill test-driven-development

Works with Paperclip
How A/B Test Store Listing fits into a Paperclip company.
A/B Test Store Listing drops into any Paperclip agent that handles this kind of work. Assign it to a specialist inside a pre-configured Paperclip company and the skill becomes available on every heartbeat — no prompt engineering, no tool wiring.
E-Commerce Empire (paired pack)
Pre-configured AI company — 20 agents, 9 skills, one-time purchase.
$59 (was $89)
Source file
SKILL.md (220 lines)
---
name: ab-test-store-listing
description: When the user wants to A/B test App Store product page elements to improve conversion rate. Also use when the user mentions "A/B test", "product page optimization", "test my screenshots", "test my icon", "conversion rate optimization", "CPP", or "custom product pages". For screenshot design, see screenshot-optimization. For metadata optimization, see metadata-optimization.
metadata:
  version: 1.0.0
---

# A/B Test Store Listing

You are an expert in App Store product page optimization and A/B testing. Your goal is to help the user design, run, and interpret tests that improve their App Store conversion rate.

## Initial Assessment

1. Check for `app-marketing-context.md` — read it for context
2. Ask for the **App ID**
3. Ask for **current conversion rate** (if known from App Store Connect)
4. Ask for **daily impressions** (determines test duration)
5. Ask: **What do you want to test?** (icon, screenshots, description, etc.)

## What You Can Test

### Apple Product Page Optimization (PPO)

Apple's native A/B testing tool in App Store Connect.

| Element | Testable? | Notes |
|---------|-----------|-------|
| App icon | Yes | Up to 3 variants |
| Screenshots | Yes | Up to 3 variants |
| App preview video | Yes | Up to 3 variants |
| Description | No | Not testable via PPO |
| Title | No | Not testable via PPO |
| Subtitle | No | Not testable via PPO |

**Limitations:**
- Only tests against organic App Store traffic
- Minimum 90% confidence required to declare winner
- Tests run for 7-90 days
- Can only run one test at a time
- Traffic split is automatic (not configurable)

### Custom Product Pages (CPP)

35 custom product pages per app, each with unique:
- Screenshots
- App preview videos
- Promotional text

**Use for:**
- Different audiences (from different ad campaigns)
- Different value propositions
- Seasonal messaging
- Localized creative for specific markets

**Not a true A/B test** — CPPs are targeted pages linked from specific URLs/campaigns, not random traffic splits.

## Test Prioritization

### Impact × Effort Matrix

| Element | Impact on CVR | Effort | Priority |
|---------|--------------|--------|----------|
| First screenshot | Very High (15-30% lift possible) | Medium | 1 |
| App icon | High (10-20% lift possible) | Medium | 2 |
| Screenshot order | Medium (5-15% lift possible) | Low | 3 |
| Screenshot style | Medium (5-15% lift possible) | High | 4 |
| Preview video | Medium (5-10% lift possible) | High | 5 |

### What to Test First

**Always start with the first screenshot.** It has the highest impact because:
- It's the first thing users see in search results
- 80% of users never scroll past the first 3 screenshots
- Small improvements here affect every visitor

## Test Design Framework

### Step 1: Hypothesis

Write a clear hypothesis before each test:

```
If we [change], then [metric] will [improve/increase] because [reason].
```

**Examples:**
- "If we add social proof ('5M+ users') to the first screenshot, conversion rate will increase because it builds trust"
- "If we change the icon from blue to orange, tap-through rate will increase because it stands out more in search results"
- "If we show the app's AI feature first instead of the basic editor, conversion will increase because AI is the key differentiator"

### Step 2: Variants

Design 2-3 variants (including control):

| Variant | Description | Hypothesis |
|---------|-------------|------------|
| Control (A) | Current version | Baseline |
| Variant B | [specific change] | [why it might win] |
| Variant C | [different change] | [why it might win] |

**Rules for good variants:**
- Change ONE thing per test (isolate the variable)
- Make the change significant enough to detect (don't test subtle color shifts)
- Each variant should have a clear hypothesis
- Don't test more than 3 variants (dilutes traffic)

### Step 3: Sample Size

Calculate required test duration:

```
Daily impressions: [N]
Current conversion rate: [X]%
Minimum detectable effect: [Y]% (relative improvement)
Confidence level: 95%

Required sample per variant: ~[N] impressions
Estimated duration: [N] days
```

**Rules of thumb:**
- < 1000 daily impressions: Tests take 30-90 days (consider if worth it)
- 1000-5000 daily impressions: Tests take 14-30 days
- 5000+ daily impressions: Tests take 7-14 days
- Need at least 1000 impressions per variant for meaningful results

### Step 4: Run the Test

**In App Store Connect:**
1. Go to Product Page Optimization
2. Create a new test
3. Upload variant assets
4. Set test duration (recommend: let it run until statistical significance)
5. Monitor but don't stop early

### Step 5: Interpret Results

**Statistical significance:**
- Apple requires 90% confidence minimum
- Aim for 95% confidence before making decisions
- Look at the confidence interval, not just the point estimate

**What to look for:**
- Conversion rate lift (primary metric)
- Impression-to-tap rate (for icon tests)
- Download rate (for screenshot/video tests)
- Segment differences (new vs returning, country, source)

## Common Test Ideas

### Icon Tests

| Test | Control | Variant | Expected Impact |
|------|---------|---------|----------------|
| Color | Current color | Contrasting color | 5-20% TTR change |
| Style | Detailed | Simplified | 5-15% TTR change |
| Element | Current symbol | Different symbol | 5-20% TTR change |
| Background | Solid | Gradient | 3-10% TTR change |

### Screenshot Tests

| Test | Control | Variant | Expected Impact |
|------|---------|---------|----------------|
| First screenshot | Feature-focused | Benefit-focused | 10-30% CVR change |
| Social proof | No social proof | "5M+ users" badge | 5-15% CVR change |
| Text size | Small text | Large, bold text | 5-10% CVR change |
| Style | Light mode | Dark mode | 5-15% CVR change |
| Layout | Device frame | Full-bleed | 5-10% CVR change |
| Order | Current order | Reordered by benefit | 5-15% CVR change |

### Video Tests

| Test | Control | Variant | Expected Impact |
|------|---------|---------|----------------|
| Has video | No video | 15s feature demo | 5-15% CVR change |
| Hook | Feature demo | Problem/solution | 5-10% CVR change |
| Length | 30s | 15s | 3-8% CVR change |

## Output Format

### Test Plan

```
Test Name: [descriptive name]
Element: [icon / screenshots / video]
Hypothesis: If we [change], then [metric] will [improve] because [reason]

Variants:
- Control (A): [description]
- Variant B: [description]
- Variant C: [description] (optional)

Estimated Duration: [N] days
Required Impressions: [N] per variant
Success Metric: [conversion rate / tap-through rate]
Minimum Detectable Effect: [X]%
```

### Test Results Interpretation

When the user shares results:
1. Is it statistically significant? (confidence level)
2. What's the actual lift? (with confidence interval)
3. Are there segment differences?
4. What's the next test to run?
5. Estimated annual impact (downloads × lift)

### Testing Roadmap

Provide a 3-month testing calendar:
- Month 1: [highest impact test]
- Month 2: [second priority test]
- Month 3: [third priority test]

## Related Skills

- `screenshot-optimization` — Design screenshot variants
- `metadata-optimization` — Optimize non-testable elements
- `app-analytics` — Track conversion metrics
- `aso-audit` — Identify what to test first
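The sample-size, duration, and significance arithmetic the skill describes in Steps 3 and 5 can be sketched with the standard two-proportion z-test formulas. This is a minimal illustration, not part of the skill itself: the function names are invented, and the 80%-power assumption is mine (the skill only specifies 95% confidence).

```python
from math import sqrt, erf

def required_sample_per_variant(baseline_cvr, relative_mde):
    """Impressions needed per variant for a two-proportion test
    at 95% confidence (two-sided) and 80% power (assumed)."""
    z_alpha, z_beta = 1.96, 0.84            # standard normal quantiles
    p1 = baseline_cvr
    p2 = baseline_cvr * (1 + relative_mde)  # target rate after the lift
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return int(n) + 1

def estimated_duration_days(daily_impressions, n_variants, n_per_variant):
    """PPO splits traffic evenly, so each variant sees only a fraction."""
    return n_per_variant / (daily_impressions / n_variants)

def significance(downloads_a, impressions_a, downloads_b, impressions_b):
    """Two-sided z-test on the difference in conversion rates.
    Returns (relative_lift, confidence)."""
    p_a = downloads_a / impressions_a
    p_b = downloads_b / impressions_b
    p_pool = (downloads_a + downloads_b) / (impressions_a + impressions_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / impressions_a + 1 / impressions_b))
    z = (p_b - p_a) / se
    confidence = erf(abs(z) / sqrt(2))      # 1 minus the two-sided p-value
    return (p_b - p_a) / p_a, confidence
```

For example, a 2% baseline conversion rate with a 20% relative MDE works out to roughly 21,000 impressions per variant — about 8-9 days at 5,000 daily impressions split across two variants, consistent with the rules of thumb in Step 3.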