Pre-Spend Ad Intelligence Playbook
How to research keywords, creatives & competitors before your first dollar. Transform competitive research into a Day 1 testing roadmap that minimizes wasted spend.
0. The Core Framework: Hypothesis-Led Research
Philosophy
The goal is not to catalog where to look, but to build a structured hypothesis about what will work for your product category + audience, validated by competitor evidence, before spending a dollar.
Before diving into any platform, define what you're trying to learn:
The Three Questions
- What pain points does my audience have? (Demand signals)
- How are competitors positioning against those pain points? (Supply signals)
- Where is the gap between what the audience wants and what competitors deliver? (White space = your Day 1 advantage)
The Evidence Hierarchy
Not all signals are equal. Rank what you find:
| Signal | Confidence | How to Detect |
|---|---|---|
| Validated Winner | Very High | Same creative running 60+ days on Meta AND migrated to TikTok/YouTube |
| Active Winner | High | Running 30-60 days + high variant count (5+ versions) |
| Active Test | Medium | Running <14 days, multiple variants, frequent rotation |
| Zombie Ad | Low | Running 60+ days BUT no variants, low frequency, stale copy |
| New Launch | Unknown | <7 days, single variant — too early to read |
Critical 2026 nuance: Duration alone is no longer a reliable proxy. Meta's AI and Google's PMax keep "zombie ads" running in low-value placements to spend residual budget. You need duration + variant velocity + cross-platform migration to identify true winners.
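A minimal sketch of the hierarchy as a rule set (thresholds copied from the table; the `migrated` flag assumes your vault records cross-platform sightings, per the next section):

```python
def classify_signal(days_active: int, variant_count: int, migrated: bool) -> str:
    """Rank one competitor ad using the Evidence Hierarchy thresholds above."""
    if days_active >= 60 and migrated:
        return "Validated Winner"   # very high: long-running AND cross-platform
    if days_active >= 60 and variant_count <= 1:
        return "Zombie Ad"          # low: platform AI keeps it alive in cheap placements
    if days_active >= 30 and variant_count >= 5:
        return "Active Winner"      # high: sustained run plus active optimization
    if days_active < 7 and variant_count <= 1:
        return "New Launch"         # unknown: too early to read
    if days_active < 14 and variant_count >= 2:
        return "Active Test"        # medium: fresh, multiple variants in rotation
    return "Unclassified"           # falls between buckets: review manually

print(classify_signal(days_active=75, variant_count=6, migrated=True))  # Validated Winner
```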
The Cross-Platform Migration Signal (Highest Confidence)
The single most valuable pre-spend signal: Track when a creative moves across platforms.
| Migration Path | What It Tells You |
|---|---|
| Meta → TikTok | Winner validated on broad audience, adapting to native format |
| TikTok → Meta | Viral concept being scaled with paid budget |
| Meta → YouTube | Winner moving to longer-form storytelling |
| Any → Apple Search Ads screenshot | Message tested on social, now used in App Store |
How to detect: Save competitor creatives weekly in a vault (Foreplay, MagicBrief, or spreadsheet). Compare across platforms monthly. Same hook/concept appearing on 2+ platforms = 10/10 confidence signal.
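A sketch of the monthly cross-platform comparison, assuming you tag each saved creative with a concept label when you vault it (the labels and platforms below are illustrative):

```python
from collections import defaultdict

# Vault entries as (platform, concept_label) pairs. The concept label is
# whatever tag you assign when saving an ad (an assumption of this sketch).
vault = [
    ("Meta", "loss_aversion_countdown"),
    ("TikTok", "loss_aversion_countdown"),
    ("Meta", "ugc_unboxing"),
]

platforms_by_concept = defaultdict(set)
for platform, concept in vault:
    platforms_by_concept[concept].add(platform)

# Same hook/concept on 2+ platforms = the 10/10 confidence signal above.
migrations = {c: p for c, p in platforms_by_concept.items() if len(p) >= 2}
print(migrations)  # {'loss_aversion_countdown': {'Meta', 'TikTok'}}
```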
1. Meta (Facebook / Instagram)
Tools
| Tool | Cost | Use For |
|---|---|---|
| Meta Ad Library | Free | Primary research — all active ads, any advertiser |
| Foreplay / MagicBrief | $49-99/mo | Save ads to permanent vault (critical — competitors delete winners to hide them from scrapers) |
| AdSpy / PowerAdSpy | Paid | Archived creatives beyond Meta's 90-day window |
URL: facebook.com/ads/library
What You Can See
- All active (and recently inactive) ads from any advertiser
- Ad copy, visuals, video, carousel, and collection formats
- Start dates + whether ad is still running
- Platform placement (Facebook, Instagram, Messenger, Audience Network)
- Multiple versions of same ad (variant count)
Research Workflow
- Build a watch list: 5-10 direct competitors + 2-3 aspirational brands in adjacent categories
- Search by brand name: See full active catalog for each competitor
- Search by product keywords: (e.g., "puzzle game", "RPG", "match-3") to discover unknown competitors
- Filter by country and platform: Geo-specific creative strategies reveal market entry priorities
- Save everything to a vault: Use Foreplay/MagicBrief or at minimum screenshot + spreadsheet. The library is ephemeral — ads disappear after ~90 days, and competitors actively delete winners.
Winner Identification (Multi-Signal Approach)
Don't rely on duration alone. Score each ad on multiple signals:
| Signal | Weight | How to Assess |
|---|---|---|
| Duration (days active) | 25% | 60+ days = validated |
| Variant count | 25% | 5+ versions = active optimization |
| Format consistency | 15% | Same format repeated = proven format for category |
| Cross-platform presence | 25% | Same concept on TikTok/YouTube = highest confidence |
| Frequency of appearance | 10% | Appears across multiple searches = broad targeting (big budget) |
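A sketch of the weighted score as a function (weights from the table; the normalization caps, such as 3+ search appearances counting as full marks, are assumptions):

```python
def winner_score(days_active: int, variants: int, format_consistent: bool,
                 cross_platform: bool, search_hits: int) -> float:
    """Multi-signal score (0-100) using the table's weights."""
    duration = min(days_active / 60, 1.0)   # 60+ days = full duration marks
    variant = min(variants / 5, 1.0)        # 5+ versions = full variant marks
    fmt = 1.0 if format_consistent else 0.0
    xplat = 1.0 if cross_platform else 0.0
    freq = min(search_hits / 3, 1.0)        # assumed cap: 3+ searches = broad targeting
    return 100 * (0.25 * duration + 0.25 * variant + 0.15 * fmt
                  + 0.25 * xplat + 0.10 * freq)

print(winner_score(75, 8, True, True, 4))  # 100.0: validated on every signal
```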
Creative Extraction Template
For each winner, capture:
| Element | What to Note | Why It Matters |
|---|---|---|
| Hook (first 3 sec / headline) | Emotional trigger, question, stat, shock, loss-aversion, curiosity gap | Hooks are widely credited with ~80% of ad performance |
| Visual style | UGC vs. polished, gameplay vs. lifestyle, color palette, text overlays | Establishes category norms |
| Audio (video ads) | Voiceover gender/tone, music genre, sound effects, trending sounds | Audio drives completion rate on social |
| CTA | "Play Now" vs. "Download Free" vs. "Try It" vs. deep link | CTA impacts conversion rate 10-30% |
| Copy length | Short punchy vs. long-form testimonial vs. no copy | Platform norms differ |
| Offer/angle | Feature-led vs. emotion-led vs. social-proof-led vs. FOMO | Core messaging strategy |
| Ad format | Static, video, carousel, collection, playable | Format allocation signals what converts |
| Landing destination | App Store, custom product page, website, in-app deep link | Full funnel intelligence |
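If you vault in a spreadsheet, each winner maps naturally to one record. A sketch of the template as a Python dataclass (the example values in the comments are illustrative, not a complete taxonomy):

```python
from dataclasses import dataclass

@dataclass
class CreativeRecord:
    """One row of the extraction template above, ready for the vault."""
    hook: str          # e.g. "loss-aversion", "curiosity gap", "shock stat"
    visual_style: str  # "UGC", "polished", "gameplay", "lifestyle"
    audio: str         # voiceover tone, music genre, or "SFX only"
    cta: str           # exact text, e.g. "Play Now"
    copy_length: str   # "short", "long-form", "none"
    angle: str         # "feature", "emotion", "social-proof", "FOMO"
    ad_format: str     # "static", "video", "carousel", "collection", "playable"
    destination: str   # "App Store", "CPP", "website", "deep link"
```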
Advantage+ & Automation Detection
Key Insight: In 2026, many competitors run Advantage+ Shopping Campaigns (ASC) or Advantage+ App Campaigns. If competitors are succeeding with broad AI campaigns, your creative quality matters more than targeting — Meta's AI handles audience finding.
Signs of Advantage+ usage:
- Broad, generic copy with no specific targeting language = likely AI-optimized
- High volume of slight variations (auto-generated by Meta's creative AI) = Advantage+ Creative enabled
- Multi-advertiser ads = Meta bundling advertiser with others in same placement
Limitations
Known Gaps
- No CTR, CPC, ROAS, or impression data for commercial ads
- No visibility into audience targeting parameters (though Advantage+ means there's less targeting to see anyway)
- Inactive ads disappear after ~90 days (use third-party vaults)
- Competitors actively delete winning ads to prevent copying
2. Google (Search, Performance Max, YouTube, Display)
Tools
| Tool | Cost | Use For |
|---|---|---|
| Google Ads Transparency Center | Free | All Search, Display, YouTube ads — no login required |
| Google Keyword Planner | Free (with Ads account) | Volume, CPC, suggestions — note: ranges are vague without active spend |
| Google Trends | Free | Relative interest, geo distribution, seasonal patterns |
| Google Ads Reach Planner | Free (with Ads account) | Forecast de-duplicated reach across YouTube + Display pre-spend |
| SpyFu | $39/mo+ | Historical competitor keyword bids, ad copy archive |
| SEMrush | $150/mo+ | Keyword gap analysis, competitor ad copy, traffic estimates |
| Ahrefs | $99/mo+ | Keyword difficulty, organic vs. paid overlap, Site Explorer |
| SimilarWeb | Free tier | Traffic sources, audience overlap, referral paths |
| VidIQ / TubeBuddy | Free tiers | YouTube tag intelligence, comment sentiment, trending topics |
Research Workflow — Search Ads
- Seed keyword brainstorm: List 20-30 terms your audience would search
- Keyword Planner expansion: Input seeds, extract 200-500 suggestions. Note: without active spend, you get ranges (10k-100k) not exact volumes. Use SEMrush/Ahrefs for more precise estimates.
- Intent classification: Separate keywords into buckets (a rule-based sketch follows this list):
- Informational: "how to improve aim in FPS games" (top-of-funnel content)
- Commercial: "best puzzle games 2026" (comparison/consideration)
- Transactional: "download Royal Match" (direct install intent)
- Navigational: "[competitor brand name]" (conquest opportunity)
- CPC benchmarking: Sort by estimated CPC to map competitive intensity. High CPC = proven conversion value in that keyword.
- SERP analysis: Search top 20 keywords in incognito mode. Screenshot ads. Note messaging themes, CTA patterns, ad extensions used, and number of advertisers per keyword.
- Long-tail discovery: AnswerThePublic, "People Also Ask", Google auto-complete, Reddit threads
- Competitor keyword gap: Use SpyFu or SEMrush to find keywords competitors bid on that you haven't considered
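The intent-classification step above can be roughed out with keyword triggers before manual review. A sketch (the trigger words are assumptions; extend them per category, and match navigational terms against your competitor list):

```python
def classify_intent(keyword: str) -> str:
    """Rule-based bucketing for the intent classification step above."""
    kw = keyword.lower()
    if any(w in kw for w in ("how to", "why ", "tips", "guide")):
        return "informational"
    if any(w in kw for w in ("best", "top ", " vs ", "review")):
        return "commercial"
    if any(w in kw for w in ("download", "install", "buy", "play ")):
        return "transactional"
    return "navigational/unknown"  # check against your competitor brand list

print(classify_intent("best puzzle games 2026"))  # commercial
```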
Research Workflow — Performance Max (PMax)
Intelligence is harder with PMax: it is the dominant Google campaign type in 2026, and the Transparency Center doesn't show which assets are "Top" rated within a PMax campaign.
What you can still learn:
- Which competitors run PMax (visible in Transparency Center as mixed format ads)
- Their asset types: headlines, descriptions, images, videos, logos
- Landing page destinations (often custom LP vs. App Store)
- Creative rotation patterns
PMax-specific research:
- Search competitor domains in Transparency Center
- Catalog all visible creative assets (text + visual + video)
- Note which assets persist across weeks (= top-performing assets Google's AI keeps)
- Check if they use YouTube assets in PMax (signals video-first strategy)
Research Workflow — YouTube
- Transparency Center: Pull all video ads from competitors
- VidIQ/TubeBuddy: Analyze competitor YouTube channel tags, comment sentiment, trending topics
- Comment mining: Read top comments on competitor game/product videos — this is a goldmine for ad copy hooks (real language your audience uses)
- Video structure analysis: For top-performing ads, map: Hook (0-3s) → Problem (3-10s) → Solution (10-20s) → CTA (20-30s)
- Reach Planner: Forecast impressions and reach for your target audience before spending
Best practice: YouTube comment mining is one of the most underrated research tactics. Real audience language in comments often produces ad copy with 2-3x better engagement than marketer-written copy.
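Because the raw material is a pile of comments, a short script can surface the most repeated phrases to seed hooks. A sketch, assuming comments are already copied into a list:

```python
import re
from collections import Counter

def top_phrases(comments: list[str], n: int = 2, k: int = 10):
    """Count the most repeated n-word phrases across scraped comments.
    Deliberately crude tokenization; the goal is hook language, not NLP."""
    grams = Counter()
    for c in comments:
        words = re.findall(r"[a-z']+", c.lower())
        grams.update(" ".join(words[i:i + n]) for i in range(len(words) - n + 1))
    return grams.most_common(k)

comments = ["This game is so addictive", "So addictive I lost my whole weekend"]
print(top_phrases(comments)[0])  # ('so addictive', 2): a candidate hook phrase
```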
3. TikTok
Tools
| Tool | Cost | Use For |
|---|---|---|
| TikTok Creative Center | Free | Top ads dashboard, keyword insights, trend discovery |
| TikTok Creative Challenge (TTCC) | Free to browse | See what creators are being paid to make — pre-trend signals |
| Foreplay / MagicBrief | $49-99/mo | Cross-platform ad vault |
URL: ads.tiktok.com/business/creativecenter
Research Workflow
- Top Ads by vertical: Filter to your product category (e.g., "Gaming", "App") and region
- Analyze top 20 ads with structured scoring: first-frame hook type, pacing, audio, text overlay placement and style, duration and completion structure
- Keyword Insights: Extract trending keywords in your category's ad copy — these are proven conversion words
- Emerging sounds (not just Popular): Use Trend Discovery's "Emerging" filter to find sounds 24-48 hours old. By the time they're "Popular," the novelty advantage is gone.
- TikTok Creative Challenge (TTCC): Browse the challenge portal to see what brands are paying creators to produce. These briefs reveal the creative direction before ads go live.
- Organic signal mining: Search "TikTok made me download" + [category], sorted by "Last 7 Days", to find organic content that converts — potential Spark Ad candidates.
Winning Creative Patterns (2026)
| Pattern | Performance Signal | Category Fit |
|---|---|---|
| UGC-style (real people) | Higher trust, better CTR | Universal |
| Hook in first 2 seconds | Stop-the-scroll mandatory | Universal |
| Creator-led (employee/influencer) | Dramatically higher engagement | Mid-funnel, consideration |
| Spark Ads (boosted organic) | ~20% higher completion rate (2026 benchmark) | Proven organic content |
| Action overlays ("Download", "Try it") | 18%+ conversion lift | Bottom-funnel |
| 15-30 second duration | Optimal completion-to-action ratio | Universal |
| Problem-solution narrative | High intent capture | App/game installs |
Creative Velocity Intelligence
| Metric | What to Measure | Benchmark (2026) |
|---|---|---|
| Creatives per month (top advertisers) | Total new ads launched | 100-500+ for F2P gaming top 1% |
| Creative lifespan | Days before retirement | 7-14 days (hyper-casual), 21-30 days (mid-core) |
| Variant ratio | Variants per concept | 5-10x per winning concept |
| Format split | Video vs. static vs. playable | 70% video, 20% playable, 10% static (gaming) |
TikTok Algorithm Nuances:
- Over-optimized ads get penalized by the algorithm (too polished = less native)
- Sound selection impacts distribution beyond the ad itself: trending sounds earn more impressions
- Hashtag challenges do not equal ad performance; don't confuse organic virality with paid effectiveness
4. Apple Ads (formerly Apple Search Ads)
Tools
| Tool | Cost | Use For |
|---|---|---|
| Apple Ads dashboard | Free (with account) | Search popularity scores (1-100), suggested keywords |
| App Store auto-complete | Free | Apple's own keyword suggestions |
| AppTweak | Free Starter | Keyword suggestions, competitor visibility, download estimates, CPP analysis |
| MobileAction | Limited free | Competitor keyword strategies, estimated budgets, impression share |
| SplitMetrics | Free trial | Creative A/B testing for App Store pages, CPP optimization |
| SensorTower | $25K+/yr | Full creative gallery, keyword analytics, CPI forecasts |
Research Workflow
- Seed keywords from your app metadata: Title, subtitle, keyword field
- Competitor app analysis: Via ASO tools, identify keywords in competitor metadata
- Auto-complete mining: Type partial category terms, note all Apple suggestions
- Search popularity scoring: Prioritize keywords with popularity 40+ (below 40 = too low volume)
- Competitor ad monitoring: Search your target keywords, screenshot which competitors appear
- Custom Product Page (CPP) analysis: This is the 2026 differentiator — 20% of ASA success is keywords, 80% is CPP alignment
Custom Product Pages (CPPs) — The 2026 Lever
The key insight: Apple Ads now links to Custom Product Pages, not just your default store listing. Competitive intelligence must include CPP analysis.
| What to Research | How | Why It Matters |
|---|---|---|
| Which CPP variant links to which keyword | AppTweak CPP tracker | Message match = higher TTR |
| Screenshot themes per CPP | Manual App Store search | Visual alignment with search intent |
| Deep link behavior | Test competitor ads | Some skip product page entirely → in-app events |
| CPP text (subtitle, promo text) | ASO tools | Conversion copy that's been validated |
Key Metrics
| Metric | Target | Notes |
|---|---|---|
| Share of Voice (SOV) | 50-70% on core terms | 70-90% is too aggressive — inflates CPI by 20-25% in bid wars |
| Tap-Through Rate (TTR) | Category benchmark | Varies by genre; use AppTweak/SensorTower |
| Keyword overlap | Map vs. top 5 competitors | Identify uncontested terms |
| Search popularity | Prioritize 40+ | Below 40 = marginal volume |
2026 Updates:
- Rebranded from "Search Ads" to "Apple Ads"
- New placements: "Today" tab (with video support), product page ads, Search tab suggestions
- "Today" tab video ads are showing 12%+ higher TTR for gaming categories
- Deep link support enables bypassing the product page entirely for known users
5. Mobile Game Ad Networks (Unity, AppLovin, ironSource)
Intelligence Challenge
No public ad libraries: Unlike Meta/Google/TikTok, these networks have NO public ad libraries. All intelligence requires third-party tools or reverse engineering.
Tools
| Platform | Price | Coverage | Key Feature |
|---|---|---|---|
| AppMagic | $380/mo+ | 150M+ video creatives | Top creative charts by segment, CPI/CTR benchmarks, playable logic analysis |
| SensorTower | $25-40K/yr | All major networks | Creative Gallery, spend estimates, SDK detection |
| BigSpy | Free tier | 71 countries, 10 networks incl. Unity | 1B+ ads, filter by format/network |
| data.ai | Enterprise | Global | Cross-network spend and creative intelligence |
| Apptica | Varies | SDK-level detection | Competitor mediation stacks, ad network distribution |
| GameAnalytics | Free | Genre benchmarks | Retention, session, monetization benchmarks by genre |
| Liftoff | Free reports | UA benchmarks | CPI, CPA, ROAS benchmarks by genre |
| GameRefinery | Free trial | Creative database | YouTube ad creative database for games |
Research Workflow
- Competitor SDK detection: Which ad networks and mediation platforms are competitors using? (Tools: Apptica, SensorTower SDK Intelligence)
- Mediation stack analysis: Are they on AppLovin MAX or Unity LevelPlay?
- Creative format distribution: % playable vs. video vs. static per competitor
- Top creatives by genre: Filter AppMagic/BigSpy to your game sub-genre
- Creative refresh rate: How often do top advertisers rotate?
- Geo strategy: Which countries/tiers are competitors targeting most aggressively?
- Spend-per-creative ratio: If tools show estimated spend + creative count, calculate efficiency. Low spend-per-creative = testing phase. High = scaling winners.
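A sketch of the spend-per-creative calculation from the last step (the $5,000-per-creative threshold separating testing from scaling is an assumption; calibrate it per genre):

```python
def spend_phase(est_monthly_spend: float, new_creatives: int,
                scale_threshold: float = 5_000.0) -> str:
    """Classify a competitor as testing vs. scaling from tool estimates."""
    per_creative = est_monthly_spend / max(new_creatives, 1)
    phase = "scaling winners" if per_creative >= scale_threshold else "testing phase"
    return f"${per_creative:,.0f}/creative: {phase}"

print(spend_phase(400_000, 120))  # $3,333/creative: testing phase
```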
Genre-Specific Creative Cadences
| Genre | Creatives/Month | Refresh Cycle | Dominant Format | Notes |
|---|---|---|---|---|
| Hyper-casual | 200+ statics, 50+ videos | Weekly | Static + short video | Volume game, rapid burnout |
| Casual (puzzle, match) | 50-100 videos | Bi-weekly | Video (30s) | Emotional hooks, progression fantasy |
| Mid-core (RPG, strategy) | 30-50 videos + playables | 3-4 weeks | Playable + video | Gameplay depth showcase |
| Social casino | 20-40 videos | Monthly | Video (15-30s) | Win fantasy, jackpot moments |
The "Fake Ad" / Marketing Gameplay Question
Critical for F2P research: If the category norm is "marketing gameplay" (mini-games/puzzles that don't exist in the game) and you plan to run core gameplay ads, you're fighting an uphill CPI battle. Research what the category standard is before committing to a creative direction.
IP & Collaboration Detection
Check if a competitor's strong-performing ads coincide with a licensed IP event (anime collab, movie tie-in). This inflates their creative longevity and can lead to false conclusions about creative effectiveness. Always cross-reference ad timing with game update/event calendars.
Playable Ad Research (Advanced)
- App-ads.txt scraping: Check a competitor's app-ads.txt file to see authorized ad sellers → reveals where they buy inventory (see the sketch after this list)
- Playable logic flow: AppMagic shows the decision tree within playables — study win/fail states and CTA placement
- Format performance by genre: Playables outperform video in mid-core by ~20% CTR but underperform in casual by ~15%
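A minimal sketch of the app-ads.txt check from the first item above (the file lives on the developer's website domain, not the app store; the domain shown is hypothetical):

```python
import urllib.request

def fetch_app_ads(domain: str) -> list[tuple[str, str, str]]:
    """Fetch and parse app-ads.txt: (ad_system, seller_id, relationship)."""
    url = f"https://{domain}/app-ads.txt"
    with urllib.request.urlopen(url, timeout=10) as resp:
        text = resp.read().decode("utf-8", errors="replace")
    entries = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()          # drop comments
        parts = [p.strip() for p in line.split(",")]
        if len(parts) >= 3:                           # DIRECT vs. RESELLER rows
            entries.append((parts[0], parts[1], parts[2]))
    return entries

# for system, seller, rel in fetch_app_ads("example-games.com"): print(system, rel)
```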
2026 Market Context:
- Unity is launching its AI-powered "Vector" platform to compete with AppLovin
- AppLovin MAX continues to dominate mediation market share
- AI-generated creative variants are becoming standard (500+ variants/month for top advertisers)
- In the post-IDFA environment, creative quality matters more than targeting precision
6. Cross-Network Synthesis (The Missing Link)
Why This Matters
Most playbooks stop at per-platform research. The highest-value intelligence comes from synthesizing across platforms.
Creative Migration Tracking
Set up a monthly review cadence:
| Week | Action | Output |
|---|---|---|
| Week 1 | Catalog new creatives on Meta (top 10 competitors) | Meta vault |
| Week 2 | Catalog new creatives on TikTok + YouTube | Cross-reference with Meta vault |
| Week 3 | Check Google Transparency Center + Apple Ads | Full cross-platform map |
| Week 4 | Synthesis: Which creatives migrated? Which are platform-exclusive? | Migration report |
The Adaptation Matrix
When a winning creative migrates, it adapts. Track these transformations:
| From → To | Typical Adaptation | Your Action |
|---|---|---|
| Meta → TikTok | Remove polish, add native feel, use trending sound | Start with TikTok-native version of Meta winner |
| TikTok → Meta | Add text overlays, extend to 30s, add end card | Scale proven TikTok hook with Meta optimization |
| Meta → YouTube | Extend to 15-30s, add voiceover, deeper story arc | Test YouTube pre-roll version of social winner |
| Social → App Store | Distill into screenshots + short preview video | Align CPP with proven social messaging |
Message Match Audit
Intelligence isn't just the ad — it's the full funnel. For each competitor winner:
- Capture the ad (hook, messaging, CTA)
- Follow the click: Where does it land? (App Store default? Custom Product Page? Website?)
- Score message match: Does the landing page reinforce the ad's promise?
- Note disconnects: Poor message match = opportunity for you to do it better
Attribution & Privacy Intelligence (2026 Reality)
| Signal | Where to Find It | What It Tells You |
|---|---|---|
| In-game surveys | App reviews, Reddit, competitor app screenshots | They're using self-reported attribution |
| RevenueCat / AppsFlyer SDK detected | Apptica | Their measurement stack |
| Conversion value schema | SKAN documentation patterns | How they optimize post-install events |
| Privacy Manifest compliance | iOS App Privacy details | How they handle data collection |
Best practice: Cross-platform adaptation beats platform-native creation. Validate on one platform, adapt to others. This reduces production cost and increases confidence in every creative you launch.
7. F2P Mobile Gaming — Deep Dive
Pre-Spend CPI Benchmarks (2026 Estimates)
| Genre | iOS CPI (US) | Android CPI (US) | iOS CPI (Global) | Source |
|---|---|---|---|---|
| Hyper-casual | $0.50-$1.50 | $0.20-$0.80 | $0.30-$1.00 | Liftoff, AppMagic |
| Casual (puzzle, match) | $2.00-$5.00 | $1.00-$3.00 | $1.50-$4.00 | SensorTower |
| Mid-core (RPG, strategy) | $5.00-$15.00 | $2.00-$8.00 | $3.00-$10.00 | Liftoff |
| Social casino | $15.00-$40.00 | $5.00-$15.00 | $8.00-$25.00 | Industry reports |
Key: Use these to sanity-check whether your creative strategy can produce viable unit economics before spending.
Pre-Spend LTV Estimation
Before spending, estimate whether the category math works:
- Genre retention benchmarks: GameAnalytics publishes D1/D7/D30 by genre (free)
- Monetization benchmarks: ARPDAU by genre from industry reports
- Back into a CPI target: If genre D30 retention = 8% and ARPDAU = $0.15, you can model LTV and set a max CPI (Section 10 walks through the math)
- Compare to CPI benchmarks above: If your max CPI < category CPI, you need either better creatives or better product before spending
Creative Quality Indicators (Before You Produce)
Research what predicts performance in your genre:
| Element | Hyper-Casual | Casual | Mid-Core |
|---|---|---|---|
| Primary hook | Satisfying mechanic | Emotional progression | Power fantasy |
| Video length | 10-15s | 15-30s | 20-45s |
| Dominant format | Short video + static | Video + carousel | Video + playable |
| Key visual | Oddly satisfying loop | Before/after transformation | Character/gear showcase |
| Audio | SFX only or trending | Music + light narration | Epic/dramatic score |
| CTA timing | Immediate (3-5s) | Mid-roll + end | End card after demo |
AI-Powered Creative Research (2026 Workflow)
Use AI models to accelerate analysis of competitor creatives. The workflow:
- Collect top 50 competitor video ads from your genre
- AI Feature Extraction: Use vision models (GPT-4o, Gemini) to generate structured analysis covering voiceover, hook type, color palette, text overlay, gameplay shown, and music genre
- Build a Feature Matrix: Spreadsheet of all 50 ads with structured attributes
- Identify clusters: Which combinations appear most frequently in winners?
- Find gaps: Which combinations are underrepresented? = testing opportunities
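A sketch of steps 4-5, assuming the feature matrix was exported to CSV with `hook` and `format` columns (the column names are assumptions based on the extraction prompt):

```python
import csv
from collections import Counter

with open("feature_matrix.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Step 4: which hook x format combinations dominate the winners?
combos = Counter((r["hook"], r["format"]) for r in rows)
print("Category norms:", combos.most_common(3))

# Step 5: combinations appearing in 0-2 ads = white space testing opportunities.
hooks = {r["hook"] for r in rows}
formats = {r["format"] for r in rows}
gaps = [(h, fmt) for h in hooks for fmt in formats if combos[(h, fmt)] <= 2]
print("White space:", gaps[:5])
```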
Rapid Prototyping ($0 Creative Testing)
Before expensive production, validate concepts:
| Method | Cost | Speed | Fidelity |
|---|---|---|---|
| AI image generation (Midjourney, DALL-E) | Low | Hours | Medium - static concepts |
| AI video (Runway, HeyGen) | Low | Hours | Medium - motion concepts |
| Screen recording + text overlays (CapCut) | Free | Hours | High - gameplay capture |
| Competitor creative remix | Free | 1-2 days | High |
The goal: Test 5-10 creative concepts as lo-fi prototypes before committing to full production. If the hook works in a rough version, invest in polished production.
8. AI Prompt Library (Ready-to-Use)
How to Use These Prompts
Copy these prompts directly into GPT-4o, Gemini, or Claude. Replace bracketed placeholders with your specific data. Each prompt is designed to produce structured output you can paste into your research spreadsheet.
Prompt 1: Competitor Video Feature Extraction
Analyze this competitor ad video screenshot/frame sequence. For each ad, extract:
1. Hook type (first 3 seconds): [question / stat / shock / POV / transformation / loss-aversion / curiosity gap]
2. Voiceover: [yes/no], Gender: [male/female/none], Tone: [aggressive/calm/excited/urgent]
3. Visual style: [UGC / polished / gameplay / lifestyle / hybrid]
4. Gameplay shown: [core / marketing (minigame) / hybrid / none]
5. Text overlay density: [none / minimal (1-2 lines) / heavy (3+ lines)]
6. Color palette: [dominant 3 colors]
7. Music genre: [epic / electronic / pop / ambient / trending sound / SFX only / none]
8. CTA: [exact text and placement timing]
9. Duration: [seconds]
10. Estimated confidence level: [validated winner / active winner / active test / zombie / unknown]
Output as a structured row I can paste into a spreadsheet.
Prompt 2: Hook Pattern Clustering
I have a spreadsheet of [N] competitor ads with these columns: [paste column headers].
Analyze the data and identify:
1. The top 3 most common hook patterns (with frequency count)
2. The top 3 hook patterns correlated with "Validated Winner" status
3. Any hook patterns that appear in 0-2 ads (= white space opportunities)
4. Recommended Day 1 test plan: 3 concepts using proven hooks + 2 concepts using white space hooks
Format as a creative brief with specific hook scripts I can produce.
Prompt 3: Audience Pain Point Extraction from Reviews
Analyze these [N] app reviews from [competitor name]. Extract:
1. Top 5 pain points users mention (with frequency)
2. Top 5 features users praise (with frequency)
3. Exact phrases/language users use to describe their experience (verbatim quotes)
4. Emotional triggers: What makes users excited? Frustrated? Surprised?
5. Unmet needs: What do users wish the app did that it doesn't?
Output a "Voice of Customer" document I can use to write ad copy in the audience's own words.
Prompt 4: Cross-Platform Creative Brief Generator
Based on this competitive research summary:
- Top Meta hooks: [list]
- Top TikTok hooks: [list]
- Top Google ad copy themes: [list]
- Genre CPI benchmarks: [numbers]
- White space opportunities: [list]
- Target audience: [description]
Generate a cross-platform creative brief that includes:
1. 3 "Proven Winner" concepts (adapted from competitor successes, differentiated with our product)
2. 2 "White Space" concepts (angles no competitor is using)
For each concept, provide:
- Meta version (15-30s video script)
- TikTok version (native-feel adaptation)
- YouTube version (15s pre-roll)
- Google Search ad copy (3 headlines + 2 descriptions)
- Apple Ads screenshot concept
Prompt 5: Sentiment Mining (Reddit/Discord/Communities)
Analyze these [N] Reddit posts/Discord messages from [community name] about [game category/product type]. Extract:
1. Why do players start playing games like this? (Acquisition motivation)
2. Why do players keep playing? (Retention drivers)
3. Why do players quit? (Churn triggers)
4. What language do they use to recommend games to friends? (Organic referral hooks)
5. What competitors do they mention positively? Negatively? Why?
Output as an "Emotional Hook Map" I can use to write ads that speak to real motivations.
Pro tip: Run these prompts on multiple AI models (GPT-4o, Gemini, Claude) and compare outputs. Different models catch different patterns. Merge the best insights from each.
9. Creative Fatigue Forecasting
Budget-to-Asset Calculator
Before you spend, estimate how many creative assets you need:
| Launch Budget | Concepts Needed | Variants per Concept | Total Assets | Reasoning |
|---|---|---|---|---|
| $5,000 | 2-3 | 2-3 each | 6-9 | Minimal viable test |
| $10,000 | 3-5 | 3-4 each | 12-20 | Enough to identify 1-2 winners |
| $25,000 | 5-8 | 4-5 each | 25-40 | Robust test across hooks + formats |
| $50,000 | 8-12 | 5-8 each | 50-80 | Full matrix test (hook x format x CTA) |
| $100,000+ | 12-20 | 8-10 each | 100-200 | Scale-ready with planned refresh pipeline |
Rule of thumb: 1 new concept per $5,000-$10,000 of testing spend. Below that, you don't have enough signal to evaluate.
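A sketch encoding the table as a lookup, taking the low end of each range:

```python
def asset_plan(budget: float) -> tuple[int, int]:
    """Return (concepts, total assets) for a launch budget, per the table."""
    tiers = [            # (minimum budget, concepts, total assets)
        (100_000, 12, 100),
        (50_000, 8, 50),
        (25_000, 5, 25),
        (10_000, 3, 12),
        (5_000, 2, 6),
    ]
    for floor, concepts, assets in tiers:
        if budget >= floor:
            return concepts, assets
    raise ValueError("Below $5k there isn't enough signal to run a viable test")

print(asset_plan(30_000))  # (5, 25): robust test across hooks and formats
```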
Fatigue Timeline by Platform
| Platform | Creative Lifespan | Signal of Fatigue | Action |
|---|---|---|---|
| Meta | 2-4 wks (hyper-casual), 4-8 wks (mid-core) | CTR drops 20%+ from peak | Rotate variant or kill concept |
| TikTok | 1-2 weeks (trend-dependent) | Completion rate drops below 50% of peak | Refresh with new trending sound/hook |
| Google (Search) | 4-8 weeks | Quality Score drops, CPC rises 15%+ | Refresh ad copy, test new extensions |
| Google (PMax) | 3-6 weeks per asset | Asset rating drops from "Best" to "Good"/"Low" | Replace low-rated assets |
| Apple Ads | 6-12 weeks | TTR drops 15%+ from peak | Refresh CPP screenshots/preview video |
| Unity/AppLovin | 1-3 wks (hyper-casual), 3-6 wks (mid-core) | eCPM drops 20%+ | Rotate entire creative |
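A sketch of the fatigue check, using Meta's 20% drop-from-peak CTR threshold from the table (pass a different `drop` for other platforms and metrics):

```python
def is_fatigued(daily_ctr: list[float], drop: float = 0.20) -> bool:
    """True when the latest CTR sits `drop` or more below the creative's peak."""
    if len(daily_ctr) < 2:
        return False                      # not enough history to judge
    peak, current = max(daily_ctr), daily_ctr[-1]
    return current <= peak * (1 - drop)

print(is_fatigued([1.2, 1.5, 1.4, 1.1]))  # True: 1.1 is >20% below the 1.5 peak
```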
Pre-Spend Production Planning
Week -3 to -2: Produce
Produce launch assets based on research findings
Week -1: QA
Quality assurance, format adaptation across platforms
Week 1-2: Launch
Launch, collect data, identify early winners
Week 2-3: Refresh
Begin producing first refresh batch based on early signals
Week 3-4: Optimize
Rotate fatigued creatives, scale winners, test new concepts
The pipeline never stops: Plan for continuous creative production from Day 1. Pre-spend research tells you what to produce first, but you need a refresh pipeline planned before launch.
10. Back-of-Envelope LTV Calculator
Pre-Spend Unit Economics Check
Before spending a dollar, verify the math works for your category using genre-average benchmarks.
Step 1: Gather Genre Benchmarks (Free Sources)
| Metric | Source | Where to Find |
|---|---|---|
| D1/D7/D30 Retention | GameAnalytics | gameanalytics.com/benchmarks |
| ARPDAU | Industry reports | Liftoff, data.ai annual reports |
| Payer conversion rate | Genre averages | Sensor Tower free reports |
| Average CPI | Liftoff benchmarks | liftoff.io/resources |
Step 2: Calculate Rough LTV
LTV (D30) = ARPDAU x Sum(Daily Retention D1 through D30)
Example (Casual Puzzle):
- ARPDAU: $0.12
- D1: 35%, D7: 15%, D30: 8%
- Sum of daily retention (approximated): ~4.5 user-days in the first 30 days
- LTV(D30) = $0.12 x 4.5 = $0.54
LTV (D365 projection) = LTV(D30) x Multiplier
- Casual games: 1.8-2.5x multiplier
- Mid-core: 2.5-4.0x multiplier
- Social casino: 3.0-5.0x multiplier
Projected LTV(D365) = $0.54 x 2.1 = $1.13
Step 3: Set CPI Ceiling
Max CPI = LTV(D365) x Net Revenue Share / Target ROAS
Example:
- LTV(D365): $1.13 (gross)
- Net revenue share after the 30% store fee: 70% → breakeven CPI = $1.13 x 0.70 = $0.79
- Target ROAS: 120% (20% margin above breakeven)
- Max CPI with margin: $0.79 / 1.20 = $0.66
Step 4: Compare to Market CPI
Viable
If Max CPI ($0.66) ≥ the low end of the category CPI range ($0.50-$1.50 for hyper-casual): Viable, provided strong creatives can land you near the bottom of the range
Not Viable
If Max CPI falls below the category CPI range: Product economics don't support paid UA at current monetization. Fix product before spending.
Decision Gate
If the math doesn't work on paper with genre-average benchmarks, spending money won't fix it. Either improve monetization, improve retention, or find a cheaper acquisition channel.
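A sketch of Steps 2-4 as one function, using the worked example's inputs (`user_days_d30` is the approximated sum of daily retention from Step 2):

```python
def max_cpi(arpdau: float, user_days_d30: float, multiplier: float,
            store_fee: float = 0.30, target_roas: float = 1.20) -> float:
    """Back into a CPI ceiling from genre benchmarks (Steps 2-4 above)."""
    ltv_d30 = arpdau * user_days_d30        # Step 2: rough D30 LTV
    ltv_d365 = ltv_d30 * multiplier         # genre multiplier, e.g. 2.1 for casual
    breakeven = ltv_d365 * (1 - store_fee)  # developer's share after the store fee
    return breakeven / target_roas          # Step 3: leave the target margin

print(round(max_cpi(0.12, 4.5, 2.1), 2))  # 0.66, matching the worked example
```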
11. Minimum Viable Intelligence Stacks
$0/month (Solo Founder / Pre-Revenue)
| Tool | What You Get | Time Investment |
|---|---|---|
| Meta Ad Library | Competitor creative catalog | 2-3 hrs/week |
| Google Transparency Center | Search/Display/YouTube competitor ads | 1-2 hrs/week |
| TikTok Creative Center | Top ads + trending sounds/keywords | 1-2 hrs/week |
| Google Keyword Planner | Volume + CPC estimates | 1 hr setup |
| Google Trends | Category demand validation | 30 min setup |
| App Store manual search | ASO keyword discovery + competitor CPPs | 1-2 hrs/week |
| GameAnalytics (free) | Genre retention/monetization benchmarks | 1 hr setup |
| Liftoff reports (free) | CPI/ROAS genre benchmarks | 30 min read |
| CapCut (free) | Lo-fi creative prototyping | As needed |
| Spreadsheet | Manual creative vault + feature matrix | Ongoing |
Total time: ~8-12 hrs/week during the research phase (3 weeks), then 3-4 hrs/week ongoing. Coverage: ~70% of what you need. Main gaps: no permanent ad vault (competitors delete winners) and no mobile network creative intelligence.
~$450/month (Growth-Stage / Serious Pre-Launch)
Everything above, plus:
| Tool | Cost | What It Adds |
|---|---|---|
| Foreplay | $49/mo | Permanent ad vault across Meta/TikTok/Google — solves the deletion problem |
| AppMagic | $380/mo | Mobile game creative intelligence, 150M+ ads, playable analysis, CPI benchmarks |
| VidIQ Pro | $10/mo | YouTube tag intelligence + competitor analytics |
Coverage: ~90% of what you need. Main gap: no Apple Ads keyword depth (add AppTweak ~$100/mo if ASA is a priority).
$2,000+/month (Scaling Team)
Everything above, plus SEMrush ($150/mo), SplitMetrics (varies), and consider SensorTower ($25-40K/yr) for enterprise-grade cross-network intelligence with SDK detection.
12. Organic-to-Paid Signals
The Missing Intelligence Layer
Many 2026 winners start as organic content that gets amplified with paid spend. Organic-first creatives often outperform research-derived creatives because they've already been validated by real audience behavior.
| Signal | Where to Find It | What It Means |
|---|---|---|
| Organic TikTok with 500K+ views | TikTok search by category | Proven concept → Spark Ad candidate |
| Reddit post with 1K+ upvotes about a game | r/gaming, r/AndroidGaming, genre subs | Real audience language + validated interest |
| YouTube gameplay video with high like ratio | VidIQ/TubeBuddy | Community-validated content worth amplifying |
| App Store review mentioning specific feature | Manual review mining | Feature that resonates → ad hook candidate |
| Discord/community buzz about game mechanic | Manual community monitoring | Word-of-mouth driver → ad angle |
Organic-First Creative Workflow
Monitor
Monitor organic content in your category (Reddit, TikTok organic, YouTube, Discord)
Identify
Identify high-engagement organic content about competitors or your category
Extract
Extract the hook/angle that drove engagement
Adapt
Adapt into paid creative format (Spark Ad, inspired-by creative, UGC brief)
Test
Test alongside your research-driven concepts
13. Operational Safety & Execution Readiness
Policy & Compliance Filter
Competitive research often surfaces "winners" that violate platform policies. Before producing any concept inspired by competitors, run this check:
| Risk | What to Check | Consequence of Violation |
|---|---|---|
| Misleading gameplay | Does the ad show mechanics not in the actual game? | Meta/Google/TikTok reject or ban; Apple rejects app update |
| Copyright audio | Is the trending sound licensed for commercial use? | Ad rejected, potential DMCA takedown |
| Restricted claims | Health, financial, or gambling claims without disclaimers? | Account suspension |
| Competitor trademark | Using competitor brand names in ad copy or keywords? | Trademark complaint, ad disapproval |
| UGC rights | Do you have rights to use creator content as Spark Ads? | Legal liability |
| Privacy compliance | Does creative collect/imply data collection? (Privacy Manifest for iOS) | App Store rejection |
Rule: If a competitor's "winning" ad is clearly policy-violating (e.g., fake gameplay that doesn't exist in-game), note the hook/emotion it uses but produce a compliant version. The hook works; the policy violation is unnecessary.
48-Hour Calibration Protocol
After launch, immediately compare predictions to reality to sharpen your research methodology:
| Metric | Pre-Spend Prediction | Actual (48hr) | Delta | Action |
|---|---|---|---|---|
| CPI | $ _____ | $ _____ | ___% | If >30% over: pause, revisit creative |
| CTR | ___% | ___% | ___% | If <50% of prediction: hook failed |
| CVR (click→install) | ___% | ___% | ___% | If low: message match or store page issue |
| Top creative concept | [name] | [name] | Match? | If different: research methodology needs calibration |
| Top platform | [name] | [name] | Match? | If different: audience assumption was wrong |
Purpose: This isn't just campaign optimization — it's research methodology validation. If your predictions consistently miss by >30%, your intelligence collection or synthesis process has a blind spot.
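A sketch of the 48-hour check as code (the metric keys and the shape of the prediction/actual records are assumptions about your tracking sheet):

```python
def calibration_flags(pred: dict, actual: dict) -> list[str]:
    """Apply the table's thresholds to flag where predictions missed."""
    flags = []
    if actual["cpi"] > pred["cpi"] * 1.30:
        flags.append("CPI >30% over prediction: pause, revisit creative")
    if actual["ctr"] < pred["ctr"] * 0.50:
        flags.append("CTR <50% of prediction: hook failed")
    if actual["top_concept"] != pred["top_concept"]:
        flags.append("Different top concept: recalibrate research methodology")
    return flags

print(calibration_flags(
    {"cpi": 1.00, "ctr": 1.5, "top_concept": "loss_aversion"},
    {"cpi": 1.40, "ctr": 0.6, "top_concept": "curiosity"},
))  # all three flags fire in this illustrative case
```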
Role-Based Ownership (RACI)
| Deliverable | Responsible | Accountable | Consulted | Informed |
|---|---|---|---|---|
| Keyword map + CPC benchmarks | Media Buyer | Head of Growth | Product Manager | — |
| Creative vault + feature matrix | Creative Lead | Head of Growth | Media Buyer | — |
| Hook library + messaging matrix | Creative Lead | Creative Lead | Product Manager | Media Buyer |
| LTV calculator + CPI ceiling | Media Buyer / Analyst | Head of Growth | Finance | Product Manager |
| Cross-platform migration map | Media Buyer | Head of Growth | Creative Lead | — |
| Creative brief + storyboards | Creative Lead | Head of Growth | Product Manager | Media Buyer |
| AI feature extraction | Analyst / Creative Lead | Creative Lead | — | Head of Growth |
| Rapid prototypes | Creative Producer | Creative Lead | — | Media Buyer |
| Day 1 campaign structure | Media Buyer | Head of Growth | Creative Lead | Finance |
Solo founder? You're all roles. Use this table to ensure you don't skip steps.
Creative Asset Naming Convention
For the Feature Matrix, Fatigue Forecasting, and cross-platform tracking to work at scale, use a standardized naming system:
[Concept]_[Hook]_[Format]_[Platform]_[Version]_[Date]
Examples:
PuzzleRush_LossAversion_Video30s_Meta_v1_20260315
PuzzleRush_LossAversion_Video15s_TikTok_v1_20260315
PuzzleRush_LossAversion_Playable_Unity_v1_20260315
RoyalEscape_Curiosity_Static_Meta_v1_20260315
RoyalEscape_Curiosity_Carousel_Meta_v2_20260322
| Field | Values | Purpose |
|---|---|---|
| Concept | Short name for creative concept | Groups variants of same idea |
| Hook | LossAversion, Curiosity, Social, Shock, POV, Transform | Links to Feature Matrix hook taxonomy |
| Format | Video15s, Video30s, Static, Carousel, Playable | Fatigue tracking by format |
| Platform | Meta, TikTok, YouTube, Google, Apple, Unity | Cross-platform migration tracking |
| Version | v1, v2, v3... | Variant tracking within concept |
| Date | YYYYMMDD | Production timeline + fatigue calculation |
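A sketch of a builder and parser for the convention, so vault rows can be generated consistently and split back into Feature Matrix fields:

```python
from datetime import date

FIELDS = ("concept", "hook", "fmt", "platform", "version", "date")

def asset_name(concept, hook, fmt, platform, version, on=None):
    """Build a vault-ready asset name per the convention above."""
    stamp = (on or date.today()).strftime("%Y%m%d")
    return f"{concept}_{hook}_{fmt}_{platform}_{version}_{stamp}"

def parse_asset_name(name: str) -> dict:
    """Split a name back into fields for fatigue math and migration tracking."""
    return dict(zip(FIELDS, name.split("_")))

n = asset_name("PuzzleRush", "LossAversion", "Video30s", "Meta", "v1",
               on=date(2026, 3, 15))
print(n)                                 # PuzzleRush_LossAversion_Video30s_Meta_v1_20260315
print(parse_asset_name(n)["platform"])   # Meta
```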
14. The Universal Pre-Spend Framework
Phase 1: Market Hypothesis (Days 1-3)
Goal: Form a testable hypothesis about what will work for your product + audience.
| Step | Action | Output | Success Criteria |
|---|---|---|---|
| 1 | Define product category keywords (15-25 terms) | Keyword map | Covers all intent types |
| 2 | Map competitor landscape (5-10 direct, 2-3 aspirational) | Competitor matrix | Includes market leaders + rising challengers |
| 3 | Define audience personas (2-3 segments) | Persona cards | Each has distinct pain points + channels |
| 4 | Benchmark category CPC/CPI | Cost matrix by platform | Know your unit economics ceiling |
| 5 | Pre-check LTV viability: Can category math support paid UA? | Go/No-Go decision | CPI target > category average = proceed |
Phase 2: Intelligence Collection (Days 3-10)
Goal: Build a comprehensive evidence base across all platforms.
| Step | Action | Tools | Volume Target |
|---|---|---|---|
| 6 | Pull all competitor ads from Meta Ad Library | Meta Ad Library → Foreplay/vault | 50+ ads saved per competitor |
| 7 | Pull competitor ads from Google Transparency Center | Transparency Center | All Search + Display + YouTube |
| 8 | Analyze TikTok top ads for your vertical | TikTok Creative Center | Top 20 ads, keyword list, 5 trending sounds |
| 9 | Research Apple Ads keywords + CPPs | AppTweak + App Store | 100+ keywords scored, 10 CPPs analyzed |
| 10 | Mobile network creative research | AppMagic, BigSpy | Genre top 50 creatives |
| 11 | YouTube comment mining | VidIQ + manual | 50+ comments = real audience language |
| 12 | App review mining (competitors) | App Store + Google Play | 100+ reviews for pain points and language |
Phase 3: Pattern Analysis & Synthesis (Days 10-14)
Goal: Convert raw intelligence into actionable patterns.
| Step | Action | Output |
|---|---|---|
| 13 | Score all saved ads using Evidence Hierarchy (Section 0) | Ranked creative database |
| 14 | Identify top 5 hooks across all platforms | Hook library with confidence scores |
| 15 | Map creative formats by platform + performance proxy | Format priority matrix |
| 16 | Extract messaging angles (feature/emotion/social proof/FOMO) | Messaging matrix |
| 17 | Identify cross-platform migrations | Migration map (highest confidence signals) |
| 18 | Identify white space (angles/formats no one uses) | Differentiation opportunities |
| 19 | Determine: Core gameplay or marketing gameplay? | Creative direction decision |
| 20 | AI Feature Extraction on top 50 competitor videos | Structured feature matrix |
| 21 | Message match audit: ad → landing page alignment | Funnel gap analysis |
| 22 | Benchmark CTAs, offers, and copy patterns | CTA + copy testing plan |
Phase 4: Creative Strategy & Rapid Prototyping (Days 14-21)
Goal: Build and validate creative hypotheses before committing budget.
| Step | Action | Output |
|---|---|---|
| 23 | Write creative brief from research (not assumptions) | Brief anchored in evidence |
| 24 | Design 5-8 concepts: 3 proven angles + 2-3 white space | Concept storyboards |
| 25 | Prioritize by: category norm weight (60%) + white space bet (40%) | Test priority matrix |
| 26 | Rapid prototype top 5 concepts (AI/lo-fi) | Lo-fi creative assets |
| 27 | Plan testing matrix: format x hook x CTA x platform | A/B test plan |
| 28 | Set KPI benchmarks from competitive research | CPI, CTR, CVR targets by platform |
| 29 | Define kill criteria: What results = kill the concept? | Decision framework |
| 30 | Build Day 1 campaign structure | Ready to launch |
Phase 5: Cross-Platform Alignment (Ongoing)
Goal: Ensure research translates into unified multi-platform execution.
| Step | Action | Frequency |
|---|---|---|
| 31 | Refresh competitor vault across all platforms | Weekly |
| 32 | Track creative migration signals | Monthly |
| 33 | Update feature matrix with new winners | Bi-weekly |
| 34 | Recalculate white space as competitors fill gaps | Monthly |
| 35 | Adapt winning creatives across platforms using Adaptation Matrix | Per creative winner |
Key Principles (Updated for 2026)
8 principles for pre-spend intelligence:
- Duration + variant velocity + cross-platform migration = true winner signal (duration alone is unreliable)
- Volume of variants = active testing — but normalize by budget tier (small teams can't sustain 500/month)
- Category norms are starting points, not ceilings — know them, then selectively challenge them
- White space is your Day 1 advantage — if no competitor uses an angle, you get first-mover CPI advantage
- Refresh rate by genre — hyper-casual weekly, mid-core monthly (don't apply one cadence to all)
- Cross-platform adaptation beats platform-native creation — validate on one platform, adapt to others
- AI accelerates research but doesn't replace judgment — use AI for extraction, humans for synthesis
- Message match across the funnel — ad → landing page → onboarding alignment wins
15. Tool Cost Summary
Free (Zero Cost)
| Tool | Network/Use | What You Get |
|---|---|---|
| Meta Ad Library | Meta | Full ad catalog, any advertiser |
| Google Ads Transparency Center | Google | Search, Display, YouTube ads from any advertiser |
| TikTok Creative Center | TikTok | Top ads, keyword insights, emerging trends |
| Google Keyword Planner | Search | Volume ranges, CPC estimates, suggestions |
| Google Trends | Cross-platform | Relative interest, geo, seasonal trends |
| Google Ads Reach Planner | YouTube + Display | Pre-spend reach forecasting |
| BigSpy (free tier) | Multi-network | Limited daily searches across 10 networks |
| AppTweak (free starter) | App stores | Basic keyword + download data |
| GameAnalytics | Mobile games | Genre retention/monetization benchmarks |
| Liftoff (free reports) | Mobile UA | CPI, ROAS benchmarks by genre |
| VidIQ (free tier) | YouTube | Tag intelligence, basic analytics |
| CapCut | TikTok/social | Free video editing for rapid prototyping |
Paid — By Budget Tier
Bootstrapped ($0-500/mo on tools)
| Tool | Price | ROI Justification |
|---|---|---|
| Foreplay | $49/mo | Permanent ad vault — critical for tracking winners that get deleted |
| VidIQ Pro | $10/mo | YouTube tag + competitor intelligence |
Growth Stage ($500-2,000/mo on tools)
| Tool | Price | ROI Justification |
|---|---|---|
| AppMagic | $380/mo | Mobile game creative intelligence, 150M+ ads |
| SEMrush | $150/mo | Full PPC competitive suite |
| SpyFu | $39/mo | Google Ads keyword history |
| MagicBrief | $99/mo | Cross-platform creative analysis |
| SplitMetrics | Varies | ASA creative A/B testing |
Scale ($2,000+/mo on tools)
| Tool | Price | ROI Justification |
|---|---|---|
| SensorTower | $25-40K/yr | Enterprise mobile intelligence (all networks, creative gallery, SDK detection) |
| data.ai | Enterprise | Global cross-network spend + creative intelligence |
| Ahrefs | $99/mo+ | Deep keyword research + competitive SEO |
Pre-Spend Essentials
- Define hypothesis before researching
- Use the Evidence Hierarchy to rank signals
- Track cross-platform creative migrations
- Verify unit economics before spending
- Plan creative refresh pipeline from Day 1
- Use AI for extraction, humans for synthesis
Common Mistakes
- Relying on ad duration alone as a signal
- Spending without checking LTV viability
- Skipping the compliance filter
- Not saving ads to a permanent vault
- Confusing organic virality with paid performance
- Ignoring the 48-hour calibration protocol
Getting Started
- Set up your free tool stack — Meta Ad Library, Google Transparency Center, TikTok Creative Center, GameAnalytics
- Build your competitor watch list — 5-10 direct competitors + 2-3 aspirational brands
- Run the LTV calculator — Verify the math works for your category before committing budget
- Follow the 35-step framework — Phases 1-4 over 21 days, Phase 5 ongoing
- Set up your creative vault — Even a spreadsheet works; the key is permanent storage of competitor winners
- Plan your first 48-hour calibration — Compare predictions to reality and refine your methodology
Pre-Spend Ad Intelligence Playbook v3.1 — March 2026. Cross-referenced research from Meta, Google, TikTok, Apple, Unity/AppLovin ad ecosystems.
