
Private & Confidential — Transcend

Portfolio Company Product-Marketing Analysis Playbook

Start with the product state machine, not the telemetry. A systematic methodology for diagnosing product-marketing performance across F2P mobile game portfolios.

Analytics · F2P · Diagnostics | Version 2.0 | Last Updated: January 2026

Critical Data Caveats (Read Before Any Analysis)

Before proceeding to any phase, internalize these constraints. They apply to every analysis in this playbook and are surfaced here — not buried in later sections — because ignoring them produces confidently wrong conclusions from the start.

iOS Attribution Is Fundamentally Broken Post-ATT

Apple's App Tracking Transparency framework means iOS creative-level attribution data is modeled, not observed. Never treat iOS creative-level data as ground truth. Use Android as the "clean lab" for creative performance analysis. For iOS, rely on Media Mix Modeling (MMM) or modeled attribution from your MMP — and always flag iOS-derived metrics as directional only.

Platform Fee Breakeven Determines Your Target

All ROAS targets are meaningless without knowing the company's fee tier. Check PROFILE.md for current revenue before interpreting any ROAS figure.

< $1M Annual Revenue

Platform Fee: 15% (Small Business Program)

Breakeven ROAS: 118%

> $1M Annual Revenue

Platform Fee: 30%

Breakeven ROAS: 143%
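The breakeven figures above follow directly from the fee math: a cohort breaks even when revenue net of the platform fee equals spend, so breakeven ROAS = 1 / (1 − fee). A minimal Python sketch (the function name and tier logic are illustrative, not from any portfolio tooling):

```python
def breakeven_roas(annual_revenue_usd: float) -> float:
    """Breakeven ROAS implied by the platform fee tier.

    Assumes the two-tier structure above: 15% under the Small Business
    Program (<$1M annual revenue), 30% otherwise.
    """
    fee = 0.15 if annual_revenue_usd < 1_000_000 else 0.30
    # Breakeven: spend = ROAS * spend * (1 - fee)  =>  ROAS = 1 / (1 - fee)
    return 1 / (1 - fee)

print(f"{breakeven_roas(800_000):.0%}")    # 118% (15% fee tier)
print(f"{breakeven_roas(5_000_000):.0%}")  # 143% (30% fee tier)
```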

Hybrid Monetization Changes the Math

Many portfolio companies use both IAP and ad monetization (IAA). If a company has hybrid revenue, ROAS calculations must include ad revenue (eCPM × impressions) alongside IAP. A game with 0.8x IAP ROAS but strong ad revenue may actually be profitable. Always check the revenue model before declaring a cohort unprofitable.
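The blended calculation can be sketched in a few lines. The 0.8x cohort and the eCPM/impression figures here are hypothetical, chosen only to show how ad revenue flips the verdict:

```python
def blended_roas(iap_revenue: float, ad_impressions: int,
                 ecpm: float, spend: float) -> float:
    """Blended ROAS for a hybrid-monetization cohort.

    Ad revenue is approximated as eCPM * impressions / 1000; all
    figures are cohort totals in the same currency.
    """
    ad_revenue = ecpm * ad_impressions / 1000
    return (iap_revenue + ad_revenue) / spend

# A cohort that looks unprofitable on IAP alone (0.8x) ...
iap_only = 8_000 / 10_000
# ... clears 1.0x once ad revenue is counted.
blended = blended_roas(8_000, 250_000, ecpm=12, spend=10_000)  # 1.1x
```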

Pre-Analysis Checklist (MANDATORY)

Complete every item before beginning Phase 1. If any item is RED, the analysis will produce misleading results — either pause until the condition clears or explicitly flag the limitation in all outputs.

| # | Check | How to Verify | GREEN | RED (Pause or Flag) |
|---|-------|---------------|-------|---------------------|
| 1 | Cohort Maturity | Compare oldest cohort install date to today | Oldest cohort ≥ D90 | All cohorts < D60 — label "IMMATURE" |
| 2 | Promotional Pricing | Check avg transaction price vs steady-state | Within 20% of steady-state | Promo active — flag all CVR/LTV |
| 3 | Version Stability | Check app version distribution | >80% on same major version | Major version change mid-cohort — segment |
| 4 | Creative Mix Stability | Check top creative share WoW | Variance < 10pp | New creative >25% share — segment pre/post |
| 5 | iOS Attribution Quality | Check MMP method for iOS cohorts | SKAN/MMM documented | Raw iOS creative data as observed — switch to Android |
| 6 | Revenue Model | Check PROFILE.md | All streams in ROAS | Missing ad revenue for hybrid — recalculate |
| 7 | Platform Fee Tier | Check PROFILE.md revenue | Breakeven ROAS known | Unknown tier — cannot set targets |
| 8 | Geo Mix Documentation | Check spend by Tier 1/2/3 | Geo breakdown available | No geo data — cannot normalize ARPU |

1. Analytical Philosophy

Your time is the most expensive resource in the room. We do not perform analysis for the sake of reporting; we perform it to identify the 4% of levers that will drive 50% of the enterprise value. Four principles govern every analysis.

1.1 Product-State-Machine-First

If you start with a BigQuery table, you are at the mercy of an engineer's naming conventions. If you start with the Product State Machine, you define the ground truth.

The Journey Graph: Map every screen from ad click to paywall, purchase, or churn.

Visual Context: Use Figma exports and screen recordings. If you have not seen the "Subscribe" button's placement, you do not know why CVR is 3%.

Telemetry Linkage: Every node must map to a specific telemetry event. Unmapped nodes are analysis-breaking blind spots.

Why This Matters

The state machine reveals gaps that raw metrics never will — an unmapped "survey skip" branch, a hidden A/B test variant, or a branching path that sends 40% of users down a dead-end flow. Without the journey graph, metrics are orphans: numbers without parents, impossible to interpret correctly.

1.2 Power Law Thinking

F2P gaming is a business of outliers. Averages are lies.

Segment Before Aggregating: 4% of users often drive 50% of total revenue. 10% of ad creatives often drive 90% of profitable spend.

The Whale Hunt: Identify "Power Law Segments" by geo, creative hook, and in-app behavior before looking at the population mean.

Creative Concentration: If one creative drives >30% of all conversions, the account is one fatigue cycle away from a ROAS collapse. That is a fragility risk, not a success story.

1.3 Causal Humility

This is our most important cognitive guardrail. Correlation between engagement and conversion is almost always selection bias, not causation.

The users who play 10 levels on Day 0 did not pay because they played 10 levels. They played 10 levels because they were high-intent users already predisposed to pay.

Every engagement-conversion correlation must be tested against the selection bias hypothesis before any product recommendation is made.

1.4 MECE Decomposition

Before claiming "the new onboarding is better," eliminate all confounds. Decompose every metric change by:

  1. Channel — Is the mix of Meta vs. Google different?
  2. Geo — Did spend shift toward Tier 3?
  3. Spend Level — Deeper into diminishing returns?
  4. App Version — Comparing v1.2 to v1.1?
  5. Creative — Did a misleading hook bring lower-intent users?

Only after all five dimensions are controlled can you attribute a change to the product.

Philosophy Summary

| Pillar | Core Principle | Diagnostic Question |
|--------|----------------|---------------------|
| Product-State-Machine-First | Map journey graph before reading metrics | "Can I draw the user's path from ad click to payment?" |
| Power Law Thinking | 4% drive 50% — find the 4% first | "Which segment is carrying the economics?" |
| Causal Humility | Correlation is selection bias until proven otherwise | "Would this change if we randomized assignment?" |
| MECE Decomposition | Slice by all confounds before product claims | "Have I controlled for channel, geo, spend, version, creative?" |

2. The Three Context Documents

Before any SQL is written or chart built, three documents must be generated. These are not optional pre-reads — they are the lens through which all data is interpreted.

Document A: Ad Creative Analysis

The ad IS the first product touchpoint. You cannot understand funnel conversion without knowing the "promise" that brought the user to the store.

How to Create: Pull creatives from Meta Ad Library → Pass through Gemini Vision API → Compile structured inventory.

| Field | Description | Example |
|-------|-------------|---------|
| Creative Name/ID | Internal tracking identifier | puzzle_rpg_boss_v3 |
| Type | Static, Video, UGC, Playable | Video |
| The Hook | First 3 seconds | "Loss aversion — player loses streak" |
| Headline | Primary text overlay | "Can You Beat Level 50?" |
| CTA | What the user is told to do | "Download Free" |
| Visual Description | Colors, characters, UI elements | Frustrated player montage, then satisfaction |
| Est. Impression Share | % of total budget | 35% |

If a "Low-Stress / Zen" ad creative leads to a 50% drop-off at a "High-Difficulty" tutorial, the problem is not the tutorial — it is the creative-to-product mismatch. Without Document A, you will misdiagnose every conversion problem that originates in the ad promise.

Document B: User Journey / Information Architecture (IA)

The "Ground Truth" of the product state — the complete mapping of every screen from install to monetization.

How to Create: Export screens from Figma → Gemini Vision analysis for headlines, CTAs, branching logic → Compile structured IA document.

| Field | Description | Example |
|-------|-------------|---------|
| Step # | Sequential order | 3 |
| Screen Name | Correlated to telemetry event | tutorial_hand |
| Headline/Body Copy | Exact text the user reads | "Master Bluffing Basics" |
| CTA Copy | Button text | "Practice Free" / "Skip to Pro" |
| Branching Logic | Conditions that route users | If survey = "Expert" → skip tutorial |
| A/B Variants Active | Live versions of this screen | Variant B: Gamified progress bar (20% traffic) |

Document C: Cohort Segmentation Schema

Performance varies dramatically by geography, language, and device. Without a segmentation schema, you will average away the signal.

Geo Tier Definitions

| Tier | Countries | CPI Range | ARPU Multiplier | Typical Use |
|------|-----------|-----------|-----------------|-------------|
| Tier 1 | US, UK, CA, AU, DE, FR, JP, KR | $3-15+ | 1.0x (baseline) | Primary revenue geos |
| Tier 2 | BR, MX, PL, TH, TW, IT, ES | $0.50-3 | 0.3-0.5x | Volume + moderate monetization |
| Tier 3 | IN, ID, PH, VN, EG, NG | $0.05-0.50 | 0.05-0.15x | Volume only; rarely profitable on IAP |

Language/Culture Effects

| Factor | Impact | Action |
|--------|--------|--------|
| Localized creatives | 20-40% CTR lift in non-English T1 (JP, KR, DE) | Segment creative performance by language |
| Cultural hooks | Humor, competition, social proof vary by region | "Challenge" hooks may underperform in JP |
| Localized ASO | 15-30% CVR lift for localized screenshots | Check before attributing low organic CVR to product |

Device-Tier Implications

| Device Tier | Typical Markets | Performance Impact |
|-------------|-----------------|--------------------|
| High-end | T1 dominant | Baseline; no degradation |
| Mid-range | T1/T2 mix | Loading +30-50%; D1 retention -2-5pp |
| Low-end | T2/T3 dominant | Crashes, ANRs; D1 retention -5-15pp |

A game with 35% D1 retention in the US and 18% in India does not have a "product problem in India." It has a device-tier problem, a CPI efficiency question, and possibly a localization gap. These three documents take 3-5 hours to build but save weeks of misdiagnosis.

3. The 7-Phase Analysis Protocol

Follow this sequence exactly. Do not skip to Phase 4 because you "have a hunch" about the creatives. Each phase builds on the previous one, and skipping phases guarantees misdiagnosis.

1

Phase 1: Product State Machine Mapping

Build the journey graph from install to monetization to retention loop.

2

Phase 2: Funnel Decomposition

Identify the biggest "leak" by absolute user loss, not relative drop-off rate.

3

Phase 3: Causal Attribution (Intent vs. Content)

Determine whether the product causes conversion or merely detects pre-existing intent.

4

Phase 4: Creative Performance Analysis

Link the ad "promise" to the post-install "outcome."

5

Phase 5: Monetization System Analysis

Evaluate gating mechanics that convert free users to paying users.

6

Phase 6: LiveOps & Event De-Averaging

Separate steady-state performance from temporary event lifts.

7

Phase 7: Strategic Synthesis (STOP / DO / TEST)

Convert findings into a prioritized operational roadmap.

Phase 1: Product State Machine Mapping

  1. Map Every Node: Every screen is a node. Every click, swipe, or timer expiration is an edge. Include splash screens, permission prompts, loading states.
  2. Identify Decision Points: Auth method selection, survey responses, paywall dismiss vs. purchase, energy wall choices, daily return triggers.
  3. Telemetry Alignment: For every node, find the corresponding event in the data warehouse. Document exact event name and parameters.
  4. Flag Blind Spots: If there is a "Level Complete" screen but no level_complete event, mark it as a telemetry gap.
  5. Document Branching Logic: Every branch creates a separate cohort with different expected behavior.
EXAMPLE STATE MACHINE (Poker Education App)

    [Ad Click] --> [App Store] --> [Install] --> [Splash Screen]
                                                       |
                                               (event: app_open)
                                                       |
                                                [Auth Selection]
                                               /       |       \
                                         [Google]   [Apple]   [Guest]
                                               \       |       /
                                            (event: auth_complete)
                                                       |
                                                [Skill Survey]
                                                /             \
                                        [Beginner]        [Advanced]
                                             |                 |
                                        [Tutorial]    [Hand Calculator]
                                              \               /
                                          [First Gameplay Session]
                                                       |
                                               [Push Permission]
                                                /             \
                                         [Granted]         [Denied]
                                             |                 |
                                       [Paywall View]   [Continue Free]

Without the journey graph, you cannot distinguish between a product problem and an instrumentation gap. A "low D1 retention" number might mean users are churning — or it might mean the day_1_return event only fires after auth, and 30% of users are returning but playing as guests without triggering the event.

Phase 2: Funnel Decomposition

For each funnel step, calculate absolute CVR, step drop-off, platform split, version split, and geo tier split. Always identify the highest absolute drop-off point, not the highest relative rate.

| Checkpoint | Target | Red Flag | Typical Cause |
|------------|--------|----------|---------------|
| Auth Completion | >75% | <70% | OAuth failure, too many options |
| Push Permission (iOS) | 40-56% | <30% | Poor priming, bad timing |
| Checkout Started-to-Verified | >75% | <50% | Price shock, payment error |
| Paywall View-to-Purchase | 2-3% median | <1% | Wrong audience, bad copy |
| D1 Retention (F2P) | 26-28% | <20% | Broken first session, creative mismatch |

✓ The Biggest Leak Rule

  • Always identify highest absolute drop-off
  • 100K installs, 25K reach auth, 40% auth drop = 10K lost
  • Absolute numbers drive revenue impact

✗ Common Mistake

  • Optimizing downstream while ignoring upstream hemorrhaging
  • Obsessing over paywall copy while losing 35% at auth
  • Using relative % instead of absolute user counts
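The Biggest Leak Rule reduces to ranking step transitions by absolute user loss. A minimal sketch with a hypothetical funnel (event names and counts are illustrative): the paywall step has the worst relative drop-off, yet auth is the biggest leak in absolute users.

```python
funnel = [
    # (step name, users reaching the step) — illustrative numbers
    ("install",        100_000),
    ("auth_complete",   65_000),
    ("tutorial_done",   55_000),
    ("paywall_view",    30_000),
    ("purchase",           900),
]

def biggest_absolute_leak(funnel):
    """Return the step transition with the largest absolute user loss."""
    losses = [
        (prev[0] + " -> " + cur[0], prev[1] - cur[1])
        for prev, cur in zip(funnel, funnel[1:])
    ]
    return max(losses, key=lambda x: x[1])

# paywall -> purchase drops 97% relative, but only 29,100 users;
# install -> auth drops 35% relative, yet loses 35,000 users.
step, lost = biggest_absolute_leak(funnel)
```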

Phase 3: Causal Attribution (Intent vs. Content)

This is the most important phase — and the phase most frequently botched. Getting this wrong leads to the most expensive strategic errors in F2P.

The Two Causal Models

Model A: Content Causes Conversion

Prediction: Linear CVR rise with content depth

Implication: Gate paywall behind content milestones

Signal: Smooth, gradual CVR rise

Model B: Pre-existing Intent (Selection Bias)

Prediction: Step-function CVR at a threshold

Implication: Show paywall early to high-intent users

Signal: Sharp CVR cliff; zero-content converters exist

Four Diagnostic Tests

Run all four. If three or more point to Model B, treat the product as an intent-harvester, not an intent-creator.

Test 1: Zero-Content Converters

What % of paying users converted before completing the first core loop?

<2%: Content likely contributes (Model A). 2-5%: Ambiguous. >5%: Pre-existing intent dominates (Model B).

Minimum sample: 500 total payers.

Test 2: Push Permission as Intent Proxy

Push granters typically convert at 8-15%, deniers at 0.5-1.5%. A 10x+ ratio means push reveals intent, not creates it.

Minimum sample: 725 users per group.

Test 3: Content Depth vs. CVR Curve Shape

Plot CVR against content depth. If CVR jumps from 5% to 97% at lesson 6, that is a survivorship filter, not a "magic moment."

Statistical note: Fit linear vs. logistic step-function. If delta-AIC > 10, Model B is strongly supported.

Test 4: Dual-Mode Engagement

If the product has multiple modes (puzzles AND lessons), users engaging with both convert at 10-16% vs. 2-5% for single-mode. Multi-mode usage reveals high intent.

Minimum sample: 400 users per group. Use chi-squared test.
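Tests 2 and 4 both come down to a 2×2 chi-squared comparison of conversion rates. A stdlib-only sketch, using hypothetical cohorts at the playbook's minimum sample size (for df=1, the 95% critical value is 3.841):

```python
def chi2_2x2(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Pearson chi-squared statistic for a 2x2 conversion table."""
    table = [[conv_a, n_a - conv_a], [conv_b, n_b - conv_b]]
    total = n_a + n_b
    col = [conv_a + conv_b, total - conv_a - conv_b]
    row = [n_a, n_b]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

granted = (87, 725)   # 12% CVR among push granters (hypothetical)
denied  = (7, 725)    # ~1% CVR among deniers (hypothetical)
stat = chi2_2x2(*granted, *denied)
ratio = (87 / 725) / (7 / 725)
# Model B signal: significant gap (p < 0.05, df=1) AND a large ratio
model_b_signal = stat > 3.841 and ratio > 3
```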

Case Study: The Poker Education App

Users of both "Hand Calculator" and "Training Drills" converted at 16%, while single-feature users converted at 3%. The Model A interpretation was to force everyone into both features.

❌ Model A Mistake

Forcing casual users into drills did not make them pay — it made them uninstall.

✔ Model B (Correct) Strategy

Identify high-intent users via push permission + dual-mode behavior, then present the paywall earlier to that segment. Revenue per install increased 34%.

Phase 4: Creative Performance Analysis

iOS Attribution Warning

iOS creative-level attribution is modeled, not observed, post-ATT. All creative analysis in this phase MUST use Android data as primary. iOS is directional only.

| Analysis | Method | Signal |
|----------|--------|--------|
| Power Law Check | Rank creatives by conversion share | Top creative >30% = fragility risk |
| Type Segmentation | Static vs. Video vs. UGC CVR | Statics often 3.5x UGC for skill apps |
| Hook-to-CVR Mapping | Correlate hook type with post-install CVR | High CTR + Low CVR = clickbait mismatch |
| Creative-Product Coherence | Compare ad promise to first 60s of product | Mismatch = churn source, not product problem |
| Geo-Creative Interaction | Segment by Document C tiers | Cultural mismatch across geos |

The Coherence Matrix

|  | High Post-Install CVR | Low Post-Install CVR |
|---|-----------------------|----------------------|
| High CTR | Winner: Scale this creative | Clickbait: The ad overpromises |
| Low CTR | Hidden Gem: The ad undersells | Dud: Retire this creative |
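The matrix is just two threshold comparisons, typically against account medians. A minimal sketch (the thresholds and the example creative's metrics are hypothetical):

```python
def coherence_quadrant(ctr: float, cvr: float,
                       ctr_median: float, cvr_median: float) -> str:
    """Place a creative in the coherence matrix relative to account medians."""
    high_ctr = ctr >= ctr_median
    high_cvr = cvr >= cvr_median
    if high_ctr and high_cvr:
        return "Winner"       # scale this creative
    if high_ctr:
        return "Clickbait"    # the ad overpromises
    if high_cvr:
        return "Hidden Gem"   # the ad undersells
    return "Dud"              # retire this creative

# High CTR but weak post-install conversion -> clickbait mismatch
label = coherence_quadrant(ctr=0.031, cvr=0.004,
                           ctr_median=0.02, cvr_median=0.01)
```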

Phase 5: Monetization System Analysis

Energy/Lives System Effectiveness

| Wall Hit Rate | Diagnosis | Action |
|---------------|-----------|--------|
| <5% of DAU | Cap too high — untapped lever | Lower cap or increase consumption |
| 5-20% of DAU | Healthy range | Optimize offers at the wall |
| >20% of DAU | Choking retention | Raise cap or add free refills |

Paywall Timing

  • Early (Hard Gate): Best for high-intent, "utility" apps where user knows the value.
  • Delayed (Soft Gate): Best for "discovery" apps where user needs to find "the fun."
  • Intent-Based: Show earlier to users with high-intent signals (push granted, dual-mode). This is the Model B optimization.

Price Point Analysis

| Metric | Healthy Range | Warning Sign |
|--------|---------------|--------------|
| Annual vs. Monthly mix | 40-60% annual | >80% annual (short-term revenue risk) |
| Trial-to-Paid (iOS) | 25-35% | <20% |
| Trial-to-Paid (Android) | 10-20% | <10% |
| Avg price during analysis | Steady-state | If $0.10 (launch promo), CVR data meaningless |

Ad Monetization (IAA) — Hybrid Games

| Metric | Definition | Benchmark (Hybrid F2P) | Red Flag |
|--------|------------|------------------------|----------|
| eCPM | Effective revenue per 1,000 ad impressions | $8-20 (US, rewarded video) | <$5 |
| Ad ARPDAU | Daily ad revenue per DAU | $0.05-0.15 | <$0.03 or >$0.25 |
| Ads per session | Rewarded + interstitial per session | 2-4 rewarded, 1-2 interstitial | >6 total |
| Ad-to-IAP cannibalization | Does rewarded video reduce IAP CVR? | <5% IAP CVR reduction | >10% reduction |
| Blended ROAS | (IAP + Ad revenue) / Spend | Above breakeven | Below breakeven with ads included |
| Ad revenue share | Ad revenue / Total revenue | 20-50% | >70% or <10% |

Phase 6: LiveOps & Event De-Averaging

A game that shows 1.5x D30 ROAS during a Halloween event but 0.9x in steady-state is not a 1.5x ROAS game. It is a 0.9x ROAS game with a seasonal bump.

Event Types & Typical Impact

| Event Type | Duration | Revenue Lift | Retention Lift |
|------------|----------|--------------|----------------|
| Battle Pass / Season | 4-8 weeks | 30-80% IAP | 5-15pp D7 |
| Limited-Time Offer | 1-7 days | 50-200% daily IAP | Minimal |
| Seasonal Event | 2-4 weeks | 20-60% blended | 3-10pp D7 |
| Content Drop | Permanent (lift decays) | 20-40% first-week | 5-10pp D7 (decays) |
| Competitive Event | 3-7 days | Variable | 10-20pp D1 for engaged |

De-Averaging Protocol

  1. Identify Event Windows: Catalog all LiveOps events during the analysis period.
  2. Segment Cohorts: Classify users as event-exposed or steady-state.
  3. Calculate Baseline: Steady-state ROAS from non-event cohorts only.
  4. Adjust Spend: Base scale decisions on steady-state ROAS, not blended.
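Steps 1-4 can be sketched as a filter over daily cohorts. The dates, spend, and revenue figures below are hypothetical, mirroring the Halloween example above:

```python
from datetime import date

# Illustrative daily cohorts: (install_date, spend, d30_revenue)
cohorts = [
    (date(2025, 10, 1),  10_000,  9_000),
    (date(2025, 10, 25), 10_000, 16_000),  # inside the Halloween event
    (date(2025, 11, 10), 10_000,  9_500),
]
event_windows = [(date(2025, 10, 20), date(2025, 11, 2))]  # Halloween

def steady_state_roas(cohorts, event_windows):
    """ROAS from cohorts whose install date falls outside every event window."""
    clean = [
        (spend, rev) for day, spend, rev in cohorts
        if not any(start <= day <= end for start, end in event_windows)
    ]
    return sum(r for _, r in clean) / sum(s for s, _ in clean)

# Blended ROAS here is 1.15x; steady-state is only 0.925x.
baseline = steady_state_roas(cohorts, event_windows)
```

Spend decisions are then based on `baseline`, never on the event-inflated blend.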

LiveOps Red Flags

Event-dependent: Steady-state ROAS below breakeven but event ROAS above = game is not self-sustaining.

Revenue cliff: >40% drop within 48 hours of event end = no organic monetization habit.

Diminishing returns: Each successive event produces smaller lift = event fatigue.

Phase 7: Strategic Synthesis (STOP / DO / TEST)

| Category | Definition | Evidence Bar | Example |
|----------|------------|--------------|---------|
| STOP | Actions actively destroying value | Clear data showing harm | Stop running "Zen Garden" creative with 90% D0 churn |
| DO | Low-regret actions with clear evidence | High confidence, measurable impact | Fix Android checkout bug costing 15% of revenue |
| TEST | Hypotheses worth validating | Promising signal, needs experiment | Test early paywall for high-intent push cohorts |

Priority Scoring

Priority = (Expected Revenue Impact) × (Confidence Level) / (Engineering Effort)

Confidence: 0.3 (hypothesis) / 0.6 (strong signal) / 0.9 (proven in analogous context).
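The scoring formula is a one-liner; the backlog items and their impact/effort figures below are hypothetical, included only to show the ranking mechanics:

```python
def priority(expected_revenue_impact: float, confidence: float,
             engineering_effort_weeks: float) -> float:
    """Priority = impact x confidence / effort, per the formula above."""
    return expected_revenue_impact * confidence / engineering_effort_weeks

backlog = {
    # Hypothetical items; confidence uses the 0.3 / 0.6 / 0.9 scale
    "fix_android_checkout_bug":  priority(120_000, 0.9, 1),
    "early_paywall_push_cohort": priority(300_000, 0.3, 2),
    "redesign_onboarding":       priority(500_000, 0.3, 8),
}
ranked = sorted(backlog, key=backlog.get, reverse=True)
```

Note how the proven bug fix outranks a larger but speculative redesign: confidence and effort, not raw impact, drive the order.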

Mandatory Data Caveats Section

Every synthesis must include: unanswered questions due to sample size, immature cohorts, iOS attribution breakdowns, telemetry gaps, LiveOps-inflated findings, and geo-specific vs. universal findings.

4. Data Source Hierarchy & Access Patterns

| Source | What It Tells You | What It Cannot Tell You | When to Use |
|--------|-------------------|-------------------------|-------------|
| AppsFlyer Cohorts | ROAS by cohort, channel, geo, creative | Individual user journeys | Macro-level channel/geo performance |
| BigQuery (Direct) | User-level event logs, every click | Cross-platform attribution | Micro-level funnel, causal tests |
| RevenueCat | Subscription status, trial-to-paid, churn | Pre-paywall user behavior | Monetization funnel, pricing |
| Ad Network Dashboards | IAA revenue: eCPM, fill rate, impressions | Why users watch/skip ads | Phase 5 IAA analysis |
| Meta Ad Library | Active creatives, estimated spend | Actual performance metrics | Document A, competitive analysis |
| Figma Exports | Intended user experience, flows | What users actually do | Document B, product state machine |
| Google Ads | Spend, clicks, conversions | Post-install behavior | Channel-level spend efficiency |
| App Store Connect | Organic baseline, ratings, crash reports | Paid attribution | Organic health, sentiment |

Access Pattern: Start with AppsFlyer for the macro view. Drill into BigQuery for micro questions. Use RevenueCat for monetization. For hybrid games, add ad network dashboards. Never skip the three context documents.

5. Benchmarks & Reference Data

Onboarding & Engagement

| Metric | Median | Good | Excellent | Red Flag |
|--------|--------|------|-----------|----------|
| Onboarding Completion | 45-55% | 65% | 80%+ | <35% |
| Auth Completion | 75-85% | 88% | >92% | <70% |
| Push Opt-in (iOS) | 40-56% | 58% | 65%+ | <30% |
| D1 Retention (F2P) | 26-28% | 32% | 40%+ | <20% |
| D7 Retention | 10-12% | 15% | 20%+ | <8% |
| D30 Retention | 4-6% | 8% | 12%+ | <3% |

Monetization (IAP)

| Metric | Median | Excellent | Red Flag | Notes |
|--------|--------|-----------|----------|-------|
| Paywall CVR (Soft) | 2-3% | 10%+ | <1% | 10%+ = high-utility niche |
| Checkout Started-to-Verified | 70-75% | 85%+ | <50% | <50% = technical/UX issue |
| Trial-to-Paid (iOS) | 25-35% | 45%+ | <20% | Platform-specific |
| Trial-to-Paid (Android) | 10-20% | 30%+ | <8% | Significantly lower than iOS is normal |
| Energy Wall Hit Rate | 8-15% | 10-15% | <5% or >20% | <5% = undertapped; >20% = choking |

Ad Monetization (IAA)

| Metric | Median (US, Rewarded) | Good | Excellent | Red Flag |
|--------|-----------------------|------|-----------|----------|
| eCPM | $8-12 | $15 | $20+ | <$5 |
| Ad ARPDAU | $0.05-0.08 | $0.10 | $0.15+ | <$0.03 |
| Rewarded opt-in rate | 30-40% | 50% | 65%+ | <20% |
| Ads/session (total) | 2-3 | 3-4 | 4-5 | >6 (retention damage) |
| Ad revenue % (hybrid) | 25-40% | 35-45% | 40-55% | >70% or <10% |

Marketing & Creative

| Metric | Typical Range | Warning | Notes |
|--------|---------------|---------|-------|
| Top creative share | 30-40% of conversions | >70% = SPOF risk | Single Point of Failure |
| Static vs. UGC CVR | Statics 3.5x higher (skill apps) | UGC outperforming | Unusual for utility apps |
| Creative-Product Coherence | High CTR + High CVR | High CTR + Low CVR | Clickbait mismatch |
| iOS Creative Attribution | Modeled / estimated | Treated as ground truth | Post-ATT: directional only |

6. Common Analytical Traps

Each trap has destroyed at least one analysis cycle in our portfolio's history.

6.1 Simpson's Paradox

The Trap: "Our overall ROAS is 1.5x. We should increase spend."

❌ The Reality

Channel A is 3.0x ROAS (10% of budget), Channel B is 0.5x ROAS (90% of budget). The blended average hides the fact that you are burning money on Channel B.

✔ The Fix

Always decompose by Channel > Geo > Creative before interpreting aggregate ROAS. Analyze the dominant channel in isolation first.
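The channel split above can be reproduced in a few lines. Using the Reality box's numbers (10% of a hypothetical $100K budget at 3.0x, 90% at 0.5x), the blend lands at 0.75x and hides a strong channel behind a money-losing one:

```python
channels = {
    # channel: (spend, revenue) — the 6.1 example, illustrative budget
    "channel_a": (10_000, 30_000),   # 3.0x ROAS, 10% of budget
    "channel_b": (90_000, 45_000),   # 0.5x ROAS, 90% of budget
}

# The aggregate number the trap relies on:
blended = (sum(r for _, r in channels.values())
           / sum(s for s, _ in channels.values()))  # 0.75x

# The decomposition that reveals the truth:
per_channel = {c: r / s for c, (s, r) in channels.items()}
```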

6.2 Organic as Clean Control

The Trap: "Organic users have 50% D1 retention vs. 25% for paid. Our paid marketing brings in trash users."

❌ The Reality

Organic users are brand seekers with massive pre-existing intent that no paid channel can replicate. Comparing paid to organic is comparing apples to spaceships.

✔ The Fix

Use a "low-spend, high-intent" paid cohort as baseline, not organic. Or use the Organic Attribution Analyzer skill.

6.3 ARPU / ROAS as Product Signal

The Trap: "ARPU went up 20% after the new update. The product is better."

❌ The Reality

The marketing team shifted spend from India to the US. ARPU and ROAS are contaminated by bid strategy, geo mix, and channel mix.

✔ The Fix

Normalize ARPU by geo and channel before attributing changes to the product. Hold acquisition mix constant.

Additional Traps (6.4-6.9)

6.4 Promotional Pricing Distortion: A $0.99/year intro offer inflates CVR to 12%. When price reverts, CVR collapses. Flag all CVR/LTV as "promotional period."

6.5 Immature Cohort Bias: D14 ROAS at exactly 14 days misses late converters. Cohorts need D90+ for extrapolation. Safe cutoff: today - (milestone_days + 3).

6.6 Version Confounding: January vs. February cohorts differ in version, creative mix, geo, and CPI simultaneously. Control all variables or you cannot attribute to product.

6.7 The "Trial" Mirage: 1,000 trials at 5% conversion (50 paid) is worse than 500 trials at 30% (150 paid). Track cohort-adjusted net revenue, not trial starts.

6.8 LiveOps Inflation: A Battle Pass on Day 10 inflates D30 ROAS to 1.8x. Steady-state is 1.0x. Apply Phase 6 de-averaging.

6.9 Ignoring Ad Revenue: A 0.9x IAP-only ROAS may be 1.3x blended. For hybrid games, always calculate (IAP + IAA) / Spend.

Every one of these traps produces a confidently wrong conclusion that leads to confidently wrong actions. The fix is always the same: decompose, control for confounds, and never trust an aggregate number.

7. Deliverables & Output Templates

Every analysis cycle produces four artifacts, each serving a different audience.

| Artifact | Audience | Key Contents | Format |
|----------|----------|--------------|--------|
| Holistic Analysis Report | CEO + Investment Team | Executive Summary, Power Law Findings, Causal Verdict, STOP/DO/TEST | Branded HTML (MODE 1 for Drive, MODE 2 for email) |
| IA Reference | Product & Engineering | Screen catalog, event mapping, A/B inventory, telemetry gaps | Markdown in portfolio/[Company]/analysis/ |
| User Journey Visualization | All Stakeholders | State machine diagram, drop-off heatmap, branch annotations | Interactive HTML (Mermaid.js or SVG) |
| Operational One-Pager | Marketing & Product Leads | Net ROAS targets, funnel benchmarks, top 3 experiment priorities | High-density single-page PDF |

Different audiences need different artifacts. The CEO needs the strategic narrative. The product team needs the IA reference. The marketing lead needs the one-pager. Delivering a single 40-page document to all audiences guarantees nobody reads the part relevant to them.

8. Causal Model Deep-Dive: Worked Example

A puzzle RPG with both a puzzle mode and an RPG story mode. Data shows users who complete the "Chapter 1 Boss" have 45% 30-day retention vs. 5% for those who do not.

Model A Trap

"The Boss fight is a Magic Moment. Make the game easier so 100% reach it."

Model B (Correct)

"The Boss fight is a Skill/Intent Filter. Only users who enjoy the mechanics reach it."

Diagnostic Results

  • Zero-content converters: 7% of payers purchased before completing a single puzzle. Model B signal.
  • Push permission: Boss winners who granted push had 42% retention; deniers had 28%. Intent gradient within "successful" cohort.
  • Content-CVR curve: Sharp cliff at Chapter 1 — not gradual rise. Model B confirmed.
  • Pre-Boss monetization: 80% of Boss Winners had already made a micro-transaction before reaching the Boss.

Strategic Move

  • STOP: Gating the in-app store behind Chapter 1 completion.
  • DO: Add power-up offers before the Boss to capture high-intent revenue earlier.
  • TEST: Show discounted annual subscription to push-permission-granted users within 2 sessions.

Result: Revenue per install increased 34% without any change to core game difficulty.

9. Cross-References

| System | Relationship | Reference |
|--------|--------------|-----------|
| Marketing Experimentation System | Phase 7 TEST items become experiment hypotheses | Companion guide |
| ROAS Analysis Pipeline | ROAS projection methodology | skills/ROAS_ANALYSIS_PIPELINE.md |
| F2P Marketing Analysis Framework | Spend allocation decisions | skills/marketing-analytics/ |
| Causal Attribution Analyzer | Automated Phase 3 tests | Skill in skills/ |
| Data Quality Assessment | Validate cohort data before analysis | Skill in skills/ |
| Organic Attribution Analyzer | Quantify organic vs. paid intent gaps | Skill in skills/ |
| Statistical Significance Framework | Sample size requirements | skills/STATISTICAL_SIGNIFICANCE_UNIFIED_FRAMEWORK.md |
| Geo Profitability Analyzer | Tier-level profitability analysis | Skill in skills/ |
| Company DIAGNOSTIC_FINDINGS.md | Company-specific confounds | portfolio/[Company]/ |

10. Statistical Appendix

Sample Size for Proportion Comparisons

n = (Zα/2 + Zβ)² × [p₁(1-p₁) + p₂(1-p₂)] / (p₁ - p₂)²

Where Zα/2 = 1.96 (95% confidence), Zβ = 0.84 (80% power).
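A stdlib implementation of the formula, for reference. For the 2%→3% paywall case it returns 3,823, in line with the table's 3,822 (small differences come from rounding conventions in the z-values):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1: float, p2: float,
                alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size per group for a two-sided comparison of two proportions."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for 95% confidence
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    num = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return ceil(num / (p1 - p2) ** 2)

n = n_per_group(0.02, 0.03)   # paywall CVR, +1pp MDE -> 3823
```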

Quick Reference Table

Baseline RateMDERequired N per Group
2% (paywall CVR)+1pp (to 3%)3,822
2% (paywall CVR)+2pp (to 4%)1,031
5% (push-denied CVR)+5pp (to 10%)725
10% (push-granted CVR)+5pp (to 15%)686
25% (trial-to-paid)+5pp (to 30%)1,022
30% (D1 retention)+3pp (to 33%)2,877
30% (D1 retention)+5pp (to 35%)1,033

Confidence Intervals for Proportions

CI = p ± Zα/2 × √(p(1-p) / n)
| Sample Size | p = 3% (paywall) | p = 10% (CVR) | p = 30% (retention) |
|-------------|------------------|---------------|---------------------|
| 100 | ± 3.3pp | ± 5.9pp | ± 9.0pp |
| 500 | ± 1.5pp | ± 2.6pp | ± 4.0pp |
| 1,000 | ± 1.1pp | ± 1.9pp | ± 2.8pp |
| 5,000 | ± 0.5pp | ± 0.8pp | ± 1.3pp |
| 10,000 | ± 0.3pp | ± 0.6pp | ± 0.9pp |

Bootstrap ROAS Confidence Intervals

  1. Sample N users with replacement from the cohort (1,000 iterations).
  2. Calculate ROAS for each bootstrap sample.
  3. Take the 2.5th and 97.5th percentiles as the 95% CI.

This is preferable to parametric methods because F2P revenue is highly skewed (power-law): the mean is dominated by whales, so normal-distribution assumptions are invalid.
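A stdlib sketch of the procedure. The toy cohort is hypothetical and deliberately whale-skewed; a fixed seed keeps the resampling reproducible:

```python
import random

def bootstrap_roas_ci(user_revenues, spend, iterations=1000, seed=7):
    """95% bootstrap CI for cohort ROAS; robust to whale-skewed revenue."""
    rng = random.Random(seed)
    n = len(user_revenues)
    samples = sorted(
        sum(rng.choices(user_revenues, k=n)) / spend
        for _ in range(iterations)
    )
    lo = samples[int(0.025 * iterations)]   # 2.5th percentile
    hi = samples[int(0.975 * iterations)]   # 97.5th percentile
    return lo, hi

# Toy cohort: one whale carries most of the revenue
revenues = [0.0] * 95 + [5.0, 5.0, 10.0, 40.0, 400.0]
lo, hi = bootstrap_roas_ci(revenues, spend=300)
```

The interval will be wide and asymmetric here precisely because a single whale dominates each resample: the kind of uncertainty a parametric CI would understate.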

Bayesian Shrinkage for Low-Volume Creative Cells

p_shrunk = (successes + α_prior) / (trials + α_prior + β_prior)

Effect: High-volume creatives barely affected. Low-volume creatives pulled toward portfolio mean, preventing false "winner" or "loser" declarations from small samples.

When to apply: Any creative cell with fewer than 200 conversions.
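A minimal sketch of the shrinkage formula, with a Beta prior centered on the portfolio mean. The `prior_strength` parameter (playing the role of α_prior + β_prior) is a tunable choice of mine, not a value from this playbook:

```python
def shrunk_cvr(conversions: int, trials: int,
               portfolio_cvr: float, prior_strength: float = 200) -> float:
    """Shrink a creative cell's CVR toward the portfolio mean."""
    alpha = portfolio_cvr * prior_strength          # alpha_prior
    beta = (1 - portfolio_cvr) * prior_strength     # beta_prior
    return (conversions + alpha) / (trials + alpha + beta)

# 3 conversions on 20 installs looks like a 15% "winner" ...
raw = 3 / 20
# ... but shrinks back toward a 3% portfolio mean on this little evidence.
shrunk = shrunk_cvr(3, 20, portfolio_cvr=0.03)   # ~4.1%
```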

Statistical Decision Rules for Phase 3

| Test | Null Hypothesis | Reject If | Minimum Data |
|------|-----------------|-----------|--------------|
| Test 1: Zero-content | % zero-content ≤ 2% | Observed > 2% with 95% CI lower bound > 2% | 500+ payers |
| Test 2: Push CVR gap | CVR(granted) = CVR(denied) | Chi-squared p < 0.05 AND ratio > 3x | 725 per group |
| Test 3: Curve shape | Linear fits equally well | Step-function AIC < Linear AIC by >10 | 8+ bins, 100+/bin |
| Test 4: Dual-mode | CVR(both) = CVR(single) | Chi-squared p < 0.05 AND both > 2x single | 400 per segment |

Interpreting ambiguous results: If 2 of 4 tests point to Model B and 2 are ambiguous, default to Model B (causal humility principle). The cost of incorrectly assuming Model A (hiding the paywall) is far greater than incorrectly assuming Model B (showing it too early).

Conclusion: The Analyst as Operator

At Transcend, we do not sit on the sidelines. We do not ask for reports; we build the tools to generate them. We do not accept dashboard summaries; we query BigQuery, scrape ad libraries, and map Figma screens.

By starting with the Product State Machine, maintaining Causal Humility, and obsessing over Power Laws, we provide our portfolio companies with insight they cannot get from a standard agency or generic analytics platform.

Analysis is not about the past. It is about the next experiment. Every finding should feed directly into the Marketing Experimentation System. Every STOP/DO/TEST recommendation should have an owner, a timeline, and a success metric.

Remember: Telemetry is the shadow. The product is the object. The ad is the light source. To see clearly, you must understand all three.
