
How to Solve the Analytics Case Interview by Asking "So What?" at Every Step

8 min read · 15 min AI practice
Dr. Elena Vasquez · Head of Data Science at a fintech company

"Our premium subscription conversion rate dropped from 12% to 8% last quarter. Where would you start?" Dr. Elena Vasquez leans forward, pen hovering over her notepad. She has a PhD in Statistics. She has scaled data teams from 3 to 40. And she has watched 150 data science candidates fail this question the same way: they jump to methodology before asking whether the problem is even real.

"I would run an A/B test..." Dead. "Let me build a logistic regression model..." Wrong direction. "The p-value threshold should be..." You're showing off, not solving.

The candidates who pass this round do something that feels almost too simple: they say, "Before I analyze anything — is the drop from 12% to 8% statistically significant, or could this be noise?"

Why This Conversation Goes Wrong

You accept the problem at face value. "Let me investigate why conversion dropped." But did it actually drop? Or are you looking at a quarterly fluctuation that's within normal variance? The first question should always be "Is this signal or noise?" A data scientist who doesn't question the premise of the problem is solving the wrong problem.
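The signal-or-noise check is a two-proportion z-test, and it takes minutes with nothing but the standard library. A minimal sketch — the 5,000 users per quarter is a placeholder, not a figure from the case:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is the gap between two conversion
    rates larger than sampling noise would explain?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical quarters: 12% of 5,000 users vs 8% of 5,000 users
z, p = two_proportion_z(600, 5000, 400, 5000)
print(f"z = {z:.2f}, p = {p:.4g}")
```

With samples this size the drop is far outside noise; with a few hundred users per quarter, the same 4-point gap might not be. That is exactly why the question has to be asked before anything else.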

You propose an A/B test as the first step. "Let's A/B test the onboarding flow." But you don't know if the onboarding flow is the cause. Running an A/B test before understanding the root cause is like taking medicine before getting a diagnosis. It's action masquerading as analysis.

You overcomplicate the methodology. "I would fit a Bayesian hierarchical model with time-varying coefficients..." Elena's PhD is in Statistics. She knows the fancy methods. What she wants to see is judgment about when to use them. A chi-squared test might be all you need. Choosing a bazooka for a problem that needs a scalpel shows poor analytical judgment.
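The scalpel really is small. For a 2x2 table of quarter vs. converted, a Pearson chi-squared statistic is one line of arithmetic — sketched here with assumed counts (hypothetical 5,000-user quarters):

```python
def chi_squared_2x2(a, b, c, d):
    """Pearson chi-squared statistic for a 2x2 table:
    rows = quarter (last, this), cols = (converted, not converted).
    Compare against the critical value 3.84 (df=1, alpha=0.05)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Assumed counts: 600/4400 last quarter, 400/4600 this quarter
stat = chi_squared_2x2(600, 4400, 400, 4600)
print(f"chi-squared = {stat:.2f}")
```

No hierarchy, no priors, no time-varying coefficients — and it answers the question on the table.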

You analyze without translating to business impact. "The coefficient on the new onboarding variable is -0.04 with a p-value of 0.002." Elena's response: "So what?" If you can't tell the CEO what this means in plain English and what action to take, the analysis is academic.

The So What

Elena asks "So what?" after every step because the ultimate output of data science is not a model — it's a decision. The So What framework structures your analysis as a chain of hypotheses, each one answering the question: "If this is true, what do we do about it?" Every analytical step should point toward an action, not just an insight.

1. Question the premise before analyzing it

"First, I want to confirm this is a real drop and not seasonal variance. Q1 is historically weaker — is the 12% baseline from Q4, which tends to be stronger? And is the drop sudden or gradual?" These questions establish credibility before a single number is calculated. Elena is checking whether you have the scientific instinct to validate the observation before investigating it.

2. Generate multiple hypotheses, ranked by testability

"I can think of four hypotheses: (1) The new onboarding flow, which launched mid-quarter and affects 60% of users. (2) The marketing channel shift — 40% less Google Ads, 60% more TikTok, which may be bringing different-intent users. (3) The competitor's new free tier. (4) Seasonal factors. I'd prioritize the onboarding flow first because we have the most direct data to test it." Ranking hypotheses by testability shows analytical maturity.

3. Design the analysis, then describe what each result would mean

"I'd segment conversion by users who saw the old onboarding vs. the new one. If the new flow shows lower conversion, the next question is where in the funnel they drop off. If conversion is similar across both, the onboarding flow is not the cause and I'd move to the marketing channel hypothesis." Pre-registering your interpretation prevents data-dredging. Elena is watching for this.
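The segmentation step fits in a few lines of plain Python. The records and funnel-stage names below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical user records: (onboarding_flow, last_funnel_stage, converted)
users = [
    ("old", "paywall", True), ("old", "signup", False),
    ("new", "signup", False), ("new", "paywall", True),
    ("new", "signup", False), ("old", "paywall", True),
]

def conversion_by_segment(records):
    """Conversion rate per onboarding flow, plus where non-converters stop."""
    totals = defaultdict(lambda: [0, 0])             # flow -> [converted, seen]
    dropoff = defaultdict(lambda: defaultdict(int))  # flow -> stage -> count
    for flow, stage, converted in records:
        totals[flow][1] += 1
        if converted:
            totals[flow][0] += 1
        else:
            dropoff[flow][stage] += 1
    rates = {flow: conv / seen for flow, (conv, seen) in totals.items()}
    return rates, dropoff

rates, dropoff = conversion_by_segment(users)
print(rates, dict(dropoff["new"]))
```

The point of returning the drop-off table alongside the rates is pre-registration made concrete: the "if lower, then where in the funnel" follow-up is computed in the same pass, before anyone is tempted to go fishing.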

4. Address confounders before Elena asks about them

"One confounder: the TikTok marketing shift and the onboarding change happened in the same quarter. Users acquired through TikTok may have different conversion baselines regardless of onboarding. I'd want to control for acquisition channel when comparing onboarding flows." Naming confounders proactively is the difference between a junior analyst and a senior data scientist.
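Controlling for the channel can be as simple as stratifying: compare flows within each channel so a shift in channel mix cannot masquerade as an onboarding effect. A sketch with invented records:

```python
from collections import defaultdict

def stratified_rates(records):
    """Conversion rate per (acquisition_channel, onboarding_flow) cell,
    so flows are compared within a channel, not across the mix."""
    cells = defaultdict(lambda: [0, 0])  # (channel, flow) -> [converted, seen]
    for channel, flow, converted in records:
        cells[(channel, flow)][1] += 1
        cells[(channel, flow)][0] += converted
    return {key: conv / seen for key, (conv, seen) in cells.items()}

# Hypothetical records: (acquisition_channel, onboarding_flow, converted)
records = [
    ("google", "old", 1), ("google", "old", 1),
    ("google", "new", 1), ("google", "new", 0),
    ("tiktok", "old", 1), ("tiktok", "old", 0),
    ("tiktok", "new", 0), ("tiktok", "new", 0),
]
print(stratified_rates(records))
```

In this toy data the new flow converts worse within both channels and TikTok users start from a lower baseline — the two effects the candidate's answer separates.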

5. End with a recommendation, not a finding

"If the data shows the new onboarding flow is the primary driver, I'd recommend rolling back to the old flow for a controlled A/B test with a clear success metric: day-7 conversion rate by cohort. If the marketing channel mix is the driver, I'd recommend adjusting spend back toward Google Ads or optimizing TikTok landing pages for higher-intent users." Every analysis should end with: "here's what we should DO."
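Recommending a rollback A/B test also means checking that the day-7 cohorts will be large enough to detect the effect. A back-of-envelope sample-size sketch using the standard normal-approximation formula — the 12% and 8% come from the case; the significance and power settings are conventional defaults:

```python
import math

def sample_size_per_arm(p1, p2):
    """Approximate users needed per arm to detect a shift from p1 to p2
    at alpha = 0.05 (two-sided) with 80% power (normal approximation)."""
    z_alpha = 1.96  # z for alpha = 0.05, two-sided
    z_beta = 0.84   # z for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

print(sample_size_per_arm(0.12, 0.08))  # roughly 900 users per arm
```

If the product only onboards a few hundred users a week, that number changes the recommendation — another place where the arithmetic, not the model, drives the decision.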

The moment that changes everything

Elena doesn't want a brilliant analysis. She wants one that tells the CEO what to do Monday morning.

The hidden scoring rubric in Elena's interview is business impact per unit of analytical complexity. She has seen candidates build beautiful models that took three weeks and told her nothing she didn't already know. She has also seen candidates run a simple segmented conversion analysis in two hours that changed the company's strategy. The candidate who wins is the one who picks the simplest method that answers the question — and then has the courage to say "Based on this analysis, here's what I recommend." Data scientists who stop at "here's what the data shows" are analysts. Data scientists who continue to "here's what we should do about it" are leaders. Elena is hiring leaders.

What to Say (and What Not To)

Instead of: "I would build a predictive model to understand the conversion drop."
Try this: "First — is this drop statistically significant, or could Q1 seasonality explain it?"

Instead of: "Let me run a regression on all available variables."
Try this: "I have four hypotheses, ranked by testability. Let me walk through them."

Instead of: "The p-value is 0.002, confirming the effect is significant."
Try this: "The new onboarding flow drops conversion by 4 points. If we roll it back, that's $2M in recovered revenue."

Instead of: "I'd need more data to be sure."
Try this: "One confounder: the channel mix shifted at the same time. I'd control for acquisition source."

Instead of: "Here are the results of my analysis."
Try this: "Here's what I recommend doing Monday morning based on this analysis."

The Bigger Picture

A 2024 survey of 800 data science hiring managers found that "ability to communicate findings to non-technical stakeholders" ranked as the #1 skill gap in candidates — above machine learning, above statistics, above Python proficiency. The technical skills are table stakes. The differentiator is translation: can you turn a correlation coefficient into a sentence a CEO can act on?

Elena's interview format is based on a real phenomenon in data science teams: the "analysis paralysis" loop. Teams run sophisticated analyses, present findings, and then the business stakeholders ask "But what should we DO?" The analysis restarts. The data scientists who break this cycle are the ones who pair every finding with a recommendation and a metric to track. Elena tests for this because she's lived it.

Here's a statistic that should shape how every data scientist approaches case interviews: Gartner reports that 87% of analytics projects never reach production. The #1 reason: the analysis couldn't be translated into a clear business action. The data was correct. The methodology was sound. But nobody could answer the CEO's question: "So what should we do?" The So What framework isn't an interview trick. It's how data science becomes business science.


Practice This Conversation

15 minutes · AI voice roleplay with Dr. Elena Vasquez

Reading about this is step one. Practicing it changes everything. Sonitura lets you rehearse this exact conversation with Dr. Elena Vasquez, a realistic AI head of data science at a fintech company who reacts to your words in real time. It takes 15 minutes. The next time someone shows you a declining metric, you'll question the premise before you open your laptop.

Practice This Scenario Free →