The PM interview process will test your comfort with analytics and data. These are a key part of the product manager job, since data can often tell a PM whether their work is actually making an impact.
The kinds of data you work with as a PM will depend heavily on the product you work on, but there are certain categories of analytics questions to expect in the interview. This article will also introduce the TOFU framework, which covers the different categories of metrics that are useful to keep in mind during an analytics interview.
💡 Tip: Certain companies refer to this category of questions differently. Facebook calls these "product execution" questions, while Google often refers to them as "analytics" questions. Rest assured, though: regardless of the name, the intent of the question is the same!
Data and analytics come up very frequently in the product management job, so of course a company hiring for a PM wants to know if the candidate is analytically inclined. Because there are different ways that analytics are used on the job, the kinds of questions in an analytics interview can also vary, but they generally relate to using metrics and quantitative analysis to describe, interpret, and/or troubleshoot a situation.
One way analytics comes up in a PM's job is in assessing impact. A PM's job, loosely defined, is to help the company and product grow by understanding its users, figuring out the users' pain points, and solving those pain points in effective ways by executing on solutions. At the end of the lifecycle of a particular idea or project is the question of "how do we know this idea actually had an impact?" That question is usually answered quantitatively. And you don't just want to think about a feature's impact after it launches; you want to be able to predict the impact ahead of time so that you can make good prioritization decisions.
PMs should also have a good grasp of how the product and business functions overall - this gives them a lot of context to make better product decisions. As a company grows, it tends to have more and higher-quality data about all aspects of itself, and everything becomes more and more quantified. A PM has to be comfortable with metrics to function well in this environment.
Given that numbers show up everywhere in a PM's job, there are many different ways they can surface in a PM interview. Here are four common contexts that you should know about.
A PM will want to know how a feature, product or business is doing overall in a quantitative way. Such metrics can be useful for measuring progress, estimating impact, detecting problems proactively, and more. In other words, these are some of the key metrics you want to watch to monitor the success and health of the product.
There are many different ways you can measure how a product is doing - some very straightforward and standard, others more creative and custom to a particular company. You can think of this "metrics landscape" as having four major buckets:
If you want an easy acronym to remember, try TOFU - Tech, Objects, Finance, Users. These are ranked roughly in order from least likely to come up to most likely to come up, and we'll cover them in that order.
The first category relates to the technical infrastructure behind the product. A lot goes on "behind-the-scenes" to run a product, and this category of metrics can serve double-duty as a small test of your technical knowledge.
This category is generally more important to certain kinds of PMs (especially Technical Product Managers), so these metrics may only come up in specific contexts. Here, you might care about metrics like page load times, the number of API calls being made, the amount of computing resources being used, etc. Some analytics questions may focus on this category of metrics; if a question doesn't invoke one directly, you could consider bringing such a metric up proactively, though it won't be pertinent to every analytics question.
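To make that concrete, here's a minimal Python sketch of how you might summarize page load times from raw measurements - the numbers are made up, and in practice your monitoring tooling computes these for you:

```python
import math

# Minimal sketch: summarizing page load times (all numbers hypothetical).
# Infrastructure metrics are often reported as percentiles (p50 = median-ish,
# p95/p99 = tail latency) rather than averages.

def percentile(values, pct):
    """Return the value at the given percentile (nearest-rank method)."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

load_times_ms = [120, 135, 150, 180, 210, 240, 310, 450, 900, 2300]

print(f"p50 load time: {percentile(load_times_ms, 50)} ms")  # typical experience
print(f"p95 load time: {percentile(load_times_ms, 95)} ms")  # tail experience
```

Percentiles come up a lot here because averages hide the slow "tail" of requests that frustrates real users.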
The next category in our metrics landscape contains metrics relating to key "objects" in the product. This category of metric is much more common than the previous one, and could very likely make an appearance if the analytics question is about a particular product.
Generally, every product has some kind of object or objects it cares about. An e-commerce site might care about its products - and thus maybe how popular a particular product is and how much inventory there is of it. A productivity tool like Google Docs might care about documents - and thus how many are created, how many are shared with more than a certain number of people, etc. A streaming app like Spotify might care about individual units of media like songs - and thus how many there are total, what proportion of the overall library is played, what are the most popular songs, etc.
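As a quick illustration, here's a hypothetical Python sketch computing two such "object" metrics for a streaming catalog - all data is invented:

```python
from collections import Counter

# Hypothetical play counts per song over some period; in reality the
# catalog would be millions of songs, not four.
plays = Counter({"song_a": 5400, "song_b": 2100, "song_c": 880, "song_d": 0})
catalog_size = 4

# Metric 1: what share of the catalog was played at all?
share_played = sum(1 for count in plays.values() if count > 0) / catalog_size

# Metric 2: the most popular songs.
top_songs = plays.most_common(2)

print(f"Share of catalog played: {share_played:.0%}")  # 75%
print(f"Top songs: {top_songs}")
```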
The third bucket consists of business-related metrics. There may be some analytics questions which don't invoke this category at all, but when one does, it's important that you can speak to the business side of a product.
Revenue is one such metric that many PMs might care about; costs are usually less of a concern for PMs, but might matter for certain products. A growth PM might be more concerned with marketing-related metrics like Customer Acquisition Cost (CAC). SaaS products also have a relatively common set of metrics they care about, such as churn rate, net revenue churn, lifetime value (LTV), etc. (some of these are relevant for non-SaaS products as well).
Generally speaking, the common metrics are relatively straightforward to learn and understand; you just need some exposure to them. If you're interviewing for a company in a particular sub-industry, such as SaaS, brush up on the relevant business metrics before the interview.
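To get a feel for how these fit together, here's a back-of-envelope Python sketch using common simplified definitions and made-up numbers - real companies define these metrics far more carefully:

```python
# Back-of-envelope SaaS metrics with invented numbers, using common
# simplified definitions.

marketing_spend = 50_000        # dollars spent acquiring customers this month
new_customers = 250
cac = marketing_spend / new_customers                      # $200 per customer

customers_at_start = 2_000
customers_lost = 60
monthly_churn_rate = customers_lost / customers_at_start   # 3%

avg_revenue_per_customer = 40   # dollars per month (ARPU)
# A common simplification: LTV ~= ARPU / churn rate
ltv = avg_revenue_per_customer / monthly_churn_rate        # ~$1,333

print(f"CAC: ${cac:.0f}, monthly churn: {monthly_churn_rate:.1%}, LTV: ${ltv:,.0f}")
```

A common rule of thumb follows from numbers like these: LTV should comfortably exceed CAC (many investors look for a 3:1 ratio or better).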
The last category of metrics is the most important and the most common category. This bucket is about user engagement and behavior - how to measure the activity of your users. There are a huge number of ways to measure this.
One popular framework to use is HEART (PMs love frameworks), which stands for:
- Happiness
- Engagement
- Adoption
- Retention
- Task Success
Each of these five is a category of metrics, and a product team would likely pick one or several metrics in each category to track.
- Happiness is a measure of how much users like the product; this is often informed by qualitative feedback, but could be measured quantitatively using a Net Promoter Score (NPS).
- Engagement is how deeply users are using your product, which could measure the frequency with which they do an action or how far they get into a user journey; one metric here might be the percent of users subscribing to a paid plan.
- Adoption is how many users who come into the product funnel actually reach a certain step in your user journey (see below); an example is how many users perform a key early task like signing up.
- Retention is how often users come back, which could be a simple measure of the percent of new users who return, or a more complex metric like "how many users used our product 3 days out of the last 7?"
- And finally, Task Success requires the product team to define what exactly "success" is for a user, then look at what percentage of users achieve that.
In addition to HEART, you'll want to know certain common terms or metrics. DAU and MAU refer to Daily and Monthly Active Users, which is a count of unique users who perform whatever activity on your product that makes you think they are "active" (oftentimes, this is as simple as visiting your product's webpage). You could also speak about the DAU and MAU of a specific feature of a product.
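If it helps to see this concretely, here's a minimal Python sketch that computes DAU and MAU from a raw event log, assuming "active" simply means "logged at least one event" (the data is made up):

```python
from datetime import date

# Each event is a (user_id, date) pair; "active" = at least one event.
events = [
    ("u1", date(2024, 3, 1)), ("u2", date(2024, 3, 1)),
    ("u1", date(2024, 3, 1)),  # duplicate events don't double-count
    ("u1", date(2024, 3, 2)), ("u3", date(2024, 3, 15)),
]

def dau(events, day):
    """Count unique users active on a given day."""
    return len({user for user, d in events if d == day})

def mau(events, year, month):
    """Count unique users active in a given month."""
    return len({user for user, d in events if d.year == year and d.month == month})

print(dau(events, date(2024, 3, 1)))   # 2 unique users
print(mau(events, 2024, 3))            # 3 unique users
```

As a bonus, the DAU/MAU ratio itself is a popular "stickiness" metric.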
The term "conversion rate" can be used in many contexts, so you'll want to define what exactly "conversion" means for a product. Sometimes this might actually be a "signup rate" - the percent of site visitors who sign up for an account - or a "subscription rate" - usually the percent of users who start paying for the product.
Also, remember that tech companies can easily "instrument" their products, which means to add logging to specific parts of their product. For example, they can log whenever a user clicks on a particular button, submits a form, adds something to their cart, unsubscribes from a paid plan, etc. Nowadays, pretty much anything a user does on a website can be instrumented, so if you have freedom to pick metrics you care about in an interview question, you have the latitude to be creative.
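As a rough sketch of what instrumentation might look like under the hood, here's a hypothetical Python event logger - the function name, event names, and fields are all invented for illustration:

```python
import json
import time

def log_event(user_id, event_name, properties=None):
    """Append one structured event to an analytics log (hypothetical helper)."""
    record = {
        "ts": time.time(),
        "user_id": user_id,
        "event": event_name,
        "properties": properties or {},
    }
    with open("events.log", "a") as f:
        f.write(json.dumps(record) + "\n")

# Fired from the relevant points in the product:
log_event("u42", "add_to_cart", {"product_id": "p7", "price_usd": 19.99})
log_event("u42", "unsubscribe", {"plan": "premium"})
```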
Having a small library of potential metrics to draw from will be useful for your analytics interview. A few more concepts are also likely to show up.
The previous section talked about how you'd measure the health of different parts of a product or business, but often a PM is focused on a particular user journey. Most products have some kind of a "funnel" somewhere in their experience: a somewhat linear sequence of steps that a user goes through.
Generally, the funnel leads users to an action that the company wants the user to take - e.g. complete a purchase or subscribe to a paid plan. The flipside is that usually each step of the funnel has some number of users dropping off. If a company defines a funnel, they care about the metrics relating to it - the conversion rate of users going from the very beginning to the very end of the funnel, but also the dropoff rates at each step. PMs are often tasked with optimizing a funnel, i.e. getting more users to the end of it successfully, so this is a fair topic for a PM interview.
Many products will have at least one funnel, many may have more than one important funnel, and some may not really care about funnels. Ecommerce sites generally care a lot about their funnels, which could look something like:
1. A visitor lands on the site
2. The visitor views a product page
3. The visitor adds a product to their cart
4. The visitor signs up for an account, becoming a user
5. The user completes checkout and purchases
(Note that this uses the convention of calling somebody who doesn't have an account yet a "visitor" and switching to "user" once they've signed up for an account - but this could very well be different at a given company.)
A productivity tool, however, might not care too much about a funnel at all. Yes, visitors do have to sign up to use productivity tools, so perhaps a Growth PM at such a company might care about funnels. But, as an example, does the main Google Docs experience have a user funnel? Perhaps small ones relating to specific tasks (e.g. sharing a doc), but no real overall funnel.
Sometimes interview questions require you to spot that there is a funnel involved. For example, if you're asked what you might do to improve the rate users subscribe to your product, you might want to think about whether there's a relatively linear sequence of events leading up to the user subscribing.
With funnel questions, it's generally useful to lay out the steps of the funnel and think about what happens at each step. What does success look like at each step? Why might a user not proceed to the next step? What are some ideas that might increase the rate at which users get to the next step?
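Putting numbers to this, here's a small Python sketch that computes the per-step conversion and dropoff rates for an e-commerce funnel like the one above, using invented user counts:

```python
# Hypothetical counts of users reaching each funnel step.
funnel = [
    ("visited site", 10_000),
    ("viewed a product", 6_000),
    ("added to cart", 1_800),
    ("signed up", 900),
    ("completed purchase", 600),
]

# Conversion and dropoff between each pair of adjacent steps.
for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    rate = next_count / count
    print(f"{step} -> {next_step}: {rate:.0%} proceed, {1 - rate:.0%} drop off")

overall = funnel[-1][1] / funnel[0][1]
print(f"Overall funnel conversion: {overall:.0%}")  # 6%
```

Laying the rates out step by step like this makes it obvious where the biggest leak in the funnel is - and thus where a PM's optimization effort is likely to pay off most.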
There's never a shortage of ideas for how to improve a product, but how do you know which ideas will actually work? For this, experimentation is a popular tool in the PM toolkit. Generally this refers to running a controlled, randomized test, otherwise known as an A/B test. Here, you randomly assign users to two groups. One group gets the "control" - the current, normal experience. The other group gets the "variant" - a new version of the experience that you think might improve some metric, but you're not sure. The idea is that because the groups are randomly assigned, if you see the metric move significantly up or down in the variant, it's likely because of the change you're testing.
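In practice, the random assignment is usually made deterministic, so that a given user sees the same experience on every visit. Here's a hypothetical Python sketch of hash-based bucketing (the experiment name and split are invented):

```python
import hashlib

def assign_group(user_id, experiment="checkout_redesign", variant_share=0.5):
    """Deterministically bucket a user into control or variant.

    Hashing (experiment + user_id) keeps each user in the same group
    on every visit without storing any extra state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000   # roughly uniform in [0, 1)
    return "variant" if bucket < variant_share else "control"

print(assign_group("u42"))  # stable across calls for the same user
```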
There is a lot of online literature about this topic for PMs, so this post won't cover too many of the technical details. For interviews, remember that this technique exists! If you ever find yourself saying "I wonder if X will improve the user experience?", mentally consider whether an A/B test would help answer that question. If you're ever asked "How would you find out if X improves [any metric]?", consider whether an A/B test would help.
A/B tests are great because they are often the most direct way you can prove causation - that a change made to a product is singularly responsible for a movement in metrics. Products that have a lot of users (e.g. Google, Facebook, Amazon) are always running numerous tests, because with so many users, they can get statistically significant results relatively quickly. Even with smaller products, though, A/B tests can be used to great effect for data-driven decisions.
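To give a flavor of what "statistically significant" means here, the following Python sketch runs a simple two-proportion z-test on made-up results - real experimentation platforms handle this (plus many subtleties this sketch ignores) for you:

```python
import math

# Invented A/B test results: conversions out of users in each group.
control_users, control_conversions = 10_000, 500    # 5.00%
variant_users, variant_conversions = 10_000, 585    # 5.85%

p1 = control_conversions / control_users
p2 = variant_conversions / variant_users
p_pool = (control_conversions + variant_conversions) / (control_users + variant_users)

# Standard error of the difference under the pooled proportion.
se = math.sqrt(p_pool * (1 - p_pool) * (1 / control_users + 1 / variant_users))
z = (p2 - p1) / se

print(f"lift: {p2 - p1:+.2%}, z = {z:.2f}")
# |z| > 1.96 corresponds to p < 0.05 (two-sided).
print("significant at 95%" if abs(z) > 1.96 else "not significant")
```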
But, also remember that A/B tests are not always appropriate. They are great for situations where you have an existing experience and you want to test a relatively small change to that experience. Put another way, it can help you find a "local optimum" - a solution that is within the current general solution space that is better than what you have now. But, the bigger the change, the more careful you have to be about doing an A/B test.
For example, let's say Venmo begins offering a completely new product line to its users - a credit card. It wouldn't be entirely fair to run an A/B test where the control group isn't offered a credit card while the variant group is, and then look at the impact on the overall transaction volume happening on Venmo: the credit card is a completely new source of transactions that the control group doesn't have access to at all. (Other metrics might be fair game for an experiment in this situation - it might be OK to gauge whether seeing the offer for a credit card influences the rate at which visitors sign up for accounts, for example.)
A few last things on A/B tests. They are not as useful for situations where you have a small number of users to run the test on, as it would potentially take too long to get a statistically significant sample. They are also not as useful to gain validation for a completely new product idea, since there's no control to compare against. And finally, don't forget that it's often powerful to gather qualitative data to supplement your interpretation of the quantitative results from an A/B test - in other words, consider asking users for their feedback!
Finally, metrics are useful for telling you when something weird is happening with your product and helping you investigate what's going on. An analytics interview could ask you to investigate such an "anomaly". Generally, this takes the form of a particular key metric being unusually high or low. The question asks you how you would approach investigating or understanding this situation.
This question requires a dose of product analytics knowledge and a dose of detective work. Be inquisitive!
The first step is usually to get more context and try to uncover if there's an "easy" explanation. Do you have historic data to look at? Could it be due to something external to the product, like a global event or seasonality? Could it be due to something the team did, like rolling out new code or starting a marketing push?
If nothing has cracked the problem yet, then start digging in deeper. It may help to slice the main metric by certain dimensions - what does the metric look like for a specific user group, or a specific geography, or a specific time of day, etc? Are there other metrics that might help you understand the situation, either ones that did show the same anomaly, or ones that should have but didn't?
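For example, here's a hypothetical pandas sketch of slicing a metric by a couple of dimensions to localize an anomaly - the columns and data are invented:

```python
import pandas as pd

# Invented ride-level data: one row per ride, with a cancellation flag.
rides = pd.DataFrame({
    "city":      ["MSP", "MSP", "MSP", "MSP", "NYC", "NYC"],
    "platform":  ["ios", "ios", "android", "android", "ios", "android"],
    "cancelled": [0, 0, 1, 1, 0, 0],
})

# Is the anomaly concentrated in one segment, or spread evenly?
by_segment = rides.groupby(["city", "platform"])["cancelled"].mean()
print(by_segment)
# If only (MSP, android) is elevated, suspect a platform-specific bug.
```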
The goal is to figure out what might have caused the data anomaly, but there isn't too much of a rigid framework to follow other than to ask questions and tug on any interesting threads you find. If you get stuck, try thinking about the product, its target users and the pain points it solves - you might find inspiration.
Here's a question from the RocketBlocks drill database. In this case, we're picking an investigation question (type 4):
"You're working as a PM at Uber and notice that ride cancellations in Minneapolis have spiked by 7.8% WoW, a very large deviation from the normal amount. Your immediate contact with relevant expertise, the GM of the Minneapolis market, is out on vacation. Your GPM wants to know what's going on today. How would you investigate what's going on?"
Hm, this is an interesting situation - what would cause ride cancellations to spike like that?
To start off, I'd first like to just look at the historical data for this metric - I know you said this is a very large deviation, but I'm curious whether there have been any other spikes or sudden dips in the past, what caused them, and if there's anything to be learned from those for this situation. Also, have any other cities seen this too, and does that give us any clues?
Assuming that doesn't yield anything, I would try to talk to somebody on-the-ground in Minneapolis to see if there's anything happening in that city that might lead to this. For example, is there really inclement weather? Is there perhaps a large part of the city that's closed off for some event? I'm looking for any unique event happening that might cause such a large spike. I'm assuming this isn't something that's seasonal-but-normal, since you said this is a very large deviation from the normal amount.
I'd also want to ask around internally just to make sure this isn't due to a change we made. Did we perhaps roll out a bug accidentally that just impacts Minneapolis? Did we very recently change the user experience just for Minneapolis users? Is a team running some kind of experiment that's really causing dramatic results?
If that still doesn't yield anything, I'd want to dig more into the data. How do other key metrics look in Minneapolis in this time period? We probably have the ability to segment our user base along many dimensions - for example, perhaps by where the rides start, or how long the user has been with us, or the demographics of users, etc. Does the spike happen evenly across all these divisions, or can we pinpoint it to a particular kind of user or situation?
Doing all this will take a bit of time, so I'd be curious to check in periodically and see whether the spike has resolved itself, gotten worse, or persisted. That might give me more hints as to the nature of the cause.
This response could've gone in many different directions depending on how the interviewer responded and any other information they revealed.
Overall, the response covered a breadth of ideas to look into - as long as the interviewer hasn't led you down a specific path, you should probably continue aiming for breadth of hypotheses. The response considered events external to the company as well as technical reasons. Barring any "simple" explanations there, it then offered some ideas on how to dig into the data more. One way the response could've been even stronger is by incorporating the idea of a funnel. The response did wonder about other key metrics in Minneapolis, but perhaps the candidate could have proactively sketched out what the funnel looks like for this product, and asked about specific metrics relating to that funnel.
For another example of a product execution interview, check out the video below where RocketBlocks Founder, Kenton Kivestu, gives Ankur Biswas, a Microsoft PM, a product execution interview. This particular question is a similar investigation-based execution question, looking into a decline in Instagram engagement.
Finally, don't forget to have some fun with investigation questions like this one - they're like a mini mystery for a PM to solve.