Search Product Management: The Most Misunderstood Role in Search?
What does it mean to be a Product Manager (PM)? What does the search team do? What does the PM do for the search team at your organization? Is there a difference between Search PMs and other PMs?
First, let’s talk about the PM role in tech, more broadly.
The Product Manager
Product management might be the most misunderstood discipline in tech. If I were being glib, I could sum it up with a scene from the movie “Office Space” where the two Bobs ask Tom “What would you say … you do here?” To which Tom responds “I take the specifications from the customers to the engineers!”
Of course, that’s a gross oversimplification. If all you do as a PM is take specs from customers and give them to the engineers, then you really aren’t adding value.
In reality, Product Management is a nuanced, delicate, intricate discipline of connecting customers, engineers, marketing, management, design, research, data, and a host of other stakeholders. The PM is the hub of the team, taking in information and distributing it appropriately.
There’s more to a PM’s job than distributing information. She must prioritize tasks, make connections, and set the direction (mission, vision) for the team. She must also use the mission and vision to set goals for the team and evaluate the team against those goals.
If all a PM does is take specs from customers and send them to the engineers … well, that would be doing a great disservice to the team, the customers, and the business. You see, when customers say “I want X feature” you may try to deliver X feature, only to find out that feature gets little use. Perhaps you didn’t understand X well enough? Maybe you didn’t check to see if X was useful to more than that one customer? Maybe that customer didn’t really understand what motivated them to want X? Maybe your team just didn’t deliver X as the customer understood X… you actually delivered Y?
All of these are reasons that a feature can fail. In my mind, understanding what the customer/user really wants is the job of the PM. As Henry Ford once famously said (but didn’t actually):
“If I’d asked my customers what they wanted, they would have said a faster horse”
Never mind that he didn’t say it, and that when asked, people would probably have thought a horse that pooped less would be better than a faster horse; this quote has stuck around because of its enduring truthiness: people don’t know what they really want. In usability, we call this a “latent need,” a need that the user doesn’t know they have or can’t express well. Perhaps a better approach is the Jobs To Be Done (JTBD) framework. People don’t need 1/4" drill bits, they need 1/4" holes. Come up with a better way to make 1/4" holes and you’re in the money. Really, JTBD is about finding tasks that are important and underserved, but the 1/4" holes thing is just too good. Regardless, when your customers ask for longer buggy whips, don’t just ask them how much longer, eh.
The job of the PM is to figure out what the users’ jobs are, how they go about accomplishing them, and how we could help them do those jobs better. Then work with the engineering team to figure out how best to deliver features that solve those tasks. Then work with multiple stakeholders to ensure smooth delivery of the products or features that solve those tasks. Easy, right? I wish!🙄
Now that we know what a PM does, then it should be easy enough to translate to the Search domain, right? Not exactly…
Product Management, as a discipline, takes on a different bent when we move to Search. Search is just built different.
The Search PM
I’ve written before about a statistical and human approach to search. I’ve written about measurement, human evaluation, A/B testing for search, and some processes to bring all of that together.
The job of the PM in the search team is to bring all of those things together. That means that the Search PM needs to have an understanding of metrics, statistics, A/B testing methods, machine learning, as well as the technical aspects of search: indexes (indices?), query understanding, query rewrites, suggestions, spell corrections, faceting, and so much more. They need to understand these things at least well enough to have intelligent conversations with the engineering team.
The job of the Search PM is to connect the available technology to the users’ problems.
But that’s the same for any PM! What’s different about being a Search PM? Well, a few things. First, search is an inherently more speculative and experimental domain than most other user-facing domains. Second, search is more data-oriented. Third, in search, the tasks your users are trying to accomplish are more constrained.
If you are a search PM, “what the users are trying to do” is usually finding a document, product, webpage, or some piece of information. They do that by providing a search query expressing their intent (often poorly). It’s our job, as search PMs, to understand the technology well enough to get those users to their documents. That doesn’t mean that you need to be a machine learning expert (that’s what the engineers are for!); your job is to understand the technology and what it enables for your users.
I’ve used the example previously of finding the right price range within a category so that, when users search for an appliance, e.g. “microwave”, we feature the appliances themselves rather than the less expensive parts for that appliance. That’s a pretty simple realization, but making it work requires a pretty significant understanding of the “right” price range to feature. How does one figure that out on a per-category basis? Well, you have to look at a lot of transactions, figure out where the majority of purchases happen, and then boost the items that are in that range. You could do that a number of ways: rules, decision trees, logistic regression, a naive Bayes classifier, etc. So the challenge becomes: which method best meets our constraints while achieving our goals?
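Here’s a minimal sketch of the simplest version of that idea, assuming you already have transaction data as (category, price) pairs; the function names and the “middle 50% of purchases” rule are my own illustrative assumptions, not a prescription:

```python
# A minimal sketch of the "featured price band" idea, assuming per-category
# transaction data is available as (category, price) pairs. The quartile rule
# and all names here are illustrative assumptions, not a specific engine's API.
from collections import defaultdict
import statistics

def price_band_by_category(transactions):
    """Return, per category, the price range where the middle ~50% of purchases fall."""
    prices_by_category = defaultdict(list)
    for category, price in transactions:
        prices_by_category[category].append(price)

    bands = {}
    for category, prices in prices_by_category.items():
        q1, _, q3 = statistics.quantiles(prices, n=4)  # quartiles of purchase prices
        bands[category] = (q1, q3)
    return bands

# Example: microwaves cluster in the low hundreds; a cheap part falls outside the band.
transactions = [
    ("microwave", 89.0), ("microwave", 129.0), ("microwave", 149.0),
    ("microwave", 199.0), ("microwave", 249.0), ("microwave", 12.5),
]
low, high = price_band_by_category(transactions)["microwave"]
print(f"Boost items priced between ${low:.2f} and ${high:.2f}")
```

In practice you’d likely compute something like this offline and feed the band into a boosting rule at index or query time, which is exactly where the constraint questions come in.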
This is where you, as the search PM, have to understand the constraints: do we need to do this classification at runtime or will it be at index time? How much time does each method add to either? What is the accuracy of the classification? How many person-hours will it take to implement and tune? These are technical questions that require technical expertise to answer. But to trade off the answers and come to an acceptable solution, that’s a product decision. If you have a classifier that will take 6 weeks to develop, delivers 98% accuracy, and only adds 30ms to the query execution time, is that good? Or would it be better to go for the system that will take 2 weeks to develop, adds 3 hours to index time, but is only 80% accurate? The answer is “it depends.” It depends on the goals of your business, users, and technologists. It’s a tradeoff that someone has to decide, and that someone, dear Search PM, is you.
Another role of the Search PM, beyond prioritizing the efforts of the team, is to evaluate the work of the search team. I don’t mean looking over everyone’s shoulder to find missed semicolons, but rather developing the metrics and goals that the team will be evaluated against (0 misplaced semicolons). This requires an understanding of the metrics, what it will take to optimize them, and the implications of that optimization. Again, it’s not about being a data scientist, but about listening to your data scientists and understanding the tradeoffs. A data scientist isn’t going to tell you whether your users’ chief goal is finding one perfect document or finding a bunch of relevant stuff. That has to come from you; it’s a product decision. Naturally, you’ll want to listen to UXR, Market Research, Marketing, or business stakeholders, but it’s up to you to decide, so you’ll want to make sure you understand the tradeoffs. A search engine tuned for P@10 is going to be different than one tuned for MRR, which is going to be different than one tuned for DCG. Know what kind of search engine you are creating when you tune for each metric.
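To make that concrete, here’s a toy sketch of what each of those metrics rewards, computed over made-up graded relevance judgments for a single query; real evaluation would average these over many queries:

```python
# A toy sketch of what P@10, MRR, and DCG each measure, over one ranked result
# list. `relevances` is a hypothetical list of graded relevance judgments
# (0 = not relevant) in rank order.
import math

def precision_at_k(relevances, k=10):
    """Fraction of the top k results that are relevant at all."""
    return sum(1 for r in relevances[:k] if r > 0) / k

def reciprocal_rank(relevances):
    """1 / rank of the first relevant result -- rewards one good hit near the top."""
    for rank, r in enumerate(relevances, start=1):
        if r > 0:
            return 1.0 / rank
    return 0.0

def dcg(relevances, k=10):
    """Discounted cumulative gain -- rewards graded relevance piled near the top."""
    return sum(r / math.log2(rank + 1) for rank, r in enumerate(relevances[:k], start=1))

ranking = [3, 0, 2, 1, 0, 0, 1, 0, 0, 0]  # judgments for one query, in rank order
print(precision_at_k(ranking), reciprocal_rank(ranking), dcg(ranking))
```

MRR only cares about how high the first relevant result sits, P@10 counts how much of the first page is relevant at all, and DCG rewards stacking graded relevance near the top, which is why tuning for each produces a noticeably different engine.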
The other piece of setting goals is tracking against them over time. Are you delivering value, release after release or experiment after experiment? You need to be familiar with the metrics and check them regularly. You should probe questions like “why did DCG fall after last week’s release?” Maybe the Search team released a new feature, or maybe the Search Front End team changed the order of filters. You need to be checking in with your “heartbeat” every day. Doing this will help you answer “how far away am I from my goal?”
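A “heartbeat” check can be as simple as the sketch below, which assumes you log a daily average DCG somewhere; the numbers and the 3% alert threshold are invented for illustration:

```python
# A toy daily "heartbeat" check, assuming a daily average DCG is logged.
# The values and the 3% drop threshold are invented for illustration.
def heartbeat(daily_dcg, drop_threshold=0.03):
    """Flag day-over-day metric drops big enough to warrant a 'what shipped?' question."""
    for yesterday, today in zip(daily_dcg, daily_dcg[1:]):
        change = (today - yesterday) / yesterday
        if change < -drop_threshold:
            print(f"DCG fell {abs(change):.1%} -- what changed? (a release? a filter reorder?)")

heartbeat([0.412, 0.415, 0.409, 0.380])  # the last day should trigger a question
```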
The problem with the setup I’ve just described is that a goal like “10% increase in DCG@5 over the course of the year” can be affected by things outside the Search team’s control. If the front-end team changes how filters work, that could improve DCG without any help from our Search Relevance team (I mean, we’d take the win). My approach to goal setting is to aggregate the experiment improvements over time. So if we set a goal of 10% improvement, what we really mean is that the DCG gains measured across our experiments will sum to 10%. If we run 5 experiments with a 2% gain each, we have met our goal. This removes the impact of outside forces on our goals (like Covid causing 10% user attrition 😬). It can be helpful to have an overall goal, just to keep ourselves honest, but experiment-based goals help us focus on the things we can truly change, and grant us the serenity to accept the metrics changes we cannot. We’ll still have to work with other teams to figure out why they are shipping features that impact our precious DCG, though!
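The bookkeeping for experiment-based goals is deliberately simple; the sketch below, with made-up experiment names and lifts, just sums the measured lift from each shipped experiment against the annual target:

```python
# Experiment-based goal bookkeeping: sum the measured lift from each shipped
# experiment and compare it to the annual target, independent of outside forces.
# Experiment names and lifts are made up; lifts are treated as additive, per the
# goal definition above (compounding them would give a slightly higher total).
annual_target = 0.10  # 10% DCG improvement for the year

experiment_lifts = {
    "query-rewrite-v2": 0.02,
    "spell-correct-tuning": 0.02,
    "price-band-boost": 0.02,
    "facet-reorder": 0.02,
    "suggestion-ranker": 0.02,
}

total_lift = sum(experiment_lifts.values())
status = "met" if total_lift >= annual_target else "not yet met"
print(f"Accumulated lift: {total_lift:.0%} of a {annual_target:.0%} target ({status})")
```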
Working with other teams on their metric impact is a great example of the “hub” metaphor for Search PMs. Other teams will have their own goals, and they may not be worried about DCG. We need to stay abreast of their tests (through the launch review process) and make sure that the Search team’s voice is heard when those experiences negatively impact the Search team’s key metrics.
Likewise, another job of the Search PM is to decide if an experiment is worth shipping. Sometimes we may ship an experiment even with negative metrics, because it adds some other value to our site. More often, a test with neutral or conflicting metrics needs someone to act as decision maker: to ship or not to ship, that is the question … that a Search PM must answer. Again, it’s the combination of business, technical, and user experience considerations that must be weighed. If a test adds business, technical, or UX value, it could be worth shipping, even with neutral or even negative search metrics. Someone has to make that choice, and I propose that someone is the Search PM.
The final thing that a Search PM does — and it may be the most important — is to connect the engineering team to the users. Engineers have the technical aptitude, but they don’t often get the exposure to real users (and real user problems) that will motivate them to solve those issues. You’ll notice that I haven’t once suggested that the PM is the “haver of ideas” for the Search team. Often (perhaps mostly) it’s the engineers who have the best ideas of how to solve user issues. In order for that brainstorm to occur, the engineers have to connect with customers, understand users’ problems, and figure out how technology can solve those problems. That’s what is so powerful about the Query Triage methodology. Connecting technologists to user issues will invariably spark multiple ideas for what’s most important and how best to solve those issues. The goal for the PM is to get the engineering team’s noses out of the latest WSDM paper and help them “fall in love with the problem.”
Great ideas can come from anywhere in search, but they have to find the right application. Technology can push and user need can pull; it’s up to the PM to help define the right approach for the right problem, to prioritize, to connect the team to the larger organization, and (most crucially) to connect the team to the users. In that way the PM is both the most important person on the team and the least important. The job of the PM is to prioritize and facilitate, without which the team will lack direction. And if a search team doesn’t have that direction, it will lack impact.
But! It’s not so straightforward as just prioritizing and shipping, either. Everything in search is an experiment, it’s speculative. There are no guaranteed wins on relevance. Remember it’s baseball, not golf, and you need to take as many swings as possible, with the best information available.
That… is the job of the Search PM. To guide the team and ensure they are having an impact on the user experience and the business. Focus the team on improving relevance. Help them take as many swings as possible and connect them with user problems so they are swinging at the right pitches.