Recruiters & LinkedIn Don’t Have Problem/Solution Fit? [Product Idea Validation Template]

This post shows you how I validated a product idea for free. I also found out that recruiters have unique challenges.

Below I describe specific steps I took and some techniques and tools I used + how I reasoned about the problem.

At the end, I summarize what I could improve and what you could try instead. 

Why validate a product idea?

You may have heard horror stories about failed product launches or failed companies that folded because of failed products. That’s scary, but you say to yourself: “That’s NEVER going to happen to me.”

How e-commerce platform Elliot fell back down to Earth

But…

Executives become invested in the solutions they promise. They push for their completion without asking “why” or “would anyone care”.

I can commiserate with them. It’s scary to propose something experimental/optional in a culture where everyone expects you to know the one right answer. And yet, you also have to pitch & generate answers all the time (again, culture).

So there you have the perfect storm that creates the famed “Feature Factories”. I’ve worked in them, you’ve worked in them, we all worked in them (and some still do).

There are certainly people who invest in stocks without researching them. People buy courses without researching whether other students ever reached their goals.

And many products and feature ideas go on roadmaps and backlogs without any validation.

And guess what: product managers are not immune to this. On the contrary – I know this from my own professional experience.

Raise your hand if you’ve ever supported a feature that someone on the leadership team had been very passionate about, but had done ZERO market or customer research to confirm if it was something anyone would want?

🙈🙋🏽‍♂️🙋🏽‍♂️🙋🏽‍♂️🙋🏽‍♂️🙋🏽‍♂️

“The waiter is a product manager who, at heart, is an order taker. They go to their stakeholders, customers, or managers, ask for what they want, and turn those wants into a list of items to be developed. There is no goal. There is no vision. There is no decision making involved.”

(c) Melissa Perri – Escaping The Build Trap

Don’t be a waiter.

a tweet that says "We waste years because we cannot waste hours." by Shane Parrish

The idea

I lost my job a few weeks ago.

Once again, I had CVs and cover letters to write & jobs to apply for. Nobody was hiring, and companies froze lots of roles.

As I communicated with recruiters, I started to come across people who were less experienced, and some who were abrupt or even rude.

It’s important to remember that everyone is fighting a battle no one else knows about. But it got me thinking about how I could improve the experience for candidates like myself.

I wished that there was a way to provide feedback to the recruiters, both good and bad.

Which recruiters were more helpful than others?

I imagined a system that would distill New Zealand’s 5,000+ recruitment professionals down to a Top 5 list. That list would be someone’s absolute best chance of getting hired. A crowd-sourced system that required minimal admin.

Not a brand-new idea, but an idea nonetheless.

Still, it felt a bit manufactured to me, and I reasoned that I had to get some feedback.

Before anything else

Write about it

You should write down the first seeds of the idea.

Write it in your diary or blog or tweet about it.

I type “docs.new” in my browser (this shortcut works if you’re signed in to Google Docs) and write my thoughts without editing. I write everything that’s piled up in my head: customers, ideas, channels, habit loops – literally anything my mind has conjured.

Map it

Next, I use Lean Stack to map my current understanding of the space. I know it is likely wrong in many spots at this stage. That is exactly what I need to find out: where are the wrong spots?

Lean Stack uses a bunch of familiar concepts, one of which is a variation of the Business Model Canvas – the Lean Canvas.

Completing it takes less than 30 minutes and it serves as the anchor for the idea from then on.

I recommend Ash’s work to anyone working on new products.

Contemplate risky assumptions

I could see issues with the idea right away. I anticipated a 3-sided product, with recruiters, candidates and hiring companies (clients) all coming to the product to learn about the top recruiters.

There would be problems with motivation, as well as value propositions for each side.

Why would recruiters want to pay for something that potentially works against them? Perhaps only to improve their score through insights hidden behind a paywall. That sounded like my main customer would hate my product right out of the gate.

And if this was a pay-to-participate platform, then recruiters with big pockets would raise their profile, leaving the good independent players in the dust. 

And why would a candidate want to pay for the product? The whole purpose would be to make it public. Would they pay to post reviews of recruiters? That would allow someone with a bit of money to potentially ruin the reputation of anyone on the platform.

My dismay at some unprofessional behavior had generated this idea, so I knew it could appeal to others too.

But imagine disgruntled candidates, rushing to pay for a product that would let them destroy a recruiter who had shunned them.

Would the recruiter then also pay for access to respond to those reviews? It painted a picture of a very toxic platform, and even that motivation was hypothetical.

(I also completely missed out on a very obvious solution that already existed. More on that later.)


Still, I wanted to learn about the industry’s problems in general. I set out to explore as I validated my assumptions.

(This post focuses on just one side of that industry – the recruiters – since my first idea was that recruiters would be the primary customers.)

Interview

Research plan

I decided to design a lightweight research plan. I wrote out my assumptions and questions, and against each I noted how and when I would address it.
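If it helps to keep the plan machine-readable, the same structure can be sketched in a few lines of Python. The fields and example assumptions below are illustrative, not the exact contents of my plan:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str   # what I believe to be true
    method: str      # how I'll test it: "interview", "survey", "smoke test"
    when: str        # when in the project I'll address it

# Illustrative entries, not my actual plan
plan = [
    Assumption("Recruiters care about how they rank publicly", "interview", "week 1"),
    Assumption("Candidates would pay to review recruiters", "interview", "week 1"),
    Assumption("Enough candidates feel ignored to post reviews", "survey", "week 2"),
]

# Group assumptions by research method, so each session has a checklist
by_method = defaultdict(list)
for a in plan:
    by_method[a.method].append(a.statement)
```

A structure like this also makes it obvious when an assumption has no test attached to it at all.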

You don’t have to write a plan, but if you do, IBM has a lot of great resources for this.

screenshot of the exact research plan I used for my first interviews

That anchored me & gave me clarity around ways to verify some of my core assumptions.

Interviews seemed like the most versatile tool.

In the past I relied on remote unmoderated studies and surveys, which were safe to do but limited the depth of insights. Interviews are more intense and can be intimidating at first.

But they are worth it: after this project, I love them and want to do more of them.

Discussion guide

The most important aspect of an interview is the questions. I opted for the topic-maps approach.

screenshot of the clusters of topics made in Miro
several jump-off points

Clusters help you stay mindful of the topics covered, and let you approach a subject from different vantage points depending on how the conversation flows.

Outreach

It was easy to source people to interview from LinkedIn. I designed an outreach message and collected a list of people to reach out to. I had Calendly set up and Zoom to host the interviews.

screenshot of a message sent to recruit someone into a study
the outreach
a screenshot of the small Google Sheet where I tracked my interviewees
the spreadsheet

I was the sole interviewer and note-taker, so I had to record my participants.

I don’t recommend you do this alone.

Recording is great even if you’re working with the full team, but transcribing video/audio after the fact was tedious. (I wish I had also used Otter.ai to follow along and transcribe during the conversation. Whatever you do here, make sure you’re doing it legally.)

Obtain consent for recording. I did this at the beginning of the interview and let the participants know when I started recording. No need for NDAs from my side since there’s nothing to disclose.

Insights

Capturing & synthesis

This stage is manual, and I can see why many UX research products have popped up to try to “automate” this.

5-step workflow that I use for analysing interviews, as posted on Twitter

You might have a much better approach and if you feel charitable, please share it!

The first problem didn’t pan out

During my interviews I could see that the “recruiter rank list” idea had no legs. Recruiters didn’t care about their rankings. The industry had no formal governance and the one official organization was a joke to most professionals. They all knew each other through word of mouth. They could see their competitors’ rankings on Google using SEO tools.

I was embarrassed that I had missed an easy, existing way to give feedback to recruiters. Remember Google Reviews? It turns out that if you google any of the recruitment agencies, you will find the exact information I had been planning to provide. Only it’s free.

In fact, I found my own review for one of the agencies (5 stars to Guy Day at Potentia).

And guess what? There were a lot of bad reviews from people who felt slighted by recruiters’ lack of communication. (This insight is a doorway into our next exploration.)

ouch…
it hurts…

If not this idea, then what?

I asked a bounty question at the end of each interview.

“If you had a magic wand, what hugely annoying problem would you wish away?”

I don’t recommend this because the insights are not as great as you expect them to be.

People often jump to solutions, fantasise or freeze up, but you can still pick out a few themes. The key is to try to get people to respond with the first thing that comes to mind. You may interrupt if they take too long.

With those quotes pulled into Airtable or Miro, you can synthesize.

Don’t overthink it. Grab your notes and arrange them in groups. Name the groups. Do this several times (ideally with others). At the end you’ll have clusters of topics.
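The grouping step above is simple enough to sketch in code: each note gets a theme during the sorting passes, and the named clusters fall out of that. A minimal Python sketch, with invented notes and theme names for illustration:

```python
# (note text, theme assigned during grouping) - invented examples
notes = [
    ("Candidates call me all day long", "time management"),
    ("I re-enter the same data into the CRM twice", "tooling"),
    ("I never get back to rejected applicants", "communication"),
    ("Scheduling interviews eats my mornings", "time management"),
]

# Arrange the notes into named clusters
clusters = {}
for text, theme in notes:
    clusters.setdefault(theme, []).append(text)

# Largest clusters first - these are the candidate problem areas
ranked = sorted(clusters, key=lambda t: len(clusters[t]), reverse=True)
```

The point isn’t the code, it’s the discipline: every note ends up in exactly one named group, and the biggest groups surface first.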

Affinity map can be used to validate product ideas by splitting feedback into groups

That’s really it!

What did I learn?

Without revealing much about the participants, I managed to learn a lot about the industry. Being a complete newbie to it, I found out that:

  • Recruiters struggle with time management & communication with candidates. It’s only right: there are many candidates for a role. The recruiter details are often visible on the ad, so people call & email recruiters all day. Trying to balance being nice (someone might be a good candidate) with actually getting things done is a tough feat.
  • The CRMs aren’t always that handy as the one source of truth. Emails and details slip through, and then you have to do double entry. That only amplifies the time-management issue above.
  • Competitors are a concern to some, especially when several agencies + internal recruitment team are also involved in the race. Getting a clear read on what contracts to pursue can be a struggle. The solution to this is building trust with clients, usually by sourcing good candidates consistently (i.e. being good at your job).
  • Identifying great candidates is the main challenge. This is what the industry does, essentially. Or does it?
  • People don’t see value in recruiters. They don’t understand what a recruiter does. There is no clear evidence that recruiters actually help people get hired. Recruiters may be unavoidable by virtue of being gatekeepers, but most candidates would prefer to talk directly to the hiring manager. This is further supported by the fact that a large part of a recruiter’s time is spent arranging communication between candidates and the client.

As Hiten Shah, the co-founder of Crazy Egg & KISSmetrics, may put it, recruiters don’t have product/market fit. (Read Hiten’s awesome analysis of why Zoom doesn’t have product/market fit.)

What’s Next?

The 5 themes above are 5 new product problems to explore (that will undoubtedly branch into many more).

For each, we now have to create a new Lean Canvas and explore individually.

This fractal approach is very similar to Melissa Perri’s Product Kata concept. Using it over and over will help validate a number of problems.

An illustration of the Product Kata process by Melissa Perri. It is used to validate product ideas & problems.

What first?

Anchoring the recruiter as the user, I wrote down their needs and linked them together. I started to see patterns, similar to what Ryan Singer from Basecamp recently shared as an “interrelationship diagram”.

A tweet from Ryan Singer with an illustration of the interrelationship diagram method.

Recruiters had an issue with time and communication because they had many candidates reaching out to them.

Those who didn’t cut it, as many don’t, didn’t get a response – which soured the relationship and damaged future trust towards recruiters.

Meanwhile, exceptional candidates rarely (if ever) apply. This, in turn, means that the clients often get second-hand options. This tarnishes their trust and drives the need to hire many recruiters (reasoning that at least one would solve the problem).

the interrelational diagram that helps to decide what problem to start with when validating product ideas
not a pentagram!
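One thing an interrelationship diagram gives you cheaply is a way to spot drivers: nodes with arrows flowing out but none flowing in are likely root causes. Here is a small Python sketch; the edges are my paraphrase of the chains described above, not a faithful copy of the diagram:

```python
from collections import Counter

# cause -> effect pairs, paraphrasing the chains above (illustrative)
edges = [
    ("many candidates reaching out", "time pressure on recruiters"),
    ("time pressure on recruiters", "no response to rejected candidates"),
    ("no response to rejected candidates", "candidates distrust recruiters"),
    ("exceptional candidates rarely apply", "clients get second-hand options"),
    ("clients get second-hand options", "clients hire many recruiters"),
]

out_deg = Counter(src for src, _ in edges)
in_deg = Counter(dst for _, dst in edges)

# Root causes: effects flow out of them, nothing flows in
drivers = [node for node in out_deg if in_deg[node] == 0]
```

With this reading, “many candidates reaching out” and “exceptional candidates rarely apply” fall out as the drivers – which matches where the rest of this post ends up focusing.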

New problem

I didn’t want to get into CRM wars. Who has ever heard anyone gushing about their CRM? While that might spell “great opportunity”, going up against Salesforce or even Job Adder seemed like a tall order. It’s also hard to be excited about a CRM.

Time management and email management may have legs. I played with the idea of creating a time-management course, or repurposing GTD and selling that. But it seemed like putting a band-aid on a severed hand.

I arrived at the problem of finding & identifying great talent.

But of course, there is the elephant in the room – this is what recruiters do themselves, right? And what about LinkedIn? Isn’t that why LinkedIn exists, to a certain degree?

But ask yourself – does either of these “products” have good fit?

Someone also said that bad candidates would fill out any form you give them, while great candidates wouldn’t even lift a finger. Instead of pulling in all sorts of candidates through an ad, maybe there’s a way to use a push model? A recruiter is really a scout, a talent agent.

My current hypothesis is that there is an untapped market of great hidden talent:

  • Not on LinkedIn at all, or inactive on it;
  • On LinkedIn and active, but not thinking about moving. They’re working in a nice role, have support from leadership and are growing.

These people:

  • Don’t want to fill out any forms;
  • Don’t care if they are unemployed;
  • Are on a career trajectory that is solid;
  • Are so busy that they don’t have time to promote themselves.

How would this affect other problems?

  1. Having great hidden talent on tap would make the recruiter indispensable in the eyes of clients. Some competition doesn’t matter as much.
  2. Time will be spent on priority candidates. It’s like being secure in your relationship instead of playing around. You get a lot of free time.
  3. Communications improve because there are fewer participants (there may still be a lot of back & forth over contract terms).
  4. The funnel is smaller but each candidate is heftier.
  5. CRM data is just a non-issue at this stage.

Next steps

It’s back to the Product Kata. The next step is to validate this new theme.

Appendix

What could I have done differently?

Interview skills

  • I hadn’t done many interviews before, so more practice would have been beneficial. This was practice though, and I’m glad I did it. But I could have benefited from some coaching.
  • My discussion guide was a mess. I asked some questions and not others. I recalled the clusters and barely looked at it. I only remembered to ask the bounty question at the end – everything else was not scripted.
  • Similar to the first point, when reviewing my questions, I noticed that I had often formulated them poorly. Perhaps I should have stuck with a more structured and rigid discussion guide. Our conversations flowed, but sometimes they went in directions that added no value.

Lean UX research

  • It would have been good to follow my research plan and do a survey. I would also have run some FB/LinkedIn/Google ads on top of the interviews to gather more quantitative data. Alas, I am in conservation mode until I get my FU money – so I didn’t do those.
  • Subsequently, I didn’t run any smoke tests. My Carrd subscription ran out right before COVID-19 hit, and I haven’t renewed it. It would have been too early to validate anything that way. However, smoke tests make you think about features, copy and problems in an intense way – and asking those questions might have generated a more robust discussion guide.

If you have any thoughts on how to do things better, hit me up on Twitter, LinkedIn or via email.

If you have used this guide to validate your idea, I would love to hear about it, too!