
The Breakthrough Hiring Show: Recruiting and Talent Acquisition Conversations
Welcome to The Breakthrough Hiring Show! We are on a mission to help leaders make hiring a competitive advantage.
Join our host, James Mackey, and guests as they discuss various topics, with episodes ranging from high-level thought leadership to the tactical implementation of process and technology.
You will learn how to:
- Shift your team’s culture to a talent-first organization.
- Develop a step-by-step guide to hiring and empowering top talent.
- Leverage data, process, and technology to achieve hiring success.
Thank you to our sponsor, SecureVision, for making this show possible!
EP 165: AI-Powered Recruitment Strategies with Metaview's Co-Founder and CEO Siadhal Magos
James Mackey and Elijah Elkins speak with Siadhal Magos, Co-Founder and CEO of Metaview, about the methods Metaview uses to maximize recruitment potential, including AI-driven candidate rediscovery and strategic compensation alignment.
He shares how AI technology enhances hiring efficiency and precision, enabling organizations to make informed decisions based on real-time data from candidate interactions.
Our host: James Mackey
Follow us:
https://www.linkedin.com/company/82436841/
#1 Rated Embedded Recruitment Firm on G2!
https://www.g2.com/products/securevision/reviews
Thanks for listening!
Hey, welcome to the Breakthrough Hiring Show. I'm your host, James Mackey. We've got Siadhal Magos today. He is the co-founder and CEO of MetaView. Siadhal, it's really good to have you back on the show. Thanks for joining us.
Speaker 2:Thanks for having me, James. Pumped to dive in again.
Speaker 1:Yeah, absolutely. We also got Elijah here, our co-host. What's up, Elijah?
Speaker 3:Hey, james, good to be back.
Speaker 1:All right, so MetaView. A lot of folks in the recruiting space have heard of MetaView. You guys have been around for what? Six, seven years? Is that right at this point?
Speaker 2:That's right. Yeah, that's right.
Speaker 1:Cool. So yeah, it's a name in the industry that a lot of people have heard of, and particularly over the past couple of years I think you're getting some really incredible momentum and traction, even more than what you already had, and you were already pretty well known. So I'm excited to dive into that. But for folks that maybe aren't as familiar with you and MetaView, it'd be great if you could just give us a bit of an intro: background on yourself, your founding story, that kind of stuff would be really cool.
Speaker 2:Yeah, totally. So I started MetaView really off the back of my experiences hiring at a really high-growth company. I was not in recruiting, but I was a hiring manager at Uber. I was a product manager there, contributing to hiring more product managers, engineering managers, product marketing managers, design managers, everything you'd expect. My co-founder was an engineering lead at Palantir and was deeply involved in hiring there, and he and I would jam on things we could maybe solve in the world. One of the things we thought was, A, a really big problem and, B, obviously an opportunity was this: we were part of these companies, Uber and Palantir, that take hiring incredibly seriously and are doing everything they can to get world-class results in their hiring process, but still there's a lot of guesswork involved. It's very lossy. You're losing a lot of the information you're receiving during the hiring process.
Speaker 2:It's just generally a messy thing, and everyone knows this, right? That's not new, so I don't need to labor it too much. But the thing we thought was changing in the world was this: given it's so clearly obvious that the most important data when you're trying to hire someone is the information you get from them when you're having a conversation with them, are we now in a world where you can capture and do something with that data, instead of just asking your however many hundred people who run interviews to write that information down and hopefully do something with it that way?
Speaker 2:That was what we thought had changed six or seven years ago, as more interviews started to move onto digital formats. And, as you mentioned, we built a product that some customers loved, I would say. Now, in the last two or two and a half years, since that GPT moment, suddenly everything on our roadmap that we thought was maybe 10 years away just got dragged forward, and you can do magical things with this unstructured data: things that make recruiters' lives far easier, hiring managers way more confident in their decisions, and recruiting leaders far more informed about what's actually going on in the hiring process. There's just a ton of downstream impact when you're capturing that data. So that's a bit of a long background to say: what we really do is focus on capturing the data from within the interview process so that organizations can radically increase their efficiency and their precision when they're hiring.
Speaker 1:Love it, and I'm really excited to learn about what direction you want to continue to build in as well. I was looking at the website, and there's one thing I wanted to pull on. Maybe this is going too specific up front, but one of the things you mention is different reports, right? I'm curious what type of reporting and analytics you're doing within the product, because that also potentially informs the product roadmap. Elijah, when we were talking with, was it Ben over at BrightHire? They were doing some analytics around how well the hiring team was interviewing, like giving the interviewers a score, and I think Pillar was going in a different direction with their analytics. It seems people are thinking about measuring interviewing success in different ways and leveraging this technology in different ways. So I'm curious: when it comes to actionable insights and reporting, what's the focus for your team right now?
Speaker 2:Yeah, things around interview quality are important. We have various features and functionality there, and various ways you can query the AI to help inform you on that. I think there's way more that we can all do there.
Speaker 2:Where MetaView is really strong on the reporting side is, I would say, more the tactical elements. It's understanding: who are the candidates in my pipeline who have said they've closed deals over a certain size and are willing to relocate to this area? I'm trying to fill this req, MetaView knows I'm trying to fill this req because it was part of the intake meeting, either with the hiring manager or with the client, and these are the people that actually match up to that. So it's a little more tactical, connected to the thing that is stressing out a recruiter or a hiring manager on that day. We do think of it as reporting, because the output is often a list or a chart of, well, these are the number of people that match these conditions that you've seen over time. But in some cases it's almost just a very sophisticated AI filter over all of the conversations you've had on the platform. Really common ones we see are things like the one I mentioned: AEs who match certain conditions on the types of deals they've closed, and maybe even the previous companies they've worked for, so I can go and source from those companies if I want more people like that. People also use MetaView a lot for what we think of as dynamic salary reporting. What I mean by that is, of course there are really robust data sets you can get for salary benchmarks, but they're not real time. While you're getting that data, say once a month or once a quarter when it refreshes, you're actually getting hundreds of data points every day of what candidates are telling you their salary expectations are. Now, that doesn't mean that's definitely what they're getting paid, but you're at least learning what their expectations are. And that's a really interesting data point if you can look at it over time and see how it differs by geography or by role and all these different things. So that's a really common one that people do as well.
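To make the tactical filtering and dynamic salary reporting ideas concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the field names, figures, and filter logic are illustrative assumptions, not MetaView's actual data model or API.

```python
from dataclasses import dataclass
from statistics import median

# Hypothetical structured attributes an AI might extract from interview
# conversations. None of these names come from MetaView.
@dataclass
class CandidateSignal:
    name: str
    role: str
    max_deal_size_usd: int        # largest deal the candidate said they closed
    willing_to_relocate: bool
    salary_expectation_usd: int   # stated expectation, not verified pay

signals = [
    CandidateSignal("A. Lee", "AE", 250_000, True, 140_000),
    CandidateSignal("B. Ortiz", "AE", 90_000, False, 120_000),
    CandidateSignal("C. Patel", "AE", 400_000, True, 165_000),
]

# "AEs who said they closed deals over $100K and will relocate" as a filter.
matches = [s for s in signals
           if s.role == "AE"
           and s.max_deal_size_usd > 100_000
           and s.willing_to_relocate]
print([m.name for m in matches])  # ['A. Lee', 'C. Patel']

# "Dynamic salary reporting": stated expectations refresh with every
# conversation, unlike benchmark data sets that update monthly or quarterly.
print(median(s.salary_expectation_usd for s in signals))  # 140000
```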
Speaker 2:And the last one I'd say is quite related to one of the other ones you mentioned. A lot of the time, the very specific "hey, this is a good interview, and this is exactly how this interview needs to change," I actually don't think we're there yet, because conversations with candidates are rightfully more nuanced than that, and we don't want these super robotic "ask this, then this" formulas, which I think is the limitation of reporting that focuses on that too much. You get false positives, right? You call someone out for being a bad interviewer when actually they're not, they get flagged as a bad interviewer, and then you lose credibility and you lose trust in that platform. So we really try to avoid that stuff. But what we can do really well is flag anomalies: something that is clearly out of line with how interviews are being run in the rest of the company. People use MetaView for that as well.
Speaker 2:You introduced this segment with the word reporting, and we do call it reporting, it's our AI reports functionality. But sometimes, when I'm talking about this with customers, I talk about it more as an AI scout. You can think of it as: you now have a colleague, a coworker, an AI, frankly, in all of these conversations, and it surfaces whatever it thinks you should really know about in any one of them, or any trend that emerges across them. Whether it's, hey, there's really low talk time over here, or a really interrogation-style interview over here that then resulted in the candidate rejecting the offer, it's fusing these different data points. And again, it's not necessarily a report in the sense of something you show your senior leadership team every month, of hey, this is how we're getting better over time. Some of that stuff would be cool as well. It's more: listen, there was a problem yesterday, why don't we do something about it right now so we can save that candidate?
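As a rough illustration of the anomaly-flagging idea, surfacing clear outliers rather than scoring every interviewer, here is a toy sketch. The talk-time field, the cutoff, and the data are assumptions for illustration, not MetaView's method.

```python
from statistics import mean, stdev

# Toy data: share of each interview in which the interviewer was talking.
interviews = [
    {"id": "int-101", "interviewer_talk_ratio": 0.55},
    {"id": "int-102", "interviewer_talk_ratio": 0.60},
    {"id": "int-103", "interviewer_talk_ratio": 0.58},
    {"id": "int-104", "interviewer_talk_ratio": 0.52},
    {"id": "int-105", "interviewer_talk_ratio": 0.57},
    {"id": "int-106", "interviewer_talk_ratio": 0.92},  # interrogation-style outlier
]

ratios = [i["interviewer_talk_ratio"] for i in interviews]
mu, sigma = mean(ratios), stdev(ratios)

# Flag only clear outliers against company norms instead of grading everyone,
# to avoid false "bad interviewer" labels. The 1.5-sigma cutoff is arbitrary.
anomalies = [i["id"] for i in interviews
             if abs(i["interviewer_talk_ratio"] - mu) > 1.5 * sigma]
print(anomalies)  # ['int-106']
```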
Speaker 1:So that's really helpful context. The reason I led with the reporting angle is really just to try to bring a fresh perspective and start off the episode a little differently than maybe we have in the past. But it's interesting, because you're saying what recruiters usually ask the AI agent is essentially: hey, in the last three months, who are the people that had an average deal size over $100K, or whatever it might be?
Speaker 1:I have been thinking a lot about candidate rediscovery lately as a use case for AI, among other use cases, and I'm wondering: is there a broader application there? If somebody was interviewed two or three years ago, would you be able to ask the MetaView agent, hey, can you pull a list of people from all the revenue jobs, or for this role, over the past three years, that fit the following criteria or that answered questions in this way? Can it be used further back for candidate rediscovery as new reqs come up?
Speaker 2:Yeah. The only reason I don't give an example around three years ago is, I guess, one, because, as you mentioned, a lot of our growth has been in the last two years, so most of our customers don't have three years of data.
Speaker 2:So even just conceptually, it's not something they can go and do right now, because we're getting more new customers every month and that sort of thing. But I think there is some time limitation as well. One thing that's really amazing about starting to capture and harness conversational data, which every recruiting leader should be doing, in my opinion, I think it's irresponsible not to at this point, is that you're making the half-life of your data much longer. Right now, once a hire is made, you're probably almost never going to look at some of the data in your applicant tracking system about a candidate again. So even getting three months or six months or a year's usage out of it is really improving things.
Speaker 2:I would say there might be some diminishing returns on a silver-medalist candidate from three years ago or something, just because so much will have changed in their life by the time you're looking at them again three years later. But of course there could be cases where it's interesting. There's no limitation on the timelines other than what the customer imposes. Some customers like to delete data regularly and might get rid of it, but that's on them.
Speaker 1:Yeah, for sure. I guess that also depends on where they want the data stored, right?
Speaker 2:Yeah, yeah.
Speaker 1:Okay, cool, elijah, I could keep going here, but any questions top of mind for you right now?
Speaker 3:I just wanted to mention, I think the compensation piece is huge, right? For every tech startup I've worked with over the past seven or eight years, that's one of the key data points they want from almost every candidate interview: what are their expectations? We've got to be careful in New York City and other locations with rules around compensation and asking candidates what they're making now, so we usually ask it in a more nuanced way: what are your expectations for a role like this?
Speaker 3:But that data is really important, right? I don't know if you're seeing customers use it like this, but we've had a number of recruitment leaders or senior recruiters wanting to change the comp band, and they need that data to push back to the comp and bens team: this is what candidates at the experience level we're looking for are asking for, here's our range, and we're not going to get the talent the hiring manager needs and that we need. Are you seeing it used in a similar way with your customers?
Speaker 2:Oh, 100%, 100%. Yeah, all the time. And one bit of flavor I'd add: the ability to extract that data even though you haven't structured it yourself, that's the big thing that's changed, right? Your company has been bombarded with all of these words from these candidates, and you can use AI to make sense of some of it without having your team members structure it manually. That's obviously the magic there. But there's something quite powerful even about the basic concept of sharing the audio or video of the candidate explaining their compensation expectations with folks in, let's say, comp and bens. Suddenly you've gone from, hey, the recruiting team is saying they're not hitting their headcount goals because of comp, and they're just complaining about it, which is very easy to distance yourself from, to actually hearing one or two audio or video clips of candidates saying it. That brings a level of richness and realness that creates a much better conversation internally to solve it. So yeah, that's really common.
Speaker 2:There are all sorts of cases beyond straight-up recruiting, other HR functions, where you see benefits and learnings from this data as well. Things like operationalization of values: whenever an HR team leads, or works with the executive team, on re-instituting or sometimes even redesigning values, one of the first places that gets pushed to, rightly, is the interview process. But it's not necessarily recruiting's job; there's still this handover moment where it's an HR-slash-leadership thing and then it needs to be operationalized in the interview process. And there's really high-quality feedback in this data about the salience of that cultural messaging, which is actually really important to the C-suite and the HR team. So there's a lot of value in the recruiting data beyond just the recruiting team.
Speaker 3:Yeah, that's great. One more quick question. This is a very direct question, but I think it's super relevant for your buyers, because I've been in that seat myself. When talent acquisition leaders are looking into interview intelligence tools, or however you want to phrase the category, you've got your generic tools like Fathom and Zoom AI, the not-purpose-built tools that recruiters may be using. So the first part is: how is MetaView, and even the interview intelligence category, different from and better than those more generic tools? Because there is a premium versus the generics. And the second part is: how is MetaView different from other players or competitors in the space, and where do you see MetaView going? Do those two parts make sense?
Speaker 2:Yeah, for sure. There are three parts to the answer, and I probably don't need to separate out the generics from other folks in the space. Before I get into the specifics, which, don't worry, I will, the core of the differentiation between something like MetaView and Zoom or Microsoft Copilot, these obviously amazing companies putting hella effort into building amazing AI products, is focus, right?
Speaker 2:We are truly thinking about: what's the day in the life like in a recruiter's world or a hiring manager's world? What's the outcome they care about? And, really importantly, what are the other pieces of context we can infuse our AI with to make it outperform a generic AI? That can be everything from how we fine-tune or how we build our system prompts within our product, all the way through to knowing that this candidate is going for the senior software engineer role, and this is the job description for that role, and this is the scorecard for that role. There's just a bunch our AI can do to create really high-quality notes out of that conversation, notes that are really relevant to the recruiter or the hiring manager or the approver of the candidate, that a generic tool can't. And if those big guys decide they also want to do that work of sucking in other job context, then that's what it would take to match the accuracy, which is really what I'm getting at there, the accuracy of the notes we can achieve within a recruiting context. So the first one is accuracy. And when I compare MetaView to other interview intelligence tools, I really think the difference comes down to the fact that we're really an AI company first. So much of the starting point is broadly similar, which is: let's start by capturing the conversation, because that's the new thing, the new data source that's suddenly useful. But a lot of the downstream ends up being quite different, and I think people are seeing that already if they check out the various products.
Speaker 2:So for us, accuracy is the number one thing. You cannot build any exciting AI applications on top of this conversational data if your AI does not have an accurate understanding of the context. These are very tactical things I'm about to explain, but they're really important. If your recruiting AI, which is obviously what we are for many of our customers and what we're increasingly trying to be, doesn't understand that when a candidate said my comp is 200, they meant $200,000, that's a problem. It's really important for the AI to understand, within this context, what's being talked about: what's the currency, what's the actual number they're referring to. Same thing when they're talking about which programming languages they're familiar with: when they say C sharp, they don't mean "I see sharp things," they're obviously talking about the programming language. I know it almost sounds funny to think about it this way, but that's literally the work going on in our company multiple times every week, where we realize, okay, we need to bake in this additional context, add things to the knowledge base in our library, so that our AI is smarter about these things. Not just because we want the notes to be perfect, which is really important, but also because if I want to run a report on that data later, I can't do it if the AI hasn't actually understood the context correctly. And increasingly, as you want the AI to get more agentic and proactive, if it doesn't have the right understanding of the context, it won't be able to do those things. So the way that differentiation manifests right now is that MetaView has by far the most accurate notes on the market. The great thing is that's also what saves recruiters and hiring managers the most time, because if something's not accurate and you have to fix it, you're not saving yourself time at all, you're doing the opposite.
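A toy sketch of what baking in that context can look like for the two examples above: reading a bare "200" in a salary conversation as $200,000, and "C sharp" as the language C#. The rules and names are illustrative assumptions, not MetaView's pipeline.

```python
import re

# Illustrative mapping from spoken forms to canonical skill names.
KNOWN_LANGUAGES = {"c sharp": "C#", "c plus plus": "C++", "javascript": "JavaScript"}

def normalize_comp(utterance: str) -> int | None:
    """In a salary conversation, read a bare two- or three-digit figure as thousands of USD."""
    m = re.search(r"\b(\d{2,3})k?\b", utterance.lower())
    return int(m.group(1)) * 1_000 if m else None

def normalize_skill(utterance: str) -> str | None:
    """Map spoken programming-language names to canonical identifiers."""
    lowered = utterance.lower()
    for spoken, canonical in KNOWN_LANGUAGES.items():
        if spoken in lowered:
            return canonical
    return None

print(normalize_comp("My comp is 200"))                        # 200000
print(normalize_skill("I mostly work in C sharp these days"))  # C#
```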
Speaker 2:Another differentiation from a generic tool is workflow integration. The fact that MetaView speaks to your applicant tracking system, your HRIS, your scheduling systems, makes it a whole lot easier to not only adopt but also administer. Right now, if you wanted to make Gemini your note taker of choice within your recruiting workflow, you'd have to rely on your interviewers remembering, every time, to click the little button to make sure they captured the interview. That's just not a reliable way to capture what is really important data for you, whereas when you take a recruiting-specific approach through a tool like MetaView, you can administer this centrally. You decide which conversations get captured and which don't.
Speaker 2:You might want every interview stage for the senior software engineer role captured, all the way up to the final call, which you specifically don't, unless it's got this person on it, in which case you do. You can actually orchestrate the AI and capture what you want, as opposed to the generic approach of, well, everyone has their own Gemini and can use it as and how they wish, which has value in other cases, but I think recruiting is enough of a snowflake to warrant its own thing. So that's the here and now: it's really accuracy and workflow. As for where things develop in future, and I obviously can't speak to other folks' roadmaps, of course, what I know we care most about is this: as much as workflow integration is important, most of the work we think about, because we're really focused on the people who actually do the hiring, the recruiters, the hiring managers, the interviewers, the VPs who approve roles and who obsess over who's actually in the company, is that a lot of the work they do when they're hiring doesn't actually live in any system right now. A lot of the time it's going on in their heads, trying to digest, well, how does that person compare to the person we saw previously? Or it's a hallway conversation with a colleague, or a side Slack thread, or a debrief, which is a really common case as well.
Speaker 2:Basically, a lot of hiring doesn't really happen in any system at all. So what we really care about is building AI tooling and enabling, essentially, a workspace for a lot of that work, as opposed to the things that are currently done in other systems but could be improved by AI.
Speaker 2:We're a little bit less focused on that, partly for strategic reasons, because, as you know, there are other good companies that can probably do some of that stuff, but mainly because we think this is the most important stuff. The most important part of hiring is what's happening between two people's ears and when they're trying to communicate with each other, and that's the stuff that doesn't really have a home. You'll see more and more from MetaView that helps people in a very hands-on way with their next steps after many of these conversations: the next step after an intake meeting and what a recruiter will do after that, the next step after a debrief and whatever the hiring manager or recruiter will do after that, or, of course, the next steps after interviews and what they mean for downstream interviews and how we should change those in order to get the information we need by the end of the process with the candidate.
Speaker 3:Yeah, I appreciate you sharing all that. It makes me think about weekly syncs; I know a lot of recruiters use them. Is that something people sometimes use MetaView for, to almost have a history of the role, how changes have been made, and be able to reference it? Mr. and Mrs. Hiring Manager, three weeks ago you said we were changing the focus of the role and you didn't want us to source more candidates like that, or we removed a requirement. We've done that manually, right? I've done that in paper documentation before. Is that a use case people sometimes use MetaView for?
Speaker 2:Yeah, yeah, and in fact we've gone one step further now. Whether it's in your weekly sync or in any meeting, really, if it's clear that the role context has changed, MetaView's AI will now literally go and suggest changes to your job description. I think this is a great example of what I mean by proactive, agentic AI: listen, we know you've just had a conversation about this role, and there's a bunch of things you're going to do as a result of it. One thing that sometimes gets left behind is, well, hang on, the JD, we actually wrote that two years ago, and it's now nothing like the role we're focusing on. That's the type of thing MetaView can stay on top of very easily, very cheaply, for you. So yeah, that happens all the time. It's actually another great example, Elijah, of, when you're thinking about whether to go generic or treat this as something unique: do you want a unique intelligence layer, which is maybe the way I would put it, rather than a unique note taker? I don't really care who the note taker is; it's more about the intelligence layer.
Speaker 2:That's a great example, right? Because meetings don't run like: we're going to have this meeting, we're only going to talk about this topic, we'll stop the meeting when that topic's finished, and you can break it up that way. It's obviously much more fluid than that. So there's real value in having that consistency, almost that lineage: we know how this role has evolved over time, and we talked about these three candidates last week, and this week we're talking about these four, two of which were also present last week, so if I want to aggregate the data about those two candidates split across these three meetings, I can. MetaView's not there on every single aspect of that yet, but that's exactly the type of world people should expect.
Speaker 2:They should expect that, right? They should expect the AI to know: actually, in this meeting with James, you were talking about this candidate. Because the AI is never going to be as smart as you, and never going to be able to help you as much as you could help yourself, if it doesn't actually have all the context you have, which often comes through your conversations with hiring managers and whoever else.
Speaker 3:Yeah. And it could even suggest, right, if it has the context of the job posting or the job description, it could say, hey, based on this context, we recommend these two changes to these two bullet points, do you want to accept that? And then potentially even push that to the ATS at some point. That's what it does? Oh, okay, great. Hey, I'm done. James, go ahead.
Speaker 1:Oh yeah, it is kind of incredible how much training AI really does take to get all the nuance. It's this endless build and refinement, and the same, of course, on the product side, building out the features; customers continue to ask for more and more. There's definitely 10x more value in going with an industry-specific tool, and you really start to see that when you're talking with your customers and they're asking you to build different or better features, or you're going through the motions of training the system. Right now with my company, June, very early stage, we're essentially doing candidate pre-screening with our AI agent, June, and, I think I mentioned before we recorded, it's inbound, outbound, and candidate rediscovery, those three use cases. As we're going through and training June, every week it's: oh no, we have to train this, we have to add this, we have to find this. Or: okay, wait, June's responding this way.
Speaker 1:Wait, why is June doing that? Okay, now we have to go add more; it's continuous. And then, of course, the product and feature workflow, there's just so much to do, right? So many different things to make it work really well within a specific use case, versus just a generic one.
Speaker 3:I'm just curious: why isn't there a recruiter-specific LLM that all these recruitment products could leverage, as a layer between GPT or Gemini or whatever? Maybe it's just a slight extra cost on the API call, and you're putting it through this recruiter layer. Maybe it's open source and a lot of people are building on it, because there are so many use cases in recruitment for building really good recruitment products.
Speaker 3:It'd be nice. A lot of recruitment products use People Data Labs for data, but if there was this recruiter-specific LLM, and then maybe one for sales, that'd be cool. I don't know who would build it, though.
Speaker 2:I think we're a very capable team, and we've got our hands full building out the slice of the stack, hopefully an ever-growing slice, that we really care about. So frankly, the resources, and by that I mean team, but also compute resources, to build that layer would be pretty high. I'd imagine the more efficient approach is probably the one we have, which is teams like ours leveraging this super-intelligence and internalizing it, obviously having the domain expertise within our company.
Speaker 1:Yeah. Elijah, from a baseline proficiency perspective, the LLMs are smart enough to understand what recruiting is, right? So there's already a base there, but then it's going to vary very much.
Speaker 3:Sure, sure, specifically based on the use case. But if it's 30% of the way there, and you're saying MetaView has to do another 70% to get it 100% of the way there... I don't know, it'd just be nice.
Speaker 2:I wouldn't say that. I think it's probably more Pareto: the last 20% is the hardest, but for any one domain it matters a lot.
Speaker 2:And I would also say, in some cases it's nothing to do with the intelligence of the reply from the LLM and how recruiting-appropriate it is; it's to do with the context you're able to infuse it with. So what a company like mine thinks about a lot is: how can we get the right of admission to context? Our bet is that the most important context to give an LLM is all the information the candidate just told you in a conversation. That's, in many cases, going to be the most important thing. And if you combine that even with an off-the-shelf LLM, you're going to get better outputs about who that person is, and how to get information about them, than you would without that context.
Speaker 2:Really, the word we use most internally is context: what other context can we bring in? Sometimes that's literally just the context of an artifact, I've mentioned it already, like a job description. But, and I used that a little bit as a throwaway, the more important context is actually getting you collaborating with the AI on top of the creation of those notes or that JD. That's also context: the AI now knows, okay, they didn't like it when I did this, they did like it when I did that, they wanted to remove this suggestion but they accepted that one. That's layer upon layer of proprietary, user-level and company-level context you're building up, and that's how you're going to get these truly magical experiences. So maybe the answer to your question, which was, why doesn't someone make this recruiting layer on top of the LLMs, is that the really exciting things to build are more about what context you can gather, what guardrails you can put around it, and how you can encase that context such that you get magic out of whatever models the geniuses at Anthropic and OpenAI produce for us next.
Speaker 3:Yeah, that's great.
Speaker 1:I'd just say, let's talk about the future as the tail end of our episode here. It'd be cool to hear your thoughts on the current roadmap, as much as you feel comfortable sharing. And then I'd love to hear about some of the more recent conversations you've had with customers. What features and products are they most interested in? What are they saying, hey, can MetaView do X? What do those types of conversations look like right now?
Speaker 2:Yeah, sure. So we base our roadmap around three principles. These are very much principles we've arrived at by working alongside customers, but also things we just think are the right way to build the future of the AI workspace for hiring. Pillar number one is precision: what can we do to help people make far higher-probability decisions when they're hiring? The second is efficiency: what can we do to let them spend far less time on the undifferentiated work within the hiring workflow? And the final one is adoption, which is less flashy but so important, because hiring is such a team sport. If you only get the recruiting team on something like MetaView, it's good, don't get me wrong, it's a really good place to start, but you're not getting the full value. You need to work out how to get these hundreds of people in the company, many of whom you don't know by name and have no relationship with whatsoever, to use this thing as well, so that, A, you can make their lives easier and they can do a better job at hiring, because they want great colleagues, and, B, you can start to harness this fresh corpus of unstructured data you get from candidates. So those are the three: precision, efficiency, and adoption. On the precision side, one of the things we talked about earlier in this conversation is starting to identify, either based on historics or based on your design, how well interviews are matching up against our expectations of what a good interview looks like. That's something we hear from customers sometimes, and something we're working on at the moment. Where we get really excited is where you can start to use that data for evidence matching: given the JD, given the scorecard, given the rubric, here are the elements from this conversation that seem to align really well, and these are the things we should be including in the scorecard. Those are the types of things we mean when we talk about precision, things that can really give you pause, or really accelerate your thinking, when you're trying to figure out:
Speaker 2:"You know what, I really like that candidate, but I can't quite put my finger on why," which is so common, or sometimes the vice versa. I think there's a really interesting climate around hiring at the moment, where we've gone through this phase of being so anti-gut: you should not be using your gut at all, your gut is terrible, never listen to your gut. And obviously a lot of the, frankly, most successful people in the world would say no, you should probably listen to your gut a little bit. So I think there's going to be an interesting clash between those two things. My personal take is, you only call it your gut because you haven't found the words to articulate it yet. There's probably something there, and if you had the time, you could spend it really unpacking it and realizing: yes, that's the thing I didn't think was on point with that candidate, and yes, I know I didn't include that in my specification initially, but I'm only human, I didn't realize it was going to matter, so now we should include it. I think those things are totally fine; we should accept that's part of how you think anyway. Point is, that's some of what we think about within precision.
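To sketch the evidence-matching idea, lining up what a candidate actually said against scorecard items, here is a toy version. A real system would presumably use an LLM or embeddings; this keyword-overlap sketch, with made-up scorecard items, only shows the shape of the output.

```python
# Made-up scorecard items mapped to cue phrases; purely illustrative.
SCORECARD = {
    "owns outcomes": ["accountable", "drove", "end to end"],
    "enterprise sales": ["procurement", "enterprise", "security review"],
}

def match_evidence(snippets: list[str]) -> dict[str, list[str]]:
    """Attach transcript snippets to the scorecard items they appear to support."""
    evidence: dict[str, list[str]] = {item: [] for item in SCORECARD}
    for snippet in snippets:
        lowered = snippet.lower()
        for item, cues in SCORECARD.items():
            if any(cue in lowered for cue in cues):
                evidence[item].append(snippet)
    return evidence

snippets = [
    "I drove the renewal through procurement myself.",
    "I was accountable for the number end to end.",
]
# "owns outcomes" collects both snippets; "enterprise sales" collects the first.
print(match_evidence(snippets))
```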
Speaker 2:Within efficiency, as I mentioned, there are a lot of things that happen off the back of these conversations. After an intake meeting, you might go and redraft a job description, or create an interview plan, or create a list of sample candidates in order to calibrate with the hiring manager. There are all these bodies of work that often result in one-, two-, three-week delays before you even start to see a candidate, right? You come out of an intake, take a couple of days to get back to them with a few example candidates, they get busy and don't reply right away with what they think of those people, suddenly a week has passed, then you get in the second batch. Yeah, cool, we're calibrated, now let's start sourcing. Oh dang, we haven't actually got a JD; let me spend a bit of time on that.
Speaker 2:Intake to candidate can often take four, five, six weeks, which is, I think, one of the reasons people almost don't rely on hiring as a key way to achieve their goals in a given quarter or half year. You're thinking, what my team can do this half year has already been decided, because it's going to take me three or four months to hire anyone. I think we can change that really considerably. So that's the stuff on the efficiency side: a lot of the work product that recruiters and hiring managers have to put out can be much more deeply assisted by AI.
Speaker 2:And then the adoption side is probably not as fun to talk about, but there's a lot we can do around getting people more familiar with some of our, I don't want to say more expert features, because that's almost a bad excuse for us to have.
Speaker 2:But essentially there are some really powerful ways you can collaborate with the AI within MetaView. You can instruct the AI exactly how you want your notes laid out, and you can attach different sets of instructions to different roles in your database. You might be working 10 different roles, and you want a different structure of notes for each one. You can tell the AI which set of note-taking instructions applies to which roles, and which technical skills you're most interested in for which roles, and therefore which to pull out, flag, and create as an attribute on the candidate profile. There's a ton of these really powerful things you can do which, I'll just say, not enough of our customers are doing. So we think we can do a lot more in the product to get people adopting those things. That's what's top of mind, which I think is very connected to what we're hearing from customers as well. So, yeah, I think that covers it.
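As a sketch of what attaching note-taking instructions to roles could look like, here is a hypothetical configuration. The structure and field names are invented for illustration and are not MetaView's format.

```python
# Hypothetical per-role note instructions: different sections and skills to
# flag for different reqs. Invented structure, for illustration only.
NOTE_INSTRUCTIONS = {
    "senior-software-engineer": {
        "sections": ["System design", "Coding depth", "Team leadership"],
        "flag_skills": ["C#", "Kubernetes", "distributed systems"],
    },
    "account-executive": {
        "sections": ["Deal history", "Sales methodology", "Comp expectations"],
        "flag_skills": ["enterprise sales", "outbound prospecting"],
    },
}

DEFAULT = {"sections": ["Summary"], "flag_skills": []}

def instructions_for(role_slug: str) -> dict:
    """Pick the note template for the role an interview belongs to."""
    return NOTE_INSTRUCTIONS.get(role_slug, DEFAULT)

print(instructions_for("account-executive")["sections"])
# ['Deal history', 'Sales methodology', 'Comp expectations']
```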
Speaker 1:That's awesome. I think we have time for one more question, or I have one more question. Elijah, I don't know if you have one, but I'll jump in quick. Are you seeing any specific industries or customer segments outperform others in terms of traction? Where are you really seeing demand for this product?
Speaker 2:We tend to focus on startup, mid-market, and maybe small-enterprise tech, so that's quite a broad range, and you obviously see slightly different behavior across it. The startups and the mid-market were quicker to realize and understand the huge productivity gains of not having their people be the ones responsible for capturing this data anymore, so they reacted really well to the efficiency you could buy very cheaply. I think now you're seeing upper mid-market and small enterprise think a lot more about some of the strategic stuff, like the reporting, and almost: it's my responsibility as a talent leader to be capturing this data, because there are already things I can do with it, and who knows what I'll be able to do with it in future. So now you're starting to see that manifest. It's a slightly different catalyst for adopting, and it was slightly more delayed, I would say. So yeah, that's been interesting.
Speaker 2:Outside of tech, executive search and, not boutique staffing firms exactly, but highly focused staffing firms, have also been really interesting; their adoption has been really impressive. Frankly, a lot of our most expert users are exec search folks, because they have these really high-quality products they like to give their clients, like, this is my write-up on this candidate. Or they're turning up for a meeting with a client, and their clients are paying them a lot of money, so they really want to be able to represent those candidates effectively and have information at their fingertips when they're asked for it by their often C-suite client, whatever it might be. They're really on the bleeding edge of needing the functionality. So that's been a surprise.
Speaker 2:It wasn't a focus for us; it now really is, because we have a pretty big business there and a lot of great customers. They also relate very closely to tech, since a lot of them do executive recruiting for tech companies, so it's all one ecosystem really. But yeah, I wouldn't even say it's a surprise, it's pretty intuitive when you think about it, but I've been really impressed by the organic pickup.
Speaker 1:That's awesome. Thank you for sharing. Elijah, do you have any other questions?
Speaker 3:I just wanted to compliment the PLG motion. It seems like you guys have been running with the freemium model, right? You do have a $0-a-month plan so people can try it out. Is it up to maybe 24 conversations?
Speaker 2:It is. I think it's 20 per month. And yeah, we used to operate as a free trial, so you could try it out for a short period and then had to decide whether you wanted to keep using it or not. We've recently switched to a free plan, so you can now keep using the product ad infinitum for free if you want to. There are obviously really good reasons to upgrade to a team plan or a professional plan, but the free product really leaves no excuses. Basically, if you're a recruiter using a generic note taker, and the reason is that it was cheaper than MetaView, and the professional version of MetaView, rightly, given it's a professional tool, is more expensive than some of those generics, then there's no longer that excuse, because the free plan on MetaView is really powerful.
Speaker 3:Yeah, so you're giving away a lot of value there. And then also forgive me if I'm wrong, but don't you do something where, if people within a company are using it, you highlight to someone who signs up that they actually have a bunch of colleagues also using MetaView, or something like that?
Speaker 2:Yeah, we do that, partly because hiring is a team sport, so it's good to know if your colleagues are on it. But the main reason is that one of the other conditions of the free plan is you have a personal usage limit, but there's also a company usage limit. It's good for you to know how many other people in the company are on the product, because if there are 20 of you using it, you're probably going to hit your company limit quite soon, and therefore you might want to have a conversation internally, or reach out to one of our team and say, hey, let's move to the actual team plan, because we're all going to get blocked soon.
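The gating described here reduces to two counters, which a minimal sketch can show. The 20-per-month personal limit comes from the conversation above; the company limit and the function name are assumptions for illustration.

```python
def can_capture(user_count: int, company_count: int,
                user_limit: int = 20, company_limit: int = 100) -> bool:
    """Allow another recorded conversation only while both the personal monthly
    limit and the shared company limit have headroom. The company limit of 100
    is a made-up illustration."""
    return user_count < user_limit and company_count < company_limit

# With 20 colleagues on the free plan, the shared pool drains fast, which is
# why surfacing colleague usage nudges teams toward a paid plan.
print(can_capture(user_count=5, company_count=99))   # True
print(can_capture(user_count=5, company_count=100))  # False
```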
Speaker 3:I think it's social validation as well, though in a positive way. When I was testing it out and using the product, I looked on there and I saw a few other people. I was like, oh, this is great, there's other people using it. It was a good decision for me to sign up for the free trial or whatever. I think there's a lot to that social side as well. Just seeing your colleagues on it makes you feel like you made a good decision and you know who to ask if maybe you need some help, so I think that's a great move.
Speaker 2:Nice, thanks.
Speaker 1:Awesome. This has been a great episode. As always, I feel like I've learned a lot, and I wanted to say, Siadhal, thank you so much for joining us today and sharing your insights and everything you've learned over the years building MetaView. I'm really excited for you and your team; it's really great to see how well things are going. Just looking at your website, your team has progressed a whole lot since the last time we connected. It's amazing to see those logos and the functionality you've been able to build out. It's really cool, really impressive. So thank you, I appreciate you coming on today.
Speaker 2:Oh man, thanks so much for the kind words and yeah, anytime.
Speaker 3:Always, always pumped to chat with you.
Speaker 2:Hey, you too, Elijah. It's good seeing you too.
Speaker 1:I'm glad you could make it, always a good time. Cool, hey, for everybody tuning in, thank you so much for joining us, and we'll talk to you real soon. Take care. Bye.