The Breakthrough Hiring Show: Recruiting and Talent Acquisition Conversations
Welcome to The Breakthrough Hiring Show! We are on a mission to help leaders make hiring a competitive advantage.
Join our host, James Mackey, and guests as they discuss various topics, with episodes ranging from high-level thought leadership to the tactical implementation of process and technology.
You will learn how to:
- Shift your team’s culture to a talent-first organization.
- Develop a step-by-step guide to hiring and empowering top talent.
- Leverage data, process, and technology to achieve hiring success.
Thank you to our sponsor, SecureVision, for making this show possible!
EP 159: Human-Powered Technical Assessments with Woven's Founder and CEO Wes Winham.
Our hosts sit down with Wes to discuss how Woven helps companies hire great-fit engineers without spending countless hours in tech interviews. We discuss their async technical interview platform and how actual engineers score real-world engineering work. The conversation progresses into how AI candidate evaluations may be regulated.
Thank you to our sponsor, SecureVision, for making this show possible!
Our host: James Mackey
Follow us:
https://www.linkedin.com/company/82436841/
#1 Rated Embedded Recruitment Firm on G2!
https://www.g2.com/products/securevision/reviews
Thanks for listening!
Hello, welcome to the Breakthrough Hiring Show. I'm your host, James Mackey. Thanks for joining us today. We're back on the AI for Hiring series. We've got Elijah, our co-host, with us today. Elijah, what's up? Hey, James, happy to be here. Yeah, it's great to have you back. And Wes Winham is the founder and CEO of Woven. He's joining us today to tell us all about his product. Wes, thanks for joining us.
Speaker 3:I am stoked for the conversation.
Speaker 1:Yeah, we are as well. It's going to be a lot of fun. And just to start us off, we'd love to learn more about you, your background, and how you came to found Woven, and then getting into your primary value prop would be great, I think. A great place to start.
Speaker 3:I came to talent acquisition from being a hiring manager. I was a software engineer, joined an early startup, and became the leader, and that meant I needed to hire. I thought I had a gut that could spot talent. So I made three hires. It was great, I was great, I could just see it when you have it. And then I made my fourth hire, and it was not great, and it was my fault. I hired someone who was trying really hard and put them in a seat where they were not going to be successful. It was really bad for our biggest customer and for my team, and that was my wake-up call that I don't have a gut, that you actually need to be good at this thing, and that it's hard.
Speaker 3:So I read Thinking, Fast and Slow, talked to a lot of other engineering managers, and read I/O psych research: what does science say? And my insight was, if you're going to hire dancers, you should probably watch them dance. Doing the job predicts doing the job. But that was really hard to do. And when I sold that startup, I founded Woven to make it easier to assess folks in a real-world manner in an engineering context, because it's not easy.
Speaker 1:Yeah, it doesn't sound like it. I know it isn't, and so does Elijah. It's definitely a science, not an art, and it takes a lot of process, repeatability, and iterating, and there's a lot of nuance, right? Best practices will get you a really long way, but it's also about understanding the nuance of the employer: the specific requirements they have, their environment, their strengths, their weaknesses. A lot goes into it; it's definitely a pretty complex process. But what's great is the products coming out these days seem to be absorbing some of that complexity. So I would love to learn more about your product and figure out exactly what problem you're solving, and maybe we can go a layer deeper as well.
Speaker 3:Yeah, so Woven is a human-powered technical assessment, a pre-employment assessment for tech roles: mostly software engineers, data engineers, data science, that sort of thing, where you're either coding or maybe coding in a spreadsheet. And we are human-powered because that allows you to evaluate the things you actually care about. It's not "can you write code that passes some automated test?" It's "can you handle a messy real-world situation?" Here's this pull request: how do you prioritize it?
Speaker 3:Here's this system that's broken. Here's this email from a colleague where they're not even clear what they're asking you, but you're supposed to make a technical business decision and respond back to them. Because we're human-powered, we can evaluate that messier work, which candidates like more, because they like doing stuff that's like the job. That's why they have that job and that's why they stick it out. And it also creates more signal on who's going to pass, especially for those folks who don't quite have the prestigious resume.
Speaker 3:That's what got me fired up about this. Hiring managers have opinions, right, and some of them are informed by the real world; some of them are just opinions about what they like. If an assessment can change a hiring manager's opinion about what resume and background really matter, take a candidate from "probably not" to "yeah", that's where I feel like we can make a big impact: someone who just needed a shot. And this assessment gives you a way to get a read on their actual skills without having to commit to a one-hour interview with them.
Speaker 3:Because y'all have seen it: you can only do so many speculative interviews with hiring managers before they start to be like, eh, maybe I don't trust this person's judgment. So that's what we do; that's our core business. And we've also started a small RPO arm for some of our customers on the smaller end who don't have recruiting and have intermittent hiring, and we do what's best described as an AI-powered resume matching engine. I don't like the word matching, but that's what the market calls it. Generally: how can you find out whether a candidate meets your requirements in a consistent way when you've got a thousand applicants in your ATS and you are one person trying to read through those?
Speaker 1:Yeah, okay. So I have a few follow-up questions, dialing back to the product aspect of the human-powered technical assessments. Can we get more granular into what's run by the product versus how people on your team are incorporated into it, and what their roles are? Can we just get a little more detailed on that part?
Speaker 3:Absolutely. So y'all are familiar with other technical assessments; HackerRank is probably the brand that has the most recognition. At some point in the process, a hiring manager or recruiter will pick some assessments to match the role. Then the candidate gets invited over email from the ATS, and they go and take some series of tests. That part's the same with us: you pick assessments.
Speaker 3:What is different is we are able to offer different types of assessments, things that are more free-form. And then on the back end, where the humans come in, we actually have two engineers blind-evaluating that candidate's work. So it's not just some automated thing; it's two engineers looking through that work, blind to anything about the candidate, blind to each other, scoring independently and then creating feedback for that candidate. Every candidate that goes through gets feedback on things they did well and things they could have done better. The folks you're advancing are going to get a feedback email within a day of completing the assessment, praising them for the things they did well. The folks that maybe you're not going to advance this time are at least going to get something useful out of that assessment: an area for improvement where they can level up their career. That's what you get when you use humans; with automated tests, you can't really do that.
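(For the technically inclined, here's a minimal sketch of how that blind, independent double-scoring flow could be modeled. All names and structures are hypothetical illustrations based on the description above, not Woven's actual system.)

```python
from dataclasses import dataclass, field
from statistics import mean

# Hypothetical model of blind, independent double review.
# Each reviewer sees only an anonymized work sample: no name,
# no resume, and no visibility into the other reviewer's score.

@dataclass
class Review:
    reviewer_id: str          # engineer doing the scoring
    score: float              # e.g. 0-10 against a shared rubric
    strengths: list[str] = field(default_factory=list)
    improvements: list[str] = field(default_factory=list)

@dataclass
class Submission:
    submission_id: str        # opaque ID; candidate identity stays hidden
    work_sample: str
    reviews: list[Review] = field(default_factory=list)

    def add_review(self, review: Review) -> None:
        # Enforce independence: one score per reviewer.
        if any(r.reviewer_id == review.reviewer_id for r in self.reviews):
            raise ValueError("reviewer already scored this submission")
        self.reviews.append(review)

    def final_score(self) -> float:
        # Combine the two blind scores only after both are in.
        assert len(self.reviews) == 2, "needs two independent reviews"
        return mean(r.score for r in self.reviews)

    def candidate_feedback(self) -> str:
        # Every candidate gets strengths plus an area to improve.
        strengths = [s for r in self.reviews for s in r.strengths]
        improvements = [i for r in self.reviews for i in r.improvements]
        return (f"Did well: {'; '.join(strengths)}\n"
                f"Could improve: {'; '.join(improvements)}")
```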
Speaker 1:Yeah, for sure. I'm just thinking, to better understand where in the interview pipeline this sits: is this after a phone screen? Or do people go through the resumes, decide the top applicants, and send them this? Where does it fit within the interview process?
Speaker 3:Yeah, great question. Like all good questions, it depends. The typical spot is: a recruiter has done a resume screen and probably a first screening call, then Woven as an assessment before the hiring manager or technical interview. That allows the recruiter to take more shots on maybes and saves hiring manager time while giving them more signal. It depends, because if you're hiring for a more entry-level role, sometimes you can skip that recruiter screen and just have a more rigorous application. Sometimes you might be hiring for a VP of engineering, and it's actually worth doing an extra call before you send the assessment. So it moves around a little, but typically it's after the recruiter screen.
Speaker 1:Okay, yeah. I would assume every company runs their process a little differently, right? So it makes sense to me. As long as the candidate engagement's high enough, it's always better to do it earlier in the process and save the hiring team time.
Speaker 3:Right. Yeah, one of my controversial opinions is that in recruiting, it's easy for the job to feel like the activities you're doing are the value. And when it comes to recruiting, reading resumes isn't the value, screening candidates isn't the value; getting a great candidate to a conversation with a hiring manager is the value. And if you can skip any of those other things, you can do more of the valuable things. So, like you said, if you can skip the screening step and still have an engaged candidate.
Speaker 3:We're seeing more and more customers do things like video recordings. That first five minutes of a call, where you just repeat over and over your mission, your vision, why your founder is awesome: you record that in a Loom and send it to any candidate that passes your knockout questions or your early screening, and you can get a lot of very engaged candidates without needing to schedule that screening call, which slows things down. Most folks have jobs; it's hard to schedule during the day. So I think there are a lot of exciting things happening in candidate engagement that aren't "jump on another 30-minute screening call where you smile at somebody."
Speaker 1:Got it. Okay, so you've got the resume matching aspect, as people call it, and then you're sending out screening questions too, prior to assessments. Or is the company doing that? Is that run through your product? Can your product say, okay, here are the high-level screening questions for candidates, and then follow up with the assessment? You mentioned something about screening questions, so I just want to double down on that.
Speaker 3:Yeah. So product one is that human-powered technical assessment. Product two is application screening and matching for technical roles, and this is about selection. What we cover is making sure the requirements are correct, and this is the most important part; I don't see a lot of people talk about it. I like to see what the state of the art is in RPO and staffing, so every once in a while I'll make a hire with a staffing agency just to see how the process works.
Speaker 3:I recently went through one with a company everyone has heard of; they have $20 billion in revenue every year in staffing. I was looking for a front-end engineer for a one-off project, looking for React experience, and they rejected a bunch of candidates for me who were senior React engineers, because they didn't list CSS on their resume. If you're not a technical recruiter: everyone who does React does CSS. You cannot do React without CSS. But because that got onto the requirements list, those become candidates that get rejected for no good reason.
Speaker 3:So we have a tool to get the requirements list solidified, and we actually take a lot longer on this than on other steps, because no one likes to do this part; they like to see candidate resumes. But saying whether CSS is a nice-to-have or a must-have is really important, because no one asked me that question. It was obviously a nice-to-have, but it got into the must-have list, and job descriptions are crap. So we essentially turn those requirements into evals for an LLM, yes/no questions. Then we create a hierarchy of those evals, and we can run them against a resume and application, plus knockout questions that we generate, and that allows us to sort candidates into qualified and unqualified buckets.
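(To make the requirements-as-evals idea concrete, here's a minimal sketch. The ask_llm stub and all names are hypothetical stand-ins, not Woven's product; in practice each yes/no question would be posed to an LLM with the application text as context, as Wes describes.)

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    question: str      # phrased as a yes/no eval for the LLM
    must_have: bool    # the human decision, made up front

# A human signs off on these before any screening happens.
REQUIREMENTS = [
    Requirement("Does the candidate have production React experience?", True),
    Requirement("Does the candidate have CSS experience?", False),  # nice-to-have
]

def ask_llm(question: str, resume_text: str) -> bool:
    """Stand-in for an LLM call: send the yes/no question plus the
    resume text to a model and parse a boolean out of the response."""
    raise NotImplementedError("wire up an LLM provider here")

def screen(resume_text: str) -> dict:
    # Run every eval; the per-question answers double as the paper trail.
    results = {r.question: ask_llm(r.question, resume_text) for r in REQUIREMENTS}
    qualified = all(results[r.question] for r in REQUIREMENTS if r.must_have)
    return {"qualified": qualified, "trail": results}
```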
Speaker 1:Right, and on the resume side, how much of that is product versus human review?
Speaker 3:It's product. We do a human review right now just because we want to learn from any mistakes. But we're not building this to be a ranking algorithm, where the conceit is: oh, it's just ranking, it's just your top 50 out of a thousand, it's not really a hiring decision because it's just ranking. We all know those other 950 folks are not going to get looked at the same way. I am not super stoked, and this is maybe a little controversial, about where the EEOC drew the line on when something is a hiring decision: if the person applied, it's a hiring decision; if you're doing outbound, it's not. That's the thing we're all skating by on. But ranking, when the other people never get looked at, feels like a dodge to me. So I feel like, as vendors, we need to be building systems that pass scrutiny. Workday is getting sued right now. Have y'all seen that lawsuit?
Speaker 1:Yeah. I don't know what's the latest; have there been any recent updates in the past month or anything?
Speaker 3:I think June or July was the last time I saw something new. And Workday's defense is essentially: hey, we're not an employer, we're not a staffing firm, don't hold us to any criteria. I think that is not the approach we should be taking as technology firms. We should instead be thinking of crypto: there were all these fly-by-night crypto companies, and then Coinbase stepped forward and said, we're going to be regulated, we're going to lean into it, we're going to ask for regulation, we're going to meet the standard of goodness and lack of shadiness. In this case it would be lack of bias. And that's the system we're building, one that can run automated because you put the human effort up front, at the requirements. Does this house need a basement or not? If you decide later on you want to change that, it's very expensive. But if you spend the time up front saying, okay, this human signed off, it doesn't need a basement, then you've created the paper trail that the EEOC needs.
Speaker 3:And then all the AI is doing is data entry. It's doing dumb things like matching companies against criteria, or looking for skills rather than keywords, because LLMs are already better than recruiters at most skill matching. Everyone's still using the keyword examples: oh, the requirement says Kubernetes and someone wrote K8S, so of course the robots are done. They are already better than most, better than me, better than most tech recruiters. In one of our testing data sets we were looking for an engineer with TypeScript, and someone came through with Next.js experience. I didn't know that Next.js is a framework written in TypeScript, but the robot knew, so it marked that candidate as a pass. The bots are already better at data entry; we should let them do data entry.
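(The K8S-equals-Kubernetes and Next.js-implies-TypeScript point is essentially alias normalization plus skill inference. Here's a toy version, with hand-rolled tables standing in for what an LLM infers from general knowledge; the entries are just the examples from the conversation.)

```python
# Toy skill matcher: normalize aliases, then expand implied skills.
ALIASES = {"k8s": "kubernetes", "next.js": "nextjs"}
IMPLIES = {"nextjs": {"typescript", "react"}, "react": {"css", "javascript"}}

def expand_skills(listed: set[str]) -> set[str]:
    skills = {ALIASES.get(s.lower(), s.lower()) for s in listed}
    frontier = set(skills)
    while frontier:  # follow implication chains transitively
        nxt = set().union(*(IMPLIES.get(s, set()) for s in frontier)) - skills
        skills |= nxt
        frontier = nxt
    return skills

# A resume listing Next.js satisfies a TypeScript requirement:
assert "typescript" in expand_skills({"Next.js"})
# And a React resume shouldn't be rejected for not listing CSS:
assert "css" in expand_skills({"React"})
```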
Speaker 1:Yeah, for sure. It's going to be interesting to see what happens with Workday and the precedent that sets as well. Can we double back? You were talking about the EEOC and the analogy about not needing a basement. Could you explain that a little more? How do you think these products and tools are going to protect against that, and what is best practice going to look like? Is there any more you could share there?
Speaker 3:Yeah. So I have a background in computer security, and one of the things you learn early on in computer security is that regulatory compliance is a documentation game. Yes, there's some stuff you should do, but it's obvious stuff. Same thing with the EEOC: the stuff you should do is obvious.
Speaker 3:They're not asking for crazy stuff. They're just asking for documentation that a human made this decision, that they made it for good reasons, that they can justify it, and that you have a trail showing you then followed that criteria. And you could do this manually. You build a big resume or application rubric with 17 rows, you weight the criteria, you fill out zeros and ones for every application. It takes nine minutes per candidate, and no human would do that, because you're immediately like, oh, I can't do this, I'll just use the deep learning network that's between my ears to make a judgment. And there's a carve-out for a human making a judgment: they're probably not biased, can you prove they were biased? Whereas for the robots, you have to prove they weren't biased. But there's already a way to do that. The thing about robots is they don't get bored. They will fill out that 17-item rubric, and they will do it better than a human if you build the tech right.
Speaker 3:Hallucinations are one thing when you're asking a model to look something up. When you're asking it to search a small document for a very specific criterion, or to enrich a document with LinkedIn data, hallucination is not the problem; that's just a lot of plumbing. So for the EEOC, you need to be able to say: this requirement is job-related, it's bona fide, there was a person who made that decision, here's how that requirement traced through, and here's why we rejected this person based on the requirement. There's nothing about knockout questions in any of the EEOC guidance. Everyone uses knockout questions, but there's no carve-out or exception for automated knockout questions. There's just: is this a job requirement?
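(Here's a sketch of that 17-row rubric a robot will happily fill out, with the audit trail attached. Everything here is hypothetical and illustrative; the point is that every score traces back to a human-authored, job-related criterion.)

```python
import json
from dataclasses import dataclass

@dataclass
class Criterion:
    text: str            # job-related, written and signed off by a human
    weight: float
    approved_by: str     # who made the decision: the trail an audit wants

def score_application(app_id: str, answers: dict[str, int],
                      rubric: list[Criterion]) -> dict:
    """answers maps criterion text -> 0 or 1, judged per criterion.
    Returns a weighted score plus a serializable audit record."""
    total = sum(c.weight * answers[c.text] for c in rubric)
    record = {
        "application": app_id,
        "per_criterion": [
            {"criterion": c.text, "met": bool(answers[c.text]),
             "weight": c.weight, "approved_by": c.approved_by}
            for c in rubric
        ],
        "score": total,
    }
    # Persist the trail; a robot will do this for all 1,000 applicants
    # without getting bored at minute nine.
    print(json.dumps(record, indent=2))
    return record
```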
Speaker 1:So that's it: is it a job requirement? As long as it's clearly documented that a human came up with the job requirement, and the AI is then making the evaluation by essentially matching against that requirement, say as a knockout question, and there's a documentation trail of that, that's probably not something the EEOC is going to flag.
Speaker 3:Yeah, will you get sued? If you're Workday, you're going to get sued. Anybody can get sued. This country...
Speaker 1:We love suing each other. It's like our favorite thing, it's like our favorite pastime in business.
Speaker 3:But will you win that lawsuit before it goes to actual trial, because you can produce this amazing documentation? Yeah, as long as you don't do something crazy. If you put "if white male, then advance" on that form, okay, yeah, you're going to go to jail, good luck, and I'm glad we have the documentation trail. That's progress.
Speaker 1:Yeah, I think for a lot of these products too, it's about limiting the data, making sure there's no personally identifiable information or really anything that could be used against someone. For some products that gets more challenging: the wider in scope the AI is, and the more data it's evaluating, the harder it gets. Somebody could have had a short tenure because they had a baby or something like that, and then something you wouldn't think is related could be, or could be considered, discrimination, or could be looked at in a different context.
Speaker 1:And that was an interesting counterpoint. He was just like, yeah, you've got to be careful, because sometimes there might be things introducing bias or discrimination or whatever else into the process and you just have no idea; it's hard to catch everything. But the more limited the scope is, like just looking at a resume... though again, there's the tenure stuff; there's always something. I think the more limited the scope, the more we can protect against that, at least at first.
Speaker 3:Steve is a very smart guy, and a lot of our customers use Gem; they get a lot of value out of it. He's not wrong from his perspective, but I'm going to take the opposite side here. You can come up with these "but what about X?" cases, but from my perspective, we're not comparing to some perfect recruiter who can spend five minutes per resume and notice there was a gap, and then notice, oh, this is a woman, so the gap was probably that. That's not what happens. Look at the data: recruiters get, depending on which study you use, between 15 and 30 seconds per resume. That is not enough time to take all of that in.
Speaker 3:Nobody is that good. As recruiters, we get good at it, we feel good at it, we can do it easily. That doesn't mean we're actually effective at it; it doesn't mean the result is fit for purpose. That's different: expertise needs a feedback loop, not just a feeling of ease. It feels easy for us because we do it a lot; that doesn't mean we're good at it when it's studied. interviewing.io has the best study on this. They did one 10 years ago and they just did a new one. They asked tech recruiters working at tech companies: here are some resumes, categorize them by their likelihood of passing a technical interview. Easy, right? These are tech recruiters; that's what they do all day, it's their main job. They were slightly better than a coin flip. Slightly better. And when they did some post-hoc analysis on what predicted a recruiter picking a resume, it was underrepresented status (recruiters really do care about diversity), and it was prestige of the previous employer, specifically name recognition of the previous employer. Not "was this employer selective?" but "have I heard of this employer?" That's what mattered.
Speaker 1:They must have come from enterprise companies. I feel like a startup recruiter would go crazy if they heard a hiring manager request that type of experience.
Speaker 3:Well, the thing is, hiring managers don't usually request this. This is common among recruiters. This is my opinion, and I would love the pushback; you would know more than me. I have never hired a recruiter, I've only partnered with them.
Speaker 3:Hiring managers want people who are good, and they would like to live in a world where they don't have to confront the reality that there are a thousand resumes and a lot of them look good. A recruiter has to pick, and so they have to pick on something. And what they tend to pick on, regardless of whether they admit it, if you look at this study and others, is brand name recognition. Like, oh, you worked at Airbnb, cool. The thing is, it actually is pretty effective: a lot of name brands are selective institutions, and the folks you pick from there really are more likely to pass your interview. It's not wrong. It feels icky.
Speaker 3:The problem is it's incomplete, because there are selective institutions you have never heard of: a startup, a scale-up. Now we're all recruiting remote, and there are all these companies across the country and the world that are more selective than the ones you've heard of. Who is more selective than Airbnb? So there's this prestigious resume that super-predicts passing your tech screen, because that company has a harder tech screen than you do, but you've never heard of it. So you, as a recruiter in a hurry, have to pass on them because you just don't recognize it.
Speaker 3:The robot can build a list of what the selective schools are, what the selective employers are, and match against that list. So you're doing the same thing, you're just doing it better. And we can talk about how to fight against prestige bias itself; that's its own topic. But to start, if we're going to do the same thing humans are doing, let's just do a better job of it. Let's match against a prestige list that is more complete and catches the people who haven't worked at Google but have worked at the most selective startup, a fintech startup in New York that recruiters have never heard of but that is incredible at selecting developers.
Speaker 1:Yeah, I find it kind of depressing that senior recruiters would overemphasize pedigree, where people worked, because it's much more relevant to ask: does the person come from a relevant environment?
Speaker 3:Right.
Speaker 1:What does their team look like?
Speaker 1:What is the technical stack? What size customers, what industries do you service? All of the nuanced things. Looking at it not from an engineering-technical perspective but from an analytical perspective of what the actual environment looks like, and matching that, is much more critical. I go through this all the time with my customers in the startup and growth-stage phase. I've got a customer in HR tech, right, a startup or growth-stage company, probably around 50 employees, 400-plus customers, and they were looking at it as: okay, we need to hire salespeople, let's get people from LinkedIn.
Speaker 1:And I'm of the opinion you don't really sell LinkedIn. Sorry, it's a monopoly business: people come inbound, you're shuffling papers around, it is what it is, right? A lot of the time you don't have to develop very strong sales skills, and it's a totally different motion than working for a startup or growth-stage company that nobody's ever heard of, that doesn't have every resource under the sun, that isn't heavily automated with every possible piece of the technical stack in place. The consultative, strategic motions, knowing what it's like working for a startup, being spread thin, the work ethic, everything that goes into servicing customers: it's just way different.
Speaker 1:So, yeah, I want the no-name startup that's growing fast. I want people that come from the same environment. I don't want people that I think could do the job; I want people that have done the job, people where I can get references from previous direct managers. I don't really care where you worked, as long as the environment fits.
Speaker 1:So if you're enterprise going to another enterprise, yeah, you can do that. But if you're enterprise going to a startup, I don't care where you come from, I see that as a riskier hire. Even an engineer at a big company. Now, there are situations, and this is the nuance, right: they were at a bigger company, but on a smaller team. Was it in a subsidiary? A new kind of project? Did they have fewer resources than the parent company, or were they off doing their own thing over here, so they were doing a lot more? And sometimes you'll see, even on the engineering side, as a startup engineer, if you're one of the first 10 or 20, the scope of what you might be doing is a lot wider, right? The technologies you might be working on are a lot more recent.
Speaker 3:So there are no onboarding docs; you're figuring it out on your own. At Google you have six months to onboard with this pristine process, and it helps to be a PhD to navigate the environment. But that's not what a startup needs. But that Google resume... I've got to show the hiring manager 10 resumes. Am I going to skip the Google resume, when they'll be excited about that Google resume?
Speaker 1:Yeah, they'll be excited until the person flops. An engineer from Google is obviously going to be incredibly sharp. So, dialing into software engineering: if we could afford the guy or gal from Google, if they're not double our comp range, then yeah, maybe we should consider them.
Speaker 3:But yeah, there's also nuance on the role, right? It's all about the nuance aspect too. And my belief is that the people who have the best strategic thinking around this, around the role, around the company's position, around their budget, should realistically put their effort at the very front: defining the requirements and getting really, uncomfortably specific. Prestige is something nobody likes to talk about unless you're doing outbound. All the outbound tools have that filter, that top-1% prestige filter, top-20% filter, but no one's built that on the inbound side yet, because it feels gross.
Speaker 3:I don't see a lot of scorecards anymore that say "must be from a top-20% institution." But the reality is your recruiters often have to make decisions, and they're using prestige. So why not make it an explicit requirement and get to decide: is it a must-have? Is it a nice-to-have? Let's put that thinking up front, have the hard conversations, and not leave it to the squishiness of a recruiter with 30 seconds trying to figure out whether this person gets a screen or not.
Speaker 1:Yeah, for sure, Elijah. I don't know if any questions are coming up for you. I know I've been monopolizing our side of the conversation here.
Speaker 2:All good, I'm curious. So let's say the hiring manager or the recruiting team used AI to actually generate the job description in the first place, including some of those must-haves and nice-to-haves. Do you think that's considered a hiring decision relative to the EEOC, if they reviewed it before doing something with it? Does that make sense?
Speaker 3:I think it's a gray area.
Speaker 3:So if you copy a job description from online and cargo-cult it, and that's just the thing on the job page, and then you do something totally different, you can't show that you're tracing your actions and screening criteria to something relevant, whether it's a job description or another document. I prefer having another document, separate from the job ad, that has the actual requirements and rules. To me, companies who see the job description as an advertisement perform much better than companies who see it as a description. That's a distinction I make, and not everyone does. But at a minimum you have to show that, whatever the thing is, another document or the job ad, you are tying your actions to it, and that means you need documentation. That's the key: write something down somewhere that you can send to a lawyer. Not only is that good for the lawyer, it's good for us, so we don't lie to ourselves about having actually looked at this.
Speaker 1:Wait, so your recommendation... and sorry if I missed something here, I know we're covering a lot of ground, and I really appreciate your advice; I think a lot of people are, or at least I'm, very interested in this stuff. Should these products be helping companies create the job descriptions, where the company types in or shares role requirements and the product refines the JD? Because a lot of these products are doing that too; a lot of products right now actually seem to be helping shape role requirements. Is that something where maybe people should stay away from having AI craft the requirements, and it's more about giving the AI very clear requirements?
Speaker 3:Personally, I worry more about AI helping with the requirements. No one likes writing job descriptions; it's a marketing hat, and whenever that's not your thing, nobody likes it, so I get why that was an early target for Gen AI. But I worry more about Gen AI crafting the requirements, the "we'd be excited about you if" kind of must-haves, than I do about Gen AI ranking resumes, frankly. Because if you get the requirements wrong, because you just cargo-culted someone else's, everything else downstream is going to be wrong.
Speaker 1:Yeah, it's the foundation of everything you're doing. And I like your distinction, too, about the job ad versus the JD. That's really cool.
Speaker 2:Yeah, the only problem with that, because I've used it at previous companies, is you're then trying to manage multiple documents. And there's a certain level of transparency in whatever goes online being the actual requirements, right? If you have shadow requirements, things you're not telling people, personally I struggle a little bit with that, and it gives you more things to manage.
Speaker 2:If the job description is essentially the core requirements, and then, I don't know, maybe AI creates a job advertisement that's just a few bullet points and is more marketing, I guess that's fine. But I've tried to manage both, and it can be a huge challenge to maintain multiple documents. And then there's risk, right, when those get out of alignment: a requirement changes on the job description, nobody updates the job advertisement, and then how does that work with the scorecard that's been created, and with any of the questions that are trying to pull certain responses or examples to fill out the scorecard? I think there's a lot of inconsistency: if there's a job description, a job advertisement, a scorecard, and questions aligned with the scorecard, those rarely seem to all line up in a beautiful, consistent way.
Speaker 1:Elijah, what if, when people make changes to the job description, it automatically updates the job ad? Oh yeah, 100%, right? If it could automatically do that.
Speaker 2:Right now, none of the technology does that. The ATSs are all set up, and tell me if you've seen one that's different, so that the job ad you can edit is also the job description. If you're going to have a job description somewhere else, usually it's Google Docs or a Word file, then you have to remember to go update the ATS, because the ATS is where the scorecards are housed, which is what the recruiters and the hiring teams are using to actually evaluate the candidates.
Speaker 1:I think we're going to see that more from startups, AI-native companies that are doing some of this generation stuff. I think a lot of it will come down to what happens with Workday, right, and to the requirements side as that gets more clear. To your point, Wes, if there becomes a distinction around who's defining requirements, people writing requirements versus reviewing requirements, then I could see product roadmaps building out this distinction of here's the JD and here's the job ad. But then, Elijah, I think you touched on this too, I wonder how it's going to be viewed. Think about transparency laws around compensation: how much of the requirements will need to be publicly facing too, if you have two different documents? I don't know, it's weird. I think we have to keep product roadmaps a little loose; you try to guess. But the way you're thinking about it seems very logical to me.
Speaker 3:I think I agree with what you're saying. I'm very autistic; it is the autistic approach to resume screening, but systematized, for better or worse. And we already have the knockout questions, the scorecard, the interview questions, the job description; we already have four things we're keeping in sync, and this adds an additional one, which is hard. It's hard to keep those things in sync.
Speaker 3:I'm excited about vendors like Poetry that seem to be targeting this problem of reusability, reusable things, where maybe you can use one thing to generate another. I don't think they have this yet, but I would love for them to build it. And Ashby, to give an ATS vendor credit, has a version of creating requirements that are separate from your job description. They're suggested based on the job description, and then you have them as a separate thing you can add to or remove from. I think that's a good advancement, and you can use LLMs to score them. You're on your own for the prompt engineering, and it feels very beta, but I love that they're doing it and putting it out there and letting folks get the power of the technology, because the LLMs are at least as good as a very busy person, in my opinion.
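(Elijah's sync problem is, at bottom, a single-source-of-truth problem. Here's a hypothetical sketch of "edit the requirements once, regenerate everything downstream"; as discussed above, no current ATS exposes exactly this, so all names are illustrative.)

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    text: str
    must_have: bool

# The one document humans edit; the job ad and scorecard are derived
# from it, so they cannot drift out of alignment with the requirements.
ROLE_REQUIREMENTS = [
    Requirement("Production React experience", must_have=True),
    Requirement("CSS experience", must_have=False),
]

def render_job_ad(title: str, reqs: list[Requirement]) -> str:
    # The public-facing advertisement, regenerated on every edit.
    must = "\n".join(f"- {r.text}" for r in reqs if r.must_have)
    nice = "\n".join(f"- {r.text}" for r in reqs if not r.must_have)
    return f"{title}\n\nMust have:\n{must}\n\nNice to have:\n{nice}"

def render_scorecard(reqs: list[Requirement]) -> list[dict]:
    # Interviewers score the same criteria candidates were screened on.
    return [{"criterion": r.text, "must_have": r.must_have, "score": None}
            for r in reqs]

print(render_job_ad("Frontend Engineer", ROLE_REQUIREMENTS))
```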
Speaker 1:Yeah, I know there's a lot of concern around LLMs being involved in the hiring process, and I get it. It could be bias at scale.
Speaker 1:But I don't know, I think it's going to be significantly better and less biased than people. It seems very obvious to me now, and it didn't at first, before I was really diving into AI and LLMs, before I really knew, and I don't think any of us really knew, a whole lot about the technology that came out a couple of years ago. Maybe you did, but a lot of us were like, well, how the hell does this really work? Now that I've learned a fair amount, it just becomes more and more clear to me. I understand the fear, but it's a priori.
Speaker 1:Elijah, remember that CEO who came on the show? It's another AI kind of product, and what they were essentially talking about is the analogy to autonomous cars: statistically it's safer, but people are still scared of it. I think it's the same with AI here. Statistically, we should be able to create this tech in the near term to be significantly less biased. Bias is a huge problem in the United States; it's massive. So this is an opportunity to make it significantly better. There's all this fear, and I get it, but I think as people become a little more comfortable with it, it's pretty clear this is going to be so much better than people.
Speaker 3:We're going to mess it up along the way. People are going to mess it up; they're going to use it for the wrong reasons. They're going to just say, hey, who should I hire, and they're going to hire that person and be like, what went wrong? We're going to do dumb things; that's how we get through new technology. But I think anyone looking closely at the advancement is saying: we're not going to give it the whole A-to-Z problem. We're going to give it B, and C, and E, and F, the pieces it's going to be really good at. And I think the drudge work, the data entry of "does this application match this criterion," is a really good use case right now.
Speaker 1:I think it is. And in different parts of the process too, not even just top of funnel, if you get specific enough, right? That's what I mean about wider in scope: if it has access to demographic data and stuff like that, then of course the likelihood of bias creeps up significantly. But if it's very dialed into looking for very specific information. And also, with prompt engineering and what's happening with AI, people are already writing things in to prevent bias.
Speaker 1:It's an ongoing thing; there's already a lot happening to train these systems not to be biased, and it's going to be a lot more effective than your employee taking a compliance class once a year, mentally checked out, because it's the last thing they want to do after working 40 hours a week. There are going to be mistakes made. But it's the same thing as the autonomous cars: if one crashes, we shouldn't just say, oh, it's not safe, if statistically it's safer; we need to keep refining it. And there's never an okay amount of bias or discrimination; anything above zero is bad. But if we can move in the right direction and it has significantly less, yeah, let's do that. And I think it will.
Speaker 2:I think one of the best things I've seen, going back to the job description stuff, is that concept from Lou Adler called performance-based hiring. Lou's this great older gentleman who's been using this for years, and basically he says to start with KPOs, key performance objectives. Every time I've done this, when I can get the hiring manager to really partner with me and be specific, the whole search goes better.
Speaker 2:You basically get, I think, three to five: what are the top three to five things that need to be accomplished within the first, let's say, 12 months to determine whether or not this was a successful hire? So you're tying it to, almost, the performance evaluation at day 365 after their start date: how are they going to be evaluated, and how are we going to know we made a successful hire a year in? Then you're determining those. If it's a sales role, it needs to close, let's say, I don't know, $800,000 in new business. You go through those three to five key performance objectives, then you use that to build the job description and the requirements and everything else.
Speaker 2:And then the scorecard, the evaluation, is the same: the key performance objectives are on the scorecard. So you're basically trying to figure out, can this person do what we need done? And, as James alluded to earlier, have they done it? Do they have examples of doing it before, in contexts similar to this? I'm a big fan of performance-based hiring. When you can get those performance objectives, everything else is more consistent and clear throughout the rest of the process, including their first one-year performance eval.
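(A minimal sketch of the KPO idea as data: the same three-to-five objectives drive the interview scorecard and the one-year evaluation, so they can't diverge. The example objectives and numbers here are hypothetical, not Lou Adler's materials.)

```python
from dataclasses import dataclass

@dataclass
class KPO:
    objective: str          # what must be accomplished
    due_in_months: int      # e.g. within the first 12 months
    measure: str            # how success is judged

# Three to five of these, agreed with the hiring manager up front.
KPOS = [
    KPO("Close $800k in new business", 12, "closed-won revenue in CRM"),
    KPO("Build a repeatable outbound playbook", 6, "documented and in use"),
]

def interview_scorecard(kpos: list[KPO]) -> list[str]:
    # Ask for evidence of having done each objective in a similar context.
    return [f"Evidence of: {k.objective} (or equivalent)" for k in kpos]

def day_365_review(kpos: list[KPO], achieved: dict[str, bool]) -> bool:
    # A successful hire met the exact objectives they were hired against.
    return all(achieved[k.objective] for k in kpos)
```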
Speaker 1:Yeah, I totally agree with that. And one thing you'll hear too is, sometimes people say it can be challenging depending on the role. And if it's challenging, then you don't know the role well enough.
Speaker 2:Don't hire yet. If you're not going to be able to know whether you made a good hire 12 months in, are you actually ready to spend hundreds of thousands of dollars hiring that person? Probably not.
Speaker 1:So hiring should be looked at as an investment, right? What is going to be the return on that investment? We should have a very clear ROI in mind. I don't know why this is randomly coming to mind, but Sam Jacobs, I don't know if you guys know him, he's the founder and CEO of a networking group called Pavilion, in the tech industry.
Speaker 1:It was Revenue Collective before, yeah, the Revenue Collective. It's a cool group. Wes, I don't know if you're familiar, but they have a CEO group that might be interesting for you guys. But anyway, one of the things he always talks about is that headcount is not scale, right? Scale is unit economics: your revenue, your margin, scaling at a rate where you're making more money because of certain investment decisions. And he often says a company's mistakes scale with just growing teams, which is often really just a cost burden. So it's getting into that mindset of ROI surrounding hires. If there isn't a clear way to trace back how a hire is helping the company achieve the North Star metric, why is that hire being made? And you're right, that's how we should be thinking about everybody, and how we should be thinking about putting together job descriptions. There should be a very real, tangible ROI, and it doesn't have to be a sales role; it should be any role within the company.
Speaker 3:That should be your requirement, especially for executive roles, because they vary so much depending on your context. That's such a good exercise. I like the A Method for Hiring, that's the one I use, but it sounds very similar to what you're describing, Elijah: what are the accomplishments you want this person to have? One time I went through that exercise and realized what I actually wanted to hire. I was like, I'm going to hire a VP of marketing, and then, no, I don't need a VP of marketing to do these things; I need an entry-level person. It'll be way cheaper, and they will actually like their job, versus if I get a VP to try to update AdWords, they're going to hate me. It saved me $100,000 and a lot of pain.
Speaker 1:Oh yeah, the opportunity cost. Getting an executive hire wrong is literally a seven-figure problem.
Speaker 1:At least. For a small to medium-sized company, it's a seven-figure problem; for a bigger company, you're talking multi-million dollar. It's just nuts. On the flip side, what's also really interesting is that when I'm evaluating talent, one of the things I look for if I'm hiring for my team is: does the person I'm hiring understand how their role directly impacts North Star metrics? Do they understand the correlation between their activity and what actually matters, and how it's driving the business forward? Are they aware of their environment in terms of how they might be doing that? Do they have ideas on more efficient ways to do it, on how they think the team could operate? And I do this for ICs. Even if they're not in a strategic role, I want to know if they have that self-awareness and, to some extent, business acumen, and whether they really understand the impact they need to have, versus just tactically operating day to day.
Speaker 3:Yeah, that's how you level up an organization: make sure everyone knows how they impact the level above. And that's hard to do. I read a book called Turn the Ship Around! It's about a submarine, among other things, and the author's approach is called leader-leader, where basically the person reporting to you should do the thing and tell you they're doing it while they're doing it, so you can correct it, but they just go and do it. That shows they know the next level. And one of the things in the book is asking: do people know what they're connected to? I was like, yeah, I'm crushing this. Then I went and asked, I asked that in every one-on-one for the next two weeks, and I was not crushing it.
Speaker 1:You're like, oh, everyone knows, of course.
Speaker 3:They know how this goes to revenue, or more customers, or happier customers? Nope.
Speaker 1:About half the people did, half did not. It's hard to do. Yeah. Also, one of my go-to questions when I was scaling out my team aggressively in 2022, when we were hiring recruiters every month: I would ask recruiters with, like, two years of experience, hey, if you were made CEO of your current employer tomorrow, what would be your top three initiatives and why? What would you double down on, what would you change, what would you discontinue? I feel like I got so much value from that, again feeding into their awareness, their understanding of their role within the company and other people's roles within the company. To me, it makes a huge difference. And it goes both ways: the hiring team needs to understand people's point of impact, how it impacts North Stars, and the best candidates are going to understand that too.
Speaker 3:Yeah. And to go all the way back to resumes: if CSS ends up in the job requirements, and no one can trace why CSS on someone's skills list makes them effective in their first 90 days or their first year, then we should probably remove it and stop looking for it on resumes.
Speaker 1:Yeah, for sure. Look, this has been a really fun episode. I definitely learned a lot. I really enjoyed the EEOC conversation, and you definitely had me thinking about some different problems, challenges, and opportunities in a new way today, and I'm sure I'm not going to be the only one as people tune in here. It's definitely a lot of value you've shared with us today. Wes, thank you very much for taking the time to educate us and our audience on everything you're working on; it's really impressive, and we're really thankful you've come on the show today to talk to our community and help us out. Pleasure is mine. Thanks, guys.