
How to Hire Without an HR Department: A Field Guide for U.S. Small Businesses

The U.S. Small Business Administration counts roughly 33 million small businesses in the country. They are 99.9% of all U.S. businesses and employ nearly half of the private workforce [1]. The vast majority of them hire without an HR department. The owner interviews. The hiring manager interviews. Sometimes both interview together without aligning on criteria first. The decision comes out of a quick conversation at the end of the day: “I liked them, let’s offer.”

The problem is not the absence of HR per se. The problem is that, in a small business, a bad hire hurts more. You do not have 200 people to dilute the error. Each hire, from the second to the fifteenth employee, takes up a meaningful slice of the team, the payroll, and the culture.

This guide lays out the minimum viable process for hiring with rigor without a recruiter on staff. No invented bureaucracy. No outsourced decision. Just enough structure to cut the risk of the expensive error.

The rule that holds at any size

Decades of organizational psychology research have been clear and consistent: structured interviews predict on-the-job performance substantially better than unstructured ones. The McDaniel et al. (1994) meta-analysis [2] reported criterion-related validity nearly three times higher for structured interviews (.63 vs .20). Schmidt and Hunter’s 1998 synthesis of 85 years of selection research [3] reported a smaller but still substantial advantage (.51 vs .38). Wingate et al. (2025) confirmed the same pattern with modern data [4]. The rule does not change when the company is small. What changes is the margin for error: you simply cannot afford to hire wrong.

The good news: almost everything that makes a hire work can be done in one morning, before you see the first candidate. The rest is discipline during and after.

Before opening the role: define the criteria

Most errors start here. The owner or manager writes a generic job description copied from a competitor (“looking for a proactive, dynamic professional with an ownership mindset”), publishes it, gets resumes, interviews, and hires. At no point does the team stop to define, in concrete and documented form, what this specific role actually requires.

The minimum viable exercise: list 4 to 6 criteria for the role. For each:

  • What is the criterion? Not “communication”. Something specific: “ability to explain a technical decision to a non-technical stakeholder without losing precision”, or “track record of taking a B2B negotiation through to close on a deal of $50K+ without giving away margin”.
  • What weight does it carry? Is this criterion disqualifying, important but negotiable, or nice-to-have? Honest weights stop the first impression from dominating.
  • What does strong evidence look like? What would a strong candidate say, with what level of detail? Example: “strong evidence = candidate cites a specific deal with industry/segment, deal size, what the objection was, how they responded, and the measurable outcome”.
  • What does weak evidence look like? Generic answer, no concrete example, no numbers, no context.

This is what we call the Role Blueprint. It does not need to be pretty. It needs to be written down before the first candidate walks through the door. The interview scorecard template walks through how to turn the Blueprint into a tool you can score with during the conversation.
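If it helps to see the Blueprint as something more concrete than a sheet of paper, here is a minimal sketch of one as structured data. The field names and the two example criteria are illustrative assumptions, not a real schema:

```python
from dataclasses import dataclass

# Hypothetical sketch of a Role Blueprint as data. The three-level
# weight scale mirrors the article: disqualifying / important / nice-to-have.
@dataclass
class Criterion:
    name: str
    weight: str           # "disqualifying" | "important" | "nice-to-have"
    strong_evidence: str  # what a strong answer sounds like
    weak_evidence: str    # what a weak answer sounds like

blueprint = [
    Criterion(
        name="B2B closing, $50K+ deals",
        weight="disqualifying",
        strong_evidence="cites a specific deal: segment, size, objection, response, outcome",
        weak_evidence="generic answer, no numbers, no context",
    ),
    Criterion(
        name="explain technical decisions to non-technical stakeholders",
        weight="important",
        strong_evidence="concrete example: the audience, the decision, how precision was kept",
        weak_evidence="talks about 'communication skills' in the abstract",
    ),
]

for c in blueprint:
    print(f"[{c.weight}] {c.name}")
```

The point of writing it down in any fixed shape, paper or file, is that the same fields get filled in for every criterion before the first interview.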

Before opening the role: write an honest job description

Once the criteria are defined, the JD becomes easy to write and attracts the right candidates. Instead of “looking for a dynamic professional” (attracts everyone), you write “looking for someone who has closed at least 2 B2B negotiations on tickets over $50,000 in the last 24 months, with direct accountability for margin” (attracts people who actually have what the role needs).

Practical effect: you receive fewer resumes, but the fit rate goes up substantially. SHRM has identified rushed and overly-broad job postings as a leading cause of bad hires [5]. For an SMB, that self-selection filter is worth its weight in gold.

During screening: read the resume against criteria, not impression

For every resume, open the Role Blueprint and mark, criterion by criterion: does this candidate have documented evidence for this? Partial evidence? A gap to investigate in the interview?

You finish screening with something concrete: “this candidate covers 4 of 6 criteria from the resume; these 2 are gaps to probe in the interview”. Compare that to normal screening, based on “I liked the resume”. The first protects you from the Performer who has a polished resume but little substance. The second does not.
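The screening pass described above can be sketched in a few lines. The criteria names and marks below are invented placeholders, assuming a three-way call per criterion (covered, partial, gap):

```python
# Hypothetical screening pass: mark each Blueprint criterion against the
# resume, then summarize what still needs probing in the interview.
criteria = ["B2B closing $50K+", "margin accountability", "pipeline discipline",
            "stakeholder communication", "CRM hygiene", "territory planning"]

resume_marks = {
    "B2B closing $50K+": "covered",
    "margin accountability": "covered",
    "pipeline discipline": "covered",
    "stakeholder communication": "covered",
    "CRM hygiene": "partial",
    "territory planning": "gap",
}

covered = [c for c in criteria if resume_marks.get(c) == "covered"]
gaps = [c for c in criteria if resume_marks.get(c) != "covered"]
print(f"covers {len(covered)} of {len(criteria)} criteria from the resume; "
      f"{len(gaps)} to probe in the interview: {', '.join(gaps)}")
```

The output of that summary line is exactly the artifact you want going into the interview: which criteria are settled, and which two need a concrete example before you decide.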

During the interview: same opening questions, adaptive depth

This is the golden rule of the semi-structured model: every candidate for the same role starts from the same place. The same 4 to 6 opening questions, one per criterion, open every interview.

From there, the conversation diverges. If the answer was specific and substantive, you go deeper (ask for more detail, ask for contrast with another situation, test the claim). If it was vague, you ask for a concrete example before moving on. If the resume already covered the criterion, you can spend less time there and more on a gap.

But the starting point is the same for everyone. Without that, you are comparing five conversations that happened in five parallel realities, and the final decision inevitably falls on the candidate you “remember best”, which is almost always whoever spoke most fluently, not whoever delivered the most evidence.

During the interview: capture evidence, not impression

The worst hiring decision is the one that forms in the interviewer’s head five minutes after the call. By that point, what survives is impression, not evidence. The candidate who spoke fluently feels stronger. The candidate who was nervous feels weaker. They will be judged on the polish of the presentation, not on the content of what they said. This phenomenon is the halo effect, documented across decades of cognitive psychology since the seminal work of Nisbett and Wilson [6]: a single positive attribute (verbal fluency, confident posture) contaminates evaluation of every other dimension, and most of the decision ends up being made in the first minutes of the conversation, before any substantive evidence appears.

To avoid this: during the interview, write down what the candidate actually said. Not “answered well”, but “said: led the migration of a 12-person team from monolith to microservices over 9 months, KPI was deploy frequency, went from weekly to daily”. Real quotes, tied to specific criteria.

If you are interviewing alone, keep a notebook open. If the conversation is on video, consider a transparent note-taking or transcription tool (with disclosure to the candidate; never record without notice).

After the interview: the brief memo

Before any “I liked them”, write a short document, even if only for yourself. One paragraph per criterion, with cited evidence. One line of decision: strong / mixed / weak for each. At the end, one sentence: “recommend / do not recommend, and why”.

That document does three things:

  1. Forces you to look at evidence before deciding, instead of impression.
  2. Lets you compare candidates honestly, side by side.
  3. Defends the decision later, in compliance reviews or simply if the hire does not work out and your co-founder or board wants to understand what happened.

The more formal your business becomes over time, the more you will appreciate having started this habit early.

What helps additionally when the team is small

A few practices that further reduce risk in a small-business context:

  • More than one decision-maker. When possible, two people interview the candidate (separately or together) and compare memos before deciding. This dramatically cuts the chance of a hire driven by personal affinity.
  • Probationary period taken seriously. U.S. employment is at-will in every state except Montana, but the first 90 days are by convention the period during which expectations are calibrated. Use those 90 days to validate against objective indicators whether the hire matched expectations. It is not “let’s see how it goes”. It is “by day 90, I am evaluating against these 5 indicators that I defined today”.
  • Do not outsource the decision to a recruiter without written criteria. Recruiters are useful for sourcing. The decision of who to hire belongs with you, against your criteria.
  • Accept that you might not hire. The worst hire is the one that happened because “we needed to fill the role”. The total cost of a bad hire, per SHRM, is between 0.5x and 2x annual salary [5]. The cost of waiting another 30 days for the right candidate is, in almost every scenario, a fraction of that.
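The cost comparison in that last bullet is worth doing as arithmetic for your own numbers. A back-of-envelope version, assuming an illustrative $80,000 salary and approximating a 30-day vacancy as one month of salary (a rough proxy, not a rule):

```python
# Back-of-envelope comparison using SHRM's 0.5x-2x bad-hire cost range.
annual_salary = 80_000  # illustrative assumption

bad_hire_low = 0.5 * annual_salary   # low end of the SHRM range
bad_hire_high = 2.0 * annual_salary  # high end of the SHRM range

# Cost of waiting 30 more days, approximated as one month of salary.
wait_30_days = annual_salary / 12

print(f"bad hire: ${bad_hire_low:,.0f} to ${bad_hire_high:,.0f}")
print(f"waiting 30 days: ~${wait_30_days:,.0f}")
```

Even at the low end of the range, the bad hire costs several times the wait.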

When you don’t have HR, you ARE the discipline

The method above works at any company size. But without an HR department, you are the only thing keeping the discipline alive. Every step (write criteria, draft questions, score against rubric, capture evidence live, write the memo) lands on the founder or hiring manager who is already running the rest of the business.

The most common failure modes are predictable:

  • The criteria are written for hire #1, then forgotten by hire #2; every role starts from scratch
  • The rubric stays mental, never written down, so scoring drifts week by week
  • Evidence capture is abandoned the moment the conversation gets real, because nobody can interview, document, and read the room at the same time
  • The post-interview memo turns into “I’ll write it later” and becomes “I remember I liked them”

Each of those failures cancels the gain from structure, and the cost of each failure is exactly what this discipline exists to prevent (SHRM puts the total cost of a single bad hire at 0.5x to 2x annual salary).

Recrutador is a Hiring Intelligence Platform built for businesses that hire without a dedicated HR department. The Strategist (a chat-first AI consultant) defines the Role Blueprint with you and persists it across hires. The Job Description is generated from the Blueprint. Resumes are ranked by it. During the live interview, the desktop HUD listens, transcribes in real time, and surfaces the next right probe one action at a time, so you can focus on reading the candidate. At the end, the Post-Interview Memo is generated automatically with quoted evidence and a structured decision.

It works for any role, any seniority. Proven in production from CTO to long-haul truck driver, on the same engine, with no role-specific code.

If you want to understand more, read What is Recrutador. If you want to go straight in, get started.

References

  1. U.S. Small Business Administration, Office of Advocacy. Frequently Asked Questions About Small Business. There are approximately 33 million small businesses in the U.S., representing 99.9% of all firms and employing nearly half of the private workforce.

  2. McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). The Validity of Employment Interviews: A Comprehensive Review and Meta-Analysis. Journal of Applied Psychology, 79(4), 599-616.

  3. Schmidt, F. L., & Hunter, J. E. (1998). The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 85 Years of Research Findings. Psychological Bulletin, 124(2), 262-274.

  4. Wingate, T. G., et al. (2025). Evaluating interview criterion-related validity. International Journal of Selection and Assessment.

  5. Society for Human Resource Management. The Cost of a Bad Hire Can Be Astronomical. SHRM puts the total cost of replacing an employee at 0.5x to 2x annual salary, with management roles often reaching the upper end.

  6. Nisbett, R. E., & Wilson, T. D. (1977). The Halo Effect: Evidence for Unconscious Alteration of Judgments. Journal of Personality and Social Psychology, 35(4), 250-256.