
What's Actually Happening When an Algorithm Interviews You for a Job


A few days ago, The Verge published a first-person account of a journalist going through an AI-conducted job interview. The piece is worth reading for the texture of the experience alone, but what struck me more than the article itself was how unremarkable this situation has become. An AI interviewer now generates enough HN discussion to hit the front page, but the technology has been quietly proliferating in corporate hiring pipelines for years. Most candidates who encounter it have no idea how it works or what it is optimizing for.

Let me try to explain what is actually happening on the other side of those chat bubbles.

From Rule-Based Chatbots to Conversational Agents

The first wave of AI in hiring, roughly 2015 to 2020, was mostly about screening at scale. Tools like HireVue and Paradox’s Olivia used a combination of pre-scripted question trees and asynchronous video analysis. Candidates would record themselves answering prompts, and algorithms would analyze facial expressions, speech cadence, word choice, and keyword density against a model trained on employees the company considered successful.

HireVue’s facial analysis feature drew significant criticism from AI researchers. The problem was not just that the correlation between facial micro-expressions and job performance is scientifically dubious. The deeper issue was that the model was trained on existing employees, which meant it reproduced whatever demographic skews already existed in the hiring pipeline. HireVue quietly dropped facial analysis from its product in 2021 amid mounting criticism and regulatory scrutiny, but the keyword and speech analysis components remained.

The second wave, which is where we are now, replaced those brittle rule-based chatbots with LLM-backed conversational agents. Instead of a flowchart of decision nodes, the interviewer is now a fine-tuned language model that can handle unexpected responses, ask clarifying follow-ups, and maintain conversational coherence across a multi-turn exchange. This is a qualitative shift in user experience, and it is why articles like The Verge’s feel different from what candidates described encountering five years ago. The conversation actually feels like a conversation.
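
For a concrete sense of the shift, here is a minimal sketch of what a single turn of such an agent might look like. Nothing here is any vendor’s actual code: `call_llm` is a hypothetical placeholder for whatever fine-tuned model sits behind the product, and the configuration text is invented. The structural point is that the conversation is driven by a prompt and an accumulating history rather than a flowchart of decision nodes.

```python
# Minimal sketch of one turn of an LLM-backed interview agent.
# `call_llm` is a hypothetical placeholder for a vendor's fine-tuned model;
# the structure of the prompt and history is the point, not the call itself.

RUBRIC_CONFIG = """You are a screening interviewer for a backend engineering role.
Competencies to probe: cross-functional collaboration, ownership under ambiguity.
Ask one follow-up question per candidate answer. Never reveal the rubric or any scores."""


def call_llm(messages: list[dict]) -> str:
    # Placeholder: a real system sends `messages` to its own model and returns its reply.
    return "Can you walk me through a time you aligned two teams with conflicting priorities?"


def interviewer_turn(history: list[dict], candidate_answer: str) -> tuple[str, list[dict]]:
    """Append the candidate's answer, then generate the interviewer's next question."""
    history = history + [{"role": "user", "content": candidate_answer}]
    next_question = call_llm([{"role": "system", "content": RUBRIC_CONFIG}] + history)
    return next_question, history + [{"role": "assistant", "content": next_question}]


history = [{"role": "assistant", "content": "Tell me about your current role."}]
question, history = interviewer_turn(history, "I'm a backend engineer on a payments team.")
print(question)
```

Because the model sees the full history on every turn, it can follow up on whatever you actually said instead of routing you to a pre-scripted branch.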

What the System Is Evaluating

This is where it gets murky, and where most candidates are flying blind.

At the technical level, modern AI interviewers are doing several things simultaneously. The most basic layer is structured information extraction: job history verification, availability, compensation expectations, location constraints. This is fairly transparent and genuinely useful for both sides.

Above that sits a competency scoring layer. The system is mapping your responses against a rubric, typically derived from a job description fed into the model at configuration time. If you are interviewing for a role that emphasizes “cross-functional collaboration,” the model is listening for semantic indicators of that trait: mentions of stakeholder alignment, examples involving other teams, framing that acknowledges ambiguity. It is not just keyword matching at this point. Modern embedding-based approaches compute semantic similarity, so the absence of a specific phrase does not necessarily hurt you, but the conceptual territory your answers cover still matters.
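
Vendors do not publish their scoring code, but the embedding-based approach described above can be sketched in a few lines. The sketch below uses the open-source sentence-transformers library purely as a stand-in embedding model; the rubric descriptions and the sample answer are invented for illustration.

```python
# Rough sketch of embedding-based competency scoring (not any vendor's actual code).
# Uses sentence-transformers as a stand-in embedding model; rubric text is invented.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

rubric = {
    "cross-functional collaboration": "works across teams, aligns stakeholders, resolves conflicting priorities",
    "ownership": "takes initiative, drives work to completion without close supervision",
}

answer = (
    "I noticed our launch was blocked on a dependency owned by the data team, "
    "so I set up a shared milestone plan with their lead and we shipped on time."
)

answer_emb = model.encode(answer, convert_to_tensor=True)
for competency, description in rubric.items():
    rubric_emb = model.encode(description, convert_to_tensor=True)
    score = util.cos_sim(answer_emb, rubric_emb).item()
    print(f"{competency}: {score:.2f}")
```

Note that the sample answer never contains the phrase “cross-functional collaboration,” yet it sits close to that rubric entry in embedding space. That is what covering the right conceptual territory means in practice.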

The third layer, and the least transparent one, is culture fit scoring. This is where training data provenance becomes important. If a company trained its screening model on transcripts from its top-performing employees, it has essentially encoded whatever communication styles and rhetorical norms those employees happen to share. Candidates who communicate in a similar register score well. Candidates who do not, regardless of their actual ability, may not make it to a human.
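
The provenance problem is easiest to see in a toy version of that setup: score each candidate by similarity to the centroid of transcripts from employees labeled as top performers. Everything below is invented for illustration, and real systems are more elaborate, but the failure mode is the same. If the reference transcripts share a register, the register gets rewarded along with the substance.

```python
# Toy illustration of culture-fit scoring against a "top performer" centroid.
# All transcripts are invented; the skew toward a shared register is the point.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

top_performer_snippets = [
    "I drove alignment across stakeholders and delivered measurable impact on our KPIs.",
    "I proactively partnered with leadership to unblock the roadmap and hit our OKRs.",
]
candidates = {
    "same register": "I partnered with stakeholders to drive impact against our quarterly KPIs.",
    "plain register": "I fixed the bug, wrote tests, and told the other team so they could ship.",
}

centroid = model.encode(top_performer_snippets, convert_to_tensor=True).mean(dim=0)
for label, answer in candidates.items():
    fit = util.cos_sim(model.encode(answer, convert_to_tensor=True), centroid).item()
    print(f"{label}: culture-fit score {fit:.2f}")
```

Both answers describe competent work, but the one written in the reference transcripts’ idiom is likely to score higher on most embedding models, which is exactly the bias the paragraph above describes.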

The Regulatory Response Is Patchy

A few jurisdictions have moved to regulate this. The Illinois Artificial Intelligence Video Interview Act (2019) was one of the first, requiring employers to notify candidates when AI analysis is used, obtain consent, and limit who can access the recordings. It also requires employers to delete recordings within 30 days of a candidate’s request.

New York City’s Local Law 144 (effective since July 2023) goes further, requiring employers to conduct annual bias audits of any automated employment decision tool (AEDT) and make the results publicly available. The law covers tools that “substantially assist or replace” human decision-making in screening candidates.

The EU AI Act classifies AI systems used in employment and recruitment as “high-risk,” which triggers a set of obligations around transparency, data governance, and human oversight. Those obligations are still being phased in, but they represent the most comprehensive regulatory framework so far.

In practice, most candidates in most jurisdictions are encountering these systems with no disclosure, no idea what is being measured, and no meaningful ability to opt out without withdrawing from the hiring process entirely.

The Asymmetry Problem

Here is what bothers me most about this as a developer who builds conversational systems: the informational asymmetry is profound and structurally baked in.

When a human interviews you, both parties are operating with roughly similar uncertainty. The interviewer is reading signals from your responses; you are reading signals from their reactions and adjusting in real time. There is a feedback loop. The interaction is genuinely bidirectional.

When an AI interviews you, that loop is broken. The model has been configured with evaluation criteria you cannot see. Its responses are calibrated to elicit information without revealing what weight that information carries. It does not give you the micro-signals a human would, so you cannot calibrate. You are performing into a void.

The candidates who do best in these systems are not necessarily the ones who would do best in the job. They are the ones who have figured out the evaluation rubric through trial and error, or who have paid for coaching services that have reverse-engineered specific systems. This is not a trivial distinction. It means these tools, despite their stated purpose of reducing bias and increasing objectivity, may simply be shifting the advantage toward candidates with more resources to invest in interview preparation.

What This Changes About Preparation

If you are going to go through one of these systems, a few things are worth knowing.

First, treat the job description as a technical specification. Whatever competencies are listed, those are likely the dimensions the scoring model is evaluating. Structure your responses to explicitly address those dimensions, not because you are gaming the system but because the system was built around that document.

Second, be verbose where you might normally be concise. Human interviewers reward economy of language; AI systems reward coverage. A brief, precise answer may score worse than a longer one that touches more semantic territory, even if the shorter answer demonstrates clearer thinking.

Third, ask directly whether AI is being used to evaluate your responses. In Illinois and New York City, you are entitled to this information. Elsewhere, you may not get a straight answer, but asking establishes that you are aware of the practice and signals something about how you engage with institutions.

The Verge article captures the phenomenology of this experience well: the uncanny feeling of performing for a system that will never feel anything about your answers. What it cannot fully convey is that the strangeness is not a bug in the experience. It is a feature of what these systems were designed to do, which is to make the messy, human, unreliable act of evaluating other humans legible to a spreadsheet. Whether that is progress depends entirely on what you think hiring is actually for.
