AI in Interviews: Assistance or Cheating?
How AI is turning job interviews into a game of cheat-or-be-cheated
Welcome to this week’s edition!
Today, we talk about a 21-year-old who built a $1 million business helping people cheat on tech interviews.
This leads us to the following questions:
Is the hiring process broken, and will AI fix it or further dismantle it?
Where do we draw the line between 'assistance' and 'cheating' in the age of AI, and who gets to decide?
If AI can write code and ace interviews, what skills will truly matter in the future?
Does the rise of AI-assisted cheating erode trust between employers and potential hires, and what are the long-term consequences?
Let’s dive in!
The $1 Million Cheating Startup
To prepare for software engineering interviews, 21-year-old Columbia student Chungin 'Roy' Lee spent over 600 hours on LeetCode, a vast library of coding and algorithmic problems that is currently a key tool in the hiring process at many tech companies. Frustrated by the experience, and convinced that this type of test is the equivalent of companies evaluating candidates based on ‘how many jumping jacks they can do’, he set out to create InterviewCoder, an undetectable tool that can be used during technical interviews to ‘cheat’ and have AI answer the problems.
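For readers who have never opened LeetCode, here is my own illustration of the genre (the classic ‘two sum’ puzzle; the code is mine, not from the platform or from Roy’s tool): given a list of numbers and a target, return the indices of two entries that add up to the target.

```python
# A typical LeetCode-style exercise (my illustration): find two indices whose
# values sum to a target, in a single pass using a hash map.
def two_sum(nums: list[int], target: int) -> list[int]:
    seen = {}                          # value -> index of entries already visited
    for i, n in enumerate(nums):
        if target - n in seen:         # the complement appeared earlier
            return [seen[target - n], i]
        seen[n] = i
    return []                          # no pair sums to the target

print(two_sum([2, 7, 11, 15], 9))      # [0, 1], since 2 + 7 == 9
```

It is exactly this kind of self-contained puzzle that modern AI assistants solve in seconds, which is what makes Roy’s jumping-jacks comparison sting.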
You can see how InterviewCoder works in this video:
So what did he do with it?
Roy applied to big tech companies including Amazon, Meta, and TikTok and used the tool in their interviews, landing job offers from all of them, which he promptly refused. His goal? Make the following point very loud and clear: “Everyone programs nowadays with the help of AI, so it doesn’t make sense to have an interview format that assumes you don’t have the use of AI”. In the process, he’s out to make $1M ARR by selling $60/month subscriptions to thousands of users all over the world.
Companies and hiring managers did two things, as often happens:
Publicly distanced themselves
Secretly admired him (and offered him several jobs)
The saga obviously continues as Roy shares his journey ‘fucking with big tech’ (the title of his LinkedIn bio) on social media.
Defining the Line: Assistance vs. Cheating
This all reminded me of when I was interviewing (circa 2015; more about my experience with interviews in ‘Is it Hiring or Speed Dating?’) and was asked to sum two complex fractional numbers. My answer, ‘there’s Excel for that, thankfully’, wasn’t well received, but I felt that the task had no correlation with the role, and I was pretty frustrated.
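For what it’s worth, the point has only gotten stronger: here is a minimal sketch of how trivially Python’s standard library handles that kind of arithmetic today (the specific fractions are my invention, not the ones I was asked about):

```python
# Exact fraction arithmetic with the standard library: no rounding errors,
# no mental math required.
from fractions import Fraction

print(Fraction(7, 12) + Fraction(5, 18))   # 31/36, computed exactly
```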
Are we just living through a moment in time like the ones that came with calculators and then Excel?
The irony is that companies like Anthropic (who are building AI tools) explicitly ask applicants to refrain from using this technology in job applications:
Now, of course it is different to use AI to solve a technical problem than to use it to tailor your CV or write a cover letter, for example. But companies can use an ATS (Applicant Tracking System) that relies on AI to parse applications, scan resumes for keywords, and so on, and these systems are far from perfect and ‘human’. Also, I wonder: how does a cover letter really show communication skills? Is the CV enough in 2025? (Spoiler: it isn’t; I wrote about this in ‘The Death of the CV’).
So is using AI cheating?
Maybe.
Maybe it's just smart.
Maybe it's a necessary adaptation to a system that's increasingly out of touch with reality. We're asking people to play by rules that are designed for a world that no longer exists. And in a world where AI is becoming an integral part of nearly every profession, insisting on a purely "human" application process feels increasingly arbitrary.
The real question should be: what are we trying to measure in the first place? If we're looking for problem-solvers, innovators, and collaborators, then maybe we need to focus on rethinking the hiring process.
In the meantime, hiring managers are starting to flock to social media, suggesting that even if candidates do use AI to cheat, they obviously fall short when asked to explain their reasoning:
"However, literally 4/6 of them have obviously been using AI resources very blatantly in our interviews...clearly reading from their second monitor, creating very perfect solutions without an ability to adequately explain motivations behind specifics, having very deep understanding of certain concepts while not even being able to indent code properly, etc."
I’m honestly torn on this issue. On one hand, I use AI tools daily to accelerate my workflow. I understand why someone would use these, and theoretically, their answers to my very basic questions are perfect.
My fear is that if they’re using AI tools as a crutch for basic problems, what happens when they’re given advanced ones?
The Broken Hiring Process: Leetcode and Beyond
But let’s keep our eyes on candidates for a moment. Take a stroll around subreddits like r/Recruitinghell and r/Jobs these days, and you’ll find communities of hundreds of thousands of candidates complaining about the hiring process, namely:
Crazy Application Forms and Questions: Lengthy, repetitive forms asking for information already on resumes, and bizarre, irrelevant questions that seem designed to test patience rather than skills.
ATS and Automatic Rejections: Algorithms filtering out qualified candidates based on keywords, often missing the nuances of real-world experience (see the sketch after this list).
Ghosting and Fake Job Postings: Companies abruptly ending communication with candidates, leaving them in limbo, and fake postings designed to fish for candidates’ information.
Endless Rounds of Interviews: A seemingly endless gauntlet of screenings, technical challenges, and personality assessments, draining candidates' time and energy.
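To make the ATS complaint concrete, here is a deliberately naive keyword screen of my own invention (the keywords and resume text are hypothetical; real systems are more sophisticated, but they fail in the same spirit):

```python
# A deliberately naive keyword screen (hypothetical; not any real ATS).
REQUIRED_KEYWORDS = {"python", "django", "aws"}   # hypothetical posting keywords

def passes_screen(resume_text: str, required: set[str], threshold: int = 2) -> bool:
    """Pass only if the resume literally contains enough required keywords."""
    words = set(resume_text.lower().split())
    return len(required & words) >= threshold

resume = "Built REST services in Python; deployed on Amazon Web Services."
print(passes_screen(resume, REQUIRED_KEYWORDS))   # False: "python;" carries a
# stray semicolon, "aws" is spelled out in full, and "django" never appears.
```

A human reader sees an obviously relevant candidate; the literal match sees nothing.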
Why?
1️⃣ Platforms like LinkedIn and Indeed have made it too easy for candidates to apply for jobs. Most people play the numbers game, sending as many applications as possible, hoping to land a job or at least an interview.
2️⃣ This leaves HR managers with two options:
• Go through hundreds or thousands of resumes, which is physically impossible.
• Focus on a small group of applicants and hope to find good candidates.
Both options are problematic. In the first case, HR managers waste valuable time combing through resumes instead of focusing on other duties. In the second, qualified candidates may never get their resumes reviewed, turning job hunting into a near-impossible task.
There’s no easy or one-size-fits-all solution, but the ingredients will include:
Skill-based Hiring - Ditch the Diplomas, Demand Demonstrations:
We're not talking about just listing skills on a resume; we're talking about proving them. Real, tangible abilities that directly translate to the job. Forget the pedigree and the fancy degrees. Can you code a working application? Can you design a user interface that doesn't make people want to throw their computers out the window? Can you manage a team without causing a mutiny?
This means assessments that mirror the actual work, not abstract puzzles designed to test how well someone can memorize algorithms. Work on real-world scenarios and see how they perform. If they can build a functional prototype or solve a complex problem, who cares where they went to school? It's about results, not resumes.
Proof of Work - Portfolios, Projects, and Tangible Evidence:
Talk is cheap. Portfolios, code repositories, design mockups, project reports – these are the real indicators of competence. Let's move away from hypothetical questions and focus on concrete examples. Ask candidates to demonstrate their abilities.
Paying for Candidate Time - Respecting the Investment, Reducing the Waste:
If a company is going to make someone jump through countless hoops – multiple interviews, lengthy assessments, take-home projects – they should be compensated for their time. This approach also serves as a natural filter. Companies will think twice about dragging candidates through endless rounds if they have to pay for it.
Realistic Job Descriptions - Honesty, Not Fantasy:
Ditch the unicorn job postings. No one is a "rockstar ninja guru" who can juggle a dozen languages and frameworks while simultaneously predicting the future. Let's be honest about the actual requirements and expectations. It's not about finding the perfect candidate; it's about finding the right fit. Suggestion: let’s introduce hard standards and application vetting before these postings go online.
The Future of Skills: AI Management & Critical Thinking
Now, even if we do manage to make these improvements, it’s inevitable that one of the most important skills of the next few decades will be knowing how to work with and manage AI. This means we will need to test how well candidates ‘cheat’, that is, use the tools available to make their applications, and even more importantly their jobs, better.
How do we do that?
We need to move beyond simply asking, "Can you code?" and start asking, "Can you collaborate with AI to solve complex problems?" We need to assess not just technical proficiency, but also the ability to:
Prompt Engineering and AI Direction:
Give candidates a real-world problem and ask them to use AI tools to generate solutions. How well can they formulate prompts, refine AI outputs, and integrate AI-generated code into a larger system?
AI Output Evaluation and Critical Analysis:
Can candidates critically evaluate AI-generated code, identify potential errors, and understand the limitations of the tools they're using? Can they distinguish between a brilliant solution and a convincing-sounding hallucination? How long does it take them to improve the output? (A sketch of such an exercise follows this list.)
Ethical AI Usage and Responsible Implementation:
Can candidates demonstrate an understanding of the potential biases and pitfalls of AI? Can they design and implement AI solutions that are fair, transparent, and accountable?
Adaptability and Continuous Learning:
The AI landscape is constantly evolving. Can candidates demonstrate a willingness to learn new tools and adapt to changing technologies? How do they approach continuous learning and stay up-to-date with the latest AI advancements?
Real-World Scenarios and Collaborative Projects:
Set up real-world scenarios that require candidates to use AI tools in a collaborative setting. How well can candidates work with AI and human team members to achieve a common goal?
The "Why" Behind the AI:
It is not enough to get the correct answer. We must test whether the candidate understands the process the AI took to get to that answer. Can they explain the logic and reasoning behind the AI's output?
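What could such a test look like in practice? Here is a hypothetical exercise of my own devising: hand the candidate a convincing-looking, supposedly AI-generated snippet and ask them to review it aloud. This binary search reads well and usually works, yet it hangs on simple inputs; spotting the flaw, and explaining why it is a flaw, is exactly the evaluation skill described above.

```python
# Hypothetical review exercise: this "AI-generated" binary search looks clean,
# but it can loop forever. Ask the candidate to find and explain the bug.
def binary_search(items: list[int], target: int) -> int:
    lo, hi = 0, len(items) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if items[mid] < target:
            lo = mid            # BUG: should be mid + 1; when hi == lo + 1 and
                                # items[lo] < target, lo never advances.
        else:
            hi = mid
    return lo if items and items[lo] == target else -1

# binary_search([1, 3], 3) never returns: lo stays 0, hi stays 1, mid stays 0.
```

A candidate who merely reruns the snippet through another model learns little; one who can articulate why `lo = mid` stalls the loop has demonstrated exactly the critical analysis we are trying to measure.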
Erosion of Trust: Long-Term Consequences
Until we build this new, better road for the job market, we will keep eroding one of the most underrated elements of the whole game: trust.
Trust is the currency of relationships, and relationships are the cornerstone of effective collaboration and therefore organizations.
This reliance on AI, while seemingly efficient, is chipping away at the foundation of trust. Each instance of AI-optimized resumes or AI-generated interview responses creates a sense of detachment, a suspicion that we're not seeing the authentic candidate.
A trust gap has formed across all levels of the workforce, with employees demonstrating even lower levels of trust in AI than those in leadership positions. At the leadership level, only 62% welcome AI, and only 62% are confident their organization will ensure AI is implemented in a responsible and trustworthy way. At the employee level, these figures drop even lower, to 52% and 55%, respectively. This is the AI Trust Gap.
The consequences of this erosion are significant:
Increased Skepticism and Lengthy Processes: Hiring managers, already overwhelmed, become even more wary, leading to longer, more intrusive screening procedures. This adds to the existing frustrations of a broken hiring process, and further alienates candidates.
Dehumanization of the Candidate Experience: Candidates feel reduced to keywords and algorithms, leading to disengagement and a sense of being treated as a resource rather than a person. This can negatively impact employer branding, and make it harder to attract top talent.
Misalignment of Skills and Culture: A system that prioritizes AI manipulation over genuine connection risks placing candidates in roles where they don't truly fit. This leads to higher turnover, lower productivity, and a decline in overall team morale.
The Rise of Digital Facades: We risk creating a system where the "best" candidate is the most skilled at gaming the system, not necessarily the most qualified or culturally aligned. This fosters a culture of dishonesty and manipulation, which ultimately undermines organizational success.
Long-Term Impact on Company Culture: As trust erodes, so does the sense of community and collaboration within organizations. A culture of suspicion and detachment can stifle innovation and creativity.
Ultimately, we must ask ourselves: are we building a future where genuine human connection is valued, or one where it's replaced by digital trickery? The answer will determine not just the future of hiring, but the future of work itself. We need to create a system that acknowledges the role of AI, but also prioritizes the importance of human connection and trust.
Cheating has always existed, and it will always find new forms in the face of technological advancement.
Instead of playing whack-a-mole with AI-assisted deception, we need to address the root causes.
And brace for AI wearables to kick in.
Ciao,
Matteo
The logical conclusion of 4 decades of bad hiring practices. Nicely explained.
My take on the hiring process is that it has long been broken.
Please allow me to share. The best anecdote about a flood of résumés competing for just a few openings came from an HR manager, back when job applications were still sent by mail. His company was on the verge of bankruptcy, and the CEO walked into the room demanding urgency in selecting a few candidates. Realizing that a proper selection would take days, the CEO gave his HR manager the following instruction:
"Throw the résumés into the air, and I’ll grab some as they fall."
Seeing the HR manager’s shocked and bewildered expression, the CEO added:
"Given the state of this company, what we need most are lucky candidates on our team."