Candidates Using AI in Remote Job Interviews
In this post, I'll review how some technical candidates may be using AI during interviews. These tools can supply answers they otherwise might not know.
Tim Kenney
10/14/2025
3 min read


The AI-augmented interview: what candidates are really using (and why it’s hard to spot)
TL;DR: In remote interviews, candidates can quietly layer AI tools for real-time coaching, note-taking, translation, coding help, and even eye-contact correction—often with zero visible footprint to the interviewer. Traditional “AI detection” doesn’t work here, and most signals you’d look for are smoothed by system-level video/audio effects.
The live toolkit candidates actually use
Real-time speaking coaches (overlay while they talk)
Yoodli gives live pacing/filler-word nudges and interview drills; it can run alongside Zoom/Meet/Teams.
Poised floats live suggestions and post-call analytics on top of your video app.
Meeting copilots & AI notetakers (quiet, persistent, searchable)
Otter Meeting Agent now answers questions during meetings and produces summaries/action items; it can join Zoom/Teams/Meet automatically.
Microsoft Copilot in Teams summarizes, proposes action items, and answers real-time questions from the meeting transcript.
Zoom AI Companion creates meeting summaries and, in newer releases, interoperates beyond Zoom.
Tactiq (Chrome extension) transcribes Meet/Zoom/Teams without a bot joining—so nothing new appears in the attendee list.
Fireflies.ai records, transcribes, and now supports voice-activated Q&A during calls.
Sidebar assistants (answers on a second screen or in a browser side panel)
HARPA AI, Merlin, Monica: multi-model browser sidebars that summarize pages, draft replies, and surface facts on any site via hotkeys—perfect for “quiet lookups” mid-interview.
Coding copilots (for live exercises)
GitHub Copilot Chat explains code, writes tests, and suggests solutions inside the IDE; many engineers keep it open off-screen.
Cursor and Codeium provide chat, inline completions, and refactors that can speed up “on-the-spot” coding.
Teleprompters & overlays (for prepared answers)
Speakflow Overlay, Blocks (Meet teleprompter), and other teleprompter extensions can float a scrolling script over the call window, near the camera.
“Second brain” & recall
Rewind records your screen/audio locally and makes past conversations instantly searchable for quick recall of names, metrics, and project details.
Audio/video polishers (hide tell-tale cues)
Krisp removes keyboard clicks and background noise, and even offers accent-assist features.
Eye-contact correction: NVIDIA Broadcast, Windows Studio Effects, and Apple’s FaceTime Eye Contact subtly pull your gaze toward the lens—so reading side notes doesn’t look like reading.
Why it’s so hard to detect in remote interviews
Invisible footprint. Tools like Tactiq run as local extensions and don’t add a bot participant, so nothing looks different in the attendee list. Browser sidebars (HARPA/Merlin/Monica) sit on a second monitor or in a slide-out panel the interviewer can’t see.
System-level masking. Eye-contact correction and noise suppression run below the app layer, smoothing over classic “tells” (downward eye flicks, typing noise). These effects apply across apps once enabled.
On-call answers, privately. Meeting agents (Otter, Teams Copilot, Zoom AI) can summarize, surface decisions, or answer clarifying questions in real time—feeding the candidate prompts without interrupting the flow.
Prepared scripts, natural delivery. Teleprompter overlays keep a script close to the lens, while eye-contact features keep gaze steady—making scripted answers look spontaneous.
Text detectors don’t help. Most remote interviews are spoken. Even for written follow-ups, AI text detectors are notoriously unreliable and biased—especially against non-native writers—so they’re weak evidence.
What this means for hiring teams (briefly)
If your work culture expects AI use day-to-day, you should expect it in interviews, too. The goal is to assess judgment, reasoning, and collaboration—with or without AI.
Practical adjustments
Be explicit about allowed assistance. Define “open-book but think-aloud”—e.g., “You may use notes, but narrate your reasoning and cite tools you consult.”
Probe the why, not just the what. Ask for trade-offs, alternatives, and failure cases; request live refactors or extensions of their own answer.
Design tasks that resist parroting. Give novel scenarios with internal constraints (e.g., ambiguous APIs, incomplete data) and ask candidates to surface/resolve the ambiguity out loud.
For coding: use a shared environment and ask for incremental commits with commit messages that explain intent; then request a change of direction mid-way to see how they adapt without starting from a blank prompt (a sketch of such a task follows this list).
For fairness: avoid over-reliance on AI “detection” scores; evidence shows they can misfire. Prefer structured rubrics and multi-signal evaluation.
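To make the last two items concrete, here is a minimal sketch of such an exercise in Python. Everything in it is hypothetical and invented for illustration (the scenario, the UsageRecord fields, the merge_usage function); the point is that the spec is deliberately underspecified, so a candidate has to surface the ambiguity out loud rather than recite a memorized solution.

```python
# Hypothetical interview exercise: the spec is deliberately underspecified so the
# candidate must ask clarifying questions instead of reciting a memorized answer.
#
# Prompt given to the candidate:
#   "Merge usage records from two internal services into one report. Records may
#    overlap, timestamps come from different clocks, and some fields are missing."
#
# Ambiguities a strong candidate should surface before coding:
#   - Which record wins when both services report the same (user_id, day)?
#   - Are timestamps UTC, local, or mixed? Do we normalize or reject?
#   - Is a missing `minutes` field zero, unknown, or an error?

from dataclasses import dataclass
from typing import Optional

@dataclass
class UsageRecord:
    user_id: str
    day: str                 # "YYYY-MM-DD"; time-zone handling intentionally unspecified
    minutes: Optional[int]   # None = missing; semantics intentionally unspecified
    source: str              # "billing" or "telemetry"

def merge_usage(a: list[UsageRecord], b: list[UsageRecord]) -> dict[tuple[str, str], int]:
    """One defensible interpretation: sum minutes per (user_id, day), treating
    missing values as zero. The candidate should state and defend whichever
    interpretation they choose."""
    report: dict[tuple[str, str], int] = {}
    for rec in a + b:
        key = (rec.user_id, rec.day)
        report[key] = report.get(key, 0) + (rec.minutes or 0)
    return report

# Mid-exercise change of direction (per the advice above): after the first pass,
# ask the candidate to de-duplicate overlapping records by preferring "billing"
# over "telemetry", and watch how they adapt the existing code.
```

This pairs naturally with the commit-message expectation: each change of direction lands as its own commit whose message explains why the interpretation changed, not just what changed.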
