
Understanding my candidates' proctoring results

A recruiter's guide to reading the integrity results shown on each candidate's profile.

Written by Roisin Walsh
Updated yesterday

Introduction

Maki's proctoring system helps you understand whether a candidate's assessment was completed without anomalies. Instead of a single "cheater or not" label, you see a structured view: what events were recorded during the session, how those events combine into an overall picture, and a final risk level to support your review. This article walks through each category of signals and how to interpret the final risk label.

💡Tip: Proctoring results are one input into your decision, not a verdict. A flag is a signal worth reviewing, not proof that a candidate cheated.

1. Identity integrity

This category covers signals related to candidate presence: when a face is visible, when more than one face appears, when the webcam is turned off, and when the same device is used across sessions. Each signal is analysed and recorded as an event to support the recruiter's review.

Maki runs the following checks:

  • Face detection on webcam snapshots: Every 5 seconds during the assessment, the platform captures a webcam snapshot and analyses it for face presence. If no face is visible in a snapshot, this is recorded as an event.

  • Multiple faces detected: If a snapshot contains more than one face, this is recorded as an event.

  • Matching session (device fingerprint): The system can detect when the same device has been used across separate candidate sessions. This may indicate one person completing assessments on behalf of multiple candidates, or a single candidate submitting multiple applications under different email addresses.

💡Tip: A single snapshot without a face (for example, the candidate leaned out of frame) is usually classified as low risk. Repeated patterns across multiple answers are what push risk higher.
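Conceptually, the face checks above amount to classifying each 5-second snapshot by how many faces it contains, and recording only the anomalies. The sketch below illustrates that idea; the event names and the `record_identity_events` function are illustrative assumptions, not Maki's actual code.

```python
from dataclasses import dataclass

@dataclass
class IdentityEvent:
    timestamp_seconds: float
    kind: str  # "no_face" or "multiple_faces" (hypothetical labels)

def record_identity_events(snapshots):
    """snapshots: (timestamp, face_count) pairs, one every 5 seconds.
    Only anomalous snapshots are recorded as events; a normal snapshot
    (exactly one face) produces nothing."""
    events = []
    for t, face_count in snapshots:
        if face_count == 0:
            events.append(IdentityEvent(t, "no_face"))
        elif face_count > 1:
            events.append(IdentityEvent(t, "multiple_faces"))
    return events

# A candidate briefly leans out of frame at t=10s, and a second
# person appears in frame at t=20s.
events = record_identity_events([(0, 1), (5, 1), (10, 0), (15, 1), (20, 2)])
```

Note that the ordinary snapshots produce no events at all: the review timeline only ever shows anomalies, which is why a clean session reads as "Clear".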

2. Environment integrity

This category looks at the candidate's environment and how they interacted with the assessment tab. It is where most candidate questions come from, so it is worth understanding in detail.

Maki runs the following checks:

  • Tab switching (focus-out events): The system records when the candidate's browser leaves the assessment tab. It does not see, capture, or monitor what the candidate looks at elsewhere; it only records that focus left the tab, when it happened, and for how long. Fullscreen exits are handled the same way: if the assessment is configured to run in fullscreen and the candidate exits, this is logged as an event.

  • Object detection: Using the webcam, the system can detect prohibited objects such as phones or second screens. Detection only triggers when an object is clearly visible and recognisable.

  • Location (IP checks): The system records the country and region the candidate is connecting from. If the IP address changes significantly during a single assessment session, this is flagged. Occasional IP changes are common (for example, switching between Wi-Fi and mobile data) and typically weighted as low risk. Multiple large geographic jumps are what drive higher risk.

  • AI plagiarism detection: The system analyses how a candidate speaks and moves during their recorded responses, using visual cues (head and eye movement) and audio cues (speech rhythm, pauses, multiple voices). This helps identify candidates reading from a script, receiving outside help, or using AI-generated content. Because it looks at behavioural signals rather than just the text, it's more robust than simple text-only detection.

💡Tip: Tab switching records events, not content. Neither Maki nor you as the recruiter can see what the candidate opened in another tab, only that they left the assessment window.
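To make the "events, not content" point concrete, a focus-out record only needs three pieces of metadata: when focus left, for how long, and whether fullscreen was exited. The record shape and field names below are illustrative assumptions, not Maki's data model.

```python
from dataclasses import dataclass

@dataclass
class FocusOutEvent:
    # Only metadata is stored; never what the candidate viewed elsewhere.
    left_at_seconds: float     # when focus left the assessment tab
    duration_seconds: float    # how long focus stayed away
    fullscreen_exit: bool      # True if the candidate also left fullscreen

def total_time_away(events):
    """Sum the time the candidate spent outside the assessment tab."""
    return sum(e.duration_seconds for e in events)

# Two focus-out events: a short glance away and a longer fullscreen exit.
events = [FocusOutEvent(120.0, 4.5, False), FocusOutEvent(300.0, 12.0, True)]
```

Notice that there is nowhere in the record to store page content: by construction, a reviewer can see the timing pattern of departures but nothing about what was opened.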

3. Reading the overall risk level

At the top of each candidate's Proctoring tab, you will see an overall risk level. This is what to focus on when deciding whether a candidate's results need a second look.

Risk is reported as one of four levels:

  • Clear. No integrity events were detected during the session.

  • Low. Minor, isolated events were detected. These are common in normal test-taking conditions and typically do not warrant follow-up.

  • Medium. Repeated or structured events were detected in at least one category. We recommend reviewing the timeline before moving forward.

  • High. Strong or corroborated events were detected. Multiple independent checks pointed to the same concern. We recommend reviewing carefully before making a hiring decision.

How the risk level is calculated

Maki's proctoring uses a principle called corroboration. No single event, on its own, will mark a candidate as high risk. High risk is only assigned when multiple independent checks point in the same direction, across one or more categories. This is what makes the system defensible and reduces false positives.

In practice, this means:

  • A single focus-out event will not flag a candidate as high risk.

  • A single snapshot without a face will not flag a candidate as high risk.

  • A high-risk label means the system saw several consistent events that, taken together, suggest something worth reviewing.
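The corroboration principle above can be sketched as a small aggregation rule. The thresholds, event shape, and `overall_risk` function here are illustrative assumptions to show the idea, not Maki's actual scoring formula.

```python
from collections import Counter

def overall_risk(events):
    """events: list of dicts like {"check": "focus_out"}, one per
    recorded event. Corroboration: no single event yields High; High
    requires several events across independent checks (thresholds
    are illustrative, not Maki's real values)."""
    if not events:
        return "Clear"
    per_check = Counter(e["check"] for e in events)
    if len(per_check) >= 2 and sum(per_check.values()) >= 3:
        return "High"    # several events, corroborated across checks
    if max(per_check.values()) >= 2:
        return "Medium"  # repeated events within a single check
    return "Low"         # minor, isolated events

print(overall_risk([{"check": "focus_out"}]))  # a lone event stays Low
```

The key property matches the article's examples: an empty timeline is Clear, one isolated event stays Low, and High is only reachable when multiple checks point the same way.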

💡Tip: The overall risk level can be driven by events that are not all visible in the Integrity Check summary. If you want to understand a specific risk label, open the full events timeline on the Proctoring tab. That is the complete picture of what was detected.

Need more help?

⚠️ Contact us at [email protected] or use the chat widget in the bottom-right corner.

