Modern test security: Why human judgment still matters in an AI-driven world

Test security is undergoing a significant transformation, driven by advances in artificial intelligence and evolving methods of test delivery. In a webinar titled “Human Oversight vs. AI Automation: Striking the Right Balance in Test Security,” Kryterion brought together industry experts to discuss the major challenges and opportunities presented by integrating AI into test security protocols.

The conversation featured Benjamin Hunter, Vice President of Sales for Caveon, a U.S.-based company that specializes in exam security, including fraud detection, web monitoring, and secure exam development; Jacob Evans, Chief Technology Officer at Kryterion; and Steve Winicki, Head of Sales for Kryterion, who served as host and facilitator. The webinar underscored the necessity of continually monitoring both AI and human intervention to ensure their combination achieves peak effectiveness.

AI provides a suite of detection capabilities, including anomaly detection and content monitoring, to help protect exam content and secure the test environment. However, a central theme of the discussion was the importance of understanding the risks of becoming overly reliant on this technology.

A recent academic study explored how generative AI behaves differently from human test-takers on multiple-choice exams. Using statistical methods, the researchers showed that AI responses often follow distinct patterns, which can help flag potential cheating. But they also note that spotting this kind of behavior is still a new and developing area, reinforcing the need for human oversight in certification testing.
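The study's exact methodology isn't detailed here, but one simple technique in this family is to compare a session's answer-choice distribution against the population's using a chi-square statistic: a heavily skewed pattern of picks gets flagged for human review. The function name, thresholds, and data below are illustrative assumptions, not the researchers' actual method; a minimal sketch might look like this:

```python
from collections import Counter

def choice_chi_square(responses, expected_freqs):
    """Chi-square statistic comparing one test-taker's answer-choice
    distribution (how often they pick A/B/C/D) to the expected
    population frequencies. Larger values mean a more unusual pattern."""
    n = len(responses)
    counts = Counter(responses)
    stat = 0.0
    for choice, p in expected_freqs.items():
        observed = counts.get(choice, 0)
        expected = n * p
        stat += (observed - expected) ** 2 / expected
    return stat

# Assume the population picks each of four options roughly equally.
expected = {"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25}

# Hypothetical sessions: one almost always selects "C", one looks typical.
skewed = ["C"] * 36 + ["A", "B", "D", "A"]
typical = ["A", "B", "C", "D"] * 10

# With 3 degrees of freedom, a statistic above ~11.34 is significant at p < 0.01.
CRITICAL_01 = 11.34

print(choice_chi_square(skewed, expected) > CRITICAL_01)   # flagged for review
print(choice_chi_square(typical, expected) > CRITICAL_01)  # not flagged
```

Note that a statistical flag like this is only a signal, not a verdict, which is exactly why the panelists stress keeping a human in the loop for final determinations.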

Hunter highlighted the particular difficulties posed by remote or bring-your-own-device (BYOD) testing, stating: “There’s no control on a systemic level around the devices being used to access an exam.” According to industry data, over 80% of organizations use BYOD today, with 70% of BYOD use cases involving employees bringing unmanaged devices into the workplace. This lack of systemic control adds a significant layer of complexity to security efforts.

Evans echoed this sentiment, describing the exam delivery phase as “where the rubber really meets the road” due to the high risk of items leaking. This underscores why a layered approach to security is essential, and why the industry is in a constant state of evolution, an ongoing contest with bad actors.

Evans delved into the practical implementation of security measures, particularly emphasizing the delicate balance between robust test security and a positive candidate experience. He stated, “That’s a really tough kind of balance to strike between promoting tech test security and the candidate experience.”

This idea is also echoed in academic discussions: while AI-based proctoring can enhance security, it may also introduce challenges related to fairness and privacy. A study published in the Hastings Law Journal argues that AI proctoring systems can infringe on student rights, and its authors recommend pausing their use until adequate safeguards have been established.

As a leading test delivery organization, Kryterion actively employs and develops security technologies and practices that effectively integrate AI. They utilize computer vision for identity verification and environment scanning. Through experimentation, Kryterion has found that AI can sometimes be more accurate than humans in detecting aberrant behavior.

However, their experience has also demonstrated that AI technology is not yet perfect, necessitating human judgment for critical decisions. Kryterion learned from a pilot program, for example, that relying solely on AI for judgment calls could lead to “bad feedback loops” for candidates, and they adjusted their procedures to ensure humans make the final determination based on the data collected by AI.

Navigating test security in the age of AI requires a strategic and dynamic balance. Successfully integrating AI means leveraging its strengths to augment human capabilities and make processes more efficient and effective, all while prioritizing the integrity of the assessment, the defensibility of security measures, and the experience of legitimate test-takers.
