Ars Technicast special edition, part 2: Spotting bad actors inside a company


Artist's impression of an insider threat stealing your stuff. (credit: D-Keine / Getty Images)

In the second of our series of podcasts on artificial intelligence produced in association with Darktrace, we dive into something a little spookier: the world of "insider threat" detection.

There have been a number of recent high-profile cases in which people inside organizations used their access to data for self-enrichment or other malicious ends, slipping past the usual policies and tools collectively referred to as "data loss prevention." Most of the time, employees are long gone before the data theft is noticed (if it ever is), and preventing it outright would almost require a Minority Report level of pre-cognition.

To get some insight into how AI could play a role in detecting insider threats, Ars editors Sean Gallagher and Lee Hutchinson spoke with Kathleen Carley, director of the Center for Computational Analysis of Social and Organizational Systems at Carnegie Mellon University, about her research into identifying the tells of someone about to take the data and run. Lee and Sean also talked to Rob Juncker, senior vice president of Research and Development at data loss prevention software company Code42, about whether AI can really help detect when people are about to walk off with or upload their employer's data. And Justin Fier, director for Cyber Intelligence and Analysis at Darktrace, spoke with Lee about how AI-related technologies are already being brought to bear against insider threats.

