A.I. has a bias problem and that can be a big challenge in cybersecurity
July 17, 2019 / Saheli Roy Choudhury
Inherently biased artificial intelligence programs can pose serious problems for cybersecurity at a time when hackers are becoming more sophisticated in their attacks, experts told CNBC.

Bias can occur in three areas: the program, the data and the people who design those AI systems, according to Aarti Borkar, a vice president at IBM Security.

"One is the algorithm itself," she told CNBC, referring to the lines of code that teach an AI program to carry out specific tasks. "Is it biased in the way it's approached, and the outcome it's trying to solve?"

A biased program may end up focusing on the wrong priorities and could miss real threats, she explained. "If you're trying to solve the wrong outcome, and the outcome is biased, then your algorithm is biased," Borkar said.

The role of AI in cybersecurity is expanding, and many CEOs see cyber attacks as the biggest threat to the global economy over the next decade.