Artificial Intelligence in Cybersecurity

The Promise and Limitations of Artificial Intelligence in Cybersecurity

Artificial intelligence (AI), once dismissed as a dead end in computer science research, has surged to the fore in recent years. New technologies built on neural networks, machine learning, and deep learning, combined with virtually unlimited compute power and storage in the cloud, have started to show promise for certain classes of problems. In many cases the promise is justified, but marketing hype makes it difficult to distinguish the real applications of AI from the bogus ones.

In the cybersecurity arena, hype runs deep, and claims about AI are no exception. Most chief information security officers (CISOs) at larger organizations are intrigued by the promise of AI but skeptical when they see vendors touting AI that can detect and neutralize threats without generating high false-positive rates. They know that AI-based cybersecurity solutions require a close partnership between humans and machines.

The Paradoxical People Problem

Instead of eliminating the need for security staff, AI solutions actually require dedicated staff to manage them. You need people to train the AI and tune its output so that its recommendations are as useful as possible. You also need people to monitor AI-generated alerts to determine which ones are real threats and which are false positives.

A recent Gartner survey found that AI was the most-often-mentioned cybersecurity technology that CISOs are considering. But a closer look at the study makes clear that CISOs are experimenting with AI, not installing it in mission-critical applications. In fact, the analyst who conducted the study warned CISOs to be prepared for disillusionment.

Source: Alexander García-Tobar – Nextgov
