Artificial Intelligence

Solving the ‘black box’ problem: Learning how Artificial Intelligence makes decisions

Advances in deep learning and neural networks have improved the precision of Artificial Intelligence algorithms and enabled the automation of tasks once thought to be the exclusive domain of human intelligence. But that precision comes at the cost of transparency. Unlike with traditional software, we don’t always have an exact idea of how deep-learning algorithms work. Troubleshooting them is very difficult, and they often fail in unanticipated and unexplained ways. Even the creators of deep-learning algorithms are often hard-pressed to investigate and interpret the logic behind their decisions.
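A minimal sketch can make the contrast concrete. The functions, names, and numbers below are illustrative assumptions, not from the article: a traditional program states its decision rule explicitly, while a learned model encodes the same kind of decision in numeric weights that don't read as human rules.

```python
import math

def rule_based_approve(income, debt):
    # Traditional software: the decision logic is explicit and auditable.
    return income > 50_000 and debt < 10_000

def learned_approve(income, debt):
    # A toy stand-in for a neural network: two hand-picked weights and a
    # bias represent what would be millions of learned parameters. The
    # numbers determine the decision, but they don't read as human rules.
    w_income, w_debt, bias = 0.00004, -0.0002, -1.0
    z = w_income * income + w_debt * debt + bias
    return 1 / (1 + math.exp(-z)) > 0.5  # sigmoid threshold

print(rule_based_approve(60_000, 5_000))  # we can say exactly why it's True
print(learned_approve(60_000, 5_000))     # the weights decide; the "why" is opaque
```

In a real deep network the weights number in the millions and are set by training rather than by hand, which is why even the model's creators struggle to trace individual decisions.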

The failure of Facebook’s machine-translation system is just one of many cases in which the opacity of deep-learning algorithms has caused serious problems.

What’s widely known as the Artificial Intelligence “black box” problem has become the focus of academic institutions, government agencies, and tech companies that are researching methods to explain AI decisions or to create AI that is more transparent and open to investigation.
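One family of explanation methods these researchers study is perturbation-based probing: nudge one input at a time and measure how much the model's output moves. The sketch below is a simplified illustration under assumed names and a toy stand-in model, not any specific lab's technique.

```python
import math

def black_box(features):
    # Stand-in for an opaque model: fixed weights, sigmoid output.
    weights = [0.8, -0.5, 0.1]
    z = sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

def perturbation_importance(model, features, delta=1.0):
    # Score each feature by how much nudging it shifts the prediction;
    # this treats the model purely as a black box, no internals needed.
    base = model(features)
    scores = []
    for i in range(len(features)):
        nudged = list(features)
        nudged[i] += delta
        scores.append(abs(model(nudged) - base))
    return scores

scores = perturbation_importance(black_box, [1.0, 2.0, 3.0])
print(scores.index(max(scores)))  # index of the most influential feature
```

The appeal of such approaches is that they need only query access to the model, which is exactly the situation the "black box" problem describes.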

Their efforts will be crucial to the development of the Artificial Intelligence industry — especially as deep learning finds its way into critical domains where mistakes can have life-changing consequences.

Source: Ben Dickson | PC Mag
