- October 29, 2018
- Posted by: admin
- Category: Uncategorized
Artificial Intelligence Crisis
Major breakthroughs in AI have seen machines entrusted with business- and safety-critical decisions, from guiding vehicles to diagnosing diseases. Yet a reproducibility crisis is casting a cloud of uncertainty over the entire field, eroding the confidence on which the AI economy depends.
Reproducibility, the extent to which an experiment can be repeated with the same results, is the basis of quality assurance in science: it allows past findings to be independently verified, building a trustworthy foundation for future discoveries. This matters because previous breakthroughs are the benchmark against which all subsequent progress is measured.
Without the capacity to reproduce past results, the entire basis on which machines are increasingly making legal, corporate and even medical decisions is called into question. This could stop us from being able to benefit from some of the greatest advances in the field, from the AIs that power smart cities to those that find new drug treatments.
For example, deep reinforcement learning (RL), whereby machines learn through trial and error, refining their behavior in response to rewards, could enable driverless cars to crisscross endlessly in virtual reality until they learn to change lanes safely in the real world. Yet experts have found that RL results are not easily reproducible, raising questions over whether it can be relied on to ensure road safety. An analysis of 30 AI papers similarly found that the majority were difficult to reproduce because key records of their methodologies were missing, from training data sets to study parameters.
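One concrete reason such results are hard to repeat is unrecorded randomness. As a minimal sketch (the function and values below are invented for illustration, not drawn from any study the article cites), logging the random seed alongside the results is the difference between a run that can be repeated exactly and one that can never be reproduced:

```python
import random

def train_toy_policy(seed=None):
    """Toy stand-in for an RL training run: returns a list of
    'learned' parameters driven entirely by random exploration."""
    rng = random.Random(seed)
    return [round(rng.uniform(-1, 1), 4) for _ in range(5)]

# Without a recorded seed, two runs of the 'same' experiment
# will almost certainly diverge.
run_a = train_toy_policy()
run_b = train_toy_policy()

# With the seed logged alongside the results, the run can be
# repeated exactly by anyone who has the record.
run_c = train_toy_policy(seed=42)
run_d = train_toy_policy(seed=42)
assert run_c == run_d
```

Real RL pipelines have many more sources of nondeterminism (environment resets, parallel workers, GPU kernels), but the principle is the same: whatever is not recorded cannot be replayed.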
As a result, Google researcher Ali Rahimi has likened AI to alchemy. Just as alchemy produced genuine innovations such as glass alongside false cures such as leeches, AI has discovered potential cancer treatments yet also failed to distinguish masks from faces.
Lack Of Traceability
The fundamental problem is that data science is not governed by the same generally accepted standards of quality assurance as other fields of science. As a result, the data trail charting the road from the origins of AI to its latest iterations is shrouded in mystery.
There are currently no universal standards governing the data capture, curation and processing techniques that give vital meaning and context to AI experiments. This is the equivalent of climate scientists investigating global warming without any rules on how to document the locations or units of temperature readings.
This is particularly concerning because developing machine-learning tools involves so many iterations, and there is no universal benchmark of good practice for implementing and recording them all. A single experiment to create a facial-recognition system involves a complex layer cake of processes, from training runs to software updates, file changes and tweaks to the algorithm. If any of these steps goes unrecorded, modifying the AI or reproducing the original results becomes prohibitively difficult.
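A lightweight way to keep the records this paragraph describes is to capture, for every run, the hyperparameters, a hash of the training data, and the version of the code that produced the result. The sketch below shows one possible shape for such a record; all names and values are hypothetical, and real experiment-tracking tools offer far richer versions of the same idea:

```python
import hashlib
import json
import time

def record_run(params, dataset_bytes, code_version, results):
    """Capture the metadata needed to reproduce one training run:
    hyperparameters, a fingerprint of the training data, and the
    code version, alongside the results they produced."""
    return {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "code_version": code_version,
        "dataset_sha256": hashlib.sha256(dataset_bytes).hexdigest(),
        "params": params,
        "results": results,
    }

run = record_run(
    params={"learning_rate": 0.01, "epochs": 10, "seed": 42},
    dataset_bytes=b"toy training data",
    code_version="a1b2c3d",  # e.g. a git commit hash
    results={"accuracy": 0.93},
)
print(json.dumps(run, indent=2))
```

The key design choice is that the record ties results to their inputs: the data hash detects a silently changed training set, and the code version pins down which of the many iterations actually produced the numbers.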
Read More Here
Article Credit: Forbes
The post How Do We Address The Reproducibility Crisis In Artificial Intelligence? appeared first on erpinnews.