Black box AI is any artificial intelligence system whose inputs and operations are not visible to the user or another interested party. A black box, in a general sense, is an impenetrable system.
Deep learning models are typically developed as black boxes: the algorithm takes millions of data points as inputs and correlates specific data features to produce an output. That process is largely self-directed and generally difficult for data scientists, programmers and users to interpret.
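As a rough illustration (not from the original article), the minimal sketch below trains a small neural network with scikit-learn. The model's "knowledge" ends up stored in dense weight matrices that have no direct human-readable meaning, which is exactly what makes the process hard to interpret.

```python
# A minimal sketch of why a trained neural network is hard to interpret:
# its learned parameters are dense weight matrices, not readable rules.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic data standing in for "millions of data points".
X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)

# A small multilayer network; real deep models have far more parameters.
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
model.fit(X, y)

# The "explanation" of any prediction lives in these weight matrices,
# which carry no direct human-readable meaning.
for i, w in enumerate(model.coefs_):
    print(f"layer {i} weight matrix shape: {w.shape}")
print("prediction for first example:", model.predict(X[:1])[0])
```

Inspecting those matrices tells a reviewer almost nothing about why a particular input produced a particular output, which is the practical meaning of "black box" here.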
When the workings of software used for important operations and processes within an organization cannot easily be viewed or understood, errors can go unnoticed until they grow large enough to force an investigation, by which point the damage may be expensive or even impossible to repair.
AI bias, for example, can be introduced into algorithms as a reflection of conscious or unconscious prejudices on the part of the developers, or it can creep in through undetected errors. In either case, the results of a biased algorithm will be skewed, potentially in ways that are offensive to the people affected. Bias can also enter through training data whose characteristics are not well understood. In one case, AI used in a recruitment application relied on historical data to select IT professionals; because most IT staff had historically been male, the algorithm displayed a bias toward male applicants.
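That recruitment scenario can be made concrete with a small, hypothetical sketch. All data, feature names and numbers below are invented for illustration, assuming a scikit-learn-style workflow; the point is only that a model trained on skewed history reproduces the skew.

```python
# A hypothetical sketch of the recruitment scenario: if historical hires
# skew male, a model trained on that history learns to reproduce the skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: a skill score plus a gender flag (1 = male, 0 = female).
skill = rng.normal(size=n)
is_male = rng.integers(0, 2, size=n)

# Historical hiring decisions depended on skill *and* on gender (the bias).
hired = (skill + 1.5 * is_male + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([skill, is_male])
model = LogisticRegression().fit(X, hired)

# The trained model's selection rate per group mirrors the historical skew.
preds = model.predict(X)
for flag, name in [(1, "male"), (0, "female")]:
    rate = preds[is_male == flag].mean()
    print(f"predicted selection rate ({name}): {rate:.2f}")
```

In real systems the protected attribute is rarely fed in so directly; it usually leaks in through proxy features, but the effect is the same: the model quietly encodes the historical pattern.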
If such a situation arises from black box AI, it may persist long enough for the organization to suffer reputational damage and, potentially, legal action for discrimination. Similar issues could arise with bias against other groups, with the same effects. To prevent such harms, it is important for AI developers to build transparency into their algorithms and for organizations to commit to accountability for their effects.
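One hedged sketch of what such transparency might look like in practice is a routine audit of which features actually drive a trained model's decisions, for example with permutation importance. The data and feature names below are hypothetical, continuing the recruitment illustration above.

```python
# A minimal transparency check: after training, measure which input features
# the model actually relies on. Data and feature names are hypothetical.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
skill = rng.normal(size=n)
is_male = rng.integers(0, 2, size=n)
hired = (skill + 1.5 * is_male + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([skill, is_male])
model = LogisticRegression().fit(X, hired)

# Permutation importance: how much does performance drop when a feature is shuffled?
result = permutation_importance(model, X, hired, n_repeats=10, random_state=0)
for name, score in zip(["skill", "is_male"], result.importances_mean):
    print(f"importance of {name}: {score:.3f}")

# A large importance score for a protected attribute is a red flag to investigate.
```

Checks like this do not make a deep model fully interpretable, but they give developers and auditors a concrete signal that a model's behaviour depends on attributes it should not be using.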