Three People-Centered Design Principles for Deep Learning

Algorithms are gaining more and more control over our daily lives. We need to build principles into AI systems that work in favor of people:

  1. Transparency. Wherever possible, make the high-level implementation details of your AI project available to everyone involved. In the case of a deep learning initiative, people should understand what deep learning is, how it works (including how data sets are used to tune algorithms), and how deep learning may affect their work. When intellectual property or other sensitive information might be exposed, an organization may instead want to involve a panel of external stakeholders, keeping in mind that certain data sets may still need to be protected from disclosure if they contain sensitive information or raise privacy concerns. One lightweight way to publish these details is a model-card-style record; see the first sketch after this list.

  2. Explainability. Employees within an organization and external stakeholders, including potential customers, should be able to understand how any deep learning system arrives at its contextual decisions. The focus here is less on explaining how the machine reached its conclusions, since deep learning often cannot be explained at that level of detail, and more on what method was used to tune the algorithm(s) involved, what data sets were employed, and how human decision makers chose to act on the algorithm's conclusions. The second sketch after this list shows one way to record that information.

  3. Reversibility. Organizations must also be able to reverse what a deep learning effort "knows." Think of it as the ability to unlearn specific knowledge or data, which helps protect against unwanted biases in data sets. Reversibility must be designed in from the conception of an AI effort, and it often requires cross-functional expertise and support; the third sketch after this list illustrates its simplest form.
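To make the transparency principle concrete, here is a minimal sketch of a machine-readable "model card" in Python. Every name and field value below is hypothetical, and real documentation formats carry far more detail; the point is simply that the data sets, the tuning method, the affected roles, and any withheld data are written down where stakeholders can read them.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """High-level details of a deep learning project, shareable with stakeholders."""
    project: str
    task: str                       # what the model is for
    training_data: list[str]        # data sets used to tune the algorithm
    tuning_method: str              # how the algorithm was tuned
    affected_roles: list[str]       # whose work the model may change
    restricted_data: list[str] = field(default_factory=list)  # withheld for privacy

# Hypothetical example values.
card = ModelCard(
    project="invoice-triage",
    task="route incoming invoices to the right team",
    training_data=["invoices-2021", "invoices-2022"],
    tuning_method="supervised fine-tuning of a text classifier",
    affected_roles=["accounts-payable clerks"],
    restricted_data=["vendor-contracts"],  # contains sensitive commercial terms
)
print(card)
```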
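For explainability at the process level the second principle describes, a simple audit record can pair each model conclusion with the tuning method, the data sets, and the human decision that followed. Again a hedged sketch with illustrative values, not a standard schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Audit-trail entry: what the model concluded and what a person did with it."""
    model_version: str
    tuning_method: str           # how the algorithm(s) were tuned
    data_sets: tuple[str, ...]   # data sets employed
    model_output: str            # the algorithm's conclusion
    human_action: str            # how the decision maker used (or overrode) it
    decided_at: datetime

# Hypothetical example values.
record = DecisionRecord(
    model_version="claims-triage-v3",
    tuning_method="gradient descent on labeled claims; hyperparameters via grid search",
    data_sets=("claims-2023",),
    model_output="flag claim #1042 for manual review",
    human_action="reviewed and approved the claim; flag judged a false positive",
    decided_at=datetime.now(timezone.utc),
)
print(record)
```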
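Finally, the simplest (and only exact) form of reversibility is to drop the offending records and retrain from scratch, which presupposes a reproducible data-to-model pipeline. The sketch below shows that pattern with a scikit-learn classifier on synthetic data; approximate machine-unlearning techniques exist for cases where full retraining is too expensive, but they are beyond this note.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a real training set.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Suppose a bias audit or consent withdrawal flags the first 20 records.
flagged = np.zeros(len(X), dtype=bool)
flagged[:20] = True

# Exact unlearning: discard the flagged rows and retrain from scratch.
# This only works if the raw-data-to-model pipeline was kept reproducible,
# which is why reversibility has to be designed in from the start.
keep = ~flagged
model = LogisticRegression().fit(X[keep], y[keep])
print(f"retrained on {keep.sum()} of {len(X)} records")
```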

Read More: Three People-Centered Design Principles for Deep Learning
