Artificial Intelligence (AI) is transforming many businesses. Beyond strategic use cases such as churn and fraud, countless other use cases are emerging. AI’s power as a technology is catalyzing organizational and societal change. From robotic surgery to self-driving cars, AI is starting to show up in every aspect of our daily lives. However, with great power comes great responsibility.

Even though AI paves the way for a digital and innovative future, it also raises significant concerns. Specifically, most deployed AI solutions lack transparency and privacy, and show significant bias. Responsible AI tries to address these concerns by making ethics and accountability top priorities. These priorities can be addressed only through strong technology.

Ethics is possible with a human-centered design*, because ethics in living business systems is very complex and cannot possibly be addressed without including a responsible human. First, the AI solution should be understandable. Explainable AI is essential: it provides a rationale for the decision-making process and gives the user an understanding of how a specific AI solution works, instead of a “black-box” decision. Without explainability, it is hard to convince a user that a specific AI solution will work. All business decisions should be justifiable and understandable to maintain customers’ trust. Understandability by data scientists (through SHAP and LIME explanations, for example [1,2]) is not enough. Different business units will also require different types of explanations, so adaptive and dynamic user interfaces that address changing business needs are a must. Continuous user training and support, so that non-data-scientist users gain speed and independence in their exploration of AI solutions, are also a must.
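As an illustration, here is a minimal sketch of a local, per-decision explanation using the SHAP library [1]. The public dataset and the model are placeholder assumptions standing in for a real business model:

```python
# A minimal sketch of a local explanation with the shap library [1];
# the dataset and model are illustrative stand-ins for a business model.
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

# Train a simple model on a public dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier().fit(X, y)

# Shapley values attribute a single prediction to individual features,
# turning a "black-box" score into a per-feature rationale.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])

# Show the five features that contributed most to this one decision.
contributions = sorted(zip(X.columns, shap_values[0]),
                       key=lambda pair: -abs(pair[1]))
for name, value in contributions[:5]:
    print(f"{name}: {value:+.3f}")
```

Output like this is aimed at data scientists; as argued above, the same information still needs to be surfaced to business users through interfaces adapted to their needs.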

Human-centered design allows users to engage with AI systems and provides accountability. While designing AI solutions, it is critical to understand user needs and show empathy. The user experience should encourage the use of AI solutions, and the system should be easy to use. The design process should also consider the privacy and security of user data in order to gain society’s trust. Additionally, it is important to think about the environmental aspects of deploying AI solutions. For environmental responsibility, deployments should be based on continuous learning algorithms. Continuous learning AI deployments can be hundreds of times more energy-efficient than batch learning systems. In the era of COVID-19, continuous learning became even more critical for corporations, since such systems adapt to significant changes much faster than batch learning systems.
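As a concrete illustration, the sketch below uses scikit-learn’s incremental `partial_fit` API to update a model on small batches as they arrive, instead of retraining on the full history. The synthetic data stream and the model choice are illustrative assumptions, not a description of any specific product:

```python
# A minimal sketch of continuous (incremental) learning; the synthetic
# data stream and model choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])  # all classes must be declared for partial_fit

def on_new_batch(X_new, y_new):
    """Update the model in place on each small batch of fresh data.

    Unlike batch learning, no full retrain over the historical dataset
    is needed; this is where the energy savings and the fast adaptation
    to sudden shifts (such as COVID-era behavior changes) come from.
    """
    model.partial_fit(X_new, y_new, classes=classes)

# Simulate mini-batches arriving over time.
rng = np.random.default_rng(0)
for _ in range(100):
    X_new = rng.normal(size=(32, 5))
    y_new = (X_new.sum(axis=1) > 0).astype(int)
    on_new_batch(X_new, y_new)
```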

When human-centered design is implemented together with continuous learning, real-time monitoring is also enabled: the business user can track business results instantaneously across different metrics. This is especially important for medical and health infrastructure applications. By presenting different metrics and explanations of the AI-based business model, automated and real-time decisions can be made, and anomalies or inefficiencies can be addressed.
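One sketch of what such monitoring could look like in code is below: a rolling accuracy window with a simple alert threshold. The window size, threshold, and alerting mechanism are illustrative assumptions, not a specific product’s behavior:

```python
# A minimal sketch of real-time model monitoring: rolling accuracy with
# an alert threshold. Window size and threshold are illustrative.
from collections import deque

class RollingMonitor:
    """Track prediction accuracy over the most recent N labeled outcomes."""

    def __init__(self, window: int = 500, alert_below: float = 0.85):
        self.outcomes = deque(maxlen=window)
        self.alert_below = alert_below

    def record(self, y_pred, y_true) -> None:
        self.outcomes.append(int(y_pred == y_true))
        self._check()

    @property
    def accuracy(self) -> float:
        return sum(self.outcomes) / max(len(self.outcomes), 1)

    def _check(self) -> None:
        # An anomaly (e.g., data drift) shows up as a metric drop the
        # business user sees immediately, not at the next batch retrain.
        if len(self.outcomes) == self.outcomes.maxlen and self.accuracy < self.alert_below:
            print(f"ALERT: rolling accuracy dropped to {self.accuracy:.2%}")

monitor = RollingMonitor()
# In production, record() would be called as true labels arrive in the stream:
monitor.record(y_pred=1, y_true=1)
```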

AI with human-centered design and continuous learning is a huge step forward in bringing AI to humans. These principles make AI solutions apprentices to business users. Training business users on the AI solution maintains a qualified workforce in the long term, and AI systems can become apprentices that complete us as human experts.

Date: 29.01.2021

Author: Hakan Tekgul (Data Scientist at TAZI)

References:
* https://ai.google/responsibilities/responsible-ai-practices/

[1] SHAP (SHapley Additive exPlanations): Lundberg, Scott M., and Su-In Lee. “A unified approach to interpreting model predictions.” Advances in Neural Information Processing Systems, 2017.

[2] LIME (Local Interpretable Model-agnostic Explanations): Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. “‘Why should I trust you?’: Explaining the predictions of any classifier.” Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2016.
