Artificial Intelligence (AI) has made its way into virtually every end-use industry. AI tools have greatly expanded the scope of operations, workflow management, and security enforcement. While stakeholders have accepted the disruption AI has caused, many business leaders are still left wondering how these tools actually function.
The underpinnings of many AI tools are hard to decipher: users see models analyzing data and making predictions, but not the processes behind those predictions. Hence, companies and researchers have been experimenting with what is known as ‘explainable AI’.
According to IBM, explainable artificial intelligence (XAI) is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms. To put it simply, explainable AI describes an AI model, its expected impact, and its potential biases.
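One of the simplest concrete forms this takes is feature attribution: breaking a prediction down into each input's contribution. The sketch below is illustrative only, using a hypothetical linear loan-scoring model (the weights and feature names are invented for the example, not drawn from any real system); for a linear model, each feature's contribution is just its weight times its value.

```python
# Minimal sketch of feature attribution for a linear model.
# For linear models, the contribution of each feature to a prediction
# is simply weight * value -- one of the simplest explanations an
# XAI tool can surface to a user.

def explain_linear_prediction(weights, bias, features):
    """Return the prediction and each feature's contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    prediction = bias + sum(contributions.values())
    return prediction, contributions

# Hypothetical loan-scoring model with two features.
weights = {"income": 0.5, "debt": -0.8}
prediction, contributions = explain_linear_prediction(
    weights, bias=1.0, features={"income": 4.0, "debt": 2.0})
# prediction is 1.0 + 0.5*4.0 - 0.8*2.0, about 1.4; the per-feature
# contributions show that debt pulled the score down by 1.6.
```

Real explainability tooling extends this idea to non-linear models (for example, via permutation importance or Shapley-value approximations), but the output has the same shape: a per-feature account of why the model produced the prediction it did.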
It is imperative for organizations to develop a full understanding of their AI decision-making processes. Model monitoring and accountability should sit at the core of every enterprise's AI strategy, an objective that explainable AI is expected to help realize. Explainable AI can enable humans to understand and explain ML algorithms, deep learning models, and neural networks.
Explainable AI is opening new doors for service providers. With it, businesses can troubleshoot and enhance model performance while helping stakeholders understand how AI models behave. Investigating model behavior by tracking insights on deployment status, fairness, quality, and drift is essential to scaling artificial intelligence.
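Drift monitoring, in particular, can be sketched very simply. The snippet below is a minimal, assumed illustration (not any vendor's actual API): it flags a deployed model's input feature as drifted when the live data's mean wanders too far from the training mean, measured in training standard deviations.

```python
from statistics import mean, stdev

def drift_score(train_values, live_values):
    """Shift of the live mean from the training mean, in training std units."""
    mu, sigma = mean(train_values), stdev(train_values)
    return abs(mean(live_values) - mu) / sigma

# Hypothetical feature values seen at training time vs. in production.
train = [10, 11, 9, 10, 12, 10, 11, 9]
live_ok = [10, 11, 10, 9]
live_drifted = [18, 19, 20, 18]

THRESHOLD = 3.0  # flag drift if the live mean is >3 std devs away

print(drift_score(train, live_ok) > THRESHOLD)       # stable input
print(drift_score(train, live_drifted) > THRESHOLD)  # drifted input
```

Production monitoring systems use more robust statistics (for example, distribution-level tests rather than a mean shift), but the principle is the same: quantify how far live inputs have moved from what the model was trained on, and alert before prediction quality silently degrades.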
Explainable AI applications are expected to be leveraged for developing robust augmented reality solutions. Researchers at Meta Reality Labs have recently created XAIR, a framework that could help developers make the processes underpinning AI predictions easier to understand. The framework was introduced in a paper published in the Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems.
To demonstrate how explainable AI functions, the paper presents two likely scenarios. The first involves route suggestions while jogging: an individual who loves cherry blossoms is shown, in her AR glasses, a detour lined with cherry blossom trees. This mildly but pleasantly surprises her, and she is curious how the new route was recommended. Her user goal is therefore to resolve her surprise, for which an explanation is automatically triggered using explainable AI.
The second scenario involves reminders for a user to apply appropriate fertilizers to her garden plants. The user visits her neighbor to learn some basic gardening tips. Once she returns home, her AR glasses offer plant fertilization instructions by showing a care icon on the plant. Concerned about technology invading her privacy, the user wishes to know the reason behind this recommendation. In this case, the system goal is to build trust between the user and the device.
Considering this concern, the default explanation merges both Why and How: the system scans the plant's visual appearance and detects abnormal spots on the leaves, which indicate a fungal or bacterial infection. For a detailed explanation, the full content of the three explanation types is presented in a drop-down list upon her request.
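The selection logic in these two scenarios can be sketched as a small decision rule: a detected user state determines whether an explanation fires automatically and which explanation types it merges. This is a hypothetical illustration loosely inspired by the scenarios above; the state names and structure are invented for the example, not taken from the XAIR paper.

```python
# Hypothetical sketch of goal-driven explanation selection, loosely
# modeled on the two AR scenarios. All names are illustrative.

def select_explanation(user_state):
    """Map a detected user state to the explanation the system should show."""
    if user_state == "surprised":
        # Jogging scenario: resolve surprise with an automatic "Why".
        return {"trigger": "automatic", "content": ["why"]}
    if user_state == "privacy_concerned":
        # Gardening scenario: build trust by merging "Why" and "How".
        return {"trigger": "automatic", "content": ["why", "how"]}
    # Otherwise, only explain in full detail when the user asks.
    return {"trigger": "on_request", "content": ["why", "how", "what"]}

print(select_explanation("privacy_concerned"))
# {'trigger': 'automatic', 'content': ['why', 'how']}
```

The design point the scenarios make is that the explanation is chosen per goal: a surprised user needs a brief, automatic answer, while a privacy-concerned user needs a fuller account before she will trust the device, with the complete detail held back until explicitly requested.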
Given the extensive potential of explainable AI in augmented reality applications, it is highly likely that end-use industries will make significant use of this technology.