Over the past few decades, AI approaches have become increasingly advanced, attaining remarkable results in many real-world tasks. However, most AI systems in use today do not explain their analyses and predictions to human users, which can make evaluating them reliably extremely challenging.
In a new development, a team of researchers at UCSD, UCLA, the Beijing Institute of General Artificial Intelligence, and Peking University has created an AI system that can explain its decision-making steps to human users.
The system, described in a paper published in Science Robotics, could be a step toward the development of more dependable and understandable AI.
The objective of the field of explainable AI (XAI) is to create collaborative understanding between robots and humans, and the DARPA XAI initiative has been a great catalyst for advancing research in this area, stated one of the first authors of the paper.
At the start of the project, research teams primarily focused on explaining models for classification tasks by communicating the AI's decision process to the user.
The project was specifically aimed at developing new and more effective XAI systems. While participating, the team began studying what XAI would mean in a broader sense, particularly its effects on collaboration between humans and machines.
Importantly, the recent paper builds on the team's previous work, which investigated the impact XAI systems could have on users' trust in and perception of AI during human-machine interactions. In the earlier study, the explainable systems were implemented and tested in physical settings, while in the new study they were tested in simulations.