Explainable AI: A Caltech Virtual Workshop, a part of the AI4Science Series
Sept. 23, 2021

As the size and complexity of data and software systems keep increasing, we depend ever more on Artificial Intelligence (AI) and Machine Learning (ML) to extract actionable knowledge from data. In science, we are steadily moving toward human-AI collaborative discovery as we explore complex data landscapes. However, the results or recommendations produced by AI systems may be hard to understand or interpret, and interpretability is an essential component of the data-to-discovery process, whether in science, business, security, or any other data analytics domain. The trust and credibility of AI in practical applications can carry significant ethical, political, or even life-or-death consequences.