ML Transparency Visualization

At Riskified, machine learning models are core to the product, playing a crucial role in identifying fraudulent transactions. However, the complexity of these models often leads to a lack of transparency, making it difficult to decipher the rationale behind their decisions.

Although the company had deep fraud expertise, it lacked a visualization that clearly articulated the rationale behind approving or rejecting an order. This was needed both internally, for quality assurance of the models, and for our external users.

The design process, which I led in my role as Head of Design, began with a lead product designer and me working to develop a deep understanding of the subject. Together, we tried to answer questions like:

What is the technology behind a decision? What exactly is feature extraction? How are order decisions currently communicated by our Account Management and Sales teams, and how do our data scientists currently monitor decision models?

Our solution was to show the weighting of the machine learning features behind each decision. We developed a visualization that represented this weighting with two visual parameters: size, indicating a feature's impact on the decision, and color, reflecting its assigned risk score. This provided an intuitive understanding of each decision.

The end result was a data visualization interface that became a valuable tool for account managers and data analysts. It improved their ability to interpret ML decisions, explain them externally, and monitor them.

From Machine Learning to Human Understanding · ISVIS Conference, 2019