AI Model Insights User Guide

Document Overview

This document details the use of Chatterbox Labs’ AI Model Insights software product. It is intended for business users looking to use the software via the browser-based user interface.

Prerequisites

Prior to using the AI Model Insights software, you should have a trained machine learning system with a predict function that takes input and returns a score as output. You must also have at least one test data point and, for select pillars, access to training data.
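
As an illustration only (the software does not prescribe any framework; scikit-learn and all of the names below are assumptions), a system satisfying this prerequisite might look like:

  # A hypothetical trained system with a predict function that takes
  # input and returns a score; scikit-learn is an illustrative choice.
  from sklearn.linear_model import LogisticRegression
  import numpy as np

  X_train = np.array([[0.1, 1.2], [0.9, 0.3], [0.4, 0.8], [0.7, 0.1]])
  y_train = np.array([0, 1, 0, 1])
  model = LogisticRegression().fit(X_train, y_train)

  def predict(data_point):
      """Return the class probabilities (scores) for a single data point."""
      return model.predict_proba([data_point])[0].tolist()

  print(predict([0.5, 0.5]))  # e.g. [0.53, 0.47]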

Accessing the software

Once the software is deployed, users access it via a web browser. Navigate to the computer on which it is installed, using port 3000 (this port can be set to any that is appropriate for your environment). For example, if the software is deployed on the local machine navigate to:

  http://localhost:3000

If it is installed on a network location with IP 10.10.10.10, navigate to:

  http://10.10.10.10:3000

The 5 pillars of AI Model Insights

The software is organized around 5 pillars, each of which is accessible from the left-hand menu.  Organizations may use all of these, or only those that are applicable to their use case.  The pillars are:

Explain, Trace, Action, Fairness and Vulnerabilities.

Managing connectors

The connectors drawer configures connections to the prediction endpoint from which you wish to glean insights.  Connectors are provided for each of the major cloud providers (AWS, IBM Watson, Microsoft Azure & Google Cloud), along with support for auto-generated OpenAPI connectors and connectors via REST or gRPC.

It is always accessible from an icon in the top right of the platform.  Any changes made there will automatically update throughout the platform.

You may add multiple connectors and these can be saved to a workspace file (see below).

In this example we are connecting to a REST endpoint.  We enter the endpoint (the URL of the predict function) and the key to the JSON payload (typically ‘text’, ‘images’ or ‘payload’).  Enter the labels that the system should use to extract scores from the response.  For example, if this is a classification task enter the class labels here; if this is a regression task enter the key that the regression score is indexed to.
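
To illustrate the shapes this connector assumes, here is a minimal sketch of calling such a REST predict endpoint (the URL, the ‘text’ payload key and the class labels are example values, not fixed requirements):

  # Hypothetical request/response shapes for a REST predict endpoint;
  # the URL, payload key and labels are illustrative assumptions.
  import requests

  endpoint = "http://10.10.10.10:8080/predict"      # URL of the predict function
  payload = {"text": "the product arrived broken"}  # 'text' is the JSON payload key
  response = requests.post(endpoint, json=payload).json()

  # For a classification task, the labels entered in the connector are the
  # keys under which scores are returned, e.g. {"positive": 0.08, "negative": 0.92}
  print(response["negative"])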

Once you hit Save, this connector will be available to use throughout this session.

Managing Data

Data can be managed through the platform using the data drawer.  This is always accessible using the data icon in the top right of the user interface.

You can always return to the data drawer when using the platform if you need to add or change data.

Within the data drawer you will see two sections:

  1. Data – These are the datasets which you add.
  2. Domain – For each dataset a domain is automatically learned (and can be edited).  A domain defines the variables and their types within a dataset.

Mixed data.  CSV files can be uploaded with a maximum file size of 40 MB.  The files should be complete, without missing values.  It is important to ensure that the order of the variables in your CSV file matches the order that your model (and hence prediction endpoint) is expecting.  The order of columns in the CSV is preserved.
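
Before uploading, a quick check along these lines can confirm a file meets those requirements (a sketch using pandas; the file name and expected column order are assumptions):

  # Sketch of a pre-upload check for a mixed data CSV; the file name and
  # expected column order are illustrative assumptions.
  import pandas as pd

  expected_order = ["age", "income", "duration", "amount"]  # the order your model expects
  df = pd.read_csv("loans.csv")

  assert not df.isnull().values.any(), "File contains missing values"
  assert list(df.columns) == expected_order, "Column order does not match the model"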

Text data.  TXT files containing a single free text data point can be uploaded.  This enables data points which contain line breaks and white space to be added.

Image data.  It is good practice to ensure that the dimensions of the image match those of your machine learning model; commonly this is 224 x 224.  This is not a hard requirement; however, it can significantly improve performance.  Supported file types are PNG and JPG.
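
If your images do not already match your model’s input dimensions, a resize step such as this sketch is often worthwhile (Pillow usage with illustrative file names; 224 x 224 is an assumed target):

  # Sketch of resizing an image to a model's expected input dimensions
  # before upload; the file names and target size are illustrative.
  from PIL import Image

  img = Image.open("photo.jpg")
  img = img.resize((224, 224))   # match the model's expected input size
  img.save("photo_224.png")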

Once these files are uploaded, a domain will be added and type detection will take place automatically.  You can view and edit the domain.

Any changes made in the data drawer will automatically be made available for use within this session.

Managing Workspaces

A workspace contains your data and connectors.  It does not contain the results produced by running one of the five pillars.

When connectors and data are added, they are available for the duration of the browser session.

There are many instances where you may wish to keep track of your connectors and data without setting them up each time.

You can export and import your workspace to and from file within the Workspaces drawer.

Please note that when you import a workspace it will replace any data and connectors that are active with those from the workspace file you are importing.

Explain

Select Explain from the left-hand navigation menu.

On the data and domain step, enter the data which you wish to explain.  Depending on whether this is text, mixed or image data, the subsequent workflow will update as appropriate.

Text

Select a text data source that contains a single data point for explanation and move to the Connector step.  Select the connector for the text model and the target label from this connector which you wish to explain, and test the connection.  Move to the Explain step for a summary; if everything is as you wish it to be, run Explain.

This will interrogate the endpoint many times as it assesses the contribution of various components of the text to the final prediction. 
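
Conceptually, the contribution of a text component can be estimated by removing it and re-querying the endpoint.  The sketch below is a deliberately simplified illustration of that general occlusion idea, with an assumed URL and payload shape; as described next, the product’s actual method goes well beyond this:

  # Simplified occlusion sketch: NOT the product's actual algorithm, just
  # the general idea of measuring contribution via repeated endpoint queries.
  import requests

  endpoint = "http://localhost:8080/predict"   # illustrative URL
  text = "the service was slow but the staff were very friendly"

  def score(t):
      # assumes the endpoint returns e.g. {"positive": 0.9, "negative": 0.1}
      return requests.post(endpoint, json={"text": t}).json()["positive"]

  baseline = score(text)
  phrase = "very friendly"
  contribution = baseline - score(text.replace(phrase, ""))
  print(phrase, contribution)   # positive value: the phrase pushes towards 'positive'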

Without needing any knowledge of the underlying model, Chatterbox Labs has extracted complex, multiword phrases.  This goes well beyond standard feature importance methods, which rely on the machine learning features having meaning (which they often do not in deep networks), do not model the interaction between words (a critical part of text & language), and struggle to scale given the high dimensionality of text.

Immediately you have transparency in the machine learning prediction.  We understand which complex text components are most responsible for the prediction.  This is easily visualized in the chart showing the ranking of importance, with two further visuals: we see the phrase in the context of the whole text data point, and we are able to drill down into the phrase to understand the interaction between its subcomponents.

Mixed Data

Select a CSV data source which contains the data points that you wish to explain.  In the next box select the relevant data domain (this is the definition of the dataset that was learned in the data drawer).

Select the connector for the mixed data model and the target label from this connector which you wish to explain, and test the connection.  Move to the Explain step for a summary; if everything is as you wish it to be, run Explain.

The explain step will interrogate the endpoint many times as it assesses the contributions of each variable (and the interactions between these variables) on the final predictions. 

Without needing any knowledge of the underlying model, Chatterbox Labs has identified which features are important for each test point.  The explanation will likely be different for each data point that is shown.

Aggregate scores are shown across all data points initially; select an individual data point from the data table to show the explanation specific to that data point.

Immediately, valuable information can be seen.  Those variables that increase the prediction score (either the confidence or probability of the prediction if it is a classification task, or the value that is returned if it is a regression task) are shown with positive scores; those with negative scores decrease this value.

Image

Select a single image data source which contains the image that you wish to explain. Select the connector for the image model and the target label from this connector which you wish to explain, and test the connection. Move to the Explain step for a summary; if everything is as you wish it to be, run Explain.

This will interrogate the endpoint many times as it assesses the contribution of various areas of the image to the final prediction. 

The explanation is returned and rendered as a heatmap.  If you wish to access the underlying data that generated this heatmap, it is accessible when integrating with the software programmatically.

The heatmap ranges from yellow to purple, showing areas that are important to the classifier in yellow and those that either decrease performance or pull towards another class in purple.  The heatmap colours can be scaled by the highest contribution found in this test image (the default) or normalized to the theoretical maximum contribution any image on this task could make.
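
As an illustration of these two scaling modes, here is a matplotlib sketch with made-up contribution values (the product renders this chart for you; viridis is simply a purple-to-yellow colormap standing in for the product’s rendering):

  # Sketch of the yellow-to-purple heatmap scaling; the contribution values
  # are made up and matplotlib's viridis colormap stands in for the product.
  import numpy as np
  import matplotlib.pyplot as plt

  contributions = np.random.uniform(-1, 1, size=(14, 14))  # per-region scores

  # Default: scale by the highest contribution found in this test image.
  plt.imshow(contributions, cmap="viridis", vmax=contributions.max())

  # Alternative: normalize to the theoretical maximum contribution, e.g.
  # plt.imshow(contributions, cmap="viridis", vmax=1.0)
  plt.colorbar()
  plt.show()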

Actions

Actions determines, with respect to your AI model, what should change in a data point to achieve an outcome (for example, how a data point would need to change to reduce a risk score).  It makes use of genetic algorithms and will query the endpoint many times.

Actions applies to mixed data datasets.

Select Actions from the left-hand menu.  In the Data & Domain step, choose the dataset that contains the data points you wish to evaluate.

You are then able to mark variables that should not be changed.  This is called freezing.  Typically, you would freeze a variable if it is impossible to change it in the real world (such as Age).  This step can be iterated upon.

Move to the Connector step.  Choose the relevant connector and the class of interest (if this is a regression task, this would be the key that indexes the regression value).

Optionally you can set a threshold here.  This threshold is applied to the score returned from your AI model and is used to determine when an outcome has changed.  This threshold may be the threshold for assigning a class label (such as Bad risk) or it may be a desired value from a regression (such as the dollar value of an insurance premium).  By default it is set to 0.5.
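
In effect, the threshold works like this small sketch (a hedged illustration; only the 0.5 default is taken from above, and the example scores are made up):

  # Sketch of how the threshold determines that an outcome has changed;
  # the example scores are illustrative.
  THRESHOLD = 0.5   # the default

  def outcome_changed(original_score, action_score, threshold=THRESHOLD):
      """An outcome changes when an action moves the score across the threshold."""
      return (original_score >= threshold) != (action_score >= threshold)

  print(outcome_changed(0.72, 0.41))   # True: this action flipped the outcome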

Test the connection and hit Next to run Actions.  The Actions process will interrogate your prediction endpoint multiple times.

On the final Actions tab you will see the results.  For each data point, you will see a set of potential actions.  These are always tested against your AI model.  You may see things here which you would expect, but you may also see things that are not expected.  This is an important evaluation mechanism that enables you to check that your AI model is behaving as you would expect.

For each data point you will see the original scored result, using the original data point, on the left-hand side.  On the right-hand side is an action.  This contains the fields and values that would be changed in this data point, and the resulting score from your machine learning model.  There are multiple possible actions for each data point, with multiple resulting scores from your AI model.  You can filter through these using the slider.

Fairness

The fairness pillar enables you to assess the fairness of your AI model when applied to a particular dataset.  The aim is to determine whether your AI model is biased against any sensitive attributes.  These fields are completely customizable on a case-by-case basis, because bias is treated differently for different use cases.  The sensitive attributes do not need to be part of the model.

Select Fairness from the left-hand menu. Fairness applies to mixed data.

The first step is to add the data that you wish to assess the fairness of.  Whilst you can input any dataset here, it is important that this reflects your task at hand. 

Move to the Features step and tell the system which fields are not part of your model (such as IDs, training labels or sensitive attributes).  Features that are not part of the model will not be sent to the prediction endpoint.

Move to the Connector step and select the appropriate connector and class.  Test the connection.  If everything is OK, move to the Fairness step.

Once the system has interrogated the endpoint multiple times, you are able to specify the sensitive attributes (and combinations thereof) which you wish to assess for bias.

Using the charts you are able to assess disparity metrics, starting at a high level and then drilling down to very fine detail.  Should you wish to inspect the full data, it is available at the bottom of the screen.
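
As one common example of a disparity metric, this sketch computes demographic parity with pandas (the column names and data are assumptions, and the platform computes its metrics for you):

  # Sketch of one simple disparity check (demographic parity): compare
  # favourable-outcome rates across groups of a sensitive attribute.
  # The column names and data are illustrative assumptions.
  import pandas as pd

  df = pd.DataFrame({
      "gender":    ["F", "M", "F", "M", "F", "M"],
      "predicted": [1, 1, 0, 1, 0, 1],   # 1 = favourable outcome from the model
  })

  rates = df.groupby("gender")["predicted"].mean()
  print(rates)                         # favourable-outcome rate per group
  print(rates.max() - rates.min())     # disparity: 0 would be perfect parity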

Vulnerabilities

The Vulnerabilities pillar enables you to profile exploitable weaknesses in your AI model.  Rather than a typical security assessment (which is based on notions such as encryption, credentials, firewalls, etc.), the Vulnerabilities pillar focuses on the data.  Its aim is to identify ways that a bad actor (or adversary) could manipulate input data in order to fool the machine learning system into producing an unintended output.  For example, an attacker targeting an automated credit card approval system could manipulate their input data in subtle ways to fool the system into approving their credit card application.

Select Vulnerabilities from the left-hand menu.  On the Data and Domain step, input some test data (this is a set of valid examples that your system will score) and the relevant domain.  Select the appropriate connector and label.  The score returned for this label is used to determine the impact of any potential vulnerability.

Run the vulnerabilities step – this will interrogate your endpoint multiple times.

On the final output step, move between data points in the test data points table.

The resulting potential vulnerabilities are shown in the table below.  These are the result of assessing each variable, for each data point, against a taxonomy of potential vulnerabilities (detailed in the ‘Test’ column).  The outcome field shows whether the machine learning system returned an error (such as an HTTP 500 code) or a valid prediction.  If it returned a valid prediction, the score diff field will also be populated with the difference in score.

A common approach is to sort the table by the score diff field in order to see the potential vulnerabilities that have resulted in the greatest change in outcome.  For more details you can hit the magnifying glass in the details field.

Looking at a risk score assignment task, we can see an example potential vulnerability that needs to be addressed:

Here, if a bad actor simply sends an empty text string as the desired duration of the loan, the system will not return their correct, high prediction score (64.78%); instead it will return 0%, which is the lowest possible risk score achievable in this AI model.
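
The underlying probe is conceptually simple, as this hedged sketch shows (the URL, fields, label and values are illustrative, and the product tests a whole taxonomy of such manipulations):

  # Conceptual sketch of a single vulnerability test: send a manipulated
  # field value and compare the returned score with the original.
  # The URL, fields, label and values are illustrative assumptions.
  import requests

  endpoint = "http://localhost:8080/predict"
  original = {"duration": "24", "amount": "5000"}
  manipulated = {**original, "duration": ""}   # empty string for the loan duration

  orig_score = requests.post(endpoint, json=original).json()["bad_risk"]
  manip_score = requests.post(endpoint, json=manipulated).json()["bad_risk"]
  print("score diff:", orig_score - manip_score)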

Trace

Trace enables us to trace back to the training data (training data is required for the Trace step).  We want to do this to audit and validate the business case.  This is necessary because machine learning systems in the wild are often subject to business specification drift, and whilst they may still be confident, they may not be doing what we expect them to.

Trace applies to text and mixed data, here we will use text as an example.

Select Trace from the left-hand menu.  Add the Test and Train data and move to the Domain step.  If there are any types that need updating in the domain, this can be managed in the data drawer.  Hit Next to run the Trace step.

The Trace step identifies the training data points which are most similar to your test data point.  This is so that you can audit the business specification.  This similarity, or distance, is shown in two ways: the raw data is listed in the table, whilst the bubble plot renders it visually.  Data points on the right are farthest away; data points on the left are closest.  You can choose to split the data; most often this would be by the target variable in your data.

You can now check whether your test data point is most similar to training data of the same class (which shows that the business specification still holds), whether it is most similar to a different class (the model has been subjected to business specification drift), or whether there is no clear separation of classes at all (the business specification was not clear in the first place).
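
Conceptually, the similarity behind Trace can be pictured with a sketch like this (TF-IDF cosine similarity is an illustrative stand-in, not necessarily the product’s own distance measure, and the data is made up):

  # Simplified sketch of tracing a test point back to its most similar
  # training data; TF-IDF cosine similarity is an illustrative stand-in.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.metrics.pairwise import cosine_similarity

  train_texts = ["refund my order", "great product", "parcel never arrived"]
  train_labels = ["complaint", "praise", "complaint"]
  test_text = "my order never arrived"

  vec = TfidfVectorizer().fit(train_texts + [test_text])
  sims = cosine_similarity(vec.transform([test_text]), vec.transform(train_texts))[0]

  # List training points from most to least similar, with their classes
  for text, label, sim in sorted(zip(train_texts, train_labels, sims),
                                 key=lambda t: -t[2]):
      print(f"{sim:.2f}  {label:9s}  {text}")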

Hardware Requirements

The AI Model Insights platform sits as a layer on top of an existing ML model; therefore, the underlying machine learning model will have its own resource requirements.

Typical minimum requirements for the AI Model Insights application are:

  • CPU: Quad Core @ > 2GHz
  • Memory: 16GB
  • Software: JDK 11 or Docker

Client computers used to access the software (if different from where the application is deployed) should have a minimum of:

  • CPU: Dual Core @ > 2GHz
  • Memory: 8GB
  • Web browser: Firefox, Google Chrome, Safari or Microsoft Edge (Chromium)

Get in Touch