Partner Explainable AI – Developer Guide

Document Overview

This document details the integration points for Chatterbox Labs’ Partner Explainable AI software product. It is intended for developers deploying the software inside an existing software platform. Business users should refer to the Enterprise Explainable AI product.

Prerequisites

Prior to using the Explainable AI software, you should have a trained machine learning system with a predict function that takes an input and returns a score. You must also have at least one test data point to be explained.
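As a minimal sketch of that prerequisite, the predict contract is simply a callable mapping one input to one score. The function name, input type, and return value below are illustrative, not part of the product API:

```python
# Hypothetical stand-in for your trained model's scoring function.
# The XAI software only requires: input in, numeric score out.
def predict(text: str) -> float:
    """Illustrative predict contract; a real model returns e.g. a class probability."""
    return 0.5

# At least one test data point to be explained:
data_point = "A long string to be explained"
score = predict(data_point)
assert 0.0 <= score <= 1.0
```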

Deployment

Firstly, it is important to distinguish between the deployment of Chatterbox Labs’ Explainable AI software and the deployment of your existing trained machine learning models.

For the deployment of your own trained machine learning models, each organization will have its own process; however, a typical and recommended approach is to deploy these models within a Docker container that exposes a predict function.
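A container that exposes a predict function can be as simple as a small HTTP service wrapping the model. The sketch below uses only the Python standard library; the toy keyword-based scorer, the `/predict` route, and the `{"text": ...}` / `{"score": ...}` payload shapes are all assumptions standing in for your real model and API:

```python
# Minimal sketch of a dockerizable predict service (all names illustrative).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(text: str) -> float:
    # Toy model: score = fraction of words found in a positive-keyword set.
    positive = {"good", "great", "excellent"}
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w in positive for w in words) / len(words)

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        score = predict(payload.get("text", ""))
        body = json.dumps({"score": score}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Inside a container you would EXPOSE this port in the Dockerfile.
    HTTPServer(("0.0.0.0", 8080), PredictHandler).serve_forever()
```

The important design point is that the container boundary hides the model framework entirely: the XAI software only ever sees the HTTP predict endpoint.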

Chatterbox Labs’ software runs on the Java Virtual Machine and as such can be deployed as a JVM software dependency, a standalone executable JAR or via a self-contained Docker container.

A Docker container is the preferred deployment mechanism; it contains the standalone executable JAR and manages the configuration of Chatterbox Labs’ software.

Accessing the software

Your organization will be provided with secure credentials for Chatterbox Labs’ online repositories. These credentials give you access to the Docker registry, downloads of the standalone JARs, and a Maven-style repository for the software dependencies.
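For JVM dependency use, the Maven-style repository can be referenced from a standard pom.xml. The repository URL and artifact coordinates below are placeholders; use the values supplied with your credentials:

```xml
<!-- Illustrative only: substitute the repository URL and coordinates
     provided with your Chatterbox Labs credentials. -->
<repositories>
  <repository>
    <id>chatterbox-labs</id>
    <url>https://repo.example.chatterbox.co/maven</url>
  </repository>
</repositories>

<dependencies>
  <dependency>
    <groupId>co.chatterbox</groupId>
    <artifactId>xai</artifactId>
    <version>${xai.version}</version>
  </dependency>
</dependencies>
```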

Connecting via JSON

Whether deployed via Docker or the standalone jar, the software exposes JSON endpoints for your application to connect to. See Appendix A for more details.
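A client call to one of these endpoints is an ordinary JSON-over-HTTP POST. The sketch below builds the request body for the /api/v1/explain/text endpoint documented in Appendix A; the host/port and connector parameter values are illustrative:

```python
# Sketch of a client for the /api/v1/explain/text endpoint (Appendix A).
import json
import urllib.request

def build_explain_request(text, predictor, endpoint, label, payload_key):
    # Mirrors the request shape shown in Appendix A.
    return {
        "input": text,
        "distance": "signed",
        "predictor": predictor,
        "params": {"endpoint": endpoint, "label": label, "payloadKey": payload_key},
    }

def post_json(url, payload):
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    body = build_explain_request(
        "A long string to be explained",
        "co.chatterbox.connectors.rest.GenericText",
        "http://fast.text.keras.cbox/predict",
        "positive",
        "text",
    )
    # Host and port are assumptions; point this at your deployment.
    print(post_json("http://localhost:8080/api/v1/explain/text", body))
```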

Connecting to External AI systems

Once your application connects to Chatterbox Labs’ software, this software in turn needs to interrogate your machine learning model (the External AI System in the diagram above). Default connectors to cloud providers are supplied (see the Enterprise XAI User Guide for details), or a custom REST or gRPC connector can be created (see the custom connectors documentation).

Hardware Requirements

XAI sits as a layer on top of an existing ML model; the underlying machine learning model will therefore have its own resource requirements.

Typical minimum requirements for XAI are:

  • CPU: Dual Core @ > 2GHz
  • Memory: 16GB
  • Software: JDK 11 or Docker

Logging

Logging is performed through the SLF4J API, with Logback as the logging implementation.
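Because logging goes through SLF4J, output can be tuned with a standard Logback configuration. A minimal logback.xml sketch is shown below; the logger name is an assumption based on the connector package naming, so adjust it to the packages you actually see in the logs:

```xml
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <!-- Logger name is illustrative; narrow or widen as needed. -->
  <logger name="co.chatterbox" level="INFO"/>
  <root level="WARN">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```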

Appendix A. REST/JSON Interfaces

Explain Endpoints

POST /api/v1/explain/text

input:

{
  "input": "A long string to be explained",
  "distance": "signed",
  "predictor": "co.chatterbox.connectors.rest.GenericText",
  "params": {
    "endpoint": "http://fast.text.keras.cbox/predict",
    "label": "positive",
    "payloadKey" : "text"
  }
}

returns:

{
  "text": {
    "str": "A long string to be explained",
    "predicted": 0.6281051635742188
  },
  "explanations": [
    {
      "id": "0",
      "ranges": [
        {
          "lower": 20,
          "upper": 29
        }
      ],
      "tags": [
        "WORD:VERB"
      ],
      "ablated": [
        "explained"
      ],
      "score": 8.187423057429893,
      "parent": ""
    },
    {
      "id": "1",
      "ranges": [
        {
          "lower": 2,
          "upper": 6
        }
      ],
      "tags": [
        "WORD:ADJ"
      ],
      "ablated": [
        "long"
      ],
      "score": 1.5704364544042868,
      "parent": ""
    },
    {
      "id": "2",
      "ranges": [
        {
          "lower": 7,
          "upper": 13
        }
      ],
      "tags": [
        "WORD:NOUN"
      ],
      "ablated": [
        "string"
      ],
      "score": -8.35124143483302,
      "parent": ""
    }
  ]
}
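The lower/upper offsets in each explanation index directly into the input string, with upper exclusive, so the ablated fragments can be recovered by slicing. The snippet below walks the example response above (scores abbreviated):

```python
# Recover the explained fragments from the ranges in the response above.
text = "A long string to be explained"
explanations = [
    {"ranges": [{"lower": 20, "upper": 29}], "score": 8.19},
    {"ranges": [{"lower": 2, "upper": 6}], "score": 1.57},
    {"ranges": [{"lower": 7, "upper": 13}], "score": -8.35},
]

def fragments(text, explanation):
    # upper is exclusive, matching Python slice semantics.
    return [text[r["lower"]:r["upper"]] for r in explanation["ranges"]]

for e in explanations:
    print(fragments(text, e), e["score"])
```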

POST /api/v1/explain/mixed

input:

{
  "explainable-data": [
    [
      "5.2",
      "3.5",
      "1.3",
      "0.25",
      "1",
    ]
  ],
  "train-data": [
    [
      "5.1",
      "3.5",
      "1.4",
      "0.2",
      "1",
    ],
    [
      "4.9",
      "3",
      "1.2",
      "0.2",
      "2",
    ]
  ],
  "predictor": "co.chatterbox.connectors.rest.GenericMixed",
  "params": {
    "endpoint": "localhost",
    "label": "iris",
    "payloadKey": "mixed"
  }
}

returns:

{
  "explanations": [0.0, 1.0]
}

POST /api/v1/explain/image

input:

{
  "image": "/9j/4AAQZGDB",
  "predictor": "co.chatterbox.connectors.rest.GenericImage",
  "params": {
    "endpoint": "http://fashion.keras.cbox/predict",
    "label": "Apparel",
    "payloadKey": "images"
  }
}

returns:

{
  "explanations": {
    "pixel-score": [0.0, 1.0, 0.9]
  }
}

To get the heatmap image instead of raw scores, add the generate-heatmap parameter:

{
  "image" : "/9j/4AAQZGDB"
  "predictor": "co.chatterbox.connectors.rest.GenericImage",
  "params": {
    "endpoint": "http://fashion.keras.cbox/predict",
    "label": "Apparel",
    "payloadKey": "images"
  },
  "generate-heatmap" : true
}

returns:

{
  "explanations": {
    "heatmap-relative": "iVBORw0KGgoAAAA",
    "heatmap-absolute": "AAADICAYAAAAKhR"
  }
}
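Both the image field in the request and the heatmap fields in the response carry raw image bytes as base64 text (the values in the examples above are truncated). A sketch of the encode/decode step, using a stand-in byte string rather than a real file:

```python
# Base64 handling for the image endpoints; the byte literal is a stand-in
# for real file contents read with open(path, "rb").read().
import base64

def encode_image(raw: bytes) -> str:
    # Produces the string placed in the "image" request field.
    return base64.b64encode(raw).decode("ascii")

def decode_heatmap(b64: str) -> bytes:
    # Heatmap response values decode back to image bytes the same way.
    return base64.b64decode(b64)

jpeg_bytes = b"\xff\xd8\xff\xe0" + b"\x00" * 8  # stand-in for a real JPEG
payload_value = encode_image(jpeg_bytes)
assert decode_heatmap(payload_value) == jpeg_bytes
```

Note that the JPEG magic bytes 0xFF 0xD8 0xFF encode to the "/9j/" prefix seen in the example request above.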

Trace Endpoint

POST /api/v1/trace/distance

input:

{
  "explainable-data": [
    [
      "Ok lar, it's so early!"
    ]
  ],
  "train-data": [
    [
      "Ok lar... Joking wif u oni..."
    ],
    [
      "U dun say so early hor... U c already then say..."
    ],
    [
      "I HAVE A DATE ON SUNDAY WITH WILL!!"
    ]
  ],
  "data-types": [
    "T"
  ],
  "limit-results": 3
}

returns:

{
  "explanations": [
    [
      {
        "index": 1,
        "distance": 0.11661005
      },
      {
        "index": 0,
        "distance": 0.18568748
      },
      {
        "index": 2,
        "distance": 0.18970925
      }
    ]
  ]
}
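Each inner list in the trace response pairs a train-data row index with its distance from the corresponding explainable-data row. The sketch below picks the nearest training example per input row by taking the minimum distance explicitly, rather than assuming any response ordering; the response value is copied from the example above:

```python
# Find the nearest training example for each explainable row.
response = {
    "explanations": [
        [
            {"index": 1, "distance": 0.11661005},
            {"index": 0, "distance": 0.18568748},
            {"index": 2, "distance": 0.18970925},
        ]
    ]
}

def nearest_indices(resp):
    # One result per explainable-data row; index refers into train-data.
    return [min(row, key=lambda n: n["distance"])["index"]
            for row in resp["explanations"]]

print(nearest_indices(response))
```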

Connectors Endpoint

GET /api/v1/connectors

returns:

{
  "plugin-path": "string",
  "connectors": {
    "co.chatterbox.connectors.rest.GenericText": {
      "endpoint": "localhost",
      "label": "spam",
      "payloadKey": "image"
    }
  }
}

Get in Touch