Enterprise Explainable AI – Configuring OpenAPI Connector

Overview

Deploying production ML models behind a JSON-over-REST API (typically in a Docker container) is an industry-standard approach. Alongside this API you will have an OpenAPI specification (formerly known as a Swagger specification) which defines how your API operates (endpoints, formats, structures, authentication, etc.). These specifications can also be auto-generated.

Chatterbox Labs’ Enterprise Explainable AI can use this specification to automatically build a connector without any coding. This document gives information on Chatterbox Labs’ OpenAPI support and the process for configuring an OpenAPI connector.

Process to configure an OpenAPI Connector

Your OpenAPI specification file is typically served alongside your API endpoint.

First, select the OpenAPI connector in the Connect AI Model step:

From here, select the data type (text, mixed or image) that your predict function operates on.

Enter the URL of your specification and load it. The endpoints defined in your spec will be extracted. Select the appropriate endpoint in the drop-down menu:

The system will use the details in the specification to load the input and output structure of your API, along with any additional fields that it requires (such as authentication).

The first task is to tell the system where the payload (that is, the input to your machine learning model) sits within your JSON. You do this by clicking on the elements in the JSON. Many APIs support batched input (recommended for performance reasons). If this is the case, enable the Support batches? option. If this option is enabled, you will also need to select the batch structure in your JSON (typically an array). The text box shows the path used for the query. Try switching batches on and off to see the effect this has on the path.
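For example, a hypothetical batched request body could look like the one below (the instances field name is illustrative, not a requirement). Here you would enable Support batches?, select the instances array as the batch structure, and select one of the strings inside it as the payload:

{
  "instances": [
    "first input text",
    "second input text"
  ]
}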

Carry out the same process for the response. The item that you need to supply is the score that the prediction endpoint returns. If batches were selected on the input, this will also be the case for the output.
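As an illustration, given a hypothetical batched response such as the following (the predictions and score names are invented for this example), you would select a score value so that the system can extract one score per input record:

{
  "predictions": [
    { "label": "positive", "score": 0.92 },
    { "label": "negative", "score": 0.87 }
  ]
}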

If there are additional fields required by your API, they will be presented here.  Fill them in as appropriate.

Hit Done to complete the configuration of your OpenAPI connector. You can test the connector out on the Predict step.

OpenAPI Support

Formats & Versions

Both OpenAPI 2.0 and 3.0 standards are supported, in JSON and YAML formats.

Endpoint Selection

The full spec is taken as input, and a single endpoint string is used to narrow the options down to a single request/reply schema. The following assumptions are made (a conforming example follows the list):

  • POST operation
  • JSON as the payload and response format; it must be specified as application/json in the spec
  • 200 as the code for the success response
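
For example, a minimal OpenAPI 3.0 fragment that satisfies these assumptions could look like the following (the /v1/predict path and the bare object schemas are illustrative):

paths:
  /v1/predict:
    post:
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
      responses:
        "200":
          description: Successful prediction
          content:
            application/json:
              schema:
                type: object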

URL Resolution

The endpoint URL can be overridden in several places.  The order of resolution is the following:

  1. servers URL in the top level spec
  2. if no server is specified, we determine the URL from the spec-location param (on Chatterbox Labs’ API):
    1. we concatenate the scheme and host of the spec-location with the path-url param
    2. for example: given spec-location = “http://xai.chatterbox.co/api/swagger.json” and path-url = “/v1/predict”, the result will be http://xai.chatterbox.co/v1/predict
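
For reference, a servers entry in the spec that would take precedence over this derivation looks like the following (the URL is illustrative):

servers:
  - url: http://xai.chatterbox.co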

Representing each input record

  • Text: One string of text
  • Mixed: An array of ordered values (type: string)
  • Image: A base64 encoded image (type: string)
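
Concretely, a single record of each data type could look like this (all values are illustrative; the image value is a truncated base64 string):

Text:  "The service was excellent today"
Mixed: ["35", "engineer", "London"]
Image: "/9j/4AAQSkZJRgABAQ..."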

Representing output

A score for each input record must be returned. If the underlying task is a classification task, the score for the original predicted label must always be accessible.

If the predictor does not return scores but only a class name, set the param expected-class (see below).
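
For instance, a hypothetical classification response that exposes per-class scores (field names are invented for this example) could be:

{
  "prediction": "spam",
  "scores": { "spam": 0.87, "ham": 0.13 }
}

Here the score for the predicted label remains accessible alongside the scores for the other classes.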

Representing an input dataset

Both single data points and batches of data points are supported.

Headers and params

We support adding headers, query params (like ?user=john) and path parameters (these are applied to the path params in the endpoint, for example /classifiers/{classifier_id}/) to the requests. Users need to fill in a map and pass it as the parameters param.

Params must be specified in the spec; here is an example:

parameters:
  - name: classifier_id
    in: path
    description: The classifier ID to be used
    required: true
    schema:
      type: string
Only the required params will be used. The in field decides where each value is placed, and name must have a corresponding key in the map provided by users; the value is a string that we pass during the remote call.
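
For example, given the spec fragment above, a user-supplied parameters map such as the following (the value is illustrative) would result in calls to /classifiers/abc-123/:

{ "classifier_id": "abc-123" }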

Authentication

Two simple auth methods are supported (in line with those required by the mainstream cloud providers):

  • APIKEY: a pair of username/password or a single key to be passed; we support two methods:
    • header: we add it to the headers as Authorization: Basic base64_encode(username:password)
    • query: a single string which contains everything, added as ?apikey=token
  • Bearer token: a single string which contains the key used to authenticate requests; we support it only as a header param.
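
Concretely, the resulting request headers would look like this (the credentials are illustrative; dXNlcjpwYXNzd29yZA== is the base64 encoding of user:password):

Authorization: Basic dXNlcjpwYXNzd29yZA==
Authorization: Bearer my-secret-token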

API accessible params

  • spec-yaml: a string with the content of the OpenAPI spec, as an alternative to the spec-location URL; if both params are set, spec-yaml takes precedence
  • image-encoding-header: adds a prefix to the base64-encoded image string, for example data:image/jpeg;base64. It defaults to “” so that we send only the base64 string
  • limit-request-second: set this to limit the number of requests made to the endpoint
  • expected-class: if the predictor only returns a class name (string), set this param; the system assigns a score of 1.0 when the result matches the param string and 0.0 in all other cases
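
Putting these together, a hypothetical set of connector params (all values are illustrative) might be:

{
  "spec-location": "http://xai.chatterbox.co/api/swagger.json",
  "path-url": "/v1/predict",
  "image-encoding-header": "data:image/jpeg;base64",
  "limit-request-second": 10,
  "expected-class": "spam"
}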

Get in Touch