Model Deployment

Our models can be deployed in any of the following ways:

  • Docker container
  • Server (FastAPI)
  • Model Registry (MLflow)
  • Cognaize Platform

Docker Container

The model can be deployed as a Docker container. The container can be built either by using the build.sh script in the repository or by running the following commands:


docker build -t <image_name> .
docker run <image_name> python driver.py <arguments>
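
For example, with a hypothetical image tag and driver arguments (the tag, the volume mount, and the --input flag are all placeholders; check driver.py for the actual arguments it accepts):


# illustrative image tag; the driver arguments below are placeholders
docker build -t genie-model .
# mount a local data directory and pass a hypothetical input path to the driver
docker run -v "$(pwd)/data:/data" genie-model python driver.py --input /data/document.json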

Server

The model can be deployed as a server using FastAPI. To do so, run the following command:


uvicorn server:app --reload

The model will then be available at http://localhost:8000, and the interactive API documentation at http://localhost:8000/docs.
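
To confirm that the server is up, you can fetch the docs page:


curl http://localhost:8000/docs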

Endpoints

Run model by giving document as input

This functionality allows you to run the model locally by giving a document as input. An example request is shown after the spec below.

Endpoint: /run/predict/genie
Method: POST

  • Body:

    {
      "document_json": "str",
      "data_path": "str"
    }
  • Body validations:

    • document_json: Not blank
    • data_path: Not blank
  • Response Body:

    {
      "status": "str",
      "error": "str",
      "result": "dict"
    }
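
A minimal request might look like the following; both field values are placeholders:


curl -X POST http://localhost:8000/run/predict/genie \
  -H "Content-Type: application/json" \
  -d '{"document_json": "<serialized document>", "data_path": "<path/to/data>"}'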

Run model and digest results to Cognaize Platform

This functionality allows you to run the model locally while reading the input data from the platform and digesting the results back to it. An example request is shown after the spec below.

Endpoint: /run/genie/
Method: POST

  • Body:

    {
      "task_id": "str",
      "token": "str",
      "url": "str"
    }
  • Body validations:

    • task_id: Not blank
    • token: Not blank
    • url: Not blank
  • Response Body:

    {
      "task_id": "str"
    }
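
A sample request, with placeholder values for the task ID, token, and platform URL:


curl -X POST http://localhost:8000/run/genie/ \
  -H "Content-Type: application/json" \
  -d '{"task_id": "<task id>", "token": "<platform token>", "url": "<platform url>"}'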

Run base model

This functionality allows you to run the base model locally by giving a document as input. An example request is shown after the spec below.

Endpoint: /run/predict/genie
Method: POST

  • Body:

    {
      "input_type": "str",
      "input": "str | UploadedFile"
    }
  • Body validations:

    • input_type: Not blank
    • input: Not blank
  • Response Body:

    {
      "status": "str",
      "error": "str",
      "result": "dict"
    }
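
A sample request for the string case; the input_type value here is illustrative, and when input is an uploaded file the request presumably needs to be sent as multipart/form-data rather than JSON:


curl -X POST http://localhost:8000/run/predict/genie \
  -H "Content-Type: application/json" \
  -d '{"input_type": "text", "input": "<raw input string>"}'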