Framework for AI Models

The Cognaize AI model framework is built on top of Kedro's standard project structure. With Kedro, we have implemented various functionalities to enhance our model development process. It enables us to:

  • Build reproducible, maintainable, and modular AI models
  • Use a standard project structure and coding conventions
  • Track experiments and share the results of our models
  • Package our models as Docker images, servers, and Python packages

Our Framework

One of the key features we implemented is automated testing. Kedro allows us to write unit tests for our data pipelines, verifying that our transformations are correct and that our models perform as expected. Moreover, we have implemented a set of automated GitHub Actions workflows that run on all our models, keeping them tested and up to date.
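Because Kedro nodes are plain Python functions, they can be unit-tested directly with pytest. The sketch below illustrates the pattern; `normalize_amounts` is a hypothetical transformation node, not an actual Cognaize function.

```python
# Hedged sketch: unit-testing a Kedro node. Kedro nodes are ordinary
# Python functions, so no Kedro machinery is needed in the test itself.
# `normalize_amounts` is an illustrative, hypothetical node.

def normalize_amounts(rows):
    """Example node: strip currency symbols and thousands separators,
    then cast each value to float."""
    return [float(str(r).replace("$", "").replace(",", "")) for r in rows]


def test_normalize_amounts():
    # pytest discovers and runs functions named test_*.
    assert normalize_amounts(["$1,200", "15"]) == [1200.0, 15.0]
```

Tests written this way run both locally and inside the CI workflows.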

In addition, we have integrated custom dataloaders into our pipelines, using pycognaize functionality to load and work with Cognaize Snapshots.
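A custom dataloader typically follows Kedro's dataset convention of `_load` / `_save` / `_describe` methods. The sketch below is a self-contained illustration of that shape; `SnapshotDataset` and the stubbed pycognaize call are assumptions, not the actual Cognaize implementation.

```python
# Hedged sketch: a Kedro-style dataset for Cognaize Snapshots.
# In a real project this class would subclass kedro.io.AbstractDataset
# and call pycognaize to fetch the snapshot; both are stubbed here so
# the example stays self-contained and runnable.

class SnapshotDataset:
    """Loads a Cognaize Snapshot by id, following the Kedro
    dataset convention (_load / _describe)."""

    def __init__(self, snapshot_id: str):
        self._snapshot_id = snapshot_id

    def _load(self) -> dict:
        # The real dataloader would use pycognaize here (e.g. fetch the
        # snapshot by id from the Cognaize platform); we return a stub.
        return {"snapshot_id": self._snapshot_id, "documents": []}

    def _describe(self) -> dict:
        # Kedro uses this for logging and catalog introspection.
        return {"snapshot_id": self._snapshot_id}
```

Registered in the data catalog, such a dataset lets pipeline nodes receive snapshot data like any other Kedro input.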

Pipelines

In our framework, we have implemented three main pipelines that together cover the complete model development process.

Deployment

Our framework allows us to deploy our models in various ways: as Docker images, servers, or packages. You can learn more about that here.

Usecases

Models developed at Cognaize with our framework can be deployed on our platform, dockerized and deployed with any cloud provider, or run as a server on any machine. Moreover, some of our models are available as Python packages from the cognaize-models registry.
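A dockerized deployment can be as simple as a small image that installs the project and runs its pipelines. The Dockerfile below is a minimal sketch; the base image, paths, and entry command are illustrative assumptions, not the actual Cognaize build.

```dockerfile
# Hedged sketch: minimal image for running a Kedro-packaged model.
# Base image, file layout, and command are assumptions.
FROM python:3.10-slim

WORKDIR /app

# Install the project's pinned dependencies first for better layer caching.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the Kedro project and run its default pipeline.
COPY . .
CMD ["kedro", "run"]
```

The same image can be pushed to any container registry and run on any cloud provider that supports containers.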

File structure

You can find a detailed description of our file structure here.