Genki ML Format


Models are exported using the genkiml CLI. The input is a model/checkpoint file and the output is the exported version of the model along with C++ code to run it and integrate it easily into an existing codebase. The tool supports PyTorch models.

Because PyTorch builds its graphs dynamically, exported PyTorch model/checkpoint files are not self-contained (unlike, for example, Keras and ONNX models). A PyTorch model therefore first has to be exported to a static graph representation, such as TorchScript or ONNX, before running it through genkiml.

After installing the required packages (see Installation CLI), run:

python path/to/model

This will output a zip archive to the desired --output-path, or into the current folder if none is provided.

Converting from PyTorch

Example using a PyTorch model

Working with PyTorch models is slightly more involved, since it requires an extra step to export the dynamic model, but it is still straightforward.

import torch
from torch import nn

class MinimalModel(nn.Module):
    def __init__(self):
        super().__init__()  # required when subclassing nn.Module
        self.lin0 = nn.Linear(in_features=100, out_features=256)
        self.lin1 = nn.Linear(in_features=256, out_features=256)
        self.lin2 = nn.Linear(in_features=256, out_features=2)
        self.act = nn.ReLU()

    def forward(self, x):
        y = self.lin0(x)
        y = self.act(y)
        y = self.lin1(y)
        y = self.act(y)
        y = self.lin2(y)
        return y

model = MinimalModel()
example_input = torch.rand(1, 100)
traced = torch.jit.trace(model, example_input)
traced.save("model.pt")  # save the TorchScript module to disk
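Tracing records the operations performed on the one example input, so it is worth verifying that the traced module reproduces the eager module's outputs before exporting. A minimal, self-contained sketch with a stand-in single-layer model:

```python
import torch
from torch import nn

# Stand-in model for illustration; the check works the same for any traced module.
model = nn.Linear(in_features=100, out_features=2)
example_input = torch.rand(1, 100)
traced = torch.jit.trace(model, example_input)

# The traced graph should reproduce the eager outputs.
with torch.no_grad():
    assert torch.allclose(model(example_input), traced(example_input))
```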

After saving the model, simply run the command-line interface. Note that for TorchScript models the input shape has to be supplied explicitly.

python path/to/model --input-shape 1 100

This will output a zip archive in the current folder that contains the exported model along with the runtime.

From ONNX and Keras