Neural networks, anywhere

Genki ML is everything you need to deploy neural networks in resource-constrained environments.

```cpp
#include <array>

#include "genkiml.h"

int main()
{
    auto model = genki::ml::load_model();
    const std::array<float, 100> input { /* ... */ };
    const auto result = model->infer({input});
    // Do something with result...
}
```


Genki ML

Machine learning is revolutionizing edge computing 🚀

In the last decade, machine learning has advanced by leaps and bounds. This is particularly true when it comes to computer vision and language.

Models relying on time-series data, e.g., temperature or acceleration, have not progressed as fast, in no small part due to the lack of infrastructure. Genki ML aims to solve that issue.

Genki ML is the missing platform for time-series ML, providing everything you need to accurately gather and label data, train, deploy, and maintain models, and run real-time inference.



For the time being, we focus on making the deployment and real-time performance of neural networks seamless. That means providing a convenient way to convert trained models from various formats into one you can embed in your application and wire into your build step, bridging the gap between offline R&D (e.g., in Python) and online inference (e.g., in C++).
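One practical concern when bridging offline Python R&D and online C++ inference is verifying that both sides produce the same outputs. As a hedged sketch (the fixture file name and the stand-in linear model are assumptions for illustration, not part of Genki ML), you could record a fixed input and its expected output offline, then replay the fixture in a C++ unit test:

```python
import json
import numpy as np

# Hypothetical offline/online parity check: record a fixed input and the
# model's output in Python, then assert the embedded C++ runtime
# reproduces it within a small tolerance.
rng = np.random.default_rng(0)
fixed_input = rng.standard_normal(100).astype(np.float32)

# Stand-in for the real trained model (here: a random linear layer).
weights = rng.standard_normal((100, 3)).astype(np.float32)
expected = fixed_input @ weights  # shape (3,)

with open("parity_fixture.json", "w") as f:
    json.dump({"input": fixed_input.tolist(),
               "expected": expected.tolist()}, f)
```

The C++ side would load the same JSON, run `model->infer(...)` on the recorded input, and compare the results element-wise against `expected`.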

Moving forward, we will extend our functionality to meet the needs of our users. Please reach out to us on GitHub for feature requests and join our growing community of machine learning experts on Discord.

Command Line Interface

genkiml is the command line interface (CLI) that powers Genki ML. It currently supports converting models from formats such as ONNX, TensorFlow, and PyTorch into the Genki ML C++ runtime.

Here is an example of how you can convert a fully connected Keras model using the genkiml CLI.

First we define a demo model:

```python
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(256, input_shape=(100,)),
    # ...
])

# Save the model so the CLI can pick it up
model.save("fully_connected_keras_model")
```

Then we point `genkiml` at the saved model and the CLI takes care of the rest:

```shell
genkiml fully_connected_keras_model
```

Quest 2 Demo

As an example of something we have built using the Genki ML runtime, here we run inference on the Meta Quest 2 in real time.

Hardware Agnostic

As long as you are working with time-series data, Genki ML is there for you!

We use the Wave smart ring as example hardware, but note that Genki ML is hardware agnostic. The ring streams IMU data to the Quest, where a surface detection model detects whether the hand is touching a surface.
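Real-time models like this typically consume fixed-length windows of the sensor stream rather than individual samples. As a minimal sketch (the window length of 100 matches the input size in the C++ snippet at the top; the hop size and function name are arbitrary illustrative choices, not Genki ML API), IMU samples could be windowed like this before being fed to inference:

```python
import numpy as np

def make_windows(samples: np.ndarray, window: int = 100, hop: int = 10) -> np.ndarray:
    """Slice a 1-D sensor stream into overlapping fixed-length windows."""
    n = (len(samples) - window) // hop + 1
    return np.stack([samples[i * hop : i * hop + window] for i in range(n)])

# 250 samples of a (stand-in) IMU channel
stream = np.arange(250, dtype=np.float32)
batch = make_windows(stream)
# batch.shape == (16, 100): 16 windows, each ready to pass to infer()
```

Each row of `batch` is then a candidate input for the model; overlapping windows (hop < window) trade extra compute for lower detection latency.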


- Step-by-step guides to setting up the `genkiml` CLI.
- GitHub Repository: learn how the internals work and contribute.
- Learn how to convert from TensorFlow and ONNX to the Genki ML format.
- How to run real-time inference on exciting new hardware!