Autoencoders are a class of ML models trained to reconstruct their input at the output. They are used largely as placeholders, or as a security measure (see arXiv:1802.03471).
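As a rough illustration, a minimal PyTorch autoencoder might look like the sketch below. The layer sizes and class name are assumptions; the actual model is defined in export_onnx.py.

    # Minimal autoencoder sketch (hypothetical layer sizes; the real model
    # lives in export_onnx.py).
    import torch.nn as nn

    class Autoencoder(nn.Module):
        def __init__(self, n_features=32, n_latent=8):
            super().__init__()
            # The encoder compresses the input, the decoder reconstructs it.
            self.encoder = nn.Sequential(nn.Linear(n_features, n_latent), nn.ReLU())
            self.decoder = nn.Linear(n_latent, n_features)

        def forward(self, x):
            return self.decoder(self.encoder(x))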
In this project we focus on ONNX. ONNX is the emerging standard for exchanging ML models between applications, and models are accessed through the ONNX Runtime (ORT) API. Although ORT is used mainly from Python, the developers also provide a C/C++ interface, which we set out to use.
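To make the workflow concrete, here is a minimal sketch of loading and running a model through the ORT Python API; the file name and input shape are assumptions, and the C/C++ interface follows a similar create-session/run pattern at a lower level.

    # Hedged sketch: run an exported ONNX model with the ONNXRuntime Python API.
    # "autoencoder.onnx" and the input shape (1, 32) are assumed names/sizes.
    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("autoencoder.onnx")
    input_name = sess.get_inputs()[0].name
    x = np.random.rand(1, 32).astype(np.float32)
    reconstruction = sess.run(None, {input_name: x})[0]
    print(reconstruction.shape)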
Vitis-AI is a possible alternative: an API built on top of ORT for Xilinx FPGAs. However, it does not support XGBoost (as of April 2021), so we work directly with ORT.
Unfortunately, I misunderstood the purpose of ORT: I assumed the API would give access to the nodes (and their weights) inside the ONNX model, which it does not. As a result, I do not expect to use ORT for future projects.
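For reference, the stored weights are reachable through the onnx Python package (the model's protobuf representation) rather than through ORT itself; a small sketch, assuming the same hypothetical file name as above:

    # The onnx package (not ORT) exposes stored weights as graph initializers.
    import onnx
    from onnx import numpy_helper

    model = onnx.load("autoencoder.onnx")
    for init in model.graph.initializer:
        weights = numpy_helper.to_array(init)   # tensor as a numpy array
        print(init.name, weights.shape)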
Train the autoencoder and export it to ONNX: export_onnx.py (export step sketched below)
Verify the ONNX model: import_onnx.py (Python), import_onnx.cpp (C++)
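The export step presumably boils down to a torch.onnx.export call; a sketch under assumed names and shapes (the real code is in export_onnx.py):

    # Hedged sketch of exporting the trained autoencoder to ONNX.
    # Model, file name, and input shape are assumptions.
    import torch

    model = Autoencoder()   # trained instance of the sketch above
    model.eval()
    dummy_input = torch.randn(1, 32)
    torch.onnx.export(model, dummy_input, "autoencoder.onnx",
                      input_names=["input"], output_names=["output"])

Verification then consists of loading the exported file and running it, as in the ORT session example above, presumably comparing the outputs against the original model.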
The standalone C ONNX reader works fine. It could be integrated into host.c, but without access to the node weights this is of limited use.
To use ORT with the v++ compiler, add the ONNX Runtime library flags to the autogenerated makefile, e.g. at line 86:

    LDFLAGS += $(opencl_LDFLAGS) -L/home/jan/Github_repos/ONNXRuntime/onnxruntime-linux-x64-1.4.0/lib -lonnxruntime
The specific ONNX Runtime release used here can be found at:
https://github.com/microsoft/onnxruntime/releases/download/v1.4.0/onnxruntime-linux-x64-1.4.0.tgz
The following resources were referenced:
https://michhar.github.io/convert-pytorch-onnx/
https://www.onnxruntime.ai/python/auto_examples/plot_load_and_predict.html
https://github.com/lutzroeder/netron (web version: https://netron.app/)