
Releases: ML-TANGO/TANGO

tango-24.11

21 Nov 06:51
f39367f

TANGO 2024 11 Release

This release includes several enhancements and updates to the following functions:

  • Project Manager: project_manager container,
  • Unified AutoNN: autonn container,
  • Code Generation: code_gen container,
  • Target Deployment: ondevice_deploy / cloud_deploy container

Project Manager

Project Manager includes the following new functions:

  • Pipeline Iteration for CI/CD Support
  • Presets for commonly used targets and datasets
  • Visualization of Unified AutoNN progress and statistics

Unified AutoNN

In this release, the containers related to BMS, Yolo, Resnet, Visualization, NAS, and HPO are unified into a single Unified AutoNN (autonn) container for speedup, providing the following features:

  • BMS function: rule-based model selection among Resnet, Resnet-Cifar, YOLOv7, and YOLOv9
  • Resume training with an adjusted batch size when CUDA runs out of memory (see the sketch after this list)
  • LLM inference (TangoChat)
  • LLM RAG (Retrieval-Augmented Generation) prototype
  • Model exporter: TorchScript, ONNX, ONNX end-to-end
  • Training progress and statistics visualization
  • Neural net model visualization
  • Support for continual learning (tested with partitioned COCO dataset)
  • Retraining Pipeline
    • NAS: SuperNet training - NAS (SubNet training) - retraining pipeline
    • HPO: NAS-HPO-retraining pipeline
  • Model Visualization
    • Clarification of node connection points and edges
    • Support for the YOLOv9 model
      • YOLOv9 block node pop-up function
      • Backbone, neck, and head layout of YOLOv9
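
The out-of-memory recovery item above can be made concrete with a minimal sketch: when a CUDA OOM error is raised, training resumes with a halved batch size. The train_one_epoch helper is a hypothetical placeholder, not part of the actual autonn code.

    import torch
    from torch.utils.data import DataLoader

    def train_with_oom_fallback(model, dataset, batch_size, min_batch_size=1):
        # Retry training with a halved batch size whenever CUDA runs out of memory.
        while batch_size >= min_batch_size:
            try:
                loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
                train_one_epoch(model, loader)   # hypothetical training step
                return batch_size                # succeeded with this batch size
            except RuntimeError as err:
                if "out of memory" not in str(err):
                    raise                        # unrelated error: re-raise
                torch.cuda.empty_cache()         # release cached blocks before retrying
                batch_size //= 2                 # resume with a smaller batch
        raise RuntimeError("even the minimum batch size does not fit in GPU memory")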

Code Generation

  • Run-time Code Generation for YOLOv9

    • For detection applications, functions for generating run-time code for YOLOv9 have been added.
  • Web-based Application Code Generation

    • For cloud services such as GCP (Google Cloud Platform) and AWS (Amazon Web Services), web-based application code generation functions have been added.
  • Raspberry Pi5 + Google Coral M.2 PCIe TPU support

    • For on-device applications, the code generation function for Raspberry Pi 5 + Google Coral M.2 PCIe TPU + TensorFlow Lite has been added (see the sketch after this list).
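
As an illustration of the Raspberry Pi 5 + Coral TPU path, the sketch below runs a TensorFlow Lite model through the Edge TPU delegate using tflite_runtime; the model file name is a hypothetical placeholder, and the generated run-time code may differ.

    import numpy as np
    from tflite_runtime.interpreter import Interpreter, load_delegate

    # Load a model compiled for the Edge TPU and attach the Coral delegate.
    interpreter = Interpreter(
        model_path="model_edgetpu.tflite",                 # hypothetical model file
        experimental_delegates=[load_delegate("libedgetpu.so.1")],
    )
    interpreter.allocate_tensors()

    input_info = interpreter.get_input_details()[0]
    output_info = interpreter.get_output_details()[0]

    # Feed a dummy tensor with the shape/dtype the model expects, then run inference.
    dummy = np.zeros(input_info["shape"], dtype=input_info["dtype"])
    interpreter.set_tensor(input_info["index"], dummy)
    interpreter.invoke()
    detections = interpreter.get_tensor(output_info["index"])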

Target Deployment

In this release, additional supported deployment targets have been added; refer to here for details.

tango-24.05

28 May 06:54
18586d2

TANGO 2024 05 Release

This release includes several enhancements and updates to the following functions:

  • Project Manager: project_manager,
  • Base Model Selector: bms,
  • Visualizer: vis2code
  • Code Generation: code_gen

Project Manager

The version of Django used in Project Manager has been upgraded from 3.2.12 to 3.2.25 to address CVEs (Common Vulnerabilities and Exposures).

vue-sweetalert2 has been replaced with sweetalert to resolve an error in the npm run build process.

Base Model Selector

Provides a WebUI where the user can manually specify the model size.

The batch size check process has been enhanced with a binary search algorithm.
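
A minimal sketch of the idea, assuming a hypothetical fits_in_memory(batch_size) probe that runs a trial forward/backward pass and reports whether it fits in GPU memory:

    def max_feasible_batch_size(fits_in_memory, low=1, high=1024):
        # Binary-search the largest batch size whose trial step fits in GPU memory.
        best = low
        while low <= high:
            mid = (low + high) // 2
            if fits_in_memory(mid):
                best = mid          # mid works; try something larger
                low = mid + 1
            else:
                high = mid - 1      # mid is too large; shrink the range
        return best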

The version of Django used in the Base Model Selector has been upgraded from 3.2.12 to 5.0.3 to address CVEs (Common Vulnerabilities and Exposures).

Visualizer

YAML import function

  • Takes ResNet and YOLOv7-tiny neural networks described in YOLO-style YAML files as input and visualizes them (see the sketch at the end of this section).

Complex Node Support

  • Creates and validates YOLOv7-tiny neural networks as PyTorch models (.pt).

YAML export function

  • Converts Node & Edge-style neural networks described in JSON into YOLO-style YAML files.

Refer to PR #150 for details on the Visualizer enhancements in this release.
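
To make the YAML import idea above concrete, the sketch below reads a YOLO-style YAML layer list ([from, number, module, args] entries) and turns it into a node/edge graph; the output field names are assumptions for illustration, not the vis2code format.

    import json
    import yaml  # PyYAML

    def yolo_yaml_to_graph(yaml_path):
        # Convert a YOLO-style layer list into a Node & Edge style graph.
        with open(yaml_path) as f:
            cfg = yaml.safe_load(f)
        layers = cfg.get("backbone", []) + cfg.get("head", [])
        nodes, edges = [], []
        for idx, (frm, number, module, args) in enumerate(layers):
            nodes.append({"id": idx, "type": module, "repeat": number, "args": args})
            sources = frm if isinstance(frm, list) else [frm]
            for src in sources:
                src = idx - 1 if src == -1 else src    # -1 means "previous layer"
                if 0 <= src < idx:
                    edges.append({"from": src, "to": idx})
        return json.dumps({"nodes": nodes, "edges": edges}, indent=2)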

Code Generation

Fixed an error when building the Docker container with Python 3.7:

    Deployment module build
    SyntaxError: future feature annotations is not defined
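
For context: the postponed-annotations feature (PEP 563) is only recognized from Python 3.7 onward, so any module beginning with the import below raises exactly this SyntaxError on an older interpreter. The file name and function are hypothetical, shown only to illustrate the offending construct.

    # deploy_example.py (hypothetical) -- must be the first statement in the module
    from __future__ import annotations   # requires Python >= 3.7 (PEP 563)

    def build(target: DeploymentTarget) -> None:
        # With postponed evaluation, the annotation above stays a string and is
        # never evaluated at function definition time, so DeploymentTarget need
        # not be defined here.
        ...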

tango-23.11

24 Oct 01:52
c7b8352

TANGO 2023 11 Release

This release includes several enhancements and updates to the following functions:

  • Project Manager: project_manager,
  • Base Model Selector: bms,
  • AutoNN: autonn_yoloe (for detection task), autonn_resnet (newly added for classification task)
  • Code Generation: code_gen (formerly known as deploy_codegen),
  • Target Deployment: cloud_deploy, kube_deploy, ondevice_deploy
    • Formerly, a single deploy_target container handled deployment for all targets; now there are individual deployment containers for each target.

To improve the efficiency of the autonn container, which typically requires several days for training even on high-performance GPUs, we have implemented a manual workflow step-forward functionality within the project manager. This functionality has been tested and validated through two stages:

  • bms + autonn_yoloe or autonn_resnet stage: This stage utilizes the dataset and target configurations within the project manager to select the appropriate base model. The selected base model undergoes fine-tuning, generating a trained model and associated codes for the subsequent stage.
  • code_gen + *_deploy stage: Building upon the trained model and generated codes from the previous stage, this stage prepares executable neural network codes for deployment on the specified target, as configured in the project_manager container. Please note that the current release adds K8s and cloud (e.g. Google Cloud Platform) target deployment-related code, although this work is still in progress.

Notes on the current release:

  • We have tested K8s target deployment-related code, which has been developed and built.
  • The project manager in this release uses a Vue.js-based front-end.

BMS and AutoNN

This release includes the bms (Base Model Selector) container, developed by ETRI, which serves as a simple test of the BMS member container role. The bms container selects the base model from Yolo v7 or Resnet, along with a suitable batch size for training, based on target type information such as ondevice (PC, Android device, embedded board), K8s, or cloud, specified within the project configuration step of the project_manager. The selected base model is utilized in the AutoNN containers for fine-tuning with the dataset, also specified within the project configuration step. Additionally, the AutoNN autonn_yoloe and autonn_resnet containers (implemented in the autonn\YoloE and autonn\ResNet folders) have been included for testing the AutoNN member container role within the TANGO project workflow (pipeline).
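
A minimal sketch of the kind of rule-based selection described above; the rule table, model names, and target labels are illustrative assumptions, not the actual bms logic.

    # Illustrative rule table: (task, target type) -> base model.
    BASE_MODELS = {
        ("detection", "cloud"): "yolov7",
        ("detection", "k8s"): "yolov7",
        ("detection", "ondevice"): "yolov7-tiny",
        ("classification", "cloud"): "resnet50",
        ("classification", "ondevice"): "resnet20-cifar",
    }

    def select_base_model(task, target_type):
        # Fall back to the smallest model of the task family if the pair is unknown.
        default = "yolov7-tiny" if task == "detection" else "resnet20-cifar"
        return BASE_MODELS.get((task, target_type), default)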

Code Generation and Deployment on Targets

We have made changes to the source structure related to deployment codes compared to the previous release. The updates are as follows:

  • The codes responsible for generating the executable neural network code have been moved to the deploy_codegen/optimize_codegen folder.
  • The codes for deploying the executable neural network have been relocated to the deploy_target folder.

The deploy_target folder now includes sub-folders based on the deployment target:

  • cloud: codes for deployment to cloud environments, handled by the cloud_deploy container
  • k8s: codes for deployment to Kubernetes, handled by the kube_deploy container
  • ondevice: codes for deployment to on-device platforms such as PCs, Android phones, or embedded devices, handled by the ondevice_deploy container
    • for PCs or embedded devices, the ondevice_deploy container generates Python code that can be run by a Python interpreter.
    • for Android phones, the ondevice_deploy container generates an APK file.

Notes on deployment:

  • The codes in the deploy_target/k8s folder can be used to build a Docker image, but integration testing is still ongoing. Therefore, in this release, K8s deployment is not functioning properly.
  • In this release, the code_gen container supports TensorRT, Apache TVM, PyTorch, and ACL. Support for RKNN is still in progress.

To Do:

TBD

Model/Dataset Mirror

26 Sep 06:53
7a57ac4

Pretrained Model Mirror

This release provides upload/download functionality for the pre-trained model files used by TANGO.

It is not intended for managing code versions.

Notes

  • Each individual file included in a release must be smaller than 2 GB.
  • There is no limit on the total size of a release, and no limit on bandwidth usage.
  • Ref: About releases: Storage and bandwidth quotas

Click the Edit icon at the top right, drag & drop the pretrained model in edit mode, and then click the Update release button at the bottom.


Please add a link and a description for each model you upload.

tango-23.06

15 Jun 06:00

TANGO 2023 06 Release

This release introduces several enhancements and updates, leveraging the following containers: bms, autonn (YoloE), deploy_codegen, and deploy_target, all integrated into a React.js-based project manager front-end.

To improve the efficiency of the autonn container, which typically requires several days for training even on high-performance GPUs, we have implemented a manual workflow step-forward functionality within the project manager. This functionality has been tested and validated through two stages:

  • bms + autonn stage: This stage utilizes the dataset and target configurations within the project manager to select the appropriate base model. The selected base model undergoes fine-tuning, generating a trained model and associated codes for the subsequent stage.
  • deploy_codegen + deploy_target stage: Building upon the trained model and generated codes from the previous stage, this stage prepares executable neural network codes for deployment on the specified target, as configured in the project manager. Please note that the current release includes the addition of K8s target deployment-related code, although it has not undergone full testing.

Notes on the current release:

  • We have introduced K8s target deployment-related code, which has been developed and built but requires further testing to ensure seamless integration.
  • This release will be the last release to feature the React.js-based project manager front-end.
    Starting from the next release, we will transition to a new Vue.js-based project manager front-end, currently being developed under the teslasystem_vue branch.

BMS and AutoNN

This release includes the BMS (Base Model Selector) container, developed by ETRI, which serves as a simple test of the BMS member container role. The BMS container selects the base model from the Yolo v7 series based on the target type information (pc, ondevice, cloud, or k8s) specified within the project configuration step of the project manager. The selected base model is utilized in the AutoNN containers for fine-tuning with the dataset, also specified within the project configuration step. Additionally, the AutoNN container (implemented in autonn\YoloE), developed by ETRI, has been included for testing the AutoNN member container role within the TANGO project workflow (pipeline).

Code Generation and Deployment

We have made changes to the source structure related to deployment codes compared to the previous release. The updates are as follows:

  • The codes responsible for generating the executable neural network code have been moved to the deploy_codegen folder.
  • The codes for deploying the executable neural network have been relocated to the deploy_target folder.

The deploy_target folder now includes sub-folders based on the deployment target:

  • cloud: codes for deployment to cloud environments
  • k8s: codes for deployment to Kubernetes clusters (Please note that the k8s deployment functionality is not operational in this release but will be addressed in future updates.)
  • ondevice: codes for deployment to on-device platforms

Notes on deployment:

To Do:

In the next release, the project manager will use a Vue.js-based front-end with support for fully manual/automatic workflow step-forward functionality.

tango-22.11

07 Dec 02:40

TANGO 2022 11 Release
Includes

  • manual workflow step forward tested
  • core REST API implemented and tested
  • project manager function: target set management
  • labelling tool: MS COCO import tested

To do

  • automatic workflow step forward function

tango-22.11-pre1

31 Oct 09:32
Pre-release

Description of This Pre-Release

The functions in this release are as follows:

Completed:

  • Project Configuration
  • Target Set Management
  • Dataset Management
  • Workflow Configuration
  • Workflow Control (manual start)
  • CPU-only Member Containers Unit Tested
    • bms: Base Model Selection
    • vis2code: Model Visualizer
    • auto_nn: Automatic Neural Network Generation (neck_nas, bb_nas)
    • code_gen: Code for Deployment on Target (on-device target)
    • deployment: On-Device Deployment

Scheduled:

  • Workflow Control (Automatic Coordination)
    • prior member container readiness check
    • workflow step forward: start(), status_report() processing (see the sketch at the end of this list)
    • member container health check: heartbeat/probe
  • Auto Listing in Project Configuration
    • available targets auto list
    • available dataset auto list
  • GPU enabled Member Container Support
    • bms, backbone-nas, neck-nas
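
A minimal sketch of how a member container might expose the start() and status_report() handshake listed above, using Flask; the endpoint names, query parameters, and response strings here are assumptions for illustration, not the TANGO REST API specification.

    from flask import Flask, request

    app = Flask(__name__)
    state = {"status": "ready"}        # single in-memory status, for illustration only

    @app.route("/start")
    def start():
        # The project manager asks this member container to begin its workflow step.
        user_id = request.args.get("user_id")
        project_id = request.args.get("project_id")
        state["status"] = "running"
        # ... kick off the container's real work for (user_id, project_id) here ...
        return "started"

    @app.route("/status_report")
    def status_report():
        # The project manager polls this endpoint to decide when to step the workflow forward.
        return state["status"]

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)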