
Releases: CentaurusInfra/fornax-serverless

v0.1.1

12 Jan 22:21
47ac4cb
Pre-release

This release includes a number of bug fixes and performance improvements over v0.1.0.

v0.1.0

09 Dec 19:43
a5dfb1b

This release is the first production release of the project. The following components and resources are included:

Delivered Components

  • Fornax Node Agent, which includes:
    • A full implementation of the FornaxCore/NodeAgent gRPC protocol
    • A Node actor that communicates with FornaxCore, using revision tracking to report node, pod, and session resource state (see the sketch after this list)
    • Pod actors that manage the pod lifecycle via the CRI API with Containerd
    • Session actors that open/close sessions on a pod via the Session service interface, with a gRPC Session service implementation
    • Integration with the Quark container runtime to hibernate containers when sessions close and wake them when sessions open
  • Fornax Core Server, which includes:
    • A Kubernetes extension API server that exposes a read-write API for applications and sessions, and a read-only, list-watch API for pods and nodes
    • A Node Monitor and gRPC server that talk to the Fornax Node Agent
    • A Node/Pod Manager that manages node and pod state reported by the Fornax Node Agent
    • An Application Manager that creates pods, binds application sessions to pods, and auto-scales pods according to session requests and the idle-session target
    • A Pod Manager that manages pod state across the whole cluster, plus a Pod Scheduler that assigns pods to nodes
    • In-memory storage that replaces etcd for saving resources, providing the same capabilities as etcd storage to power the API server
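
For illustration, here is a minimal Go sketch of the revision-tracking idea behind the Node actor's state reporting. All names here (NodeState, NodeActor, and the channel standing in for the gRPC stream) are hypothetical simplifications, not the project's actual protocol types.

```go
package main

import (
	"fmt"
	"sync"
)

// NodeState is a stand-in for the node/pod/session state reported to FornaxCore.
type NodeState struct {
	Revision int64  // monotonically increasing revision used to track state
	Payload  string // placeholder for node, pod, and session resource state
}

// NodeActor bumps a revision on every state change so FornaxCore can tell
// stale reports from fresh ones, mirroring the revision tracking described above.
type NodeActor struct {
	mu       sync.Mutex
	revision int64
	reports  chan NodeState // stand-in for the gRPC stream to FornaxCore
}

func NewNodeActor() *NodeActor {
	return &NodeActor{reports: make(chan NodeState, 16)}
}

// Report increments the revision and queues a state message for FornaxCore.
func (a *NodeActor) Report(payload string) {
	a.mu.Lock()
	a.revision++
	state := NodeState{Revision: a.revision, Payload: payload}
	a.mu.Unlock()
	a.reports <- state
}

func main() {
	actor := NewNodeActor()
	go func() {
		actor.Report("pod nginx-0 running")
		actor.Report("session s-1 open")
		close(actor.reports)
	}()
	for s := range actor.reports {
		fmt.Printf("rev %d: %s\n", s.Revision, s.Payload)
	}
}
```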

Resource model

Two read-write external resources and two read-only internal resources are exposed in this release:

  • Fornax application
    • Defines the application container spec and scaling targets (see the sketch after this list).
  • Fornax application session
    • The application workload resource; each session has an independent ingress endpoint.
  • K8s Pod
    • Internal resource, read-only, supports list-watch integration.
  • K8s Node
    • Internal resource, read-only, supports list-watch integration.
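
For a concrete picture of the two read-write resources, here is a rough Go sketch of their shape. Every field name below is a hypothetical simplification for illustration, not the project's actual API schema.

```go
package main

import "fmt"

// Application is a simplified, hypothetical sketch of the Fornax application
// resource: a container spec plus scaling targets.
type Application struct {
	Name           string
	ContainerImage string // application container spec, reduced to an image here
	ScalingTarget  ScalingTarget
}

// ScalingTarget drives the Application Manager's auto-scaling: keeping a
// buffer of idle sessions ready lets new sessions warm-start.
type ScalingTarget struct {
	IdleSessions int // number of idle sessions to keep available
	MaxSessions  int
}

// ApplicationSession is the workload resource; each session gets its own
// ingress endpoint once it is bound to a pod.
type ApplicationSession struct {
	Name            string
	Application     string // name of the owning Application
	IngressEndpoint string // independent per-session endpoint, filled in when bound
}

func main() {
	app := Application{
		Name:           "echoserver",
		ContainerImage: "nginx:latest",
		ScalingTarget:  ScalingTarget{IdleSessions: 5, MaxSessions: 50},
	}
	sess := ApplicationSession{Name: "echoserver-session-1", Application: app.Name}
	fmt.Printf("app=%+v\nsession=%+v\n", app, sess)
}
```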

Performance test

Real cluster test

Test cluster configuration

  • FornaxCore: 1 GCE VM with 4 cores and 16 GB RAM
  • Nodes: 67 GCE VMs, each with 32 cores and 128 GB RAM

Session Cold start latency test result

Applications  Concurrency  Sessions  P50 (ms)  P90 (ms)  P99 (ms)  Total startup time (s)  Throughput (sessions/s)
67            1            20100     259       346       401       81                      248
67            5            20100     544       719       783       39                      490

Session Warm start latency test result

Applications  Concurrency  Sessions  P50 (ms)  P90 (ms)  P99 (ms)  Total startup time (s)  Throughput (sessions/s)
67            1            20100     5         7         8         6.8                     2955
67            5            20100     13        18        26        5.4                     3720

Simulation Cluster Test

Test Config

  • 1,000 simulated nodes, which do not run real workloads.
  • 1,000 applications, each with 300 pod replicas.

Test cluster configuration

  • FornaxCore: 1 VM with 4 cores and 16 GB RAM
  • Simulated nodes: one laptop with 12 cores and 64 GB RAM

App scale test result

Test    Applications  Pods    Total startup time (s)  Pod throughput (pods/s)
test-1  1000          300000  80                      3750

v0.1.0-alpha

29 Jul 21:03
66e8d9b
Pre-release

This release is the first release of the project. The following components and resources are included:

Delivered Components

  • Fornax Node Agent, which includes:
    • Communication with the Fornax Core Server using the Fornax gRPC message protocol
    • Node registration and configuration via Fornax gRPC messages
    • Pod creation/termination via the CRI API with Containerd
    • Collection of node and pod resources using cAdvisor, with reporting of node/pod resource state
    • Monitoring of pod state on the node and syncing with the Fornax Core Server
    • Persistence of pod state in a SQLite DB on the node, with recovery of pod state from the saved state and the Containerd runtime
  • Fornax Core Server, which includes:
    • An API server
    • A Node Monitor and gRPC server that talk to the Fornax Node Agent
    • A Node Manager that manages node and pod state reported by the Fornax Node Agent
    • A Fornax Application Manager that controls the lifecycle of application pods
    • A Pod Manager that manages pod state across the whole cluster and works with the Pod Scheduler to assign pods
    • A simple Pod Scheduler that assigns pods to nodes subject to memory and CPU constraints (see the sketch after this list)
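
As a rough illustration of the scheduling constraint check, here is a minimal Go sketch. The types and the first-fit strategy are assumptions for illustration, not the project's actual scheduler.

```go
package main

import "fmt"

// Resources holds the CPU (millicores) and memory (bytes) a pod requests
// or a node has left to allocate.
type Resources struct {
	MilliCPU int64
	Memory   int64
}

// Node is a hypothetical, simplified view of a schedulable node.
type Node struct {
	Name        string
	Allocatable Resources
}

// schedule picks the first node whose remaining allocatable CPU and memory
// cover the pod's request; it returns false if no node fits.
func schedule(pod Resources, nodes []Node) (string, bool) {
	for i := range nodes {
		n := &nodes[i]
		if n.Allocatable.MilliCPU >= pod.MilliCPU && n.Allocatable.Memory >= pod.Memory {
			// Deduct the pod's request so later pods see the updated capacity.
			n.Allocatable.MilliCPU -= pod.MilliCPU
			n.Allocatable.Memory -= pod.Memory
			return n.Name, true
		}
	}
	return "", false
}

func main() {
	nodes := []Node{
		{Name: "node-1", Allocatable: Resources{MilliCPU: 4000, Memory: 8 << 30}},
		{Name: "node-2", Allocatable: Resources{MilliCPU: 32000, Memory: 128 << 30}},
	}
	pod := Resources{MilliCPU: 500, Memory: 1 << 30}
	if name, ok := schedule(pod, nodes); ok {
		fmt.Println("assigned pod to", name)
	}
}
```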

Resource model

Only three resources are managed in this release:

  • Fornax application
    • Defines the application container spec and scaling targets; currently only static target scaling is supported.
    • Application spec and status are persisted by the Fornax Core Server in etcd.
  • Node
    • Node spec and revision are created and stored on the node.
    • The Fornax Core Server rebuilds cluster information when a Node Agent registers.
  • Pod
    • Pod specs are created by the Fornax Core Server's Application Manager.
    • A pod's assignment is persisted by the Node Agent once it is scheduled to a node; the node saves the spec and revision into its SQLite DB (see the sketch after this list).
    • Pods not yet scheduled are recreated each time the Fornax Core Server restarts.
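
As a rough sketch of the persistence and recovery flow described above, the following Go program saves a pod's spec and revision into SQLite and reloads them on startup. The table layout and the go-sqlite3 driver choice are assumptions for illustration, not the project's actual schema.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3" // assumption: any SQLite driver would do
)

func main() {
	db, err := sql.Open("sqlite3", "pods.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// One row per pod: its serialized spec and the revision it was saved at.
	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS pods (
		name TEXT PRIMARY KEY, revision INTEGER, spec TEXT)`); err != nil {
		log.Fatal(err)
	}

	// Persist a scheduled pod's spec and revision, as the Node Agent does
	// when a pod is assigned to the node.
	if _, err := db.Exec(`INSERT OR REPLACE INTO pods VALUES (?, ?, ?)`,
		"nginx-0", 7, `{"image":"nginx:latest"}`); err != nil {
		log.Fatal(err)
	}

	// On restart, reload saved pod state before reconciling with Containerd.
	rows, err := db.Query(`SELECT name, revision, spec FROM pods`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		var name, spec string
		var rev int64
		if err := rows.Scan(&name, &rev, &spec); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("recovered pod %s at revision %d: %s\n", name, rev, spec)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```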