FedML 0.8.0
FedML Open and Collaborative AI Platform
Train, deploy, monitor, and improve machine learning models anywhere (edge/cloud) powered by collaboration on combined data, models, and computing resources
What's Changed
Feature Overview
- Support for MLOps (https://open.fedml.ai)
- Multiple scenarios:
  - FedML Octopus: Cross-silo Federated Learning
  - FedML Beehive: Cross-device Federated Learning
  - FedML Parrot: FL Simulation with a Single Process or Distributed Computing, for a smooth migration from research to production
  - FedML Spider: Federated Learning on Web Browsers
- Support for any machine learning framework: PyTorch, TensorFlow, JAX (with Haiku), and MXNet; see the training sketch after this list.
- Diverse communication backends (MPI, gRPC, PyTorch RPC, MQTT + S3)
- Differential Privacy: central DP (CDP) and local DP (LDP)
- Attacker (API: fedml.core.FedMLAttacker); README: python/fedml/core/security/readme.md
- Defender (API: fedml.core.FedMLDefender); README: python/fedml/core/security/readme.md (a usage sketch for the attacker and defender hooks follows this list)
- Secure Aggregation (multi-party computation): cross_silo/light_sec_agg_example
- The FedML/python/app folder provides example applications for real-world settings.
- Enable federated model inference on MLOps (https://open.fedml.ai)
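
As a quick illustration of the framework-agnostic workflow, the sketch below follows the FedML Python quickstart pattern: the scenario (Parrot/Octopus/Beehive), ML framework, and communication backend are all selected in fedml_config.yaml, while the script stays the same. Treat the exact entry points shown here (fedml.init(), FedMLRunner, and the helper loaders) as an illustrative assumption and defer to https://doc.fedml.ai/ for the canonical quickstart.

```python
# Minimal training sketch following the FedML quickstart pattern
# (illustrative only -- see https://doc.fedml.ai/ for the canonical version).
# Launch with a config file, e.g.: python main.py --cf fedml_config.yaml
import fedml
from fedml import FedMLRunner

if __name__ == "__main__":
    # Parse fedml_config.yaml: scenario (Parrot/Octopus/Beehive), ML framework,
    # communication backend (MPI, gRPC, PyTorch RPC, MQTT + S3), etc.
    args = fedml.init()

    # Device, dataset, and model are all resolved from the config.
    device = fedml.device.get_device(args)
    dataset, output_dim = fedml.data.load(args)
    model = fedml.model.create(args, output_dim)

    # The same runner covers simulation and federated deployment;
    # switching scenarios is a config change, not a code change.
    FedMLRunner(args, device, dataset, model).run()
```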
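The attacker and defender hooks listed above are driven by the same config file, as is differential privacy. The sketch below assumes a singleton-style API (get_instance(), init(args), is_defense_enabled()); only the class names fedml.core.FedMLAttacker and fedml.core.FedMLDefender come from this release note, so verify the method names against python/fedml/core/security/readme.md before relying on them.

```python
# Hedged sketch: enabling the security hooks from a parsed FedML config.
# Method names beyond the two class names are assumptions -- check the security README.
import fedml
from fedml.core import FedMLAttacker, FedMLDefender

args = fedml.init()  # loads fedml_config.yaml, including any attack/defense sections

FedMLAttacker.get_instance().init(args)   # no-op unless an attack is enabled in the config
FedMLDefender.get_instance().init(args)   # no-op unless a defense is enabled in the config

if FedMLDefender.get_instance().is_defense_enabled():
    print("A defense strategy is active for this run")
```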
For more detailed instructions, please refer to https://doc.fedml.ai/
New Features
- [Serving] Make all serving pipelines work: device login, model creation, model packaging, model pushing, model deployment and model monitoring.
- [Serving] Make all three entry points for creating model cards work: from the trained model list, from the model card creation web page, and from the fedml model CLI.
- [OpenSource] Formally release the features of all previous versions as v0.8.0: training, security, aggregator, communication backends, MQTT optimization, metrics tracing, event tracing, and real-time logs.
Bug Fixes
- [CoreEngine] Fix a CLI engine error when running simulation.
- [Serving] Adjust the training code to adapt to the ONNX sequence rule.
- [Serving] Fix a URL error in the model serving platform.
Enhancements
- [CoreEngine/MLOps][Log] Format log timestamps as NTP time.
- [CoreEngine/MLOps] Show a progress bar and the size of the transferred data in the log when the client downloads or uploads a model.
- [CoreEngine] Optimize client behavior when the network is weak or disconnected.