Tutorials

Federated Learning in Orbital Edge Computing
Bahman Javadi, Rodrigo N. Calheiros, Nancy Yang and Omer Rana
Abstract: Low Earth Orbit (LEO) satellites enable applications such as global internet connectivity, remote sensing, Earth observation, and scientific research, with significant growth driven by advancements in satellite miniaturization, lower launch costs, and increasing demand for broadband access. The integration of edge computing capabilities into LEO satellite systems gives rise to the new field of Orbital Edge Computing (OEC). OEC represents a transformative shift in space-based data processing, enabling real-time analytics, reduced latency, and enhanced autonomy in remote and bandwidth-constrained environments. In parallel with the development of OEC, Federated Learning (FL) has emerged as a promising approach to distributed machine learning, enabling model training across decentralized data sources without transferring raw data. This tutorial explores the architectural paradigms, technological enablers, and application domains of Federated Learning in Orbital Edge Computing. We review the basic concepts of FL and use the Flower FL framework to provide hands-on experience with FL tools, followed by simulation integration and onboard processing considerations for LEO satellites.

Level: Intermediate

Assumed knowledge: Intermediate Python programming; basic cloud and edge computing; basic machine learning

Expectations: Ability to access Google Colab online (Google account required). No installation required.
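
The core FL idea this tutorial builds on can be sketched in a few lines: each client trains on its own data, and only model parameters, never raw data, are sent to a server for averaging (Federated Averaging). The sketch below is purely illustrative pseudocode in plain Python; all names are hypothetical and it does not use the Flower API, which the tutorial itself covers hands-on.

```python
def local_update(w, data, lr=0.1):
    """Client step: one gradient-descent update of a 1-D model y = w*x,
    computed entirely on the client's own (private) data."""
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(client_weights, client_sizes):
    """Server step: size-weighted average of client models.
    The server only ever sees parameters, never the raw samples."""
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

# Two clients hold disjoint samples drawn from the same model y = 3x.
clients = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(3.0, 9.0), (4.0, 12.0)],
]
w_global = 0.0
for _ in range(50):  # communication rounds
    local_ws = [local_update(w_global, d) for d in clients]
    w_global = fed_avg(local_ws, [len(d) for d in clients])

print(round(w_global, 2))  # converges toward the true slope 3.0
```

The global model converges to the slope shared by both clients even though neither client's data ever left its device, which is the property that makes FL attractive for bandwidth-constrained OEC settings.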

From Research to Edge: Practical Federated AI Orchestration for Data Sovereignty and Real-Time Intelligence
Hugo Miralles
Abstract: Federated Learning (FL) has emerged as a critical paradigm for privacy-preserving machine learning, yet the gap between research algorithms and production deployments remains substantial. While researchers develop sophisticated FL algorithms, the operational complexity of distributed edge infrastructure—including secure communication protocols, device heterogeneity, certificate management, and multi-tenant isolation—creates significant barriers to real-world adoption. This tutorial introduces Manta, a Federated Learning Operations (FLOps) platform that abstracts infrastructure complexity through a declarative approach, enabling researchers and data scientists to deploy FL algorithms across heterogeneous edge devices without managing underlying distributed systems concerns. We demonstrate how Manta's architecture—comprising orchestration manager, edge agents, and lightweight client SDK—handles critical production requirements including mTLS encryption, MQTT-based coordination, role-based access control (RBAC), and automated failure recovery. Participants will learn to deploy production-grade federated AI workloads in minutes using Manta's Python SDK, transform research algorithms into scalable edge deployments, and monitor distributed training across unreliable networks. Through hands-on exercises, we showcase real-world scenarios in Industry 4.0 and Smart Cities, demonstrating how FLOps bridges the research-to-production gap while maintaining data sovereignty and enabling real-time intelligence at the edge.

Level: Intermediate

Assumed knowledge: Basic Python programming; understanding of machine learning concepts; familiarity with federated learning principles

Expectations: Laptop with Python 3.9+ and Docker installed. Pre-configured sandbox environment will be provided. No cloud account required.
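
One production concern the abstract highlights, training across unreliable networks with automated failure recovery, can be illustrated generically: aggregate only the client updates that actually arrive, and abort a round that falls below a participation threshold. This is a toy sketch of that pattern under assumed names; it is not Manta's SDK or API, which the hands-on exercises introduce.

```python
import random

def robust_round(global_w, clients, min_clients=2, drop_prob=0.3, seed=None):
    """One FL round tolerating client dropouts: average only the updates
    that arrive; keep the previous model if too few clients respond."""
    rng = random.Random(seed)
    updates = []
    for client_fn in clients:
        if rng.random() < drop_prob:  # simulate an unreliable edge link
            continue                  # client dropped; skip it this round
        updates.append(client_fn(global_w))
    if len(updates) < min_clients:
        return global_w               # round failed: recover by no-op
    return sum(updates) / len(updates)

# Hypothetical clients that each return a locally updated weight.
edge_clients = [lambda w: w + 1.0, lambda w: w + 3.0]

w_ok = robust_round(0.0, edge_clients, seed=0)   # both respond -> 2.0
w_bad = robust_round(0.0, edge_clients, seed=1)  # one drops -> round aborted, 0.0
print(w_ok, w_bad)
```

A FLOps platform automates exactly this kind of policy (plus retries, re-enrollment, and monitoring) so that researchers do not hand-roll it per deployment.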

Serverless Orchestration on the Edge-Cloud Continuum: From Small Functions to Large Language Models
Reza Farahani
Abstract: Serverless computing simplifies application development by abstracting infrastructure management, allowing developers to focus on functionality while cloud providers handle resource provisioning and scaling. However, orchestrating serverless workloads across the edge-cloud continuum presents challenges, from managing heterogeneous resources to ensuring low-latency execution and maintaining fault tolerance and scalability. These challenges intensify when scaling from lightweight functions to compute-intensive tasks such as large language model (LLM) inferences in distributed environments. This tutorial explores serverless computing's evolution from small functions to large-scale AI workloads, covering advanced orchestration strategies, multi-objective scheduling, and energy efficiency.

Level: Intermediate

Assumed knowledge: Intermediate

Expectations: No installation required. All demos use prerecorded videos and pre-configured environments.
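
The multi-objective scheduling mentioned in the abstract can be sketched as a weighted-cost placement decision across edge and cloud nodes. The node names and latency/energy figures below are invented for illustration, and the weighted-sum scalarization is just one simple strategy among those the tutorial surveys.

```python
# Hypothetical nodes on the edge-cloud continuum:
# name -> (estimated latency in ms, energy in joules) per inference.
NODES = {
    "edge-pi":    (40.0, 2.0),
    "edge-gpu":   (15.0, 8.0),
    "cloud-a100": (90.0, 5.0),  # network round-trip dominates latency
}

def schedule(nodes, w_latency=0.5, w_energy=0.5):
    """Pick the node minimizing a weighted sum of normalized objectives."""
    max_l = max(l for l, _ in nodes.values())
    max_e = max(e for _, e in nodes.values())
    def cost(item):
        _, (l, e) = item
        return w_latency * l / max_l + w_energy * e / max_e
    return min(nodes.items(), key=cost)[0]

print(schedule(NODES))                               # balanced objectives
print(schedule(NODES, w_latency=0.9, w_energy=0.1))  # latency-critical LLM serving
```

Shifting the weights changes the placement (here from the low-power edge device to the edge GPU), which is the crux of multi-objective orchestration: the "best" node depends on which objectives the workload prioritizes.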