The AI-SPRINT components include design tools, a runtime framework, and deployable infrastructures that together form the project's architecture.
The Privacy-Preserving Tool provides AI applications with a secure mechanism for evaluating model resilience.
The Federated Learning component allows machine learning (ML) models to be trained jointly among parties across the computing continuum.
The GPU Scheduler tool determines the best scheduling and GPU allocation for Deep Learning training jobs, reducing energy and execution costs while meeting deadline constraints.
The AI Models Architecture Search automates the design of deep network architectures, easing the development of AI models for developers.
The Monitoring Subsystem gathers and processes runtime statistics and metrics from the entire AI-SPRINT system and sends alerts when a quality metric exceeds a specified threshold.
Performance Models support the AI-SPRINT design and runtime components in selecting appropriate configurations.
SCONE is a runtime that is integrated into executables during compilation so that applications can run inside Trusted Execution Environments (TEEs).
SCAR is a framework that transparently executes containers created from Docker images on AWS Lambda.
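As an illustration, SCAR functions are described in a YAML file; the sketch below is a minimal definition assuming SCAR's YAML configuration style, with the function name and container image chosen purely as examples.

```yaml
# Hypothetical SCAR function definition (names and image are illustrative)
functions:
  aws:
  - lambda:
      name: scar-cowsay          # name of the Lambda function to create
      container:
        image: grycap/cowsay     # Docker image to run inside Lambda
```

A file like this would typically be passed to the SCAR CLI to create the function before invoking it.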
The IM is a service for the complete orchestration of virtual infrastructures and of the applications deployed on them, covering resource provisioning, deployment, configuration, re-configuration and termination.
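For context, the IM accepts infrastructure descriptions in languages such as RADL; the fragment below is a minimal sketch of such a description (the image URL and sizing are illustrative assumptions, not taken from the project).

```
network public (outbound = 'yes')
system node (
  cpu.count >= 2 and
  memory.size >= 4g and
  disk.0.os.name = 'linux' and
  disk.0.image.url = 'one://cloud.example.org/95'
)
deploy node 1
```

The IM would provision one virtual machine matching these requirements and then configure the application on top of it.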
COMPSs/PyCOMPSs and disLib are designed to ease the development of ML/AI applications targeting the Cloud-Edge-IoT Continuum.
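To give a flavour of the programming model, the sketch below uses the PyCOMPSs `@task` decorator to mark a function as an asynchronous task and `compss_wait_on` to synchronise on its results; it only runs under the COMPSs runtime (e.g. launched with `runcompss`), and the function itself is a trivial example chosen for illustration.

```python
# Minimal PyCOMPSs sketch; requires the COMPSs runtime to execute.
from pycompss.api.task import task
from pycompss.api.api import compss_wait_on


@task(returns=1)
def increment(value):
    # Each invocation becomes a task that the runtime schedules
    # on the available Cloud-Edge-IoT resources.
    return value + 1


if __name__ == "__main__":
    futures = [increment(i) for i in range(4)]  # tasks run in parallel
    results = compss_wait_on(futures)           # block until all finish
    print(results)
```

The same sequential-looking code is parallelised by the runtime, which is what eases porting ML/AI applications across the continuum.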
OSCAR is an open-source platform that supports the Functions as a Service (FaaS) computing model for file-processing applications.
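As an example of the model, OSCAR services are declared in a Functions Definition Language (FDL) YAML file; the fragment below is a minimal sketch in that style, with the service name, image, and bucket paths chosen purely for illustration.

```yaml
# Hypothetical OSCAR FDL fragment (names, image and paths are illustrative)
functions:
  oscar:
  - oscar-cluster:
      name: plant-classification   # service name
      memory: 1Gi
      cpu: '1.0'
      image: grycap/oscar-plants   # container image with the processing code
      script: script.sh            # script run for each input file
      input:
      - storage_provider: minio.default
        path: plants/input         # uploading a file here triggers the service
      output:
      - storage_provider: minio.default
        path: plants/output        # results are written back here
```

Uploading a file to the input bucket triggers an invocation that processes the file and stores the result in the output bucket.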
rCUDA is middleware for remote GPU virtualisation and allows CUDA applications to be executed in nodes without a GPU.
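To illustrate the idea, an rCUDA client is configured through environment variables that point an unmodified CUDA application at a GPU served by a remote node; the host address, device index, and install path below are illustrative assumptions.

```shell
# Hypothetical rCUDA client setup (host, device index and paths illustrative)
export RCUDA_DEVICE_COUNT=1            # number of remote GPUs visible
export RCUDA_DEVICE_0=192.168.0.100:0  # GPU 0 on the rCUDA server node
# Link against the rCUDA client library instead of the local CUDA runtime
export LD_LIBRARY_PATH=/opt/rCUDA/lib:$LD_LIBRARY_PATH
./my_cuda_app                          # unmodified CUDA binary runs remotely
```

The application issues ordinary CUDA calls, which the rCUDA client library forwards over the network to the server hosting the GPU.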
Krake is an orchestrator engine for containerised and virtualised workloads across distributed and heterogeneous cloud platforms.