The AI-SPRINT components include design tools, a runtime framework, and deployable infrastructures that together form the project's architecture.
![Privacy-Preserving Tool](/sites/default/files/2023-11/privacy%20assessment%20tools.png)
The Privacy-Preserving Tool gives AI applications a secure mechanism for evaluating the resilience of their models.
![Federated Learning](/sites/default/files/2023-11/federated%20learning.png)
The Federated Learning tool allows multiple parties along the computing continuum to jointly train machine learning (ML) models without exchanging their raw data.
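To illustrate the idea behind federated learning, here is a minimal sketch of federated averaging (FedAvg) in plain Python, where lists stand in for model weights. This shows the concept only; it is not the AI-SPRINT Federated Learning API.

```python
# Conceptual FedAvg sketch: each party trains locally on its own data,
# and only model updates (never raw data) are averaged by a coordinator.

def local_update(weights, gradient, lr=0.25):
    """One gradient step computed privately by a single party."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def federated_average(party_weights):
    """The coordinator averages the parties' model updates."""
    n = len(party_weights)
    return [sum(ws) / n for ws in zip(*party_weights)]

# Two parties refine a shared model with private gradients, then average.
global_model = [0.0, 0.0]
updates = [
    local_update(global_model, [4.0, 8.0]),  # party A's private gradient
    local_update(global_model, [8.0, 4.0]),  # party B's private gradient
]
global_model = federated_average(updates)
print(global_model)  # [-1.5, -1.5]
```

In a real deployment the local step would be full training on each party's dataset, and the averaging would happen over the network; the structure of the exchange is the same.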
![Scheduling for Accelerated Devices](/sites/default/files/2023-11/gpu%20scheduler%20%282%29.png)
The GPU Scheduler tool determines the best scheduling and GPU allocation for Deep Learning training jobs, reducing energy and execution costs while meeting deadline constraints.
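The deadline-feasibility part of such scheduling can be sketched with a simple earliest-deadline-first (EDF) assignment of jobs to a GPU pool. The real GPU Scheduler also optimises energy and cost using performance models; the job fields and policy below are illustrative assumptions, not its actual interface.

```python
import heapq

def schedule(jobs, num_gpus):
    """jobs: list of (name, duration, deadline).
    Returns {name: (gpu, start, end)} or raises if a deadline is missed."""
    free_at = [(0.0, g) for g in range(num_gpus)]  # (time GPU frees up, gpu id)
    heapq.heapify(free_at)
    plan = {}
    for name, duration, deadline in sorted(jobs, key=lambda j: j[2]):  # EDF order
        t, gpu = heapq.heappop(free_at)            # GPU that becomes free earliest
        end = t + duration
        if end > deadline:
            raise ValueError(f"{name} would miss its deadline ({end} > {deadline})")
        plan[name] = (gpu, t, end)
        heapq.heappush(free_at, (end, gpu))
    return plan

print(schedule([("a", 2.0, 3.0), ("b", 2.0, 2.0), ("c", 1.0, 4.0)], num_gpus=2))
```

Jobs "a" and "b" run in parallel on the two GPUs; "c" starts once a GPU frees up at time 2.0 and still meets its deadline of 4.0.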
![POPNAS](/sites/default/files/2023-11/popnas%20%282%29.png)
The AI Models Architecture Search tool automatically designs deep network architectures, easing the development of AI models for developers.
![MONITORING_SUBSYSTEM](/sites/default/files/2022-07/AI-SPRINT_MONITORING_SUBSYSTEM.png)
The Monitoring Subsystem gathers and processes runtime statistics and metrics from the entire AI-SPRINT system and sends alerts when a quality metric exceeds a specified threshold.
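The alerting pattern described above can be sketched in a few lines: compare each collected metric against its configured threshold and raise an alert on violation. The metric names and thresholds here are made up for illustration.

```python
def check_metrics(metrics, thresholds):
    """Return an alert message for every metric exceeding its threshold."""
    alerts = []
    for name, value in metrics.items():
        limit = thresholds.get(name)
        if limit is not None and value > limit:
            alerts.append(f"ALERT: {name}={value} exceeds threshold {limit}")
    return alerts

# Hypothetical runtime statistics vs. quality thresholds.
print(check_metrics({"latency_ms": 250, "error_rate": 0.01},
                    {"latency_ms": 200, "error_rate": 0.05}))
```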
![Performance Models](/sites/default/files/2022-06/Performance%20Models%20logo.png)
Performance Models support the AI-SPRINT design and runtime components in selecting an appropriate configuration.
![SCONE logo](/sites/default/files/2022-06/SCONE%20logo.png)
SCONE is a runtime that is integrated into executables during compilation so that applications run inside Trusted Execution Environments (TEEs).
![SCAR](/sites/default/files/2022-06/scar%20logo.png)
SCAR is a framework to transparently execute containers out of Docker images in AWS Lambda.
![Infrastructure Manager](/sites/default/files/2022-06/Infrastructure%20Manager_0.png)
The Infrastructure Manager (IM) is a service for the complete orchestration of virtual infrastructures and the applications deployed on them, covering resource provisioning, deployment, configuration, re-configuration, and termination.
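The IM accepts declarative infrastructure descriptions, for example in the RADL language. A minimal sketch of such a description follows; the resource values are illustrative, not taken from the source.

```
network net (outbound = 'yes')
system node (
  cpu.count >= 2 and
  memory.size >= 4g and
  disk.0.os.name = 'linux'
)
deploy node 1
```

Given a description like this, the IM provisions matching resources on the target provider and handles their life cycle from deployment to termination.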
![PyCOMPSs](/sites/default/files/2022-06/PyCOMPSs.png)
COMPSs/PyCOMPSs and dislib are designed to ease the development of ML/AI applications targeting the Cloud-Edge-IoT Continuum.
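PyCOMPSs lets developers write sequential-looking Python while the runtime executes marked functions as parallel tasks. As a rough stdlib analogue of that task-based flavour (this is NOT the PyCOMPSs API), a thread pool can play the role of the runtime:

```python
from concurrent.futures import ThreadPoolExecutor

def increment(x):
    """A plain function that the 'runtime' will execute as a task."""
    return x + 1

# Submit independent tasks, then synchronise on their results.
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(increment, i) for i in range(4)]  # spawn "tasks"
    results = [f.result() for f in futures]                  # wait on results
print(results)  # [1, 2, 3, 4]
```

In PyCOMPSs the equivalent program uses decorators on the functions, and the runtime distributes the tasks across the continuum rather than local threads.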
![OSCAR](/sites/default/files/2022-06/oscar_0.png)
OSCAR is an open-source platform that supports the Functions-as-a-Service (FaaS) computing model for file-processing applications.
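The file-processing FaaS model means a user function is invoked once per uploaded file. The handler below is a hypothetical example of such a function, not OSCAR's actual service interface; the invocation is simulated locally.

```python
import pathlib
import tempfile

def handle_file(input_path, output_path):
    """Example processing step: upper-case the text of the uploaded file."""
    text = pathlib.Path(input_path).read_text()
    pathlib.Path(output_path).write_text(text.upper())

# Simulate one invocation triggered by a file upload.
with tempfile.TemporaryDirectory() as d:
    src = pathlib.Path(d) / "in.txt"
    dst = pathlib.Path(d) / "out.txt"
    src.write_text("hello")
    handle_file(src, dst)
    print(dst.read_text())  # HELLO
```

In a real deployment the platform would trigger the function on each object uploaded to the service's input storage and write results to its output storage.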
![rCUDA](/sites/default/files/2022-06/rCUDA.png)
rCUDA is middleware for remote GPU virtualisation; it allows CUDA applications to run on nodes without a local GPU.
![Krake](/sites/default/files/2022-06/Krake_v3.png)
Krake is an orchestrator engine for containerised and virtualised workloads across distributed and heterogeneous cloud platforms.