About
The Privacy-Preserving Tool gives AI applications a secure mechanism to evaluate a model's resilience. The measurement quantifies information leakage and the model's robustness. Its main components are differentially private training and regularization techniques.
The Differential Privacy component lets models learn under privacy constraints within the AI-SPRINT ecosystem, while the regularization techniques empirically strengthen a model's resilience to an attacker's activities.
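As a rough illustration, the sketch below shows how differentially private training can be wired up with TensorFlow Privacy's DPKerasSGDOptimizer. The toy model and hyperparameter values are hypothetical; the actual pipeline and privacy budget are defined by the tool.

```python
import tensorflow as tf
from tensorflow_privacy.privacy.optimizers.dp_keras_optimizer import DPKerasSGDOptimizer

# Hypothetical hyperparameters: real values depend on the target privacy budget.
l2_norm_clip = 1.0      # per-example gradient clipping bound
noise_multiplier = 1.1  # scale of Gaussian noise added to clipped gradients
batch_size = 32

# Toy classifier standing in for the application's DL model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
])

optimizer = DPKerasSGDOptimizer(
    l2_norm_clip=l2_norm_clip,
    noise_multiplier=noise_multiplier,
    num_microbatches=batch_size,  # must evenly divide the batch size
    learning_rate=0.1,
)

# DP-SGD requires a per-example (unreduced) loss so each gradient can be clipped.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE)

model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=5, batch_size=batch_size)
```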
The privacy-preserving component also provides an attack simulation tool that evaluates the resilience of each layer of the DL model. Depending on the model architecture, it quantifies the amount of information that could leak under attack.
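The sketch below gives a flavour of such a layer-wise evaluation. The invert_layer helper is hypothetical, not the AI-SPRINT implementation: it optimizes a dummy input so that its activation at a chosen layer matches an observed activation, and the resulting reconstruction error serves as a rough per-layer leakage indicator.

```python
import tensorflow as tf

def invert_layer(model, layer_index, target_activation, steps=500, lr=0.05):
    """Hypothetical helper: recover an input whose activation at the chosen
    layer matches an observed activation; the final reconstruction loss acts
    as a rough per-layer leakage indicator (lower = more leakage)."""
    # Truncate the model at the layer under evaluation.
    truncated = tf.keras.Model(inputs=model.input,
                               outputs=model.layers[layer_index].output)
    # Start from random noise shaped like a single model input.
    x = tf.Variable(tf.random.normal([1] + list(model.input_shape[1:])))
    opt = tf.keras.optimizers.Adam(learning_rate=lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            loss = tf.reduce_mean(tf.square(truncated(x) - target_activation))
        opt.apply_gradients([(tape.gradient(loss, x), x)])
    return x.numpy(), float(loss)

# Usage idea: feed a real sample through the truncated model to obtain
# target_activation, invert it, and compare layers by reconstruction error.
```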
Open source / proprietary
The Privacy-Preserving Tool is open source, released under the Apache 2.0 licence.
Architecture
The Differential Privacy component is a separate engine. The current implementation is written in Python and relies on standard Python libraries. Regularization, by contrast, is inserted directly into the model's design, as sketched below.
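A minimal sketch of what "regularization in the model's design" can look like in Keras; the layer sizes and penalty strengths are hypothetical:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# Illustrative only: the regularization lives in the model definition itself.
model = tf.keras.Sequential([
    layers.Dense(256, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4),  # L2 weight penalty
                 input_shape=(784,)),
    layers.Dropout(0.5),  # dropout empirically reduces memorization of training data
    layers.Dense(10),
])
```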
The resilience evaluation tool relies on official privacy libraries (e.g., TensorFlow Privacy) and on tools developed within AI-SPRINT for model inversion and membership inference attacks.
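To make the membership inference idea concrete, here is a toy loss-threshold attack, a sketch only and not the AI-SPRINT or TensorFlow Privacy implementation: training points tend to have lower loss, so an attacker sweeps a threshold and keeps the one with the best advantage.

```python
import numpy as np

def loss_threshold_mia(train_losses, test_losses):
    """Toy membership inference attack on per-example losses.
    Returns the best attacker advantage (TPR - FPR) over all thresholds."""
    losses = np.concatenate([train_losses, test_losses])
    members = np.concatenate([np.ones(len(train_losses)),   # 1 = member
                              np.zeros(len(test_losses))])  # 0 = non-member
    best_advantage = 0.0
    for t in np.unique(losses):
        predicted_member = losses <= t  # low loss -> guess "member"
        tpr = predicted_member[members == 1].mean()
        fpr = predicted_member[members == 0].mean()
        best_advantage = max(best_advantage, tpr - fpr)
    return best_advantage  # ~0 = no leakage signal, 1 = perfect attack
```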
The figure shows an example of the model inversion tool used to evaluate the target model's resilience layer by layer.