ScaDL 2022


The Scalable Deep Learning over Parallel And Distributed Infrastructure workshop (ScaDL 2022) is an IPDPS workshop focused on Deep Learning (DL). This topic is receiving significant attention in the research community thanks to the remarkable results deep learning has achieved on a wide range of machine learning problems.


Areas of Interest and Research Topics

ScaDL 2022 aims to advance research on the following topics:

  • Asynchronous and communication-efficient SGD;
  • High-performance computing aspects;
  • Model and gradient compression techniques;
  • Distributed trustworthy AI;
  • Emerging AI hardware accelerators;
  • The intersection of distributed DL and neural architecture search (NAS).

We welcome research papers on distributed deep learning that aim to achieve efficiency and scalability for deep learning jobs over distributed and parallel systems. Topics of interest include, but are not limited to:

  • Deep learning on cloud platforms, HPC systems, and edge devices;
  • Model-parallel and data-parallel techniques;
  • Asynchronous SGD for training DNNs;
  • Communication-efficient training of DNNs;
  • Scalable and distributed graph neural networks;
  • Sampling techniques for graph neural networks;
  • Federated deep learning, both horizontal and vertical, and its challenges;
  • Model/data/gradient compression;
  • Learning in resource-constrained environments;
  • Coding techniques for straggler mitigation;
  • Elasticity for deep learning jobs and spot-market enablement;
  • Hyper-parameter tuning for deep learning jobs;
  • Hardware acceleration for deep learning, including digital and analog accelerators;
  • Scalability of deep learning jobs on large clusters;
  • Deep learning on heterogeneous infrastructure;
  • Efficient and scalable inference;
  • Data storage/access in shared networks for deep learning;
  • Communication-efficient distributed, fair, and adversarially robust learning;
  • Distributed learning techniques applied to speed up neural architecture search.


Key Dates

  • Paper Submission: 7 February 2022
  • Acceptance Notification: 1 March 2022
  • Camera-ready papers due: 15 March 2022 (hard deadline)
  • Workshop Date: 3 June 2022


Location

Due to COVID-19 restrictions, and in order to guarantee a safe environment, the workshop will take place in a hybrid format (online and in person in Lyon, France).


General Chairs
  • Danilo Ardagna, Politecnico di Milano, Italy (AI-SPRINT Scientific coordinator)
  • Stacy Patterson, Rensselaer Polytechnic Institute (RPI), USA

Program Committee Chairs
  • Alex Gittens, Rensselaer Polytechnic Institute (RPI), USA
  • Kaoutar El Maghraoui, IBM Research AI, USA

Program Committee Members
  • Misbah Mubarak, Amazon
  • Hamza Ouarnoughi, UPHF LAMIH
  • Neil McGlohon, Rensselaer Polytechnic Institute (RPI)
  • Nathalie Baracaldo Angel, IBM Research, USA
  • Ignacio Blanquer, Universitat Politecnica de Valencia, Spain
  • Dario Garcia-Gasulla, Barcelona Supercomputing Center
  • Saurabh Gupta, AMD
  • Jalil Boukhobza, ENSTA-Bretagne
  • Aiichiro Nakano, University of Southern California, USA
  • Dhabaleswar K. (DK) Panda, Ohio State University
  • Eduardo Rocha Rodrigues, IBM Research, Brazil
  • Chen Wang, IBM Research, USA
  • Yangyang Xu, Rensselaer Polytechnic Institute (RPI)
  • Hongyi Wang, Carnegie Mellon University (CMU), MLD Lab

Steering Committee
  • Parijat Dube, IBM Research AI, USA
  • Vijay K. Garg, University of Texas at Austin
  • Vinod Muthusamy, IBM Research AI
  • Ashish Verma, IBM Research AI
  • Jayaram K. R., IBM Research AI, USA
  • Yogish Sabharwal, IBM Research AI, India