Keynotes

Samuel Thibault

Wednesday, July 3

Task-based programming: a vision of the future for parallel computing

For a long time, parallel computing was expressed mainly in terms of processes or threads, scheduled by the operating system or by a user-level library. Over the past fifteen years or so, however, the task-based programming paradigm has gained momentum, as shown for instance by its adoption in the OpenMP standard. In this talk, we will show how the notion of a task provides information that previous paradigms did not capture, notably about the future of the computation. This information proves invaluable for optimizing the execution of computations on modern platforms that combine multicore processors, compute accelerators, and distributed systems.

Philippe Swartvagher

Wednesday, July 3

Making reproducible and publishable experiments

For a long time, scientific publications focused only on experimental results, ignoring how, concretely, those results were obtained, making it difficult for readers, but also for the authors themselves, to reproduce the experiments. Things are slowly changing: the publication of so-called “artifacts” is now encouraged by journals and conferences. However, releasing the scripts and programs used for experiments can be challenging: How should the material be organized? How should the instructions be clearly documented? How can reproducibility of the experiments be ensured? How can long-term availability be guaranteed? Several answers are possible for each of these questions. In this talk, I will try to summarize how and why my methodology for building reproducible artifacts has evolved over several years of research.

Sara Bouchenak

Thursday, July 4

The Many Faces of Federated Learning: Bias, Robustness, and Privacy

Federated learning (FL) is a distributed machine learning paradigm that enables data owners to collaborate on training models while preserving data privacy. Because FL effectively leverages decentralized and sensitive data sources, it is increasingly used in many application domains, including remote healthcare, smart buildings, and mobile applications. However, FL raises several ethical concerns: it may introduce bias with regard to sensitive attributes (e.g., race or gender), it is not robust against malicious participants that attempt to poison the data and the model, and it remains vulnerable to privacy attacks (e.g., membership inference attacks). In this talk, we will first discuss the open scientific issues in FL bias, robustness, and privacy, before presenting novel FL protocols for addressing them.

Sara Bouchenak has been a Professor at INSA Lyon and a member of the DRIM research group at the LIRIS laboratory since 2014. Since 2021, she has headed the Fédération Informatique de Lyon, which groups a total of 850 members. Sara Bouchenak’s research topics include distributed computing systems and distributed and federated learning, with a special interest in their fairness, robustness, and privacy. Prior to that, she was an Associate Professor at the University of Grenoble between 2004 and 2014, a visiting professor at Universidad Politécnica de Madrid in 2009-2010, and a post-doctoral associate researcher at EPFL, Switzerland, in 2003. Sara Bouchenak is a co-author of several A/A*-ranked publications, serves as a scientific expert for the evaluation of EU and ANR projects, and has coordinated and participated in several European, national, and regional projects.

Alberto Bosio

Friday, July 5

Reliable and Efficient hardware for Trustworthy Deep Neural Networks

Deep Neural Networks (DNNs) are among the most intensively and widely used predictive models in machine learning. Nonetheless, increased computation speed and memory resources, along with significant energy consumption, are required to achieve the full potential of DNNs. To run DNN algorithms out of the cloud and onto distributed Internet-of-Things (IoT) devices, customized HardWare platforms for Artificial Intelligence (HW-AI) are required. However, like traditional computing hardware, HW-AI is subject to hardware faults arising from process, aging, and environmental reliability threats. Although HW-AI comes with some inherent fault resilience, faults can lead to prediction failures that seriously affect application execution. Typical reliability approaches, such as on-line testing and hardware redundancy, or even retraining, are less appropriate for HW-AI due to their prohibitive overhead: DNNs are large architectures with substantial memory requirements and immense training sets. This talk will address these limitations by exploiting the particularities of HW-AI architectures to develop low-cost and efficient reliability strategies.

Alberto Bosio received his MSc (2003) and PhD (2006) in Computer Engineering in the area of digital systems dependability from the Politecnico di Torino (Italy). He is now a Full Professor at École Centrale de Lyon, Institute of Nanotechnology (France). His research activities are related to the design and test of advanced digital circuits and systems. He has served as a committee and organizing member of several international conferences, including DATE (Track Chair) and ETS (Program Chair), as well as a guest and associate editor for many international journals. He is a member of the IEEE and the Vice-Chair of the European Test Technology Technical Council.