![Zilinghan Li](images/headshot.jpg)
Zilinghan Li
Machine Learning Engineer
Hello! I am a Machine Learning Engineer in the Data Science and Learning Division at Argonne National Laboratory, working with Ravi Madduri. I am interested in machine learning and deep learning for biomedicine and science, federated learning, and scaling machine learning workflows in high-performance computing environments. I received my Master of Science degree in Computer Science from the University of Illinois at Urbana-Champaign, where I worked with Prof. Volodymyr Kindratenko. I also previously interned at Amazon Web Services.
RESEARCH
PROJECTS
Check out all of my projects on GitHub.
SELECTED PUBLICATIONS
Ordered by most recent.
FedCompass: Efficient Cross-Silo Federated Learning on Heterogeneous Client Devices Using a Computing Power-Aware Scheduler [May 2024]
Zilinghan Li, Pranshu Chaturvedi, Shilan He, Han Chen, Gagandeep Singh, Volodymyr Kindratenko, Eliu A Huerta, Kibaek Kim, Ravi Madduri
ICLR 2024
TLDR | PDF | Website | Code | BibTex
TLDR: Cross-silo federated learning offers a promising solution to collaboratively train robust and generalized AI models without compromising the privacy of local datasets, e.g., healthcare, financial, as well as scientific projects that lack a centralized data facility. Nonetheless, because of the disparity of computing resources among different clients (i.e., device heterogeneity), synchronous federated learning algorithms suffer from degraded efficiency when waiting for straggler clients. Similarly, asynchronous federated learning algorithms experience degradation in the convergence rate and final model accuracy on non-identically and independently distributed (non-IID) heterogeneous datasets due to stale local models and client drift. To address these limitations in cross-silo federated learning with heterogeneous clients and data, we propose FedCompass, an innovative semi-asynchronous federated learning algorithm with a computing power-aware scheduler on the server side, which adaptively assigns varying amounts of training tasks to different clients using the knowledge of the computing power of individual clients. FedCompass ensures that multiple locally trained models from clients are received almost simultaneously as a group for aggregation, effectively reducing the staleness of local models. At the same time, the overall training process remains asynchronous, eliminating prolonged waiting periods from straggler clients. Using diverse non-IID heterogeneous distributed datasets, we demonstrate that FedCompass achieves faster convergence and higher accuracy than other asynchronous algorithms while remaining more efficient than synchronous algorithms when performing federated learning on heterogeneous clients. The source code for FedCompass is available at https://github.com/APPFL/FedCompass.
```bibtex
@inproceedings{li2024fedcompass,
  title     = {FedCompass: Efficient Cross-Silo Federated Learning on Heterogeneous Client Devices Using a Computing Power-Aware Scheduler},
  author    = {Zilinghan Li and Pranshu Chaturvedi and Shilan He and Han Chen and Gagandeep Singh and Volodymyr Kindratenko and Eliu A Huerta and Kibaek Kim and Ravi Madduri},
  booktitle = {The Twelfth International Conference on Learning Representations},
  url       = {https://openreview.net/forum?id=msXxrttLOi},
  year      = {2024}
}
```
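To make the scheduler idea concrete, below is a toy sketch (my own simplification for illustration, not the authors' implementation; the function name, step bounds, and speed measurements are all assumptions): the server gives each client a number of local steps proportional to its measured training speed, so that clients in the same group finish and report back at roughly the same time.

```python
import math

# Toy illustration of the FedCompass scheduling idea (not the paper's code):
# assign each client local steps proportional to its measured speed, so all
# clients in a group finish near a common wall-clock deadline.

def assign_local_steps(speeds, round_budget_s, min_steps=10, max_steps=200):
    """speeds: dict of client_id -> measured local steps per second.
    Returns a dict of client_id -> number of local steps for this round."""
    steps = {}
    for cid, steps_per_sec in speeds.items():
        # A faster client gets proportionally more work in the same budget.
        n = math.floor(steps_per_sec * round_budget_s)
        steps[cid] = max(min_steps, min(max_steps, n))
    return steps

# A client running 5x faster gets ~5x the steps, so both updates arrive
# at the server almost simultaneously and can be aggregated as a group.
print(assign_local_steps({"site_a": 50.0, "site_b": 10.0}, round_budget_s=2.0))
```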
APPFLx: Providing Privacy-Preserving Cross-Silo Federated Learning as a Service [Oct 2023]
Zilinghan Li, Shilan He, Pranshu Chaturvedi, Trung-Hieu Hoang, Minseok Ryu, EA Huerta, Volodymyr Kindratenko, Jordan Fuhrman, Maryellen Giger, Ryan Chard, Kibaek Kim, Ravi Madduri
e-Science 2023
TLDR | PDF | Website | BibTex
TLDR: Cross-silo privacy-preserving federated learning (PPFL) is a powerful tool to collaboratively train robust and generalized machine learning (ML) models without sharing sensitive (e.g., healthcare or financial) local data. To ease and accelerate the adoption of PPFL, we introduce APPFLx, a ready-to-use platform that provides privacy-preserving cross-silo federated learning as a service. APPFLx employs Globus authentication to allow users to easily and securely invite trustworthy collaborators for PPFL, implements several synchronous and asynchronous FL algorithms, streamlines the FL experiment launch process, and enables tracking and visualizing the life cycle of FL experiments, allowing domain experts and ML practitioners to easily orchestrate and evaluate cross-silo FL under one platform. APPFLx is available online at https://appflx.link
```bibtex
@inproceedings{li2023appflx,
  title        = {Appflx: Providing privacy-preserving cross-silo federated learning as a service},
  author       = {Li, Zilinghan and He, Shilan and Chaturvedi, Pranshu and Hoang, Trung-Hieu and Ryu, Minseok and Huerta, EA and Kindratenko, Volodymyr and Fuhrman, Jordan and Giger, Maryellen and Chard, Ryan and others},
  booktitle    = {2023 IEEE 19th International Conference on e-Science (e-Science)},
  organization = {IEEE},
  pages        = {1--4},
  year         = {2023}
}
```
ALL PUBLICATIONS
Ordered by most recent and grouped by topic. A BibTeX file of all publications is available for download.
May 2025 | FedSpaLLM: Federated Pruning of Large Language Models
TLDR | PDF | Authors | Preprint | BibTex | NAACL 2025
TLDR: Large Language Models (LLMs) achieve state-of-the-art performance but are challenging to deploy due to their high computational and storage demands. Pruning can reduce model size, yet existing methods assume public access to calibration data, which is impractical for privacy-sensitive applications. To address the challenge of pruning LLMs in privacy-preserving settings, we propose FedSpaLLM, the first federated learning framework designed specifically for pruning LLMs. FedSpaLLM enables clients to prune their models locally based on private data while accounting for system heterogeneity and maintaining communication efficiency. Our framework introduces several key innovations: (1) a novel l0-norm aggregation function that ensures only non-zero weights are averaged across clients, preserving important model parameters; (2) an adaptive mask expansion technique that meets global sparsity targets while accommodating client-specific pruning decisions; and (3) a layer sampling strategy that reduces communication overhead and personalizes the pruning process based on client resources. Extensive experiments show that FedSpaLLM improves pruning performance in diverse federated settings. The source code will be released upon publication.
```bibtex
@article{bai2024fedspallm,
  title   = {FedSpaLLM: Federated Pruning of Large Language Models},
  author  = {Bai, Guangji and Li, Yijiang and Li, Zilinghan and Zhao, Liang and Kim, Kibaek},
  journal = {2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics (NAACL 2025)},
  year    = {2025}
}
```
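To illustrate the l0-norm aggregation described above, here is a minimal sketch based on my reading of the idea (the paper's code was unreleased at the time of the abstract, so names and details here are assumptions): each weight coordinate is averaged only over the clients that did not prune it, so zeros introduced by pruning do not shrink surviving weights.

```python
import torch

# Sketch of l0-norm-style aggregation for pruned client models (illustrative,
# not FedSpaLLM's released implementation): average each coordinate over the
# clients whose weight there is non-zero.

def l0_aggregate(client_weights):
    """client_weights: list of same-shape tensors, sparse after pruning."""
    stacked = torch.stack(client_weights)        # (num_clients, ...)
    support = (stacked != 0).sum(dim=0)          # how many clients kept each weight
    summed = stacked.sum(dim=0)
    # Divide by the support size where it is non-zero; keep exact zeros where
    # every client pruned the coordinate.
    return torch.where(support > 0, summed / support.clamp(min=1), summed)

a = torch.tensor([0.0, 2.0, 4.0])
b = torch.tensor([1.0, 0.0, 4.0])
print(l0_aggregate([a, b]))  # tensor([1., 2., 4.]), not [0.5, 1., 4.]
```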
Dec 2024 | Enabling end-to-end secure federated learning in biomedical research on heterogeneous computing environments with APPFLx
TLDR | PDF | Authors | Website | BibTex | Computational and Structural Biotechnology Journal
TLDR: Facilitating large-scale, cross-institutional collaboration in biomedical machine learning (ML) projects requires a trustworthy and resilient federated learning (FL) environment to ensure that sensitive information such as protected health information is kept confidential. Specifically designed for this purpose, this work introduces APPFLx - a low-code, easy-to-use FL framework that enables easy setup, configuration, and running of FL experiments. APPFLx removes administrative boundaries of research organizations and healthcare systems while providing secure end-to-end communication, privacy-preserving functionality, and identity management. Furthermore, it is completely agnostic to the underlying computational infrastructure of participating clients, allowing an instantaneous deployment of this framework into existing computing infrastructures. Experimentally, the utility of APPFLx is demonstrated in two case studies: (1) predicting participant age from electrocardiogram (ECG) waveforms, and (2) detecting COVID-19 disease from chest radiographs. Here, ML models were securely trained across heterogeneous computing resources, including a combination of on-premise high-performance computing and cloud computing facilities. By securely unlocking data from multiple sources for training without directly sharing it, these FL models enhance generalizability and performance compared to centralized training models while ensuring data remains protected. In conclusion, APPFLx demonstrated itself as an easy-to-use framework for accelerating biomedical studies across organizations and healthcare systems on large datasets while maintaining the protection of private medical data.
```bibtex
@article{hoang2023enabling,
  title   = {Enabling end-to-end secure federated learning in biomedical research on heterogeneous computing environments with APPFLx},
  author  = {Hoang, Trung-Hieu and Fuhrman, Jordan and Klarqvist, Marcus and Li, Miao and Chaturvedi, Pranshu and Li, Zilinghan and Kim, Kibaek and Ryu, Minseok and Chard, Ryan and Huerta, EA and Giger, Maryellen and Madduri, Ravi},
  journal = {Computational and Structural Biotechnology Journal},
  pages   = {29--39},
  volume  = {28},
  year    = {2024}
}
```
Nov 2024 | Advances in Privacy Preserving Federated Learning to Realize a Truly Learning Healthcare System
TLDR | PDF | Authors | Slides | Preprint | BibTex | TPS-ISA 2024
TLDR: The concept of a learning healthcare system (LHS) envisions a self-improving network where multimodal data from patient care are continuously analyzed to enhance future healthcare outcomes. However, realizing this vision faces significant challenges in data sharing and privacy protection. Privacy-Preserving Federated Learning (PPFL) is a transformative and promising approach that has the potential to address these challenges by enabling collaborative learning from decentralized data while safeguarding patient privacy. This paper proposes a vision for integrating PPFL into the healthcare ecosystem to achieve a truly LHS as defined by the Institute of Medicine (IOM) Roundtable.
```bibtex
@inproceedings{madduri2024advances,
  title        = {Advances in Privacy Preserving Federated Learning to Realize a Truly Learning Healthcare System},
  author       = {Madduri, Ravi and Li, Zilinghan and Nandi, Tarak and Kim, Kibaek and Ryu, Minseok and Rodriguez, Alex},
  booktitle    = {2024 IEEE 6th International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications (TPS-ISA)},
  organization = {IEEE},
  pages        = {273--279},
  year         = {2024}
}
```
Sep 2024 | Advances in APPFL: A Comprehensive and Extensible Federated Learning Framework
TLDR | PDF | Authors | Website | Code | Preprint | BibTex | arXiv preprint
TLDR: Federated learning (FL) is a distributed machine learning paradigm enabling collaborative model training while preserving data privacy. In today's landscape, where most data is proprietary, confidential, and distributed, FL has become a promising approach to leverage such data effectively, particularly in sensitive domains such as medicine and the electric grid. Heterogeneity and security are the key challenges in FL, however; most existing FL frameworks either fail to address these challenges adequately or lack the flexibility to incorporate new solutions. To this end, we present the recent advances in developing APPFL, an extensible framework and benchmarking suite for federated learning, which offers comprehensive solutions for heterogeneity and security concerns, as well as user-friendly interfaces for integrating new algorithms or adapting to new applications. We demonstrate the capabilities of APPFL through extensive experiments evaluating various aspects of FL, including communication efficiency, privacy preservation, computational performance, and resource utilization. We further highlight the extensibility of APPFL through case studies in vertical, hierarchical, and decentralized FL. APPFL is open-sourced at https://github.com/APPFL/APPFL.
```bibtex
@article{li2024advances,
  title   = {Advances in APPFL: A Comprehensive and Extensible Federated Learning Framework},
  author  = {Li, Zilinghan and He, Shilan and Yang, Ze and Ryu, Minseok and Kim, Kibaek and Madduri, Ravi},
  journal = {arXiv preprint arXiv:2409.11585},
  year    = {2024}
}
```
Jul 2024 | FedSZ: Leveraging error-bounded lossy compression for federated learning communications
TLDR | PDF | Authors | BibTex | ICDCS 2024
TLDR: With the promise of federated learning (FL) to allow for geographically-distributed and highly personalized services, the efficient exchange of model updates between clients and servers becomes crucial. FL, though decentralized, often faces communication bottlenecks, especially in resource-constrained scenarios. Existing data compression techniques like gradient sparsification, quantization, and pruning offer some solutions, but may compromise model performance or necessitate expensive retraining. In this paper, we introduce FedSZ, a specialized lossy-compression algorithm designed to minimize the size of client model updates in FL. FedSZ incorporates a comprehensive compression pipeline featuring data partitioning, lossy and lossless compression of model parameters and metadata, and serialization. We evaluate FedSZ using a suite of error-bounded lossy compressors, ultimately finding SZ2 to be the most effective across various model architectures and datasets including AlexNet, MobileNetV2, ResNet50, CIFAR-10, Caltech101, and Fashion-MNIST. Our study reveals that a relative error bound of 0.01 achieves an optimal tradeoff, compressing model states by 5.55-12.61x while maintaining inference accuracy within < 0.5% of uncompressed results. Additionally, the runtime overhead of FedSZ is < 4.7% of the wall-clock communication-round time, a worthwhile trade-off for reducing network transfer times by an order of magnitude for network bandwidths < 350 Mbps. Intriguingly, we also find that the error introduced by FedSZ could potentially serve as a source of differentially private noise, opening up new avenues for privacy-preserving FL.
```bibtex
@inproceedings{wilkins2024fedsz,
  title        = {FedSZ: Leveraging error-bounded lossy compression for federated learning communications},
  author       = {Wilkins, Grant and Di, Sheng and Calhoun, Jon C and Li, Zilinghan and Kim, Kibaek and Underwood, Robert and Mortier, Richard and Cappello, Franck},
  booktitle    = {2024 IEEE 44th International Conference on Distributed Computing Systems (ICDCS)},
  organization = {IEEE},
  pages        = {577--588},
  year         = {2024}
}
```
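As a rough illustration of the error-bounded lossy compression FedSZ relies on, the sketch below uses a simple uniform quantizer as a stand-in for SZ2 (which is a far more sophisticated prediction-based compressor); the function names and the 0.01 relative bound mirror the abstract, and everything else is an assumption.

```python
import numpy as np

# Stand-in for an error-bounded lossy compressor (NOT SZ2): quantize each
# value so the reconstruction error stays within a bound relative to the
# tensor's value range, then ship the small integer codes instead of floats.

def compress(update, rel_eb=0.01):
    offset = float(update.min())
    value_range = float(update.max()) - offset
    step = max(2 * rel_eb * value_range, 1e-12)   # absolute bound per value
    codes = np.round((update - offset) / step).astype(np.int32)
    return codes, offset, step                     # codes compress well losslessly

def decompress(codes, offset, step):
    return codes.astype(np.float32) * step + offset

w = np.random.randn(1000).astype(np.float32)       # stand-in model update
codes, offset, step = compress(w, rel_eb=0.01)
w_hat = decompress(codes, offset, step)
# Per-value error is at most step/2, i.e., within the relative bound.
assert np.abs(w - w_hat).max() <= 0.01 * (w.max() - w.min()) + 1e-6
```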
May 2024 | FedCompass: Efficient Cross-Silo Federated Learning on Heterogeneous Client Devices Using a Computing Power-Aware Scheduler
TLDR | PDF | Authors | Website | Code | BibTex | ICLR 2024
TLDR: Cross-silo federated learning offers a promising solution to collaboratively train robust and generalized AI models without compromising the privacy of local datasets, e.g., healthcare, financial, as well as scientific projects that lack a centralized data facility. Nonetheless, because of the disparity of computing resources among different clients (i.e., device heterogeneity), synchronous federated learning algorithms suffer from degraded efficiency when waiting for straggler clients. Similarly, asynchronous federated learning algorithms experience degradation in the convergence rate and final model accuracy on non-identically and independently distributed (non-IID) heterogeneous datasets due to stale local models and client drift. To address these limitations in cross-silo federated learning with heterogeneous clients and data, we propose FedCompass, an innovative semi-asynchronous federated learning algorithm with a computing power-aware scheduler on the server side, which adaptively assigns varying amounts of training tasks to different clients using the knowledge of the computing power of individual clients. FedCompass ensures that multiple locally trained models from clients are received almost simultaneously as a group for aggregation, effectively reducing the staleness of local models. At the same time, the overall training process remains asynchronous, eliminating prolonged waiting periods from straggler clients. Using diverse non-IID heterogeneous distributed datasets, we demonstrate that FedCompass achieves faster convergence and higher accuracy than other asynchronous algorithms while remaining more efficient than synchronous algorithms when performing federated learning on heterogeneous clients. The source code for FedCompass is available at https://github.com/APPFL/FedCompass.
```bibtex
@inproceedings{li2024fedcompass,
  title     = {FedCompass: Efficient Cross-Silo Federated Learning on Heterogeneous Client Devices Using a Computing Power-Aware Scheduler},
  author    = {Zilinghan Li and Pranshu Chaturvedi and Shilan He and Han Chen and Gagandeep Singh and Volodymyr Kindratenko and Eliu A Huerta and Kibaek Kim and Ravi Madduri},
  booktitle = {The Twelfth International Conference on Learning Representations},
  url       = {https://openreview.net/forum?id=msXxrttLOi},
  year      = {2024}
}
```
Mar 2024 | Secure Federated Learning Across Heterogeneous Cloud and High-Performance Computing Resources-A Case Study on Federated Fine-tuning of LLaMA 2
TLDR | PDF | Authors | BibTex | Computing in Science & Engineering
TLDR: Federated learning enables multiple data owners to collaboratively train robust machine learning models without transferring large or sensitive local datasets by only sharing the parameters of the locally trained models. In this paper, we elaborate on the design of our Advanced Privacy-Preserving Federated Learning (APPFL) framework, which streamlines end-to-end secure and reliable federated learning experiments across cloud computing facilities and high-performance computing resources by leveraging Globus Compute, a distributed function as a service platform, and Amazon Web Services. We further demonstrate the use case of APPFL in finetuning a LLaMA 2 7B model using several cloud resources and supercomputers.
```bibtex
@article{li2024secure,
  title     = {Secure Federated Learning Across Heterogeneous Cloud and High-Performance Computing Resources-A Case Study on Federated Fine-tuning of LLaMA 2},
  author    = {Li, Zilinghan and He, Shilan and Chaturvedi, Pranshu and Kindratenko, Volodymyr and Huerta, Eliu A and Kim, Kibaek and Madduri, Ravi},
  journal   = {Computing in Science \& Engineering},
  publisher = {IEEE},
  year      = {2024}
}
```
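The dispatch pattern this case study builds on, sending a local-training function to a remote cloud or HPC endpoint through Globus Compute, looks roughly like the sketch below. This is not code from the paper: the endpoint UUID is a placeholder and train_one_round is a hypothetical function.

```python
from globus_compute_sdk import Executor

def train_one_round(global_model_bytes):
    # Executed on the remote endpoint: load the global model, fine-tune it on
    # the site's private data, and return only the updated parameters.
    ...

ENDPOINT_ID = "00000000-0000-0000-0000-000000000000"  # placeholder endpoint UUID

# Submit the training task to the remote endpoint; only model parameters
# travel over the network, never the site's raw data.
with Executor(endpoint_id=ENDPOINT_ID) as gce:
    future = gce.submit(train_one_round, b"...serialized global model...")
    local_update = future.result()
```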
Oct 2023 | APPFLx: Providing Privacy-Preserving Cross-Silo Federated Learning as a Service
TLDR | PDF | Authors | Website | BibTex | e-Science 2023
TLDR: Cross-silo privacy-preserving federated learning (PPFL) is a powerful tool to collaboratively train robust and generalized machine learning (ML) models without sharing sensitive (e.g., healthcare or financial) local data. To ease and accelerate the adoption of PPFL, we introduce APPFLx, a ready-to-use platform that provides privacy-preserving cross-silo federated learning as a service. APPFLx employs Globus authentication to allow users to easily and securely invite trustworthy collaborators for PPFL, implements several synchronous and asynchronous FL algorithms, streamlines the FL experiment launch process, and enables tracking and visualizing the life cycle of FL experiments, allowing domain experts and ML practitioners to easily orchestrate and evaluate cross-silo FL under one platform. APPFLx is available online at https://appflx.link
```bibtex
@inproceedings{li2023appflx,
  title        = {Appflx: Providing privacy-preserving cross-silo federated learning as a service},
  author       = {Li, Zilinghan and He, Shilan and Chaturvedi, Pranshu and Hoang, Trung-Hieu and Ryu, Minseok and Huerta, EA and Kindratenko, Volodymyr and Fuhrman, Jordan and Giger, Maryellen and Chard, Ryan and others},
  booktitle    = {2023 IEEE 19th International Conference on e-Science (e-Science)},
  organization = {IEEE},
  pages        = {1--4},
  year         = {2023}
}
```
Dec 2024 | Gen-SIS: Generative Self-augmentation Improves Self-supervised Learning
TLDR | PDF | Authors | Website | Preprint | BibTex | arXiv preprint
TLDR: Self-supervised learning (SSL) methods have emerged as strong visual representation learners by training an image encoder to maximize similarity between features of different views of the same image. To perform this view-invariance task, current SSL algorithms rely on hand-crafted augmentations such as random cropping and color jittering to create multiple views of an image. Recently, generative diffusion models have been shown to improve SSL by providing a wider range of data augmentations. However, these diffusion models require pre-training on large-scale image-text datasets, which might not be available for many specialized domains like histopathology. In this work, we introduce Gen-SIS, a diffusion-based augmentation technique trained exclusively on unlabeled image data, eliminating any reliance on external sources of supervision such as text captions. We first train an initial SSL encoder on a dataset using only hand-crafted augmentations. We then train a diffusion model conditioned on embeddings from that SSL encoder. Following training, given an embedding of the source image, this diffusion model can synthesize its diverse views. We show that these 'self-augmentations', i.e., generative augmentations based on the vanilla SSL encoder embeddings, facilitate the training of a stronger SSL encoder. Furthermore, based on the ability to interpolate between images in the encoder latent space, we introduce the novel pretext task of disentangling the two source images of an interpolated synthetic image. We validate Gen-SIS's effectiveness by demonstrating performance improvements across various downstream tasks in both natural images, which are generally object-centric, and digital histopathology images, which are typically context-based.
```bibtex
@article{belagali2024gen,
  title   = {Gen-SIS: Generative Self-augmentation Improves Self-supervised Learning},
  author  = {Belagali, Varun and Yellapragada, Srikar and Graikos, Alexandros and Kapse, Saarthak and Li, Zilinghan and Nandi, Tarak Nath and Madduri, Ravi K and Prasanna, Prateek and Saltz, Joel and Samaras, Dimitris},
  journal = {arXiv preprint arXiv:2412.01672},
  year    = {2024}
}
```
Jun 2023 | ViCTer: A semi-supervised video character tracker
TLDR | PDF | Authors | Code | BibTex | Machine Learning with Applications
TLDR: The video character tracking problem refers to tracking certain characters of interest in a video and returning the time slots in which those characters appear. Solutions to this problem can be applied in various video-analysis areas, such as movie analysis and automatic video clipping. However, very little research has investigated this problem, and no relevant benchmark datasets exist. In this paper, we design a novel model to solve this problem by combining a semi-supervised face recognition network and a multi-human tracker. For the face recognition network, we propose a semi-supervised learning method to fully leverage the unlabeled images in the video, thus reducing the required number of labeled face images. Triplet loss is also used during training to better distinguish among inter-class samples. However, a single face recognition network is insufficient for video character tracking, since people do not always show their frontal faces, or their faces are sometimes blocked by obstacles. Therefore, a multi-human tracker is integrated into the model to address those problems. Additionally, we collect a dataset for the video character tracking problem, Character Face in Video, which supports various experiments for evaluating video character tracker performance. Experiments show that the proposed semi-supervised face recognition model achieves more than 98.5% recognition accuracy, and our video character tracker tracks in near real time and achieves 70%~80% average intersection-over-union tracking accuracy on the dataset.
```bibtex
@article{li2023victer,
  title     = {ViCTer: A semi-supervised video character tracker},
  author    = {Li, Zilinghan and Wang, Xiwei and Zhang, Zhenning and Kindratenko, Volodymyr},
  journal   = {Machine Learning with Applications},
  pages     = {100460},
  publisher = {Elsevier},
  volume    = {12},
  year      = {2023}
}
```
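For reference, the triplet loss mentioned in the TLDR has a standard formulation, sketched below (this is the textbook version, not necessarily ViCTer's exact training code): it pulls an anchor embedding toward a positive of the same identity and pushes it away from a negative of a different identity by at least a margin.

```python
import torch
import torch.nn.functional as F

# Standard triplet loss: L = mean(max(d(a, p) - d(a, n) + margin, 0)).

def triplet_loss(anchor, positive, negative, margin=0.2):
    d_ap = F.pairwise_distance(anchor, positive)  # same-identity distance
    d_an = F.pairwise_distance(anchor, negative)  # different-identity distance
    return F.relu(d_ap - d_an + margin).mean()

embed = lambda n: F.normalize(torch.randn(n, 128), dim=1)  # stand-in face embeddings
print(triplet_loss(embed(8), embed(8), embed(8)).item())
```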
Apr 2023 | An efficient generative data imputation toolbox with adversarial learning
TLDR | Authors | BibTex | ICDE 2023
TLDR: The dramatically increasing volume of incomplete data makes imputation models computationally infeasible in many real-life applications. In this demonstration, we propose a scalable and extendible data imputation toolbox, SEMI, to handle large-scale incomplete data imputation efficiently and visually. SEMI consists of three modules: data preprocessing, data imputation, and post-imputation prediction. It is built upon SCIS, a scalable imputation system, to significantly speed up the training of generative adversarial imputation models with accuracy guarantees for large-scale incomplete data. Using a public real-world large-scale incomplete weather dataset, we demonstrate that SEMI helps users efficiently address real-life large-scale imputation problems by providing a highly efficient imputation system, user-friendly performance visualization, and easy-to-use interactive operation.
```bibtex
@inproceedings{wu2023efficient,
  title        = {An efficient generative data imputation toolbox with adversarial learning},
  author       = {Wu, Yangyang and Miao, Xiaoye and Li, Zilinghan and He, Shilan and Yuan, Xinkai and Yin, Jianwei},
  booktitle    = {2023 IEEE 39th International Conference on Data Engineering (ICDE)},
  organization = {IEEE},
  pages        = {3651--3654},
  year         = {2023}
}
```
Oct 2022 | ActiveMatch: End-to-end semi-supervised active representation learning
TLDR | PDF | Authors | BibTex | ICIP 2022
TLDR: Semi-supervised learning (SSL) is an efficient framework that can train models with both labeled and unlabeled data, but it may generate ambiguous and non-distinguishable representations when adequate labeled samples are lacking. With a human in the loop, active learning can iteratively select informative unlabeled samples for labeling and training to improve performance in the SSL framework. However, most existing active learning approaches rely on pre-trained features, which is not suitable for end-to-end learning. To address these drawbacks of SSL, in this paper we propose a novel end-to-end representation learning method, ActiveMatch, which combines SSL with contrastive learning and active learning to fully leverage the limited labels. Starting from a small amount of labeled data with unsupervised contrastive learning as a warm-up, ActiveMatch then combines SSL and supervised contrastive learning, and actively selects the most representative samples for labeling during training, resulting in better representations for classification. Compared with MixMatch and FixMatch using the same amount of labeled data, we show that ActiveMatch achieves state-of-the-art performance, with 89.24% accuracy on CIFAR-10 with 100 collected labels, and 92.20% accuracy with 200 collected labels.
```bibtex
@inproceedings{yuan2022activematch,
  title        = {Activematch: end-to-end semi-supervised active representation learning},
  author       = {Yuan, Xinkai and Li, Zilinghan and Wang, Gaoang},
  booktitle    = {2022 IEEE International Conference on Image Processing (ICIP)},
  organization = {IEEE},
  pages        = {1136--1140},
  year         = {2022}
}
```
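As a flavor of the active-learning step, the sketch below ranks unlabeled samples by predictive entropy and picks the most uncertain ones for labeling. Note this is one common selection criterion used purely for illustration; ActiveMatch itself selects the most representative samples, and its exact criterion is described in the paper.

```python
import torch

# Entropy-based active selection (illustrative, not ActiveMatch's criterion):
# rank unlabeled samples by predictive entropy and label the top-k.

def select_for_labeling(logits, k=10):
    probs = torch.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return torch.topk(entropy, k).indices  # indices of the k most uncertain samples

unlabeled_logits = torch.randn(1000, 10)  # stand-in model outputs on CIFAR-10
print(select_for_labeling(unlabeled_logits, k=5))
```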
PRESENTATIONS
Ordered by most recent.
Dec 2024 | When Federated Learning Meets FABRIC
Slides | Video | FABRIC Tutorials
Oct 2024 | Advances in Privacy-Preserving Federated Learning to Realize a Truly Learning Healthcare System
Slides | TPS-ISA 2024, Washington, D.C.
Oct 2024 | Using Globus Compute to Streamline Federated Learning Applications
Slides | Video | ParslFest 2024, Chicago, IL
Jul 2024 | Federated Learning Tutorial: Concepts, Challenges, and Framework
Slides | Video | SciFM Summer School, Ann Arbor, MI
Oct 2023 | APPFLx.Link: Providing Privacy-Preserving Cross-Silo Federated Learning as a Service
Slides | Video | ParslFest 2023, Chicago, IL