Publications

Privacy and Accuracy Implications of Model Complexity and Integration in Heterogeneous Federated Learning

Published in IEEE Access, 2025

Federated Learning (FL) has been proposed as a privacy-preserving solution for distributed machine learning, particularly in heterogeneous FL settings where clients have varying computational capabilities and thus train models of different complexity than the server’s model. However, FL is not without vulnerabilities: recent studies have shown that it is susceptible to membership inference attacks (MIA), which can compromise the privacy of client data. In this paper, we examine the intersection of these two aspects, heterogeneous FL and its privacy vulnerabilities, by focusing on the role of client model integration, i.e., the process through which the server integrates parameters from clients’ smaller models into its larger model. To better understand this process, we first propose a taxonomy that categorizes existing heterogeneous FL methods and enables the design of seven novel heterogeneous FL model integration strategies. Using the CIFAR-10, CIFAR-100, and FEMNIST vision datasets, we evaluate the privacy and accuracy trade-offs of these approaches under three types of MIAs. Our findings reveal significant differences in privacy leakage and performance depending on the integration method. Notably, introducing randomness into the model integration process enhances client privacy while maintaining competitive accuracy for both the clients and the server. This work sheds quantitative light on the privacy-accuracy implications of client model integration in heterogeneous FL settings, paving the way towards more secure and efficient FL systems.

Download paper here
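To make the notion of client model integration more concrete, here is a minimal sketch, not one of the paper’s seven strategies: it copies a smaller client layer into a randomly chosen sub-block of a larger server layer, illustrating one simple way randomness can enter the integration step. The function and variable names (integrate_client_weights, etc.) are hypothetical, and a real server would aggregate contributions from many clients rather than overwrite with a single one.

```python
import numpy as np

def integrate_client_weights(server_w, client_w, rng):
    """Copy a smaller client weight matrix into a random sub-block of the
    larger server weight matrix.

    server_w : (S_out, S_in) array, the server layer's weights
    client_w : (C_out, C_in) array, the client layer's weights (C_* <= S_*)
    rng      : np.random.Generator controlling which rows/columns are used

    Returns the updated server matrix and the chosen index sets, so the same
    sub-block can be extracted again when sending a sub-model back to the client.
    """
    out_idx = rng.choice(server_w.shape[0], size=client_w.shape[0], replace=False)
    in_idx = rng.choice(server_w.shape[1], size=client_w.shape[1], replace=False)
    updated = server_w.copy()
    updated[np.ix_(out_idx, in_idx)] = client_w
    return updated, (out_idx, in_idx)

# Toy usage: an 8x8 server layer absorbing a 4x4 client layer.
rng = np.random.default_rng(0)
server_w = np.zeros((8, 8))
client_w = np.ones((4, 4))
server_w, indices = integrate_client_weights(server_w, client_w, rng)
```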

Recommended citation: Németh, G. D., Lozano, M. A., Quadrianto, N., & Oliver, N. (2025). "Privacy and Accuracy Implications of Model Complexity and Integration in Heterogeneous Federated Learning." IEEE Access. doi:10.1109/ACCESS.2025.3546478. http://negedng.github.io/files/2025-Privacy.pdf

A Snapshot of the Frontiers of Client Selection in Federated Learning

Published in Transactions on Machine Learning Research, 2022

Federated learning (FL) has been proposed as a privacy-preserving approach in distributed machine learning. A federated learning architecture consists of a central server and a number of clients that have access to private, potentially sensitive data. Clients are able to keep their data on their local machines and only share their locally trained model’s parameters with a central server that manages the collaborative learning process. FL has delivered promising results in real-life scenarios, such as healthcare, energy, and finance. However, when the number of participating clients is large, the overhead of managing the clients slows down the learning. Thus, client selection has been introduced as a strategy to limit the number of communicating parties at every step of the process. Since the early naive random selection of clients, several client selection methods have been proposed in the literature. Unfortunately, given that this is an emerging field, there is a lack of a taxonomy of client selection methods, making it hard to compare approaches. In this paper, we propose a taxonomy of client selection in Federated Learning that enables us to shed light on current progress in the field and identify potential areas of future research in this promising area of machine learning.
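As a rough illustration of the architecture described above, the sketch below runs one federated round with the naive uniform random client selection baseline. All names (federated_round, clients, etc.) are hypothetical, and the aggregation is a plain unweighted average rather than any specific method surveyed in the paper.

```python
import numpy as np

def federated_round(server_weights, clients, num_selected, rng):
    """One illustrative round of federated averaging with random client selection.

    server_weights : 1-D array holding the flattened global model parameters
    clients        : list of callables; each takes the current global weights and
                     returns locally trained weights (data never leaves the client)
    num_selected   : how many clients participate in this round
    rng            : np.random.Generator used by the selection strategy
    """
    # Client selection: the simple baseline is uniform random sampling.
    selected = rng.choice(len(clients), size=num_selected, replace=False)

    # Each selected client trains locally and shares only model parameters.
    local_updates = [clients[i](server_weights) for i in selected]

    # The server aggregates the updates (here: an unweighted average).
    return np.mean(local_updates, axis=0)

# Toy usage: each client nudges the weights toward its private "data mean".
rng = np.random.default_rng(42)
private_means = rng.normal(size=(10, 5))            # 10 clients, 5 parameters each
clients = [lambda w, m=m: w + 0.1 * (m - w) for m in private_means]
global_w = np.zeros(5)
for _ in range(20):
    global_w = federated_round(global_w, clients, num_selected=3, rng=rng)
```

Selecting only a few clients per round keeps the communication and management overhead bounded, which is exactly the motivation for the client selection methods the paper surveys.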

Recommended citation: Németh, G. D., Lozano, M. A., Quadrianto, N., & Oliver, N. (2022). "A Snapshot of the Frontiers of Client Selection in Federated Learning." Transactions on Machine Learning Research. http://negedng.github.io/files/2022-Snapshot.pdf

Hyphenation using deep neural networks

Published in XIV. Magyar Számítógépes Nyelvészeti Konferencia, 2018

Hyphenation algorithms are computer-based methods for syllabification, used mostly in typesetting and document formatting, as well as in text-to-speech and speech recognition systems. We present a deep learning approach to automatic hyphenation of Hungarian text. Our experiments compare feed-forward, recurrent, and convolutional neural network approaches.
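One common way to cast hyphenation as a learning problem, assumed here for illustration and not necessarily the paper’s exact setup, is per-character binary classification: predict whether a hyphenation point may follow each character, using a fixed window of surrounding characters as input. The sketch below builds such windows and fits a tiny feed-forward classifier with Keras; all names are hypothetical and the single training word is only a toy.

```python
import numpy as np
from tensorflow import keras

PAD = 0  # id reserved for padding at word boundaries

def labels_from_hyphenation(hyphenated):
    """Characters of the word plus a 0/1 label per character: 1 if a hyphen may follow it."""
    chars, labels = [], []
    for ch in hyphenated:
        if ch == "-":
            labels[-1] = 1
        else:
            chars.append(ch)
            labels.append(0)
    return chars, labels

def windows(chars, char2id, radius=2):
    """Fixed-size windows of character ids centered on each position, padded at the edges."""
    ids = [PAD] * radius + [char2id[c] for c in chars] + [PAD] * radius
    return np.array([ids[i:i + 2 * radius + 1] for i in range(len(chars))])

chars, labels = labels_from_hyphenation("al-go-rithm")
char2id = {c: i + 1 for i, c in enumerate(sorted(set(chars)))}
X, y = windows(chars, char2id), np.array(labels)

# A tiny feed-forward classifier over each character window.
model = keras.Sequential([
    keras.layers.Embedding(input_dim=len(char2id) + 1, output_dim=8),
    keras.layers.Flatten(),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=5, verbose=0)
```

The recurrent and convolutional variants mentioned in the abstract would replace the window-plus-Flatten front end with sequence layers over the whole word.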

Recommended citation: Németh, G. D., & Ács, J. (2018). "Hyphenation using deep neural networks." XIV. Magyar Számítógépes Nyelvészeti Konferencia. http://negedng.github.io/files/2018-Hyphenation.pdf