An Information-Theoretic Analysis of the Cost of Decentralization for Learning and Inference under Privacy Constraints

Sharu Theresa Jose*, Osvaldo Simeone

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In vertical federated learning (FL), the features of a data sample are distributed across multiple agents. As such, inter-agent collaboration can be beneficial not only during the learning phase, as is the case for standard horizontal FL, but also during the inference phase. A fundamental theoretical question in this setting is how to quantify the cost, or performance loss, of decentralization for learning and/or inference. In this paper, we study general supervised learning problems with any number of agents, and provide a novel information-theoretic quantification of the cost of decentralization in the presence of privacy constraints on inter-agent communication within a Bayesian framework. The cost of decentralization for learning and/or inference is shown to be quantified by conditional mutual information terms involving the feature and label variables.
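As a minimal sketch of the flavor of such a result (an assumed two-agent special case under the log-loss, not the paper's general statement, which also covers the learning phase and privacy constraints), let agents 1 and 2 hold the features X_1 and X_2 of a sample with label Y. The excess predictive risk incurred when agent 1 infers Y alone, rather than with access to both features, is a conditional mutual information:

\[
H(Y \mid X_1) - H(Y \mid X_1, X_2) = I(X_2; Y \mid X_1),
\]

i.e., the information about the label carried by agent 2's features beyond what agent 1 already observes. The paper's results generalize terms of this form to any number of agents and to decentralization during learning, inference, or both.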

Original language: English
Article number: 485
Journal: Entropy
Volume: 24
Issue number: 4
DOIs
Publication status: Published - Apr 2022

Keywords

  • Bayesian learning
  • information-theoretic analysis
  • vertical federated learning
