Differential privacy (DP) is a mathematically rigorous privacy framework that ensures the output of a randomized algorithm remains statistically indistinguishable even if the data of any single user changes. It has been widely studied in both theory and practice, with many applications in analytics and machine learning (e.g., 1, 2, 3, 4, 5, 6, 7).
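To make this indistinguishability concrete, the standard (ε, δ)-DP condition from the literature (not restated in this post) can be written as follows, where M is a randomized mechanism and D, D′ are datasets differing in a single user's data:

```latex
% Standard (epsilon, delta)-differential privacy: a randomized
% mechanism M satisfies (epsilon, delta)-DP if, for all datasets
% D and D' differing in one user's data and all output sets S,
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in S] + \delta .
\]
```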
The two main models of DP are the central model and the local model. In the central model, a trusted curator has access to the raw data and is responsible for producing a differentially private output. The local model instead requires that every message sent from a user's device is itself differentially private, removing the need for a trusted curator. While the local model is appealing due to its minimal trust assumptions, it typically incurs significantly greater utility loss than the central model.
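As a toy illustration of this gap (our own sketch, not code from the paper), the snippet below privately estimates a count of ones in both models: the central curator adds a single draw of Laplace noise to the exact count, while in the local model each user randomizes their own bit via randomized response before it leaves the device, which inflates the error from roughly O(1/ε) to O(√n/ε). All function names here are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def central_count(bits, epsilon):
    """Central model: a trusted curator sees the raw bits and adds
    Laplace noise calibrated to sensitivity 1 to the exact count."""
    return sum(bits) + rng.laplace(scale=1.0 / epsilon)

def local_report(bit, epsilon):
    """Local model: each user randomizes their own bit before sending
    it (binary randomized response), so no raw data leaves the device."""
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if rng.random() < p_keep else 1 - bit

def local_count(bits, epsilon):
    """Debias the sum of randomized reports to estimate the true count."""
    p = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    noisy_sum = sum(local_report(b, epsilon) for b in bits)
    return (noisy_sum - len(bits) * (1.0 - p)) / (2.0 * p - 1.0)

bits = rng.integers(0, 2, size=10_000)
print("true count:   ", bits.sum())
print("central model:", round(central_count(bits, epsilon=1.0)))
print("local model:  ", round(local_count(bits, epsilon=1.0)))
```

Running the sketch shows the central estimate landing within a few units of the true count, while the local estimate fluctuates on the order of the square root of the population size.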
In real-world data-sharing scenarios, users often place varying levels of trust in others, depending on their relationships. For instance, someone might feel comfortable sharing their location data with family or close friends but would hesitate to grant strangers access to the same information. This selectivity aligns with philosophical views of privacy as control over personal information, where individuals specify with whom they are willing to share their data. Such nuanced privacy preferences highlight the need for frameworks that go beyond the binary trust assumptions of existing DP models and accommodate more realistic trust dynamics in privacy-preserving systems.
In “Differential Privacy on Trust Graphs”, published at the Innovations in Theoretical Computer Science Conference (ITCS 2025), we use a trust graph to model these relationships: vertices represent users, and an edge connects two users who trust each other (see below). We explore how to apply DP to such trust graphs, ensuring that the privacy guarantee covers the messages exchanged between a user (or their trusted neighbors) and everyone that user does not trust. In particular, for each user u, the distribution of the messages exchanged by u or any of u's neighbors with users not trusted by u should be statistically indistinguishable under a change to the input held by u; we call this guarantee trust graph DP (TGDP).
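The display below is our schematic rendering of the TGDP condition just described; the notation (N(u) for u's trusted neighbors, Π_u for the relevant message transcript) is assumed here for illustration, and the paper gives the precise definition:

```latex
% Schematic (epsilon, delta)-TGDP condition (notation assumed here):
% G = (V, E) is the trust graph, N(u) the set of neighbors of user u,
% and \Pi_u the transcript of messages exchanged between N(u) \cup \{u\}
% and V \setminus (N(u) \cup \{u\}). A protocol satisfies TGDP if, for
% every user u, all inputs x_u, x_u', and all transcript sets S,
\[
  \Pr[\Pi_u \in S \mid x_u] \;\le\; e^{\varepsilon} \cdot \Pr[\Pi_u \in S \mid x_u'] + \delta .
\]
```

Note how this interpolates between the two classical models: a complete trust graph recovers (a variant of) the central model, while an empty graph recovers the local model.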