Active privacy-utility trade-off against a hypothesis testing adversary

Erdemir, E.; Dragotti, P. L.; Gunduz, D.
2021

Abstract

We consider a user releasing her data containing some personal information in return for a service. We model the user's personal information as two correlated random variables: one, called the secret variable, is to be kept private, while the other, called the useful variable, is to be disclosed for utility. We consider active sequential data release, where at each time step the user chooses from among a finite set of release mechanisms, each revealing some information about the user's personal information, i.e., the true hypotheses, albeit with different statistics. The user manages the data release in an online fashion such that the maximum amount of information is revealed about the latent useful variable, while the adversary's confidence on the secret variable is kept below a predefined level. For the utility, we consider both the probability of correct detection of the useful variable and the mutual information (MI) between the useful variable and the released data. We formulate both problems as Markov decision processes (MDPs) and solve them numerically using advantage actor-critic (A2C) deep reinforcement learning (RL).
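As a minimal sketch of the kind of constrained formulation the abstract describes (the symbols X, Y, Z_t, a_t, \hat{X} and the threshold \delta are illustrative assumptions, not the paper's notation):

% Illustrative only: MI-based utility under a constraint on the adversary's
% confidence about the secret variable, as described in the abstract.
\begin{align}
  \max_{\{a_t\}} \; & I(Y; Z_1, \dots, Z_T)
    && \text{(utility: MI between the useful variable and the released data)} \\
  \text{s.t.} \; & \Pr\!\big[\hat{X}(Z_1, \dots, Z_T) = X\big] \le \delta
    && \text{(adversary's confidence on the secret kept below a preset level)}
\end{align}

Here X is the secret variable, Y the useful variable, a_t the release mechanism chosen at time step t, Z_t the data released under mechanism a_t, and \hat{X} the adversary's estimate of X; the detection-based variant of the utility replaces the MI objective with the probability of correctly detecting Y.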
Year: 2021
Conference: 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021
Conference venue: can
Conference year: 2021
Volume: 2021-
Pages: 2660-2664
Authors: Erdemir, E.; Dragotti, P. L.; Gunduz, D.
Citation: Active privacy-utility trade-off against a hypothesis testing adversary / Erdemir, E.; Dragotti, P. L.; Gunduz, D. - 2021-:(2021), pp. 2660-2664. (Paper presented at the 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021, held in can in 2021) [10.1109/ICASSP39728.2021.9414608].
Files in this record:
File: Active_Privacy-Utility_Trade-Off_Against_A_Hypothesis_Testing_Adversary.pdf
Access: restricted
Type: Publisher's published version
Size: 2 MB
Format: Adobe PDF

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1280017
Citations
  • PMC: ND
  • Scopus: 6
  • Web of Science (ISI): 5