Attribute privacy in multimedia technology aims to hide a single personal characteristic of an individual rather than the full identity; such an attribute can be, for example, the sex, the nationality, or the health state. When the attribute is discrete with a finite number of possible values, the attacker's belief is represented by a discrete probability distribution over this set of values. Bayes' rule, viewed as an information acquisition paradigm, describes how the likelihood function updates the prior belief into a posterior belief. In the binary case, the likelihood function can be expressed as the Log-Likelihood-Ratio (LLR), also known as the weight-of-evidence, which indicates which hypothesis the data supports and how strongly. Bayes' rule can then be written as the sum of the LLR and the log-ratio of the prior probabilities, thereby decoupling the evidence provided by the data from the prior belief. This thesis proposes to represent the sensitive information disclosed by the data by a likelihood function. However, this appealing additive form of Bayes' rule in the binary case does not generalize directly to settings with more than two hypotheses. This thesis therefore proposes to treat discrete probability distributions and likelihood functions as compositional data. Compositional data lives on a simplex, on which a Euclidean vector space structure, known as the Aitchison geometry, can be defined. With the coordinate representation given by the Isometric-Log-Ratio (ILR) transformation, the additive form of Bayes' rule is recovered. Within this space, the ILR-transformed likelihood function (ILRL) can be seen as the multiple-hypothesis, multidimensional extension of the LLR. The norm of the ILRL is the strength-of-evidence and measures the distance between the prior and the posterior distributions. Perfect privacy is reached when the attacker's belief does not change upon observing the data: the posterior probabilities remain equal to the prior ones. 
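The recovery of the additive form of Bayes' rule in ILR coordinates can be checked numerically. The following sketch uses a standard pivot-type orthonormal ILR basis and hypothetical three-hypothesis prior and likelihood values (all numbers are illustrative, not from the thesis): on the simplex, the posterior is the closed component-wise product of prior and likelihood (a perturbation in the Aitchison geometry), which the ILR map turns into vector addition, and the norm of the ILRL equals the Aitchison distance between prior and posterior.

```python
import math

def clr(x):
    """Centered log-ratio: log of each part minus the mean log."""
    logs = [math.log(v) for v in x]
    m = sum(logs) / len(logs)
    return [l - m for l in logs]

def ilr(x):
    """ILR coordinates of a D-part composition using a standard
    pivot (Helmert-type) orthonormal basis; returns D-1 coordinates."""
    c = clr(x)
    D = len(x)
    return [math.sqrt(i / (i + 1)) * (sum(c[:i]) / i - c[i])
            for i in range(1, D)]

def normalize(x):
    s = sum(x)
    return [v / s for v in x]

# Hypothetical three-hypothesis example.
prior = [0.5, 0.3, 0.2]                     # attacker's prior belief
likelihood = normalize([0.9, 0.05, 0.05])   # evidence carried by the data

# Bayes' rule: component-wise product of prior and likelihood, renormalized.
posterior = normalize([p * l for p, l in zip(prior, likelihood)])

# In ILR coordinates the update is additive: ilr(posterior) = ilr(prior) + ILRL.
lhs = ilr(posterior)
rhs = [a + b for a, b in zip(ilr(prior), ilr(likelihood))]

# The norm of the ILRL (strength-of-evidence) equals the Aitchison
# distance between prior and posterior.
ilrl = ilr(likelihood)
strength = math.sqrt(sum(v * v for v in ilrl))
dist = math.sqrt(sum((a - b) ** 2
                     for a, b in zip(ilr(posterior), ilr(prior))))
```

With two hypotheses the single ILR coordinate is proportional to the LLR, so the familiar binary additive form is the special case D = 2.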
In other words, we want the data to provide no evidence about the value the attribute takes. This is theoretically achieved when the LLR is zero in a binary setting and, by extension, when the ILRL is the zero vector in a non-binary setting, corresponding to no strength-of-evidence. The information about an attribute contained in an observation is thus represented by an ILRL. However, in order to properly represent this information, the ILRLs have to be calibrated. The idempotence of calibrated LLRs and the constraint it imposes on the distributions of normally distributed LLRs are well-known properties. In this thesis, these properties are generalized to the ILRL for multiple-hypothesis applications. Based on these properties and on the compositional nature of the likelihood function, a new discriminant analysis approach is proposed. First, for binary applications, the proposed discriminant analysis maps the input feature vectors into a space in which the discriminant component forms a calibrated LLR. The mapping is learned with a normalizing flow, a cascade of invertible neural networks. This can be used for pattern recognition but also for privacy: since the mapping is invertible, the LLR can be set to zero and the data then mapped back to the feature space. This protection strategy is tested on the concealment of the speaker's sex in neural network-based speaker embeddings. The resulting protected embeddings are evaluated for speaker verification and for voice conversion applications. Since the properties of the LLR naturally extend to the ILRL, the proposed discriminant analysis is generalized to multiclass cases. This new approach, called compositional discriminant analysis, maps the data into a space in which the discriminant components form calibrated likelihood functions expressed as ILRLs. 
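The zero-the-evidence protection step can be illustrated with a deliberately simple stand-in for the learned mapping: instead of a normalizing flow, the sketch below uses a linear-Gaussian toy (hypothetical class means `mu0`, `mu1`, identity covariance), for which the LLR is linear in the feature vector and the invertible "map, zero the discriminant component, map back" operation reduces to a shift along the discriminant direction. This is a sketch of the principle only, not the thesis's method.

```python
import math

# Hypothetical binary setting: 2-D features, equal identity covariance.
mu0 = (-1.0, 0.0)   # class mean under hypothesis H0 (illustrative)
mu1 = ( 1.0, 0.0)   # class mean under hypothesis H1 (illustrative)

def llr(x):
    """log p(x|H1)/p(x|H0) for equal-covariance Gaussians;
    affine in x with gradient w = mu1 - mu0."""
    d0 = sum((a - b) ** 2 for a, b in zip(x, mu0))
    d1 = sum((a - b) ** 2 for a, b in zip(x, mu1))
    return 0.5 * (d0 - d1)

def protect(x):
    """Invertible zero-the-evidence step: shift x along the discriminant
    direction until its LLR is zero, leaving the orthogonal (non-sensitive)
    component untouched."""
    w = [b - a for a, b in zip(mu0, mu1)]   # discriminant direction
    wnorm2 = sum(v * v for v in w)
    shift = llr(x) / wnorm2
    return [xi - shift * wi for xi, wi in zip(x, w)]

x = (0.7, 0.4)          # an unprotected feature vector
x_prot = protect(x)     # protected version: carries no evidence about H0/H1
```

In the thesis the mapping is a learned normalizing flow rather than this fixed rotation-and-shift, but the protection logic is the same: invertibility is what allows the evidence component to be zeroed and the result mapped back to the original feature space.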
Although this work is presented in the context of privacy preservation, we believe it opens several research directions in pattern recognition, calibration, and representation learning.