Conventional information leakage metrics assume that an adversary has complete knowledge of the distribution of the mechanism used to disclose information correlated with the sensitive attributes of a system; the only uncertainty arises from the specific realizations drawn from this distribution. This assumption does not hold in many practical scenarios, where an adversary typically lacks complete knowledge of the joint statistics of the private, utility, and disclosed data. As a result, the typical information leakage metrics fail to measure the leakage appropriately. In this paper, we introduce several new variants of the traditional information-theoretic leakage metrics that appropriately capture information leakage for an adversary who lacks complete knowledge of the joint data statistics, and we provide insights into the potential uses of each. We present experiments on a real-world dataset to demonstrate how the introduced leakage metrics compare with the conventional notions of leakage. Finally, we show how privacy-utility optimization problems can be formulated in this context, so that their solutions yield optimal information disclosure mechanisms for various applications.
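To make the distinction concrete, the following is an illustrative sketch in our own notation, not necessarily the paper's definitions. Let $S$ denote the private attribute and $Y$ the disclosed data, generated by a mechanism $P_{Y|S}$ from the true joint distribution $p(s,y)$. A conventional leakage metric such as mutual information assumes the adversary forms posteriors under the true joint:

```latex
I(S;Y) \;=\; H(S) - H(S \mid Y)
       \;=\; \sum_{s,y} p(s,y) \log \frac{p(s \mid y)}{p(s)} .
```

If the adversary instead holds a mismatched belief $q(s,y)$ about the joint statistics, one natural (hedged) way to account for this is to evaluate the adversary's posterior $q(s \mid y)$ against the true data distribution, yielding a mismatch-aware leakage of the form

```latex
\mathcal{L}(p \,\|\, q) \;=\; H(S) \;+\; \sum_{s,y} p(s,y) \log q(s \mid y),
```

which reduces to $I(S;Y)$ when $q = p$, i.e., when the adversary's knowledge is complete. Quantities of this general shape motivate why leakage metrics must be revisited when the complete-knowledge assumption fails.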