Generalized Bivariate Kummer-Beta Distribution

A new bivariate beta distribution based on Humbert's confluent hypergeometric function of the second kind is introduced. Various representations are derived for its product moments, marginal densities, marginal moments, conditional densities and entropies.

The Dirichlet distribution is a standard conjugate prior for the parameters of the multinomial distribution. However, the Dirichlet family is not sufficiently rich in scope to represent many important distributional assumptions, because the Dirichlet distribution has only a small number of parameters. The generalized bivariate Kummer-beta distribution is a generalization of the Dirichlet distribution (a bivariate beta distribution) with additional parameters and thus enriches the existing class of bivariate beta distributions. Further, the proposed generalized bivariate Kummer-beta distribution, which has an elementary pdf (except for the normalizing constant), is sufficiently flexible and can be used in place of other bivariate beta distributions. Needless to say, the generalized bivariate Kummer-beta distribution is a conjugate prior for the multinomial distribution.
The matrix variate generalizations of beta and Dirichlet distributions have been defined and studied extensively. For example, see Gupta and Nagar [19], Gupta, Cardeño and Nagar [20], and Nagar and Gupta [21].
In this article we study several properties such as marginal and conditional distributions, joint moments, correlation, and mixture representation of the bivariate Kummer-beta distribution defined by the density (1). We also derive the distributions of X + Y, X/(X + Y) and XY, where (X, Y) ∼ GBKB(a, b; c; λ_1, λ_2).
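The density (1) is referenced throughout but is not reproduced in this excerpt. A form consistent with the properties derived below and with the Φ₂ integral representation of the appendix (a reconstruction, not a quotation of the original) is

f(x, y) = [Γ(d) / (Γ(a) Γ(b) Γ(c) Φ₂(a, b; d; −λ_1, −λ_2))] x^{a−1} y^{b−1} (1 − x − y)^{c−1} exp(−λ_1 x − λ_2 y),  (1)

for x > 0, y > 0, x + y < 1, where d = a + b + c, a > 0, b > 0, c > 0 and λ_1, λ_2 are real.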
In Bayesian analysis, if the posterior distribution is in the same family as the prior probability distribution, the prior and posterior are called conjugate distributions, and the prior is called a conjugate prior. In the case of the multinomial distribution, the usual conjugate prior is the Dirichlet distribution. If the prior density of (X, Y) is given by (1), where x > 0, y > 0, and x + y < 1, then the posterior density obtained under a multinomial likelihood belongs to the same family. Thus, the generalized bivariate family of distributions considered in this article is a conjugate prior for the multinomial distribution.
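Under the reconstructed form of (1), the conjugacy can be made explicit (a sketch under that assumption): combining the prior with a trinomial likelihood with cell counts n_1, n_2, n_3 gives

f(x, y | n_1, n_2, n_3) ∝ x^{n_1} y^{n_2} (1 − x − y)^{n_3} · x^{a−1} y^{b−1} (1 − x − y)^{c−1} e^{−λ_1 x − λ_2 y} = x^{a+n_1−1} y^{b+n_2−1} (1 − x − y)^{c+n_3−1} e^{−λ_1 x − λ_2 y},

that is, the posterior is again GBKB, with parameters (a + n_1, b + n_2; c + n_3; λ_1, λ_2).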
A distribution is said to be negatively likelihood ratio dependent if the density f(x, y) satisfies

f(x_1, y_1) f(x_2, y_2) ≤ f(x_1, y_2) f(x_2, y_1)

for all x_1 > x_2 and y_1 > y_2 (Lehmann [22], Tong [23]). In the case of the generalized bivariate Kummer-beta distribution, for c > 1, the power and exponential factors in x and y cancel and the above inequality reduces to

[(1 − x_1 − y_1)(1 − x_2 − y_2)]^{c−1} ≤ [(1 − x_1 − y_2)(1 − x_2 − y_1)]^{c−1},

which clearly holds. Hence, the bivariate distribution defined by the density (1) for c > 1 is negatively likelihood ratio dependent.
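To see that the reduced inequality indeed holds (a one-line verification added here; it is implicit in the original argument), expand the cross products:

(1 − x_1 − y_2)(1 − x_2 − y_1) − (1 − x_1 − y_1)(1 − x_2 − y_2) = (x_1 − x_2)(y_1 − y_2) > 0,

so the right-hand side dominates the left, and raising both sides to the power c − 1 > 0 preserves the ordering.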
Theorem 2.1. Let (X, Y) ∼ GBKB(a, b; c; λ_1, λ_2), and define S = X + Y and W = X/(X + Y). Then the densities of S and W are as derived below.
Proof. Substituting x = ws and y = s(1 − w), with the Jacobian J(x, y → w, s) = s, in the joint density of X and Y, we obtain the joint density of W and S, where 0 < s < 1 and 0 < w < 1. Now, integrating appropriately by using the integral representation of the confluent hypergeometric function (A.1), we obtain the marginal densities of S and W.
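The displayed densities are not reproduced in this excerpt. Under the assumed form of (1), with C denoting its normalizing constant, the proof's computation yields (a reconstruction): the joint density of (W, S) is

f_{W,S}(w, s) = C w^{a−1} (1 − w)^{b−1} s^{a+b−1} (1 − s)^{c−1} exp{−s[λ_1 w + λ_2(1 − w)]},  0 < w < 1, 0 < s < 1,

and integrating out w or s with (A.1) gives

f_S(s) = C B(a, b) s^{a+b−1} (1 − s)^{c−1} e^{−λ_2 s} ₁F₁(a; a + b; −(λ_1 − λ_2)s),  0 < s < 1,
f_W(w) = C B(a + b, c) w^{a−1} (1 − w)^{b−1} ₁F₁(a + b; a + b + c; −λ_1 w − λ_2(1 − w)),  0 < w < 1.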
By using the above theorem and (A.8), it is straightforward to derive the moments E(S^r). Further, by using (A.5) to expand the confluent hypergeometric factor in the density of W given in Theorem 2.1, we derive E(W^r). In the next two theorems, we derive the marginal distributions of X and Y. It is interesting to note that these marginal distributions do not belong to the Kummer-beta family and differ by an additional factor containing the confluent hypergeometric function ₁F₁.
Proof. To find the marginal p.d.f. of X, we integrate (1) with respect to y. Now, the desired result is obtained by using (A.1).
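Carrying out this integration under the assumed form of (1), with the substitution y = (1 − x)t and C the normalizing constant of (1), one obtains (a reconstruction):

f_X(x) = C B(b, c) x^{a−1} (1 − x)^{b+c−1} e^{−λ_1 x} ₁F₁(b; b + c; −λ_2(1 − x)),  0 < x < 1,

which exhibits the additional ₁F₁ factor noted above; the marginal density of Y has the same form with (a, λ_1) and (b, λ_2) interchanged.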
Proof. Similar to the proof of Theorem 2.2.
Using the above theorem, the conditional density function of X given Y = y, 0 < y < 1, is obtained by dividing the joint density (1) by the marginal density of Y. Similarly, using Theorem 2.2, the conditional density function of Y given X = x, 0 < x < 1, is derived. Further, using the conditional densities given above, we derive the corresponding conditional moments.
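For instance, under the assumed forms above, the conditional density of Y given X = x reduces to (a sketch):

f(y | x) = y^{b−1} (1 − x − y)^{c−1} e^{−λ_2 y} / [B(b, c) (1 − x)^{b+c−1} ₁F₁(b; b + c; −λ_2(1 − x))],  0 < y < 1 − x,

that is, given X = x, the variable Y/(1 − x) follows the Kummer-beta distribution KB(b, c, λ_2(1 − x)) of Definition A.1.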
Further, using (1), the joint (r, s)-th moment E(X^r Y^s) is obtained, where d = a + b + c, a + r > 0 and b + s > 0. Now, substituting appropriate values of r and s, we obtain the moments needed for the correlation; the resulting expressions involve the function Φ₂ and can be computed by using suitable software. Table 1 provides correlations between X and Y for different values of a, b, c, λ_1 and λ_2. All the values of the correlation coefficient are negative because of the condition x + y < 1. Further, for selected values of the parameters, it is possible to find correlations close to 0 or −1. It can be seen that for fixed a, b, λ_1, λ_2 the correlation increases as c increases. Thus, for small values of c the correlation is close to −1, whereas for large c the correlation is close to 0. The correlation is very small when a, b and c are smaller than one. Further, for fixed values of a, b, c, the correlation increases as λ_1 or λ_2 increases. Furthermore, choosing a and b small and c large yields correlations close to zero, whereas large values of a or b and small values of c, λ_1 or λ_2 give correlations large in magnitude.
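Under the assumed form of (1), the joint moment admits the closed form (a reconstruction consistent with the appearance of d = a + b + c above)

E(X^r Y^s) = [Γ(a + r) Γ(b + s) Γ(d) / (Γ(a) Γ(b) Γ(d + r + s))] Φ₂(a + r, b + s; d + r + s; −λ_1, −λ_2) / Φ₂(a, b; d; −λ_1, −λ_2).

The entries of Table 1 can also be reproduced without evaluating Φ₂ at all, by direct quadrature over the simplex. The following Python sketch (function names introduced here, density form assumed as above) computes the correlation for one parameter choice:

    import numpy as np
    from scipy.integrate import dblquad

    def gbkb_moment(r, s, a, b, c, lam1, lam2):
        # unnormalized (r, s)-th moment of the assumed GBKB kernel
        # x^(a-1) y^(b-1) (1-x-y)^(c-1) exp(-lam1 x - lam2 y) on x, y > 0, x + y < 1
        f = lambda y, x: (x**(a + r - 1) * y**(b + s - 1)
                          * (1 - x - y)**(c - 1) * np.exp(-lam1 * x - lam2 * y))
        val, _ = dblquad(f, 0, 1, 0, lambda x: 1 - x)
        return val

    def gbkb_corr(a, b, c, lam1, lam2):
        z = gbkb_moment(0, 0, a, b, c, lam1, lam2)          # normalizing constant
        ex = gbkb_moment(1, 0, a, b, c, lam1, lam2) / z
        ey = gbkb_moment(0, 1, a, b, c, lam1, lam2) / z
        vx = gbkb_moment(2, 0, a, b, c, lam1, lam2) / z - ex**2
        vy = gbkb_moment(0, 2, a, b, c, lam1, lam2) / z - ey**2
        cxy = gbkb_moment(1, 1, a, b, c, lam1, lam2) / z - ex * ey
        return cxy / np.sqrt(vx * vy)

    print(gbkb_corr(2.0, 2.0, 3.0, 1.0, 1.0))  # negative, as the text indicates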

Entropies
In this section, exact forms of the Rényi and Shannon entropies are determined for the generalized bivariate Kummer-beta distribution defined by the density (1).
Let (X, B, P) be a probability space. Consider a pdf f associated with P, dominated by a σ-finite measure µ on X. Denote by H_SH(f) the well-known Shannon entropy introduced in Shannon [24]. It is defined by

H_SH(f) = −∫_X f(x) log f(x) dµ(x).  (4)

One of the main extensions of the Shannon entropy was defined by Rényi [25]. This generalized entropy measure is given by

H_R(η, f) = (1 − η)^{−1} log [∫_X f^η(x) dµ(x)],  η > 0, η ≠ 1.  (5)

The additional parameter η is used to describe complex behavior in probability models and the associated process under study. Rényi entropy is monotonically decreasing in η, while Shannon entropy (4) is obtained from (5) for η ↑ 1. For details see Nadarajah and Zografos [26], Zografos and Nadarajah [27] and Zografos [28].
Proof. For η > 0 and η ≠ 1, using the joint density of X and Y given by (1), we evaluate G(η) = ∫∫ f^η(x, y) dx dy, where the final expression follows by using (2). Now, taking the logarithm of G(η) and using (5), we get H_R(η, f). The Shannon entropy is obtained from H_R(η, f) by taking η ↑ 1 and using L'Hôpital's rule.
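As a numerical cross-check of the η ↑ 1 limit (a sketch under the assumed form of (1); kernel, renyi and shannon are names introduced here), the Rényi entropy can be computed by direct quadrature and compared with the Shannon entropy:

    import numpy as np
    from scipy.integrate import dblquad

    a, b, c, lam1, lam2 = 2.0, 3.0, 2.5, 1.0, 0.5

    def kernel(x, y):
        # unnormalized GBKB kernel under the assumed form of (1)
        return x**(a - 1) * y**(b - 1) * (1 - x - y)**(c - 1) * np.exp(-lam1 * x - lam2 * y)

    Z, _ = dblquad(lambda y, x: kernel(x, y), 0, 1, 0, lambda x: 1 - x)

    def renyi(eta):
        # H_R(eta, f) = (1 - eta)^(-1) * log( integral of f^eta ), cf. (5)
        g, _ = dblquad(lambda y, x: (kernel(x, y) / Z)**eta, 0, 1, 0, lambda x: 1 - x)
        return np.log(g) / (1 - eta)

    def shannon():
        # H_SH(f) = - integral of f * log f, cf. (4)
        h, _ = dblquad(lambda y, x: -(kernel(x, y) / Z) * np.log(kernel(x, y) / Z),
                       0, 1, 0, lambda x: 1 - x)
        return h

    print(renyi(0.999), shannon())  # the two values should nearly agree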

Proof.
Making the transformation W = XY, with the Jacobian J(x, y → x, w) = x^{−1}, in (1), we obtain the joint density of X and W, where p < x < q with p = [1 − (1 − 4w)^{1/2}]/2, q = [1 + (1 − 4w)^{1/2}]/2 and 0 < w < 1/4. Now, integrating x out of (10), we obtain the marginal density of W as an integral. Substituting t = (q − x)/(q − p) in (11) and evaluating the resulting integral using (A.2), after simplification we get the desired result.
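Since the closed form of the density of W = XY is not reproduced in this excerpt, a quick simulation under the assumed form of (1) at least confirms the support (0, 1/4) implied by p and q (a sketch; the rejection sampler is a generic device introduced here, not taken from the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    a, b, c, lam1, lam2 = 2.0, 3.0, 2.5, 1.0, 0.5

    def kernel(x, y):
        # unnormalized GBKB kernel under the assumed form of (1)
        return x**(a - 1) * y**(b - 1) * (1 - x - y)**(c - 1) * np.exp(-lam1 * x - lam2 * y)

    # crude upper bound on the kernel over the open simplex
    xs = np.linspace(1e-3, 1 - 1e-3, 500)
    X, Y = np.meshgrid(xs, xs)
    inside = X + Y < 1
    M = 1.1 * kernel(X[inside], Y[inside]).max()

    # batched rejection sampling from the assumed density (1)
    x, y = rng.uniform(0, 1, (2, 500_000))
    valid = x + y < 1
    xv, yv = x[valid], y[valid]
    keep = rng.uniform(0, M, xv.size) < kernel(xv, yv)
    w = xv[keep] * yv[keep]
    print(w.min(), w.max())  # both lie inside (0, 1/4)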
Theorem 4.2. Let (X, Y) ∼ GBKB(a, b; c; cλ_1, cλ_2) and let U and V be defined by U = cX and V = cY. Then, as c → ∞, the joint density f_{U,V}(u, v) of U and V converges to a product of independent gamma densities.
Proof. In the joint density of X and Y given by (1), transform U = cX and V = cY with the Jacobian J(x, y → u, v) = c^{−2} to get the joint density of U and V. Now, observing that lim_{c→∞} [1 − (u + v)/c]^{c−1} = exp[−(u + v)], we get the desired result.
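Written out under the assumed form of (1) (a reconstruction), the limit reads

lim_{c→∞} f_{U,V}(u, v) = [(1 + λ_1)^a / Γ(a)] u^{a−1} e^{−(1+λ_1)u} × [(1 + λ_2)^b / Γ(b)] v^{b−1} e^{−(1+λ_2)v},  u > 0, v > 0,

so that U and V are asymptotically independent gamma variables with shape parameters a and b and rate parameters 1 + λ_1 and 1 + λ_2, respectively.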
An integral representation of Φ₂ is given by

Φ₂(a, b; c; x, y) = [Γ(c) / (Γ(a) Γ(b) Γ(c − a − b))] ∫₀¹ ∫₀^{1−u} u^{a−1} v^{b−1} (1 − u − v)^{c−a−b−1} exp(xu + yv) dv du,

where Re(a) > 0, Re(b) > 0 and Re(c − a − b) > 0. Substituting t = (1 − u)^{−1} v and integrating over t in the above expression, the Humbert confluent hypergeometric function Φ₂ can also be represented as

Φ₂(a, b; c; x, y) = [Γ(c) / (Γ(a) Γ(c − a))] ∫₀¹ u^{a−1} (1 − u)^{c−a−1} exp(xu) ₁F₁(b; c − a; y(1 − u)) du.

For properties and further results on these functions the reader is referred to Luke [31] and Srivastava and Karlsson [30]. Next, we define the Kummer-beta distribution due to Ng and Kotz [32].
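As a numerical illustration (the function names below are local to this sketch; the double series Φ₂(a, b; c; x, y) = Σ_{m,n} (a)_m (b)_n x^m y^n / ((c)_{m+n} m! n!) is the standard definition), the integral representation above can be checked against a truncated series:

    import math
    import numpy as np
    from scipy.integrate import dblquad

    def phi2_series(a, b, c, x, y, terms=40):
        # truncated double series for Humbert's Phi_2; (a)_m = Gamma(a+m)/Gamma(a)
        total = 0.0
        for m in range(terms):
            for n in range(terms):
                num = (math.gamma(a + m) / math.gamma(a)) * (math.gamma(b + n) / math.gamma(b))
                den = (math.gamma(c + m + n) / math.gamma(c)) * math.factorial(m) * math.factorial(n)
                total += num / den * x**m * y**n
        return total

    def phi2_integral(a, b, c, x, y):
        # Euler-type integral over the simplex (valid for c - a - b > 0)
        k = math.gamma(c) / (math.gamma(a) * math.gamma(b) * math.gamma(c - a - b))
        f = lambda v, u: u**(a - 1) * v**(b - 1) * (1 - u - v)**(c - a - b - 1) * np.exp(x*u + y*v)
        val, _ = dblquad(f, 0, 1, 0, lambda u: 1 - u)
        return k * val

    print(phi2_series(1.5, 2.0, 6.0, -0.8, -0.4))
    print(phi2_integral(1.5, 2.0, 6.0, -0.8, -0.4))  # should agree closely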
Definition A.1. The random variable X is said to have a Kummer-beta distribution, denoted by X ∼ KB(α, β, λ), if its p.d.f. is given by

f(x) = [B(α, β) ₁F₁(α; α + β; −λ)]^{−1} x^{α−1} (1 − x)^{β−1} exp(−λx),  0 < x < 1,

where α > 0, β > 0, −∞ < λ < ∞, and B(a, b) = Γ(a)Γ(b)/Γ(a + b) is the beta function. Note that for λ = 0 the above density simplifies to a beta type I density with parameters α and β.