UNIVERSIDAD NACIONAL AUTÓNOMA DE MÉXICO
PROGRAMA DE MAESTRÍA Y DOCTORADO EN CIENCIAS MATEMÁTICAS Y DE LA ESPECIALIZACIÓN EN ESTADÍSTICA APLICADA

SAMPLE COPULA PROPERTIES AND WEAK CONVERGENCE OF THE SAMPLE PROCESS

A thesis submitted for the degree of Doctor en Ciencias, presented by Ricardo Hoyos Argüelles.
Thesis advisor: Dr. José María González-Barrios Murguía, IIMAS, UNAM.
Tutoring committee: Dra. María Asunción Begoña Fernández Fdez, FC, UNAM; Dr. Arturo Erdely Ruiz, FES Acatlán, UNAM.
Ciudad de México, November 29, 2017.

Contents

Introduction — I
1 Preliminaries — 1
1.1 Basic definitions and properties of copulas — 1
1.2 Archimedean copulas and types of dependence — 6
2 Comparison between the empirical copula and the sample d-copula of order m — 11
2.1 Definitions and results about the empirical copula — 11
2.2 Sample d-copula of order m — 15
2.3 Simulation study: dimension two — 26
2.4 Simulation study: dimension three — 40
2.5 A method for estimating an adequate m — 46
3 Moments of some of the random variables associated with the sample copula — 61
3.1 Case: dimension two when m divides n — 67
3.2 Case: dimension three when m divides n — 77
3.3 Summary of results and generalizations — 94
3.4 Case: m does not divide n — 97
4 Convergence results of the sample copula — 106
4.1 Weak convergence of empirical process — 106
4.2 Weak convergence of sample copula process — 111
4.3 Simulation study: Gaussian process — 124
Conclusions — 128
References — 130

Introduction

The sample d-copula of order m is a new way to estimate a d-copula, see [24]; it is an estimator which is already a copula, unlike the empirical copula, which is only a subcopula. In this thesis we study the two main properties of the sample d-copula, namely a Glivenko-Cantelli theorem and the asymptotic properties of its associated empirical process. The main objective of this work is to obtain the same results that exist for the empirical copula. The parameter m, which is an integer with $2 \le m \le n$, where n is the sample size, is not difficult to estimate, and in most cases it is smaller than n. Besides, the evaluation of the sample copula is fairly simple and quicker than that of the empirical copula. In Chapter 1 we give a review of the basic results of the theory of copulas and we describe the main definitions needed for the rest of the thesis. In Chapter 2 we provide definitions and results concerning the sample and empirical copulas, and we give the proof of a version of the Glivenko-Cantelli theorem for the sample d-copula. We also provide a large number of simulations to compare, based on samples, the approximation of the empirical copula and of the sample d-copula to the real copula.
At the end of that chapter we provide a method to estimate the order m, with $2 \le m \le n$, of the sample d-copula using the estimated value of Spearman's rho in dimension two, and we also provide a methodology for higher dimensions. In Chapter 3 we present the behavior of the random variables associated with the counting process generated by the sample d-copula. We obtain results similar to the ones found in [8] for the two-dimensional case, and we extend the study of these random variables to higher dimensions. In Chapter 4 we establish the weak convergence of the process generated by the sample d-copula using the convergence result for the empirical copula process. We obtain weak convergence to a Gaussian process, and we evaluate its variance-covariance structure. Finally, in this chapter we provide several simulations to test the convergence of this process at each point. In the final remarks we point out the advantages of using the sample d-copula instead of the empirical copula, and we also give several possible extensions and applications of the sample d-copula for real data.

1 Preliminaries

This chapter describes the basic results about copulas used in this work; we provide some examples related to Archimedean copulas and the relations between copulas and other dependence measures. Most of the results presented here can be found in Nelsen's book [37].

1.1 Basic definitions and properties of copulas

Definition 1.1 Let $S_1, S_2 \subset \overline{\mathbb{R}}$ be nonempty sets, with $\overline{\mathbb{R}} = [-\infty,\infty]$ the extended real line. A function $H : S_1 \times S_2 \to \overline{\mathbb{R}}$ is grounded if there exist a least element $a_1 \in S_1$ and a least element $a_2 \in S_2$ such that $H(x, a_2) = H(a_1, y) = 0$ for all $(x, y) \in S_1 \times S_2$.

Definition 1.2 Let $S_1, S_2 \subset \overline{\mathbb{R}}$ be nonempty sets, with $\overline{\mathbb{R}} = [-\infty,\infty]$ the extended real line. Let $B = [x_1, x_2] \times [y_1, y_2]$ be a box with vertices in the domain of the function H. We define the H-volume of B by
\[ V_H(B) = H(x_2, y_2) - H(x_2, y_1) - H(x_1, y_2) + H(x_1, y_1). \]

Definition 1.3 Let $S_1, S_2 \subset \overline{\mathbb{R}}$ be nonempty sets and let $H : S_1 \times S_2 \to \overline{\mathbb{R}}$ be a bivariate function. We say that H is 2-increasing if $V_H(B) \ge 0$ for every box B with vertices in the domain of H.

Definition 1.4 A two-dimensional subcopula (or 2-subcopula, or briefly subcopula) is a function C′ with the following properties:
1. $\mathrm{Dom}(C') = S_1 \times S_2$, where $S_1$ and $S_2$ are subsets of $I = [0, 1]$ such that $0, 1 \in S_1$ and $0, 1 \in S_2$.
2. C′ is a grounded function and is 2-increasing.
3. For all $u \in S_1$ and all $v \in S_2$, $C'(u, 1) = u$ and $C'(1, v) = v$.
The range of the function C′ is a subset of $I = [0, 1]$.

Definition 1.5 A two-dimensional copula (or 2-copula, or briefly copula) is a 2-subcopula C with domain equal to $I^2 = [0, 1]^2$. That is, a copula C is a function with domain $I^2$ and range I that satisfies the following properties:
1. For all $u, v \in I$, $C(u, 0) = 0 = C(0, v)$, $C(u, 1) = u$ and $C(1, v) = v$.
2. For all $u_1, u_2, v_1, v_2 \in I$ such that $u_1 \le u_2$ and $v_1 \le v_2$,
\[ C(u_2, v_2) - C(u_2, v_1) - C(u_1, v_2) + C(u_1, v_1) \ge 0. \]

Example 1.6 An example of a copula is the function $\Pi^2 : [0,1]^2 \to [0,1]$ defined by $\Pi^2(u, v) = u \cdot v$; this function satisfies the above conditions. If we define $W^2(u, v) = \max\{0, u + v - 1\}$ and $M^2(u, v) = \min\{u, v\}$, then $M^2$ and $W^2$ are also copulas, and for every subcopula or copula C we have $W^2(u, v) \le C(u, v) \le M^2(u, v)$ for every $u, v \in [0, 1]$. The copulas $W^2$ and $M^2$ are called the lower and upper Fréchet-Hoeffding bounds.
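The following small numerical check is our own illustration, not part of the thesis: it verifies the Fréchet-Hoeffding bounds on a grid for one concrete copula. The Clayton copula with θ = 2 is used only as a convenient test case; any copula would do.

```python
# Minimal sketch: verify W2 <= C <= M2 on a grid for the Clayton copula.
import numpy as np

def W2(u, v):
    return np.maximum(0.0, u + v - 1.0)

def M2(u, v):
    return np.minimum(u, v)

def clayton(u, v, theta=2.0):
    # Clayton copula with theta > 0; chosen here only as an example.
    return (u**(-theta) + v**(-theta) - 1.0)**(-1.0 / theta)

grid = np.linspace(0.01, 0.99, 99)
U, V = np.meshgrid(grid, grid)
C = clayton(U, V)
assert np.all(W2(U, V) <= C + 1e-12) and np.all(C <= M2(U, V) + 1e-12)
print("Frechet-Hoeffding bounds hold on the grid.")
```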
Lemma 1.7 For every subcopula C and every $u_1, u_2, v_1, v_2 \in [0, 1]$ we have
\[ |C(u_1, v_1) - C(u_2, v_2)| \le |u_1 - u_2| + |v_1 - v_2|, \]
which guarantees that C is uniformly continuous (Lipschitz).

Next, the above definitions extend to the n-dimensional case.

Definition 1.8 Let $S_1, \ldots, S_n \subset \overline{\mathbb{R}}$ be nonempty sets, with $\overline{\mathbb{R}} = [-\infty,\infty]$ the extended real line. A function $H : S_1 \times \cdots \times S_n \to \overline{\mathbb{R}}$ is grounded if there exists a least element $a_k \in S_k$ for every $k \in \{1, \ldots, n\}$ such that $H(t) = 0$ for all $t \in \mathrm{Dom}(H)$ with $t_k = a_k$ for at least one $k \in \{1, \ldots, n\}$.

Definition 1.9 Let $S_1, \ldots, S_n \subset \overline{\mathbb{R}}$ be nonempty sets, with $\overline{\mathbb{R}} = [-\infty,\infty]$ the extended real line. Let $H : S_1 \times \cdots \times S_n \to \overline{\mathbb{R}}$ be a function and let $B = [a, b]$ be an n-box with vertices in the domain of H. We define the H-volume of B by
\[ V_H(B) = \sum_{c} \mathrm{sgn}(c) H(c), \quad (1) \]
where the sum is taken over all vertices c of B and
\[ \mathrm{sgn}(c) = \begin{cases} 1 & \text{if } c_k = a_k \text{ for an even number of values } k \\ -1 & \text{if } c_k = a_k \text{ for an odd number of values } k. \end{cases} \]

Example 1.10 Let $H : \overline{\mathbb{R}}^3 \to \overline{\mathbb{R}}$ be a function and let $B = [x_1, x_2] \times [y_1, y_2] \times [z_1, z_2]$ be a 3-box. Then the H-volume of B is given by
\[ V_H(B) = H(x_2, y_2, z_2) - H(x_2, y_2, z_1) - H(x_2, y_1, z_2) - H(x_1, y_2, z_2) + H(x_2, y_1, z_1) + H(x_1, y_2, z_1) + H(x_1, y_1, z_2) - H(x_1, y_1, z_1). \]

Definition 1.11 Let $S_1, \ldots, S_n \subset \overline{\mathbb{R}}$ be nonempty sets and let $H : S_1 \times \cdots \times S_n \to \overline{\mathbb{R}}$ be an n-variate function. We say that H is n-increasing if $V_H(B) \ge 0$ for every n-box B with vertices in the domain of H.

Definition 1.12 Let $S_1, \ldots, S_n \subset \overline{\mathbb{R}}$ be nonempty sets and let $H : S_1 \times \cdots \times S_n \to \overline{\mathbb{R}}$ be an n-variate function. If every set $S_k$ has a maximum element $b_k$, $k \in \{1, \ldots, n\}$, we say that H has marginals, and we define the k-th one-dimensional marginal function of H, denoted $H_k : S_k \to \overline{\mathbb{R}}$, by
\[ H_k(x) = H(b_1, \ldots, b_{k-1}, x, b_{k+1}, \ldots, b_n) \]
for all $x \in S_k$. We can define marginals of order higher than one (k-marginals) by fixing fewer positions in the domain of H.

Definition 1.13 An n-dimensional subcopula (or n-subcopula) is a function C′ with the following properties:
1. $\mathrm{Dom}(C') = S_1 \times \cdots \times S_n$, with $S_k \subset I$ such that $0, 1 \in S_k$ for every $k \in \{1, \ldots, n\}$.
2. C′ is grounded and is n-increasing.
3. C′ has marginals and each marginal $C'_k$, for every $k \in \{1, \ldots, n\}$, satisfies $C'_k(u) = u$ for all $u \in S_k$.
The range of the function C′ is a subset of $I = [0, 1]$.

Definition 1.14 An n-dimensional copula (or n-copula) is an n-subcopula C with domain equal to $I^n = [0, 1]^n$. That is, an n-copula C is a function with domain $I^n$ and range I that satisfies the following properties:
1. For all $u \in I^n$, $C(u) = 0$ if at least one coordinate of u is 0, and if all coordinates of u equal 1 except $u_k$, then $C(u) = u_k$.
2. For all $a, b \in I^n$ such that $a \le b$, that is, $a_k \le b_k$ for every $k \in \{1, \ldots, n\}$, we have $V_C([a, b]) \ge 0$.

Remark 1.15 For all $2 \le k < n$, every k-marginal function of C is a k-copula.

Lemma 1.16 Let $C : [0, 1]^d \to [0, 1]$ be a d-copula. We define, for every $u = (u_1, \ldots, u_d) \in I^d = [0,1]^d$,
\[ W^d(u) = \max\Big(0, \sum_{i=1}^d u_i - d + 1\Big) \quad \text{and} \quad M^d(u) = \min(u_1, \ldots, u_d). \]
Then for every d-copula or d-subcopula C,
\[ W^d(u) \le C(u) \le M^d(u). \quad (2) \]
In this case $M^d$ is always a d-copula. However, $W^d$ is never a d-copula for $d > 2$, because the volume of the box $R = [1/2, 1]^d$ is given by $V_{W^d}(R) = 1 - (d/2) < 0$ for every $d > 2$.
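The next sketch is ours (not from the thesis): it implements the sign expansion of equation (1) directly and confirms numerically that $V_{W^d}([1/2,1]^d) = 1 - d/2$, the computation behind the last claim.

```python
# Sketch of the H-volume of equation (1): sum H over the vertices of an
# n-box with sign +1/-1 according to the parity of lower endpoints.
import itertools

def H_volume(H, a, b):
    d = len(a)
    vol = 0.0
    for eps in itertools.product([0, 1], repeat=d):
        c = [b[k] if eps[k] else a[k] for k in range(d)]
        # sgn(c) = +1 when c_k = a_k for an even number of k's
        sign = (-1) ** (d - sum(eps))
        vol += sign * H(c)
    return vol

def Wd(u):
    return max(0.0, sum(u) - len(u) + 1.0)

for d in (2, 3, 4):
    print(d, H_volume(Wd, [0.5] * d, [1.0] * d), "expected", 1 - d / 2)
```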
However, the inequality (2) is sharp, because for every $u \in [0, 1]^d$ there exists a d-copula C, depending on u, such that the left inequality in (2) becomes an equality.

Lemma 1.17 For every $u, v \in [0, 1]^d$ and every d-subcopula C we have
\[ |C(u) - C(v)| \le \sum_{i=1}^d |u_i - v_i|. \]
Again, any d-copula C is a continuous (Lipschitz) function.

Theorem 1.18 (Sklar's Theorem) Let H be a joint d-distribution function for $d \ge 2$ with margins $F_1, F_2, \ldots, F_d$. Then there exists a d-copula C such that for every $(x_1, x_2, \ldots, x_d) \in \mathbb{R}^d$,
\[ H(x_1, x_2, \ldots, x_d) = C(F_1(x_1), F_2(x_2), \ldots, F_d(x_d)). \quad (3) \]
If $F_1, F_2, \ldots, F_d$ are continuous, then C is unique; otherwise, C is uniquely determined on $\mathrm{Ran}(F_1) \times \mathrm{Ran}(F_2) \times \cdots \times \mathrm{Ran}(F_d)$. Conversely, if C is a d-copula and $F_1, F_2, \ldots, F_d$ are distribution functions, then the function H defined in equation (3) is a joint d-distribution function.

Definition 1.19 Given a copula C and $a, b \in [0, 1]$, we define the horizontal section of C at a by $h_a(t) = C(t, a)$ and the vertical section of C at b by $v_b(t) = C(b, t)$ for every $t \in [0, 1]$.

Remark 1.20 Using equation (1) we can see that the functions $h_a(t)$ and $v_b(t)$ are increasing functions.

Definition 1.21 We define the diagonal section of C by $\delta_C(t) = C(t, t)$ for every $t \in [0, 1]$.

Remark 1.22 We can see that $\delta_C(t)$ is also an increasing function and, in the case of d-copulas C with $d > 2$, we have equivalent definitions for each coordinate by fixing $d - 1$ coordinates, and we define $\delta_C(t) = C(t, t, \ldots, t)$.

Remark 1.23 Given a d-copula C, for any $(v_1, \ldots, v_{i-1}, v_{i+1}, \ldots, v_d) \in [0, 1]^{d-1}$ the partial derivative
\[ \frac{\partial C}{\partial u_i}(v_1, \ldots, v_{i-1}, u_i, v_{i+1}, \ldots, v_d) \]
exists for almost every $u_i \in [0, 1]$ and for every $i \in \{1, \ldots, d\}$. In fact,
\[ 0 \le \frac{\partial}{\partial u_i} C(v_1, \ldots, v_{i-1}, u_i, v_{i+1}, \ldots, v_d) \le 1. \]
The last inequality follows from the Lipschitz continuity of C. Furthermore, the partials are defined and nondecreasing almost everywhere (a.e.) on $[0, 1]^{d-1}$ with respect to Lebesgue measure. The mixed partials of C also exist almost everywhere with respect to Lebesgue measure. In fact,
\[ c(u_1, \ldots, u_d) = \frac{\partial^d}{\partial u_1 \cdots \partial u_d} C(u_1, \ldots, u_d) \]
also exists a.e., and it is defined as the density function c of the distribution function C. For example, in the case of $\Pi^d$ the density is given simply by $\pi^d(u_1, \ldots, u_d) = 1$ for every $(u_1, \ldots, u_d) \in [0, 1]^d$.

Definition 1.24 We define the support of a d-copula C by
\[ \mathrm{supp}(C) = \{u \in [0, 1]^d \mid V_C(R_r) > 0 \text{ for every } r > 0\}, \]
where $R_r = [u - r, u + r] \subset [0, 1]^d$ and $r = (r, r, \ldots, r)$ (d times).

Example 1.25 The support of the copula $M^2$ is the main diagonal $D = \{(u, v) \in [0, 1]^2 \mid u = v\}$. Similarly, the support of the copula $W^2$ is the secondary diagonal $D_1 = \{(u, v) \in [0, 1]^2 \mid u + v = 1\}$.

Definition 1.26 For any d-copula C we define
\[ C_{a.c.}(u_1, \ldots, u_d) = \int_0^{u_1} \cdots \int_0^{u_d} \frac{\partial^d}{\partial v_1 \cdots \partial v_d} C(v_1, \ldots, v_d)\, dv_d \cdots dv_1, \]
where a.c. stands for absolutely continuous. Also define
\[ C_{s.}(u_1, \ldots, u_d) = C(u_1, \ldots, u_d) - C_{a.c.}(u_1, \ldots, u_d), \]
where s. stands for singular. If $C = C_{a.c.}$ we say that C is absolutely continuous; if $C = C_{s.}$ we say C is singular; in any other case C is hybrid. We can observe that $C_{s.}(1, \ldots, 1)$ is the measure of the singular part, if it exists.

1.2 Archimedean copulas and types of dependence

Definition 1.27 Let $\varphi : [0, 1] \to [0, \infty]$ be a continuous, strictly decreasing, convex function, that is, if $u, v \in [0, 1]$ and $0 \le \alpha \le 1$ then $\varphi(\alpha u + (1-\alpha)v) \le \alpha\varphi(u) + (1-\alpha)\varphi(v)$, such that $\varphi(1) = 0$; in this case we call $\varphi$ a generator.
Then, if we define
\[ \varphi^{[-1]}(t) = \begin{cases} \varphi^{-1}(t) & \text{if } 0 \le t \le \varphi(0) \\ 0 & \text{if } \varphi(0) \le t \le \infty, \end{cases} \]
called the pseudo-inverse of $\varphi$, and we define
\[ C(u, v) = \varphi^{[-1]}(\varphi(u) + \varphi(v)) \quad \text{for every } (u, v) \in [0, 1]^2, \quad (4) \]
then C is always a copula with generator $\varphi$; $\varphi$ is strict if $\varphi(0) = \infty$ and non-strict if $\varphi(0) < \infty$. The copulas given in equation (4) are called Archimedean copulas.

Lemma 1.28 Let $\varphi$ be a generator of C as in equation (4). Then:
i) C is symmetric, that is, $C(u, v) = C(v, u)$ for every $(u, v) \in [0, 1]^2$.
ii) C is associative, that is, $C(C(u, v), w) = C(u, C(v, w))$ for every $u, v, w \in [0, 1]$.
iii) If $c > 0$ then $\psi = c \cdot \varphi$ is also a generator of C.

Theorem 1.29 C is an Archimedean copula if and only if C is an associative copula such that $\delta_C(t) = C(t, t) < t$ for every $t \in (0, 1)$.

Remark 1.30 We have the following observations:
a) If we define $\varphi(t) = -\ln(t)$, then $\varphi$ is a strict generator with $\varphi^{[-1]}(t) = \varphi^{-1}(t) = \exp(-t)$, and using equation (4) we have that $\Pi^2$ is an Archimedean copula.
b) If we define $\varphi(t) = 1 - t$, then $\varphi$ is a non-strict generator with $\varphi^{[-1]}(t) = \max\{1 - t, 0\}$. Hence $W^2$ is also Archimedean.
c) However, using Theorem 1.29, $M^2$ is not Archimedean, because $\delta_{M^2}(t) = t$ for every $t \in [0, 1]$.

Remark 1.31 Several families of copulas used in practice for modeling are Archimedean; among them we have:
1. Clayton family, with generator $\varphi(t) = \frac{1}{\theta}(t^{-\theta} - 1)$, where $\theta \in [-1, \infty) \setminus \{0\}$.
2. Ali-Mikhail-Haq (AMH), with generator $\varphi(t) = \ln\big(\frac{1 - \theta(1-t)}{t}\big)$, where $\theta \in [-1, 1)$.
3. Gumbel, with generator $\varphi(t) = (-\ln(t))^\theta$, where $\theta \in [1, \infty)$.
4. Frank, with generator $\varphi(t) = -\ln\big(\frac{\exp(-\theta t) - 1}{\exp(-\theta) - 1}\big)$, where $\theta \in (-\infty, \infty) \setminus \{0\}$.

Definition 1.32 Let $U_n = \{(x_1, y_1), \ldots, (x_n, y_n)\}$ be a random sample from a continuous bivariate vector (X, Y). For every $1 \le i < j \le n$ we say that $(x_i, y_i)$ and $(x_j, y_j)$ are concordant if and only if $(x_i - x_j)(y_i - y_j) > 0$, and they are discordant if $(x_i - x_j)(y_i - y_j) < 0$. The sample Kendall's tau is defined as
\[ \tau = \frac{c - d}{c + d} = (c - d)\Big/\binom{n}{2}, \]
where c and d are the numbers of concordant and discordant pairs, respectively. It can be thought of as the probability of concordance minus the probability of discordance. Therefore, the population version of Kendall's tau is defined by
\[ \tau = \tau_{X,Y} = P[(X_1 - X_2)(Y_1 - Y_2) > 0] - P[(X_1 - X_2)(Y_1 - Y_2) < 0], \]
where $(X_1, Y_1)$ and $(X_2, Y_2)$ are independent and identically distributed random vectors with common joint distribution F.

Theorem 1.33 Let $(X_1, Y_1)$ and $(X_2, Y_2)$ be independent vectors of continuous random variables with joint distributions $H_1$ and $H_2$, respectively, with common margins $F_1$ (of $X_1$ and $X_2$) and $F_2$ (of $Y_1$ and $Y_2$). Let $C_1$ and $C_2$ be the respective copulas of $(X_1, Y_1)$ and $(X_2, Y_2)$. Let
\[ Q = Q(C_1, C_2) = P[(X_1 - X_2)(Y_1 - Y_2) > 0] - P[(X_1 - X_2)(Y_1 - Y_2) < 0]. \]
Then
\[ Q(C_1, C_2) = 4 \iint_{[0,1]^2} C_2(u, v)\, dC_1(u, v) - 1. \quad (5) \]

Remark 1.34 We can see that $Q(C_1, C_2) = Q(C_2, C_1)$. We also have $Q(M^2, M^2) = 1$, $Q(M^2, \Pi^2) = 1/3$, $Q(M^2, W^2) = 0$, $Q(W^2, \Pi^2) = -1/3$, $Q(W^2, W^2) = -1$ and $Q(\Pi^2, \Pi^2) = 0$.

Lemma 1.35 If X and Y are continuous random variables with copula C, then the population version of Kendall's tau is given by
\[ \tau_{X,Y} = Q(C, C) = 4 \iint_{[0,1]^2} C(u, v)\, dC(u, v) - 1 = 4E(C(U, V)) - 1. \]
It is also denoted by $\tau_C$. Hence we have $\tau_{M^2} = 1$, $\tau_{\Pi^2} = 0$ and $\tau_{W^2} = -1$. In fact, if $C_1(u, v) \le C_2(u, v)$ for every $(u, v) \in [0, 1]^2$ then $\tau_{C_1} \le \tau_{C_2}$, and using the Fréchet-Hoeffding bounds we have $-1 \le \tau_C \le 1$ for every copula C.

Example 1.36 For the Clayton family $C_\theta$ with parameter $\theta \ge -1$, we have $\tau_{C_\theta} = \theta/(\theta + 2)$. Another example is the Gumbel family with parameter $\theta \ge 1$; in this case we have $\tau_{C_\theta} = (\theta - 1)/\theta$.
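The following is an illustrative sketch of equation (4), written by us rather than taken from the thesis: it builds an Archimedean copula from its generator and pseudo-inverse, using the strict Clayton generator with θ > 0 as the example, and checks the symmetry and associativity of Lemma 1.28 at one point.

```python
# Sketch: Archimedean copula C(u,v) = phi^{[-1]}(phi(u) + phi(v)), equation (4).
import numpy as np

theta = 2.0

def phi(t):
    # Strict Clayton generator for theta > 0, so phi(0) = inf.
    return (t**(-theta) - 1.0) / theta

def phi_inv(t):
    # For a strict generator the pseudo-inverse is the ordinary inverse.
    return (1.0 + theta * t)**(-1.0 / theta)

def C(u, v):
    return phi_inv(phi(u) + phi(v))

u, v, w = 0.3, 0.6, 0.8
print(np.isclose(C(u, v), C(v, u)))                    # symmetry
print(np.isclose(C(C(u, v), w), C(u, C(v, w))))        # associativity
```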
Example 1.37 If C is Archimedean with generator $\varphi$, then we can find Kendall's tau using the formula
\[ \tau_C = 1 + 4 \int_0^1 \frac{\varphi(t)}{\varphi'(t)}\, dt. \]

Definition 1.38 Another measure of association is Spearman's rho. Let $(X_1, Y_1)$, $(X_2, Y_2)$ and $(X_3, Y_3)$ be three independent random vectors with common continuous joint distribution F with margins $F_1$ and $F_2$. The population version of Spearman's rho is defined as proportional to the probability of concordance minus the probability of discordance of the vectors $(X_1, Y_1)$ and $(X_2, Y_3)$. In fact,
\[ \rho = \rho_{X,Y} = 3\big(P[(X_1 - X_2)(Y_1 - Y_3) > 0] - P[(X_1 - X_2)(Y_1 - Y_3) < 0]\big). \]
Therefore, using (5), we have that if X and Y are continuous random variables with copula C, then
\[ \rho_{X,Y} = \rho_C = 3Q(C, \Pi^2) = 12 \iint_{[0,1]^2} uv\, dC(u, v) - 3 = 12 \iint_{[0,1]^2} C(u, v)\, du\, dv - 3. \quad (6) \]

Example 1.39 We have that convex combinations of copulas are copulas. Then we can consider the Fréchet family of copulas defined as $C_{\alpha,\beta}(u, v) = \alpha \cdot M^2(u, v) + (1 - \alpha - \beta) \cdot \Pi^2(u, v) + \beta \cdot W^2(u, v)$ for every $\alpha, \beta \ge 0$ such that $\alpha + \beta \le 1$ and for every $(u, v) \in [0, 1]^2$. Using the previous results, we have
\[ Q(C_{\alpha,\beta}, \Pi^2) = \alpha Q(M^2, \Pi^2) + (1 - \alpha - \beta) Q(\Pi^2, \Pi^2) + \beta Q(W^2, \Pi^2). \]
So $\rho_{C_{\alpha,\beta}} = \alpha - \beta$. We also observe from the previous observations that $-1 \le \rho_C \le 1$ for every copula C.

Remark 1.40 If U and V are uniform (0, 1) random variables with copula C, then
\[ \rho_C = 12 \iint_{[0,1]^2} uv\, dC(u, v) - 3 = 12E(UV) - 3 = \frac{E(UV) - 1/4}{1/12} = \frac{E(UV) - E(U)E(V)}{\sqrt{\mathrm{Var}(U)}\sqrt{\mathrm{Var}(V)}}. \]
Here we used that $E(U) = E(V) = 1/2$, so $E(U)E(V) = 1/4$, and $\mathrm{Var}(U) = \mathrm{Var}(V) = 1/12$. Hence Spearman's rho for a pair of continuous random variables X and Y is identical to Pearson's correlation coefficient for the random variables $U = F_1(X)$ and $V = F_2(Y)$.

Lemma 1.41 If X and Y are continuous random variables, then
\[ -1 \le 3\tau - 2\rho \le 1, \qquad \frac{1 + \rho}{2} \ge \Big(\frac{1 + \tau}{2}\Big)^2 \quad \text{and} \quad \frac{1 - \rho}{2} \ge \Big(\frac{1 - \tau}{2}\Big)^2. \]
For $\tau \ge 0$,
\[ \frac{3\tau - 1}{2} \le \rho \le \frac{1 + 2\tau - \tau^2}{2}, \]
and if $\tau \le 0$,
\[ \frac{\tau^2 + 2\tau - 1}{2} \le \rho \le \frac{1 + 3\tau}{2}. \]

Definition 1.42 Let X and Y be random variables. Then X and Y are positively quadrant dependent if for every $(x, y) \in \mathbb{R}^2$
\[ P(X \le x, Y \le y) \ge P(X \le x)P(Y \le y). \quad (7) \]
Equivalently, $P(X > x, Y > y) \ge P(X > x)P(Y > y)$. If equation (7) holds we write PQD(X, Y). In terms of copulas, (7) can be written as $C(u, v) \ge uv = \Pi^2(u, v)$ for every $(u, v) \in [0, 1]^2$. Negative quadrant dependence is defined analogously, by reversing the inequalities. If PQD(X, Y) then $3\tau_{X,Y} \ge \rho_{X,Y} \ge 0$.

Definition 1.43 Let X, Y be two random variables. We say that Y is left tail decreasing in X, denoted LTD(Y|X), if and only if $P(Y \le y \mid X \le x)$ is a decreasing function of x for all y; equivalently, if and only if $C(u, v)/u$ is decreasing in u, or if and only if $\partial C(u, v)/\partial u \le C(u, v)/u$ for almost every u. We say that Y is right tail increasing in X, denoted RTI(Y|X), if and only if $P(Y > y \mid X > x)$ is an increasing function of x for all y; equivalently, if and only if $[1 - u - v + C(u, v)]/(1 - u)$ is increasing in u, or if and only if $\partial C(u, v)/\partial u \ge [v - C(u, v)]/(1 - u)$ for almost every u. LTD(X|Y) and RTI(X|Y) are defined by interchanging X and Y, and we have that tail monotonicity implies PQD. We will later use Spearman's rho to define a methodology to establish the order of the sample copula in the two-dimensional case.
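Before moving on, here is a short sketch of ours (not thesis code) of the sample versions of Kendall's tau and Spearman's rho used above; in line with Remark 1.40, Spearman's rho is computed as Pearson's correlation of the ranks. For comparison, scipy.stats provides kendalltau and spearmanr.

```python
# Sketch: sample Kendall's tau (concordant minus discordant pairs) and
# sample Spearman's rho (Pearson correlation of the ranks, no ties assumed).
import numpy as np
from itertools import combinations

def kendall_tau(x, y):
    c = d = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            c += 1
        elif s < 0:
            d += 1
    n = len(x)
    return (c - d) / (n * (n - 1) / 2)

def spearman_rho(x, y):
    rx = np.argsort(np.argsort(x)) + 1   # ranks of x
    ry = np.argsort(np.argsort(y)) + 1   # ranks of y
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 0.5 * x + rng.normal(size=50)
print(kendall_tau(x, y), spearman_rho(x, y))
```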
2 Comparison between the empirical copula and the sample d-copula of order m

The d-sample copula of order m, $C^n_m$, is a d-copula which is a sample estimator of $C^{(m)}$, the checkerboard approximation of order m of a given d-copula C, see [36] and [9]. As we will see, the estimator $C^n_m$ approaches $C^{(m)}$ as the sample size n increases. If m is relatively large, $C^{(m)}$ is a very good approximation of C. Hence, $C^n_m$ can be thought of as a quasi-nonparametric method to estimate the real d-copula C, and it becomes a nonparametric estimator when we choose the order m. In this chapter we make an extensive comparison of the supremum distance between the empirical copula $C_n$ and the real copula C, and the supremum distance between the real copula and the sample copula of order m, $C^n_m$, which is simply the multilinear interpolation used in the proof of Sklar's Theorem, based on a sample of size n and a regular partition of $[0, 1]^d$ into $m^d$ d-boxes of Lebesgue measure $1/m^d$. The same partition is used to define the checkerboard approximation $C^{(m)}$. We used different sample sizes n, we considered a large class of frequently used families of copulas, and some interesting families of singular copulas. We simulated a large number of samples of each copula with sample sizes n = 20, n = 30 and n = 50, and we obtained the basic statistics of the supremum distance as m varies from 2 up to n. We observed that there always exist values of m, in general far smaller than n, such that the supremum distance between the sample d-copula of order m and the real copula C gives better approximations than the supremum distance between the empirical copula and the real copula. We also prove a Glivenko-Cantelli theorem for the sample copula, and, based on the simulations, we provide a method to estimate the value of m such that $C^n_m$, the sample d-copula of order m, is a good approximation of the real d-copula C. In the last section we give some remarks and observations, which include an important comment on why the case m = n is not a good option. We also see that we can easily simulate from the sample copula $C^n_m$, and that these simulated samples are quite similar to the original sample. On the other hand, we give strong evidence that it is possible to obtain a Glivenko-Cantelli theorem in total variation distance for the checkerboard approximation $C^{(m)}$ and the sample copula $C^n_m$.

2.1 Definitions and results about the empirical copula

We start this section by recalling the principal results about empirical copulas.

Definition 2.1 (Rank Function) Let $X_1, \ldots, X_n$ be a random sample of size n from a continuous random variable X, and let $X_{(1)}, \ldots, X_{(n)}$ be the corresponding order statistics. The rank function $r : \{1, \ldots, n\} \times \mathbb{R}^n \to \{1, \ldots, n\}$ is defined by $r(j, X_1, \ldots, X_n) = k$ if and only if $X_j = X_{(k)}$, where $j, k \in \{1, \ldots, n\}$.

Definition 2.2 (Modified Sample or Pseudosample) Let $X_1, \ldots, X_n$ be a random sample of size n from a continuous random vector X of dimension d, where $X_i = (X_{i,1}, \ldots, X_{i,d}) \in \mathbb{R}^d$ for every $i = 1, \ldots, n$. For $i \in I_n$, the i-th modified sample point $Y_i = (Y_{i,1}, \ldots, Y_{i,d})$ is defined by
\[ Y_{i,j} = \frac{1}{n}\, r(i, X_{1,j}, \ldots, X_{n,j}) \quad \text{for every } j \in I_d. \]
Here we observe that the modified sample $\{Y_1, \ldots, Y_n\}$ is always a subset of $I^d$.
Definition 2.3 (Empirical Copula) Let $X_1, \ldots, X_n$ be a random sample of size n from a random vector X of dimension d with continuous joint distribution H, where $X_i = (X_{i,1}, \ldots, X_{i,d}) \in \mathbb{R}^d$ for every $i = 1, \ldots, n$. Let $Y_1, \ldots, Y_n$ be the corresponding modified sample. We define the empirical copula $C_n : I^d \to I$ by
\[ C_n(u_1, \ldots, u_d) = \frac{1}{n} \sum_{i=1}^n \mathbf{1}_{\{Y_{i,1} \le u_1, \ldots, Y_{i,d} \le u_d\}} \quad \text{for every } (u_1, \ldots, u_d) \in I^d. \quad (8) \]

Remark 2.4 The empirical copula $C_n$ is an approximation of the real copula C. The empirical copula given in equation (8) has jumps of magnitude 1/n at each point $Y_i$ of the modified sample, for every $i \in I_n$, almost surely. Hence $C_n$ is not continuous, and therefore $C_n$ is not a d-copula. However, $C_n$ is a d-subcopula if we restrict its domain to $T^d$, where $T = \{0, 1/n, 2/n, \ldots, (n-1)/n, 1\}$. This follows easily by observing that, from the continuity of the joint distribution function H of the random vector X, the ranks in each coordinate vary from one to n. Using Sklar's Theorem for the continuous joint distribution H in Definition 2.3, there exists a unique d-copula C such that equation (3) holds. If we are sampling from a d-copula C instead of a joint distribution function H, Definition 2.3 of the empirical copula still holds using modified samples. A very important result about empirical copulas is the Glivenko-Cantelli theorem, which states that the empirical copula $C_n$ approaches the real copula C in supremum norm almost surely.

Theorem 2.5 (Glivenko-Cantelli) Let $C_n$ be the empirical copula constructed from a sample of size n of a continuous joint distribution H with d-copula C, or from a d-copula C, as in Remark 2.4. Then
\[ \lim_{n \to \infty} \sup_{(u_1, u_2, \ldots, u_d) \in I^d} |C_n(u_1, u_2, \ldots, u_d) - C(u_1, u_2, \ldots, u_d)| = 0 \quad \text{almost surely.} \quad (9) \]

It is quite important to observe here that the empirical copula has been used extensively in statistical applications to model multivariate data, see for example [4], [5], [6], [7], [10], [12], [14], [16], [18], [19], [20], [22], [23], [28], [30], [31], [39], [42], [45] and [48], just to cite some of them. However, as observed above, the empirical d-copula is not a d-copula. In order to correct this problem, some authors have proposed modifications of the empirical copula such as the linear B-spline copulas, see [43] and [17]. We will see later on that this approximation corresponds to our sample d-copula of order m = n, but we will also see that, in many instances, this approximation of the real copula does not improve the approximation given by the empirical copula. Another well known approximation for a copula is the Bernstein copula, which is based on polynomial approximations of a d-copula, see for example [41] or [36]. We will see that our proposal is far easier to implement, especially in higher dimensions and even with large sample sizes. The theory of the empirical d-copula, based on the theory of empirical processes, includes two important results which justify its use: the first one is the Glivenko-Cantelli theorem stated above, which says that for n large enough it approximates the real copula almost surely, and the second one is the asymptotic theory, which states that the normalized process converges to a Gaussian process with a given covariance structure, see for example [8], [13] and [44]. As we will see in this work, using the checkerboard approximation of order m, see [36], which is a very good approximation of a d-copula C that approaches C rapidly as m increases, we can obtain the Glivenko-Cantelli result for the sample d-copula of order m. See also [9] and [35].
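A minimal sketch of Definition 2.2 and equation (8), written by us as an illustration rather than taken from the thesis, for d = 2:

```python
# Sketch: modified sample (coordinatewise ranks over n) and the empirical
# copula C_n(u) = (1/n) * #{i : Y_i <= u coordinatewise}.
import numpy as np

def modified_sample(X):
    n, d = X.shape
    Y = np.empty_like(X, dtype=float)
    for j in range(d):
        Y[:, j] = (np.argsort(np.argsort(X[:, j])) + 1) / n
    return Y

def empirical_copula(Y, u):
    return np.mean(np.all(Y <= np.asarray(u), axis=1))

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 2))
Y = modified_sample(X)
print(empirical_copula(Y, (0.5, 0.5)))
```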
First, we observe the following in dimension one, for a univariate distribution function F and a random sample $X = \{X_1, X_2, \ldots, X_n\}$ from F of size n. We know that the empirical distribution function is defined by
\[ F_n(x) = \frac{1}{n} \sum_{i=1}^n \mathbf{1}_{\{X_i \le x\}} \quad \text{for every } x \in \mathbb{R}. \]
If F is continuous then, with probability one, $F_n$ has jumps of magnitude 1/n at each point $X_i$ of the sample. If we let $F_0$ denote the distribution function of the constant random variable $X_0 = 0$, and taking order statistics we can assume that the sample satisfies $-\infty < X_1 < X_2 < \cdots < X_n < \infty$, then $F_n(x) = \sum_{k=1}^n (1/n) F_0(x - X_k)$ for every $x \in \mathbb{R}$. So, if we assume that F is continuous, it is easy to see that
\[ \sup_{x \in \mathbb{R}} |F_n(x) - F(x)| = \max_{1 \le k \le n} \max\big(|F_n(X_k^-) - F(X_k)|, |F_n(X_k) - F(X_k)|\big) = \max_{1 \le k \le n} \max\Big(\Big|\frac{k-1}{n} - F(X_k)\Big|, \Big|\frac{k}{n} - F(X_k)\Big|\Big). \quad (10) \]
On the other hand, for every $k \in I_n$, the minimum of $\max(|(k-1)/n - F(X_k)|, |k/n - F(X_k)|)$ is attained when $F(X_k) = (2k-1)/(2n)$, and in this case $\min(|(k-1)/n - F(X_k)|, |k/n - F(X_k)|) = \max(|(k-1)/n - F(X_k)|, |k/n - F(X_k)|) = 1/(2n)$ for every $k \in I_n$. So, if we define $X_k = F^{-1}((2k-1)/(2n))$ for every $k \in I_n$, we have, using equation (10), that
\[ \sup_{x \in \mathbb{R}} |F_n(x) - F(x)| = \frac{1}{2n}. \quad (11) \]
Therefore we have proved the following lemma, for which we could not find a reference.

Lemma 2.6 Let F be a univariate continuous distribution function and let $X = \{X_1, X_2, \ldots, X_n\}$ be a random sample of size $n \ge 1$ from F. Then
\[ \sup_{x \in \mathbb{R}} |F_n(x) - F(x)| \ge \frac{1}{2n} \quad \text{a.s. } [P_F], \]
where $P_F$ is the probability measure induced by F on $\mathbb{R}$.

For an upper bound on the tail probabilities we have the Dvoretzky-Kiefer-Wolfowitz inequality, improved by Massart, see [11] and [34], which states that for every $\epsilon > 0$ and for every $n \ge 1$
\[ P\Big(\sup_{x \in \mathbb{R}} |F_n(x) - F(x)| > \epsilon\Big) \le 2e^{-2n\epsilon^2}. \quad (12) \]
If we take $\epsilon = 1/(2n)$ in (12), we observe that the minimum of the right-hand side is attained at n = 1, where it takes the value $2\exp(-1/2) = 1.2131 > 1$, which agrees with Lemma 2.6.
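The following quick check is ours (a sketch under the assumption of a standard normal F, not thesis code): it evaluates the supremum distance in equation (10) at the order statistics and confirms the lower bound of Lemma 2.6.

```python
# Sketch: sup |F_n - F| via equation (10), computed at the order statistics.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 100
x = np.sort(rng.normal(size=n))       # order statistics of the sample
F = norm.cdf(x)                       # true F at the order statistics
k = np.arange(1, n + 1)
sup_dist = np.max(np.maximum(np.abs((k - 1) / n - F), np.abs(k / n - F)))
print(sup_dist, ">= 1/(2n):", sup_dist >= 1 / (2 * n))   # Lemma 2.6
```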
Let τ a probability measure in (Id m, 2 Id m), τ is known as a generalized transformation matrix if for all j ∈ {1, · · · , d} and for all k ∈ {1, · · · ,m} ∑ i∈Id m,i j=k τ(i) > 0 where i = (i1, · · · , i j−1, i j = k, i j+1, · · · , id) ∈ Id m. τ can be thought as a d-dimensional matrix τ, considering τ(i) = τi1,···,id if i = (i1, · · · , id) ∈ Id m. Example 2.9 Let d = 2, m = 3. Then I2 3 = {1, 2, 3}2. We define two probability measures τ1 and τ2 in (I2, 2I2 ) given by the following matrices τ1 =           1/6 0 0 0 1/3 0 0 0 1/2           and τ2 =           1/6 0 1/6 1/6 0 1/6 2/6 0 0           . We can see by adding the elements in each row and column that τ1 is a transformation matrix, but τ2 is not since the sum of the entries in the second column equals zero. Definition 2.10 Let τ = (τi, j)i, j∈{1,···,m} be a generalized transformation matrix where d = 2. Define {q1,0, q1,1, · · · , q1,m} and {q2,0, q2,1, · · · , q2,m} two partitions of [0, 1], such that q1,0 = q2,0 = 0 and for i, j ∈ Im we have that q1,i = i ∑ i′=1 ∑ j∈Im τi′, j and q2, j = j ∑ j′=1 ∑ i∈Im τi, j′ , 15 we also define the partition induced by τ on I2 by Qm i, j = 〈 q1,i−1, q1,i ] × 〈 q2, j−1, q2, j ] for every (i, j) ∈ Im × Im, where the 〈 notation indicates that the left end of the interval is closed if i = 1 or j = 1, and open in any other case. Let Π2 be the product 2-copula, we define the τ(Π2) transformation by τ(Π2)(u, v) = ∑ i′ 2 using the product d-copula Πd with a d-linear interpolation. Definition 2.11 Let m ≥ 2 and let τ = (τi1,···,id )(i1,···,id)∈Id m be a generalized transformation matrix. We define q1,0 = q2,0 = · · · = qd,0 = 0, and for every j ∈ {1, · · · , d} and for every k ∈ {1, · · · ,m} q j,k = k ∑ i j=1 m ∑ i1=1 · · · m ∑ i j−1=1 m ∑ i j+1=1 · · · m ∑ id=1 ti1,···,i j−1,i j,i j+1,···,id . Then 0 = q j,0 < q j,1 < · · · < q j,m−1 < q j,m = 1 is a partition of the [0, 1] interval, induced for the matrix τ in the j-coordinate. For every i = (i1, . . . , id) ∈ Id m we define Qm i = 〈q1,(i1−1), q1,i1] × 〈q2,(i2−1), q2,i2] × · · · × 〈qd,(id−1), qd,id ]. (15) Then the family (Qm i )i∈Id m is a partition of Id. Now we can give the definition of the sample copula of order m, based on a random sample of size n, where 2 ≤ m ≤ n, coming from a continuous joint d-distribution function H or a d-copula C. 16 Definition 2.12 (Sample Copula of order m) Let 2 ≤ m ≤ n and let X 1 , . . . , X n be a random sample of size n of a random vector X of dimension d, with continuous joint distribution H or d-copula C, where X i = (Xi,1, · · · , Xi,d) ∈ RI d, for every i = 1, . . . , n. Let Un = {Y 1 , . . . ,Y n } be the corresponding modified sample. Define the uniform partition of size m of Id, where for every i = (i1, . . . , id) ∈ Id m Rm i = 〈 i1 − 1 m , i1 m ] × · · · × 〈 id − 1 m , id m ] . (16) Define s n,(m) i1,...,id = card(Rm i ∩ Un) n (17) where card(·) denotes the cardinality of a set. Let S n m = (s n,(m) i1,...,id )(i1,...,id)∈Id m , (18) then S n m is always a d-dimensional generalized transformation matrix. Let (Qm i )i∈Id m be the partition of Id induced by the generalized transformation matrix S n m given in equation (15). Using the partition (Qm i )i∈Id m , we define the sample d-copula of order m by Cn m(u1, . . . , ud) = S n m(Πd)(u1, . . . , ud), (19) as in the generalization of equation (14), where Πd is the product copula in [0, 1]d. 
To clarify this definition we give a simple example Example 2.13 We took a sample of size n = 4 from a bivariate normal distribution with mean µ = (0, 0) and correlation coefficient ρ = 0.5. The observations were X 1 = (0.662, 0.895), X 2 = (−1.352,−0.174), X 3 = (1.304,−0.682), and X 4 = (0.651, 0.137), the corresponding modified sample is U4 = {Y 1 = (3/4, 1), Y 2 = (1/4, 2/4), Y 3 = (1, 1/4, ), Y 4 = (2/4, 3/4)}. First, let m = 2, then using equation (16) we have that R2 1,1 ∩ U4 = {Y 2 }, R2 1,2 ∩ U4 = {Y 4 }, R2 2,1 ∩ U4 = {Y 3 }, and R2 2,2 ∩ U4 = {Y 1 }. Then s 4,(2) i, j = 1/4 for every i, j ∈ I2, that is, S 4 2 = ( 1/4 1/4 1/4 1/4 ) . 17 Therefore, S 4 2 is clearly a transformation matrix, which induces the partitions q1,0 = q2,0 = 0 < q1,1 = q2,1 = 1/2 < q1,2 = q2,2 = 1 on the first and second coordinates, see Definition 2.10. Observe that the partition (Q2 i, j)i, j∈I2 induced by the matrix S 4 2 , given in Definition 2.10 coincides with the uniform partition of size 2 (R2 i, j)i, j∈I2 . So, using equation (14), it is easy to see that C4 2 = S 4 2 (Π2) is the copula which has joint density c4 2 (u, v) = 1 for every (u, v) ∈ I2. Observe that in all four cases c4 2 (u, v) = s 4,(2) i, j /λ2(Q2 i, j), where λ2 is the Lebesgue measure in RI 2. Second, let us assume that m = 3 and let us take {R3 i, j }i, j∈I3 the uniform partition of size 3 of I2. Then we have that R3 1,2 ∩ U4 = {Y 2 }, R3 2,3 ∩ U4 = {Y 4 }, R3 3,1 ∩ U4 = {Y 3 }, and R3 3,3 ∩ U4 = {Y 1 }, and for the remaining boxes R3 i, j ∩ U4 = ∅. Then s 4,(3) i, j = { 1/4 if (i, j) ∈ {(1, 2), (2, 3), (3, 1), (3, 3)} 0 if (i, j) ∈ ((I3 × I3)\{(1, 2), (2, 3), (3, 1), (3, 3)}). Hence, S 4 3 =           0 1/4 0 0 0 1/4 1/4 0 1/4           , which is clearly a transformation matrix. The partitions in [0, 1] induced by the matrix S 4 3 are q1,0 = q2,0 = 0 < q1,1 = q2,1 = 1/4 < q1,2 = q2,2 = 2/4 < q1,3 = q2,3 = 1. Define as Definition 2.10 Q3 i, j = 〈q1,i−1, q1,i] × 〈q2, j−1, q2, j] for every i, j ∈ I3. So, using Definition 2.10 again, it is easy to see that the sample copula C4 3 = S 4 3 (Π2) of order 3, has a joint density c4 3 given by c4 3(u, v) =                      4 if (u, v) ∈ Q3 1,2 2 if (u, v) ∈ Q3 2,3 2 if (u, v) ∈ Q3 3,1 1 if (u, v) ∈ Q3 3,3 0 if (u, v) ∈ I2\(Q3 1,2 ∪ Q3 2,3 ∪ Q3 1,2 ∪ Q3 1,2 ) Observe that again, in all cases c4 3 (u, v) = s 4,(3) i, j /λ2(Q3 i, j ). 18 Finally, assume that m = n = 4 and let us take {R4 i, j}i, j∈I4 the uniform partition of size 4 of I2. Then we have that R4 1,2 ∩ U4 = {Y 2 }, R4 2,3 ∩ U4 = {Y 4 }, R4 3,4 ∩ U4 = {Y 3 }, and R4 4,1 ∩ U4 = {Y 1 }, and for the remaining boxes R4 i, j ∩ U4 = ∅. Then s 4,(4) i, j = { 1/4 if (i, j) ∈ {(1, 2), (2, 3), (3, 4), (4, 1)} 0 if (i, j) ∈ ((I4 × I4)\{(1, 2), (2, 3), (3, 4), (4, 1)}). Hence, S 4 4 =                0 1/4 0 0 0 0 1/4 0 0 0 0 1/4 1/4 0 0 0                , which is clearly a transformation matrix. The partitions in [0, 1] induced by the matrix S 4 4 are q1,0 = q2,0 = 0 < q1,1 = q2,1 = 1/4 < q1,2 = q2,2 = 2/4 < q1,3 = q2,3 = 3/4 < q1,4 = q2,4 = 1. Define as in Definition 2.10 Q4 i, j = 〈q1,i−1, q1,i] × 〈q2, j−1, q2, j] for every i, j ∈ I4, and in this case again, the partition (Q4 i, j)i, j∈I4 of I2 coincides with the uniform partition of size 4 (R4 i, j)i, j∈I4 of order 4 of I2. 
So, using Definition 2.10 again, it is easy to see that the sample copula C4 4 = S 4 4 (Π2) of order 4, has a joint density c4 4 given by c4 4(u, v) = { 4 if (u, v) ∈ Q4 1,2 ∪ Q4 2,3 ∪ Q4 3,4 ∪ Q4 4,1 0 if (u, v) ∈ I2\(Q4 1,2 ∪ Q4 2,3 ∪ Q4 3,4 ∪ Q4 4,1 ) Observe again, that c4 3 (u, v) = s 4,(4) i, j /λ2(Q4 i, j) for every i, j ∈ I4. We will use the previous Example in order to see that even if Definitions 2.10, 2.11 and 2.12 seem to be quite cumbersome, in the case of the sample copula of order m they become quite simple, this will become apparent in the next: Theorem 2.14 Let 2 ≤ m ≤ n and let X 1 , . . . , X n be a random sample of size n of a random vector X of dimension d, with continuous joint distribution H or d-copula C, where X i = (Xi,1, · · · , Xi,d) ∈ RI d, for every i = 1, . . . , n. Let Un = {Y 1 , . . . ,Y n } be the corresponding modified sample. 19 Let 2 ≤ m ≤ n fixed and define (Rm i )i∈Id m the uniform partition of size m of Id as in equation (16), s n,(m) i1,...,id as in equation (17), the generalized transformation matrix S n m as in equation (18), the partition (Qm i )i∈Id m of Id induced by S n m given in equation (15), and Cn m the sample copula order m as in equation (19). Then i) For the partitions of (Qm i )i∈Id m we know that 0 = q1,0 < q1,1 < · · · < q1,m = 1, but we also have that q j,0 = q1,0 = 0, q j,1 = q1,1, q j,2 = q1,2, . . . , q j,m = q1,m = 1 for every j ∈ {2, 3, . . . , d}, (20) that is, in the d coordinates the partition of [0, 1] does not change. Evenmore, with probability one, the partition 0 = q1,0 < q1,1 < · · · < q1,m = 1 only depends on n and m, and does not depend on the sample, in fact we have that q1, j = 1 n · ⌊ j · n m ⌋ for every j ∈ {0, 1, 2, . . . ,m}, (21) where ⌊a⌋ denotes the greatest integer less than or equal to a. ii) For every 2 ≤ m ≤ n, Cn m is always a d-copula. iii) Assume that m divides n, then the partition (Qm i )i∈Id m of Id induced by S n m coincides with the uniform partition (Rm i )i∈Id m of size m. iv) Let λd be the Lebesgue measure on the measurable space ( RI d,B( RI d)), where B( RI d) denotes the σ-algebra of Borel. If Cn m is the sample copula of order m, let us denote by cn m its joint density function. Then cn m(u1, . . . , ud) = s n,(m) i1,...,id /λd(Qm i1,...,id ) for every (u1, . . . , ud) ∈ Qm i1,...,id and (i1, . . . , id) ∈ Id m. (22) Hence, the density is constant on every d-box Qm i1,...,id of the partition of Id induced by S n m. Besides, if md > n then there exists at least one d-box Qm i1,...,id on which the density is zero. In fact, there are at most n d-boxes with positive density. v) If m = n there are exactly n elements of the partition (Qm i )i∈Id m = (Rm i )i∈Id m on which the density equals nd−1 and the remaining elements have density zero. Proof: i) We first observe that the modified sample satisfies that in each coordinate the different values are 1/n, 2/n, . . . , (n− 1)/n, n/n = 1, then for any 2 ≤ m ≤ n we have that equations (20) and (21) hold. We also have that the matrix S n m is always a generalized transformation matrix. Hence, by the definition of the sample d-copula of order m given in equation (19) and using the results in [15] and [46] we obtain ii). 20 Assume that m divides n, to see that iii) holds is enough to observe that we obtain always integers in the expressions ⌊·⌋ in equation (21) and that q1, j = j/m for every j ∈ {1, 2, . . . ,m}. 
iv) We know from equation (19) that the d-sample copula of order m is a multilinear function which spreads uniformly the mass s n,(m) i1,...,id over the d-box Qm i1,...,id for every (i1, . . . , id) ∈ Id m. Hence the density on each d-box Qm i1,...,id is a constant, the result now follows directly from evaluating the constant in the integral expression. v) follows directly from parts iii) and iv).  Observe that from Theorem 2.14 the two partitions of Id, that is, the uniform partition of size m (Rm i )i∈Id m , where 2 ≤ m ≤ n, and the partition induced by the generalized transformation matrix S n m, (Qm i )i∈Id m do not always coincide. Using equation (16), if we define 0 = rk,0 < 1/m = rk,1 < 2/m = rk,2 < · · · < (m−1)/m = rk,m−1 < 1 = rk,m for every k ∈ {1, 2, . . . , d}, then (rk, j)k∈Id , j∈{0,1,...,m} provides the partition in each coordinate induced by the uniform partition of size m. We define the distance between (Rm i )i∈Id m and (Qm i )i∈Id m by em((Rm i ), (Qm i )) = max j∈{0,1,...,m} |r1, j − q1, j|, where q1, j is given in equation (21). Then em measures the “distortion” of the uniform partition of size m, caused by the sample size n. It is easy to see that max2≤m≤n em((Rm i ), (Qm i )) ≤ (n − 2)/((n − 1)n) < 1/n, that is, the maximum is attained when m = n − 1, and this maximum distance is always smaller that 1/n. Using part iii) of Theorem 2.14, we have that if m divides n, then em((Rm i ), (Qm i )) = 0. In Theorem 2.14, part iv) we give the joint density cn m associated to Cn m the sample copula of order m, from this density the evaluation of Cn m is absolutely trivial, and easily implemented in any computer. Part v) of Theorem 2.14, implies that most of the d-boxes in the partition of Id have zero den- sity. For example if the sample size n = 100 and d = 4, we have that only one hundred out of 100, 000, 000 4-boxes have positive density. In this example is quite important to notice that the one hundred 4-boxes with positive density include the support of the empirical copula Cn, which is included in T d, where T = {0, 1/n, 2/n, . . . , 1} given in Definition 2.3 and Remark 2.4. Besides, the sample copula Cn n is a d-copula unlike the empirical copula Cn, which is only a d-subcopula. 21 Now we will see, that in some cases, Cn m the sample copula of order m may coincide with C, the d-copula we are sampling from, that is, sup(u1,...,ud)∈Id |Cn m(u1, . . . , ud) −C(u1, . . . , ud)| = 0. Lemma 2.15 Let d ≥ 2 be an integer and let n ≥ 2 be an even integer. Then there exist 2 ≤ m ≤ n, C a d-copula and a sample of size n from C, such that sup (u1,...,ud)∈Id |Cn m(u1, . . . , ud) −C(u1, . . . , ud)| = 0. (23) Proof: Let d ≥ 2 be an integer and let n ≥ 2 be an even integer. Let C be a d-copula with density c given by c(u1, . . . , ud) = { 2d−1 if (u1, . . . , ud) ∈ [0, 1/2]d ∪ (1/2, 1]d 0 if (u1, . . . , ud) ∈ Id\([0, 1/2]d ∪ (1/2, 1]d). Let m = 2. The uniform partition of size 2 of Id is such that R1,1,...,1 = [0, 1/2]d and R2,2,...,2 = (1/2, 1]d accumulate all the mass of the d-copula C. Let X 1 , . . . , X n be a random sample of size n from the d-copula C, and assume that exactly n/2 points of the sample fall in the d-box R1,1,...,1, then the remaining n/2 points fall in the box R2,2,...,2 with probability one, because the density is zero on any other d-box of the uniform partition of size 2. Let Un = {Y 1 , . . . ,Y n } be the corresponding modified sample, then it is obvious that this modified sample satisfies exactly the same conditions. 
Hence, using equation (17), s n,(2) 1,1,...,1 = s n,(2) 2,2,...,2 = 1/2 and the remaining s n,(2) i1,i2,...,id = 0. Therefore, using Theorem 2.14, part iv), the density of the sample d-copula of order 2 is given by cn m(u1, . . . , ud) = { 2d−1 if (u1, . . . , ud) ∈ [0, 1/2]d ∪ (1/2, 1]d 0 if (u1, . . . , ud) ∈ Id\([0, 1/2]d ∪ (1/2, 1]d), which is exactly the density of the d-copula C, and the result follows.  However, observe that from Lemma 2.7, the empirical copula Cn satisfies that sup (u1,...,ud)∈Id |Cn(u1, . . . , ud) −C(u1, . . . , ud)| ≥ 1/n almost surely. Since we are using the product copula in order to define the sample d-copula of order m, see equation (19), we have to see what is the largest error we can incur by doing so. First we observe that for d = 2, we have that sup (u,v)∈I2 |Π2(u, v) − M2(u, v)| = sup (u,v)∈I2 |Π2(u, v) −W2(u, v)| = 1/4, (24) 22 where the supremum is attained at u = v = 1/2 in both cases, as can be easily checked. Hence, from equation (24) we have that for every 2-copula C, using the Fréchet-Hoeffding’s bounds, sup(u,v)∈I2 |Π2(u, v) −C(u, v)| ≤ 1/4. For the case d ≥ 3, we have that sup (u1,...,ud)∈Id |Πd(u1, . . . , ud) − Md(u1, . . . , ud)| = d − 1 dd/(d−1) , (25) where the supremum is attained at u1 = u2 = · · · ud = 1/d1/(d−1). For every C1,C2 d-copulas let us define dsup(C1,C2) = sup (u1,u2,...,ud)∈Id |C1(u1, u2, . . . , ud) −C2(u1, u2, . . . , ud)|. (26) Then clearly dsup is a metric in the set of all d-copulas. Definition 2.16 Let 2 ≤ m ≤ n and let X 1 , . . . , X n be a random sample of size n of a random vector X of dimension d, with continuous joint distribution H with d-copula C, or from the d- copula C where X i = (Xi,1, · · · , Xi,d) ∈ RI d, for every i = 1, . . . , n. Let Un = {Y 1 , . . . ,Y n } be the corresponding modified sample. Let Cn be the empirical copula defined in equation (8), and let Cn m be the sample copula of order m defined as in equation (19) of Definition 2.12. We define dsup n (Cn,C) = max        sup (i1,i2,...,id)∈In d |Cn(i1/n, i2/n, . . . , id/n) −C(i1/n, i2/n, . . . , id/n)| , 1 n        (27) and dsup n,(m) (Cn m,C) = sup (i1,i2,...,id)∈In d |Cn m(i1/n, i2/n, . . . , id/n) −C(i1/n, i2/n, . . . , id/n)|. (28) By equation (13) we now that dsup(Cn,C) ≥ 1/n almost surely, that is why the term 1/n appears in equation (27). Hence dsup n (Cn,C) is never a metric. However, dsup n,(m) (Cn m,C) in equation (28) is a pseudometric in the family of all d-copulas. In order to see that this statement holds just observe that Cn m is always a d-copula, and for any d-copula C such that Cn m(i1/n, i2/n, . . . , id/n) = C(i1/n, i2/n, . . . , id/n) for every (i1, i2, . . . , id) ∈ In d we have that dsup n,(m) (Cn m,C) = 0. In particular, if C = Cn m we have that dsup n,(m) (Cn m,C) = 0. 23 Of course we have from equations (26), (27) and (28)that dsup(Cn,C) ≥ dsup n (Cn,C) and dsup(Cn m,C) ≥ dsup n,(m) (Cn m,C), (29) for every integers 2 ≤ m ≤ n and for every d-copula C. We will show with an easy example that the first inequality in equation (29) can be strict, but the second one is an equality. Let d = 2, n = 2 and let C = Π2 be the product copula. Then the modified sample is U2 = {(1/2, 2/2 = 1), (2/2 = 1, 1/2)} or U2 = {(1/2, 1/2), (1 = 2/2, 1 = 2/2)} each with probability 1/2. In the first case, the mass of the empirical copula is concentrated in the two points (1/2, 1) and (1, 1/2), therefore, it is quite easy to see from equation (27) that dsup 2 (C2,Π 2) = max{|0 − 1/4|, 1/2} = 1/2. 
Of course, from equations (26), (27) and (28) we have that
\[ d_{\sup}(C_n, C) \ge d^{\sup}_n(C_n, C) \quad \text{and} \quad d_{\sup}(C^n_m, C) \ge d^{\sup}_{n,(m)}(C^n_m, C) \quad (29) \]
for all integers $2 \le m \le n$ and for every d-copula C. We will show with an easy example that the first inequality in equation (29) can be strict, but the second one is an equality. Let d = 2, n = 2 and let $C = \Pi^2$ be the product copula. Then the modified sample is $U_2 = \{(1/2, 1), (1, 1/2)\}$ or $U_2 = \{(1/2, 1/2), (1, 1)\}$, each with probability 1/2. In the first case, the mass of the empirical copula is concentrated at the two points (1/2, 1) and (1, 1/2); therefore it is quite easy to see from equation (27) that $d^{\sup}_2(C_2, \Pi^2) = \max\{|0 - 1/4|, 1/2\} = 1/2$. But if we take any $0 < \epsilon < 1$, then $d_{\sup}(C_2, \Pi^2) \ge |C_2(1-\epsilon, 1-\epsilon) - (1-\epsilon)^2| = (1-\epsilon)^2$, so letting $\epsilon \downarrow 0$ we have $d_{\sup}(C_2, \Pi^2) = 1 > 1/2 = d^{\sup}_2(C_2, \Pi^2)$. Observe that in this example we attain the upper bound, since $d_{\sup}(C, D) \le 1$ for any two distribution functions C and D on $I^2$. Since n = 2, we must have m = 2 for the sample copula, and in this case the sample copula of order m = 2 gives uniform masses 1/2 to the boxes $R_{1,2} = [0, 1/2] \times (1/2, 1]$ and $R_{2,1} = (1/2, 1] \times [0, 1/2]$, so using equation (28) we have $d^{\sup}_{2,(2)}(C^2_2, \Pi^2) = |C^2_2(1/2, 1/2) - \Pi^2(1/2, 1/2)| = 1/4 = d_{\sup}(C^2_2, \Pi^2)$. In the second case, the mass of the empirical copula is concentrated at the two points (1/2, 1/2) and (1, 1), so from equation (27) we have $d^{\sup}_2(C_2, \Pi^2) = \max\{|1/2 - 1/4|, 1/2\} = 1/2$. Using (13) we can see that $d_{\sup}(C_2, \Pi^2) = 1/2$. In this case the sample copula of order m = 2 gives uniform masses 1/2 to the boxes $R_{1,1} = [0, 1/2]^2$ and $R_{2,2} = (1/2, 1]^2$, so using equation (28) we have $d^{\sup}_{2,(2)}(C^2_2, \Pi^2) = |C^2_2(1/2, 1/2) - \Pi^2(1/2, 1/2)| = 1/4 = d_{\sup}(C^2_2, \Pi^2)$.

Let $X_1, \ldots, X_n$ be a random sample of size n from the d-copula $M^d$. Let $U_n = \{Y_1, \ldots, Y_n\}$ be the corresponding modified sample; then $\{Y_i = (i/n, i/n, \ldots, i/n)\}_{i=1}^n$ almost surely. So it is obvious that $\sup_{(i_1, \ldots, i_d) \in I^d_n} |C_n(i_1/n, \ldots, i_d/n) - M^d(i_1/n, \ldots, i_d/n)| = 0$, but from equation (13) $d_{\sup}(C_n, C) = 1/n$. We finish this section by proving a Glivenko-Cantelli theorem for the sample d-copula of order m.

Theorem 2.17 Let C be a d-copula, let $m \ge 2$, let n be a multiple of m, and let $C^n_m$ be the sample copula built from a modified sample of size n from C. Then
\[ \lim_{m \to \infty} \sup_{(u,v) \in I^2} |C^n_m(u, v) - C(u, v)| = 0 \quad \text{a.s.} \]
Proof: We prove the result for d = 2. Using the notation of Lemma 2.3.5 in Nelsen's book [37]: for $(a, b) \in [0, 1]^2$ and a subcopula C′ with finite domain $S_1 \times S_2$, let $a_1$ and $a_2$ be, respectively, the greatest and least elements of $S_1$ that satisfy $a_1 \le a \le a_2$, and let $b_1$ and $b_2$ be, respectively, the greatest and least elements of $S_2$ that satisfy $b_1 \le b \le b_2$. If $a \in S_1$, then $a_1 = a = a_2$, and if $b \in S_2$, then $b_1 = b = b_2$. Let
\[ \lambda_1(a, b) = \lambda_1 = \begin{cases} (a - a_1)/(a_2 - a_1) & \text{if } a_1 < a_2 \\ 1 & \text{if } a_1 = a_2 \end{cases} \quad \text{and} \quad \mu_1(a, b) = \mu_1 = \begin{cases} (b - b_1)/(b_2 - b_1) & \text{if } b_1 < b_2 \\ 1 & \text{if } b_1 = b_2. \end{cases} \]
We will consider the following representation of the sample copula $C^n_m$, see [26]:
\[ C^n_m(u, v) = \sum_{j=1}^{m} \sum_{i=1}^{m} \mathbf{1}_{(\frac{i-1}{m}, \frac{i}{m}] \times (\frac{j-1}{m}, \frac{j}{m}]}(u, v) \Big[ (1 - \lambda_1(u, v))(1 - \mu_1(u, v))\, C_n\Big(\frac{i-1}{m}, \frac{j-1}{m}\Big) + (1 - \lambda_1(u, v))\,\mu_1(u, v)\, C_n\Big(\frac{i-1}{m}, \frac{j}{m}\Big) + \lambda_1(u, v)(1 - \mu_1(u, v))\, C_n\Big(\frac{i}{m}, \frac{j-1}{m}\Big) + \lambda_1(u, v)\,\mu_1(u, v)\, C_n\Big(\frac{i}{m}, \frac{j}{m}\Big) \Big], \]
where $C_n$ is the empirical copula built from the modified sample, and the checkerboard approximation $C^{(m)}$, see [33], given by
\[ C^{(m)}(u, v) = \sum_{j=1}^{m} \sum_{i=1}^{m} \mathbf{1}_{(\frac{i-1}{m}, \frac{i}{m}] \times (\frac{j-1}{m}, \frac{j}{m}]}(u, v) \Big[ (1 - \lambda_1(u, v))(1 - \mu_1(u, v))\, C\Big(\frac{i-1}{m}, \frac{j-1}{m}\Big) + (1 - \lambda_1(u, v))\,\mu_1(u, v)\, C\Big(\frac{i-1}{m}, \frac{j}{m}\Big) + \lambda_1(u, v)(1 - \mu_1(u, v))\, C\Big(\frac{i}{m}, \frac{j-1}{m}\Big) + \lambda_1(u, v)\,\mu_1(u, v)\, C\Big(\frac{i}{m}, \frac{j}{m}\Big) \Big] \]
for all $(u, v) \in I^2$. Let $(u^*, v^*) \in I^2$ and $m \ge 2$, where m divides n, be such that $(u^*, v^*) \in R^m_{i^* j^*} = \langle \frac{i^*-1}{m}, \frac{i^*}{m}] \times \langle \frac{j^*-1}{m}, \frac{j^*}{m}]$, where the symbol "⟨" indicates that the left end of the interval is closed if $i^* = 1$ or $j^* = 1$ and open in any other case. Then
\[ |C^n_m(u^*, v^*) - C^{(m)}(u^*, v^*)| \le (1 - \lambda_1(u^*, v^*))(1 - \mu_1(u^*, v^*)) \Big| C_n\Big(\frac{i^*-1}{m}, \frac{j^*-1}{m}\Big) - C\Big(\frac{i^*-1}{m}, \frac{j^*-1}{m}\Big) \Big| + (1 - \lambda_1(u^*, v^*))\,\mu_1(u^*, v^*) \Big| C_n\Big(\frac{i^*-1}{m}, \frac{j^*}{m}\Big) - C\Big(\frac{i^*-1}{m}, \frac{j^*}{m}\Big) \Big| + \lambda_1(u^*, v^*)(1 - \mu_1(u^*, v^*)) \Big| C_n\Big(\frac{i^*}{m}, \frac{j^*-1}{m}\Big) - C\Big(\frac{i^*}{m}, \frac{j^*-1}{m}\Big) \Big| + \lambda_1(u^*, v^*)\,\mu_1(u^*, v^*) \Big| C_n\Big(\frac{i^*}{m}, \frac{j^*}{m}\Big) - C\Big(\frac{i^*}{m}, \frac{j^*}{m}\Big) \Big|. \]
Hence, using the Glivenko-Cantelli theorem for the empirical copula, we have that
\[ \lim_{m \to \infty} \sup_{(u,v) \in I^2} |C^n_m(u, v) - C^{(m)}(u, v)| = 0 \quad \text{a.s.} \]
From [33] we have that, for every $m \ge 1$,
\[ \sup_{(u,v) \in I^2} |C^{(m)}(u, v) - C(u, v)| < \frac{2}{m}, \quad \text{and} \quad \lim_{m \to \infty} \sup_{(u,v) \in I^2} |C^{(m)}(u, v) - C(u, v)| = 0. \]
From these relations we get that
\[ \lim_{m \to \infty} \sup_{(u,v) \in I^2} |C^n_m(u, v) - C(u, v)| = 0 \quad \text{a.s.} \]
The proof of the d-dimensional case, where d > 2, is performed similarly. □
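The sketch below is ours (not from the thesis): it evaluates the checkerboard approximation $C^{(m)}$ by bilinear interpolation on the uniform m-grid, exactly as in the displayed formula, and checks the bound $\sup |C^{(m)} - C| < 2/m$ numerically for a Clayton copula with θ = 2 (our choice of test copula).

```python
# Sketch: checkerboard approximation C^(m) via bilinear interpolation of C
# on the grid {0, 1/m, ..., 1}^2, and a numerical check of the 2/m bound.
import numpy as np

def checkerboard(C, m, u, v):
    i = max(min(int(np.ceil(m * u)), m), 1)   # box index of u in ((i-1)/m, i/m]
    j = max(min(int(np.ceil(m * v)), m), 1)
    lam = m * u - (i - 1)                     # lambda_1 in the proof's notation
    mu = m * v - (j - 1)                      # mu_1
    return ((1 - lam) * (1 - mu) * C((i - 1) / m, (j - 1) / m)
            + (1 - lam) * mu * C((i - 1) / m, j / m)
            + lam * (1 - mu) * C(i / m, (j - 1) / m)
            + lam * mu * C(i / m, j / m))

clayton = lambda u, v: (u**-2 + v**-2 - 1)**-0.5 if u > 0 and v > 0 else 0.0
m = 8
pts = np.linspace(0.001, 0.999, 200)
err = max(abs(checkerboard(clayton, m, u, v) - clayton(u, v))
          for u in pts for v in pts)
print(err, "< 2/m:", err < 2 / m)
```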
First, we start to the case of the product copula Π2, see Figure 2.11, in which we show that for n = 20, 30 and n = 50 we have that for every 2 ≤ m ≤ n, Cn m is a better approximation than Cn, and the variances in all three cases are quite similar for 3 ≤ m ≤ n, even for m = 2 where the variance difference is the greatest we also have that the difference between the mean values is quite remarkable. However, these facts are expected since in the construction of the d-sample copula of order m we used uniform masses. Second, for the Ali-Mikhail-Haq family we have in Figure 2.1 the results for θ ∈ (−1, 0) and in Figure 2.2 for θ ∈ (0, 1). As we can see for every 2 ≤ m ≤ n Cm 20 is a better approximation than C20, and in some instances these mean values are even smaller than the minimum of dsup m,(20) (C20,C) for m = 2. The behavior of this family follows since it is known that its dependence is short of independence, because −0.27106 ≤ ρ ≤ 04784. Third, the Clayton family is a symmetric one and its results are given in Figures 2.3 and Figure 2.4, the first one includes the results for −1 ≤ ρ < 0, we know that the limit of this family as θ approaches −1 is W2, an when θ approaches zero its limit is the product copula Π2, and for θ close to −1 is quite similar to the case W2. Observe that for every m ≥ 5 we have that C20 m is a better approximation than C20. In Figure 2.4 we show the results for 0 < ρ ≤ 1, in this case we know that whenever θ increases to ∞ the copula tends to M2. Recall that a family which has limits W2 27 and M2 it is called comprehensive, here we also observe that for every m ≥ 5 we have that C20 m is a better approximation than C20. Fourth, the Frank family is also a symmetric and comprehensive set of copulas when θ tends to −∞ the copula approaches W2, and when θ tends to∞ the copula approaches M2. We observe that for values of ρ which are not too far from zero we have that C20 m is a better approximation than C20 for several values of m, see Figure 2.5 and Figure 2.6. In fact, this holds to be true for any ρ if m ≥ n/2 = 10. Fifth, the Gumbel family is a symmetric family such that for θ = 1 we obtain the product copula Π2 with ρ = 0, and when θ tends to ∞ the copula approaches M2. Again, for values of ρ close to zero C20 m is a better approximation than C20 for every 2 ≤ m ≤ 20, and for large values of ρ it also holds for m ≥ n/2 = 10 see Figure 2.7. Sixth, the Joe family is also symmetric and it behaves quite similar to the Gumbel family, see Figure 2.8. Seventh, the normal family is symmetric, comprehensive and it is parametrized via its correlation coefficient ρ, it can be seen that for |ρ| ≤ 0.9, C20 m is a better approximation than C20 for every 3 ≤ m ≤ n. But for |ρ| close to one it starts to behaves like W2 or M2 depending on the sign of ρ. Eighth, the Plackett family is symmetric with parameter ρ ∈ (−1, 1), for θ near zero it behaves like W2, for θ = 1 it is Π2, and for very large values of θ it behaves like M2, so this family is also comprehensive. Hence, C20 m is a better approximation than C20 for several values of m if |ρ| is relatively larger than zero and not too large, see Figure 2.10. Ninth, the t-Student family is also symmetric, comprehensive and it is parametrized via its corre- lation coefficient ρ, it can be seen that for |ρ| ≤ 0.95, C20 m is a better approximation than C20 for every 2 ≤ m ≤ 20. The only difference respect to the normal case is the fact that the t-Student has heavier tail behavior, see Figure 2.12. 
Tenth, the cases of M2 and W2 are of particular importance. To begin with, they are the upper and lower bounds for every copula; besides, they represent extreme dependence, because they are the copulas of random variables X and Y for which Y is a strictly increasing or a strictly decreasing function of X, with ρ = 1 and ρ = −1, respectively. Moreover, both copulas are singular, with supports on the main diagonal and on the secondary diagonal, respectively. From the two left graphs of Figure 2.13 we see that they have similar behavior and that for m ≥ 5, C^20_m is a better approximation than C_20. As we have seen above, several families of copulas with limit cases M2 or W2 behave quite similarly to the first two graphs in Figure 2.13. It is also very important to observe that, for any given sample of size n, the minimum and the maximum coincide with the mean value; this happens because the modified samples are always the same.

Eleventh, the circular uniform distribution and Example 3.5 in Nelsen's book [37] are examples of singular copulas, the second one with support on two quarter circles of radius one, see Figure 3.5 in [37]. As we can see in the right graphs of Figure 2.13, C^20_m is a better approximation than C_20 for every 2 ≤ m ≤ 20.

Twelfth, Example 3.3 in [37] is another case of a singular copula, with support the segments joining (0, 0) with (θ, 1), and (θ, 1) with (1, 0), for θ ∈ [0, 1]. As we can see in Figure 2.14, the difference between the supremum distances behaves like the case of M2 for ρ = θ = 1 and like W2 for ρ = θ = 0.

Thirteenth, Example 3.4 in Nelsen's book [37] is simply a shuffle of M2, with support the segments joining (0, θ) with (θ, 0), and (θ, 1) with (1, θ), where θ ∈ [0, 1]; for a general definition of shuffle see [35] or [37], and for the multivariate case see [9]. In this case, see Figure 2.15, C^20_m is a better approximation than C_20 for n/2 = 10 ≤ m ≤ n = 20, with a behavior similar to that of W2. This case presents the smallest differences between the mean values for 0.1 ≤ θ ≤ 0.9.

Fourteenth, the case of what we called the protocol copula, which has incomplete support given by [0, 1/2]² ∪ (1/2, 1]² with uniform masses on each square, is quite important. We can see from the left graph of Figure 2.16 that C^20_2 is a better approximation than C_20, but C^20_3 is not; in fact, the graph oscillates between even and odd values of m, which is easily explained by the form of its support. It is also very important to note that the minimum of the observed supremum distances between C^20_2 and the protocol copula is zero.

Fifteenth, recall that from Theorem 2.4.4 in [37], if two random variables U and V have copula C, then we can express, solely in terms of C, the copulas of the pairs (U, 1−V), (1−U, V) and (1−U, 1−V); we denote these cases by ID, DI and DD, where I stands for a strictly increasing transformation and D for a strictly decreasing one; the corresponding formulas are recalled below. We studied these transformations for the Gumbel and the Plackett families; we chose these two families to obtain asymmetric copulas with the transformations ID and DI. The results for the Gumbel family are shown in the last three graphs of Figure 2.16, in Figure 2.17 and in Figure 2.18. For the Plackett family we refer to Figure 2.19, Figure 2.20 and Figure 2.21.
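For reference, the expressions given by Theorem 2.4.4 in [37] for these three transformations are

C_ID(u, v) = P(U ≤ u, 1 − V ≤ v) = u − C(u, 1 − v),
C_DI(u, v) = P(1 − U ≤ u, V ≤ v) = v − C(1 − u, v),
C_DD(u, v) = P(1 − U ≤ u, 1 − V ≤ v) = u + v − 1 + C(1 − u, 1 − v),

so each transformed copula is available in closed form whenever C is.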
We can observe that the values of the parameters that we used were the same for each family, and that the graphs look very much alike in each case.

Sixteenth, the last three cases are mixtures: first of Gumbel, GumbelID (GID) and GumbelDI (GDI); second of GID, Frank and Joe, where in both cases we obtain highly asymmetric copulas; and third a mixture of M2 and W2, which is singular. The results are shown in Figure 2.22, Figure 2.23 and Figure 2.24, and the comments are quite similar to some of the previous cases. For PlackettID we will use the abbreviation PID, for PlackettDI the abbreviation PDI, and for PlackettDD the abbreviation PDD.

Summing up, the results for all the absolutely continuous families of copulas with complete support are quite similar as a function of Spearman's ρ. However, for the families of singular copulas the results are sometimes comparable to those of M2 or W2, and they depend strongly on their supports. When the copula is absolutely continuous but does not have complete support, the behavior may be a little erratic, as shown by the protocol copula in Figure 2.16. We can observe that when m = n, the sample copula C^n_m is at least as good an approximation to the real copula C as the empirical copula C_n, and in many cases a better one when |ρ| is close to one. In fact, in many cases the best approximation is attained at values of m considerably smaller than n, and this value of m depends strongly on the value of Spearman's ρ.

Figure 2.1: Ali-Mikhail-Haq d = 2 for ρ = −0.2688, −0.2004, −0.1489 and −0.0325 with n = 20 (minimum mean at m = 2 in each panel)
Figure 2.2: Ali-Mikhail-Haq d = 2 for ρ = 0.0342, 0.1924, 0.4070 and 0.4706 with n = 20 (minimum mean at m = 2, 2, 2, 3)
Figure 2.3: Clayton d = 2 for ρ = −0.99, −0.8978, −0.4670 and −0.0787 with n = 20 (minimum mean at m = 20, 7, 3, 2)
Figure 2.4: Clayton d = 2 for ρ = 0.2955, 0.8848, 0.9582 and 0.9915 with n = 20 (minimum mean at m = 2, 5, 6, 10)
Figure 2.5: Frank d = 2 for ρ = −0.9578, −0.8602, −0.6434 and −0.1644 with n = 20 (minimum mean at m = 5, 4, 3, 2)
Figure 2.6: Frank d = 2 for ρ = 0.3168, 0.7630, 0.9293 and 0.9578 with n = 20 (minimum mean at m = 2, 3, 5, 4)
Figure 2.7: Gumbel d = 2 for ρ = 0.6828, 0.9935, 0.9963 and 0.999 with n = 20 (minimum mean at m = 3, 10, 10, 10)
Figure 2.8: Joe d = 2 for ρ = 0.9766, 0.9908, 0.9975 and 0.9988 with n = 20 (minimum mean at m = 7, 10, 10, 10)
Figure 2.9: Normal d = 2 for ρ = −0.9889, −0.4825, 0.0955 and 0.6829 with n = 20 (minimum mean at m = 7, 2, 2, 3)
Figure 2.10: Plackett d = 2 for ρ = −0.9881, −0.2274, 0.6536 and 0.9881 with n = 20 (minimum mean at m = 8, 2, 3, 10)
Figure 2.11: Product d = 2 with ρ = 0 for n = 20, 30 and 50 (minimum mean at m = 2 in each panel)
Figure 2.12: t-Student d = 2 for ρ = −0.5819, −0.2875, 0.4825 and 0.9889 with n = 20 (minimum mean at m = 2, 2, 2, 7)
Figure 2.13: M2 (ρ = 1), W2 (ρ = −1), Circular (ρ = 0) and Example 3.5 of Nelsen (ρ = 0.286) with n = 20 (minimum mean at m = 20, 20, 2, 2)
Figure 2.14: Example 3.3 of Nelsen for ρ = −0.9, −0.4, 0.2 and 0.8 with n = 20 (minimum mean at m = 11, 6, 7, 10)
Figure 2.15: Example 3.4 of Nelsen for ρ = −0.0396, 0.5, 0.2591 and −0.7151 with n = 20 (minimum mean at m = 20 in each panel)
Figure 2.16: Protocol copula (ρ = 0.75) and copulas GumbelID for ρ = −0.9429, −0.9854 and −0.9963 with n = 20 (minimum mean at m = 2, 7, 7, 11)
Figure 2.17: Copulas GumbelDI for ρ = −0.8487, −0.9431, −0.9855 and −0.9963 with n = 20 (minimum mean at m = 4, 7, 7, 11)
Figure 2.18: Copulas GumbelDD for ρ = 0.8494, 0.9434, 0.9854 and 0.9963 with n = 20 (minimum mean at m = 4, 5, 7, 10)
Figure 2.19: Copulas PlackettID for ρ = 0.9880, −0.3521, −0.6537 and −0.8780 with n = 20 (minimum mean at m = 10, 2, 3, 6)
Figure 2.20: Copulas PlackettDI for ρ = 0.9880, −0.3521, −0.6537 and −0.8780 with n = 20 (minimum mean at m = 10, 2, 3, 6)
Figure 2.21: Copulas PlackettDD for ρ = −0.9881, 0.3517, 0.6532 and 0.8778 with n = 20 (minimum mean at m = 8, 2, 3, 4)
Figure 2.22: Mixtures of Gumbel, GID and GDI with ρ = −0.54459, −0.19608, 0.59829 and 0.18460, n = 20 (minimum mean at m = 3, 2, 4, 2)
Figure 2.23: Mixtures of GID, Frank and Joe with ρ = 0.39085, 0.79548, 0.77585 and −0.600093, n = 20 (minimum mean at m = 4, 7, 5, 4)
Figure 2.24: Mixtures of M2 and W2 with ρ = −0.5, 0, 0.5 and 0.8, n = 20 (minimum mean at m = 4, 2, 4, 9)

Now we make a quick study of the variances for the twenty-five families with n = 20, related to all the figures considered above. In Figures 2.25 through 2.31 we include the behavior of the variances of the supremum distances between the sample copula of order m and the real copula, and between the empirical copula and the real copula. We chose only one value of the parameter from each of the previous figures; for each family, the chosen parameter best describes the behavior of the variance, leaving out the limit cases. In all graphs we give the value of ρ.

First, we observe from Figure 2.25 that the variances for the Ali-Mikhail-Haq family behave like the variance of the product copula Π2 in Figure 2.27; this follows from what we explained above for Figures 2.1 and 2.2. Observe that the largest variance occurs at m = 2, but the difference between the variance of the sample copula and that of the empirical copula is in this case only of magnitude 0.0005. For larger values of m, the sample copula has practically the same variance as the empirical copula.

In the cases of the Clayton, Frank, Gumbel, Joe, Normal, Plackett and t-Student families, the variance of the sample copula of order m for 10 ≤ m ≤ 20 = n is close to the variance of the empirical copula, and for some values of m the variances of the sample copula are even smaller than those reported for the empirical copula, see Figure 2.25, Figure 2.26 and Figure 2.27.

In the case of singular copulas such as the circular distribution and Examples 3.3, 3.4 and 3.5, we see that on average the variance of the sample copula of order m is smaller than the variance for the empirical case.

For the protocol copula, see Figure 2.29, we observe that the oscillating behavior of the variances for 2 ≤ m ≤ n = 20 follows the pattern of the mean, minimum and maximum given in Figure 2.16, with larger variances for even values of m and smaller variances for odd values of m.
In the case of increasing and decreasing transformations in the Gumbel and Plackett families, we observe that the variance graphs in Figure 2.29 and Figure 2.30 have a similar behavior when we keep the parameter fixed in the cases GumbelID, GumbelDI and GumbelDD, and likewise in the cases PlackettID, PlackettDI and PlackettDD. If we observe that the cases ID, DD and DI correspond to rotations of 90, 180 and 270 degrees of the original copula around the point (1/2, 1/2), then we see that there is a certain invariance of the graphs; this invariance can also be observed in Figures 2.16, 2.17 and 2.18, and in Figures 2.19, 2.20 and 2.21.

For the case of mixtures, we denote by Mix23 the mixture of Gumbel, GumbelID and GumbelDI, and by Mix24 the mixture of GID, Frank and Joe; in these two cases the first three numbers correspond to the parameters of the members of the mixture, and the last three to the weights. Mix25 is the mixture of M2 and W2. In these cases we observe that the variances for the sample copula of order m remain below the variances of the empirical copula in almost all cases.

Figure 2.25: Variances of AMH(−0.148), AMH(0.192), Clayton(−0.467) and Clayton(0.844) with n = 20
Figure 2.26: Variances of Frank(−0.860), Frank(0.763), Gumbel(0.993) and Joe(0.854) with n = 20
Figure 2.27: Variances of Normal(−0.482), Plackett(0.654), Product(0) and t-Student(−0.582) with n = 20
Figure 2.28: Variances of Circular(0), Example 3.5(0.286), Example 3.3(0.2) and Example 3.4(0.5) with n = 20
Figure 2.29: Variances of Protocol(0.75), GID(−0.985), GDI(−0.985) and GDD(0.985) with n = 20
Figure 2.30: Variances of PID(−0.653), PDI(−0.653) and PDD(0.653) with n = 20
Figure 2.31: Variances of Mix23(0.5977), Mix24(0.7758) and Mix25(0)

We also performed several simulations for sample sizes n = 30 and n = 50; in these two cases we obtained graphs very similar to the ones in Figures 2.1 through 2.31, of course on different scales. We do not present all these results because the differences are practically negligible. In the next subsection we will see that the previous results extend to higher dimensions.

2.4 Simulation study: dimension three

In this subsection we study eleven different families in dimension d = 3, which include families of copulas such as Clayton, Frank, Gumbel, Normal, Product and t-Student. We also include the copula M3 and its transformations by increasing and decreasing coordinates, in order to obtain 3-copulas with support on every diagonal of I³; these are examples of singular copulas. We also give a 3-dimensional version of the protocol copula, to see what happens when we have an absolutely continuous 3-copula with support [0, 1/2]³ ∪ (1/2, 1]³. As an interesting reference we also include a function denoted by C = “W3” such that C(2/3, 2/3, 2/3) = 0; that is, it behaves at the point (2/3, 2/3, 2/3) like W3, which is not a 3-copula. Finally, we include two families of mixtures of 3-copulas to study the behavior under asymmetric 3-copulas.

For the product copula Π3 in d = 3, see the first graph of Figure 2.34, the behavior of the difference between the supremum distances is quite similar to the case d = 2 with n = 30, see Figure 2.11. Hence, C^30_m is a better approximation than C_30 for every 2 ≤ m ≤ n = 30. Observe that the variances behave as in dimension 2, see Figure 2.27 and Figure 2.40.

As can be seen in Figures 2.32 through 2.36, the Clayton, Frank, Gumbel, Normal and t-Student cases have similar behavior, and at least in these graphs, C^30_m is a better approximation than C_30 for every 6 ≤ m ≤ n = 30. The same similarity holds for the graphs of the variances in Figure 2.40, Figure 2.41 and Figure 2.42.
In Figure 2.37 we show the results for singular copulas. The first graph corresponds to the copula M3, and the second, third and fourth graphs to M3IID, M3IDI and M3DII; that is, if the vector (U, V, W) has copula M3, then the vector (U, V, 1−W) has copula M3IID, the vector (U, 1−V, W) has copula M3IDI and the vector (1−U, V, W) has copula M3DII. These three copulas have supports on the other three diagonals of the unit cube [0, 1]³, and their existence is guaranteed by an easy extension of Theorem 2.4.4 in [37]; explicit expressions are given below. As in the case of dimension d = 2, the variances in the four cases are zero.

The case that we called protocol consists of a 3-copula with incomplete support given by [0, 1/2]³ ∪ (1/2, 1]³, with uniform masses on each cube. We can see from the first graph in Figure 2.38 that C^30_2 and C^30_4 are better approximations than C_30, but C^30_3 and C^30_5 are not; that is, the oscillating pattern is also present in dimension d = 3. In this case C^30_m is a better approximation than C_30 for every 6 ≤ m ≤ 30 = n. For the variance, see Figure 2.41, this oscillation is also evident, with larger variances for even values of m and smaller variances for odd values of m.

The case Mixture 9 is a convex combination of a Clayton, a Gumbel and a Frank copula. In Figure 2.38 we present the results when the corresponding parameters are 10, 15 and 20, respectively, with weights 0.2, 0.3 and 0.5 in different orders. As we can see, C^30_m is a better approximation than C_30 for every 5 ≤ m ≤ 30 = n. The variances are almost always below the variance of the empirical copula, see Figure 2.42.

The case Mixture 10 corresponds to a convex combination of Gumbel, GumbelIID, GumbelIDI and GumbelDII, defined as in the case of M3 above. This family is highly asymmetric in all directions. We present two cases: the first with parameters 5, 10, 15 and 20 and weights 0.1, 0.2, 0.3 and 0.4 in three different orders, and the second with all parameters equal to 10 and all weights equal to 0.25. The results are presented in Figure 2.39; in these cases C^30_m is a better approximation than C_30 for every 2 ≤ m ≤ 30 = n. The variances are again almost always below the variance of the empirical copula, see Figures 2.40, 2.41 and 2.42. The variance of the function denoted by “W3” is presented in the last graph of Figure 2.40.
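Since under M3 we may take U = V = W uniform on [0, 1], these diagonal copulas can be written down directly; for illustration (our own computation, consistent with the extension of Theorem 2.4.4 mentioned above),

M3IID(u, v, w) = P(U ≤ u, U ≤ v, 1 − U ≤ w) = max{min(u, v) + w − 1, 0},
M3IDI(u, v, w) = max{min(u, w) + v − 1, 0},
M3DII(u, v, w) = max{min(v, w) + u − 1, 0},

each singular with support on one of the non-main diagonals of I³.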
Figure 2.32: Clayton d = 3 for ρ = 0.2955, 0.4784, 0.8848 and 0.9869 with n = 30
Figure 2.33: Frank d = 3 for ρ = 0.1644, 0.6434, 0.8035 and 0.8602 with n = 30
Figure 2.34: Product (ρ = 0) and Gumbel d = 3 for ρ = 0.6828, 0.9430 and 0.9855 with n = 30
Figure 2.35: Normal d = 3 for θ = (0.2, 0.5, 0.7), (−0.6, −0.85, 0.1), (−0.8, −0.8, 0.29) and (0.99, 0.99, 0.99)
Figure 2.36: t-Student d = 3 for θ = (0, 0), (0.5, −0.5), (0.4, −0.6799) and (0.7071, 0) with n = 30
Figure 2.37: M3, M3IID, M3IDI and M3DII with n = 30
Figure 2.38: Protocol and Mixture 9 d = 3 for θ = (10, 15, 20) with different weights, n = 30
Figure 2.39: Mixture 10 d = 3 for θ = (5, 10, 15, 20) and (10, 10, 10, 10) with different weights
Figure 2.40: Variances of Clayton(ρ = 0.8848), Frank(ρ = 0.8035), Product and “W3” with n = 30
Figure 2.41: Variances of Gumbel(ρ = 0.9855), Normal(−0.8, −0.8, 0.29) and the protocol copula with n = 30
Figure 2.42: Variances of t-Student(5), Mix9(10, 15, 20, .3, .2, .5) and Mix10(5, 10, 15, 20, .4, .3, .2, .1)

As we have seen in all these simulations, not only in dimension d = 3 but also in dimension d = 2, we can say that there exists at least one m such that C^n_m is a better approximation than C_n, and that this value of m is smaller than n. The results that we obtained for n = 20, which are not presented here, have many features in common with the case n = 30. It is clear that these results can be extended to higher dimensions.
In fact, the programs that we wrote in the R language can easily be extended to at least dimension d = 5 without any problems.

2.5 A method for estimating an adequate m

From all the examples in Section 2.3 we can observe that even for a small value of n, that is, n = 20 in dimension d = 2, we can find an m with 2 ≤ m ≤ n such that the sample copula of order m, C^n_m, is a better approximation than the empirical copula C_n. In fact, in all cases, we can also find a value of m that minimizes the expected value of the difference between C^n_m and the real copula C; see the minimizing values of m recorded in parentheses in the captions of Figures 2.1 through 2.24. It is quite important to observe that this minimum is attained at m = 2 when the real copula C is “close” to the product copula Π2, and that it increases as the real copula C approaches the Fréchet-Hoeffding bounds M2 and W2; in fact, as observed in Figure 2.13, in these two cases the minimum is attained at m = n. This also holds for Example 3.4 of Nelsen's book, which is a singular copula, see Figure 2.15.

As observed in Section 2.3, in the case of dimension d = 2, when we have an absolutely continuous copula with complete support, which is by the way the most interesting case in applications, there is a function that relates Spearman's ρ to the value of m that minimizes the expected value of the difference between C^n_m and the real copula C. In Figure 2.43 we graph the value of m which minimizes the expected value of the supremum distance between the sample copula C^n_m and the real copula C, for sample sizes n = 20, 30 and n = 50, for the families of absolutely continuous copulas with complete support, which include AMH, Clayton, Gumbel, Frank, Joe, Plackett, Normal, t, PlackettID, PlackettDI, PlackettDD and GumbelDD, for different values of ρ between ρ = 0.1 and ρ = 0.99. As can be seen from these graphs, for fixed ρ the minimizing value of m is quite stable across families and is always smaller than the sample size n. This provides a natural way to define an estimator of the parameter m based on the sample Spearman's ρ.

We now propose a method to estimate m for the sample copula of order m, based on a random sample of size n:

• Obtain the modified sample.
• Graph the points of the modified sample. If the data do not follow a clear pattern indicating that the copula C is a discrete copula, then:
• Estimate Spearman's rho using the sample value ρ̃ of Spearman's ρ.
• If |ρ̃| ≤ 0.25, take m = 2 for any sample size.
• If |ρ̃| ≥ 0.95, take a large value of m, say 10 ≤ m ≤ 15.
• If 0.25 < |ρ̃| < 0.95, take 3 ≤ m ≤ 10, with a linear interpolation depending on the value of ρ̃; a sketch of this rule is given below.

Figure 2.43: Minimizing value of m by family (Cl, AMH, Gu, Fr, J, P, T, N, PID, PDI, PDD, GDD) for ρ = 0.1, 0.25, 0.5, 0.75, 0.9, 0.95 and 0.99, with n = 20, 30 and 50

If the modified sample shows a pattern which indicates a singular support, we may need to take larger values of m, but not larger than 15.
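A possible R implementation of this rule is sketched below; the interpolation between the anchor points (0.25, 3) and (0.95, 10), as well as the choice m = 12 in the strongly dependent case, are our own reading of the bullets above rather than formulas fixed by the method.

# Estimate the order m from the sample value of Spearman's rho.
estimate_m <- function(x, y) {
  rho <- cor(x, y, method = "spearman")
  r <- abs(rho)
  if (r <= 0.25) return(2)
  if (r >= 0.95) return(12)            # the text suggests any value in 10..15
  round(3 + (r - 0.25) / 0.70 * 7)     # linear between (0.25, 3) and (0.95, 10)
}

# Example: strong positive dependence yields a larger m.
set.seed(2)
x <- rnorm(50); y <- x + rnorm(50, sd = 0.2)
estimate_m(x, y)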
We also observe that for d ≥ 3 the same methodology can be used for Archimedean copulas, since all of their bivariate margins have the same parameter and, hence, the same value of Spearman's ρ for each pair of random variables. This behavior was observed in Figures 2.32, 2.33 and 2.34.

For dimension d ≥ 3 we are still working on the possibility of estimating m using the average of the estimated Spearman's ρ over every pair of random variables. Also, for dimension d ≥ 3, we know from all the figures that there exists a value 2 ≤ m₀ ≤ n such that for every m₀ ≤ m ≤ n, C^n_m is a better approximation than C_n; of course, the minimizing value of m is among them. We would like to find a method of estimation of m, given a fixed sample (X_1, . . . , X_n) of size n from an unknown continuous joint distribution function H with copula C, such that m₀ ≤ m ≤ n. Since the copula C is also unknown, we will use all the remarks made above in order to establish a “measure of discrepancy” between the sample and the product copula, which may be used to estimate the value of m. Besides, from Figures 2.32 through 2.42 we observe that all the previous comments apply to n = 30 in dimension d = 3, and they also hold when n = 20.

We start by defining our proposal of the “measure of discrepancy” in any dimension d ≥ 2, for n a fixed positive integer.

Definition 2.18 Let 2 ≤ m ≤ n and let (X_1, . . . , X_n) be a random sample from a random vector X of dimension d ≥ 2 with continuous joint distribution H and d-copula C. Let U_n = (Y_1, . . . , Y_n) be the corresponding modified sample. Define (Q^m_i)_{i∈I^d_m}, (R^m_i)_{i∈I^d_m}, s^{n,(m)}_{i_1,...,i_d}, S^n_m and C^n_m as in equations (15), (16), (17), (18) and (19), respectively. Let Π^d be the product d-copula. Since the sample d-copula of order m, C^n_m, is always a d-copula, define for every (i_1, . . . , i_d) ∈ I^d_m

V_{C^n_m}(Q^m_{i_1,...,i_d}) = s^{n,(m)}_{i_1,...,i_d} and V_{Π^d}(Q^m_{i_1,...,i_d}) = λ_d(Q^m_{i_1,...,i_d}),   (30)

where λ_d is the Lebesgue measure on (R^d, B(R^d)). We define

d_TV(C^n_m, Π^d) = (1/2) ∫_{[0,1]^d} | f_{C^n_m} − f_{Π^d} | dλ_d = (1/2) Σ_{i∈(I_m)^d} | V_{C^n_m}(Q^m_i) − V_{Π^d}(Q^m_i) |.   (31)

We first observe that from equations (30), (17) and (31) we have that

d_TV(C^n_m, Π^d) = (1/2) Σ_{i∈(I_m)^d} | s^{n,(m)}_i − λ_d(Q^m_i) | = (1/2) Σ_{i∈(I_m)^d} | card(R^m_i ∩ U_n)/n − λ_d(Q^m_i) |.   (32)

So, if we assume that n = m^d and that card(R^m_i ∩ U_n) = 1 for every i ∈ I^d_m, we have from equation (32) that d_TV(C^n_m, Π^d) = 0.

Now, assume that the sample of size n is taken from the copula M^d; then clearly the modified sample is given by U_n = (Y_1, . . . , Y_n), where Y_j = (j/n, j/n, . . . , j/n) for every j ∈ {1, 2, . . . , n}. First, if m divides n, then we know from Theorem 2.14, part iii), that (Q^m_i)_{i∈I^d_m} coincides with (R^m_i)_{i∈I^d_m}, so λ_d(Q^m_i) = (1/m)^d for every i ∈ I^d_m. Since the modified sample lies on the main diagonal of the unit d-cube I^d, we have from equation (21) that, for every i ∈ I_m, if R^m_{i=(i,i,...,i)} = [(i−1)/m, i/m]^d, then card(R_i ∩ U_n)/n = (n/m)/n = 1/m. Hence, using equation (32), we have that

d_TV(C^n_m, Π^d) = [ m · (1/m) · (m^{d−1} − 1)/m^{d−1} + (m^d − m)(1/m)^d ] / 2 = 2( (m^{d−1} − 1)/m^{d−1} ) / 2 = 1 − 1/m^{d−1}.

Second, if m does not divide n, then by Theorem 2.14, part i), we have that Q^m_{i=(i,i,...,i)} = [(1/n)⌊((i−1)·n)/m⌋, (1/n)⌊(i·n)/m⌋]^d for every i ∈ I_m. So, in this case, λ_d(Q^m_{i=(i,i,...,i)}) = (1/n^d)(⌊(i·n)/m⌋ − ⌊((i−1)·n)/m⌋)^d and V_{C^n_m}(Q^m_{i=(i,i,...,i)}) = (1/n)(⌊(i·n)/m⌋ − ⌊((i−1)·n)/m⌋) for every i ∈ I_m.
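Equation (32) is immediate to evaluate from the box counts. Below is a minimal R sketch for d = 2, assuming m divides n so that λ₂(Q^m_i) = 1/m²; the function name is ours, and the final line checks the comonotone computation above.

# Total variation distance between C^n_m and Pi_2 from box masses, eq. (32).
dTV_to_product <- function(U, m) {
  i <- pmin(ceiling(m * U[, 1]), m)   # box index of each point, coordinate 1
  j <- pmin(ceiling(m * U[, 2]), m)   # box index, coordinate 2
  s <- as.matrix(table(factor(i, levels = 1:m), factor(j, levels = 1:m))) / nrow(U)
  sum(abs(s - 1 / m^2)) / 2
}

# For a sample from M_2 the modified sample lies on the main diagonal,
# and the computation above gives d_TV = 1 - 1/m.
n <- 20; m <- 4
U <- cbind((1:n) / n, (1:n) / n)
dTV_to_product(U, m)   # 0.75 = 1 - 1/4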
So, using the fact that the sum of all the volumes of the d-boxes in the partition (Q^m_i)_{i∈I^d_m} is one, it is easy to see that

    d_TV(C^n_m, Π^d) = (m^{d−1}/(m^{d−1} − 1)) [ 1 − Σ_{i=1}^{m} (1/n^d)(⌊(i · n)/m⌋ − ⌊((i − 1) · n)/m⌋)^d ]
                     ≤ (m^{d−1}/(m^{d−1} − 1)) (1 − m · (1/m^d)) = 1,

where the inequality follows from a Lagrange multiplier argument applied to the function h(x_1, . . . , x_m) = 1 − (x_1^d + · · · + x_m^d) with conditions x_1 + x_2 + · · · + x_m = 1 and x_1, . . . , x_m ≥ 0. Hence, if we change the denominator 2 by K_{m,d} = 2(1 − 1/m^{d−1}) in equation (31), we have that 0 ≤ d_TV(C^n_m, Π^d) ≤ 1, and this is what we will do in what follows.

In order to see how this density difference works, we show the averages of 10000 simulations when the sample size is n = 100 and we evaluate η_{m,2}(C^{100}_m), the normalized discrepancy just defined, for m between 2 and 10 for several of the families studied in Section 3; we use only values of ρ ≥ 0, but the results for ρ < 0 follow the same pattern.

In Figures 2.44 and 2.45 we have the results for the families Clayton, Frank, Gumbel, Plackett, Normal and t-Student. It can be easily observed that for all these families of copulas, which are absolutely continuous and with complete support, the graphs are almost identical, which indicates that they can be used to estimate the value of the order m of the sample d-copula. We have similar results for d = 3, and we are working on a resampling method to estimate the graph given a fixed sample of size n.

As can be seen in Figure 2.46, the graphs for singular copulas can be quite different from those of the absolutely continuous case, and they depend strongly on the support of the copula. But, as observed above, in these cases we always need larger values of m. Observe that for the last graph, which corresponds to the copula M^2, the graph is the constant 1, as expected.

[Figure 2.44: Averages of η_{m,2} for the Clayton, Frank and Gumbel families with positive ρ (n = 100, N = 10000).]

[Figure 2.45: Averages of η_{m,2} for the Plackett, Normal and t-Student families with positive ρ (n = 100, N = 10000).]

[Figure 2.46: Averages of η_{m,2} for Example 3.3, Example 3.4 and M^2 with positive ρ (n = 100, N = 10000).]

A last very important result is the following. Recall that the total variation distance of two probability measures P and Q on (ℝ^d, B), where B denotes the Borel σ-algebra, with Radon-Nikodym derivatives f_P and f_Q, is defined by

    d_TV(P, Q) = sup_{A∈B} |P(A) − Q(A)| = (1/2) ∫_{ℝ^d} |f_P − f_Q| dλ ≤ 1,

where λ is the Lebesgue measure on (ℝ^d, B). Let us assume that we take a random sample of size n ≥ 2 from Π^d, the independence copula, and that we take m = n in the definition of the sample copula, which corresponds to the linear B-spline copula construction in [43].
Since in this case we are considering the uniform partition of order n of [0, 1]^d, that is, (R^n_i)_{i∈I^d_n}, using part v) of Theorem 2.14 we have that only n of the d-boxes in the uniform partition have density n^{d−1}, and the remaining boxes have density 0. Let J be the subset of n indices i ∈ I^d_n with positive density. Then, using the above definition, we have that

    d_TV(Π^d, C^n_n) = (1/2) ∫_{[0,1]^d} |f_{Π^d} − f_{C^n_n}| dλ
                     = (1/2) [ Σ_{i∈J} ∫_{R^n_i} |1 − n^{d−1}| dλ + Σ_{i∈(I^d_n \ J)} ∫_{R^n_i} |1 − 0| dλ ]
                     = (1/2) [ n(n^{d−1} − 1)/n^d + (n^d − n)/n^d ]
                     = (1/2) [ 1 − 1/n^{d−1} + 1 − 1/n^{d−1} ] = 1 − 1/n^{d−1}.   (33)

The last equality implies that, with probability one, if we let n go to infinity then d_TV(Π^d, C^n_n) ↑ 1. Hence, it can be thought of as an "anti" Glivenko-Cantelli result. Even more, if we let the dimension d go to infinity for any fixed n ≥ 2, then again d_TV(Π^d, C^n_n) ↑ 1. This argument tells us that using m = n is not really a good option at all.

On the other hand, take d = 2 and m = 2, assume that the sample size n is a multiple of m^d = 4, and suppose that we are sampling from Π^2. Then with positive probability ((n/2)!)^4/(n! · ((n/4)!)^4), see [26], each 2-box of the uniform partition of order m = 2 has exactly n/4 points. So, using Theorem 2.14, the density of the sample copula of order 2 is one on each 2-box of the uniform partition, and in this case we have that d_TV(C^n_2, Π^2) = 0. In Figure 2.11 we can see that when n = 20 the minimum value of the simulations attains 0 when m = 2. In fact, it is not difficult to see that 0 ≤ d_TV(C^n_2, Π^2) ≤ 1/2 with probability one. Of course, the above argument also works when d > 2, m ≥ 2 and the sample size n is a multiple of m^d.

Finally, we have observed using simulations that the total variation distance may be used to measure the distance between C^n_m, the sample d-copula of order m, and C^(m), the checkerboard approximation of size m, and that when the sample size increases to infinity as a multiple of m we obtain a Glivenko-Cantelli theorem. Observe that this happens even using the strongest distance, that is, the total variation distance, which in this case is surprisingly easy to evaluate.
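To illustrate how easy the evaluation is, the following is a minimal sketch in R of the discrepancy in equation (32) for the case where m divides n, so that the boxes Q^m_i form the uniform partition of [0, 1]^d and s^{n,(m)}_i = card(R^m_i ∩ U_n)/n. The function name discrepancy and the normalize argument (which replaces the denominator 2 by K_{m,d}) are ours, not the thesis program. With m = n and a sample from Π^2 it reproduces the value 1 − 1/n of equation (33).

    # Sketch: the discrepancy of equation (32) when m divides n, so that the
    # boxes Q^m_i are the uniform m x ... x m partition of [0,1]^d.
    # u: n x d matrix holding the modified sample (ranks divided by n).
    discrepancy <- function(u, m, normalize = TRUE) {
      n <- nrow(u); d <- ncol(u)
      idx <- pmin(ceiling(u * m), m)                     # box coordinate in 1..m per margin
      id  <- as.vector((idx - 1) %*% m^(0:(d - 1))) + 1  # single box label in 1..m^d
      cnt <- tabulate(id, nbins = m^d)                   # points per box, zeros included
      dtv <- sum(abs(cnt / n - m^(-d))) / 2              # equation (32)
      if (normalize) dtv / (1 - 1 / m^(d - 1)) else dtv  # denominator K_{m,d} instead of 2
    }

    # Check of equation (33): with m = n, a sample from Pi^2 gives 1 - 1/n.
    set.seed(1)
    n <- 100
    u <- apply(matrix(runif(2 * n), n, 2), 2, rank) / n  # modified sample
    discrepancy(u, m = n, normalize = FALSE)             # returns 0.99 = 1 - 1/n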
Table 1 Total Variation Distance for AMH (ρ = 0.3451) with m = 10

    n          10        100       500       1000      2000      10000     50000
    mean       0.8874    0.3532    0.1573    0.1108    0.0786    0.0350    0.0156
    variance   0.00015   0.00079   0.00017   0.00008   0.00004   8e-06     1e-06
    minimum    0.8577    0.2605    0.1189    0.0758    0.0582    0.0248    0.0118
    maximum    0.9244    0.4879    0.2037    0.1443    0.1032    0.0436    0.0204

Table 2 Total Variation Distance for Clayton (ρ = 0.9582) with m = 10

    n          10        100       500       1000      2000      10000     50000
    mean       0.5915    0.1757    0.0788    0.0555    0.0389    0.0175    0.0078
    variance   0.00720   0.00090   0.00014   0.00007   0.00003   7e-06     1e-06
    minimum    0.4521    0.1013    0.0459    0.0304    0.02236   0.0104    0.0043
    maximum    0.8579    0.2835    0.1153    0.0847    0.0616    0.0279    0.0124

Table 3 Total Variation Distance for Frank (ρ = −0.64348) with m = 10

    n          10        100       500       1000      2000      10000     50000
    mean       0.8559    0.3316    0.1456    0.1029    0.0730    0.0324    0.0145
    variance   0.00050   0.00072   0.00016   0.00008   0.00005   9e-06     1e-06
    minimum    0.7966    0.2514    0.1019    0.0749    0.0505    0.0240    0.0107
    maximum    0.9254    0.4144    0.1873    0.1325    0.0946    0.0433    0.0192

Table 4 Total Variation Distance for Gumbel (ρ = 0.84816) with m = 10

    n          10        100       500       1000      2000      10000     50000
    mean       0.7539    0.2658    0.1206    0.0854    0.0604    0.0268    0.0119
    variance   0.00268   0.00085   0.00016   0.00007   0.00004   8e-06     1e-06
    minimum    0.6556    0.1887    0.0841    0.0553    0.0407    0.0188    0.0080
    maximum    0.9092    0.3548    0.1659    0.1141    0.0825    0.0363    0.0169

Table 5 Total Variation Distance for Plackett (ρ = −0.90005) with m = 10

    n          10        100       500       1000      2000      10000     50000
    mean       0.7110    0.2351    0.1105    0.0782    0.0552    0.0246    0.0109
    variance   0.00509   0.00082   0.00016   0.00007   0.00004   8e-06     1e-06
    minimum    0.5367    0.1460    0.0715    0.0528    0.0391    0.01644   0.0077
    maximum    0.8907    0.3555    0.1542    0.1017    0.0896    0.0350    0.01570

Table 6 Total Variation Distance for Product d = 2 (ρ = 0) with m = 15

    n          15        225       750       1500      3000      12000     60000
    mean       0.9333    0.3421    0.2067    0.1453    0.1017    0.0509    0.0227
    variance   0         0.00044   0.00011   0.00005   0.00002   7e-06     1e-06
    minimum    0.9333    0.2755    0.1746    0.1253    0.0855    0.0415    0.0187
    maximum    0.9333    0.4044    0.2382    0.1702    0.1180    0.0603    0.0274

Table 7 Total Variation Distance for Product d = 3 (ρ = 0) with m = 2

    n          2         16        48        192       6144      24576     49152
    mean       0.7500    0.1890    0.1150    0.0569    0.0101    0.0050    0.0035
    variance   0         0.00567   0.00191   0.00045   0.00001   3e-06     1e-06
    minimum    0.7500    0         0         0.0104    0.00162   0.0010    0.00038
    maximum    0.7500    0.3750    0.29166   0.1718    0.0273    0.0112    0.0080

The sample d-copula of order m is already a d-copula, which provides a quasi-nonparametric method to estimate a d-copula C. Once m has been chosen, it becomes a nonparametric estimator of the checkerboard approximation C^(m), which is a good estimator of C even for m relatively small. After C^n_m has been constructed, since we know that its density is constant on each of the d-boxes of the partition of [0, 1]^d that it generates, it is absolutely trivial to generate random samples of size N ≥ 2 from it. We have written a program in R that generates these samples. Using this program: first, we generated a sample of size n, which we called the original sample size, denoted by OSS, from the absolutely continuous copulas Clayton, AMH, Gumbel, Plackett, Normal, t-Student and Product, respectively, using different values of Spearman's rho, and from this sample we obtained the original modified sample. Second, for a given value of m, we obtained C^n_m, the sample copula of order m, corresponding to each of the original modified samples. Third, using our program we generated in each case a sample of size N from the copula C^n_m, which we called the simulated sample size, denoted by SSS. A sketch of this sampling step, under the assumption that m divides n, is given below.
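The sampling step itself is straightforward precisely because the density of C^n_m is constant on each box. The following is a minimal sketch in R, again for the case where m divides n and with a function name of our own choosing: selecting one of the n sample points uniformly at random selects its box with probability card(R^m_i ∩ U_n)/n = s^{n,(m)}_i, and the draw is then uniform inside that box.

    # Sketch: N draws from the sample copula C^n_m when m divides n.
    # u: n x d matrix holding the modified sample (ranks divided by n).
    rsamplecopula <- function(u, m, N) {
      n <- nrow(u); d <- ncol(u)
      idx  <- pmin(ceiling(u * m), m)                                # box coordinates in 1..m
      pick <- idx[sample.int(n, N, replace = TRUE), , drop = FALSE]  # box i with probability s_i
      (pick - 1 + matrix(runif(N * d), N, d)) / m                    # uniform point inside the box
    }

When m does not divide n one would instead use the boxes Q^m_i of equations (15) through (19) and their corresponding volumes.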
In Figures 2.47 through Figure 2.53 we took OSS = 5000, SSS = 5000 and m = 50; on the left hand side of each figure we show the original modified sample, and on the right hand side we show the simulated sample. As can be easily seen, both samples are quite similar in all cases, which indicates that C^n_m, the sample copula of order 50, is a really good estimator of the original copula C. In Figure 2.54 we took OSS = 20, m = 20, that is, the case m = n, SSS = 1000, and the sample was taken from Π^2, the product copula. In this case the original modified sample on the left hand side looks like an independent sample, but the simulated sample on the right hand side does not look at all like an independent sample on [0, 1]^2. In this case the sample copula C^20_20 corresponds to the linear B-spline given in [43], which generates poor samples for the independence copula Π^2.

In Figures 2.55, 2.56, 2.57 and 2.58 we sample from M^2, Example 3.3, Example 3.4 and Example 3.5 in Nelsen's book [37]. We took OSS = 5000, SSS = 5000 and m = 50, except in Figure 2.58, in which m = 100. As we can see in all these four figures, the simulated samples replicate the real supports of these four singular copulas. The only difference is that the supports of the simulated samples are slightly enlarged versions (mounted on little squares) of the real supports, due to the definition of the sample copula of order m. However, from looking at these simulated samples it is obvious that we can deduce whether the original copulas are singular.
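For instance, the degenerate behavior of Figure 2.54 can be reproduced with the sketch above; the seed and the commented plotting calls are illustrative only.

    # Reproducing the setting of Figure 2.54: OSS = 20, m = n = 20, SSS = 1000,
    # sampling from the product copula Pi^2 (uses rsamplecopula from above).
    set.seed(2)
    u   <- apply(matrix(runif(2 * 20), 20, 2), 2, rank) / 20  # original modified sample
    sim <- rsamplecopula(u, m = 20, N = 1000)                 # simulated sample
    # plot(u); plot(sim): the simulated points pile up in the 20 occupied
    # boxes of the 20 x 20 grid instead of filling [0,1]^2 uniformly.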
[Figure 2.47: original modified sample (left) vs. simulated sample (right), Clayton copula, ρ = −0.7921, OSS = 5000, SSS = 5000, m = 50.]

[Figure 2.48: original modified sample (left) vs. simulated sample (right), AMH copula, ρ = 0.3451, OSS = 5000, SSS = 5000, m = 50.]

[Figure 2.49: original modified sample (left) vs. simulated sample (right), Gumbel copula, ρ = 0.4412, OSS = 5000, SSS = 5000, m = 50.]

[Figure 2.50: original modified sample (left) vs. simulated sample (right), Plackett copula, ρ = −0.9262, OSS = 5000, SSS = 5000, m = 50.]

[Figure 2.51: original modified sample (left) vs. simulated sample (right), Normal copula, ρ = 0.7341, OSS = 5000, SSS = 5000, m = 50.]
● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● 0.0 0.2 0.4 0.6 0.8 1.0 0. 0 0. 2 0. 4 0. 6 0. 8 1. 0 Simulated Sample T−Student Copula Figure 2.52: ρ = 0.7341, OSS=5000, SSS=5000, m = 50 57 ●●● ●● ●● ● ●● ● ●● ●●● ●●●● ● ●● ● ●●●● ●● ●●● ●● ●●● ● ●● ●● ●● ● ●● ●● ● ● ●●● ● ●● ●● ●● ● ●● ● ●● ●●● ●● ● ● ● ●● ●● ●● ●● ●● ●●●● ● ●●● ●● ●● ● ●● ●●●● ● ●●● ●●● ● ● ●● ●● ●● ●●● ● ● ●● ● ● ●●● ● ●● ● ● ● ●● ●● ●● ●● ●● ●●●● ●● ● ● ● ●●● ● ●● ●●● ● ●● ●● ●● ●● ●●● ●●●● ●● ●● ●● ●● ●●● ●● ● ●● ● ● ●●● ● ●●● ●● ●● ●● ●● ● ●● ●● ●● ●● ● ●●● ● ●●●● ●●● ●● ●●● ● ● ●● ●● ● ●● ● ●● ●●● ●● ●● ●● ● ● ● ●●● ●● ●●● ●● ●● ●●● ●● ●●●● ● ●● ● ●● ● ●●● ● ● ●●● ● ●●●● ● ●● ●● ●● ●● ●● ● ●●● ●● ●●● ●●● ●● ● ●● ●●●● ●●● ●● ●● ●●●● ● ● ●● ●● ●● ● ●● ●●● ● ●● ●●● ●● ●● ●●●● ●● ●● ●● ● ●● ● ●● ●●● ●●●● ● ●● ●● ●●●● ●● ● ● ●●● ●● ● ●● ●● ●● ● ●● ● ●●● ● ●●● ● ●● ●● ● ●●●●●● ●●● ● ●●● ●● ● ●●● ● ●● ●●●● ● ●●●● ●●● ● ●● ●● ●●● ●●● ● ● ●● ● ●●● ●● ● ●● ●● ●●● ● ●●● ● ●●● ●● ●●● ● ● ●●● ●● ● ●● ●●● ● ●● ● ●● ●● ●●● ● ●● ●● ● ●●●● ●●● ● ●● ● ●● ● ●● ●●●● ●●●● ●●●● ●● ●● ● ●●●● ●● ●● ●● ●● ●● ●● ●● ●●● ●● ●●● ●● ●●● ●●● ●●● ● ● ●● ●● ● ●●● ● ●● ● ●●● ● ● ●● ●●● ●● ●● ●● ●● ●● ●●● ●●● ●● ● ● ●● ●●● ●●● ●● ●●● ●●● ● ●● ●●● ● ● ●● ●● ● ●● ● ●● ●● ● ●● ● ●● ● ●●●●● ●● ●●●● ● ●● ●●●● ●● ●● ●● ● ●●●● ● ●● ● ●● ●● ● ●● ●● ●●● ● ●● ●● ● ●● ● ●● ● ●● ● ● ●● ●● ● ●● ● ●●●● ●● ●●● ● ●● ●●● ● ●●● ●● ● ●●●● ●● ●● ●● ●● ●●●● ● ● ●● ●●● ●● ● ●● ● ●● ●●● ●● ● ● ●● ● ●● ●●● ● ●● ●● ●●● ●●● ●●● ● ●● ●● ●●● ● ●● ●● ●●●● ●● ●●● ●● ●●● ●● ● ●●● ● ●●● ●● ●● ●● ●● ●● ● ● ●●● ● ●●● ●● ● ● ●● ● ●● ●●● ●● ●● ●● ●● ●● ●●●● ● ●●● ●● ● ●●● ●●● ●● ●● ● ●● ●●● ● ● ●●●● ● ●● ●● ●● ●●● ●● ● ●● ●● ● ●●● ● ● ●● ●● ●●●● ● ●●● ●● ●● ● ●● ●● ●● ●● ● ●● ●●● ●● ●● ●●● ●● ●●● ●● ● ●● ● ● ●●● ●●● ●● ●● ●●● ●● ●● ● ●●● ●● ●● ● ●●● ●●● ●●●●● ● ●● ●●●● ● ●● ●●●● ● ● ●● ●●● ●●● ● ●●● ●●●● ●● ● ● ●●● ●●●● ●● ● ●●●● ●●● ●● ●●●● ●● ● ●● ●●● ●● ●● ●● ● ●●● ●● ● ● ●● ●● ● ●● ● ● ●● ●● ●● ● ●●●●● ●● ●●● ● ●●● ●●●● ● ●● ● ●● ●●● ● ●●●● ● ●●● ●● ●● ●● ● ●● ●● ●● ●● ●●● ●● ●● ● 
●●● ●● ●● ● ● ● ●● ● ●●● ●● ●●●● ● ●● ● ● ●● ●●● ●● ●● ●● ● ●● ● ●●● ● ●●● ●● ● ●●● ● ●●● ●● ● ● ●●● ● ● ● ●●● ●● ● ●● ● ●●● ● ●● ● ●●● ●● ●●● ● ●● ● ●●● ●● ●● ● ●●●● ● ●● ●● ● ●● ●● ●●● ●● ●● ●●●● ● ● ●● ● ●●● ●● ●● ● ●● ●● ●● ●●● ●●●● ●● ●● ●●●● ●● ●●●●● ●●● ● ●●● ●●●●● ●● ● ●●●● ●● ● ●●●● ●● ●● ●● ● ●●● ●● ●● ● ●●● ● ● ●●● ● ●● ●● ●● ● ● ●●●● ● ● ●●● ●●● ●●● ●● ● ●●● ●● ●●● ● ●●●● ●●●● ● ●●● ●●● ●● ●●● ●● ●● ●● ● ●● ●● ● ●● ●● ●●● ●●●●● ●● ● ●● ●● ●● ● ● ●● ● ●●● ●● ● ●● ● ●●● ●● ● ●●● ● ●●● ●● ●● ● ●● ●●●●●● ●●●● ●●● ●● ●● ● ● ● ●●● ● ●●● ●● ●● ●● ●●● ●●● ●● ● ● ● ●● ●● ●●● ● ●● ●● ●● ●● ● ● ●● ● ● ● ●● ●● ●● ●●●● ● ● ●● ●●● ●● ●● ● ●● ●● ●● ●● ●●●● ● ●● ●● ● ●● ●●● ●● ● ●●● ●●●●● ●● ●●● ●● ●●● ● ●●● ●● ●● ● ● ●●●● ●● ● ●● ● ●●● ●●● ●● ●● ●● ●● ●●●● ● ● ●● ● ● ● ●● ● ● ●● ● ●● ●● ●● ●●●● ● ● ● ●● ●● ●● ●●● ● ● ●●● ●● ●●● ●● ●● ● ●●●● ● ●●● ● ●● ●● ●●● ●●●● ●● ● ●● ●●● ● ●● ●● ●●● ●● ● ●●● ●●●● ● ● ●● ● ●●●● ● ●● ●●● ●● ● ●● ●● ●● ●● ●● ●●● ●●● ● ● ●●● ●●● ● ● ●●● ●● ●● ● ●●● ●● ● ● ● ●● ● ●●●● ●● ●●● ● ●●● ●● ● ● ●● ●●● ●●● ●● ●● ●● ●●●●● ●●●● ●● ●● ● ●●●● ●● ●● ●●● ● ●● ●● ● ●●● ●●● ●●● ●● ●●● ●● ●● ●●● ●● ●●● ● ●● ●●● ●● ●● ●●● ● ●● ● ●● ● ● ● ●● ●● ●● ● ●● ●●●● ● ●● ●●●● ● ●● ●●● ●●● ● ●● ● ●●● ●● ● ● ●● ● ●● ●●●● ●● ● ●●● ●● ● ●●●● ●● ●●● ● ●●● ●●● ●● ●● ●●● ●● ●● ●● ● ●● ● ●●● ● ●● ●●●● ● ●● ●●● ●● ● ●● ● ●● ●● ●●● ●● ● ●● ●● ●●● ●● ●● ●● ●● ● ●● ●●● ● ● ●● ●● ●● ● ●● ● ●● ●● ●● ● ●● ● ●● ●●● ●● ●● ●● ●● ●●● ●● ● ●●● ●● ● ●●● ●●● ● ●● ●●●● ● ●●● ●●● ●●●● ● ●● ●●● ●●●● ●● ●● ● ●●● ● ●●●● ●●●● ●● ●●● ● ●● ● ●● ● ●● ●●● ● ●● ●● ● ●●●● ●● ●●● ● ●●●● ● ●●● ●● ●●●● ● ● ● ●●● ●●● ●● ●● ● ● ●● ●● ●●● ●●● ●● ●● ● ●●● ● ● ●●● ● ● ●● ●● ●● ● ● ●● ●●● ●● ● ●● ●●● ● ●● ● ●● ● ●●● ●●● ●●●● ● ●● ●●● ● ● ●● ●●● ● ●●●● ● ●● ● ●● ●● ● ●● ●●● ●● ●●● ● ●● ●●●● ●● ●● ●●●●●●● ● ●● ● ● ●● ●●● ●● ●● ●●●● ● ●● ●● ●● ● ●● ● ● ●● ●● ●●●●● ● ● ● ● ●● ●● ●●● ● ● ● ● ●●● ●● ●●● ●● ● ●● ●● ● ●● ●● ●● ● ● ●● ● ●● ● ● ●● ●● ●● ● ●● ● ●●●● ● ●●● ●●●● ●● ●● ● ●● ●●● ●●● ●●● ●● ●● ●● ●● ●●● ●● ●● ● ● ● ●●●● ●● ● ●● ●●●● ● ●● ●● ●●● ● ●●● ●● ● ●● ● ●●●● ● ●● ● ● ●●● ●● ● ● ●●● ●●● ●● ●● ● ●● ● ●● ● ●●● ●●● ●● ● ●●● ● ●●● ●● ● ●● ●●● ●●● ●● ● ● ● ●●●● ●● ●●●● ● ● ●● ● ● ● ●●● ●●● ●● ● ●● ●●● ●●● ●●● ●● ●●● ●● ● ●● ●● ●●● ●● ● ●●● ● ●● ● ● ●● ●● ●●● ●●● ●●●●● ● ●● ●●●● ●● ● ● ●● ●● ●● ● ● ●● ● ●●● ● ●●●● ●● ● ●●● ● ● ●●●●● ● ● ●●●●● ●●●● ●● ●●●● ●● ● ●●●● ●● ● ●●●● ● ●● ●● ● ●●● ●● ● ●●● ●●● ●● ●● ●●●●● ●● ●●●● ● ●●●● ● ●●● ●● ●●●● ●● ● ●● ● ● ● ●●● ●●●● ●●● ●●● ●● ●● ●●● ● ●●● ● ●●● ●● ●● ●● ●●●●● ●● ●●●●● ●● ● ●● ● ● ●●● ● ●● ●● ● ●●●● ●● ● ● ● ●●●● ● ●● ●●● ●●● ●●● ● ●● ●●● ●●●● ●●● ● ●●● ● ●●●● ●●● ● ● ●● ●● ● ● ●●●● ● ● ●● ● ● ●● ●● ●● ●● ● ●●● ●●● ●● ●● ●● ● ● ●●● ● ● ●● ● ● ●● ● ● ● ●●● ●● ●●● ●● ●● ●●● ●● ●● ●● ● ●● ●●● ●●●● ● ●●● ● ●●●● ●● ●● ●● ●●●● ●●● ●● ● ●●●● ● ● ●● ●●● ●● ● ●● ●●● ●●●●● ●● ● ●● ●● ● ●● ●●●●● ●● ●●● ● ● ●● ●●●● ● ●● ● ●●● ● ●● ●●● ● ●● ●● ●●●● ●● ●● ●● ●●● ● ●●● ● ●● ●● ●● ●● ●● ● ● ●●● ● ●● ●● ●● ●● ● ●●● ●●● ●● ● ● ●● ●● ● ●●● ●● ● ●● ● ● ●● ●●●●● ●● ●●● ●● ●● ●● ● ●● ●●● ●● ●● ● ●● ●●● ●● ●● ●● ●●● ●●● ●● ●● ●●●● ●● ●● ● ● ●● ● ●● ● ●● ● ●● ● ●●● ● ●● ● ●●●●● ● ●● ●● ●●●● ●● ●● ●●● ●● ●●●●● ●● ●● ●● ●●● ●● ●●● ● ●● ● ● ● ●● ●●● ●● ● ●●● ● ●● ● ●●● ● ●● ● ●●● ●● ●● ●●● ●● ● ● ●●●● ● ●●●● ● ●● ●●● ●● ●●● ●●●● ●● ●● 0.0 0.2 0.4 0.6 0.8 1.0 0. 0 0. 2 0. 4 0. 6 0. 8 1. 
0 Original Sample ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● 
● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● 0.0 0.2 0.4 0.6 0.8 1.0 0. 0 0. 2 0. 4 0. 6 0. 8 1. 0 Simulated Sample Product Copula Figure 2.53: ρ = 0, OSS=5000, SSS=5000, m = 50 ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● 0.0 0.2 0.4 0.6 0.8 1.0 0. 0 0. 2 0. 4 0. 6 0. 8 1. 
0 Original Sample ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● 0.0 0.2 0.4 0.6 0.8 1.0 0. 0 0. 2 0. 4 0. 6 0. 8 1. 0 Simulated Sample Product Copula Figure 2.54: ρ = 0, OSS=20, SSS=1000, m = 20 ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● 0.0 0.2 0.4 0.6 0.8 1.0 0. 0 0. 2 0. 4 0. 6 0. 8 1. 0 Original Sample ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● 0.0 0.2 0.4 0.6 0.8 1.0 0. 0 0. 2 0. 4 0. 6 0. 8 1. 0 Simulated Sample M Copula Figure 2.55: ρ = 1, OSS=5000, SSS=5000, m = 50 58 ● ●●● ●●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●●● ●● ●●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●●●●●● 0.0 0.2 0.4 0.6 0.8 1.0 0. 0 0. 2 0. 4 0. 6 0. 8 1. 0 Original Sample ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● 0.0 0.2 0.4 0.6 0.8 1.0 0. 0 0. 2 0. 4 0. 6 0. 8 1. 
0 Simulated Sample Example 3.3 Copula Figure 2.56: ρ = 0, OSS=5000, SSS=5000, m = 50 ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● 0.0 0.2 0.4 0.6 0.8 1.0 0. 0 0. 2 0. 4 0. 6 0. 8 1. 0 Original Sample ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● 0.0 0.2 0.4 0.6 0.8 1.0 0. 0 0. 2 0. 4 0. 6 0. 8 1. 0 Simulated Sample Example 3.4 Copula Figure 2.57: ρ = 0.75, OSS=5000, SSS=5000, m = 50 ●●●●●●●●●●●●●●●● ●●●●●● ●● ●●●●●● ● ●●●● ●●● ●●●● ●● ●●●●● ●●●● ●●●● ●●● ●● ●●●● ●●●● ●●● ●●● ●●● ●●● ●●● ●●● ●●● ●●● ●●● ●● ●●● ●● ●●● ●● ●●● ●● ●● ●●● ●● ●●● ●● ●● ●●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●● ●●● ●●● ●● ●● ●●● ●● ●● ●●● ●● ●● ●●● ●● ●●● ●●● ●●● ●●● ●●● ●●● ●●● ●●● ●●●● ●●● ●●● ●●●● ●●●● ●●● ●●●●● ●●●●● ● ●●●● ●●●●●● ●●● ●●●●●● ●●●● ●●●●●●●● ●●●●●●●●● ●●● ●●●●●● 0.0 0.2 0.4 0.6 0.8 1.0 0. 0 0. 2 0. 4 0. 6 0. 8 1. 0 Original Sample ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ●● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● 0.0 0.2 0.4 0.6 0.8 1.0 0. 0 0. 2 0. 4 0. 6 0. 8 1. 0 Simulated Sample Example 3.5 Copula Figure 2.58: ρ = 0.286, OSS=5000, SSS=5000, m = 100 59 Original Sample 0.0 0.2 0.4 0.6 0.8 1.0 0. 0 0. 2 0. 4 0. 6 0. 8 1. 
[Figure 2.59: Dimension 3, model = 10, OSS = 2000, SSS = 2000, m = 50. Original sample (left) and simulated sample (right) from the mixture G-GIID-GIDI-GDII copula.]

Finally, in Figure 2.59 we sample in dimension $d = 3$ from model 10, which is a mixture of the Gumbel, GumbelIID, GumbelIDI and GumbelDII copulas and allows different dependencies on the vertices of the cube $[0,1]^3$, with OSS = 2000, SSS = 2000 and $m = 50$; by its shape we call this the piñata copula, and both samples look alike.

From all the figures it is clear that the sample copula of order $m$ is a very good estimator of the original copula $C$ when the sample size $n$ is not too small and the value of the order $m$ is not close to $n$. In general, we recommend taking values of $m \leq n^{1/d}$.

3 Moments of some of the random variables associated with the sample copula

In this section we describe the distributions associated with the counts in the boxes generated by the uniform partition of order $m$ of $I^k = [0,1]^k$. We obtain results similar to the ones in [8]; the identities for permutations and combinations used in this section can be found in [27]. We use the following notation for the $k$-permutations of $n$:
\[
P^n_k = \frac{n!}{(n-k)!}.
\]
Note that our case of study corresponds to observations of the modified sample from a $d$-copula $C$ or from a continuous joint distribution $H$. Let $I_m = \{1, \ldots, m\}$. If we consider the sample without applying the rank transformation, then by [24] and [25], for $m \geq 2$, $d \geq 2$ and every $\mathbf{i} = (i_1, \ldots, i_d) \in I_m^d$, let
\[
R^m_{\mathbf{i}} = \left\langle \frac{i_1 - 1}{m}, \frac{i_1}{m} \right] \times \cdots \times \left\langle \frac{i_d - 1}{m}, \frac{i_d}{m} \right]
\]
denote the boxes of the uniform partition of size $m$ of $I^d = [0,1]^d$, where the symbol "$\langle$" stands for "$($" if $i_j > 1$ and for "$[$" if $i_j = 1$, for all $j$. Then the random vector $\left( N_{\mathbf{i}} \mid \mathbf{i} = (i_1, \ldots, i_d) \in I_m^d \right)$ has a multinomial distribution with parameters $n$ and $p_{\mathbf{i}} = V_C(R^m_{\mathbf{i}})$, the $C$-volume of the region $R^m_{\mathbf{i}}$, for every $\mathbf{i} \in I_m^d$; as noted in [25], it is only necessary to consider $m^d - (d(m-1)+1)$ of these parameters as free, the remaining $d(m-1)+1$ being determined by the properties that a copula satisfies.
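To make this multinomial structure concrete, the following sketch (our own illustration, not part of the thesis simulations; the helper box_counts is hypothetical) draws a sample from the product copula, counts the observations in the boxes of the uniform partition of order $m$, and compares the empirical frequencies with the $C$-volumes, which for the product copula are all equal to $1/m^d$:

```python
import numpy as np

rng = np.random.default_rng(0)

def box_counts(sample, m):
    """Counts of the observations in the m^d boxes of the uniform
    partition of order m; the box index of a coordinate u is ceil(m*u)."""
    idx = np.clip(np.ceil(m * sample), 1, m).astype(int) - 1
    counts = np.zeros((m,) * sample.shape[1], dtype=int)
    for row in idx:
        counts[tuple(row)] += 1
    return counts

n, m, d = 5000, 5, 2
u = rng.random((n, d))            # sample from the product copula
N = box_counts(u, m)              # (N_i) is Multinomial(n, p_i)

print(N.sum() == n)                             # every point is counted
print(np.abs(N / n - 1 / m**d).max() < 0.02)    # frequencies near p_i = 1/m^2
```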
For the case of the original sample, in dimension $d = 2$, we have the following lemma (see [25] for the proof of this result).

Lemma 3.1 Let $m \geq 2$, consider the uniform partition $R_{ij}$, $i, j \in \{1, \ldots, m\}$, of $I^2 = [0,1]^2$, and given a 2-copula $C$ define, for all $i, j \in \{1, \ldots, m\}$,
\[
p_{ij} = V_C(R_{ij}).
\]

[Figure 3.1: Uniform partition in I^2 = [0,1]^2; the probabilities p_{ij} placed on the m x m grid, with the determined entries p_{im}, p_{mj} and p_{mm} shown in red.]

We have the following relations:
\[
0 \leq \sum_{i=1}^{m-1} p_{ij} \leq \frac{1}{m} \quad (j \in \{1, \ldots, m-1\}), \qquad
0 \leq \sum_{j=1}^{m-1} p_{ij} \leq \frac{1}{m} \quad (i \in \{1, \ldots, m-1\}),
\]
\[
1 - \frac{2}{m} \leq \sum_{i,j=1}^{m-1} p_{ij} \leq 1 - \frac{1}{m},
\]
and for $i, j \in \{1, \ldots, m-1\}$
\[
p_{im} = \frac{1}{m} - \sum_{j=1}^{m-1} p_{ij}, \qquad
p_{mj} = \frac{1}{m} - \sum_{i=1}^{m-1} p_{ij}, \qquad
p_{mm} = \frac{1}{m} - \sum_{j=1}^{m-1} p_{mj} = \frac{1}{m} - \sum_{i=1}^{m-1} p_{im}.
\]

Let $n, m \geq 2$, where $m$ divides $n$, and let $R_{ij}$, $i, j \in \{1, \ldots, m\}$, be the uniform partition of $I^2 = [0,1]^2$. Let $N_{ij}$, $i, j \in \{1, \ldots, m\}$, be the random variables that indicate the number of observations in the regions $R_{ij}$, respectively; then the distribution of the random vector $(N_{ij}, \; i, j \in \{1, \ldots, m\})$ is multinomial with parameters $n$ and $p_{ij}$, $i, j \in \{1, \ldots, m\}$. It is important to note that only the $(m-1)^2$ probabilities $p_{ij}$, $i, j \in \{1, \ldots, m-1\}$, are free; the remaining probabilities can be written in terms of them (in Figure 3.1 they appear in red).

In dimension $d = 3$ we have the following lemma (the proof of this result can be found in [25]).

Lemma 3.2 Let $m \geq 2$, consider the uniform partition $R_{ijk}$, $i, j, k \in \{1, \ldots, m\}$, of the unit cube $I^3 = [0,1]^3$, and given a 3-copula $C$ define
\[
p_{ijk} = V_C(R_{ijk}).
\]

[Figure 3.2: Uniform partition in I^3 = [0,1]^3; the probabilities p_{ijk} placed on the m x m x m grid, with the regions determined by the remaining ones marked in red.]

We have the following relations. For $i \in \{1, \ldots, m-1\}$,
\[
0 \leq \sum_{(j,k) \in \{1,\ldots,m\}^2 \setminus \{(m,m)\}} p_{ijk} \leq \frac{1}{m};
\]
for $j \in \{1, \ldots, m-1\}$,
\[
0 \leq \sum_{(i,k) \in \{1,\ldots,m\}^2 \setminus \{(m,m)\}} p_{ijk} \leq \frac{1}{m};
\]
for $k \in \{1, \ldots, m-1\}$,
\[
0 \leq \sum_{(i,j) \in \{1,\ldots,m\}^2 \setminus \{(m,m)\}} p_{ijk} \leq \frac{1}{m}.
\]
If we define
\[
I^d_{m,r} = \{(i_1, \ldots, i_d) \in \{1, \ldots, m\}^d \mid (d-r) \text{ coordinates equal to } m\},
\]
it also holds that
\[
2 - \frac{3}{m} \leq \sum_{r=2}^{3} \left[ (r-1) \sum_{(i,j,k) \in I^3_{m,r}} p_{ijk} \right] \leq 2 - \frac{2}{m}.
\]

We observe that the probabilities $p_{1mm}, \ldots, p_{m-1,m,m}$, $p_{m1m}, \ldots, p_{m,m-1,m}$, $p_{mm1}, \ldots, p_{m,m,m-1}$ and $p_{mmm}$ can be written in terms of the other probabilities $p_{ijk}$; for example,
\[
p_{1mm} = \frac{1}{m} - \sum_{(j,k) \in \{1,\ldots,m\}^2 \setminus \{(m,m)\}} p_{1jk}
\quad \text{and} \quad
p_{mmm} = \frac{3}{m} - 2 + \sum_{r=2}^{3} \left[ (r-1) \sum_{(i,j,k) \in I^3_{m,r}} p_{ijk} \right].
\]

Similarly to the case $d = 2$, let $n, m \geq 2$, where $m$ divides $n$, and let $R_{ijk}$, $i, j, k \in \{1, \ldots, m\}$, be the regions of the uniform partition of the unit cube $I^3 = [0,1]^3$. Let $N_{ijk}$, $i, j, k \in \{1, \ldots, m\}$, be the random variables that indicate the number of observations in the regions $R_{ijk}$, respectively; then the distribution of the random vector $(N_{ijk}, \; i, j, k \in \{1, \ldots, m\})$ is multinomial with parameters $n$ and $p_{ijk}$, $i, j, k \in \{1, \ldots, m\}$. Regarding the probabilities $p_{ijk}$, it is only necessary to consider $m^3 - (3m-2)$ of them as free parameters (in Figure 3.2 the regions determined by the remaining ones are marked in red).
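The relations of Lemma 3.1 are easy to verify numerically for any explicit copula. The following sketch (our own check; the Clayton family is just a convenient choice here, not one this chapter singles out) computes the $C$-volumes $p_{ij}$ of the boxes of the uniform partition and confirms that every row and column of the grid carries total mass $1/m$:

```python
import numpy as np

def clayton(u, v, theta=2.0):
    """Clayton 2-copula C(u,v) = (u^-t + v^-t - 1)^(-1/t); C is 0 on the axes."""
    with np.errstate(divide="ignore"):
        return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

m = 5
g = np.linspace(0.0, 1.0, m + 1)               # grid 0, 1/m, ..., 1
U2, V2 = np.meshgrid(g[1:], g[1:], indexing="ij")
U1, V1 = np.meshgrid(g[:-1], g[:-1], indexing="ij")
# p_ij = V_C(R_ij), the C-volume of each box of the uniform partition
p = clayton(U2, V2) - clayton(U2, V1) - clayton(U1, V2) + clayton(U1, V1)

print(np.allclose(p.sum(axis=1), 1 / m))       # hence p_im = 1/m - sum_{j<m} p_ij
print(np.allclose(p.sum(axis=0), 1 / m))       # hence p_mj = 1/m - sum_{i<m} p_ij
print(np.isclose(p.sum(), 1.0))                # the p_ij form a probability vector
```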
The distributions described in the above lemmas correspond to the case where we consider the original sample, that is, the sample without the rank transformation. Next we present the procedure to count the observations in the boxes when the modified sample is considered.

Remark 3.3 In the grid of $I^2 = [0,1]^2$ generated by the partitions $P_x = \{0, 1/n, \ldots, (n-1)/n, 1\}$ in the first coordinate $X$ and $P_y = \{0, 1/n, \ldots, (n-1)/n, 1\}$ in the second coordinate $Y$, there exist $n!$ different ways in which the $n$ rank statistics of a sample of size $n$ can be observed, because configurations in which two points share the same value in some coordinate are not considered; the rank-transformed points occupy exactly one row and one column each, so a configuration corresponds to a permutation of $\{1, \ldots, n\}$.

[Figure 3.3: Grid in I^2 = [0,1]^2. A point that should not be considered is shown in red, because it has the same second coordinate as another point (marked in blue).]

We consider $n \geq m \geq 2$, where $m$ divides $n$, and $l = n/m$. Let $R_{11} = [0, l/n]^2$ and let $N_{11}$ be the random variable that indicates the number of observations $k$ in $R_{11}$, with $k \in \{0, 1, \ldots, l\}$ (see Figure 3.4).

[Figure 3.4: The region R_{11} = [0, l/n]^2 = [0, 1/m]^2.]

To calculate $P\{N_{11} = k\}$, with $k \in \{0, 1, \ldots, l\}$, we first count the number of ways of selecting the first coordinates ($X$) of these $k$ observations; this count corresponds to $\binom{l}{k}$ (see Figure 3.5).

[Figure 3.5: We select k possibilities (marked in red) among the l total options, regardless of the order.]

Subsequently, we count the number of ways of selecting the second coordinates ($Y$) of these $k$ observations; this count is given by $P^l_k$ (see Figure 3.6).

[Figure 3.6: We select k possibilities (marked in red) among the l total options, considering the order.]

Let $\hat{R}_{12} = [0, l/n] \times (l/n, 1]$. The first coordinates ($X$) of the $l-k$ points that we must see in the region $\hat{R}_{12}$ can be selected in $\binom{l-k}{l-k} = 1$ way, and their second coordinates ($Y$) can be selected in $P^{n-l}_{l-k}$ different ways (see Figure 3.7).

[Figure 3.7: The regions R_{11} and \hat{R}_{12}.]

Let $\hat{R}_{22} = (l/n, 1] \times [0,1]$. The $n-l$ points that we must see in this region can appear in $(n-l)!$ different ways (see Figure 3.8).

[Figure 3.8: The regions R_{11}, \hat{R}_{12} and \hat{R}_{22}.]

From these observations, the number of ways in which $k$ points can be observed in the region $R_{11} = [0, l/n]^2$, $l-k$ points in the region $\hat{R}_{12} = [0, l/n] \times (l/n, 1]$ and $n-l$ points in the region $\hat{R}_{22} = (l/n, 1] \times [0,1]$ is
\[
\binom{l}{k} P^l_k \binom{l-k}{l-k} P^{n-l}_{l-k} (n-l)!
\]
out of $n!$ possibilities. The counting procedure described above can be generalized to higher dimensions by considering, in the counting process, permutations of the different coordinates with respect to the first coordinate axis.

3.1 Case: dimension two when m divides n

In this part we present the distribution and moments of the random variables associated with the counts in the boxes generated by the uniform partition of size $m$, with $m \geq 2$, of $I^2 = [0,1]^2$, induced by the modified sample from the product copula.

Definition 3.4 Let $m \geq 2$, $n \in \mathbb{N}$, where $m$ divides $n$, and $l = n/m$. We define the following regions in the unit square $I^2 = [0,1]^2$:
\[
R_{11} = [0, l/n] \times [0, l/n], \qquad
R_1 = [0, l/n] \times (l/n, 1], \qquad
R = (l/n, 1] \times [0,1].
\]

Remark 3.5 In the following results we consider that the unit square $I^2$ contains $n$ points corresponding to the rank transformation of a sample of size $n$ from the product copula.
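The modified sample itself is straightforward to build. The following small sketch (added for illustration; modified_sample is our own hypothetical helper) applies the rank transformation and checks that the resulting points occupy one row and one column each of the grid of Remark 3.3:

```python
import numpy as np

rng = np.random.default_rng(1)

def modified_sample(x):
    """Rank transformation: replace each coordinate by rank/n, so the points
    live on the grid {1/n, ..., 1}^2, one per row and one per column."""
    ranks = x.argsort(axis=0).argsort(axis=0) + 1   # ranks 1..n per column
    return ranks / x.shape[0]

n, m = 20, 4
l = n // m
x = rng.random((n, 2))              # sample of size n from the product copula
u = modified_sample(x)

grid = [k / n for k in range(1, n + 1)]
print(sorted(u[:, 0]) == grid and sorted(u[:, 1]) == grid)  # a permutation
N11 = np.sum((u[:, 0] <= l / n) & (u[:, 1] <= l / n))       # count in R_{11}
print(0 <= N11 <= l)
```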
The following results describe the probability distribution of the random variables associated with the counts of observations in the boxes generated by the uniform partition.

Lemma 3.6 Let $k_1$ be the number of points in the region $R_{11}$ and $x_1$ the number of points in the region $R_1$. Then
\[
\sum_{k_1 + x_1 = l} \binom{l}{k_1} P^l_{k_1} \binom{l-k_1}{x_1} P^{n-l}_{x_1} = P^n_l.
\]

Proof: We can observe that
\[
\binom{l-k_1}{x_1} = \binom{x_1}{x_1} = 1,
\]
then
\begin{align*}
\sum_{k_1 + x_1 = l} \binom{l}{k_1} P^l_{k_1} \binom{l-k_1}{x_1} P^{n-l}_{x_1}
&= \sum_{k_1 + x_1 = l} \binom{l}{k_1} P^l_{k_1} P^{n-l}_{x_1}
= \sum_{k_1 + x_1 = l} \frac{l!}{k_1!(l-k_1)!} \frac{l!}{(l-k_1)!} \frac{(n-l)!}{(n-l-x_1)!} \\
&= \sum_{k_1 + x_1 = l} \frac{l!}{k_1!(l-k_1)!} \frac{l!}{x_1!} \frac{(n-l)!}{(n-l-x_1)!}
= l! \sum_{k_1 + x_1 = l} \binom{l}{k_1} \binom{n-l}{x_1} \\
&= l! \binom{n}{l} \quad \text{(Vandermonde's identity)} \\
&= P^n_l. \qquad \blacksquare
\end{align*}

Lemma 3.7 Let $N_{11}$ be the random variable that indicates the number of observations falling in the region $R_{11}$. Then the following equality holds:
\[
P\{N_{11} = k_1\} = \frac{l!(n-l)!}{n!} \binom{l}{k_1} \binom{n-l}{l-k_1} = \frac{\binom{l}{k_1}\binom{n-l}{l-k_1}}{\binom{n}{l}},
\]
and the random variable $N_{11}$ has a hypergeometric distribution with parameters: $n$ the population size, $l$ the class size and $l$ the sample size.

Proof: From the proof of Lemma 3.6, we have
\[
\sum_{k_1 + x_1 = l} \binom{l}{k_1} P^l_{k_1} \binom{l-k_1}{x_1} P^{n-l}_{x_1} = l! \sum_{k_1 + x_1 = l} \binom{l}{k_1} \binom{n-l}{x_1};
\]
this equality gives the number of ways in which we can have $k_1 + x_1$ points in the region $R_{11} \cup R_1$. Fixing the value $k_1$, this count is multiplied by $(n-l)!$ (the number of ways in which we can have $n-l$ points in the region $R$, discarding the $l$ possibilities corresponding to the coordinates occupied by the observations in $R_{11} \cup R_1$) and divided by $n!$, the total number of ways in which we can observe $n$ points in $I^2 = [0,1]^2$. $\blacksquare$

Then
\[
\sum_{k_1=0}^{l} P\{N_{11} = k_1\}
= \sum_{k_1=0}^{l} \frac{l!(n-l)!}{n!} \binom{l}{k_1} \binom{n-l}{l-k_1}
= \frac{(n-l)!}{n!}\, l! \sum_{k_1=0}^{l} \binom{l}{k_1} \binom{n-l}{l-k_1}
= \frac{(n-l)!}{n!} P^n_l = 1.
\]

Theorem 3.8 Let $n$ and $m$ be integers greater than or equal to two, where $m$ divides $n$, and $l = n/m$. Let $N_{11}$ be the random variable indicating the number of observations falling in $R_{11}$ when we take a sample of size $n$ from the product copula. Then
\[
E(N_{11}/n) = \frac{1}{m^2}, \qquad
E((N_{11}/n)^2) = \frac{l^2(l-1)^2}{n^3(n-1)} + \frac{1}{nm^2}
\]
and
\[
\mathrm{Var}(N_{11}/n) = \frac{l^2(l-1)^2}{n^3(n-1)} + \frac{1}{nm^2} - \left(\frac{1}{m^2}\right)^2.
\]

Proof: From Lemma 3.7, we have
\begin{align*}
E(N_{11}) &= \sum_{k_1=0}^{l} k_1 \frac{l!(n-l)!}{n!} \binom{l}{k_1} \binom{n-l}{l-k_1}
= \frac{l!\, l\, (n-l)!}{n!} \sum_{k_1=1}^{l} \binom{l-1}{k_1-1} \binom{n-l}{l-k_1} \\
&= \frac{l!\, l\, (n-l)!}{n!} \sum_{u_1=0}^{l-1} \binom{l-1}{u_1} \binom{n-l}{l-1-u_1} \quad (u_1 = k_1 - 1) \\
&= \frac{l!\, l\, (n-l)!}{n!} \binom{n-1}{l-1} \quad \text{(Vandermonde's identity)} \\
&= \frac{l^2}{n} = \frac{n}{m^2},
\end{align*}
and $E(N_{11}/n) = 1/m^2$. Then,
\begin{align*}
E(N_{11}^2) &= \sum_{k_1=0}^{l} k_1^2 \frac{l!(n-l)!}{n!} \binom{l}{k_1} \binom{n-l}{l-k_1}
= \sum_{k_1=0}^{l} \left[ k_1(k_1-1) + k_1 \right] \frac{l!(n-l)!}{n!} \binom{l}{k_1} \binom{n-l}{l-k_1} \\
&= \sum_{k_1=2}^{l} \frac{l!(n-l)!}{n!}\, l(l-1) \binom{l-2}{k_1-2} \binom{n-l}{l-k_1} + \frac{n}{m^2} \\
&= \frac{l!(n-l)!}{n!}\, l(l-1) \sum_{u_1=0}^{l-2} \binom{l-2}{u_1} \binom{n-l}{(l-2)-u_1} + \frac{n}{m^2} \quad (u_1 = k_1 - 2) \\
&= \frac{l!(n-l)!}{n!}\, l(l-1) \binom{n-2}{l-2} + \frac{n}{m^2}
= \frac{l^2(l-1)^2}{n(n-1)} + \frac{n}{m^2}.
\end{align*}
Therefore
\[
E((N_{11}/n)^2) = \frac{l^2(l-1)^2}{n^3(n-1)} + \frac{1}{nm^2}
\]
and
\[
\mathrm{Var}(N_{11}/n) = E((N_{11}/n)^2) - (E(N_{11}/n))^2
= \frac{l^2(l-1)^2}{n^3(n-1)} + \frac{1}{nm^2} - \left(\frac{1}{m^2}\right)^2. \qquad \blacksquare
\]
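Since Lemma 3.7 identifies $N_{11}$ as hypergeometric, Theorem 3.8 can be cross-checked against a standard implementation; a minimal sketch we add, using scipy:

```python
import numpy as np
from scipy.stats import hypergeom

n, m = 60, 4
l = n // m

# Lemma 3.7: N11 ~ Hypergeometric(population n, class size l, sample size l).
dist = hypergeom(n, l, l)

mean_thm = 1 / m**2                                              # E(N11/n)
second_thm = l**2 * (l - 1) ** 2 / (n**3 * (n - 1)) + 1 / (n * m**2)

print(np.isclose(dist.mean() / n, mean_thm))                     # first moment
print(np.isclose((dist.var() + dist.mean() ** 2) / n**2, second_thm))  # second moment
```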
Remark 3.9 To evaluate the covariance, we consider two cases, corresponding to the relative position of the boxes in the square $I^2 = [0,1]^2$, as illustrated in Figure 3.9.

[Figure 3.9: Covariance cases for d = 2: adjacent boxes R_{11} and R_{12} sharing the strip [0, 1/m] (first case), and boxes R_{11} and R_{22} with no common strip (second case).]

The results presented below correspond to the calculation of the covariances for the first case.

Definition 3.10 Let $r \in \{2, \ldots, m\}$. We define the following region in the unit square $I^2 = [0,1]^2$:
\[
R_{1r} = [0, l/n] \times \left( \frac{l}{n}(r-1), \frac{l}{n}r \right].
\]

Lemma 3.11 Let $k_1$ be the number of points in $R_{11}$, let $k_2$ be the number of points in $R_{1r}$, $r \in \{2, \ldots, m\}$, and let $x_1$ be the number of points in $([0, l/n] \times [0,1]) \setminus (R_{11} \cup R_{1r})$. Then
\[
\sum_{k_1 + k_2 + x_1 = l} \binom{l}{k_1} P^l_{k_1} \binom{l-k_1}{k_2} P^l_{k_2} \binom{l-k_1-k_2}{x_1} P^{n-2l}_{x_1} = P^n_l.
\]

Proof: We have that
\[
\binom{l-k_1-k_2}{x_1} = \binom{x_1}{x_1} = 1
\]
and
\begin{align*}
\sum_{k_1+k_2+x_1=l} \binom{l}{k_1} P^l_{k_1} \binom{l-k_1}{k_2} P^l_{k_2} \binom{l-k_1-k_2}{x_1} P^{n-2l}_{x_1}
&= \sum_{k_1+k_2+x_1=l} \binom{l}{k_1} P^l_{k_1} \binom{l-k_1}{k_2} P^l_{k_2} P^{n-2l}_{x_1} \\
&= \sum_{k_1+k_2+x_1=l} \frac{l!}{k_1!(l-k_1)!} \frac{l!}{(l-k_1)!} \frac{(l-k_1)!}{k_2!(l-k_1-k_2)!} \frac{l!}{(l-k_2)!} \frac{(n-2l)!}{(n-2l-x_1)!} \\
&= l! \sum_{k_1+k_2+x_1=l} \binom{l}{k_1} \binom{l}{k_2} \binom{n-2l}{x_1} \\
&= l! \binom{n}{l} = P^n_l. \qquad \blacksquare
\end{align*}

Lemma 3.12 Let $N_{11}$ be the random variable indicating the number of observations falling in $R_{11}$ and let $N_{1r}$, $r \in \{2, \ldots, m\}$, be the random variable indicating the number of observations falling in $R_{1r}$. Then
\[
P\{N_{11} = k_1, N_{1r} = k_2\} = \frac{l!(n-l)!}{n!} \binom{l}{k_1} \binom{l}{k_2} \binom{n-2l}{l-k_1-k_2}.
\]

Proof: We consider
\[
l! \sum_{k_1+k_2+x_1=l} \binom{l}{k_1} \binom{l}{k_2} \binom{n-2l}{x_1} = P^n_l
\]
from the proof of Lemma 3.11. Using the notation of Definition 3.4, this equality gives the number of ways in which we can have $k_1 + k_2 + x_1$ points in the region $R_{11} \cup R_1$. Similarly to Lemma 3.7, fixing the values $k_1$ and $k_2$, this count is multiplied by $(n-l)!$ (the number of ways in which we can have $n-l$ points in the region $R$, discarding the $l$ possibilities corresponding to the coordinates occupied by the observations in $R_{11} \cup R_1$) and divided by $n!$, the total number of ways in which we can observe $n$ points in $I^2 = [0,1]^2$. $\blacksquare$

We can observe that
\[
\sum_{k_1+k_2 \leq l} P\{N_{11} = k_1, N_{1r} = k_2\}
= \frac{(n-l)!}{n!}\, l! \sum_{k_1+k_2+x_1=l} \binom{l}{k_1} \binom{l}{k_2} \binom{n-2l}{x_1}
= \frac{(n-l)!}{n!} P^n_l = 1.
\]

Theorem 3.13 Let $n$ and $m$ be integers greater than or equal to two, where $m$ divides $n$, and $l = n/m$. Let $N_{11}$ be the random variable indicating the number of observations falling in $R_{11}$ (Definition 3.4) and let $N_{1r}$, $r \in \{2, \ldots, m\}$, be the random variable indicating the number of observations falling in $R_{1r}$ (Definition 3.10), when we consider a sample of size $n$ from the product copula. Then
\[
\mathrm{Cov}(N_{11}/n, N_{1r}/n) = \frac{l^3(l-1)}{n^3(n-1)} - \left(\frac{1}{m^2}\right)^2.
\]

Proof:
\begin{align*}
E(N_{11} N_{1r}) &= \sum k_1 k_2 \frac{l!(n-l)!}{n!} \binom{l}{k_1} \binom{l}{k_2} \binom{n-2l}{l-k_1-k_2} \\
&= \frac{l!(n-l)!}{n!} \sum_{x_1=0}^{l-2} \binom{n-2l}{x_1} \sum_{k_1+k_2=l-x_1} k_1 k_2 \binom{l}{k_1} \binom{l}{k_2} \\
&= \frac{l!(n-l)!\, l^2}{n!} \sum_{x_1=0}^{l-2} \binom{n-2l}{x_1} \sum_{(k_1-1)+(k_2-1)=(l-2)-x_1} \binom{l-1}{k_1-1} \binom{l-1}{k_2-1} \\
&= \frac{l!(n-l)!\, l^2}{n!} \sum_{x_1=0}^{l-2} \binom{n-2l}{x_1} \binom{2l-2}{(l-2)-x_1} \\
&= \frac{l!(n-l)!\, l^2}{n!} \binom{n-2}{l-2}
= \frac{l!\, l^2 (n-l)!}{n!} \frac{(n-2)!}{(l-2)!(n-l)!}
= \frac{l^3(l-1)}{n(n-1)},
\end{align*}
and
\[
\mathrm{Cov}(N_{11}/n, N_{1r}/n) = E((N_{11}/n)(N_{1r}/n)) - E(N_{11}/n)E(N_{1r}/n)
= \frac{l^3(l-1)}{n^3(n-1)} - \left(\frac{1}{m^2}\right)^2. \qquad \blacksquare
\]
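For the product copula the modified sample is simply a uniform random permutation, so Theorem 3.13 is easy to probe by Monte Carlo; a sketch we add (here with $r = 2$):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, reps = 40, 4, 200_000
l = n // m

c11 = np.empty(reps)
c12 = np.empty(reps)
for t in range(reps):
    sigma = rng.permutation(n) + 1          # ranks of the second coordinate
    rows = sigma[:l]                        # points with first coordinate <= l/n
    c11[t] = np.sum(rows <= l)              # N11
    c12[t] = np.sum((rows > l) & (rows <= 2 * l))   # N12
cov_mc = np.cov(c11 / n, c12 / n)[0, 1]

cov_thm = l**3 * (l - 1) / (n**3 * (n - 1)) - 1 / m**4
print(cov_mc, cov_thm)                      # the two values should be close
```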
Proof: We have that ( l − k1 − x1 x2 ) = 1, ( l − y1 − k2 y2 ) = 1 and ∑ k1+x1+x2=l ∑ k2+y1+y2=l ( l k1 ) Pl k1 ( l − k1 x1 ) Pl x1 Pn−2l x2 ( l y1 ) Pl−k1 y1 ( l − y1 k2 ) P l−x1 k2 Pn−2l−x2 y2 = (l!)2 ∑ k1+x1+x2=l ∑ k2+y1+y2=l ( l k1 )( l x1 )( n − 2l x2 )( l − k1 y1 )( l − x1 k2 )( n − 2l − x2 y2 ) = (l!)2 ∑ k1+x1+x2=l ( l k1 )( l x1 )( n − 2l x2 )( n − k1 − x1 − x2 l ) = (l!)2 ( n − l l ) ∑ k1+x1+x2=l ( l k1 )( l x1 )( n − 2l x2 ) = (l!)2 ( n − l l )( n l ) = Pn−l l Pn l .  Lemma 3.15 Let N11 be the random variable indicating the number of observations falling in R11 (Definition 3.4) and N22 the random variable indicating the number of observations falling in R22 = (l/n, 2l/n] × (l/n, 2l/n] then P{N11 = k1,N22 = k2} = (l!)2(n − 2l)! n! ∑ x1+x2=l−k1 ∑ y1+y2=l−k2 ( l k1 )( l x1 )( n − 2l x2 )( l − k1 y1 )( l − x1 k2 )( n − 2l − x2 y2 ) Proof: This equality is obtained from the proof Lemma 3.14, setting the values k1 and k2, and multiplying by (n−2l)! (number of ways that we can have n−2l points in the region (2l/n, 1]×[0, 1] discarding 2l possibilities corresponding to the coordinates occupied by the observations in the region [0, 2l/n] × [0, 1]) and divided by n!, the total number of ways that we can observe n points in I2 = [0, 1]2. 74 We can note that l ∑ k1=0 l ∑ k2=0 P{N11 = k1,N22 = k2} = (n − 2l)! n! Pn−l l Pn l = 1.  Theorem 3.16 Let n and m be integers greater than or equal to two, where m divides n, and l = n/m, with the same hypothesis of the Lemma 3.15 we have Cov(N11/n,N22/n) = l4 n3(n − 1) − 1 m4 . Proof: E(N11N22) = (l!)2(n − 2l)! n! ∑ k1+x1+x2=l k1 ( l k1 )( l x1 )( n − 2l x2 ) ∑ k2+y1+y2=l k2 ( l − k1 y1 )( l − x1 k2 )( n − 2l − x2 y2 ) = (l!)2l(n − 2l)! n! ∑ (k1−1)+x1+x2=l−1 ( l − 1 k1 − 1 )( l x1 )( n − 2l x2 ) ·(l − x1) ∑ (k2−1)+y1+y2=l−1 ( l − k1 y1 )( l − x1 − 1 k2 − 1 )( n − 2l − x2 y2 ) = (l!)2l(n − 2l)! n! ∑ (k1−1)+x1+x2=l−1 ( l − 1 k1 − 1 )( l x1 )( n − 2l x2 ) (l − x1) ( n − 1 − l l − 1 ) = (l!)2l2(n − 2l)! n! ( n − 1 − l l − 1 ) ∑ (k1−1)+x1+x2=l−1 ( l − 1 k1 − 1 )( l x1 )( n − 2l x2 ) − (l!)2l2(n − 2l)! n! ( n − 1 − l l − 1 ) ∑ (k1−1)+(x1−1)+x2=l−2 ( l − 1 k1 − 1 )( l − 1 x1 − 1 )( n − 2l x2 ) = (l!)2l2(n − 2l)! n! ( n − 1 − l l − 1 ) [( n − 1 l − 1 ) − ( n − 2 l − 2 )] = (l!)2l2(n − 2l)! n! (n − 1 − l)! (n − 2l)!(l − 1)! ( (n − 1)! (l − 1)!(n − l)! − (n − 2)! (l − 2)!(n − l)! ) = (l!)2l2(n − 1)!(n − 1 − l)! n!((l − 1)!)2(n − l)! − (l!)2l2(n − 2)!(n − 1 − l)! n!(l − 1)!(l − 2)!(n − l)! = l4 n(n − l) − l4(l − 1) n(n − 1)(n − l) 75 = l4 n(n − 1) and E((N11/n)(N22/n)) = l4 n3(n − 1) . Therefore Cov(N11/n,N22/n) = E((N11/n)(N22/n)) − E(N11/n)E(N22/n) = l4 n3(n − 1) − 1 m4 .  We finalized this section with a theorem that indicates the joint probability distribution of the boxes generated for the uniform partition of size m, with m ≥ 2, when m divides n. Theorem 3.17 Let m ≥ 2, n ∈ N, where m divides n, l = n/m and Im = {1, · · ·m}, let Ri j, with i, j ∈ Im, be the boxes of the uniform partition of size m of I2 = [0, 1]2 and let Ni j be the random variables that indicate the number of observations falling in Ri j, respectively, for all i, j ∈ Im, when we consider a sample of size n from the product copula; let ni j, with i, j ∈ Im, be zero or a positive integer, satisfying the following restrictions m ∑ j=1 ni j = l (for all i ∈ Im), m ∑ i=1 ni j = l (for all j ∈ Im), then P        ⋂ i, j∈Im {Ni j = ni j}        = (l!)2m n! ∏ i j∈Im ni j! . 
We finalize this section with a theorem that gives the joint probability distribution of the counts of the boxes generated by the uniform partition of size $m$, with $m \geq 2$, when $m$ divides $n$.

Theorem 3.17 Let $m \geq 2$, $n \in \mathbb{N}$, where $m$ divides $n$, $l = n/m$ and $I_m = \{1, \ldots, m\}$. Let $R_{ij}$, with $i, j \in I_m$, be the boxes of the uniform partition of size $m$ of $I^2 = [0,1]^2$ and let $N_{ij}$ be the random variables that indicate the number of observations falling in $R_{ij}$, respectively, for all $i, j \in I_m$, when we consider a sample of size $n$ from the product copula. Let $n_{ij}$, with $i, j \in I_m$, be nonnegative integers satisfying the restrictions
\[
\sum_{j=1}^{m} n_{ij} = l \;\; (\text{for all } i \in I_m), \qquad
\sum_{i=1}^{m} n_{ij} = l \;\; (\text{for all } j \in I_m).
\]
Then
\[
P\left( \bigcap_{i,j \in I_m} \{N_{ij} = n_{ij}\} \right) = \frac{(l!)^{2m}}{n! \prod_{i,j \in I_m} n_{ij}!}.
\]

Proof: Proceeding as in Remark 3.3, first for the region $R_{11}$, second for the region $R_{12}$, and successively for the regions $R_{13}, \ldots, R_{1m}, R_{21}, \ldots, R_{2m}, \ldots, R_{m1}, \ldots, R_{mm}$, the number of ways in which we can observe $n_{ij}$ points in each region $R_{ij}$, $i, j \in I_m$, is given by
\begin{align*}
&\binom{l}{n_{11}} P^l_{n_{11}} \binom{l-n_{11}}{n_{12}} P^l_{n_{12}} \cdots \binom{l - \sum_{j=1}^{m-1} n_{1j}}{n_{1m}} P^l_{n_{1m}} \\
&\cdot \binom{l}{n_{21}} P^{l-n_{11}}_{n_{21}} \binom{l-n_{21}}{n_{22}} P^{l-n_{12}}_{n_{22}} \cdots \binom{l - \sum_{j=1}^{m-1} n_{2j}}{n_{2m}} P^{l-n_{1m}}_{n_{2m}} \\
&\;\vdots \\
&\cdot \binom{l}{n_{m1}} P^{l - \sum_{i=1}^{m-1} n_{i1}}_{n_{m1}} \binom{l-n_{m1}}{n_{m2}} P^{l - \sum_{i=1}^{m-1} n_{i2}}_{n_{m2}} \cdots \binom{l - \sum_{j=1}^{m-1} n_{mj}}{n_{mm}} P^{l - \sum_{i=1}^{m-1} n_{im}}_{n_{mm}}
= \frac{(l!)^{2m}}{\prod_{i,j \in I_m} n_{ij}!}.
\end{align*}
Finally, we divide by $n!$, the total number of ways in which the $n$ points corresponding to the modified sample of a sample of size $n$ from the product copula can appear in $I^2 = [0,1]^2$; therefore
\[
P\left( \bigcap_{i,j \in I_m} \{N_{ij} = n_{ij}\} \right) = \frac{(l!)^{2m}}{n! \prod_{i,j \in I_m} n_{ij}!}. \qquad \blacksquare
\]

3.2 Case: dimension three when m divides n

In a similar way to the previous section, we present the distribution and moments of the random variables associated with the counts in the boxes generated by the uniform partition of size $m$, with $m \geq 2$, of $I^3 = [0,1]^3$, induced by the modified sample from the product copula.

Theorem 3.18 Let $n$ and $m$ be integers greater than or equal to two, where $m$ divides $n$, and $l = n/m$. Let $N_{111}$ be the random variable that indicates the number of observations falling in $R_{111} = [0, l/n]^3$ when we consider a sample of size $n$ from the product copula. Then
\[
E(N_{111}/n) = \frac{1}{m^3}, \qquad
E((N_{111}/n)^2) = \frac{l^3(l-1)^3}{n^4(n-1)^2} + \frac{1}{nm^3}
\]
and
\[
\mathrm{Var}(N_{111}/n) = \frac{l^3(l-1)^3}{n^4(n-1)^2} + \frac{1}{nm^3} - \left(\frac{1}{m^3}\right)^2.
\]

Remark 3.19 Figure 3.10 shows the box related to the count of observations associated with the random variable $N_{111}$, and Figure 3.11 shows the region $R = (l/n, 1] \times [0,1] \times [0,1]$; in this region there exist $((n-l)!)^2$ ways in which the transformed sample data can appear, given that $l$ points are observed in the region $R_1 = [0, l/n] \times [0,1] \times [0,1]$.

[Figure 3.10: The box R_{111}. Figure 3.11: The box R_{111} and the region R. Figure 3.12: The region R_1 with the counts k, x_1 and x_2.]

Remark 3.20 Figure 3.12 shows the region $R_1 = [0, l/n] \times [0,1] \times [0,1]$. This region is divided into three parts, $R_{111} = [0, l/n]^3$, $R_{112} = [0, l/n] \times [0, l/n] \times (l/n, 1]$ and $R_{122} = [0, l/n] \times (l/n, 1] \times [0,1]$, and the numbers of observations in these regions are denoted, respectively, by $k$, $x_1$ and $x_2$. The number of ways in which we can select the value of $k$ is $\binom{l}{k} P^l_k P^l_k$; the number of ways in which we can select the value of $x_1$, given the value $k$, is $\binom{l-k}{x_1} P^{l-k}_{x_1} P^{n-l}_{x_1}$; and the number of possibilities for the value $x_2$, given the values $k$ and $x_1$, is $\binom{l-k-x_1}{x_2} P^{n-l}_{x_2} P^{n-k-x_1}_{x_2}$.

Definition 3.21 Let $k \in \{0, 1, \ldots, l\}$. We define
\begin{align*}
G^{n,l}_k &= \sum_{x_1+x_2=l-k} \binom{l-k}{x_1} P^{l-k}_{x_1} P^{n-l}_{x_1} \binom{l-k-x_1}{x_2} P^{n-l}_{x_2} P^{n-k-x_1}_{x_2} \\
&= \sum_{x_1=0}^{l-k} \binom{l-k}{x_1} P^{l-k}_{x_1} P^{n-l}_{x_1} \binom{l-k-x_1}{l-k-x_1} P^{n-l}_{l-k-x_1} P^{n-k-x_1}_{l-k-x_1} \\
&= \sum_{x_1=0}^{l-k} \frac{(l-k)!}{x_1!(l-k-x_1)!} \frac{(l-k)!}{(l-k-x_1)!} \frac{(n-l)!}{(n-l-x_1)!} \frac{(n-l)!}{(n-2l+k+x_1)!} \frac{(n-k-x_1)!}{(n-l)!} \\
&= ((l-k)!)^2 \sum_{x_1=0}^{l-k} \binom{n-k-x_1}{l-k-x_1} \binom{n-l}{l-k-x_1} \binom{n-l}{x_1}.
\end{align*}
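The two expressions for $G^{n,l}_k$ in Definition 3.21 can be checked against each other numerically; a small sketch we add (Python's math.comb and math.perm play the roles of the binomials and of $P^n_k$):

```python
from math import comb, factorial, perm

def G_sum(n, l, k):
    """First expression in Definition 3.21, summing over x1 + x2 = l - k."""
    total = 0
    for x1 in range(l - k + 1):
        x2 = l - k - x1
        total += (comb(l - k, x1) * perm(l - k, x1) * perm(n - l, x1)
                  * perm(n - l, x2) * perm(n - k - x1, x2))
    return total

def G_closed(n, l, k):
    """Last expression: ((l-k)!)^2 times a sum of products of binomials."""
    s = sum(comb(n - k - x1, l - k - x1) * comb(n - l, l - k - x1) * comb(n - l, x1)
            for x1 in range(l - k + 1))
    return factorial(l - k) ** 2 * s

n, m = 12, 3
l = n // m
print(all(G_sum(n, l, k) == G_closed(n, l, k) for k in range(l + 1)))
# The identity used in Remark 3.22 below:
print(sum(comb(l, k) * perm(l, k) ** 2 * G_sum(n, l, k) for k in range(l + 1))
      == perm(n, l) ** 2)
```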
Remark 3.22 We have
\[
\sum_{k=0}^{l} \binom{l}{k} P^l_k P^l_k G^{n,l}_k = (P^n_l)^2,
\]
that is,
\[
\sum_{k=0}^{l} \binom{l}{k} (l!)^2 \left[ \sum_{x_1=0}^{l-k} \binom{n-k-x_1}{l-k-x_1} \binom{n-l}{l-k-x_1} \binom{n-l}{x_1} \right] ((n-l)!)^2 = (n!)^2.
\]

Remark 3.23 Let $k \in \{0, 1, \ldots, l\}$. We have
\[
P\{N_{111} = k\} = \binom{l}{k} P^l_k P^l_k G^{n,l}_k \frac{((n-l)!)^2}{(n!)^2}.
\]

Using Remark 3.23, we can prove Theorem 3.18:
\begin{align*}
E(N_{111}) &= \sum_{k=0}^{l} k \binom{l}{k} (l!)^2 \left[ \sum_{x_1=0}^{l-k} \binom{n-k-x_1}{l-k-x_1} \binom{n-l}{l-k-x_1} \binom{n-l}{x_1} \right] \frac{((n-l)!)^2}{(n!)^2} \\
&= l^3 \sum_{u=0}^{l-1} \binom{l-1}{u} ((l-1)!)^2 \left[ \sum_{x_1=0}^{(l-1)-u} \binom{(n-1)-u-x_1}{(l-1)-u-x_1} \binom{(n-1)-(l-1)}{(l-1)-u-x_1} \binom{(n-1)-(l-1)}{x_1} \right] \frac{(((n-1)-(l-1))!)^2}{(n!)^2} \quad (u = k-1) \\
&= \frac{l^3 ((n-1)!)^2}{(n!)^2} \quad \text{(Remark 3.22, with $n-1$ and $l-1$ in place of $n$ and $l$)} \\
&= \frac{n^3}{m^3} \frac{1}{n^2} = \frac{n}{m^3},
\end{align*}
and
\[
E\left( \frac{N_{111}}{n} \right) = \frac{1}{m^3}.
\]
The second moment is obtained from the following equalities:
\begin{align*}
E(N_{111}^2) &= \sum_{k=0}^{l} k^2 \binom{l}{k} (l!)^2 \left[ \sum_{x_1=0}^{l-k} \binom{n-k-x_1}{l-k-x_1} \binom{n-l}{l-k-x_1} \binom{n-l}{x_1} \right] \frac{((n-l)!)^2}{(n!)^2} \\
&= \sum_{k=0}^{l} [k(k-1) + k] \binom{l}{k} (l!)^2 \left[ \sum_{x_1=0}^{l-k} \binom{n-k-x_1}{l-k-x_1} \binom{n-l}{l-k-x_1} \binom{n-l}{x_1} \right] \frac{((n-l)!)^2}{(n!)^2} \\
&= (l(l-1))^3 \sum_{u=0}^{l-2} \binom{l-2}{u} ((l-2)!)^2 \left[ \sum_{x_1=0}^{(l-2)-u} \binom{(n-2)-u-x_1}{(l-2)-u-x_1} \binom{(n-2)-(l-2)}{(l-2)-u-x_1} \binom{(n-2)-(l-2)}{x_1} \right] \frac{(((n-2)-(l-2))!)^2}{(n!)^2} + \frac{n}{m^3} \quad (u = k-2) \\
&= \frac{(l(l-1))^3 ((n-2)!)^2}{(n!)^2} + \frac{n}{m^3} \quad \text{(Remark 3.22)} \\
&= \frac{l^3(l-1)^3}{n^2(n-1)^2} + \frac{n}{m^3},
\end{align*}
and
\[
E\left( \left( \frac{N_{111}}{n} \right)^2 \right) = \frac{l^3(l-1)^3}{n^4(n-1)^2} + \frac{1}{nm^3}.
\]
Finally,
\[
\mathrm{Var}(N_{111}/n) = E((N_{111}/n)^2) - (E(N_{111}/n))^2
= \frac{l^3(l-1)^3}{n^4(n-1)^2} + \frac{1}{nm^3} - \left(\frac{1}{m^3}\right)^2. \qquad \blacksquare
\]

Remark 3.24 To evaluate the covariance, we consider three cases, according to the position of the boxes $R_{112} = [0, l/n] \times [0, l/n] \times (l/n, 2l/n]$, $R_{122} = [0, l/n] \times (l/n, 2l/n] \times (l/n, 2l/n]$ and $R_{222} = (l/n, 2l/n]^3$ relative to the box $R_{111}$.

[Figure 3.13: Case 1, boxes R_{111} and R_{112}. Figure 3.14: Case 2, boxes R_{111} and R_{222}. Figure 3.15: Case 3, boxes R_{111} and R_{122}.]
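Theorem 3.18 can also be probed by simulation before working out these covariances: for the product copula in dimension three, the modified sample is described by two independent uniform permutations giving the second and third coordinates of each point. A Monte Carlo sketch we add:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, reps = 24, 3, 200_000
l = n // m

counts = np.empty(reps)
for t in range(reps):
    sig = rng.permutation(n)[:l]     # sigma-ranks of the points with X <= l/n
    tau = rng.permutation(n)[:l]     # tau-ranks of the same points
    counts[t] = np.sum((sig < l) & (tau < l))   # N111

var_thm = (l**3 * (l - 1) ** 3 / (n**4 * (n - 1) ** 2)
           + 1 / (n * m**3) - 1 / m**6)
print(counts.mean() / n, 1 / m**3)   # mean of N111/n vs Theorem 3.18
print(counts.var() / n**2, var_thm)  # variance of N111/n vs Theorem 3.18
```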
Lemma 3.25 (Case 1) Let $n$ and $m$ be integers greater than or equal to two, where $m$ divides $n$, and $l = n/m$. Let $k_1$ be the number of observations falling in $R_{111} = [0, l/n]^3$, $k_2$ the number of observations falling in $R_{112} = [0, l/n] \times [0, l/n] \times (l/n, 2l/n]$, $x_1$ the number of observations falling in $R_{113} = [0, l/n] \times [0, l/n] \times (2l/n, 1]$ and $x_2$ the number of observations falling in $R^* = [0, l/n] \times (l/n, 1] \times [0,1]$, when we consider a sample of size $n$ from the product copula. Then
\[
\sum_{k_1+k_2+x_1+x_2=l}
\binom{l}{k_1} P^l_{k_1} P^l_{k_1} \binom{l-k_1}{k_2} P^{l-k_1}_{k_2} P^l_{k_2} \binom{l-k_1-k_2}{x_1} P^{l-k_1-k_2}_{x_1} P^{n-2l}_{x_1}
\cdot \binom{l-k_1-k_2-x_1}{x_2} P^{n-l}_{x_2} P^{n-x_1-k_1-k_2}_{x_2}
= (P^n_l)^2.
\]

Proof: We note that
\[
\binom{l-k_1-k_2-x_1}{x_2} = 1
\]
and
\begin{align*}
&\sum_{k_1+k_2+x_1+x_2=l}
\binom{l}{k_1} P^l_{k_1} P^l_{k_1} \binom{l-k_1}{k_2} P^{l-k_1}_{k_2} P^l_{k_2} \binom{l-k_1-k_2}{x_1} P^{l-k_1-k_2}_{x_1} P^{n-2l}_{x_1} P^{n-l}_{x_2} P^{n-x_1-k_1-k_2}_{x_2} \\
&\qquad = (l!)^2 \sum_{k_1+k_2+x_1+x_2=l} \binom{l}{k_1} \binom{l}{k_2} \binom{n-2l}{x_1} \binom{n-l}{x_2} \binom{n-x_1-k_1-k_2}{x_2} \\
&\qquad = (l!)^2 \sum_{k_1+k_2+x_1+x_2=l} \binom{l}{k_1} \binom{l}{k_2} \binom{n-2l}{x_1} \binom{n-l}{x_2} \binom{n-(l-x_2)}{x_2} \\
&\qquad = (l!)^2 \sum_{x_2=0}^{l} \binom{n-l}{x_2} \binom{n-(l-x_2)}{x_2} \sum_{k_1+k_2+x_1=l-x_2} \binom{l}{k_1} \binom{l}{k_2} \binom{n-2l}{x_1} \\
&\qquad = (l!)^2 \sum_{x_2=0}^{l} \binom{n-l}{x_2} \binom{n-(l-x_2)}{x_2} \binom{n}{l-x_2}
= (l!)^2 n! \sum_{x_2=0}^{l} \frac{1}{x_2!(n-l-x_2)!} \frac{1}{x_2!(l-x_2)!} \\
&\qquad = \frac{(l!)^2 n!}{l!(n-l)!} \sum_{x_2=0}^{l} \binom{l}{x_2} \binom{n-l}{(n-l)-x_2}
= \frac{(l!)^2 n!}{l!(n-l)!} \binom{n}{n-l} = (P^n_l)^2. \qquad \blacksquare
\end{align*}

Remark 3.26 Let $n$ and $m$ be integers greater than or equal to two, where $m$ divides $n$, and $l = n/m$. Let $N_{111}$ be the random variable indicating the number of observations falling in $R_{111} = [0, l/n]^3$ and let $N_{112}$ be the random variable indicating the number of observations falling in $R_{112} = [0, l/n] \times [0, l/n] \times (l/n, 2l/n]$, when we take a sample of size $n$ from the product copula. Using the proof of Lemma 3.25 we have
\[
P\{N_{111} = k_1, N_{112} = k_2\} = \frac{(l!)^2 ((n-l)!)^2}{(n!)^2}
\sum_{x_2=0}^{l-k_1-k_2} \binom{n-l}{x_2} \binom{n-(l-x_2)}{x_2} \binom{l}{k_1} \binom{l}{k_2} \binom{n-2l}{l-x_2-k_1-k_2}.
\]

Theorem 3.27 With the same hypotheses as in Remark 3.26, we have
\[
\mathrm{Cov}(N_{111}/n, N_{112}/n) = \frac{l^4(l-1)^2}{n^4(n-1)^2} - \left(\frac{1}{m^3}\right)^2.
\]

Proof: From Lemma 3.25 and Remark 3.26 we have
\begin{align*}
E(N_{111} N_{112}) &= \frac{(l!)^2 ((n-l)!)^2}{(n!)^2} \sum_{x_2=0}^{l-2} \binom{n-l}{x_2} \binom{n-(l-x_2)}{x_2}
\sum_{k_1+k_2+x_1=l-x_2} k_1 k_2 \binom{l}{k_1} \binom{l}{k_2} \binom{n-2l}{x_1} \\
&= \frac{l^2 (l!)^2 ((n-l)!)^2}{(n!)^2} \sum_{x_2=0}^{l-2} \binom{n-l}{x_2} \binom{n-(l-x_2)}{x_2}
\sum_{(k_1-1)+(k_2-1)+x_1=(l-2)-x_2} \binom{l-1}{k_1-1} \binom{l-1}{k_2-1} \binom{n-2l}{x_1} \\
&= \frac{l^2 (l!)^2 ((n-l)!)^2}{(n!)^2} \sum_{x_2=0}^{l-2} \binom{n-l}{x_2} \binom{n-(l-x_2)}{x_2} \binom{n-2}{(l-2)-x_2} \\
&= \frac{l^2 (l!)^2 ((n-l)!)^2 (n-2)!}{(n!)^2 (n-l)!(l-2)!} \sum_{x_2=0}^{l-2} \binom{n-l}{x_2} \binom{l-2}{(l-2)-x_2} \\
&= \frac{l^2 (l!)^2 ((n-l)!)^2 (n-2)!}{(n!)^2 (n-l)!(l-2)!} \binom{n-2}{l-2}
= \frac{l^4(l-1)^2}{n^2(n-1)^2},
\end{align*}
and
\[
\mathrm{Cov}(N_{111}/n, N_{112}/n) = E((N_{111}/n)(N_{112}/n)) - E(N_{111}/n)E(N_{112}/n)
= \frac{l^4(l-1)^2}{n^4(n-1)^2} - \left(\frac{1}{m^3}\right)^2. \qquad \blacksquare
\]
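As in dimension two, Theorem 3.27 can be verified exactly for a tiny case: all $(n!)^2$ pairs of permutations are equally likely, so a brute-force enumeration (our own sketch, $n = 4$, $m = 2$) recovers the covariance:

```python
import numpy as np
from itertools import permutations, product

n, m = 4, 2
l = n // m

vals = []
for sigma, tau in product(permutations(range(1, n + 1)), repeat=2):
    s, t = np.array(sigma), np.array(tau)
    n111 = np.sum((s[:l] <= l) & (t[:l] <= l))   # N111
    n112 = np.sum((s[:l] <= l) & (t[:l] > l))    # N112
    vals.append((n111, n112))
vals = np.array(vals) / n
cov_exact = np.cov(vals.T, bias=True)[0, 1]      # exact over (n!)^2 cases

cov_thm = l**4 * (l - 1) ** 2 / (n**4 * (n - 1) ** 2) - 1 / m**6
print(np.isclose(cov_exact, cov_thm))            # Theorem 3.27
```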
Remark 3.28 (Case 2) Let $n$ and $m$ be integers greater than or equal to two, where $m$ divides $n$, and $l = n/m$. We consider the following regions:
\begin{align*}
&R_{111} = [0, l/n] \times [0, l/n] \times [0, l/n], \quad
R_{112} = [0, l/n] \times [0, l/n] \times (l/n, 2l/n], \quad
R_{113} = [0, l/n] \times [0, l/n] \times (2l/n, 1], \\
&R_{121} = [0, l/n] \times (l/n, 2l/n] \times [0, l/n], \quad
R_{122} = [0, l/n] \times (l/n, 2l/n] \times (l/n, 2l/n], \quad
R_{123} = [0, l/n] \times (l/n, 2l/n] \times (2l/n, 1], \\
&R_{131} = [0, l/n] \times (2l/n, 1] \times [0, l/n], \quad
R_{132} = [0, l/n] \times (2l/n, 1] \times (l/n, 2l/n], \quad
R_{133} = [0, l/n] \times (2l/n, 1] \times (2l/n, 1],
\end{align*}
and let $x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8$ and $x_9$ be, respectively, the numbers of observations in the regions $R_{111}, R_{112}, R_{113}, R_{121}, R_{122}, R_{123}, R_{131}, R_{132}$ and $R_{133}$. We define
\[
\begin{array}{lll}
x_1 + x_2 + x_3 = A_1, & x_4 + x_5 + x_6 = A_2, & x_7 + x_8 + x_9 = A_3, \\
x_1 + x_4 + x_7 = B_1, & x_2 + x_5 + x_8 = B_2, & x_3 + x_6 + x_9 = B_3.
\end{array}
\]
To count the number of ways in which we can select $l$ from the $n$ points in the region $R^* = [0, l/n] \times [0,1] \times [0,1]$, we consider the following.

1. Number of ways to select the first coordinate $X$:
\[
C_X = \binom{l}{x_1} \binom{l-x_1}{x_2} \binom{l-x_1-x_2}{x_3} \cdots \binom{l-x_1-\cdots-x_8}{x_9}
= l!\, \frac{1}{x_1!} \frac{1}{x_2!} \cdots \frac{1}{x_9!}.
\]

2. Number of ways to select the second coordinate $Y$:
\[
C_Y = P^l_{x_1} P^{l-x_1}_{x_2} P^{l-x_1-x_2}_{x_3} \; P^l_{x_4} P^{l-x_4}_{x_5} P^{l-x_4-x_5}_{x_6} \;
P^{n-2l}_{x_7} P^{n-2l-x_7}_{x_8} P^{n-2l-x_7-x_8}_{x_9}
= \frac{l!}{(l-A_1)!} \frac{l!}{(l-A_2)!} \frac{(n-2l)!}{(n-2l-A_3)!}.
\]

3. Number of ways to select the third coordinate $Z$:
\[
C_Z = P^l_{x_1} P^l_{x_2} P^{n-2l}_{x_3} \; P^{l-x_1}_{x_4} P^{l-x_2}_{x_5} P^{n-2l-x_3}_{x_6} \;
P^{l-x_1-x_4}_{x_7} P^{l-x_2-x_5}_{x_8} P^{n-2l-x_3-x_6}_{x_9}.
\]

Lemma 3.29 Using the notation of Remark 3.28 we have
\[
\sum_{A_1+A_2+A_3=l} \; \sum_{x_1+x_2+x_3=A_1} \; \sum_{x_4+x_5+x_6=A_2} \; \sum_{x_7+x_8+x_9=A_3} C_X \cdot C_Y \cdot C_Z = (P^n_l)^2.
\]

Proof: We have the following identities:
\begin{align*}
&\sum_{A_1+A_2+A_3=l} \sum_{x_1+x_2+x_3=A_1} \sum_{x_4+x_5+x_6=A_2} \sum_{x_7+x_8+x_9=A_3} C_X \cdot C_Y \cdot C_Z \\
&\quad = \sum_{A_1+A_2+A_3=l} \sum_{x_1+x_2+x_3=A_1} \sum_{x_4+x_5+x_6=A_2} \sum_{x_7+x_8+x_9=A_3}
l!\, \frac{l!}{(l-A_1)!} \frac{l!}{(l-A_2)!} \frac{(n-2l)!}{(n-2l-A_3)!} \\
&\qquad\qquad \cdot \binom{l}{x_1} \binom{l}{x_2} \binom{n-2l}{x_3} \binom{l-x_1}{x_4} \binom{l-x_2}{x_5} \binom{n-2l-x_3}{x_6} \binom{l-x_1-x_4}{x_7} \binom{l-x_2-x_5}{x_8} \binom{n-2l-x_3-x_6}{x_9} \\
&\quad = \sum_{A_1+A_2+A_3=l} l!\, \frac{l!}{(l-A_1)!} \frac{l!}{(l-A_2)!} \frac{(n-2l)!}{(n-2l-A_3)!} \binom{n}{A_1} \binom{n-A_1}{A_2} \binom{n-A_1-A_2}{A_3} \\
&\quad = \frac{l!\, n!}{(n-l)!} \sum_{A_1+A_2+A_3=l} \binom{l}{A_1} \binom{l}{A_2} \binom{n-2l}{A_3}
= \frac{l!\, n!}{(n-l)!} \binom{n}{l} = (P^n_l)^2. \qquad \blacksquare
\end{align*}

Remark 3.30 Let $n$ and $m$ be integers greater than or equal to two, where $m$ divides $n$, and $l = n/m$. We consider the following regions:
\begin{align*}
&R_{211} = (l/n, 2l/n] \times [0, l/n] \times [0, l/n], \quad
R_{212} = (l/n, 2l/n] \times [0, l/n] \times (l/n, 2l/n], \quad
R_{213} = (l/n, 2l/n] \times [0, l/n] \times (2l/n, 1], \\
&R_{221} = (l/n, 2l/n] \times (l/n, 2l/n] \times [0, l/n], \quad
R_{222} = (l/n, 2l/n] \times (l/n, 2l/n] \times (l/n, 2l/n], \quad
R_{223} = (l/n, 2l/n] \times (l/n, 2l/n] \times (2l/n, 1], \\
&R_{231} = (l/n, 2l/n] \times (2l/n, 1] \times [0, l/n], \quad
R_{232} = (l/n, 2l/n] \times (2l/n, 1] \times (l/n, 2l/n], \quad
R_{233} = (l/n, 2l/n] \times (2l/n, 1] \times (2l/n, 1],
\end{align*}
and let $y_1, y_2, y_3, y_4, y_5, y_6, y_7, y_8$ and $y_9$ be, respectively, the numbers of observations in the regions $R_{211}, R_{212}, R_{213}, R_{221}, R_{222}, R_{223}, R_{231}, R_{232}$ and $R_{233}$. We define
\[
\begin{array}{lll}
y_1 + y_2 + y_3 = \hat{A}_1, & y_4 + y_5 + y_6 = \hat{A}_2, & y_7 + y_8 + y_9 = \hat{A}_3, \\
y_1 + y_4 + y_7 = \hat{B}_1, & y_2 + y_5 + y_8 = \hat{B}_2, & y_3 + y_6 + y_9 = \hat{B}_3.
\end{array}
\]
To count the number of ways in which we can select $l$ from the remaining $n-l$ points in the region $R^{**} = (l/n, 2l/n] \times [0,1] \times [0,1]$, we consider the following.

1. Number of ways to select the first coordinate $X$:
\[
C_X = \binom{l}{y_1} \binom{l-y_1}{y_2} \cdots \binom{l-y_1-\cdots-y_8}{y_9}
= l!\, \frac{1}{y_1!} \frac{1}{y_2!} \cdots \frac{1}{y_9!}.
\]

2. Number of ways to select the second coordinate $Y$ (with the notation of Remark 3.28):
\[
C_Y = P^{l-A_2}_{y_5} P^{l-A_2-y_5}_{y_4} P^{l-A_2-y_4-y_5}_{y_6} \;
P^{l-A_1}_{y_2} P^{l-A_1-y_2}_{y_1} P^{l-A_1-y_2-y_1}_{y_3} \;
P^{n-2l-A_3}_{y_8} P^{n-2l-A_3-y_8}_{y_7} P^{n-2l-A_3-y_8-y_7}_{y_9}.
\]

3. Number of ways to select the third coordinate $Z$ (with the notation of Remark 3.28):
\[
C_Z = P^{l-B_1}_{y_1} P^{l-B_1-y_1}_{y_4} P^{l-B_1-y_1-y_4}_{y_7} \;
P^{l-B_2}_{y_2} P^{l-B_2-y_2}_{y_5} P^{l-B_2-y_2-y_5}_{y_8} \;
P^{n-2l-B_3}_{y_3} P^{n-2l-B_3-y_3}_{y_6} P^{n-2l-B_3-y_3-y_6}_{y_9}
= \frac{(l-B_1)!}{(l-B_1-\hat{B}_1)!} \frac{(l-B_2)!}{(l-B_2-\hat{B}_2)!} \frac{(n-2l-B_3)!}{(n-2l-B_3-\hat{B}_3)!}.
\]

Lemma 3.31 Using the notation of Remark 3.30 and Remark 3.28, we have
\[
\sum_{\hat{B}_1+\hat{B}_2+\hat{B}_3=l} \; \sum_{y_2+y_5+y_8=\hat{B}_2} \; \sum_{y_1+y_4+y_7=\hat{B}_1} \; \sum_{y_3+y_6+y_9=\hat{B}_3} C_X \cdot C_Y \cdot C_Z = (P^{n-l}_l)^2.
\]
(l − B2 − B̂2)! (n − 2l − B3)! (n − 2l − B3 − B̂3)! ( n − l B̂2 ) · ( n − l − B̂2 B̂1 )( n − l − B̂1 − B̂2 B̂3 ) = ∑ B̂1+B̂2+B̂3=l l! (n − l)! (n − 2l)! ( l − B1 B̂1 )( l − B2 B̂2 )( n − 2l − B3 B̂3 ) = l! (n − l)! (n − 2l)! ( n − l l ) = (Pn−l l )2.  Remark 3.32 We use the notation on Remark 3.28 and Remark 3.30, and the results in the proofs of the Lemma 3.29 and the Lemma 3.31. Let n and m be integers greater than or equal to two, where m divides n, and l = n/m. Let N111 be the random variable indicating the number of observations falling in R111 and N222 the random variable indicating the number of observations falling in R222, when we considered a sample of size n from the product copula, then P (N111 = x1,N222 = y5) =         ∑ A1+A2+A3=l ∑ x2+x3=A1−x1 ∑ x4+x5+x6=A2 ∑ x7+x8+x9=A3 l! l! (l − A1)! l! (l − A2)! (n − 2l)! (n − 2l − A3)! ( l x1 )( l x2 )( n − 2l x3 )( l − x1 x4 )( l − x2 x5 )( n − 2l − x3 x6 ) 87 ( l − x1 − x4 x7 )( l − x2 − x5 x8 )( n − 2l − x3 − x6 x9 )] ·          ∑ B̂1+B̂2+B̂3=l ∑ y2+y8=B̂2−y5 l! (l − B1)! (l − B1 − B̂1)! (l − B2)! (l − B2 − B̂2)! (n − 2l − B3)! (n − 2l − B3 − B̂3)! ( l − A2 y5 )( l − A1 y2 ) ( n − 2l − A3 y8 )( n − l − B̂2 B̂1 )( n − l − B̂1 − B̂2 B̂3 )] · [ ((n − 2l)!)2 (n!)2 ] . Remark 3.33 If we use the notation on Remark 3.28 and Remark 3.30, then ∑ B̂1+B̂2+B̂3=l ∑ y2+y8+(y5−1)=B̂2−1 l! (l − B1)! (l − B1 − B̂1)! (l − B2)! (l − B2 − B̂2)! (n − 2l − B3)! (n − 2l − B3 − B̂3)! ( l − A2 − 1 y5 − 1 ) ( l − A1 y2 )( n − 2l − A3 y8 )( n − l − B̂2 B̂1 )( n − l − B̂1 − B̂2 B̂3 ) = ∑ B̂1+B̂2+B̂3=l l! (l − B1)! (l − B1 − B̂1)! (l − B2)! (l − B2 − B̂2)! (n − 2l − B3)! (n − 2l − B3 − B̂3)! ( n − l − 1 B̂2 − 1 )( n − l − B̂2 B̂1 ) ( n − l − B̂1 − B̂2 B̂3 ) = ∑ B̂1+B̂2+B̂3=l l! (l − B1)! (l − B1 − B̂1)! (l − B2)! (l − B2 − B̂2)! (n − 2l − B3)! (n − 2l − B3 − B̂3)! (n − l − 1)! (B̂2 − 1)! 1 B̂1! 1 B̂3! 1 (n − 2l)! = l!(n − l − 1)! (n − 2l)! (l − B2) ∑ B̂1−1+B̂2+B̂3=l−1 ( l − B1 B̂1 )( l − B2 − 1 B̂2 − 1 )( n − 2l − B3 B̂3 ) = l!(n − l − 1)! (n − 2l)! (l − B2) ( n − l − 1 l − 1 ) . Remark 3.34 Using the notation on Remark 3.28 and Remark 3.30, we have ( l x2 )( l − x2 x5 )( l − x2 − x5 x8 ) = l! x2!(l − x2)! (l − x2)! x5!(l − x2 − x5)! (l − x2 − x5)! x8!(l − x2 − x5 − x8)! = l l − x2 (l − 1)! x2!(l − x2 − 1)! l − x2 l − x2 − x5 (l − x2 − 1)! x5!(l − x2 − x5 − 1)! · l − x2 − x5 l − x2 − x5 − x8 (l − x2 − x5 − 1)! x8!(l − x2 − x5 − x8 − 1)! = l l − B2 ( l − 1 x2 )( l − 1 − x2 x5 )( l − 1 − x2 − x5 x8 ) . 88 Theorem 3.35 With the same hypothesis as in Remark 3.32, we have Cov(N111/n,N222/n) = l6 n4(n − 1)2 − ( 1 m3 )2 . Proof: From Remark 3.32, Remark 3.33 and Remark 3.34 we have E (N111N222) =         ∑ A1+A2+A3=l ∑ x1+x2+x3=A1 ∑ x4+x5+x6=A2 ∑ x7+x8+x9=A3 x1y5l! l! (l − A1)! l! (l − A2)! (n − 2l)! (n − 2l − A3)! ( l x1 )( l x2 )( n − 2l x3 )( l − x1 x4 )( l − x2 x5 )( n − 2l − x3 x6 )( l − x1 − x4 x7 )( l − x2 − x5 x8 ) ( n − 2l − x3 − x6 x9 )] ·          ∑ B̂1+B̂2+B̂3=l ∑ y2+y8+y5=B̂2 l! (l − B1)! (l − B1 − B̂1)! (l − B2)! (l − B2 − B̂2)! (n − 2l − B3)! (n − 2l − B3 − B̂3)! ( l − A2 y5 )( l − A1 y2 )( n − 2l − A3 y8 )( n − l − B̂2 B̂1 ) ( n − l − B̂1 − B̂2 B̂3 )] · [ ((n − 2l)!)2 (n!)2 ] =         ∑ A1+A2+A3=l ∑ x1+x2+x3=A1 ∑ x4+x5+x6=A2 ∑ x7+x8+x9=A3 l! l! (l − A1)! l! (l − A2)! (n − 2l)! (n − 2l − A3)! l ( l − 1 x1 − 1 )( l x2 )( n − 2l x3 )( l − x1 x4 )( l − x2 x5 )( n − 2l − x3 x6 )( l − x1 − x4 x7 )( l − x2 − x5 x8 ) ( n − 2l − x3 − x6 x9 )] ·          ∑ B̂1+B̂2+B̂3=l ∑ y2+y8+(y5−1)=B̂2−1 l! (l − B1)! 
(l − B1 − B̂1)! (l − B2)! (l − B2 − B̂2)! (l − A2) (n − 2l − B3)! (n − 2l − B3 − B̂3)! ( l − A2 − 1 y5 − 1 )( l − A1 y2 )( n − 2l − A3 y8 )( n − l − B̂2 B̂1 ) ( n − l − B̂1 − B̂2 B̂3 )] · [ ((n − 2l)!)2 (n!)2 ] = (l!)2l(n − l − 1)! (n − 2l)! ( n − l − 1 l − 1 ) ∑ A1+A2+A3=l ∑ x1+x2+x3=A1 ∑ x4+x5+x6=A2 ∑ x7+x8+x9=A3 · l! (l − A1)! l! (l − A2)! (n − 2l)! (n − 2l − A3)! (l − A2)(l − B2) ( l − 1 x1 − 1 )( l x2 )( n − 2l x3 ) · ( l − x1 x4 )( l − x2 x5 )( n − 2l − x3 x6 )( l − x1 − x4 x7 )( l − x2 − x5 x8 ) 89 · ( n − 2l − x3 − x6 x9 ) [ ((n − 2l)!)2 (n!)2 ] (using Remark 3.33) = (l!)2l3(n − l − 1)! (n − 2l)! ( n − l − 1 l − 1 ) ∑ A1+A2+A3=l ∑ x1+x2+x3=A1 ∑ x4+x5+x6=A2 ∑ x7+x8+x9=A3 · l! (l − A1)! (l − 1)! (l − A2 − 1)! (n − 2l)! (n − 2l − A3)! (l − A2)(l − B2) ( l − 1 x1 − 1 )( l − 1 x2 )( n − 2l x3 ) · ( l − x1 x4 )( l − 1 − x2 x5 )( n − 2l − x3 x6 )( l − x1 − x4 x7 )( l − 1 − x2 − x5 x8 ) · ( n − 2l − x3 − x6 x9 ) [ ((n − 2l)!)2 (n!)2 ] (using Remark 3.34) = (l!)2l3(n − l − 1)! (n − 2l)! ( n − l − 1 l − 1 ) ∑ A1+A2+A3=l l! (l − A1)! (l − 1)! (l − A2 − 1)! (n − 2l)! (n − 2l − A3)! · ( n − 2 A1 − 1 )( n − 1 − A1 A2 )( n − 1 − A1 − A2 A3 ) [ ((n − 2l)!)2 (n!)2 ] = (l!)2l4(n − l − 1)! (n − 2l)! ( n − l − 1 l − 1 ) ∑ A1+A2+A3=l (n − 2)! (n − 1 − l)! ( l − 1 A1 − 1 ) · ( l − 1 A2 )( n − 2l A3 ) [ ((n − 2l)!)2 (n!)2 ] = (l!)2l4(n − l − 1)! (n − 2l)! (n − 2)! (n − 1 − l)! ( n − 2 l − 1 )( n − l − 1 l − 1 ) [ ((n − 2l)!)2 (n!)2 ] = (l!)2l4(n − l − 1)!(n − 2)! (n − 2l)!(n − 1 − l)! (n − 2)! (l − 1)!(n − l − 1)! (n − l − 1)! (l − 1)!(n − 2l)! ((n − 2l)!)2 (n!)2 = l6 n2(n − 1)2 , therefore Cov (N111/n,N222/n) = E ((N111/n)(N222/n)) − E(N111/n)E(N222/n) = l6 n4(n − 1)2 − ( 1 m3 )2 .  Remark 3.36 (Case 3) Using the notation in Remark 3.28 and the results in Lemma 3.29, let n and m be integers greater than or equal to two, where m divides n, and l = n/m. Let N111 be the random variable indicating the number of observations falling in R111 and let N122 be the random 90 variable indicating the number of observations falling in R122, when we consider a sample of size n from the product copula, then P{N111 = x1,N122 = x5} = ∑ A1+A2+A3=l ∑ x2+x3=A1−x1 ∑ x4+x6=A2−x5 ∑ x7+x8+x9=A3 l! l! (l − A1)! l! (l − A2)! (n − 2l)! (n − 2l − A3)! ( l x1 )( l x2 )( n − 2l x3 )( l − x1 x4 )( l − x2 x5 )( n − 2l − x3 x6 ) ( l − x1 − x4 x7 )( l − x2 − x5 x8 )( n − 2l − x3 − x6 x9 ) ((n − l)!)2 (n!)2 . Theorem 3.37 If we consider the same hypothesis of Remark 3.36, then Cov(N111/n,N122/n) = l5(l − 1) n4(n − 1)2 − ( 1 m3 )2 . Proof: From Remark 3.36, we have E(N111N122) = ∑ A1+A2+A3=l ∑ x1+x2+x3=A1 ∑ x4+x5+x6=A2 ∑ x7+x8+x9=A3 l! l! (l − A1)! l! (l − A2)! (n − 2l)! (n − 2l − A3)! ·x1x5 l(l − 1)! x1!(l − x1)! l (l − x2) (l − 1)! x2!(l − x2 − 1)! ( n − 2l x3 )( l − x1 x4 ) (l − x2) · (l − x2 − 1)! x5!(l − x2 − x5)! ( n − 2l − x3 x6 )( l − x1 − x4 x7 )( l − x2 − x5 x8 )( n − 2l − x3 − x6 x9 ) · ((n − l)!)2 (n!)2 = ∑ A1+A2+A3=l ∑ x1+x2+x3=A1 ∑ x4+x5+x6=A2 ∑ x7+x8+x9=A3 l! l! (l − A1)! l! (l − A2)! (n − 2l)! (n − 2l − A3)! ·l2 ((n − l)!)2 (n!)2 [( l − 1 x1 − 1 )( l − 1 x2 )( n − 2l x3 )] [( l − x1 x4 )( l − x2 − 1 x5 − 1 )( n − 2l − x3 x6 )] · [( l − x1 − x4 x7 )( l − x2 − x5 x8 )( n − 2l − x3 − x6 x9 )] = ∑ A1+A2+A3=l l! l! (l − A1)! l! (l − A2)! (n − 2l)! (n − 2l − A3)! l2 ((n − l)!)2 (n!)2 ( n − 2 A1 − 1 ) · ( n − 1 − A1 A2 − 1 )( n − A1 − A2 A3 ) 91 = l!l2((n − l)!)2 (n!)2 ∑ A1+A2+A3=l ( l! (l − A1)! l! (l − A2)! (n − 2l)! (n − 2l − A3)! (n − 2)! (A1 − 1)!(n − 1 − A1)! (n − 1 − A1)! (A2 − 1)!(n − A1 − A2)! (n − A1 − A2)! A3!(n − l)! 
) = l!l2((n − l)!)2 (n!)2 l2(n − 2)! (n − l)! ∑ A1+A2+A3=l ( (l − 1)! (A1 − 1)!(l − A1)! (l − 1)! (A2 − 1)!(l − A2)! (n − 2l)! A3!(n − 2l − A3)! ) = l!l4((n − l)!)2(n − 2)! (n!)2(n − l)! ∑ A1+A2+A3=l ( l − 1 A1 − 1 )( l − 1 A2 − 1 )( n − 2l A3 ) = l!l4((n − l)!)2(n − 2)! (n!)2(n − l)! ( n − 2 l − 2 ) = l!l4((n − l)!)2(n − 2)! (n!)2(n − l)! (n − 2)! (l − 2)!(n − l)! = l5(l − 1) n2(n − 1)2 , therefore Cov(N111/n,N122/n) = l5(l − 1) n4(n − 1)2 − ( 1 m3 )2 .  The following theorem describes the joint probability distribution of the boxes generated for the uniform partition of size m, with m ≥ 2, when m divides n, in the three dimensional case. Theorem 3.38 Let m ≥ 2, n ∈ N, where m divides n, l = n/m and Im = {1, · · ·m}, let Ri jk, with i, j, k ∈ Im, be the uniform partition of size m of I3 = [0, 1]3 and Ni jk the random variables that indicates the number of observations falling in Ri jk, respectively, for all i, j, k ∈ Im, when we consider a sample of size n from the product copula; let ni jk, with i, j, k ∈ Im, be zero or positive integer satisfying the following restrictions m ∑ j,k=1 ni jk = l (for all i ∈ Im), m ∑ i,k=1 ni jk = l (for all j ∈ Im), m ∑ i, j=1 ni jk = l (for all k ∈ Im), 92 then P          ⋂ i, j,k∈Im {Ni jk = ni jk}          = (l!)3m (n!)2 ∏ i, j,k∈Im ni jk! . Proof: Similarly to the proof of the Theorem 3.17, we will use the counting methodology provided on the Remark 3.3, using permutations for the counting of the third coordinate Z. The order count is R111,R112, · · · ,R11m, · · · ,R1m1,R1m2, · · · ,R1mm R211,R212, · · · ,R21m, · · · ,R2m1,R2m2, · · · ,R2mm ... Rm11,Rm12, · · · ,Rm1m, · · · ,Rmm1,Rmm2, · · · ,Rmmm. The number of possibilities for l observations in the region R̂1 = [0, l/n]× [0, 1]× [0, 1] is given by N̂1 = [( l n111 ) Pl n111 Pl n111 ( l − n111 n112 ) Pl−n111 n112 Pl n112 · · · ( l −∑m−1 k=1 n11k n11m ) P l−∑m−1 k=1 n11k n11m Pl n11m ] [( l −∑m k=1 n11k n121 ) Pl n121 Pl−n111 n121 ( l − n121 − ∑m k=1 n11k n122 ) Pl−n121 n122 Pl−n112 n122 · · · ( l −∑m−1 k=1 n12k − ∑m k=1 n11k n12m ) P l−∑m−1 k=1 n12k n12m Pl−n11m n12m ] · · · [( l −∑m−1 j=1 ∑m k=1 n1 jk n1m1 ) Pl n1m1 P l−∑m−1 j=1 n1 j1 n1m1 ( l − n1m1 − ∑m−1 j=1 ∑m k=1 n1 jk n1m2 ) Pl−n1m1 n1m2 P l−∑m−1 j=1 n1 j2 n1m2 · · · ( l −∑m−1 k=1 n1mk − ∑m−1 j=1 ∑m k=1 n1 jk n1mm ) P l−∑m−1 k=1 n1mk n1mm P l−∑m−1 j=1 n1 jk n1mm ] , and the number of possibilities for l observations in the region R̂s = (l(s−1)/n, ls/n]×[0, 1]×[0, 1], for all s ∈ 2, · · · ,m, is given by N̂s = [( l ns11 ) P l−∑s−1 i=1 ∑m k=1 ni1k ns11 P l−∑s−1 i=1 ∑m j=1 ni j1 ns11 ( l − ns11 ns12 ) P l−ns11− ∑s−1 i=1 ∑m k=1 ni1k ns12 P l−∑s−1 i=1 ∑m j=1 ni j2 ns12 · · · ( l −∑m−1 k=1 ns1k ns1m ) P l−∑m−1 k=1 ns1k− ∑s−1 i=1 ∑m k=1 ni1k ns1m P l−∑s−1 i=1 ∑m j=1 ni jm ns1m ] [( l −∑m k=1 ns1k ns21 ) P l−∑s−1 i=1 ∑m k=1 ni2k ns21 P l−ns11− ∑s−1 i=1 ∑m j=1 ni j1 ns21 ( l − ns21 − ∑m k=1 ns1k ns22 ) P l−ns21− ∑s−1 i=1 ∑m k=1 ni2k ns22 P l−ns12− ∑s−1 i=1 ∑m j=1 ni j2 ns22 · · · 93 ( l −∑m k=1 ns1k − ∑m−1 k=1 ns2k ns2m ) P l−∑m−1 k=1 ns2k− ∑s−1 i=1 ∑m k=1 ni2k ns2m P l−ns1m− ∑s−1 i=1 ∑m j=1 ni jm ns2m ] · · · [( l −∑m j,k=1 ns jk nsm1 ) P l−∑s−1 i=1 ∑m k=1 nimk nsm1 P l−∑m−1 j=1 ns j1− ∑s−1 i=1 ∑m j=1 ni j1 nsm1 ( l − nsm1 − ∑m j,k=1 ns jk nsm2 ) P l−nsm1− ∑s−1 i=1 ∑m k=1 nimk nsm2 P l−∑m−1 j=1 ns j2− ∑s−1 i=1 ∑m j=1 ni j2 nsm2 · · · ( l −∑m−1 k=1 nsm j − ∑m j,k=1 ns jk nsmm ) P l−∑m−1 k=1 nsmk− ∑s−1 i=1 ∑m k=1 nimk nsmm P l−∑m−1 j=1 ns jm− ∑s−1 i=1 ∑m j=1 ni jm nsmm ] we have that N̂1 · m ∏ s=2 N̂s = (l!)3m ∏ i, j,k∈Im ni jk! . 
We divided between (n!)2, the total number of possibilities in which we can see n points in I3 = [0, 1]3, corresponding to the modified sample of the a sample of size n from the product copula, therefore P          ⋂ i, j,k∈Im {Ni jk = ni jk}          = (l!)3m (n!)2 ∏ i, j,k∈Im ni jk! .  3.3 Summary of results and generalizations We present a summary of the precedent results and the generalizations of the previous expressions for the moments in dimension greater than three. 1. Dimension two. Let m ≥ 2, n ∈ N, where m divides n, and l = n/m, let N11, N12 and N22 be the random variables that indicates the number of observations on the boxes R11 = [0, l/n]2, R12 = [0, l/n]×(l/n, 2l/n] and R22 = (l/n, 2l/n]2, respectively, when we consider the modified sample of a sample of size n from the product copula. (a) First Moment. E(N11/n) = 1 m2 (b) Second Moment. E((N11/n)2) = l2(l − 1)2 n3(n − 1) + 1 nm2 94 (c) Variance. Var(N11/n) = l2(l − 1)2 n3(n − 1) + 1 nm2 − 1 m4 (d) Covariance I. Cov(N11/n,N12/n) = l3(l − 1) n3(n − 1) − 1 m4 (e) Covariance II. Cov(N11/n,N22/n) = l4 n3(n − 1) − 1 m4 2. Dimension three. Let m ≥ 2, n ∈ N, where m divides n, and l = n/m, let N111, N112, N122 and N222 be the random variables that indicates the number of observations on the boxes R111 = [0, l/n]3, R112 = [0, l/n]× [0, l/n]× (l/n, 2l/n], R122 = [0, l/n]× (l/n, 2l/n]× (l/n, 2l/n] and R222 = (l/n, 2l/n]3, respectively, when we consider the modified sample of a sample of size n from the product copula. (a) First Moment. E(N111/n) = 1 m3 (b) Second Moment. E((N111/n)2) = l3(l − 1)3 n4(n − 1)2 + 1 nm3 (c) Variance. Var(N111/n) = l3(l − 1)3 n4(n − 1)2 + 1 nm3 − 1 m6 (d) Covariance I. Cov(N111/n,N112/n) = l4(l − 1)2 n4(n − 1)2 − 1 m6 (e) Covariance II. Cov(N111/n,N122/n) = l5(l − 1) n4(n − 1)2 − 1 m6 (f) Covariance III. Cov(N111/n,N222/n) = l6 n4(n − 1)2 − 1 m6 95 3. General case. Let m ≥ 2, n ∈ N, where m divides n, l = n/m, d ≥ 2, 1 = (1, · · · , 1) (d times) and 2 = (2, · · · , 2) (d times), let N1 be the random variable that indicates the number of observations on the box R1 = [0, l/n]d, when we consider the modified sample of a sample of size n from the product copula. (a) First Moment. E(N1/n) = 1 md (b) Second Moment. E((N1/n)2) = ld(l − 1)d nd+1(n − 1)d−1 + 1 nmd (c) Variance. Var(N1/n) = ld(l − 1)d nd+1(n − 1)d−1 + 1 nmd − 1 m2d (d) Covariance I. Cov(N1/n,N2/n) = l2d nd+1(n − 1)d−1 − 1 m2d where N2 is the random variable that indicates the number of observations on the box R2 = (l/n, 2l/n]d. (e) Covariance II. Let j = ( j1, · · · , jd) ∈ {1, 2}d, j , 1, 2, if we define R j = Î1 × · · · × Îd where Îi = { [0, l/n] i f ji = 1 (l/n, 2l/n] i f ji = 2 for i ∈ {1, · · · , d}, then Cov(N1/n,N j/n) = l2d−k(l − 1)k nd+1(n − 1)d−1 − 1 m2d . where N j is the random variable that indicates the number of observations on the box R j and k the number of coordinates equals to one in j. (f) Joint distribution. Let d ≥ 2, and let Ni1···id be the random variable that indicates the number of observations in the box Ri1···id = 〈(i1 − 1)/m, i1/m]×· · ·×〈(id − 1)/m, id/m], where 96 i1, · · · , id ∈ Im and the notation “〈” indicates “(” if ik − 1 > 0 and “[” if ik − 1 = 0, for all k ∈ Id, when we consider the modified sample of size n from the product copula Πd. Then P        ⋂ i1,···,id∈Im {Ni1···id = ni1···id}        = (l!)dm (n!)d−1 ∏ i1,···,id∈Im ni1...id ! . 
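The two-dimensional formulas in this summary are easy to check by simulation. The following minimal sketch (Python with NumPy; it is not part of the thesis and all names in it are ours) draws repeated samples of size n from the product copula, builds the modified sample of normalized ranks, counts the observations falling in R11 and R22 for m = 4, and compares the empirical first moment, variance and Covariance II with the expressions in items 1(a), 1(c) and 1(e) above.

import numpy as np

rng = np.random.default_rng(0)
n, m, reps = 120, 4, 20000                 # m divides n, so l = n/m = 30
l = n // m
N11, N22 = np.empty(reps), np.empty(reps)

for r in range(reps):
    u = rng.random((n, 2))                              # sample from the product copula
    v = (u.argsort(axis=0).argsort(axis=0) + 1) / n     # modified sample: ranks / n
    box1 = v <= l / n                                   # coordinates in [0, l/n]
    box2 = (v > l / n) & (v <= 2 * l / n)               # coordinates in (l/n, 2l/n]
    N11[r] = np.sum(box1[:, 0] & box1[:, 1]) / n        # N11 / n
    N22[r] = np.sum(box2[:, 0] & box2[:, 1]) / n        # N22 / n

print(N11.mean(), 1 / m**2)                             # (a) first moment: 1/m^2
print(N11.var(),
      l**2 * (l - 1)**2 / (n**3 * (n - 1)) + 1 / (n * m**2) - 1 / m**4)   # (c) variance
print(np.cov(N11, N22)[0, 1],
      l**4 / (n**3 * (n - 1)) - 1 / m**4)               # (e) Covariance II

With these values (n = 120, m = 4) the three printed pairs agree up to Monte Carlo error, which gives a useful sanity check on the counting arguments of this chapter.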
3.4 Case: m does not divide n In the next pages we show that we can obtain similar results to what was done previously, now without the hypothesis m divides n. We exemplify with the two dimensional case. The following results establish the properties of random variables associated with the observations in the boxes from a not uniform partition of order m, that is, a partition of the box I2 = [0, 1]2 generated by dividing the two intervals I = [0, 1] in the Cartesian product into m parts, where m does not divide n. Definition 3.39 Let n ≥ 2, 0 < l1 < l2 < n and 0 < j1 < j2 ≤ n be integers with j2 − j1 = l2. We define the following regions in the unit square I2 = [0, 1]2 R1 j = [0, l1/n] × ( j1/n, j2/n ] R1 = ([0, l1/n] × [0, 1])\R1 j R = (l1/n, 1] × [0, 1] . Remark 3.40 In the following results we consider than in the unit square I2 = [0, 1]2 there exists n points corresponding to the range statistics from a sample of size n from the product copula. Lemma 3.41 Let k1 be the number of points in R1 j and x1 the number of points in R1, we have that ∑ k1+x1=l1 ( l1 k1 ) P l2 k1 ( l1 − k1 x1 ) Pn−l2 x1 = Pn l1 . Proof: We can observe that ( l1 − k1 x1 ) = ( x1 x1 ) = 1 then ∑ k1+x1=l1 ( l1 k1 ) P l2 k1 ( l1 − k1 x1 ) Pn−l2 x1 = ∑ k1+x1=l1 ( l1 k1 ) P l2 k1 Pn−l2 x1 97 = ∑ k1+x1=l1 l1! k1!(l1 − k1)! l2! (l2 − k1)! (n − l2)! (n − l2 − x1)! = ∑ k1+x1=l1 l1! k1!x1! l2! (l2 − k1)! (n − l2)! (n − l2 − x1)! = l1! ∑ k1+x1=l1 ( l2 k1 )( n − l2 x1 ) = l1! ( n l1 ) (Vandermonde’s identity) = Pn l1 .  Lemma 3.42 Let N1 j be the random variable that indicates the number of observations falling in R1 j, then the following equality holds P{N1 j = k1} = l1!(n − l1)! n! ( l2 k1 )( n − l2 l1 − k1 ) = ( l2 k1 )( n−l2 l1−k1 ) ( n−l1 l1 ) this probability corresponds to a hypergeometric distribution with parameters: n the population size, l1 the class size and l2 the sample size. Proof: We obtained the equality considering ∑ k1+x1=l1 ( l1 k1 ) P l2 k1 ( l1 − k1 x1 ) Pn−l2 x1 = l1! ∑ k1+x1=l1 ( l2 k1 )( n − l2 x1 ) from the proof of Lemma 3.42, this equality indicates the number of ways in which we can have k1+ x1 points in the region R1 j∪R1; setting the value k1, this result is multiplied by (n− l1)! (number of ways in which we can have n− l1 points in the region R discarding l1 possibilities corresponding to the coordinates occupied by the observations in the region R1 j ∪ R1) and divided by n!, the total number of ways that we can observe n points in I2 = [0, 1]2.  Theorem 3.43 Let N1 j be the random variable that indicates the number of observations falling in R1 j when we consider a sample of size n from the product copula, then E(N1 j/n) = l1l2 n2 , E((N1 j/n)2) = l2(l2 − 1)l1(l1 − 1) n3(n − 1) + l1l2 n3 98 and Var(N1 j/n) = l2(l2 − 1)l1(l1 − 1) n3(n − 1) + l1l2 n3 − ( l1l2 n2 )2 . Proof: E(N1 j) = l1 ∑ k1=0 k1 l1!(n − l1)! n! ( l2 k1 )( n − l2 l1 − k1 ) = l1!l2(n − l1)! n! l1 ∑ k1=1 ( l2 − 1 k1 − 1 )( n − l2 l1 − k1 ) = l1!l2(n − l1)! n! l1−1 ∑ u1=0 ( l2 − 1 u1 )( n − l2 l1 − 1 − u1 ) (u1 = k1 − 1) = l1!l2(n − l1)! n! ( n − 1 l1 − 1 ) = l1!l2(n − l1)! n! (n − 1)! (l1 − 1)!(n − l1)! = l1l2 n and E(N1 j/n) = l1l2 n2 . We have, E(N2 1 j) = l1 ∑ k1=0 k2 1 l1!(n − l1)! n! ( l2 k1 )( n − l2 l1 − k1 ) = l1 ∑ k1=0 (k1(k1 − 1) + k1) l1!(n − l1)! n! ( l2 k1 )( n − l2 l1 − k1 ) = l1 ∑ k1=2 k1(k1 − 1) l1!(n − l1)! n! ( l2 k1 )( n − l2 l1 − k1 ) + l1 ∑ k1=0 k1 l1!(n − l1)! n! ( l2 k1 )( n − l2 l1 − k1 ) = l1 ∑ k1=2 l1!(n − l1)! n! l2(l2 − 1) ( l2 − 2 k1 − 2 )( n − l2 l1 − k1 ) + l1l2 n = l1!(n − l1)! n! 
l2(l2 − 1) l1−2 ∑ u1=0 ( l2 − 2 u1 )( n − l2 (l1 − 2) − u1 ) + l1l2 n (u1 = k1 − 2) 99 = l1!(n − l1)! n! l2(l2 − 1) ( n − 2 l1 − 2 ) + l1l2 n = l2(l2 − 1)l1(l1 − 1) n(n − 1) + l1l2 n . Therefore E((N1 j/n)2) = l2(l2 − 1)l1(l1 − 1) n3(n − 1) + l1l2 n3 and Var(Ni j/n) = E((Ni j/n)2) − (E(Ni j/n))2 = l2(l2 − 1)l1(l1 − 1) n3(n − 1) + l1l2 n3 − ( l1l2 n2 )2 .  In a similar way to the case where m divides n, the calculus of the covariances is divided in two cases. The following result corresponds to the first one. Definition 3.44 Let n ≥ 2, 0 < l1, l2, l3 < n and 0 < j1 < j2 < j3 ≤ n be integers with j2 − j1 = l2 and j3 − j2 = l3. We define the following regions in the unit square I2 = [0, 1]2 R1 j1 = [0, l1/n] × ( j1/n, j2/n ] R1 j2 = [0, l1/n] × ( j2/n, j3/n ] R1 = ([0, l1/n] × [0, 1])\(R1 j1 ∪ R1 j2) R = (l1/n, 1] × [0, 1] . Lemma 3.45 Let k1 be the number of points in R1 j1 , k2 the number of points in R1 j2 and x1 the number of points in R1, then ∑ k1+k2+x1=l1 ( l1 k1 ) P l2 k1 ( l1 − k1 k2 ) P l3 k2 ( l1 − k1 − k2 x1 ) Pn−l2−l3 x1 = Pn l1 . Proof: We have that ∑ k1+k2+x1=l1 ( l1 k1 ) P l2 k1 ( l1 − k1 k2 ) P l3 k2 ( l1 − k1 − k2 x1 ) Pn−l2−l3 x1 = ∑ k1+k2+x1=l1 l1! k1!k2!x1! P l2 k1 P l3 k2 Pn−l2−l3 x1 = ∑ k1+k2+x1=l1 l1! ( l2 k1 )( l3 k2 )( n − l2 − l3 x1 ) 100 = l1! ( n l1 ) = Pn l1 .  Remark 3.46 Let N1 j1 be the random variable that indicates the number of observations falling in R1 j1 and let N1 j2 be the random variable that indicates the number of observations falling in R1 j2 then P{N1 j1 = k1,N1 j2 = k2} = l1!(n − l1)! n! ( l2 k1 )( l3 k2 )( n − l2 − l3 l1 − k1 − k2 ) . Lemma 3.47 Whit the same hypothesis of Remark 3.46, we have Cov(Ni j1/n,Ni j2/n) = l1(l1 − 1)l2l3 n2(n − 1) − ( l1l2 n2 ) ( l1l3 n2 ) . Proof: E(N1 j1 N1 j2) = l1 ∑ k1+k2=0 k1k2 l1!(n − l1)! n! ( l2 k1 )( l3 k2 )( n − l2 − l3 l1 − k1 − k2 ) = l1 ∑ k1+k2=0 l1!(n − l1)! n! l2l3 ( l2 − 1 k1 − 1 )( l3 − 1 k2 − 1 )( n − l2 − l3 l1 − k1 − k2 ) = l1!(n − l1)!l2l3 n! l1 ∑ k1+k2=0 ( l2 − 1 k1 − 1 )( l3 − 1 k2 − 1 )( n − l2 − l3 l1 − k1 − k2 ) = l1!(n − l1)!l2l3 n! ( n − 2 l1 − 2 ) = l1(l1 − 1)l2l3 n(n − 1) , and E((N1 j1/n)(N1 j2/n)) = l1(l1 − 1)l2l3 n3(n − 1) . Therefore Cov(N1 j1/n,N1 j2/n) = E((N1 j1/n)(N1 j2/n)) − E(N1 j1/n)E(N1 j2/n) = l1(l1 − 1)l2l3 n3(n − 1) − ( l1l2 n2 ) ( l1l3 n2 ) . 101  We use the next results for the calculation of the covariances for the second case. Definition 3.48 Let n ≥ 2, 0 < l1, l2, l3, l4 < m, 0 < j1 < j2 < j3 ≤ n and 0 = i1 < i2 < i3 ≤ n be integers with i2 − i1 = l1, i3 − i2 = l3, j2 − j1 = l2 and j3 − j2 = l4. We define the following regions in the unit square I2 = [0, 1]2 R1 j1 = [0, l1/n] × ( j1/n, j2/n ] R1 j2 = [0, l1/n] × ( j2/n, j3/n ] R2 j1 = (i2/n, i3/n] × ( j1/n, j2/n ] R2 j2 = (i2/n, i3/n] × ( j2/n, j3/n ] R1 = ([0, l1/n] × [0, 1])\(R1 j1 ∪ R1 j2) R2 = ((i2/n, i3/n] × [0, 1])\(R2 j1 ∪ R2 j2). Lemma 3.49 Let k1 be the number of points in R1 j1 , x1 the number of points in R1 j2 , x2 the number of points in R1, y1 the number of points in R2 j1 , k2 the number of points in R2 j2 and y2 the number of points in R2, then ∑ k1+x1+x2=l1 ∑ y1+k2+y2=l3 ( l1 k1 )( l1 − k1 x1 )( l1 − k1 − x1 x2 ) P l2 k1 Pl4 x1 Pn−l2−l4 x2 · ( l3 k2 )( l3 − k2 y1 )( l3 − k2 − y1 y2 ) Pl2−k1 y1 P l4−x1 k2 Pn−l2−l4−x2 y2 = Pn l1 P n−l1 l3 . Proof: ∑ k1+x1+x2=l1 ∑ y1+k2+y2=l3 ( l1 k1 )( l1 − k1 x1 )( l1 − k1 − x1 x2 ) P l2 k1 Pl4 x1 Pn−l2−l4 x2 · ( l3 k2 )( l3 − k2 y1 )( l3 − k2 − y1 y2 ) Pl2−k1 y1 P l4−x1 k2 Pn−l2−l4−x2 y2 = ∑ k1+x1+x2=l1 ∑ y1+k2+y2=l3 ( l1! k1!x1!x2! P l2 k1 Pl4 x1 Pn−l2−l4 x2 ) ( l3! 
k2!y1!y2! Pl2−k1 y1 P l4−x1 k2 Pn−l2−l4−x2 y2 ) = ∑ k1+x1+x2=l1 ∑ y1+k2+y2=l3 l1!l3! ( l2 k1 )( l4 x1 )( n − l2 − l4 x2 )( l2 − k1 y1 )( l4 − x1 k2 )( n − l2 − l4 − x2 y2 ) = ∑ k1+x1+x2=l1 l1!l3! ( l2 k1 )( l4 x1 )( n − l2 − l4 x2 )( n − l1 l3 ) 102 = l1!l3! ( n l1 )( n − l1 l3 ) = Pn l1 P n−l1 l3 .  Remark 3.50 Let N11 j1 be the random variable indicating the number of observations falling in R11 j1 and Ni2 j2 , the random variable indicating the number of observations falling in Ri2 j2 , then P{Ni1 j1 = k1,Ni2 j2 = k2} = (n − l1 − l3)! n! ∑ x1+x2=l1−k1 ∑ y1+y2=l3−k2 l1!l3! ( l2 k1 )( l4 x1 )( n − l2 − l4 x2 ) · ( l2 − k1 y1 )( l4 − x1 k2 )( n − l2 − l4 − x2 y2 ) . Lemma 3.51 With the same hypothesis of Remark 3.50, we have Cov(Ni1 j1/n,Ni2 j2/n) = l1l2l3l4 n3(n − 1) − ( l1l2 n2 ) ( l3l4 n2 ) . Proof: E(Ni1 j1 Ni2 j2) = (n − l1 − l3)! n! ∑ k1+x1+x2=l1 ∑ y1+k2+y2=l3 k1k2l1!l3! ( l2 k1 )( l4 x1 )( n − l2 − l4 x2 ) · ( l2 − k1 y1 )( l4 − x1 k2 )( n − l2 − l4 − x2 y2 ) = (n − l1 − l3)! n! ∑ k1+x1+x2=l1 ∑ y1+k2+y2=l3 l1!l3!l2 ( l2 − 1 k1 − 1 ) l4 l4 − x1 ( l4 − 1 x1 )( n − l2 − l4 x2 ) · ( l2 − k1 y1 ) (l4 − x1) ( l4 − x1 − 1 k2 − 1 )( n − l2 − l4 − x2 y2 ) = (n − l1 − l3)! n! ∑ k1+x1+x2=l1 ∑ y1+k2+y2=l3 l1!l3!l2l4 ( l2 − 1 k1 − 1 )( l4 − 1 x1 )( n − l2 − l4 x2 ) · ( l2 − k1 y1 )( l4 − x1 − 1 k2 − 1 )( n − l2 − l4 − x2 y2 ) = (n − l1 − l3)! n! ∑ k1+x1+x2=l1 l1!l3!l2l4 ( l2 − 1 k1 − 1 )( l4 − 1 x1 )( n − l2 − l4 x2 )( n − 1 − l1 l3 − 1 ) = (n − l1 − l3)! n! l1!l3!l2l4 ( n − 2 l1 − 1 )( n − 1 − l1 l3 − 1 ) 103 = (n − l1 − l3)! n! l1!l3!l2l4 (n − 2)! (l1 − 1)!(n − l1 − 1)! (n − 1 − l1)! (l3 − 1)!(n − l1 − l3)! = l1l2l3l4 n(n − 1) , and E((Ni1 j1/n)(Ni2 j2/n)) = l1l2l3l4 n3(n − 1) . Finally Cov(Ni1 j1/n,Ni2 j2/n) = E((Ni1 j1/n)(Ni2 j2/n)) − E(Ni1 j1/n)E(Ni2 j2/n) = l1l2l3l4 n3(n − 1) − ( l1l2 n2 ) ( l3l4 n2 ) .  Finally, the next theorem describes the joint probability distribution of the boxes generated by the not uniform partition of size m, with m ≥ 2, of I2 = [0, 1]2. Theorem 3.52 Let m ≥ 2, n ∈ N, where m not necessarily divides n, and Ik = {1, · · · k}, k ∈ N, we consider {t0, t1, · · · , tm} ⊂ In with 0 = t0 < t1 < · · · < tm = n. We define li = ti − ti−1, for all i ∈ Im and the partition, not necessarily uniform, of size m of I2 = [0, 1]2 by Ri j = 〈 ti−1 n , ti n ] × 〈 t j−1 n , t j n ] for all i, j ∈ Im, the notation “〈” indicates “(” if tk−1 > 0 and “[” if tk−1 = 0, for all k ∈ Im. Let Ni j be the random variables that indicates the number of observations falling in Ri j, respec- tively, for all i, j ∈ Im, when we consider a sample of size n from the product copula; let ni j, with i, j ∈ Im, be zero or a positive integer that satisfies the following restrictions m ∑ j=1 ni j = li (for all i ∈ Im), m ∑ i=1 ni j = l j (for all j ∈ Im), and m ∑ k=1 lk = n then P        ⋂ i, j∈Im {Ni j = ni j}        = ∏m i=1(li!) 2 n! ∏ i, j∈Im ni j! . 104 Proof: We use the counting methodology provided on the Remark 3.3 for the regions Ri j, i, j ∈ Im, in the following order R11, · · · ,R1m,R21, · · · ,R2m,Rm1, · · · ,Rmm. The number of ways in which we can observe ni j points in each region Ri j, i, j ∈ Im, is given by ( l1 n11 ) Pl1 n11 ( l1 − n11 n12 ) Pl2 n12 · · · ( l1 − ∑m−1 j=1 n1 j n1m ) Plm n1m · ( l2 n21 ) Pl1−n11 n21 ( l2 − n21 n22 ) Pl2−n12 n22 · · · ( l2 − ∑m−1 j=1 n2 j n2m ) Plm−n1m n2m ... · ( lm nm1 ) P l1− ∑m−1 i=1 ni1 nm1 ( lm − nm1 nm2 ) P l2− ∑m−1 i=1 ni2 nm2 · · · ( lm − ∑m−1 j=1 nm j nmm ) P lm− ∑m−1 i=1 nim nmm = ∏m i=1(li!) 2 ∏ i, j∈Im ni j! 
, and dividing by n!, the total number of ways in which we can observe n points in I2 = [0, 1]2, corresponding to the modified sample of the original sample of size n from the product copula, we therefore obtain

P( ⋂i,j∈Im {Nij = nij} ) = ∏i=1..m (li!)2 / ( n! ∏i,j∈Im nij! ). □

4 Convergence results of the sample copula

In this last chapter we study the weak convergence of the sample copula process √n(Cn m − C); in this way we obtain, for the sample copula, a weak convergence theorem similar to the one for the empirical copula. The chapter is divided into three parts. In the first part we review the results of the theory of empirical processes, together with the weak convergence to a Gaussian process of the empirical copula process √n(Cn − C) given in [13]; using this convergence we obtain the corresponding convergence of the sample copula process. In the second part, in order to be able to apply the results given in the first part, we show that the sample copula can be represented as a linear functional evaluated at the empirical copula; under this representation we have Hadamard differentiability and we can then apply the delta method. In the third part we perform several simulations of the sample copula process at a given point to analyze the properties of the convergence to the Gaussian process with a given variance-covariance structure.

4.1 Weak convergence of empirical process

The weak convergence of an empirical process is studied extensively in Billingsley's book [2] in the case of processes with parameter space I = [0, 1]. The extensions of these results, that is, processes with parameter space equal to Id = [0, 1]d, d ≥ 1, can be found in [38]. We begin this section with some of the main definitions and results of [38].

Definition 4.1 We define the unit cube in Rd by Ed = [0, 1] × · · · × [0, 1] (d times); for t ∈ Ed, with t = (t1, · · · , td), we consider |t| = max{|ti| : i = 1, · · · , d}. We define P = {ρ = (ρ1, · · · , ρd) | ρi = 0 or ρi = 1, i = 1, · · · , d} as the set of the 2d vertices of Ed.

Definition 4.2 Let t ∈ Ed and ρ ∈ P. We define the quadrants Q(ρ, t) and Q̂(ρ, t) in Ed with vertex t as follows: Q(ρ, t) = I(ρ1, t1) × · · · × I(ρd, td), where

I(r, s) = [0, s) if r = 0,
I(r, s) = (s, 1] if r = 1,

and Q̂(ρ, t) = Î(ρ1, t1) × · · · × Î(ρd, td), where

Î(r, s) = [0, s) if r = 0 and s < 1,
Î(r, s) = [0, 1] if r = 0 and s = 1,
Î(r, s) = ∅ if r = 1 and s = 1,
Î(r, s) = [s, 1] if r = 1 and s < 1.

Example 4.3 To illustrate Definition 4.2, consider d = 3, so that E3 = [0, 1] × [0, 1] × [0, 1], and let t ∈ E3 with t = (0.5, 0.8, 0) and ρ ∈ P with ρ = (0, 1, 1). We have Q(ρ, t) = I(0, 0.5) × I(1, 0.8) × I(1, 0) with I(0, 0.5) = [0, 0.5), I(1, 0.8) = (0.8, 1] and I(1, 0) = (0, 1]; therefore Q(ρ, t) = [0, 0.5) × (0.8, 1] × (0, 1]. On the other hand, Q̂(ρ, t) = Î(0, 0.5) × Î(1, 0.8) × Î(1, 0) with Î(0, 0.5) = [0, 0.5), Î(1, 0.8) = [0.8, 1] and Î(1, 0) = [0, 1]; therefore Q̂(ρ, t) = [0, 0.5) × [0.8, 1] × [0, 1]. We note that Q̂ is the closure, on the left side, of the intervals that define Q.

Remark 4.4 The quadrants Q(ρ, t) and Q̂(ρ, t), called quadrants of continuity, satisfy the following properties:

1. Q(ρ, t) ⊂ Q̂(ρ, t) ⊂ Q̄(ρ, t) (the notation Ā indicates the closure of the set A).
2. Q(ρ, t) = ∅ if and only if Q̂(ρ, t) = ∅.
3. Q̂(ρ, t) ∩ Q̂(ρ′, t) = ∅ if ρ ≠ ρ′.
4. ⋃ρ∈P Q̂(ρ, t) = Ed (t ∈ Ed).
5. For each t ∈ Ed there exists exactly one vertex ρ = ρ(t), denoted by σ, such that t ∈ Q̂(σ, t).
6. int(Q(σ, t)) ≠ ∅ and Q̂(σ, t) = Q̄(σ, t).
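As a quick computational illustration of Definition 4.2 (a sketch of our own, not part of the original text), the intervals I(r, s) and Î(r, s) can be encoded as membership tests, after which Q(ρ, t) and Q̂(ρ, t) are coordinatewise products; the last two lines reproduce Example 4.3.

def interval(r, s):
    # I(r, s) from Definition 4.2: [0, s) if r = 0 and (s, 1] if r = 1
    return (lambda x: 0 <= x < s) if r == 0 else (lambda x: s < x <= 1)

def interval_hat(r, s):
    # the left-closed variant I_hat(r, s), with the special cases at s = 1
    if r == 0:
        return (lambda x: 0 <= x <= 1) if s == 1 else (lambda x: 0 <= x < s)
    return (lambda x: False) if s == 1 else (lambda x: s <= x <= 1)

def in_quadrant(rho, t, x, closed=False):
    # x lies in Q(rho, t) (or in Q_hat(rho, t) when closed=True) iff every
    # coordinate x_i lies in the corresponding one-dimensional interval
    one_dim = interval_hat if closed else interval
    return all(one_dim(r, s)(xi) for r, s, xi in zip(rho, t, x))

t, rho = (0.5, 0.8, 0.0), (0, 1, 1)                        # Example 4.3
print(in_quadrant(rho, t, (0.2, 0.8, 0.0)))                # False: not in Q(rho, t)
print(in_quadrant(rho, t, (0.2, 0.8, 0.0), closed=True))   # True: in Q_hat(rho, t)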
Definition 4.5 Let f : Ed −→ R be a function. If for t ∈ Ed and ρ ∈ P with Q(ρ, t) ≠ ∅, and for every sequence {tn}n≥1 ⊂ Q(ρ, t) with tn −→ t, the sequence { f (tn)}n≥1 converges, then the unique limit is denoted by f (t + 0ρ) and is called the ρ-limit or quadrant limit.

Definition 4.6 The space D[0, 1]d is defined as the set of all functions f : Ed −→ R for which the ρ-limit of f at t, with t ∈ Ed, exists for all ρ ∈ P such that Q(ρ, t) ≠ ∅, and which are continuous in the following sense: f (t) = f (t + 0σ).

Definition 4.7 We denote by Λ the set of all increasing, continuous and surjective functions λ : [0, 1] −→ [0, 1]. For λ̂ = (λ1, · · · , λd) ∈ Λd = Λ × · · · × Λ (d times) and t = (t1, · · · , td) ∈ Ed we define λ̂(t) = (λ1(t1), · · · , λd(td)).

Definition 4.8 For µ ∈ Λ we define

‖µ‖ = sup{ | log( (µ(u) − µ(v)) / (u − v) ) | : u, v ∈ [0, 1], u ≠ v }

and we define the metric d0 in D[0, 1]d by

d0( f , g) = inf{ ε > 0 | there exists λ̂ ∈ Λd with ‖λi‖ ≤ ε, i ∈ Id, and supt∈Ed | f (t) − g(λ̂(t))| ≤ ε }

for f , g ∈ D[0, 1]d.

Remark 4.9 The metric space (D[0, 1]d, d0) is a Polish space.

Since the empirical distribution functions have discontinuities, they are elements of the metric space (D[0, 1]d, d0). The space C[0, 1]d, that is, the space of real continuous functions with domain Id = [0, 1]d, is a subset of D[0, 1]d, and in this case, from [2] and [38], we can establish the equivalence between the metric d0 given in Definition 4.8 and the supremum metric given by

dsup( f , g) = supt∈[0,1]d | f (t) − g(t)|

for all f , g ∈ C[0, 1]d.

Among the advantages of using the sample copula rather than the empirical copula is that the sample copula is a continuous function; that is, for d ≥ 2, the d-sample copula is an element of both metric spaces C[0, 1]d and D[0, 1]d, while the empirical copula in dimension d, being a function with discontinuities of the first kind, is only an element of the space D[0, 1]d.

The next results, given in [38], are the basis for the study of the weak convergence of the empirical copula process in the independence case.

Remark 4.10 We consider Uj = (Uj1, · · · , Ujd), j ∈ N, independent and identically distributed random vectors, where Uji is a uniform random variable on (0, 1), for j ∈ N and i ∈ {1, · · · , d}. For each j ∈ N the distribution function F of Uj satisfies the Lipschitz condition |F(t) − F(t′)| ≤ K|t − t′| with a constant K and t, t′ ∈ [0, 1]d.

Definition 4.11 Let n ∈ N; we define the random variables Yn in D[0, 1]d by

Yn(t) = (1/√n) ∑j=1..n ∏i=1..d ( 1[0,ti](Uji) − ti ).

Theorem 4.12 Let r ≥ 1 and let t1, · · · , tr ∈ Ed. Under the assumption of independence of the components of the random vectors Uj, j ∈ N, we have

(Yn(t1), . . . , Yn(tr)) D−→ N(0̂, γY(t1, · · · , tr))

with 0̂ the zero vector in Rr and

γY(t1, · · · , tr) = ( ∏i=1..d ( (tνi ∧ tµi) − tνi tµi ) )ν,µ=1,···,r.
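Theorem 4.12 is easy to visualize numerically. The sketch below (our own code, with illustrative parameter values) simulates the pair (Yn(t1), Yn(t2)) under independence and compares its empirical covariance matrix with the limiting matrix γY(t1, t2); by the theorem the two should be close for large n.

import numpy as np

rng = np.random.default_rng(1)
n, d, reps = 2000, 2, 5000
t1, t2 = np.array([0.3, 0.7]), np.array([0.5, 0.4])

Y = np.empty((reps, 2))
for r in range(reps):
    U = rng.random((n, d))                                         # U_j with independent coordinates
    Y[r, 0] = np.prod((U <= t1) - t1, axis=1).sum() / np.sqrt(n)   # Y_n(t1)
    Y[r, 1] = np.prod((U <= t2) - t2, axis=1).sum() / np.sqrt(n)   # Y_n(t2)

gamma = lambda s, t: np.prod(np.minimum(s, t) - s * t)             # entries of gamma_Y
print(np.cov(Y.T))                                                 # empirical covariance matrix
print(gamma(t1, t1), gamma(t1, t2), gamma(t2, t2))                 # limiting values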
The next definitions and results provide the principles needed to formulate the convergence of empirical processes using the Hadamard derivative and the delta method. These methodologies allow us to study the convergence of an empirical process by means of a first-order Taylor approximation applied to a linear functional, together with the convergence in probability of the residual of the approximation, which follows from Slutsky's theorem. The delta method for empirical processes with one-dimensional parameter space is shown in [47], and the generalization to empirical processes with higher-dimensional parameter space can be found in [49].

Definition 4.13 A topological vector space V over a field K is a vector space for which addition and scalar multiplication are continuous operations, that is, if {xn}n≥1, {yn}n≥1 ⊂ V, x, y ∈ V and {cn}n≥1 ⊂ K, c ∈ K with xn −→ x, yn −→ y and cn −→ c, then xn + yn −→ x + y and cnxn −→ cx.

Remark 4.14 In the following results we use some concepts from von Mises calculus, like Hadamard differentiability in topological vector spaces, and we consider the hypothesis that the space D[0, 1]d is a topological vector space. We study the convergence of the sample copula from the convergence results of the empirical copula. However, the convergence of the sample copula can be placed in the space C[0, 1]d, because the sample copula is a continuous function.

Definition 4.15 Let D and E be metrizable topological vector spaces. A map ϕ : Dϕ ⊂ D −→ E is called Hadamard differentiable at θ ∈ Dϕ if there is a continuous linear map ϕ′θ : D −→ E such that

(ϕ(θ + tnhn) − ϕ(θ)) / tn −→ ϕ′θ(h) as n −→ ∞,

for all converging sequences tn −→ 0 and hn −→ h such that θ + tnhn ∈ Dϕ, for all n ∈ N. We say that ϕ is Hadamard differentiable tangentially to a set D0 ⊂ D if we additionally require that every hn −→ h has h ∈ D0.

Definition 4.16 Let (M, ρ) be a given separable pseudometric space and let (Ω, A, P) be a probability space. A stochastic process {G(t, ω) : t ∈ M, ω ∈ Ω} is called separable if there exist a null set N ∈ A and a countable subset G ⊂ M such that, for all ω ∉ N and t ∈ M, there exists a sequence tn ∈ G with tn −→ t and G(tn, ω) −→ G(t, ω).

Theorem 4.17 Let (M, ρ) be a given separable pseudometric space and let (Ω, A, P) be a probability space. If there is a set A ∈ A with probability zero such that for ω ∉ A, G(t, ω) is continuous in t, then {G(t, ω) : t ∈ M, ω ∈ Ω} is separable, and G ⊂ M can be taken as any countable dense subset of M.

Theorem 4.18 (Delta Method) Let D and E be metrizable topological vector spaces, and let ϕ : Dϕ ⊂ D −→ E be Hadamard differentiable at θ tangentially to D0. Let Xn : Ωn −→ Dϕ be maps with rn(Xn − θn) D−→ X for some sequence of constants rn −→ ∞, where X is separable and takes its values in D0, and θn −→ θ; then rn(ϕ(Xn) − ϕ(θn)) D−→ ϕ′θ(X).

The following theorem is given in [13]; in this result the convergence of the empirical process associated with the empirical copula to a Gaussian process is studied. This theorem is the main tool used in the next section to obtain the convergence of the process associated with the sample copula; also, from this result we can extend Theorem 4.12 to get weak convergence without the hypothesis of independence, with some restrictions.

Theorem 4.19 Let H be a bivariate distribution function with continuous marginal distribution functions and associated copula C with continuous partial derivatives. Then the empirical copula process Zn = √n(Cn − C), with Cn the empirical copula, converges weakly to the centered Gaussian process GC in l∞([0, 1]2), the set of all uniformly bounded functions from I2 = [0, 1]2 to R. The limiting Gaussian process can be written as

GC(u, v) = BC(u, v) − ∂1C(u, v)BC(u, 1) − ∂2C(u, v)BC(1, v),

where BC is the Brownian sheet on I2 = [0, 1]2 with covariance function

E(BC(u, v) · BC(u′, v′)) = C(u ∧ u′, v ∧ v′) − C(u, v)C(u′, v′).
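At a single point, Theorem 4.19 can be checked by simulation. The following sketch (ours; it takes C = Π, for which ∂1C(u, v) = v and ∂2C(u, v) = u, so that Var(GC(u, v)) reduces to (u − u²)(v − v²), as computed explicitly in Remark 4.28 below) simulates Zn(u, v) and compares its mean and variance with the Gaussian limit.

import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(2)
n, reps, u, v = 5000, 4000, 0.3, 0.6

Z = np.empty(reps)
for r in range(reps):
    X = rng.random((n, 2))                      # independent pair, so C = Pi
    Ru = rankdata(X[:, 0]) / n                  # normalized ranks of each coordinate
    Rv = rankdata(X[:, 1]) / n
    Cn = np.mean((Ru <= u) & (Rv <= v))         # empirical copula at (u, v)
    Z[r] = np.sqrt(n) * (Cn - u * v)            # empirical copula process Z_n(u, v)

print(Z.mean(), Z.var())                        # approximately 0 and (u - u^2)(v - v^2)
print((u - u**2) * (v - v**2))                  # = 0.21 * 0.24 = 0.0504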
4.2 Weak convergence of sample copula process In this section the sample copula is represented as a linear functional evaluated in the empirical copula, in order to apply the results described in the previous section to get the weak convergence of the process associated with the sample copula. In this way, the weak convergence theorem for the empirical copula can be extended for the sample copula. We begin this section with a simple result which allows us the study of subcopulas with restricted domains, because later we will consider the empirical copula with a restricted domain in the des- cription of the sample copula. Lemma 4.20 Let C′ : S 1×S 2 −→ [0, 1] be a subcopula, let Ŝ 1 ⊂ S 1 and Ŝ 2 ⊂ S 2 such as 0, 1 ∈ Ŝ 1 and 0, 1 ∈ Ŝ 2, then Ĉ = C′|Ŝ 1×Ŝ 2 , with Ĉ(u, v) = C(u, v) for all (u, v) ∈ Ŝ 1 × Ŝ 2 ⊂ S 1 × S 2, is a subcopula. Proof: We verified that Ĉ satisfies the subcopulas properties, 1. Dom(Ĉ) = Ŝ 1 × Ŝ 2 ⊂ [0, 1]2 and 0, 1 ∈ Ŝ 1 ∩ Ŝ 2. 111 2. Let u ∈ Ŝ 1 and v ∈ Ŝ 2, then Ĉ(0, v) = C′(0, v) = 0 Ĉ(u, 0) = C′(u, 0) = 0. Let a1, a2 ∈ Ŝ 1 and b1, b2 ∈ Ŝ 2, with a1 ≤ a2 and b1 ≤ b2, then VolĈ([a1, a2] × [b1, b2]) = Ĉ(a2, b2) − Ĉ(a1, b2) − Ĉ(a2, b1) + Ĉ(a1, b1) = C′(a2, b2) −C′(a1, b2) −C′(a2, b1) +C′(a1, b1) = VolC′([a1, a2] × [b1, b2]) ≥ 0. 3. Let u ∈ Ŝ 1 and v ∈ Ŝ 2, then Ĉ(u, 1) = C′(u, 1) = u Ĉ(1, v) = C′(1, v) = v. Therefore Ĉ is a subcopula. Remark 4.21 In the following results we use the Nelsen’s notation [37]: For (a, b) ∈ I2 = [0, 1]2 and a subcopula C′ with domain S 1 × S 2, let a1 and a2 be, respectively, the greatest and least elements of S 1 that satisfy a1 ≤ a ≤ a2; and let b1 and b2 be, respectively, the greatest and least elements of S 2 that satisfy b1 ≤ b ≤ b2. If a ∈ S 1, then a1 = a = a2, and if b ∈ S 2, then b1 = b = b2. Let λ1(a, b) = λ1 = { (a − a1)/(a2 − a1) if a1 < a2 1 if a1 = a2 and µ1(a, b) = µ1 = { (b − b1)/(b2 − b1) if b1 < b2 1 if b1 = b2. If we define C(a, b) = (1 − λ1)(1 − µ1)C′(a1, b1) + (1 − λ1)µ1C ′(a1, b2) + λ1(1 − µ1)C′(a2, b1) + λ1µ1C ′(a2, b2) then C is a copula that extends the subcopula C′. Also, we considered Cn and Cn m (m ≥ 2), respectively, the empirical copula and the sample copula, build from the modified sample of a sample of size n from a copula C. 112 In the following results the sample copula is represented like a linear functional evaluated in the empirical copula and we proof that this functional is Hadamard differentiable. These properties are required in order to use the delta method for the study of weak convergence of the process associated to the sample copula. Lemma 4.22 Let m ≥ 2 and ϕm : D[0, 1]2 −→ l∞([0, 1]2), with ϕm(H) = m ∑ j=1 m ∑ i=1 [ 1( i−1 m , i m ]× ( j−1 m , j m ](1 − λ1)(1 − µ1)H ( i − 1 m , j − 1 m ) + (1 − λ1)µ1H ( i − 1 m , j m ) + λ1(1 − µ1)H ( i m , j − 1 m ) + λ1µ1H ( i m , j m ) ] then ϕm(Cn) = Cn m. Proof: We consider (a, b) ∈ I2 = [0, 1]2 with (a, b) ∈ ((i − 1)/m, i/m] × (( j − 1)/m, j/m ] and let λ be the Lebesgue’s measure in B([0, 1]2). We define C(a, b) = (1 − λ1)(1 − µ1)Ĉn ( i − 1 m , j − 1 m ) + (1 − λ1)µ1Ĉn ( i − 1 m , j m ) +λ1(1 − µ1)Ĉn ( i m , j − 1 m ) + λ1µ1Ĉn ( i m , j m ) where λ1 = λ1(a, b) = (a−(i−1)/m)/(i/m−(i−1)/m), µ1 = µ1(a, b) = (b−( j−1)/m)/( j/m−( j−1)/m) and Ĉn is the subcopula obtained from the empirical copula Cn restricted to a domain consisting of points of the form (i/m, j/m), with i, j ∈ {1, · · · ,m}, from the Lemma 4.20, Ĉ is a subcopula and therefore C is a copula. 
We will denote Ĉn by Cn and we calculate with this definition the C-volumen of the rectangle ((i − 1)/m, a] × (( j − 1)/m, b ] , VolC (( i − 1 m , a ] × ( j − 1 m , b ]) = C(a, b) −C ( a, j − 1 m ) −C ( i − 1 m , b ) +C ( i − 1 m , j − 1 m ) , where C ( a, j − 1 m ) = (1 − λ1)Cn ( i − 1 m , j − 1 m ) + λ1Cn ( i m , j − 1 m ) , C ( i − 1 m , b ) = (1 − µ1)Cn ( i − 1 m , j − 1 m ) + µ1Cn ( i − 1 m , j m ) , 113 and C ( i − 1 m , j − 1 m ) = Cn ( i − 1 m , j − 1 m ) , we have that C(a, b) −C ( a, j−1 m ) −C ( i−1 m , b ) +C ( i−1 m , j−1 m ) = [ (1 − λ1)(1 − µ1) − (1 − λ1) − (1 − µ1) + 1 ] Cn ( i−1 m , j−1 m ) + [ (1 − λ1)µ1 − µ1 ] Cn ( i−1 m , j m ) + [ λ1(1 − µ1) − λ1 ] Cn ( i m , j−1 m ) + λ1µ1Cn ( i m , j m ) = λ1µ1 [ Cn ( i m , j m ) −Cn ( i−1 m , j m ) −Cn ( i m , j−1 m ) +Cn ( i−1 m , j−1 m )] . Therefore VolC (( i − 1 m , a ] × ( j − 1 m , b ]) = λ1µ1VolCn (( i − 1 m , i m ] × ( j − 1 m , j m ]) = λ1µ1si j = λ (( i−1 m , a ] × ( j−1 m , b ]) λ (( i−1 m , i m ] × ( j−1 m , j m ]) si j = VolCn m (( i − 1 m , a ] × ( j − 1 m , b ]) . Now, we calculate the C-volume of the box ((i − 1)/m, i/m] × (( j − 1)/m, b], VolC (( i − 1 m , i m ] × ( j − 1 m , b ]) = C ( i m , b ) −C ( i − 1 m , b ) −C ( i m , j − 1 m ) +C ( i − 1 m , j − 1 m ) where C ( i m , b ) =        j m − b j m − j−1 m        Cn ( i m , j − 1 m ) +        b − j−1 m j m − j−1 m        Cn ( i m , j m ) , C ( i − 1 m , b ) =        j m − b j m − j−1 m        Cn ( i − 1 m , j − 1 m ) +        b − j−1 m j m − j−1 m        Cn ( i − 1 m , j m ) , then VolC (( i − 1 m , i m ] × ( j − 1 m , b ]) = C ( i m , b ) −C ( i − 1 m , b ) −C ( i m , j − 1 m ) +C ( i − 1 m , j − 1 m ) 114 =               j m − b j m − j−1 m        − 1        Cn ( i m , j − 1 m ) +        j−1 m − b j m − j−1 m        Cn ( i m , j m ) +               b − j m j m − j−1 m        + 1        Cn ( i − 1 m , j − 1 m ) −        j−1 m − b j m − j−1 m        Cn ( i − 1 m , j m ) =        b − j−1 m j m − j−1 m        [ Cn ( i m , j m ) −Cn ( i m , j − 1 m ) −Cn ( i − 1 m , j m ) +Cn ( i − 1 m , j − 1 m )] =        b − j−1 m j m − j−1 m              i m − i−1 m i m − i−1 m       VCn (( i − 1 m , i m ] × ( j − 1 m , j m ]) = λ (( i−1 m , i m ] × ( j−1 m , b ]) λ (( i−1 m , i m ] × ( j−1 m , j m ]) si j = VolCn m (( i − 1 m , i m ] × ( j − 1 m , b ]) . 
Finally, we calculate the C-volume of the box ((i − 1)/m, a] × (( j − 1)/m, j/m], VolC (( i − 1 m , a ] × ( j − 1 m , j m ]) = C ( a, j m ) −C ( i − 1 m , j m ) −C ( a, j − 1 m ) +C ( i − 1 m , j − 1 m ) where C ( a, j m ) =       i m − a i m − i−1 m       Cn ( i − 1 m , j m ) +       a − i−1 m i m − i−1 m       Cn ( i m , j m ) , C ( a, j − 1 m ) =       i m − a i m − i−1 m       Cn ( i − 1 m , j − 1 m ) +       a − i−1 m i m − i−1 m       Cn ( i m , j − 1 m ) , then VolC (( i − 1 m , a ] × ( j − 1 m , j m ]) = C ( a, j m ) −C ( i − 1 m , j m ) −C ( a, j − 1 m ) +C ( i − 1 m , j − 1 m ) =             i m − a i m − i−1 m       − 1       Cn ( i − 1 m , j m ) +       a − i−1 m i m − i−1 m       Cn ( i m , j m ) +             a − i m i m − i−1 m       + 1       Cn ( i − 1 m , j − 1 m ) −       a − i−1 m i m − i−1 m       Cn ( i m , j − 1 m ) 115 =       a − i−1 m i m − i−1 m       [ Cn ( i m , j m ) −Cn ( i m , j − 1 m ) −Cn ( i − 1 m , j m ) +Cn ( i − 1 m , j − 1 m )] =       a − i−1 m i m − i−1 m              j m − i−1 m j m − j−1 m        VCn (( i − 1 m , i m ] × ( j − 1 m , j m ]) = λ (( i−1 m , a ] × ( j−1 m , j m ]) λ (( i−1 m , i m ] × ( j−1 m , j m ]) si j = VolCn m (( i − 1 m , a ] × ( j − 1 m , b ]) . Because any box can be written as the union of boxes considered in the previous cases we can conclude that ϕm(Cn) = Cn m.  Lemma 4.23 Let m ≥ 2 and ϕm : D[0, 1]2 −→ l∞([0, 1]2), with ϕm(H) = m ∑ j=1 m ∑ i=1 [ 1( i−1 m , i m ]× ( j−1 m , j m ](1 − λ1)(1 − µ1)H ( i − 1 m , j − 1 m ) + (1 − λ1)µ1H ( i − 1 m , j m ) + λ1(1 − µ1)H ( i m , j − 1 m ) + λ1µ1H ( i m , j m ) ] then ϕm is Hadamard differentiable in every H ∈ D[0, 1]2. Proof: Let {hn}n≥1 ⊂ D[0, 1]2, H ∈ D[0, 1]2, and {tn}n≥1 ⊂ R such as hn −→ h, with h ∈ D[0, 1]2, and tn −→ 0. It holds lim n−→∞ ϕm(H + tnhn) − ϕm(H) tn = lim n−→∞         m ∑ j=1 m ∑ i=1 [ 1( i−1 m , i m ]× ( j−1 m , j m ](1 − λ1)(1 − µ1) ( H ( i − 1 m , j − 1 m ) +tnhn ( i − 1 n , j − 1 m )) + (1 − λ1)µ1 ( H ( i − 1 m , j m ) + tnhn ( i − 1 m , j m )) +λ1(1 − µ1) ( H ( i m , j − 1 m ) + tnhn ( i m , j − 1 m )) +λ1µ1 ( H ( i m , j m ) + tnhn ( i m , j m ))] − ϕm(H) ] /tn 116 = lim n−→∞         m ∑ j=1 m ∑ i=1 [ 1( i−1 m , i m ]× ( j−1 m , j m ](1 − λ1)(1 − µ1)hn ( i − 1 n , j − 1 m ) +(1 − λ1)µ1hn ( i − 1 m , j m ) + λ1(1 − µ1)hn ( i m , j − 1 m ) +λ1µ1hn ( i m , j m )]] = m ∑ j=1 m ∑ i=1 [ 1( i−1 m , i m ]× ( j−1 m , j m ](1 − λ1)(1 − µ1)h ( i − 1 n , j − 1 m ) +(1 − λ1)µ1h ( i − 1 m , j m ) + λ1(1 − µ1)h ( i m , j − 1 m ) +λ1µ1h ( i m , j m )] = ϕm(h). Let c ∈ R and h1, h2 ∈ D[0, 1]2, we have ϕm(ch1 + h2) = m ∑ j=1 m ∑ i=1 [ 1( i−1 m , i m ]× ( j−1 m , j m ](1 − λ1)(1 − µ1)(ch1 + h2) ( i − 1 n , j − 1 m ) +(1 − λ1)µ1(ch1 + h2) ( i − 1 m , j m ) + λ1(1 − µ1)(ch1 + h2) ( i m , j − 1 m ) +λ1µ1(ch1 + h2) ( i m , j m )] = c m ∑ j=1 m ∑ i=1 [ 1( i−1 m , i m ]× ( j−1 m , j m ](1 − λ1)(1 − µ1)h1 ( i − 1 n , j − 1 m ) +(1 − λ1)µ1h1 ( i − 1 m , j m ) + λ1(1 − µ1)h1 ( i m , j − 1 m ) +λ1µ1h1 ( i m , j m )] + m ∑ j=1 m ∑ i=1 [ 1( i−1 m , i m ]× ( j−1 m , j m ](1 − λ1)(1 − µ1)h2 ( i − 1 n , j − 1 m ) +(1 − λ1)µ1h2 ( i − 1 m , j m ) + λ1(1 − µ1)h2 ( i m , j − 1 m ) +λ1µ1h2 ( i m , j m )] = cϕm(h1) + ϕm(h2) and ϕm is linear. 
117 Let {hn}n≥1 ⊂ D[0, 1]2 with hn −→ h ∈ D[0, 1]2, it holds lim n−→∞ ϕm(hn) = lim n−→∞ m ∑ j=1 m ∑ i=1 [ 1( i−1 m , i m ]× ( j−1 m , j m ](1 − λ1)(1 − µ1)hn ( i − 1 n , j − 1 m ) +(1 − λ1)µ1hn ( i − 1 m , j m ) + λ1(1 − µ1)hn ( i m , j − 1 m ) +λ1µ1hn ( i m , j m )] = m ∑ j=1 m ∑ i=1 [ 1( i−1 m , i m ]× ( j−1 m , j m ](1 − λ1)(1 − µ1)h ( i − 1 n , j − 1 m ) +(1 − λ1)µ1h ( i − 1 m , j m ) + λ1(1 − µ1)h ( i m , j − 1 m ) +λ1µ1h ( i m , j m )] = ϕm(h) and ϕm is continuous. Therefore we can define the Hadamard derivative in H of ϕm as ϕ′H(h) = ϕm(h), for all h ∈ D[0, 1]2.  Remark 4.24 The functional defined in Lemma 4.22 applied to C is exactly the checkerboard copula of order m defined in [33], and the linear B-spline copula defined in [43], in the case m equals to n. In Chapter 2 we can see that we obtain a better approximation to the real copula if we consider m smaller than n. Remark 4.25 Because in the proof of Lemma 4.23 the relations holds for every sequence {hn}n≥1 ⊂ D[0, 1]2 and h ∈ D[0, 1]2 such that hn −→ h, particularly we can take h ∈ C[0, 1]2 ⊂ D[0, 1]2, that is, ϕm is Hadamard differentiable in every H ∈ D[0, 1]2 tangentially at D0 = C[0, 1]2. The following theorem is the main result of this chapter, since it provides the weak convergence of the process associated with the sample copula. It is important to notes that the proof of the following theorem uses the convergence of the process associated to the empirical copula given in the Theorem 4.19. Theorem 4.26 Let m ≥ 2 be fixed, let C be a copula with continuous partial derivatives and let Cn m be the sample copula build from the modified sample of a sample of size n from C. Let {rn}n≥1 be a increasing sequence such as rn −→ ∞ and m divides rn, for all n ∈ N, then √ rn ( Crn m − ϕm(C) ) D−→ ϕ′C(GC) = ϕm(GC) 118 where ϕm is defined in the Lemma 4.23, and ϕ′CGC) = ϕm(GC) = G is defined by G = m ∑ j=1 m ∑ i=1 [ 1( i−1 m , i m ]× ( j−1 m , j m ](1 − λ1)(1 − µ1)GC ( i − 1 m , j − 1 m ) +(1 − λ1)µ1GC ( i − 1 m , j m ) + λ1(1 − µ1)GC ( i m , j − 1 m ) +λ1µ1GC ( i m , j m )] . Proof: Based on the Theorem 4.19 we have that √ n (Cn −C) D−→ GC, if we define rn = mn, for all n ∈ N, then the above relation implies that √ rn ( Crn −C ) D−→ GC, when n −→ ∞, the Lemma 4.23 and Remark 4.25 imply that ϕm is Hadamard differentiable at every H ∈ D[0, 1]2 tangentially at C[0, 1]2; we can consider, by Theorem 4.17, that the processes GC ∈ C[0, 1]2 is a separable stochastic process, because it is continuous almost surely respect to a probability measure P. Applying the Delta Method (Theorem 4.18) with Dϕ = D = D[0, 1]2, E = l∞([0, 1]2), θrn = C, for all n ∈ N, θ = C, and D0 = C[0, 1]2, we have that √ rn ( ϕm(Crn ) − ϕm(C) ) D−→ ϕ′C(GC) = ϕm(GC) and, by the Lemma 4.22, it is satisfied that ϕm(Crn ) = C rn m , therefore √ rn ( Crn m − ϕm(C) ) D−→ ϕ′C(GC) = ϕm(GC).  In the following remark we make observations about the parameters in the Gaussian process. 
Remark 4.27 Let (a, b) ∈ I2 = [0, 1]2 with (a, b) ∈ (i − 1, i/m] × ( j − 1, j/m], because the linear combination of the components of a Gaussian vector have normal distribution and the process GC is centered [29, Definition 16.1], we have that ϕ′C(GC)(a, b) = ϕm(GC)(a, b) = G(a, b) is normally 119 distributed with mean equal to zero and variance given by (using the values λ1(a, b) and µ1(a, b), given on the Remark 4.21, and the results from Theorem 4.19) Var(G(a, b)) = (1 − λ1)2(1 − µ1)2Var ( GC ( i − 1 m , j − 1 m )) + (1 − λ1)2µ2 1Var ( GC ( i − 1 m , j m )) +λ2 1(1 − µ1)2Var ( GC ( i m , j − 1 m )) + λ2 1µ 2 1Var ( GC ( i m , j m )) +2(1 − λ1)2µ1(1 − µ1)E ( GC ( i − 1 m , j − 1 m ) · GC ( i − 1 m , j m )) +2λ1(1 − λ1)(1 − µ1)2E ( GC ( i − 1 m , j − 1 m ) · GC ( i m , j − 1 m )) +2λ1(1 − λ1)µ1(1 − µ1)E ( GC ( i − 1 m , j − 1 m ) · GC ( i m , j m ) ) +2λ1(1 − λ1)µ1(1 − µ1)E ( GC ( i − 1 m , j m ) · GC ( i m , j − 1 m )) +2λ1(1 − λ1)µ2 1E ( GC ( i − 1 m , j m ) · GC ( i m , j m ) ) +2λ2 1µ1(1 − µ1)E ( GC ( i m , j − 1 m ) · GC ( i m , j m ) ) with E [ GC(u, v) · GC(u′, v′) ] = E [(BC(u, v) − ∂1C(u, v)BC(u, 1) − ∂2C(u, v)BC(1, v)) ·(BC(u′, v′) − ∂1C(u′, v′)BC(u′, 1) − ∂2C(u′, v′)BC(1, v′)) ] = E [ BC(u, v)BC(u′, v′) ] − E [ BC(u, v)∂1C(u′, v′)BC(u′, 1) ] −E [ BC(u, v)∂2C(u′, v′)BC(1, v′) ] − E [ ∂1C(u, v)BC(u, 1)BC(u′, v′) ] +E [ ∂1C(u, v)BC(u, 1)∂1C(u′, v′)BC(u′, 1) ] +E [ ∂1C(u, v)BC(u, 1)∂2C(u′, v′)BC(1, v′) ] −E [ ∂2C(u, v)BC(1, v)BC(u′, v′) ] +E [ ∂2C(u, v)BC(1, v)∂1C(u′, v′)BC(u′, 1) ] +E [ ∂2C(u, v)BC(1, v)∂2C(u′, v′)BC(1, v′) ] = C(u ∧ u′, v ∧ v′) −C(u, v)C(u′, v′) −∂1C(u′, v′) [ C(u ∧ u′, v) − u′C(u, v) ] −∂2C(u′, v′) [ C(u, v ∧ v′) − v′C(u, v) ] 120 −∂1C(u, v) [ C(u ∧ u′, v′) − uC(u′, v′) ] +∂1C(u, v)∂1C(u′, v′) [ u ∧ u′ − uu′ ] +∂1C(u, v)∂2C(u′, v′) [ C(u, v′) − uv′ ] −∂2C(u, v) [ C(u′, v ∧ v′) − vC(u′, v′) ] +∂2C(u, v)∂1C(u′, v′) [ C(u′, v) − vu′ ] ∂2C(u, v)∂2C(u′, v′) [ v ∧ v′ − vv′ ] and Var(GC(u, v)) = C(u, v) −C(u, v)2 − ∂1C(u, v) [C(u, v) −C(u, v)u] −∂2C(u, v) [C(u, v) −C(u, v)v] − ∂1C(u, v) [C(u, v) −C(u, v)u] +∂1C(u, v)∂1C(u, v) ( u − u2 ) + ∂2C(u, v)∂1C(u, v) [C(u, v) − uv] −∂2C(u, v) [C(u, v) − vC(u, v)] + ∂2C(u, v)∂1C(u, v) [C(u, v) − uv] +∂2C(u, v)∂2C(u, v) ( v − v2 ) . Remark 4.28 In the case where the copula C is equal to the product copula, for all (a, b) ∈ I2 = [0, 1]2, using Nelsen’s notation given in the Remark 4.21 and taking a1 = (i − 1)/m, a2 = i/m, b1 = ( j − 1)/m and b2 = j/m, we have C(a, b) = (1 − λ1)(1 − µ1)Π(a1, b1) + (1 − λ1)µ1Π(a1, b2) + λ1(1 − µ1)Π(a2, b1) + λ1µ1Π(a2, b2) = ( a2 − a a2 − a1 ) ( b2 − b b2 − b1 ) a1b1 + ( a2 − a a2 − a1 ) ( b − b1 b2 − b1 ) a1b2 + ( a − a1 a2 − a1 ) ( b2 − b b2 − b1 ) a2b1 + ( a − a1 a2 − a1 ) ( b − b1 b2 − b1 ) a2b2 ( a2 − a a2 − a1 ) a1 [( b2 − b b2 − b1 ) ( b − b1 b2 − b1 ) b2 ] + ( a − a1 a2 − a1 ) a2 [( b2 − b b2 − b1 ) ( b − b1 b2 − b1 ) b2 ] = ( a2 − a a2 − a1 ) a1b + ( a − a1 a2 − a1 ) a2b = a2 − a1 a2 − a2 ab = ab = Π(a, b). Therefore the conclusion of Theorem 4.26, can we written as √ rn ( Crn m − Π ) D−→ ϕ′H(GC) = ϕm(GC). 121 In this case, the covariance and variance of Z = GC processes is given by Cov(Z(s, t),Z(u, v)) = (s ∧ u − su)(t ∧ v − tv) and Var(Z(s, t)) = (s − s2)(t − t2). 
Let (a, b) ∈ I2 = [0, 1]2 with (a, b) ∈ (i − 1, i/m] × ( j − 1, j/m], for the calculus of variance of the ϕm(Z)(a, b), normally distributed with mean equal to zero, we have the following relations V1 = Var ( (1 − λ1)(1 − µ1)Z ( i − 1 n , j − 1 m )) = (1 − λ1)2(1 − µ1)2       i − 1 m − ( i − 1 m )2            j − 1 m − ( j − 1 m )2      , V2 = Var ( (1 − λ1)µ1Z ( i − 1 n , j m )) = (1 − λ1)2µ2 1       i − 1 m − ( i − 1 m )2      ( j m − ( j m )2 ) , V3 = Var ( λ1(1 − µ1)Z ( i n , j − 1 m )) = λ2 1(1 − µ1)2 ( i m − ( i m )2 )       j − 1 m − ( j − 1 m )2      , V4 = Var ( λ1µ1Z ( i n , j m )) = λ2 1µ 2 1 ( i m − ( i m )2 ) ( j m − ( j m )2 ) , C12 = Cov ( (1 − λ1)(1 − µ1)Z ( i − 1 m , j − 1 m ) , (1 − λ1)µ1Z ( i − 1 m , j m )) = (1 − λ1)2(1 − µ1)µ1       i − 1 m − ( i − 1 m )2      ( j − 1 m − j( j − 1) m2 ) , C13 = Cov ( (1 − λ1)(1 − µ1)Z ( i − 1 m , j − 1 m ) , λ1(1 − µ1)Z ( i m , j − 1 m )) = λ1(1 − λ1)(1 − µ1)2 ( i − 1 m − i(i − 1) m2 )       j − 1 m − ( j − 1 m )2      , C14 = Cov ( (1 − λ1)(1 − µ1)Z ( i − 1 m , j − 1 m ) , λ1µ1Z ( i m , j m ) ) 122 = λ1(1 − λ1)µ1(1 − µ1) ( i − 1 m − i(i − 1) m2 ) ( j − 1 m − j( j − 1) m2 ) , C23 = Cov ( (1 − λ1)µ1Z ( i − 1 m , j m ) , λ1(1 − µ1)Z ( i m , j − 1 m )) = λ1(1 − λ1)µ1(1 − µ1) ( i − 1 m − i(i − 1) m2 ) ( j − 1 m − j( j − 1) m2 ) , C24 = Cov ( (1 − λ1)µ1Z ( i − 1 m , j m ) , λ1µ1Z ( i m , j m ) ) = λ1(1 − λ1)µ2 1 ( i − 1 m − i(i − 1) m2 ) ( j m − ( j m )2 ) , C34 = Cov ( λ1(1 − µ1)Z ( i m , j − 1 m ) , λ1µ1Z ( i m , j m ) ) = λ2 1µ1(1 − µ1) ( i m − ( i m )2 ) ( j − 1 m − j( j − 1) m2 ) , and Var(ϕm(Z)(a, b)) = V1 + V2 + V3 + V4 + 2 (C12 +C13 +C14 +C23 +C24 +C34) . From [40], and using similar calculations as above, we can extend the convergence results of the Theorem 4.26 to higher dimensions d > 2. This is given in the following theorem: Theorem 4.29 Let d > 2, m ≥ 2, let C be a d-copula with continuous partial derivatives and let Cn m be the sample copula build from the modified sample of a sample of size n from C. Let {rn}n≥1 be a increasing sequence such as rn −→ ∞ and m divides rn, for all n ∈ N, then √ rn ( Crn m − ϕm(C) ) D−→ ϕ′C(GC) = ϕm(GC) where GC = BC(u1, · · · , ud) − d ∑ i=1 ∂iC(u1, · · · , ud)BC(1, 1, · · · , ui, · · · , 1) and BC is a Gaussian process with E(BC(u1, · · · , ud) · BC(u′1, · · · , u′d)) = C(u1 ∧ u′1, · · · , ud ∧ u′d) −C(u1, · · · , ud)C(u′1, · · · , u′d). In the following section we given the results of perform several simulations of the associated process of the sample copula given in Theorem 4.26 to exemplify its convergence. 123 4.3 Simulation study: Gaussian process In the next tables we present the results when we perform 1, 000 iterations from samples of size 50, 000 for the process associated to some family of copulas with continuous partial derivatives. The tables show the results when we repeat these simulations 100 times. The column θ indicates the parameter value, the column ρ indicates the value of the Spearman’s Rho associated to the parameter of the copula, the column Point gives the point of evaluation of the process, the column M. means is the mean of the means of the 100 simulations of the process (since the processes is a centered one, this value most be close to zero), M. var denotes the mean of the variances obtained form the 100 repetitions, Real var. 
denotes the real variance calculated using Remark 4.27; the first three columns 0.01, 0.05 and 0.10 indicate the number of rejections of normality in the 100 simulations at each α-level using the Anderson-Darling test, and the second three columns 0.01, 0.05 and 0.10 indicate the number of rejections of normality in the 100 simulations at each α-level using the Shapiro-Wilk test; the column UAD indicates the p-value of a Kolmogorov-Smirnov uniformity test applied to the p-values of the Anderson-Darling tests, and the column USW indicates the p-value of a Kolmogorov-Smirnov uniformity test applied to the p-values of the Shapiro-Wilk tests. In all cases we take the parameter m of the sample copula equal to 4.

θ ρ Point M. means M. var. Real var. 0.01 0.05 0.10 0.01 0.05 0.10 UAD USW
20 0.9578 (0.8,0.9) -0.00010 0.001643 0.001638 1 4 11 1 5 14 0.00032 0.02205
20 0.9578 (0.1,0.2) -0.00023 0.00163 0.00163 2 10 16 3 8 15 0.00036 0.01440
20 0.9578 (0.4,0.6) -0.00004 0.00220 0.00220 3 11 15 0 5 11 0.34306 0.38131
-20 -0.9578 (0.1,0.9) 0.00010 0.00040 0.00040 2 9 16 0 6 14 0.00299 0.00692
-20 -0.9578 (0.8,0.2) 0.00034 0.00654 0.00655 2 7 12 3 7 10 0.01939 0.12012
-20 -0.9578 (0.4,0.6) 0.00046 0.00252 0.00252 0 4 11 3 8 12 0.74344 0.98920
5 0.6434 (0.1,0.2) -0.00027 0.00378 0.00377 0 4 11 0 5 8 0.06614 0.75172
5 0.6434 (0.8,0.9) -0.00001 0.00377 0.00377 1 8 14 2 9 13 0.13685 0.27477
5 0.6434 (0.4,0.6) -0.00047 0.01665 0.01672 0 2 9 0 4 7 0.01777 0.57689
-5 -0.6434 (0.1,0.9) -0.00002 0.00093 0.00094 2 6 17 1 7 14 0.00298 0.03665
-5 -0.6434 (0.8,0.2) 0.00005 0.01518 0.01508 1 5 10 2 9 16 0.04696 0.15866
-5 -0.6434 (0.4,0.6) 0.00040 0.01919 0.01916 1 2 4 1 3 5 0.017208 0.02306

Table 4.1 Simulations results (Frank Copula).

θ ρ Point M. means M. var. Real var. 0.01 0.05 0.10 0.01 0.05 0.10 UAD USW
1.1 0.1341 (0.8,0.9) 0.00009 0.00400 0.00400 1 3 7 1 3 5 0.13436 0.59930
1.1 0.1341 (0.1,0.2) -0.00011 0.00383 0.00384 2 5 12 1 3 6 0.07756 0.24805
1.1 0.1341 (0.4,0.6) -0.00139 0.03150 0.03164 2 5 10 0 7 11 0.44923 0.48465
3 0.8481 (0.8,0.81) -0.00027 0.00889 0.00891 1 7 20 3 10 19 0.00000 0.00000
3 0.8481 (0.1,0.15) -0.00016 0.00171 0.00171 1 4 13 0 5 13 0.04158 0.44648
3 0.8481 (0.45,0.5) 0.00029 0.02529 0.02537 3 5 11 2 6 8 0.99269 0.41925
8 0.9773 (0.8,0.81) -0.00035 0.00346 0.00345 2 9 17 1 8 18 0.00000 0.01543
8 0.9773 (0.1,0.13) -0.00005 0.00058 0.00058 2 8 18 3 8 16 0.00036 0.02517
8 0.9773 (0.45,0.48) -0.00078 0.00787 0.00785 2 6 10 2 4 6 0.67848 0.46945
20 0.9963 (0.8,0.81) -0.00056 0.00138 0.00138 6 23 37 4 15 21 0.00000 0.00000
20 0.9963 (0.1,0.11) -0.00018 0.00017 0.00017 0 14 31 4 11 19 0.00000 0.00000
20 0.9963 (0.4,0.6) -0.00020 0.00076 0.00076 2 10 26 2 10 20 0.00000 0.00000

Table 4.2 Simulations results (Gumbel Copula).
θ ρ Point M. means M. var. Real var. 0.01 0.05 0.10 0.01 0.05 0.10 UAD USW
-0.9 -0.8978 (0.8,0.3) 0.00005 0.01399 0.01408 2 7 8 2 4 11 0.23918 0.39465
-0.9 -0.8978 (0.8,0.9) 0.00007 0.00078 0.00077 0 6 20 3 10 18 0.00000 0.00000
-0.5 -0.9878 (0.5,0.49) -0.00014 0.05621 0.05624 1 3 8 2 3 7 0.74006 0.51611
-0.1 -0.0787 (0.8,0.9) -0.00022 0.00347 0.00346 1 3 5 1 4 6 0.00465 0.06234
-0.1 -0.0787 (0.8,0.3) -0.00052 0.02112 0.02131 0 2 6 0 3 4 0.78381 0.23228
-0.1 -0.0787 (0.2,0.9) -0.00012 0.00380 0.00378 1 6 11 2 6 11 0.00456 0.06511
5 0.8848 (0.1,0.12) -0.00019 0.00058 0.00058 0 9 12 2 10 19 0.01646 0.16689
5 0.8848 (0.8,0.78) -0.00126 0.1646 0.01647 0 4 11 0 5 13 0.08772 0.10310
5 0.8848 (0.4,0.38) -0.00068 0.00447 0.00446 1 4 8 1 5 10 0.50330 0.8815
20 0.9869 (0.1,0.11) -0.00014 0.00013 0.00013 2 15 30 1 7 20 0.00000 0.00000
20 0.9869 (0.8,0.79) -0.00052 0.00574 0.00573 4 15 16 3 11 18 0.00142 0.05822
20 0.9869 (0.4,0.39) -0.00046 0.00109 0.00109 0 0 8 0 2 6 0.06122 0.67889

Table 4.3 Simulations results (Clayton Copula).

θ ρ Point M. means M. var. Real var. 0.01 0.05 0.10 0.01 0.05 0.10 UAD USW
1.1 0.0820 (0.8,0.9) -0.00016 0.00396 0.00393 2 4 7 2 6 9 0.29601 0.37343
1.1 0.0820 (0.1,0.2) -0.00001 0.00371 0.00371 3 6 9 0 7 12 0.32767 0.58562
1.1 0.0820 (0.4,0.6) 0.00008 0.03230 0.03214 3 8 13 1 7 13 0.63409 0.85489
3 0.6993 (0.8,0.79) -0.00035 0.01340 0.01332 0 7 11 1 6 8 0.18609 0.12258
3 0.6993 (0.1,0.12) 0.00014 0.00148 0.00147 2 5 12 3 6 11 0.50472 0.16758
3 0.6993 (0.4,0.41) -0.00065 0.02382 0.02363 2 5 10 1 4 15 0.56857 0.33761
8 0.9310 (0.8,0.79) -0.00047 0.00507 0.00506 2 10 19 1 11 20 0.00000 0.00000
8 0.9310 (0.1,0.12) 0.00006 0.00104 0.00104 1 4 9 1 1 7 0.01724 0.42950
8 0.9310 (0.4,0.41) -0.00058 0.00566 0.00568 0 3 9 1 5 8 0.06168 0.12568
20 0.9860 (0.7,0.7) -0.00064 0.00182 0.00181 2 4 9 1 7 12 0.71708 0.19691
20 0.9860 (0.1,0.1) -0.00012 0.00033 0.00033 4 10 17 4 8 12 0.00094 0.09628
20 0.9860 (0.4,0.4) -0.00040 0.00148 0.00148 2 4 9 1 7 12 0.96922 0.27021

Table 4.4 Simulations results (Joe Copula).

θ ρ Point M. means M. var. Real var. 0.01 0.05 0.10 0.01 0.05 0.10 UAD USW
0.1 -0.6536 (0.8,0.2) 0.00004 0.01436 0.01445 2 2 8 2 7 11 0.14030 0.19872
0.1 -0.6536 (0.1,0.9) 0.00015 0.00089 0.00090 3 7 9 2 5 7 0.48364 0.61248
0.1 -0.6536 (0.4,0.41) 0.00016 0.01803 0.01811 1 7 13 3 8 17 0.32480 0.0568
1.5 0.1344 (0.8,0.9) 0.00007 0.00392 0.00390 1 4 12 2 7 13 0.22429 0.58390
1.5 0.1344 (0.1,0.2) 0.00006 0.00391 0.00390 1 6 11 1 3 13 0.00762 0.58638
1.5 0.1344 (0.4,0.6) 0.00036 0.03165 0.03145 2 5 9 0 5 13 0.62387 0.13984
8 0.6067 (0.8,0.79) -0.00039 0.01646 0.01658 3 10 14 3 8 17 0.22829 0.48114
8 0.6067 (0.1,0.11) 0.00010 0.00113 0.00113 1 5 9 2 5 10 0.53682 0.92631
8 0.6067 (0.4,0.41) -0.00105 0.02172 0.02170 0 3 9 0 4 8 0.05772 0.04771
30 0.8263 (0.8,0.79) -0.00065 0.01200 0.01199 1 8 17 0 7 13 0.02233 0.09471
30 0.8263 (0.1,0.11) -0.00009 0.00082 0.00082 0 2 10 2 5 11 0.40344 0.60288
30 0.8263 (0.4,0.41) -0.00090 0.01093 0.01095 5 9 11 3 10 12 0.61730 0.82075

Table 4.5 Simulations results (Plackett Copula).

Point M. means M. var. Real var. 0.01 0.05 0.10 0.01 0.05 0.10 UAD USW
(0.8,0.9) -0.00003 0.00366 0.00360 0 5 13 1 9 14 0.18762 0.91708
(0.1,0.2) 0.00028 0.00358 0.00352 1 7 15 4 10 15 0.09103 0.18777
(0.4,0.6) 0.00042 0.03242 0.03240 2 6 8 2 7 10 0.67877 0.41021
(0.1,0.9) 0.00002 0.00090 0.00090 3 6 9 1 5 8 0.04456 0.04697
(0.8,0.2) 0.00015 0.01439 0.01440 1 6 10 3 8 13 0.00066 0.00484

Table 4.6 Simulations results (Product Copula).
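The product-copula rows of Table 4.6 can be reproduced, up to Monte Carlo error, with a short script. The sketch below is our own code (we use a smaller sample size than the 50,000 of the tables to keep the run fast, and m = 4 as in all the tables); it evaluates the sample copula at a point through the bilinear interpolation functional ϕm of Lemma 4.22 applied to the empirical copula, simulates the process √n(Cn m(a, b) − ab) (recall that ϕm(Π) = Π by Remark 4.28), and applies the Anderson-Darling normality test as in this section.

import numpy as np
from scipy.stats import rankdata, anderson

rng = np.random.default_rng(3)
n, m, reps = 5000, 4, 1000
a, b = 0.4, 0.6                      # compare with row (0.4, 0.6) of Table 4.6

def phi_m_at(Ru, Rv, m, a, b):
    # bilinear interpolation of the empirical copula on the uniform m-grid,
    # i.e. the functional phi_m of Lemma 4.22 evaluated at the point (a, b)
    i, j = int(np.ceil(a * m)), int(np.ceil(b * m))
    lam, mu = (a - (i - 1) / m) * m, (b - (j - 1) / m) * m
    Cn = lambda x, y: np.mean((Ru <= x) & (Rv <= y))
    return ((1 - lam) * (1 - mu) * Cn((i - 1) / m, (j - 1) / m)
            + (1 - lam) * mu * Cn((i - 1) / m, j / m)
            + lam * (1 - mu) * Cn(i / m, (j - 1) / m)
            + lam * mu * Cn(i / m, j / m))

Z = np.empty(reps)
for r in range(reps):
    X = rng.random((n, 2))                                     # sample from the product copula
    Ru, Rv = rankdata(X[:, 0]) / n, rankdata(X[:, 1]) / n      # modified sample
    Z[r] = np.sqrt(n) * (phi_m_at(Ru, Rv, m, a, b) - a * b)    # sample copula process

print(Z.mean(), Z.var())             # mean near 0, variance near 0.0324
print(anderson(Z, dist='norm'))      # Anderson-Darling normality statistic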
In the previous results it can be seen that, in most cases, the number of rejections is close to the nominal α-level. However, in some cases the rejection counts are much larger than expected (for instance, the rows where both UAD and USW equal 0.00000). The problem occurs in simulations where the point is located near the boundary of I^2 = [0, 1] × [0, 1], and in simulations where the copula is close to one of the Fréchet-Hoeffding bounds and the point is not in the support of the copula M or the copula W. This also happens if the copula is singular. As a possible solution in such cases, we propose to increase the sample size, or to consider points very close to the support of the copula M or the copula W, in order to obtain a better approximation to the normal distribution with the parameters indicated in Remark 4.27. For example, if we consider a sample size of 100,000 instead of 50,000 in the simulation of the Gumbel copula with θ = 20 at the point (0.8, 0.8), we obtain a mean of means equal to −0.00039, a mean of variances equal to 0.00149, a real variance equal to 0.00153, and 4, 8 and 12 rejections of normality at α-levels 0.01, 0.05 and 0.10, respectively, for the Anderson-Darling test, and 1, 10 and 15 for the Shapiro-Wilk test; moreover, the Kolmogorov-Smirnov uniformity test applied to the p-values of the Anderson-Darling tests gives a p-value of 0.79405, and applied to the p-values of the Shapiro-Wilk tests a p-value of 0.77178.

Conclusions

We know that Glivenko-Cantelli's Theorem 2.5 is one of the two results that have made the study of empirical distribution functions such an important task. In our Theorem 2.17 we proved that the sample copula also satisfies a Glivenko-Cantelli result. It is unquestionable that, in dimension d = 1, the empirical distribution function is the best possible approximation to the real distribution function; a proof of this is the Dvoretzky-Kiefer-Wolfowitz inequality, see [11] and [34]. However, for dimensions higher than one, even though the Glivenko-Cantelli Theorem still holds, the price we have to pay is that very large sample sizes are needed to obtain a good approximation of the real distribution function, and the required sample size increases dramatically with the dimension d. Besides, the evaluation of the empirical distribution function needs arrays of order n^d, where n is the sample size, and this becomes a problem for large sample sizes even in relatively small dimensions. On the other hand, for the sample d-copula of order m, if m is relatively small compared to n, the array needed for the construction of C^n_m is only of order m^d, which in general is much smaller than n^d.

We saw in every simulation of Chapter 2 that there exists a value of m for which C^n_m is a better approximation than the empirical copula C_n. If we could define m_0 as the value of 2 ≤ m ≤ n such that C^n_{m_0} is a better approximation than C_n and minimizes the mean value of the supremum distance between C^n_m and the real copula C, then we would have selected the best possible m. Of course, when we have only one sample from an unknown distribution it is not easy to select an appropriate value of m. In Section 2.5, for dimension d = 2, we propose a method to estimate the value of m based on the sample Spearman's ρ. This idea rests on the fact that the best approximation of C^n_m to the product d-copula Π_d is always attained at m = 2, for any sample size n, that is, when ρ = 0; a sketch of this estimation step is given below.
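As a hedged illustration of that estimation step (not the calibration actually derived in Section 2.5), the sample Spearman's ρ can be computed directly from the data and mapped to an order m that grows with the strength of dependence; the rule rho_to_m below is a hypothetical placeholder that only shows the wiring.

```python
# Hypothetical sketch of choosing m from the sample Spearman's rho
# (the actual rule of Section 2.5 differs; this only illustrates the idea).
import numpy as np
from scipy.stats import spearmanr

def rho_to_m(rho_hat, n):
    """Hypothetical monotone rule: m = 2 at independence (rho = 0),
    larger m as |rho| grows, always keeping 2 <= m <= n."""
    m = 2 + int(np.floor(abs(rho_hat) * np.sqrt(n)))
    return max(2, min(m, n))

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.8 * x + rng.normal(size=200)   # dependent pair, so rho_hat is well above 0
rho_hat, _ = spearmanr(x, y)         # sample Spearman's rho
print(round(rho_hat, 4), rho_to_m(rho_hat, n=200))
```

For ρ̂ = 0 the rule returns m = 2, matching the fact quoted above; how fast m should grow with |ρ̂| is precisely what the calibration of Section 2.5 determines.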
The fact above also suggests that the parameter m could be estimated via the total variation distance between the sample copula and the product copula, computed from the densities induced by Π_d and by C^n_m.

As a last interesting remark, assume that d = 1 and that we have a random sample X_1, X_2, . . . , X_n from a continuous distribution F; let Y_i = i/n for i ∈ I_n be the modified sample, and take 2 ≤ m ≤ n. If we define the uniform partition of I = [0, 1] of size m and construct C^n_m following the same ideas as in Definition 2.12 for d ≥ 2, but in dimension one, we obtain C^n_m(u) = u for every u ∈ I. Therefore, C^n_m(u) is the distribution of F(X), independently of the selection of m. This statement clearly indicates that, using the ideas of Deheuvels [8], we always find the right distribution regardless of the real continuous F, the sample size n and the selected m.

The aim of Chapter 3 was to show that, in addition to the comparative advantages between the sample d-copula of order m and the empirical copula shown in Chapter 2, we can obtain results similar to those existing for the empirical copula given in [8], and that these results can be extended to higher dimensions; in this way we obtain a complete description of the distributions associated with the sample copula under the independence assumption.

As we mentioned before, the second result that makes the use of F_n, the empirical distribution, or of the empirical copula C_n, so important is the central limit theorem. In Chapter 4 we proved the weak convergence of the sample copula process √n(C^n_m − C) to a Gaussian process; from this result we obtained a central limit theorem similar to the one existing for the empirical copula process. Thus, from the last two chapters of this thesis we can replicate, for the sample copula, the most important convergence results of the empirical copula, and together with the results of Chapter 2 we may consider the sample copula to be a good approximation to the checkerboard copula, and therefore to the real copula, compared with the approximation given by the empirical copula.

Among the possible applications of the sample copula is the study of the credit risk of financial institutions. In [32] a methodology is proposed to calculate default correlations using the Gaussian copula; although this work received several criticisms, because these models did not provide adequate results under the conditions of the 2008 financial crisis, many efforts have been made using variants of the Gaussian model, see [3]. The sample copula can be used to model the overall risk without assuming any specific family of copulas. We have also worked on an algorithm to generate, from the sample copula, simulated samples that preserve the dependence structure, which may be used in methodologies related to goodness-of-fit tests or parameter estimation.

References

[1] Ash, R. B. and Gardner, M. F. (1975). Topics in Stochastic Processes. Academic Press, New York.

[2] Billingsley, P. (1968). Convergence of Probability Measures. John Wiley & Sons, New York.

[3] Bluhm, C., Overbeck, L. and Wagner, C. (2010). Introduction to Credit Risk Modeling. 2nd ed., Mathematics Series, CRC Press.

[4] Bouyé, E., Durrleman, V., Nikeghbali, A., Riboulet, G. and Roncalli, T. (2000). Copulas for finance: A reading guide and some applications. City Univ. Business School London and Groupe de Recherche Opérationnelle Crédit Lyonnais Paris. Preprint.

[5] Bücher, A., Dette, H. and Volgushev, S. (2011).
New estimators of the Pickands dependence function and a test for extreme-value dependence. Ann. Statist., 39, (4), 1963–2006.

[6] Caillault, C. and Guegan, D. (2005). Empirical estimation of tail dependence using copulas. Application to Asian markets. Quantitative Finance, 5, 489–501.

[7] Cherubini, U., Luciano, E. and Vecchiato, W. (2004). Copula Methods in Finance. John Wiley & Sons, Chichester.

[8] Deheuvels, P. (1979). La fonction de dépendance empirique et ses propriétés. Un test non paramétrique d'indépendance. Roy. Belg. Bull. Cl. Sci., 65, (5), 274–292.

[9] Durante, F. and Fernández-Sánchez, J. (2010). Multivariate shuffles and approximations of copulas. Statist. Probab. Lett., 80, 1827–1834.

[10] Durrleman, V., Nikeghbali, A. and Roncalli, T. (2000). Which copula is the right one? Groupe de Recherche Opérationnelle Crédit Lyonnais Paris. Preprint.

[11] Dvoretzky, A., Kiefer, J. and Wolfowitz, J. (1956). Asymptotic minimax character of the sample distribution function and of the classical multinomial estimator. Ann. Math. Statist., 27, (3), 642–669.

[12] Embrechts, P., Höing, A. and Juri, A. (2003). Using copulae to bound the value-at-risk for functions of dependent risks. Finance Stochast., 7, 145–167.

[13] Fermanian, J. D., Radulović, D. and Wegkamp, M. (2004). Weak convergence of empirical copula processes. Bernoulli, 10, (5), 847–860.

[14] Fermanian, J. D., Radulović, D. and Wegkamp, M. (2015). Asymptotic total variation tests for copulas. Bernoulli, 21, (3), 1911–1945.

[15] Fredricks, G. A., Nelsen, R. B. and Rodríguez-Lallena, J. (2005). Copulas with fractal support. Insurance Math. Econom., 37, 42–48.

[16] Genest, C., Huang, W. and Dufour, J. M. (2013). A regularized goodness-of-fit test for copulas. J. de la Société Française de Statistique, 154, 64–77.

[17] Genest, C., Nešlehová, J. G. and Rémillard, B. (2014). On the empirical multilinear copula process for count data. Bernoulli, 20, 1344–1371.

[18] Genest, C., Quessy, J. F. and Rémillard, B. (2007). Asymptotic local efficiency of Cramér-von Mises tests for multivariate independence. Ann. Statist., 35, (1), 166–191.

[19] Genest, C. and Rémillard, B. (2004). Tests of independence and randomness based on the empirical copula process. Test, 13, 335–369.

[20] Genest, C., Rémillard, B. and Beaudoin, D. (2009). Goodness-of-fit tests for copulas: A review and a power study. Insurance Math. Econom., 44, 199–213.

[21] Genest, C. and Segers, J. (2010). On the covariance of the asymptotic empirical copula process. J. Multivariate Anal., 101, 1837–1845.

[22] Gijbels, I., Omelka, M. and Veraverbeke, N. (2012). Multivariate and functional covariates and conditional copulas. Electron. J. Stat., 6, 1273–1306.

[23] Gijbels, I., Omelka, M. and Veraverbeke, N. (2015). Partial and average copulas and association measures. Electron. J. Stat., 9, (2), 2420–2474.

[24] González-Barrios, J. M. and Hernández-Cedillo, M. M. (2013). Sample d-copula of order m. Kybernetika, 49, (5), 663–691.

[25] González-Barrios, J. M., Hernández-Cedillo, M. M. and Rueda, R. (2014). Minimal number of parameters of a d-dimensionally stochastic matrix. In Modelos en Estadística y Probabilidad III, Eds. González-Barrios, J. M., León, J. A., Navarro, R. and Villa, J., Aportaciones Matemáticas de la SMM, Ser. Comunicaciones, 47, 123–137.

[26] González-Barrios, J. M. and Hoyos-Argüelles, R. (2016). Asymptotic results for the sample copula of order m. Preprint IIMAS UNAM.

[27] Gould, H. W. (1972). Combinatorial Identities. Morgantown Printing and Binding Co.
[28] Grégoire, V., Genest, C. and Gendron, M. (2008). Using copulas to model price dependence in energy markets. Energy Risk, 5, (5), 58–64.

[29] Jacod, J. and Protter, P. (2000). Probability Essentials. 2nd ed., Springer, New York.

[30] Jaworski, P., Durante, F., Härdle, W. and Rychlik, T. (2010). Copula Theory and its Applications. Lect. Notes in Statist. - Proceedings, 198, Springer, Heidelberg.

[31] Joe, H. (2015). Dependence Modelling with Copulas. Monogr. Statist. Appl. Probab., 134, Chapman & Hall/CRC, Boca Raton.

[32] Li, D. X. (2000). On default correlation: A copula function approach. Journal of Fixed Income, 9, (4), 43–54.

[33] Li, X., Mikusiński, P., Sherwood, H. and Taylor, M. D. (1997). On approximation of copulas. In Benes, V. and Stepán, J., editors, Distributions with Given Marginals and Moment Problems, 107–116.

[34] Massart, P. (1990). The tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality. Ann. Probab., 18, (3), 1269–1283.

[35] Mikusiński, P., Sherwood, H. and Taylor, M. D. (1992). Shuffles of Min. Stochastica, 13, (1), 61–74.

[36] Mikusiński, P. and Taylor, M. D. (2010). Some approximation of n-copulas. Metrika, 72, 385–414.

[37] Nelsen, R. B. (2006). An Introduction to Copulas. 2nd ed., Lect. Notes in Statist., 139, Springer, New York.

[38] Neuhaus, G. (1971). On weak convergence of stochastic processes with multidimensional time parameter. Ann. Math. Statist., 42, (4), 1285–1295.

[39] Quessy, J. F. (2016). A general framework for testing homogeneity hypotheses about copulas. Electron. J. Stat., 10, (1), 1064–1097.

[40] Rüschendorf, L. (1976). Asymptotic distributions of multivariate rank order statistics. Ann. Statist., 4, (5), 912–923.

[41] Sancetta, A. and Satchell, S. (2004). The Bernstein copula and its applications to modeling and approximations of multivariate distributions. Econom. Theory, 20, (3), 535–562.

[42] Schefzik, R., Thorarinsdottir, T. L. and Gneiting, T. (2013). Uncertainty quantification in complex simulation models using ensemble copula coupling. Statist. Sci., 28, (4), 616–640.

[43] Shen, X., Zhu, Y. and Song, L. (2008). Linear B-spline copulas with applications to nonparametric estimation of copulas. Comput. Statist. Data Anal., 52, (7), 3806–3819.

[44] Stute, W. (1984). The oscillation behavior of empirical processes: The multivariate case. Ann. Probab., 12, (2), 361–379.

[45] Swanepoel, J. W. H. and Allison, J. S. (2013). Some new results on the empirical copula estimator with applications. Statist. Probab. Lett., 83, (7), 1731–1739.

[46] Trutschnig, W. and Fernández-Sánchez, J. (2012). Idempotent and multivariate copulas with fractal support. J. Statist. Plan. Infer., 142, 3086–3096.

[47] Turrin Fernholz, L. (1983). von Mises Calculus for Statistical Functionals. Lecture Notes in Statistics, 19, Springer-Verlag, New York.

[48] van den Goorbergh, R. W. J., Genest, C. and Werker, B. J. M. (2005). Bivariate option pricing using dynamic copula models. Insurance Math. Econom., 37, 101–114.

[49] van der Vaart, A. W. and Wellner, J. A. (1996). Weak Convergence and Empirical Processes: With Applications to Statistics. Springer Series in Statistics, Springer, New York.