MATH3871 Assignment 2
Robert Tan
School of Mathematics and Statistics
[email protected]
Question 1

Part (a) A DAG diagram of the data and model parameters is:

[Figure 1: DAG for data and model parameters, with nodes ln α, β, τ, µi, σ2, ln xi and ln yi, repeated on a plate for i = 1, …, N.]
Part (b) The priors used are Normal distributions for ln α and β, since each can be either positive or negative, and a Gamma distribution for the precision τ (equivalently, an inverse Gamma distribution on σ2). I chose the parameters using the coefficients and residuals from a linear regression performed on the log-linear model ln yi = ln α + β ln xi + εi. Varying these parameters did not significantly change the results, as seen below where I tested with diffuse normal distributions. Varying the precision prior did, however, change the final samples for the standard error.

log_alpha ~ dnorm(0.16851, 0.0003958169)
beta      ~ dnorm(0.41400, 0.0002758921)
tau       ~ dgamma(0.001, 0.001)

            mean     sd       MC_error   val2.5pc  median   val97.5pc  start  sample
beta        0.41400  0.01783  0.0004102  0.37850   0.41400  0.4489     1001   3000
log_alpha   0.16860  0.02030  0.0004506  0.12750   0.16830  0.2089     1001   3000
sigma       0.09165  0.01477  0.0002629  0.06769   0.09016  0.1252     1001   3000

log_alpha ~ dnorm(0.0, 0.000001)
beta      ~ dnorm(0.0, 0.000001)
tau       ~ dgamma(0.001, 0.001)

            mean     sd       MC_error   val2.5pc  median   val97.5pc  start  sample
beta        0.41400  0.01783  0.0004102  0.37850   0.41400  0.4489     1001   3000
log_alpha   0.16860  0.02030  0.0004506  0.12750   0.16830  0.2089     1001   3000
sigma       0.09165  0.01477  0.0002629  0.06769   0.09016  0.1252     1001   3000

log_alpha ~ dnorm(0.0, 0.000001)
beta      ~ dnorm(0.0, 0.000001)
tau       ~ dgamma(0.01, 0.01)

            mean     sd       MC_error   val2.5pc  median   val97.5pc  start  sample
beta        0.41400  0.01873  0.0004308  0.3767    0.41400  0.4506     1001   3000
log_alpha   0.16860  0.02132  0.0004732  0.1254    0.16830  0.2109     1001   3000
sigma       0.09625  0.01550  0.0002760  0.0711    0.09469  0.1315     1001   3000
Part (c) The likelihood function of our data is given by
\[
L(\mathbf{y} \mid \mathbf{x}, \alpha, \beta, \sigma^2)
= \prod_{i=1}^{24} \left(2\pi\sigma^2\right)^{-1/2} \exp\left(-\frac{(\ln y_i - \ln\alpha - \beta \ln x_i)^2}{2\sigma^2}\right)
= \left(2\pi\sigma^2\right)^{-12} \exp\left(-\frac{1}{2\sigma^2} \sum_{i=1}^{24} (\ln y_i - \ln\alpha - \beta \ln x_i)^2\right).
\]
Part (d)

library(BRugs)
library(coda)

data <- read.table("dome-data.txt", sep = "", header = F, nrows = 2)
data[1,] <- log(data[1,])
data[2,] <- log(data[2,])
dataset <- list(log_x = unlist(data[1,]), log_y = unlist(data[2,]), N = 24)

# Initial values from a frequentist regression on the log-linear model
linearmodel <- lm(unlist(data[2,]) ~ unlist(data[1,]))
log_alpha_0 <- as.numeric(linearmodel$coefficients[1])
beta_0 <- as.numeric(linearmodel$coefficients[2])
tau_0 <- 1/(summary(linearmodel)$sigma)^2

codafit <- BRugsFit(modelFile = "H:/R/dome-model.txt", data = dataset,
                    inits = list(list(log_alpha = log_alpha_0, beta = beta_0, tau = tau_0),
                                 list(log_alpha = 0.25 * log_alpha_0, beta = 0.25 * beta_0, tau = 0.25 * tau_0),
                                 list(log_alpha = 4 * log_alpha_0, beta = 4 * beta_0, tau = 4 * tau_0)),
                    numChains = 3, c("log_alpha", "beta", "sigma2"),
                    nBurnin = 1000, nIter = 1000, coda = TRUE)

samplesStats("*")
a <- samplesHistory("*", mfrow = c(3, 1))
b <- samplesDensity("*", mfrow = c(3, 1))
bgr <- samplesBgr("*", mfrow = c(3, 1))
HPDinterval(codafit, 0.95)
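The model file dome-model.txt is not reproduced in the assignment. A sketch of its plausible contents is below: the data node names log_x, log_y and N match the dataset list above, and the priors are those from Part (b), but the linear-predictor node mu and the sigma2 definition are assumptions.

```
model {
    for (i in 1:N) {
        mu[i] <- log_alpha + beta * log_x[i]
        log_y[i] ~ dnorm(mu[i], tau)    # BUGS parameterises the normal by its precision tau
    }
    log_alpha ~ dnorm(0.16851, 0.0003958169)
    beta ~ dnorm(0.41400, 0.0002758921)
    tau ~ dgamma(0.001, 0.001)
    sigma2 <- 1 / tau                   # monitored as "sigma2" in BRugsFit
}
```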
Part (e) The traceplots, posterior density plots and Brooks-Gelman-Rubin statistic plots are as follows.

[Figures: trace histories over iterations 1001-2000, posterior densities and Brooks-Gelman-Rubin plots for 'beta', 'log_alpha' and 'sigma2'.]
Table 1: The 95% HDR regions for each parameter.

        Lower HDR   HDR Centre   Upper HDR
β       0.378477    0.4133130    0.448149
ln α    0.125292    0.1668455    0.208399
σ2      0.003980    0.0091880    0.014396
Question 2

Part (a) Claim. By assuming a multinomial likelihood and Dirichlet prior with parameter vector $\mathbf{a} = (a_1, \ldots, a_9)$, the Bayes factor for testing the conjecture that observed counts $\mathbf{n} = (n_1, \ldots, n_9)$ are consistent with Benford's law is given by
\[
B_{01} = \frac{B(\mathbf{a})}{B(\mathbf{a}+\mathbf{n})} \prod_{j=1}^{9} p_{0j}^{n_j},
\]
where $\mathbf{p}_0 = (p_{01}, \ldots, p_{09})$ are Benford's hypothesised proportions and
\[
B(\mathbf{a}) = \frac{\prod_{j=1}^{9} \Gamma(a_j)}{\Gamma\bigl(\sum_{j=1}^{9} a_j\bigr)}.
\]
Proof. The density of the Dirichlet distribution of order 9 is
\[
f(\mathbf{x}; \mathbf{a}) = \frac{1}{B(\mathbf{a})} \prod_{i=1}^{9} x_i^{a_i - 1}.
\]
Calculating the Bayes factor:
\[
B_{01} = \frac{L_0(\mathbf{n})}{L_1(\mathbf{n})}
= \frac{\dfrac{\Gamma\bigl(\sum_{j=1}^{9} n_j + 1\bigr)}{\prod_{j=1}^{9} \Gamma(n_j + 1)} \prod_{j=1}^{9} p_{0j}^{n_j}}
       {\displaystyle\int_\Omega \frac{\Gamma\bigl(\sum_{j=1}^{9} n_j + 1\bigr)}{\prod_{j=1}^{9} \Gamma(n_j + 1)} \prod_{j=1}^{9} p_j^{n_j} \cdot \frac{1}{B(\mathbf{a})} \prod_{i=1}^{9} p_i^{a_i - 1} \, d\mathbf{p}},
\]
where $\Omega$ is the support of a Dirichlet distribution of order 9. Cancelling the multinomial coefficients,
\[
B_{01} = \frac{\prod_{j=1}^{9} p_{0j}^{n_j}}{\dfrac{1}{B(\mathbf{a})} \displaystyle\int_\Omega \prod_{i=1}^{9} p_i^{a_i + n_i - 1} \, d\mathbf{p}}
= \frac{B(\mathbf{a})}{B(\mathbf{a}+\mathbf{n})} \prod_{j=1}^{9} p_{0j}^{n_j},
\]
since $\frac{1}{B(\mathbf{a}+\mathbf{n})} \prod_{i=1}^{9} p_i^{a_i + n_i - 1}$ is the density of a Dirichlet$(\mathbf{a}+\mathbf{n})$ distribution, so the final integral evaluates to $B(\mathbf{a}+\mathbf{n})$.
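As a numerical sanity check of this closed form (a sketch, not part of the submitted assignment code), we can evaluate it in Python using the digit counts and uniform prior a = (1, …, 1) from Part (b); it reproduces the Bayes factor of 1.177189 reported in Part (c).

```python
import math

# Benford's hypothesised proportions p0 and observed digit counts n from Part (b),
# with a uniform Dirichlet prior a = (1, ..., 1)
p0 = [0.301, 0.176, 0.125, 0.097, 0.079, 0.067, 0.058, 0.051, 0.046]
n = [31, 32, 29, 20, 18, 18, 21, 13, 10]
a = [1.0] * 9

def log_multivariate_beta(v):
    # log B(v) = sum_j log Gamma(v_j) - log Gamma(sum_j v_j)
    return sum(math.lgamma(x) for x in v) - math.lgamma(sum(v))

# log B01 = log B(a) - log B(a + n) + sum_j n_j log p0_j
log_b01 = (log_multivariate_beta(a)
           - log_multivariate_beta([ai + ni for ai, ni in zip(a, n)])
           + sum(ni * math.log(pi) for ni, pi in zip(n, p0)))
bayes_factor = math.exp(log_b01)
print(bayes_factor)  # ≈ 1.177, the value reported in Part (c)
```

Working on the log scale avoids overflow in the Gamma functions, exactly as the R code in Part (b) does with lgamma.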
Part (b) We now derive the fractional Bayes factor with training fraction $b$. The fractional Bayes factor is given by
\[
B_{01}^b = \frac{m_0^*(\mathbf{n})}{m_1^*(\mathbf{n})}.
\]
We derive $m_0^*(\mathbf{n})$ and $m_1^*(\mathbf{n})$ as follows:
\[
m_0^* = \frac{\dfrac{\Gamma\bigl(\sum_{j=1}^{9} n_j + 1\bigr)}{\prod_{j=1}^{9} \Gamma(n_j + 1)} \prod_{j=1}^{9} p_{0j}^{n_j}}
             {\left[\dfrac{\Gamma\bigl(\sum_{j=1}^{9} n_j + 1\bigr)}{\prod_{j=1}^{9} \Gamma(n_j + 1)} \prod_{j=1}^{9} p_{0j}^{n_j}\right]^{b}}
      = \left[\frac{\Gamma\bigl(\sum_{j=1}^{9} n_j + 1\bigr)}{\prod_{j=1}^{9} \Gamma(n_j + 1)} \prod_{j=1}^{9} p_{0j}^{n_j}\right]^{1-b}
\]
and
\[
m_1^* = \frac{\displaystyle\int_\Omega \frac{\Gamma\bigl(\sum_{j=1}^{9} n_j + 1\bigr)}{\prod_{j=1}^{9} \Gamma(n_j + 1)} \prod_{j=1}^{9} p_j^{n_j} \cdot \frac{1}{B(\mathbf{a})} \prod_{i=1}^{9} p_i^{a_i - 1} \, d\mathbf{p}}
             {\displaystyle\int_\Omega \left[\frac{\Gamma\bigl(\sum_{j=1}^{9} n_j + 1\bigr)}{\prod_{j=1}^{9} \Gamma(n_j + 1)} \prod_{j=1}^{9} p_j^{n_j}\right]^{b} \frac{1}{B(\mathbf{a})} \prod_{i=1}^{9} p_i^{a_i - 1} \, d\mathbf{p}},
\]
where $\Omega$ is the support of a Dirichlet distribution of order 9. Extracting the multinomial coefficients and normalising each integrand,
\[
m_1^* = \left[\frac{\Gamma\bigl(\sum_{j=1}^{9} n_j + 1\bigr)}{\prod_{j=1}^{9} \Gamma(n_j + 1)}\right]^{1-b}
        \frac{B(\mathbf{a}+\mathbf{n})}{B(\mathbf{a}+b\mathbf{n})}
        \cdot \frac{\displaystyle\int_\Omega \frac{1}{B(\mathbf{a}+\mathbf{n})} \prod_{i=1}^{9} p_i^{a_i + n_i - 1} \, d\mathbf{p}}
                   {\displaystyle\int_\Omega \frac{1}{B(\mathbf{a}+b\mathbf{n})} \prod_{i=1}^{9} p_i^{a_i + bn_i - 1} \, d\mathbf{p}}
      = \left[\frac{\Gamma\bigl(\sum_{j=1}^{9} n_j + 1\bigr)}{\prod_{j=1}^{9} \Gamma(n_j + 1)}\right]^{1-b}
        \frac{B(\mathbf{a}+\mathbf{n})}{B(\mathbf{a}+b\mathbf{n})},
\]
since $\frac{1}{B(\mathbf{a}+\mathbf{n})} \prod_{i=1}^{9} p_i^{a_i + n_i - 1}$ and $\frac{1}{B(\mathbf{a}+b\mathbf{n})} \prod_{i=1}^{9} p_i^{a_i + bn_i - 1}$ are the densities of Dirichlet$(\mathbf{a}+\mathbf{n})$ and Dirichlet$(\mathbf{a}+b\mathbf{n})$ distributions respectively, so the final integral fraction evaluates to 1. Therefore we have
\[
B_{01}^b = \frac{m_0^*(\mathbf{n})}{m_1^*(\mathbf{n})}
= \left[\prod_{j=1}^{9} p_{0j}^{n_j}\right]^{1-b} \frac{B(\mathbf{a}+b\mathbf{n})}{B(\mathbf{a}+\mathbf{n})}.
\]
Figure 2: A plot of the Fractional Bayes Factor for varying values of b.

The code used to produce the image is as follows. We select a uniform prior, setting a = 1 so that any distribution of probabilities is equally likely (the marginals are Beta(1, 8) with mean 1/9, so the expected occurrence of each individual digit is 1/9). Note that we set the minimum value of b to n_min/n.

p <- c(0.301, 0.176, 0.125, 0.097, 0.079, 0.067, 0.058, 0.051, 0.046)
n <- c(31, 32, 29, 20, 18, 18, 21, 13, 10)
a <- rep(1, 9)
log.bayes.factor <- (sum(lgamma(a)) - lgamma(sum(a))) -
    (sum(lgamma(a + n)) - lgamma(sum(a + n))) + sum(n * log(p))
log.bayes.fractional <- rep(0, 948)
bmin <- min(n)/sum(n)
for (i in 0:947) {
    b <- bmin + i/1000
    log.bayes.fractional[i + 1] <- (1 - b) * sum(n * log(p)) +
        (sum(lgamma(a + b * n)) - lgamma(sum(a + b * n))) -
        (sum(lgamma(a + n)) - lgamma(sum(a + n)))
}
bayes.factor <- exp(log.bayes.factor)
bayes.fractional <- exp(log.bayes.fractional)
x <- seq(bmin, bmin + 0.947, 0.001)
plot(x, bayes.fractional, type = 'l', xlim = c(0, 1), ylim = c(0, 1.2),
     xlab = "Training fraction", ylab = "Fractional Bayes Factor",
     main = "Fractional Bayes Factor vs. Training Fraction")
abline(1, 0, col = 'red')
Part (c) The Bayes factor returned by the above code is 1.177189, which is greater than 1, slightly favouring the first model, i.e. that the election counts adhered to Benford's law. However, the fractional Bayes factor is below 1 for any meaningful training fraction, so it favours the second model, that the election counts did not adhere to Benford's law. In this case I would use the regular Bayes factor, since neither model uses an improper prior, so we conclude that there is insufficient evidence to suggest that the election counts do not follow Benford's law. However, we cannot really draw any meaningful conclusions about the validity of the Venezuelan election itself, since election data often does not follow Benford's law (see Deckert, Myagkov and Ordeshook 2010, "The Irrelevance of Benford's Law for Detecting Fraud in Elections").
Question 3

Part (a) The minimal training samples for the Bayes factor contain 2 observations, one 0 and one 1, i.e. they are the sets {0, 1} and {1, 0} (such samples always exist since we assume 0 < r < n). We derive the partial Bayes factors as follows:
\[
B_{12} = \frac{c_1 \binom{n}{r} \int_0^1 \theta^{r-1}(1-\theta)^{n-r-1}\,d\theta}{\binom{n}{r} \theta_0^{r} (1-\theta_0)^{n-r}}
= \frac{c_1\, B(r, n-r)}{\theta_0^{r} (1-\theta_0)^{n-r}},
\qquad
B_{12}^T = \frac{c_1 \int_0^1 d\theta}{\theta_0 (1-\theta_0)} = \frac{c_1}{\theta_0 (1-\theta_0)},
\]
so
\[
B_{12}^{R|T} = \frac{B_{12}}{B_{12}^T} = B(r, n-r)\,\theta_0^{1-r}(1-\theta_0)^{1-n+r},
\]
where
\[
B(r, n-r) = \frac{\Gamma(r)\Gamma(n-r)}{\Gamma(n)}
\]
for either training set {0, 1} or {1, 0}, due to symmetry. The arithmetic and geometric means of these two partial Bayes factors are then the same expression, so the arithmetic and geometric intrinsic Bayes factors both take the value
\[
B_{12}^{AI} = B_{12}^{GI} = B(r, n-r)\,\theta_0^{1-r}(1-\theta_0)^{1-n+r}.
\]
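A quick numerical check of this expression (illustrative only; the values n = 10, r = 3, θ0 = 0.4 are arbitrary choices, not from the assignment): computing B12 and B12^T directly, with the integral evaluated by a midpoint rule so that the constant c1 cancels in the ratio, recovers the closed form.

```python
import math

def beta_fn(x, y):
    # Beta function B(x, y) = Gamma(x) Gamma(y) / Gamma(x + y)
    return math.exp(math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y))

n, r, theta0 = 10, 3, 0.4  # illustrative values with 0 < r < n

# Midpoint-rule approximation of the integral of theta^(r-1) (1-theta)^(n-r-1) on (0, 1)
m = 200_000
integral = sum(((k + 0.5) / m) ** (r - 1) * (1 - (k + 0.5) / m) ** (n - r - 1)
               for k in range(m)) / m

# Partial Bayes factor B12 / B12^T (c1 appears in both, so it cancels)
partial = integral * theta0 * (1 - theta0) / (theta0 ** r * (1 - theta0) ** (n - r))

# Closed form: B(r, n-r) theta0^(1-r) (1-theta0)^(1-n+r)
closed = beta_fn(r, n - r) * theta0 ** (1 - r) * (1 - theta0) ** (1 - n + r)
print(partial, closed)
```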
Part (b) For a given fraction $b$, we derive the fractional Bayes factor as follows:
\[
B_{12}^F = \frac{m_1^*(x)}{m_2^*(x)}
= \frac{\dfrac{c_1 \binom{n}{r} \int_0^1 \theta^{r-1}(1-\theta)^{n-r-1}\,d\theta}{c_1 \binom{n}{r}^{b} \int_0^1 \theta^{br-1}(1-\theta)^{bn-br-1}\,d\theta}}
       {\dfrac{\binom{n}{r} \theta_0^{r}(1-\theta_0)^{n-r}}{\binom{n}{r}^{b} \theta_0^{br}(1-\theta_0)^{b(n-r)}}}
= \frac{B(r, n-r)}{B(br, bn-br)}\,\theta_0^{r(b-1)}(1-\theta_0)^{(n-r)(b-1)}.
\]
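The same style of numerical check works for the fractional Bayes factor (again a sketch with arbitrary illustrative values n = 10, r = 3, b = 0.5, θ0 = 0.4; the binomial coefficients, which cancel between m1* and m2*, are dropped):

```python
import math

def beta_fn(x, y):
    # Beta function B(x, y)
    return math.exp(math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y))

def midpoint_integral(f, m=200_000):
    # Midpoint-rule integral of f over (0, 1)
    return sum(f((k + 0.5) / m) for k in range(m)) / m

n, r, b, theta0 = 10, 3, 0.5, 0.4  # illustrative values

i_full = midpoint_integral(lambda t: t ** (r - 1) * (1 - t) ** (n - r - 1))
i_frac = midpoint_integral(lambda t: t ** (b * r - 1) * (1 - t) ** (b * n - b * r - 1))

m1_star = i_full / i_frac                                    # c1 cancels in the ratio
m2_star = (theta0 ** r * (1 - theta0) ** (n - r)) ** (1 - b)
direct = m1_star / m2_star

# Closed form: B(r, n-r)/B(br, bn-br) * theta0^{r(b-1)} (1-theta0)^{(n-r)(b-1)}
closed = (beta_fn(r, n - r) / beta_fn(b * r, b * (n - r))
          * theta0 ** (r * (b - 1)) * (1 - theta0) ** ((n - r) * (b - 1)))
print(direct, closed)
```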
Part (c) For $\theta_0 \to 0$ and $r = 1$, we have
\[
B_{12}^I = B(1, n-1)\,\theta_0^{0}(1-\theta_0)^{2-n} \to B(1, n-1) = \frac{1}{n-1},
\]
noting that $\theta_0^{1-r} = \theta_0^{0} = 1$ since we have fixed $r = 1$, and
\[
B_{12}^F \to \infty \text{ as } \theta_0 \to 0, \text{ since } b - 1 < 0 \text{ gives } \theta_0^{b-1} \to \infty.
\]
This means the fractional Bayes factor always supports model 1, no matter what the fraction is, whereas the intrinsic Bayes factor supports model 2 for $n > 2$ and is inconclusive for $n = 2$. So the two are not consistent with each other for large $n$. The fractional Bayes factor gives the right answer here: a single non-zero observation ($r = 1$) guarantees that we cannot have $\theta_0 = 0$, so model 1 should be supported, and the intrinsic Bayes factor fails to reach this conclusion.