# Ryan Martin - A new double empirical Bayes approach for high-dimensional problems

### 2 Answers

On slide 8, why is the prediction loss not at the optimal minimax rate on $B_n$? What prior would give the actual minimax rate?

Perhaps it's the beta-binomial in Example 2 at the top of page 9 here https://arxiv.org/pdf/1406.7718v3.pdf (?)

Thanks! That seems to be it.

Todd, take a look at Appendix B in the paper that Joshua linked. The prior that we need to use to get the actual minimax rate is weird: it puts a small portion of prior mass on a model whose corresponding X_S matrix has the same rank as the full X matrix. The problem is that we CAN'T prove the other kinds of concentration rate results (e.g., posterior dimension) with this weird prior. Since I'd like to have one formulation that works well in all respects, I'm willing to give up a little bit in terms of the rate in the prediction case. I hope that helps!
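For readers following along, here is a sketch of the beta-binomial dimension prior being referenced, using the standard construction; the exact hyperparameter choices in Example 2 of the linked paper may differ, so treat the $(a, b)$ below as placeholders:

```latex
% Standard beta-binomial construction (hyperparameters (a, b) are
% assumptions; see Example 2 of the linked paper for the exact choices):
% draw theta ~ Beta(a, b), then s | theta ~ Binomial(p, theta).
% Marginalizing out theta gives the prior on the model dimension s:
\pi(s) \;=\; \binom{p}{s}\,\frac{B(a+s,\; b+p-s)}{B(a,\,b)},
\qquad s = 0, 1, \ldots, p,
% where B(\cdot,\cdot) denotes the beta function.
```

With $b$ growing polynomially in $p$, this marginal prior puts rapidly decaying mass on larger model sizes, which is what drives the sparsity-adaptive rates.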

-Ryan

Can you give a reference for the Laplace tails requirement for concentration of the posterior?

See Theorem 2.8 in the paper by Castillo & van der Vaart

http://projecteuclid.org/euclid.aos/1351602537

Their setup isn't regression, but it's similar enough that there should be analogous concerns about thin-tailed priors in regression too.
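Roughly, the tail condition at issue (my paraphrase, not the precise statement of Theorem 2.8 — check the linked paper) is that the slab density in the spike-and-slab prior should have tails no lighter than Laplace:

```latex
% Hedged paraphrase of the tail requirement: the slab density g should
% decay at most exponentially, as for the Laplace density
g(x) \;=\; \frac{\lambda}{2}\, e^{-\lambda |x|},
% whereas a Gaussian slab, g(x) \propto e^{-x^2/(2\tau^2)}, penalizes
% large signal values too heavily, over-shrinking them and spoiling
% the minimax contraction rate.
```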

Ryan,

I think I missed something crucial in your setup. Can you send me your overheads? (Your heuristic explanation says that you don't want to use the original likelihood after centering. But by taking alpha = 1, or very near one, it seems to me that's exactly what you are doing.)

Let me look to see what I've missed.

Larry