### Ryan Martin - A new double empirical Bayes approach for high-dimensional problems

Friday 30th September 1:10-1:35pm

Martin Slides

Community: WHOA-PSI 2016

Can you give a reference for the Laplace tails requirement for concentration of the posterior?

written 12 months ago by John Kolassa

See Theorem 2.8 in the paper by Castillo & van der Vaart

http://projecteuclid.org/euclid.aos/1351602537

Their setup isn't regression, but it's similar enough that there should be similar concerns about thin-tailed priors in regression too.

written 12 months ago by Ryan Martin
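To make the tail requirement concrete (generic notation for illustration, not the exact conditions in Castillo & van der Vaart), compare the Laplace and Gaussian prior densities:

$$ g_{\mathrm{Lap}}(\theta) = \tfrac{\lambda}{2}\, e^{-\lambda|\theta|}, \qquad g_{\mathrm{Gau}}(\theta) = \tfrac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\theta^2/(2\sigma^2)}. $$

The Laplace tail $e^{-\lambda|\theta|}$ is heavy enough that the prior does not overwhelm large coefficients, while the Gaussian tail $e^{-\theta^2/(2\sigma^2)}$ decays much faster and shrinks large signals too aggressively; that over-shrinkage is the mechanism behind the concentration failure for thin-tailed priors.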

Ryan,

I think I missed something crucial in your setup. Can you send me your overheads? (Your heuristic explanation says that you don't want to use the original likelihood after centering. But by taking alpha = 1, or very near one, that seems to me exactly what you are doing.) Let me look to see what I've missed.

Larry

written 12 months ago by Larry Brown
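For readers following the exchange: the fractional ($\alpha$-)posterior under discussion has, in generic form,

$$ \pi_n^{\alpha}(\theta) \propto L_n(\theta)^{\alpha}\, \pi(\theta), \qquad 0 < \alpha < 1, $$

where $L_n$ is the likelihood. Taking $\alpha \to 1$ recovers the ordinary posterior, which is presumably the point of Larry's question: with $\alpha$ at or very near one, the data-dependent centering is paired with what is essentially the original likelihood.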


On slide 8, why is the prediction loss not at the optimal minimax rate on $B_n$? What prior would give the actual minimax rate?

Perhaps it's the beta-binomial in Example 2 at the top of page 9 here https://arxiv.org/pdf/1406.7718v3.pdf (?)

written 12 months ago by Joshua Loftus
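In generic form (notation assumed here; see the linked paper for the exact specification), a beta-binomial prior on the model size $s \in \{0, 1, \dots, R\}$ is

$$ \pi(s) = \binom{R}{s} \frac{B(a+s,\, b+R-s)}{B(a,b)}, \qquad s = 0, \dots, R, $$

where $B(\cdot,\cdot)$ is the beta function. The sparsity-inducing choices of $(a, b)$ relevant to the rate results are spelled out in the paper.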

Thanks! That seems to be it.

written 12 months ago by Todd Kuffner

Todd, take a look at Appendix B in the paper that Joshua linked.  The prior that we need to use to get the actual minimax rate is weird: it puts a small portion of prior mass on a model whose corresponding X_S matrix has the same rank as the full X matrix.  The problem is that we CAN'T prove the other kinds of concentration rate results (e.g., posterior dimension) with this weird prior.  Since I'd like to have one formulation that works well in all aspects, I'm willing to give up a little bit in terms of the rate in the prediction case.  I hope that helps!

-Ryan

written 12 months ago by Ryan Martin

Ryan's slides are now posted above!