Randomise doesn't give you the critical cluster size threshold by default, but it's easy to obtain. If you add the -N option to randomise, for each corrp image you'll get a .txt file that gives the permutation distribution of the maximum statistic (whether that's max voxel T, max cluster size, max TFCE score, whatever). If you find the 95%ile of that distribution, that's the 5% critical threshold based on permutation. Loading that in Matlab and finding this percentile is easy:
MaxC  = load('permdist.txt');          % permutation distribution of the max statistic
Nperm = length(MaxC);
sMaxC = sort(MaxC);                    % sort ascending
Level = 0.05;
CritC = sMaxC(ceil(Nperm*(1-Level)))   % 95th percentile = 5% FWE critical value
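(As an aside, the same distribution gives you the FWE-corrected P-value for any observed cluster: it's just the proportion of the null maxima at least as large. A two-line sketch, with a made-up observed cluster size:

ObsC = 87;                 % hypothetical observed cluster size, in voxels
Pfwe = mean(MaxC >= ObsC)  % proportion of permutation maxima at least as large

Strictly, the unpermuted labelling should be counted among those maxima for this to be exact.)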
But you're right, it's something that I always report in papers. I'll try to get that printed out with, say, the -v flag, in a future version.
My problem with 3dClustSim, AlphaSim, and any Monte Carlo based cluster inference tool is that you have to believe in the Gaussian autocorrelation *and* stationarity that the simulations are based on [2]. VBM data are widely acknowledged to exhibit nonstationary smoothness, but whenever I've looked at a FWHM image from FMRI data I see hints of structure there too. Randomise, or any permutation-based procedure, will automatically account for any nonstationarity in the data, and is not vulnerable to errors in the estimated FWHM smoothness (even if the data were stationary).
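To make that concrete, here's a minimal sketch of the max-statistic permutation idea for a one-sample design, using sign-flipping on toy data (all sizes and names here are invented; randomise does this properly, and for cluster inference you'd replace max voxel T with the largest suprathreshold cluster size):

rng(0);
Nsub = 12; Nvox = 1000;
Y = randn(Nsub, Nvox);                                   % toy null data, subjects x voxels
Y(:,1:500) = conv2(1, ones(1,5)/5, Y(:,1:500), 'same');  % smooth half the voxels: nonstationary
Nperm = 1000;
MaxT = zeros(Nperm,1);
for p = 1:Nperm
  flips = 2*(rand(Nsub,1) > 0.5) - 1;                    % random sign-flips, valid under symmetric errors
  Yp = bsxfun(@times, Y, flips);
  Tp = mean(Yp) ./ (std(Yp)/sqrt(Nsub));                 % voxelwise one-sample t
  MaxT(p) = max(Tp);                                     % max over space, recomputed from the data itself
end
sMaxT = sort(MaxT);
CritT = sMaxT(ceil(Nperm*0.95))                          % 5% FWE threshold, no smoothness model needed

Because the maximum is recomputed from the relabelled data on every pass, whatever spatial covariance the data actually have, stationary or not, is baked into the threshold.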
Permutation is "exact", in that it guaranteed to control false positive risk with very weak assumptions, but it's not perfect: Parametric models can provide better power *when* all the assumptions are satisfied [1]. But if lots and lots of people find better results with the Monte Carlo method than with permutation, it might be that the Monte Carlo method is inflating significances. The traditional way of comparing methods, with Monte Carlo simulations of homogeneous smooth Gaussian noise, won't reveal this (as the parametric assumptions *define* the Monte Carlo method, and permutation can't out-perform that). A large body of null data with real (i.e. messy) spatial structured noise would be needed to tested to see if there is a substantial statistical inefficiency in permutation cluster size inference.
Hope this helps!
-Tom
[1] However, in all the standard settings, e.g. t-tests, permutation tests have an asymptotic relative efficiency of 1, i.e. they will be as powerful as parametric tests as larger and larger sample sizes are considered.
[2] Random Field Theory makes these assumptions too, and additionally relies on approximations in the P-value formulas; these are just more reasons not to use RFT, though RFT does at least have a way of handling nonstationarity.