pairwise comparison in adonis gives the same p-value #473
8 comments · 5 replies
-
Hi, have you had any luck solving your problem?
-
We haven't found anything reproducible here, and therefore nothing has happened (nor have we seen any sign of a problem). What is obvious in the original post is that there are only 50 permutations, and none of these was better than the observed statistic in any of these models. The smallest possible P-value is 1/(nperm+1), which for nperm = 50 is 1/51 = 0.01960784. With a higher number of permutations you may get different P-values, so try with a larger number.
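For example (a minimal sketch using vegan's built-in dune data as a stand-in, since the original data were not shared), the number of permutations is set with the permutations argument; adonis2() is the current interface to adonis:

```r
library(vegan)
data(dune)      # example community matrix shipped with vegan
data(dune.env)  # matching sample metadata with factor variables

# Request far more permutations than the default 999
adonis2(dune ~ Management, data = dune.env, permutations = 9999)
```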
-
Hi Jarioksa, thanks for your comment. I have been running it with 999 permutations and have just noticed that I get the following message when I run it: 'nperm' >= set of all permutations: complete enumeration.
-
@klv501 Those aren't warnings, they're messages from the software telling you that i) the number of possible permutations of the data is fewer than the requested 999 permutations, and ii) that, because of this, the complete set of possible permutations is being enumerated instead. It may well be that a permutation test for a single pair of levels has lower power than the omnibus test over all levels, because there are fewer permutations of the smaller subset of data involved in the pair being compared. In a sense, this is one of the negatives of using permutation-based tests.
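As a rough illustration (plain arithmetic, not tied to the original data, which was not posted): with only a handful of samples left after subsetting to one pair of groups, the complete set of possible row permutations is small, which both triggers the complete-enumeration message and limits how small the P-value can get:

```r
# Number of possible orderings of n samples in a pairwise subset
factorial(6)   # 720 -- fewer than the 999 permutations requested
factorial(8)   # 40320
factorial(10)  # 3628800
```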
-
@gavinsimpson Thanks for your reply. I think that my small sample size from subsetting is definitely the issue. Sorry, I'm very new to this type of multivariate analysis - would you be able to recommend an exact test for this, as you mention? I don't believe that a MANOVA would be appropriate, as I have many OTUs (dependent variables), many with relative abundance values of 0.
-
@jarioksa Thank you. I actually tried many values for nperm (I think up to 1000 or more), and the p-values were always the same. The reason is (I think) that my groups are really homogeneous and every permutation yielded a worse score than the original ordering.
-
It is not rare to get the lowest possible P-value in permutation tests. When people use 999 permutations and get P = 0.001, they easily accept this. If the number of permutations + 1 is a less round number, they are alerted by the strange value: 1000 permutations, for instance, has a lower limit of 1/1001 = 0.000999001, which seems to beg for an explanation. With small data sets – or reduced subsets of data – you are bound to get such odd-looking probabilities at the lower limit more often.
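The lower limit is just 1/(nperm + 1), which a few lines in R make obvious:

```r
# Smallest attainable P-value for a given number of permutations
1 / (50 + 1)    # 0.01960784 -- the value seen in the original post
1 / (999 + 1)   # 0.001
1 / (1000 + 1)  # 0.000999001
```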
-
Another case with abundant ties in P-values is with factor predictors. We permute observations, but if the predictor variable is a factor, many permutations give identical allocations to the factor classes. In that case the number of distinct allocations can be much lower than the number of permutations. This particularly concerns models with a single factor predictor – with several factors you already have more distinct combinations of factor levels.
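A little combinatorics (not tied to any particular data set) shows how severe this collapse can be with one two-level factor:

```r
# 10 observations split 5/5 between two levels of a single factor
factorial(10)  # 3628800 possible row orderings (what gets permuted)
choose(10, 5)  # 252 distinct allocations of observations to the two levels
```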
-
Hi,
I'm using PERMANOVA tests to evaluate the significance of a given factor in my dataset. Everything runs, but I get surprising results, and I'm wondering whether they are correct or whether there could be an issue with either the package or the data itself.
Using `adonis`, I was able to test that my factor ("color") was significant. However, I want more detail about the significance and to compare each pair of levels: for example, I want to test whether the samples in the "Red" color group are statistically different from the samples in the "Purple" color group. To do that, I extract from my data all samples corresponding to two given colors and run `adonis` on the subset:
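Roughly along these lines (a sketch; `otu_table` and `meta` are hypothetical stand-ins for my OTU abundance matrix and sample metadata, which are not shown here):

```r
library(vegan)

# Hypothetical objects: otu_table = OTU abundances (samples in rows),
# meta = sample metadata with a factor column 'color'
keep     <- meta$color %in% c("Red", "Purple")
meta_sub <- droplevels(meta[keep, , drop = FALSE])
adonis(otu_table[keep, ] ~ color, data = meta_sub)
```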
My problem is that for all 3 comparisons I get exactly the same p-value, 0.01961 (everything else is different: R2, F.model, ...). It looks very suspicious to me since I'm comparing different sample groups. Do you have an idea what could cause this issue?
I also tried the `pairwise.adonis()` function to do these pairwise comparisons, but I got the same problem (and exactly the same p-value). Thank you for your help.