rare tate-models error: Singular loci split vs non-split #4593

Open
benlorenz opened this issue Feb 14, 2025 · 6 comments
Labels: bug, CI, topic: FTheoryTools

Comments

@benlorenz
Member

benlorenz commented Feb 14, 2025

This happened in our CI here https://github.com/oscar-system/Oscar.jl/actions/runs/13306594007/job/37158890795?pr=4588#step:11:10734 (Julia 1.11, Ubuntu) and disappeared after a re-run:

      From worker 5:	Singular loci of global Tate models over generic base space: Test Failed at /home/oscarci-tester/oscar-runners/runner-11/_work/Oscar.jl/Oscar.jl/experimental/FTheoryTools/test/tate_models.jl:249
      From worker 5:	  Expression: [k[2:3] for k = singular_loci(t_ivstar_ns)] == [((0, 0, 1), "I_1"), ((3, 4, 8), "Non-split IV^*")]
      From worker 5:	   Evaluated: [((0, 0, 1), "I_1"), ((3, 4, 8), "Split IV^*")] == [((0, 0, 1), "I_1"), ((3, 4, 8), "Non-split IV^*")]

This is the first time I have seen this error; unfortunately, I don't know how to reproduce it.

Is there any randomness involved?

PS: This was on b2d1058, which is basically master except for some reordered exports in GModule.

benlorenz added the bug, topic: FTheoryTools, and CI labels on Feb 14, 2025
@HereAround
Member

HereAround commented Feb 14, 2025

Thank you for letting me know, @benlorenz. Yes, there is randomness involved, due to a lack of alternatives. The failure you are reporting reflects the design chosen by @apturner.

I am not yet sure how this could be improved. Let me elaborate a bit, then maybe we can work something out together.

In theory, we must complete a hard (likely impossible, but certainly impractical) Gröbner basis computation. To circumvent this, we choose random values for all but two coordinates, run the Gröbner basis computation, and process the result. In our case, this is a classification task (split vs. non-split). We repeat this four more times, so that in total we have 5 random choices of values for 5 different collections of variables, and 5 classification results based on 5 different, simple Gröbner basis computations. Should all 5 computations lead to the same classification, we are confident in the computed result. If not, we default to the weaker statement "Split". This is what must have happened in the run you are quoting.
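
For illustration, here is a minimal sketch (in Julia) of the repeated-trial scheme just described. The helper `classify_once`, which stands in for one random specialization followed by a Gröbner basis computation, is hypothetical and not the actual OSCAR implementation:

    # Hypothetical sketch of the repeated-trial scheme described above.
    # `classify_once(I)` stands in for one random specialization of all but
    # two coordinates followed by a Gröbner basis computation; it is assumed
    # to return either :split or :nonsplit.
    function classify_split_vs_nonsplit(I; trials::Int = 5)
        results = [classify_once(I) for _ in 1:trials]
        # All trials agree: trust the common classification.
        allequal(results) && return first(results)
        # Otherwise fall back to the default answer "Split".
        return :split
    end

(`allequal` requires Julia >= 1.8.)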

How often does this happen? This function has been in OSCAR since the ICMS in Durham. I have not seen this error even once, and you have seen it once. Based on this evidence, I estimate it occurs roughly once every 3 months. Given our simplistic approach, that is good, no?

Let me ping @apturner, who understands the algorithm much better than I do. Maybe he can say more.

With all that being said: I am not sure what we could do to improve the situation. Ideas/thoughts?

Given that this error seems to happen rarely, maybe we can just ignore it (for now)? (Not good...)

@thofma
Collaborator

thofma commented Feb 14, 2025

Usually, our functions are either correct or probably correct. Accordingly, there are two cases:

  • The function is documented to be of Monte-Carlo type, which means that it may return a wrong result. In this case, don't test that the function always returns the correct result, because that is arguably wrong. Maybe just test that one of the two possible values occurs (see the sketch after this list). (Testing that it is always correct just adds noise to the tests/CI.)
  • The function is supposed to be correct, in which case this is a bug and should be fixed.
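
For the test quoted at the top of this issue, such a Monte-Carlo-tolerant test might look as follows. This is a sketch only, assuming `t_ivstar_ns` is set up as in experimental/FTheoryTools/test/tate_models.jl; the tuples are copied from the failing assertion above:

    using Test

    # Accept either of the two allowable classifications instead of
    # insisting on the probabilistically correct one.
    @test [k[2:3] for k in singular_loci(t_ivstar_ns)] in [
        [((0, 0, 1), "I_1"), ((3, 4, 8), "Non-split IV^*")],
        [((0, 0, 1), "I_1"), ((3, 4, 8), "Split IV^*")],
    ]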

@apturner
Collaborator

Hi all,

Just to clarify some of Martin's comments: we default to "split" because this is the stronger statement, corresponding to the factoring of a polynomial, which does not generically occur. The single-run algorithm tends to skew toward returning non-split, so essentially, if any of the five attempts identifies the singularity as "split", then we go with that. We could raise this to a higher threshold, for example: only if at least 3 of the 5 runs identify the singularity as split do we label it as such.
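
A sketch of that thresholded variant, reusing the hypothetical `classify_once` from above:

    # Hypothetical sketch of a majority-threshold variant: label the
    # singularity :split only if at least `threshold` of `trials` runs do.
    function classify_with_threshold(I; trials::Int = 5, threshold::Int = 3)
        split_votes = count(_ -> classify_once(I) == :split, 1:trials)
        return split_votes >= threshold ? :split : :nonsplit
    end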

But that is irrelevant to Tommy's point, which is that this should be documented as a randomized function. In that case, Martin, we should probably periodically run tests ourselves to verify that it usually returns correct results. Testing that the function returns one of the two allowable results makes sure we didn't make a typo (which is valuable), but does nothing to test the quality of our algorithm.

@thofma
Collaborator

thofma commented Feb 14, 2025

Thanks for the explanation. Testing Monte-Carlo algorithms is always annoying. Not sure if someone else has suggestions on how to do this properly? Maybe @fingolfin or @fieker?

@benlorenz
Member Author

A similar error appeared again yesterday (but with a different example) in #4597:

      From worker 6:	Singular loci of global Tate models over generic base space: Test Failed at /home/oscarci-tester/oscar-runners/runner-14/_work/Oscar.jl/Oscar.jl/experimental/FTheoryTools/test/tate_models.jl:237
      From worker 6:	  Expression: [k[2:3] for k = singular_loci(t_i6_ns)] == [((0, 0, 1), "I_1"), ((0, 0, 6), "Non-split I_6")]
      From worker 6:	   Evaluated: [((0, 0, 1), "I_1"), ((0, 0, 6), "Split I_6")] == [((0, 0, 1), "I_1"), ((0, 0, 6), "Non-split I_6")]

https://github.com/oscar-system/Oscar.jl/actions/runs/13338453645/job/37258639364?pr=4597#step:11:10722

@thofma
Collaborator

thofma commented Feb 19, 2025

Triage suggests increasing the threshold and/or the number of trials.
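
As a back-of-the-envelope illustration of why that helps: if a single trial independently misreports "split" with probability p, then the chance that at least `threshold` of `trials` runs do so is a binomial tail. The true value of p is unknown; 0.05 below is purely illustrative (pure Julia, no packages needed):

    # Probability that at least `threshold` of `trials` independent runs
    # misclassify, given a per-trial error probability `p`.
    binom_tail(trials, threshold, p) =
        sum(binomial(trials, k) * p^k * (1 - p)^(trials - k) for k in threshold:trials)

    binom_tail(5, 1, 0.05)  # "any split vote wins": ≈ 0.226
    binom_tail(5, 3, 0.05)  # 3-of-5 majority:       ≈ 0.00116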
