
Added more matrix benchmarks #34

Merged: 7 commits into sympy:master on Apr 5, 2017

Conversation

@siefkenj (Contributor) commented Apr 3, 2017

This adds several more benchmarks of matrix algorithms, including rank, rref, and det, as well as matrix multiplication and matrix addition.
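
For context, asv discovers and times methods whose names start with time_. The sketch below shows the general shape of such a benchmark class; the class and method names mirror the asv output later in this thread, but the matrix sizes and entries (and the det method keyword) are illustrative rather than the exact code in this PR:

```python
from sympy import Matrix, symbols


class TimeMatrixOperations:
    # asv runs setup() before timing each time_* method
    def setup(self):
        x, y = symbols("x y")
        self.A = Matrix(4, 4, lambda i, j: (i + j) % 5 + 1)
        self.B = self.A.T
        # a couple of symbolic entries force the general (non-hardcoded) code paths
        self.S = self.A.copy()
        self.S[0, 0] = x
        self.S[2, 3] = y

    def time_dense_add(self):
        self.A + self.B

    def time_dense_multiply(self):
        self.A * self.B

    def time_det(self):
        self.S.det()

    def time_det_berkowitz(self):
        self.S.det(method="berkowitz")

    def time_rank(self):
        self.S.rank()

    def time_rref(self):
        self.S.rref()
```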

@bjodah (Member) commented Apr 3, 2017

There are quite a few methods named test_; did you mean to write time_ there?

@siefkenj (Contributor, Author) commented Apr 3, 2017

I sure did. So used to pyunit!

@bjodah (Member) commented Apr 3, 2017

Good.

Running this against SymPy master I get:

$ asv run -b MatrixOperations c807dfe..master
· Cloning project.
· Fetching recent changes.
· Creating environments..
· Discovering benchmarks
·· Uninstalling from virtualenv-py3.5-fastcache-mpmath.
·· Building for virtualenv-py3.5-fastcache-mpmath
·· Installing into virtualenv-py3.5-fastcache-mpmath..
· Running 14 total benchmarks (2 commits * 1 environments * 7 benchmarks)
[  0.00%] · For sympy commit hash 980f3e23:
[  0.00%] ·· Building for virtualenv-py3.5-fastcache-mpmath...
[  0.00%] ·· Benchmarking virtualenv-py3.5-fastcache-mpmath
[  7.14%] ··· Running solve.TimeMatrixOperations.time_dense_add                                                                          163.90μs;...
[ 14.29%] ··· Running solve.TimeMatrixOperations.time_dense_multiply                                                                     416.36μs;...
[ 21.43%] ··· Running solve.TimeMatrixOperations.time_det                                                                                  2/9 failed
[ 28.57%] ··· Running solve.TimeMatrixOperations.time_det_bareiss                                                                          2/9 failed
[ 35.71%] ··· Running solve.TimeMatrixOperations.time_det_berkowitz                                                                      105.25μs;...
[ 42.86%] ··· Running solve.TimeMatrixOperations.time_rank                                                                                 1/9 failed
[ 50.00%] ··· Running solve.TimeMatrixOperations.time_rref                                                                                 1/9 failed
[ 50.00%] · For sympy commit hash 1635382c:
[ 50.00%] ·· Building for virtualenv-py3.5-fastcache-mpmath...
[ 50.00%] ·· Benchmarking virtualenv-py3.5-fastcache-mpmath
[ 57.14%] ··· Running solve.TimeMatrixOperations.time_dense_add                                                                          165.28μs;...
[ 64.29%] ··· Running solve.TimeMatrixOperations.time_dense_multiply                                                                     492.55μs;...
[ 71.43%] ··· Running solve.TimeMatrixOperations.time_det                                                                                  2/9 failed
[ 78.57%] ··· Running solve.TimeMatrixOperations.time_det_bareiss                                                                          2/9 failed
[ 85.71%] ··· Running solve.TimeMatrixOperations.time_det_berkowitz                                                                      105.62μs;...
[ 92.86%] ··· Running solve.TimeMatrixOperations.time_rank                                                                                 1/9 failed
[100.00%] ··· Running solve.TimeMatrixOperations.time_rref                                                                                 1/9 failed

Are those failures expected?


# every test will be based off a submatrix of this matrix
big_mat = Matrix([[3, 8, 10, 5, 10, 7, 10, 10, 8, 6], [10, 9, 3, 7, 10, 1, 4, 2, 8, 1], [5, 9, 9, 0, 2, 10, 5, 9, 3, 9], [1, 8, 0, 7, 8, 8, 0, 4, 1, 10], [6, 5, 3, 0, 3, 4, 6, 1, 10, 5], [7, 10, 8, 9, 10, 7, 2, 8, 3, 2], [10, 8, 5, 10, 3, 5, 10, 4, 2, 3], [8, 4, 10, 9, 1, 9, 7, 4, 8, 6], [6, 2, 4, 1, 1, 0, 1, 3, 1, 9], [9, 2, 6, 10, 9, 4, 10, 2, 1, 8]])
symbol_locations = [(2, 2), (1, 9), (0, 0), (0, 7), (9, 1), (6, 9), (8, 9), (4, 0), (3, 8), (3, 2), (2, 8), (1, 8), (5, 3), (5, 9), (6, 4), (5, 5), (7, 9), (5, 1), (1, 0), (3, 3), (7, 1), (2, 5), (1, 5), (4, 4), (4, 2), (7, 3), (3, 4), (6, 6), (9, 5), (1, 6), (9, 0), (3, 1), (0, 4), (8, 3), (2, 3), (3, 9), (9, 6), (4, 8), (9, 3), (8, 0), (6, 7), (5, 7), (8, 6), (3, 6), (4, 5), (1, 2), (9, 8), (7, 4), (8, 8), (6, 1), (0, 3), (4, 7), (7, 0), (9, 7), (5, 4), (7, 6), (2, 6), (3, 7), (3, 5), (1, 4), (5, 0), (4, 9), (7, 8), (6, 8), (2, 1), (9, 2), (3, 0), (7, 7), (2, 7), (2, 0), (8, 1), (7, 5), (4, 3), (1, 3), (9, 9), (0, 6), (4, 1), (5, 8), (8, 4), (0, 8), (2, 4), (9, 4), (7, 2), (1, 7), (6, 3), (6, 5), (5, 2), (6, 0), (0, 1), (8, 2), (2, 9), (8, 5), (0, 2), (0, 9), (8, 7), (4, 6), (0, 5), (1, 1), (6, 2), (5, 6)]
A project member commented on the lines above:

could you split up these two long lines?

@siefkenj (Contributor, Author) commented Apr 3, 2017

It is expected that rref and det_bareiss will take a very long time on the 10x10 and 6x6 matrices with symbols. Row-reducing a 10x10 symbolic matrix can take over 10 minutes on my machine, so it will be very noticeable when those benchmarks drop below 60 s!

@bjodah (Member) commented Apr 3, 2017

I see. My only worry is that the default timeout is 60 seconds, which means that merging this PR as-is would drastically increase the time spent benchmarking each commit.

We have had a loose goal of trying to stay under 1 second per benchmark, see:
#8

But we've also discussed having a set of slow benchmarks that we wouldn't run on each commit, but maybe every 500th commit, and then use asv's bisecting functionality to find regressions.

@pbrady (Member) commented Apr 3, 2017 via email

@siefkenj (Contributor, Author) commented Apr 3, 2017

I like the idea of a slow benchmark suite that is run less frequently. I'm working on some optimized code to bring some of these computations under 1 s. The trouble is that things slow down very quickly in the matrix library once Symbols are involved, and some of the algorithms for smaller matrices are hardcoded, so a benchmark of those won't really be exercising the general algorithm. I could remove the size-10 matrix and change the size-6 one to size 5.

For things like determinants, size ~4 is where the hardcoded path and the general algorithm are about equally fast, which is why I put a size-6 matrix in the benchmarks.

Could the benchmarks be set to time out after 2 seconds? That would still collect a lot of good data, and when the algorithms improve enough to drop below that threshold, they would start showing up.
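
For reference, asv supports a per-class (or per-method) timeout attribute given in seconds; a benchmark that exceeds it is reported as failed rather than holding up the rest of the run. A minimal sketch of how a 2-second limit could be expressed (the class name and matrix below are illustrative, not code from this PR):

```python
from sympy import Matrix, Symbol


class TimeMatrixOperationsSlow:
    # asv reads this attribute: per-benchmark timeout in seconds
    # (the default, as mentioned above, is 60 s)
    timeout = 2

    def setup(self):
        x = Symbol("x")
        # a symbolic diagonal keeps rref on the general (slow) code path
        self.M = Matrix(6, 6, lambda i, j: x if i == j else (i + j) % 7)

    def time_rref(self):
        self.M.rref()
```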

@bjodah (Member) commented Apr 3, 2017

@siefkenj We can absolutely change the timeout to 2 "travis-seconds" (I think they use Xeon processors on Google Compute Engine). Looking at the output from the most recently merged PR, we have these timings on Travis:

· Running 25 total benchmarks (1 commits * 1 environments * 25 benchmarks)
[  0.00%] · For sympy commit hash 025e63ae:
[  0.00%] ·· Building for py2.7-fastcache-mpmath....
[  0.00%] ·· Benchmarking py2.7-fastcache-mpmath
[  4.00%] ··· Running dsolve.TimeDsolve01.time_dsolve                     1.21s
[  8.00%] ··· Running integrate.TimeIntegration01.time_doit            325.51ms
[ 12.00%] ··· Running integrate.TimeIntegration01.time_doit_meijerg     95.05ms
[ 16.00%] ··· Running ...onOperations.peakmem_jacobian_wrt_functions        37M
[ 20.00%] ··· Running ...sionOperations.peakmem_jacobian_wrt_symbols        37M
[ 24.00%] ··· Running ....TimeLargeExpressionOperations.peakmem_subs        37M
[ 28.00%] ··· Running ...imeLargeExpressionOperations.time_count_ops    48.37ms
[ 32.00%] ··· Running ...xprs.TimeLargeExpressionOperations.time_cse    60.31ms
[ 36.00%] ··· Running ...LargeExpressionOperations.time_free_symbols    10.07ms
[ 40.00%] ··· Running ...ssionOperations.time_jacobian_wrt_functions   249.31ms
[ 44.00%] ··· Running ...ressionOperations.time_jacobian_wrt_symbols    55.38ms
[ 48.00%] ··· Running ...erations.time_manual_jacobian_wrt_functions   126.68ms
[ 52.00%] ··· Running ...prs.TimeLargeExpressionOperations.time_subs   421.27ms
[ 56.00%] ··· Running logic.LogicSuite.time_dpll                          5.24s
[ 60.00%] ··· Running logic.LogicSuite.time_dpll2                      578.52ms
[ 64.00%] ··· Running logic.LogicSuite.time_load_file                    9.75ms
[ 68.00%] ··· Running ...gDamper.time_kanesmethod_mass_spring_damper     4.91ms
[ 72.00%] ··· Running ...per.time_lagrangesmethod_mass_spring_damper     3.24ms
[ 76.00%] ··· Running refine.TimeRefine01.time_refine                    11.26s
[ 80.00%] ··· Running solve.TimeMatrixSolve.time_solve                       ok
[ 80.00%] ····
               ======== ==========
                param1
               -------- ----------
                  GE     139.55ms
                  LU     141.16ms
                 ADJ     403.60ms
               ======== ==========
[ 84.00%] ··· Running solve.TimeMatrixSolve2.time_cholesky_solve         1.66ms
[ 88.00%] ··· Running solve.TimeMatrixSolve2.time_lusolve              615.77μs
[ 92.00%] ··· Running solve.TimeSolve01.time_solve                     899.59ms
[ 96.00%] ··· Running solve.TimeSolve01.time_solve_nocheck             873.90ms
[100.00%] ··· Running sum.TimeSum.time_doit                             23.78ms

In that case, would you mind creating a new folder, e.g. slow_benchmarks/ next to benchmarks/, and putting a version of your class with the slow benchmarks there (and also moving refine.py)? We should probably test it in .travis.yml too.

@pbrady thanks for pointing out SYMPY_CACHE_SIZE; after increasing that variable, only 1/9 fail (>60 s) on my workstation.
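
For reference, a minimal sketch of raising that cache size, assuming SymPy reads SYMPY_CACHE_SIZE once when it is first imported (the value 10000 below is purely illustrative; in an asv run you would typically export the variable in the shell before invoking asv):

```python
import os

# Must be set before sympy is imported in this process, since the cache
# size is read when sympy's caching module is first loaded.
os.environ["SYMPY_CACHE_SIZE"] = "10000"  # illustrative value, larger than the default

import sympy  # imported after the environment tweak on purpose
```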

@siefkenj (Contributor, Author) commented Apr 5, 2017

@bjodah Is the Travis error related to my commit or is it something else?

@bjodah (Member) commented Apr 5, 2017

Forgot to say: we need a tests folder under slow_benchmarks, an __init__.py in both of those folders, and then test_refine.py should be moved to the new tests folder.

@bjodah (Member) commented Apr 5, 2017

@bjodah (Member) commented Apr 5, 2017

Great, thanks!

@bjodah merged commit e5a36de into sympy:master on Apr 5, 2017