For my problem I am only interested in a few eigenstates (those with the smallest eigenvalues) of a sparse real symmetric matrix. As far as I can see, Arpack uses an iterative method and should be much faster than the full diagonalization from the LinearAlgebra package. Why is it much slower in my example?
```julia
using LinearAlgebra, SparseArrays, Arpack

A = sprand(5000, 4995, 0.01)
H = sparse(Hermitian(A*A'))   # 5000x5000, so H has an (at least) 5-dimensional nullspace

@time E, v = eigen(Matrix(H))
@time E, v = eigs(H, nev=10, which=:SM)
```

```
12.059152 seconds (27 allocations: 764.733 MiB, 0.72% gc time)
37.628222 seconds (680 allocations: 1.424 GiB, 0.47% gc time)
```
When computing the smallest eigenvalues with `eigs`, it's necessary to compute `(H - λ*I)\x` for some shift `λ` at each iteration of the algorithm. In our implementation of `eigs`, this results in a sparse factorization of `H - λ*I` for each iteration, and that is costly. Furthermore, computing the smallest eigenvalues sometimes requires many more iterations than computing the largest ones, but I'd suspect the cost of the sparse factorizations is what dominates here. You can get an idea of that cost by computing with `which=:LM` instead of `:SM`, since the former only requires matrix-vector multiplications.
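To make that comparison concrete, here is a minimal sketch of the experiment suggested above, reusing `H` from the question's snippet. The `which=:LM` call is exactly what the answer proposes; the `sigma` call is my own assumption on top of it, using Arpack's shift-invert mode, which factorizes `H - sigma*I` only once (a small negative shift is used because `H` itself is singular):

```julia
# Baseline: largest-magnitude eigenvalues need only matrix-vector products,
# so this timing excludes any sparse factorization cost.
@time E_lm, v_lm = eigs(H, nev=10, which=:LM)

# Assumed alternative for the smallest eigenvalues: shift-invert about a small
# negative shift. eigs factorizes H - sigma*I once and then finds the
# eigenvalues nearest sigma; sigma < 0 keeps the shifted matrix nonsingular
# even though H has a nullspace.
@time E_si, v_si = eigs(H, nev=10, sigma=-1e-3)
```

If the `:LM` run is fast, that confirms the factorizations, rather than the Lanczos iterations themselves, are the bottleneck for `:SM`.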