
Why won't AbstractMatrix(::SubOperator) return a BandedBlockBandedMatrix? #81

Open · Luapulu opened this issue Sep 9, 2021 · 10 comments

@Luapulu

Luapulu commented Sep 9, 2021

I see that the line to return such a matrix is commented out below. Why?

# isbandedblockbanded(P) && return BandedBlockBandedMatrix

I'd like to preallocate large operator matrices, and currently RaggedMatrix seems to be the default type. The issue is that RaggedMatrix uses too much memory. For now I'm constructing a BandedBlockBandedMatrix manually, but that's not as nice as simply calling AbstractMatrix(view(SomeOperator, FiniteRange, 1:N)), since I don't want to have to care about the exact memory layout myself.
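
For reference, here's roughly what I'm doing manually now. This is just a sketch: the Laplacian on a 2D Chebyshev space and N = 10 are only for illustration, and I may have the Block-view syntax slightly off.

julia> using ApproxFun, BlockBandedMatrices

julia> S = Chebyshev()^2; Δ = Laplacian(S);  # example operator on a tensor-product space

julia> N = 10;  # illustrative truncation

julia> A = BandedBlockBandedMatrix(view(Δ, Block.(1:N), Block.(1:N)));  # explicit layout instead of the default RaggedMatrix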

@dlfivefifty
Member

TBH I don't remember. I'm redesigning a lot of this in ClassicalOrthogonalPolynomials.jl. Maybe what you are trying to do works there already?

@Luapulu
Author

Luapulu commented Sep 10, 2021

I need to differentiate and integrate on a 2D domain in Chebyshev tensor product basis. I believe ClassicalOrthogonalPolynomials.jl can't do that yet?

@dlfivefifty
Member

You're right, it's not built in, but perhaps you can build the operators you want via:

julia> using ClassicalOrthogonalPolynomials, LazyBandedMatrices

julia> T,U = ChebyshevT(), ChebyshevU();

julia> x = axes(T,1); D = Derivative(x);

julia> D_x = KronTrav(U\D*T, U\T)
ℵ₀×ℵ₀-blocked ℵ₀×ℵ₀ KronTrav{Float64, 2, …}:
[block-banded matrix entries truncated]

julia> D_y = KronTrav(U\T, U\D*T)
ℵ₀×ℵ₀-blocked ℵ₀×ℵ₀ KronTrav{Float64, 2, …}:
[block-banded matrix entries truncated]

@Luapulu
Author

Luapulu commented Sep 10, 2021

That looks interesting. Thanks for the help.

@dlfivefifty
Member

Btw, I'd love to get Kronecker products working in ContinuumArrays.jl/ClassicalOrthogonalPolynomials.jl. If you feel like helping with that, I can walk you through the planned design.

@Luapulu
Author

Luapulu commented Sep 10, 2021

Is the plan to replace ApproxFunOrthogonalPolynomials.jl with ClassicalOrthogonalPolynomials.jl?

@dlfivefifty
Member

Yes, that is the plan.

  1. ContinuumArrays.jl is "functions-as-vectors". This is a more natural language for doing linear algebra, e.g., solving linear ODEs, taking subspaces of bases.
  2. ApproxFun.jl will be "functions-as-functions". It will call ContinuumArrays.jl for operations that are linear (e.g., solving ODEs, addition, etc.) so will be a thin wrapper.
  3. ClassicalOrthogonalPolynomials.jl will add support for orthogonal polynomials.

Note there's a WIP PR to make the change: #44

The biggest hold-up is that there is a lot of functionality still missing in terms of analogues of ArraySpace and TensorSpace. Though this is moving quickly, and the things that have been updated are much faster. E.g., building multiplication operators via Clenshaw is 10x faster in ClassicalOrthogonalPolynomials.jl: MikaelSlevinsky/FastTransforms#74
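
To give a flavour, building a multiplication operator there looks roughly like this (exp is just an example coefficient function):

julia> using ClassicalOrthogonalPolynomials

julia> T = ChebyshevT(); x = axes(T, 1);

julia> M = T \ (exp.(x) .* T);  # multiplication operator, assembled via Clenshaw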

@Luapulu
Author

Luapulu commented Sep 10, 2021

So, my plan at the moment is to replace all my current high-level ApproxFun use with FastTransforms.jl for the transforms to and from the two kinds of Chebyshev bases, and to replace all the derivatives, integrals, and conversions with matrix multiplications so I can preallocate the matrices. Whether I use ApproxFunOrthogonalPolynomials.jl or ClassicalOrthogonalPolynomials.jl to generate the matrices in the first place shouldn't matter too much, so switching over isn't such a high priority for me on that front.

However, I do have a few little things to add to FastTransforms.jl. For example, a method for the Padua transform that lets me write something like Padua_transform(into, from, plan) so I can preallocate the output. Maybe some documentation/comments, too. And if you have any big changes planned for FastTransforms.jl, that would be quite interesting to hear about, since the transforms are the biggest limiting factor for my use case at the moment.
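
For the transforms, I'm imagining precomputed plans along these lines (assuming FastTransforms' plan_chebyshevtransform; whether the result can be written straight into a preallocated buffer is exactly the kind of thing I'd want to add):

julia> using FastTransforms

julia> v = rand(100);                  # values at Chebyshev points

julia> P = plan_chebyshevtransform(v);

julia> c = P * v;                      # Chebyshev coefficients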

If you're curious, I'm the student working on PoincareInvariants.jl.

@Luapulu
Author

Luapulu commented Sep 10, 2021

Actually, my plan doesn't quite work out, because the transform for Chebyshev polynomials is in ApproxFunOrthogonalPolynomials.jl, as I understand it? There's also a version in FastTransforms, but you said it's not used?

So, one of the transforms will still come from ApproxFun I guess.

@dlfivefifty
Member

The transforms all come from FastTransforms.jl; it's just a question of which transform is used.

Note the transforms are also used in ClassicalOrthogonalPolynomials.jl.
