Why won't AbstractMatrix(::SubOperator) return a BandedBlockBandedMatrix? #81

I see that the line to return such a matrix is commented out in ApproxFunBase.jl/src/Operators/Operator.jl (line 776 at commit 25ee448). Why?

I'd like to preallocate large operator matrices, and currently the RaggedMatrix type seems to be the default. The issue is that RaggedMatrix uses too much memory. For now I'm just manually using a BandedBlockBandedMatrix, but that's not as nice as just being able to use AbstractMatrix(view(SomeOperator, FiniteRange, 1:N)), since I don't want to have to care about the exact memory layout myself.
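For reference, here's roughly how I'm preallocating by hand with BlockBandedMatrices.jl; the block sizes and bandwidths below are made-up illustrations, not necessarily what ApproxFunBase would pick:

julia> using BlockArrays, BlockBandedMatrices, FillArrays
julia> rows = cols = 1:5;  # block K holds the K coefficients of total degree K-1
julia> l, u = 0, 2; λ, μ = 0, 2;  # block and sub-block bandwidths (illustrative)
julia> B = BandedBlockBandedMatrix(Zeros(sum(rows), sum(cols)), rows, cols, (l, u), (λ, μ));  # preallocated zero matrix with this sparsity structure
julia> view(B, Block(1, 3)) .= 0.5;  # entries are then filled in blockwise by hand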
Comments
TBH I don't remember. I'm redesigning a lot of this in ClassicalOrthogonalPolynomials.jl. Maybe what you are trying to do works there already?
I need to differentiate and integrate on a 2D domain in a Chebyshev tensor-product basis. I believe ClassicalOrthogonalPolynomials.jl can't do that yet?
You're right, it's not built in, but perhaps you can build the operators you want via:

julia> using ClassicalOrthogonalPolynomials, LazyBandedMatrices
julia> T,U = ChebyshevT(), ChebyshevU();
julia> x = axes(T,1); D = Derivative(x);
julia> D_x = KronTrav(U\D*T, U\T)
ℵ₀×ℵ₀-blocked ℵ₀×ℵ₀ KronTrav{Float64, 2, BandedMatrix{Float64, Adjoint{Float64, InfiniteArrays.InfStepRange{Float64, Float64}}, InfiniteArrays.OneToInf{Int64}}, BandedMatrix{Float64, LazyArrays.ApplyArray{Float64, 2, typeof(vcat), Tuple{Fill{Float64, 2, Tuple{Base.OneTo{Int64}, InfiniteArrays.OneToInf{Int64}}}, Zeros{Float64, 2, Tuple{Base.OneTo{Int64}, InfiniteArrays.OneToInf{Int64}}}, LazyArrays.ApplyArray{Float64, 2, typeof(hcat), Tuple{Ones{Float64, 2, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}, Fill{Float64, 2, Tuple{Base.OneTo{Int64}, InfiniteArrays.OneToInf{Int64}}}}}}}, InfiniteArrays.OneToInf{Int64}}, Tuple{BlockedUnitRange{RangeCumsum{Int64, InfiniteArrays.OneToInf{Int64}}}, BlockedUnitRange{RangeCumsum{Int64, InfiniteArrays.OneToInf{Int64}}}}}:
⋅ │ ⋅ 1.0 │ ⋅ 0.0 ⋅ │ ⋅ -0.5 ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅
─────┼────────────┼──────────────────┼───────────────────────┼─────────────────────────────┼───────────────── …
⋅ │ ⋅ ⋅ │ ⋅ 0.5 ⋅ │ ⋅ 0.0 ⋅ ⋅ │ ⋅ -0.5 ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅
⋅ │ ⋅ ⋅ │ ⋅ ⋅ 2.0 │ ⋅ ⋅ 0.0 ⋅ │ ⋅ ⋅ -1.0 ⋅ ⋅ │ ⋅ ⋅ ⋅
─────┼────────────┼──────────────────┼───────────────────────┼─────────────────────────────┼─────────────────
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ 0.5 ⋅ ⋅ │ ⋅ 0.0 ⋅ ⋅ ⋅ │ ⋅ -0.5 ⋅
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ 1.0 ⋅ │ ⋅ ⋅ 0.0 ⋅ ⋅ │ ⋅ ⋅ -1.0
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ 3.0 │ ⋅ ⋅ ⋅ 0.0 ⋅ │ ⋅ ⋅ ⋅
─────┼────────────┼──────────────────┼───────────────────────┼─────────────────────────────┼───────────────── …
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ 0.5 ⋅ ⋅ ⋅ │ ⋅ 0.0 ⋅
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ 1.0 ⋅ ⋅ │ ⋅ ⋅ 0.0
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ 1.5 ⋅ │ ⋅ ⋅ ⋅
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ 4.0 │ ⋅ ⋅ ⋅
─────┼────────────┼──────────────────┼───────────────────────┼─────────────────────────────┼─────────────────
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ ⋅ │ ⋅ 0.5 ⋅ …
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ 1.0
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅
─────┼────────────┼──────────────────┼───────────────────────┼─────────────────────────────┼─────────────────
⋮ ⋮ ⋮ ⋮ ⋱
julia> D_y = KronTrav(U\T, U\D*T)
ℵ₀×ℵ₀-blocked ℵ₀×ℵ₀ KronTrav{Float64, 2, BandedMatrix{Float64, LazyArrays.ApplyArray{Float64, 2, typeof(vcat), Tuple{Fill{Float64, 2, Tuple{Base.OneTo{Int64}, InfiniteArrays.OneToInf{Int64}}}, Zeros{Float64, 2, Tuple{Base.OneTo{Int64}, InfiniteArrays.OneToInf{Int64}}}, LazyArrays.ApplyArray{Float64, 2, typeof(hcat), Tuple{Ones{Float64, 2, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}, Fill{Float64, 2, Tuple{Base.OneTo{Int64}, InfiniteArrays.OneToInf{Int64}}}}}}}, InfiniteArrays.OneToInf{Int64}}, BandedMatrix{Float64, Adjoint{Float64, InfiniteArrays.InfStepRange{Float64, Float64}}, InfiniteArrays.OneToInf{Int64}}, Tuple{BlockedUnitRange{RangeCumsum{Int64, InfiniteArrays.OneToInf{Int64}}}, BlockedUnitRange{RangeCumsum{Int64, InfiniteArrays.OneToInf{Int64}}}}}:
⋅ │ 1.0 0.0 │ 0.0 0.0 0.0 │ 0.0 0.0 -0.5 ⋅ │ ⋅ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅
─────┼────────────┼─────────────────┼───────────────────────┼─────────────────────────────┼────────────────────── …
⋅ │ ⋅ ⋅ │ 2.0 0.0 0.0 │ 0.0 0.0 0.0 ⋅ │ 0.0 0.0 -1.0 ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅
⋅ │ ⋅ ⋅ │ ⋅ 0.5 0.0 │ ⋅ 0.0 0.0 0.0 │ ⋅ 0.0 0.0 -0.5 ⋅ │ ⋅ ⋅ ⋅ ⋅
─────┼────────────┼─────────────────┼───────────────────────┼─────────────────────────────┼──────────────────────
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ 3.0 0.0 0.0 ⋅ │ 0.0 0.0 0.0 ⋅ ⋅ │ 0.0 0.0 -1.5 ⋅
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ 1.0 0.0 0.0 │ ⋅ 0.0 0.0 0.0 ⋅ │ ⋅ 0.0 0.0 -1.0
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ 0.5 0.0 │ ⋅ ⋅ 0.0 0.0 0.0 │ ⋅ ⋅ 0.0 0.0
─────┼────────────┼─────────────────┼───────────────────────┼─────────────────────────────┼────────────────────── …
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ 4.0 0.0 0.0 ⋅ ⋅ │ 0.0 0.0 0.0 ⋅
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ 1.5 0.0 0.0 ⋅ │ ⋅ 0.0 0.0 0.0
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ 1.0 0.0 0.0 │ ⋅ ⋅ 0.0 0.0
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ 0.5 0.0 │ ⋅ ⋅ ⋅ 0.0
─────┼────────────┼─────────────────┼───────────────────────┼─────────────────────────────┼──────────────────────
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ ⋅ │ 5.0 0.0 0.0 ⋅ …
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ ⋅ │ ⋅ 2.0 0.0 0.0
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ 1.5 0.0
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ 1.0
⋅ │ ⋅ ⋅ │ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅ ⋅ │ ⋅ ⋅ ⋅ ⋅
─────┼────────────┼─────────────────┼───────────────────────┼─────────────────────────────┼──────────────────────
⋮ ⋮ ⋮ ⋮ ⋱
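To compute with these in practice, you'd take a finite section; something like the following should work (a sketch, assuming Block indexing from BlockArrays.jl is available; N and the coefficient vector are just for illustration):

julia> using BlockArrays
julia> N = 10;
julia> Dx_N = D_x[Block.(1:N), Block.(1:N)];  # finite banded-block-banded section
julia> c = randn(sum(1:N));  # 2D coefficients, block K holding the degree-(K-1) terms
julia> Dx_N * c;  # coefficients of the x-derivative in the tensor ChebyshevU basis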
That looks interesting. Thanks for the help.
Btw, I'd love to get Kronecker products working in ContinuumArrays.jl/ClassicalOrthogonalPolynomials.jl. If you feel like helping with that, I can walk you through the planned design.
Is the plan to replace ApproxFunOrthogonalPolynomials.jl with ClassicalOrthogonalPolynomials.jl?
Yes, that is the plan.
Note there's a WIP PR to make the change. The biggest hold-up is that there is a lot of functionality missing in terms of analogues of …
So, my plan at the moment is to replace all my current high-level ApproxFun use with FastTransforms.jl for the transforms to and from the two kinds of Chebyshev bases, and to replace all the derivatives, integrals, and conversions with matrix multiplications, so I can preallocate the matrices. Whether I use ApproxFunOrthogonalPolynomials.jl or ClassicalOrthogonalPolynomials.jl to generate the matrices in the first place shouldn't matter too much, so switching over isn't such a high priority for me on that front. However, I do have a few little things to add to FastTransforms.jl, for example a method for the Padua transform that allows me to do something like … If you're curious, I'm the student working on PoincareInvariants.jl.
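Concretely, I mean something in the spirit of the existing Padua routines but with a preallocated, reusable plan; a rough sketch, if I remember the current FastTransforms.jl API right (sizes and names purely illustrative):

julia> using FastTransforms
julia> n = 10;  # polynomial degree
julia> v = randn((n + 1) * (n + 2) ÷ 2);  # values at the Padua points, flattened
julia> P = plan_paduatransform!(similar(v));  # plan with preallocated workspace
julia> c = P * copy(v);  # Chebyshev coefficients; the plan mutates its argument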
Actually, my plan doesn't quite work out, because the transform for Chebyshev polynomials is in ApproxFunOrthogonalPolynomials.jl, as I understand it? There's also a version in FastTransforms.jl, but it's not used, you said? So one of the transforms will still come from ApproxFun, I guess.
The transforms all come from FastTransforms.jl; it's just a question of which transform is used. Note the transforms are also used in ClassicalOrthogonalPolynomials.jl.
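For example, the expansion idiom in ClassicalOrthogonalPolynomials.jl dispatches to FastTransforms.jl under the hood; a minimal sketch:

julia> using ClassicalOrthogonalPolynomials
julia> T = ChebyshevT(); x = axes(T, 1);
julia> c = T \ exp.(x);  # Chebyshev-T coefficients of exp, computed via FastTransforms.jl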