diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json
index ab3450b..f52c305 100644
--- a/dev/.documenter-siteinfo.json
+++ b/dev/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.11.2","generation_timestamp":"2024-12-03T12:40:30","documenter_version":"1.8.0"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.11.2","generation_timestamp":"2024-12-03T12:47:08","documenter_version":"1.8.0"}}
\ No newline at end of file
diff --git a/dev/blocktensors/index.html b/dev/blocktensors/index.html
index 867f841..eb088a5 100644
--- a/dev/blocktensors/index.html
+++ b/dev/blocktensors/index.html
@@ -10,14 +10,14 @@
[1,1,2] = TensorMap(ℂ^1 ← (ℂ^1 ⊗ ℂ^2))
[2,1,2] = TensorMap(ℂ^2 ← (ℂ^1 ⊗ ℂ^2))
[1,2,2] = TensorMap(ℂ^1 ← (ℂ^2 ⊗ ℂ^2))
 [2,2,2] = TensorMap(ℂ^2 ← (ℂ^2 ⊗ ℂ^2))
julia> eltype(t)
TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 2, Vector{Float64}}
julia> s = sprand(W, 0.5)
2×2×2 SparseBlockTensorMap{TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 2, Vector{Float64}}}((ℂ^1 ⊕ ℂ^2) ← ((ℂ^1 ⊕ ℂ^2) ⊗ (ℂ^1 ⊕ ℂ^2))) with 5 stored entries:
⎡⠉⠉⎤
⎣⠀⡀⎦
julia> eltype(s)
TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 2, Vector{Float64}}
In analogy to TensorKit, most of the functionality that requires a space object can equally well be called in terms of codomain(space), domain(space), if that is more convenient.
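For instance, the relevant spaces can be queried directly from the block tensor map itself. The following is a hedged sketch, reusing the tensor t constructed above; the accessors are the standard TensorKit ones, assumed to carry over to block tensors:

space(t)              # the full TensorMapSumSpace: (ℂ^1 ⊕ ℂ^2) ← ((ℂ^1 ⊕ ℂ^2) ⊗ (ℂ^1 ⊕ ℂ^2))
codomain(t)           # ℂ^1 ⊕ ℂ^2
domain(t)             # (ℂ^1 ⊕ ℂ^2) ⊗ (ℂ^1 ⊕ ℂ^2)
numout(t), numin(t)   # (1, 2): the number of codomain and domain legs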
For indexing operations, AbstractBlockTensorMap behaves like an AbstractArray{AbstractTensorMap}, and the individual tensors can be accessed via the getindex and setindex! functions. In particular, getindex returns an object of the tensor type TT, and setindex! expects an object of type TT. Both linear and cartesian indexing styles are supported.
julia> t[1] isa eltype(t)
true
julia> t[1] == t[1, 1, 1]
true
julia> t[2] = 3 * t[2]
TensorMap(ℂ^2 ← (ℂ^1 ⊗ ℂ^1)):
[:, :, 1] =
 0.44120106834137607
 2.3731901233669133
julia> s[1] isa eltype(t)
true
julia> s[1] == s[1, 1, 1]
true
julia> s[1] += 2 * s[1]
TensorMap(ℂ^1 ← (ℂ^1 ⊗ ℂ^1)):
[:, :, 1] =
 2.9421518396890423
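The following hedged sketch illustrates setindex! with a cartesian index: a freshly constructed tensor can be written into a slot, provided its space matches that slot (the rand constructor on a single tensor space is standard TensorKit, assumed available here):

new_block = rand(ℂ^2 ← (ℂ^2 ⊗ ℂ^2))   # must match the space of slot [2, 2, 2]
t[2, 2, 2] = new_block                # cartesian setindex!
t[2, 2, 2] == new_block               # true
t[8] == new_block                     # the same slot, addressed with a linear index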
Slicing operations are also supported, and the AbstractBlockTensorMap can be sliced in the same way as an AbstractArray{AbstractTensorMap}. There is however one elementary difference: as the slices still contain tensors with the same number of legs, there can be no reduction in the number of dimensions. In particular, in contrast to AbstractArray, scalar dimensions are not discarded, and as a result, linear index slicing is not allowed.
julia> ndims(t[1, 1, :]) == 3
true
julia> ndims(t[:, 1:2, [1, 1]]) == 3
true
julia> t[1:2] # error
ERROR: ArgumentError: Cannot index TensorKit.TensorMapSpace{TensorKit.ComplexSpace, 1, 2}[ℂ^1 ← (ℂ^1 ⊗ ℂ^1) ℂ^1 ← (ℂ^2 ⊗ ℂ^1); ℂ^2 ← (ℂ^1 ⊗ ℂ^1) ℂ^2 ← (ℂ^2 ⊗ ℂ^1);;; ℂ^1 ← (ℂ^1 ⊗ ℂ^2) ℂ^1 ← (ℂ^2 ⊗ ℂ^2); ℂ^2 ← (ℂ^1 ⊗ ℂ^2) ℂ^2 ← (ℂ^2 ⊗ ℂ^2)] with 1:2
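To make the dimension-keeping behaviour explicit, the sizes of a few slices can be inspected. This is a hedged sketch, assuming size follows the AbstractArray convention used above:

size(t[1, 1, :])          # (1, 1, 2): scalar dimensions are kept rather than dropped
size(t[:, 1:2, [1, 1]])   # (2, 2, 2): the slice is again a 2×2×2 block tensor map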
As part of the TensorKit interface, AbstractBlockTensorMap also implements VectorInterface. This means that you can efficiently add, scale, and compute the inner product of AbstractBlockTensorMap objects.
julia> t1, t2 = rand!(similar(t)), rand!(similar(t));
julia> add(t1, t2, rand())
2×2×2 BlockTensorMap{TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 2, Vector{Float64}}}((ℂ^1 ⊕ ℂ^2) ← ((ℂ^1 ⊕ ℂ^2) ⊗ (ℂ^1 ⊕ ℂ^2))) with 8 stored entries:
 [1,1,1] = TensorMap(ℂ^1 ← (ℂ^1 ⊗ ℂ^1))
 [2,1,1] = TensorMap(ℂ^2 ← (ℂ^1 ⊗ ℂ^1))
 [1,2,1] = TensorMap(ℂ^1 ← (ℂ^2 ⊗ ℂ^1))
 [2,2,1] = TensorMap(ℂ^2 ← (ℂ^2 ⊗ ℂ^1))
 [1,1,2] = TensorMap(ℂ^1 ← (ℂ^1 ⊗ ℂ^2))
 [2,1,2] = TensorMap(ℂ^2 ← (ℂ^1 ⊗ ℂ^2))
 [1,2,2] = TensorMap(ℂ^1 ← (ℂ^2 ⊗ ℂ^2))
 [2,2,2] = TensorMap(ℂ^2 ← (ℂ^2 ⊗ ℂ^2))
julia> inner(t1, t2)
7.985935936952311
For further in-place and possibly-in-place methods, see VectorInterface.jl
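As a hedged sketch of the in-place variants, the function names and signatures below are those of VectorInterface.jl; their availability for block tensors follows from the statement above:

using VectorInterface: scale!, add!

scale!(t1, 2.0)     # t1 ← 2.0 * t1, in place
add!(t1, t2, 0.5)   # t1 ← t1 + 0.5 * t2, in place
inner(t1, t2)       # scalar inner product, as before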
The TensorOperations.jl interface is also implemented for AbstractBlockTensorMap. In particular, an AbstractBlockTensorMap can be contracted with other AbstractBlockTensorMap objects, as well as with AbstractTensorMap objects. In order for that mix to work, the AbstractTensorMap objects are automatically converted to AbstractBlockTensorMap objects with a single tensor, i.e. the sum spaces will be a sum of one space. As a consequence, as soon as one of the input tensors is blocked, the output tensor will also be blocked, even though its size might be trivial. In these cases, the only function can be used to retrieve the single element of the BlockTensorMap.
julia> @tensor t3[a; b] := t[a; c d] * conj(t[b; c d])
2×2 BlockTensorMap{TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 1, Vector{Float64}}}((ℂ^1 ⊕ ℂ^2) ← (ℂ^1 ⊕ ℂ^2)) with 4 stored entries:
 [1,1] = TensorMap(ℂ^1 ← ℂ^1)
 [2,1] = TensorMap(ℂ^2 ← ℂ^1)
 [1,2] = TensorMap(ℂ^1 ← ℂ^2)
 [2,2] = TensorMap(ℂ^2 ← ℂ^2)
julia> @tensor t4[a; b] := t[1, :, :][a; c d] * conj(t[1, :, :][b; c d]) # blocktensor * blocktensor = blocktensor
1×1 BlockTensorMap{TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 1, Vector{Float64}}}(⊕(ℂ^1) ← ⊕(ℂ^1)) with 1 stored entry:
 [1,1] = TensorMap(ℂ^1 ← ℂ^1)
julia> t4 isa AbstractBlockTensorMap
true
julia> only(t4) isa eltype(t4)
true
julia> @tensor t5[a; b] := t[1][a; c d] * conj(t[1:1, 1:1, 1:1][b; c d]) # tensor * blocktensor = blocktensor
1×1 BlockTensorMap{TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 1, Vector{Float64}}}(⊕(ℂ^1) ← ⊕(ℂ^1)) with 1 stored entry:
 [1,1] = TensorMap(ℂ^1 ← ℂ^1)
julia> t5 isa AbstractBlockTensorMap
true
julia> only(t5) isa eltype(t5)
true
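As a consistency check, the blocked result can be compared against the plain TensorKit contraction of the corresponding single tensors. This is a hedged sketch, reusing t and t5 from above:

@tensor t6[a; b] := t[1][a; c d] * conj(t[1][b; c d])   # plain tensor * plain tensor
t6 isa AbstractBlockTensorMap   # false: no blocked input, so the result is a plain TensorMap
t6 ≈ only(t5)                   # true, up to floating-point rounding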
Currently, there is only rudimentary support for factorizations of AbstractBlockTensorMap objects. In particular, the implementations are not yet optimized for performance, and the factorizations are typically carried out by mapping to a dense tensor and then performing the factorization on that tensor. Most factorizations do not retain the additional imposed block structure. In particular, constructions of orthogonal bases will typically mix up the subspaces, and as such the resulting vector spaces will be SumSpaces of a single term.
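A minimal sketch of what is currently possible, assuming the standard TensorKit factorization interface (e.g. leftorth) applies to block tensor maps as described, and reusing the square map t3 from above:

Q, R = leftorth(t3)   # internally maps to a dense tensor before factorizing
Q * R ≈ t3            # true: the factorization reproduces t3
domain(Q)             # the newly generated space: typically a SumSpace with a single summand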