diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index ab3450b..f52c305 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.11.2","generation_timestamp":"2024-12-03T12:40:30","documenter_version":"1.8.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.11.2","generation_timestamp":"2024-12-03T12:47:08","documenter_version":"1.8.0"}} \ No newline at end of file diff --git a/dev/blocktensors/index.html b/dev/blocktensors/index.html index 867f841..eb088a5 100644 --- a/dev/blocktensors/index.html +++ b/dev/blocktensors/index.html @@ -10,14 +10,14 @@ [1,1,2] = TensorMap(ℂ^1 ← (ℂ^1 ⊗ ℂ^2)) [2,1,2] = TensorMap(ℂ^2 ← (ℂ^1 ⊗ ℂ^2)) [1,2,2] = TensorMap(ℂ^1 ← (ℂ^2 ⊗ ℂ^2)) - [2,2,2] = TensorMap(ℂ^2 ← (ℂ^2 ⊗ ℂ^2))
julia> eltype(t)TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 2, Vector{Float64}}
julia> s = sprand(W, 0.5)2×2×2 SparseBlockTensorMap{TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 2, Vector{Float64}}}((ℂ^1 ⊕ ℂ^2) ← ((ℂ^1 ⊕ ℂ^2) ⊗ (ℂ^1 ⊕ ℂ^2))) with 5 stored entries: -⎡⠉⠉⎤ -⎣⠀⡀⎦
julia> eltype(s)TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 2, Vector{Float64}}
Note

In analogy to TensorKit, most of the functionality that requires a space object can equally well be called with codomain(space) and domain(space), if that is more convenient.

Indexing

For indexing operators, AbstractBlockTensorMap behaves like an AbstractArray{AbstractTensorMap}, and the individual tensors can be accessed via the getindex and setindex! functions. In particular, getindex returns an object of the tensor type TT, and setindex! expects an object of that type. Both linear and cartesian indexing styles are supported.

julia> t[1] isa eltype(t)true
julia> t[1] == t[1, 1, 1]true
julia> t[2] = 3 * t[2]TensorMap(ℂ^2 ← (ℂ^1 ⊗ ℂ^1)): + [2,2,2] = TensorMap(ℂ^2 ← (ℂ^2 ⊗ ℂ^2))
julia> eltype(t)TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 2, Vector{Float64}}
julia> s = sprand(W, 0.5)2×2×2 SparseBlockTensorMap{TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 2, Vector{Float64}}}((ℂ^1 ⊕ ℂ^2) ← ((ℂ^1 ⊕ ℂ^2) ⊗ (ℂ^1 ⊕ ℂ^2))) with 4 stored entries: +⎡⠁⠁⎤ +⎣⢀⡀⎦
julia> eltype(s)TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 2, Vector{Float64}}
Note

In analogy to TensorKit, most of the functionality that requires a space object can equally well be called with codomain(space) and domain(space), if that is more convenient.

Indexing

For indexing operators, AbstractBlockTensorMap behaves like an AbstractArray{AbstractTensorMap}, and the individual tensors can be accessed via the getindex and setindex! functions. In particular, getindex returns an object of the tensor type TT, and setindex! expects an object of that type. Both linear and cartesian indexing styles are supported.

julia> t[1] isa eltype(t)true
julia> t[1] == t[1, 1, 1]true
julia> t[2] = 3 * t[2]TensorMap(ℂ^2 ← (ℂ^1 ⊗ ℂ^1)): [:, :, 1] = - 0.44120106834137607 - 2.3731901233669133
julia> s[1] isa eltype(t)true
julia> s[1] == s[1, 1, 1]true
julia> s[1] += 2 * s[1]TensorMap(ℂ^1 ← (ℂ^1 ⊗ ℂ^1)): + 1.2460025243781616 + 0.0680740467482408
julia> s[1] isa eltype(t)true
julia> s[1] == s[1, 1, 1]true
julia> s[1] += 2 * s[1]TensorMap(ℂ^1 ← (ℂ^1 ⊗ ℂ^1)): [:, :, 1] = - 2.9421518396890423

Slicing operations are also supported, and an AbstractBlockTensorMap can be sliced in the same way as an AbstractArray{AbstractTensorMap}. There is, however, one important difference: since the slices still contain tensors with the same number of legs, there can be no reduction in the number of dimensions. In particular, in contrast to AbstractArray, scalar dimensions are not discarded, and as a result, linear index slicing is not allowed.

julia> ndims(t[1, 1, :]) == 3true
julia> ndims(t[:, 1:2, [1, 1]]) == 3true
julia> t[1:2] # errorERROR: ArgumentError: Cannot index TensorKit.TensorMapSpace{TensorKit.ComplexSpace, 1, 2}[ℂ^1 ← (ℂ^1 ⊗ ℂ^1) ℂ^1 ← (ℂ^2 ⊗ ℂ^1); ℂ^2 ← (ℂ^1 ⊗ ℂ^1) ℂ^2 ← (ℂ^2 ⊗ ℂ^1);;; ℂ^1 ← (ℂ^1 ⊗ ℂ^2) ℂ^1 ← (ℂ^2 ⊗ ℂ^2); ℂ^2 ← (ℂ^1 ⊗ ℂ^2) ℂ^2 ← (ℂ^2 ⊗ ℂ^2)] with 1:2

VectorInterface.jl

As part of the TensorKit interface, AbstractBlockTensorMap also implements VectorInterface. This means that you can efficiently add, scale, and compute the inner product of AbstractBlockTensorMap objects.

julia> t1, t2 = rand!(similar(t)), rand!(similar(t));
julia> add(t1, t2, rand())2×2×2 BlockTensorMap{TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 2, Vector{Float64}}}((ℂ^1 ⊕ ℂ^2) ← ((ℂ^1 ⊕ ℂ^2) ⊗ (ℂ^1 ⊕ ℂ^2))) with 8 stored entries: + 0.4819120978790121

Slicing operations are also supported, and an AbstractBlockTensorMap can be sliced in the same way as an AbstractArray{AbstractTensorMap}. There is, however, one important difference: since the slices still contain tensors with the same number of legs, there can be no reduction in the number of dimensions. In particular, in contrast to AbstractArray, scalar dimensions are not discarded, and as a result, linear index slicing is not allowed.

julia> ndims(t[1, 1, :]) == 3true
julia> ndims(t[:, 1:2, [1, 1]]) == 3true
julia> t[1:2] # errorERROR: ArgumentError: Cannot index TensorKit.TensorMapSpace{TensorKit.ComplexSpace, 1, 2}[ℂ^1 ← (ℂ^1 ⊗ ℂ^1) ℂ^1 ← (ℂ^2 ⊗ ℂ^1); ℂ^2 ← (ℂ^1 ⊗ ℂ^1) ℂ^2 ← (ℂ^2 ⊗ ℂ^1);;; ℂ^1 ← (ℂ^1 ⊗ ℂ^2) ℂ^1 ← (ℂ^2 ⊗ ℂ^2); ℂ^2 ← (ℂ^1 ⊗ ℂ^2) ℂ^2 ← (ℂ^2 ⊗ ℂ^2)] with 1:2

VectorInterface.jl

As part of the TensorKit interface, AbstractBlockTensorMap also implements VectorInterface. This means that you can efficiently add, scale, and compute the inner product of AbstractBlockTensorMap objects.

julia> t1, t2 = rand!(similar(t)), rand!(similar(t));
julia> add(t1, t2, rand())2×2×2 BlockTensorMap{TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 2, Vector{Float64}}}((ℂ^1 ⊕ ℂ^2) ← ((ℂ^1 ⊕ ℂ^2) ⊗ (ℂ^1 ⊕ ℂ^2))) with 8 stored entries: [1,1,1] = TensorMap(ℂ^1 ← (ℂ^1 ⊗ ℂ^1)) [2,1,1] = TensorMap(ℂ^2 ← (ℂ^1 ⊗ ℂ^1)) [1,2,1] = TensorMap(ℂ^1 ← (ℂ^2 ⊗ ℂ^1)) @@ -33,10 +33,10 @@ [1,1,2] = TensorMap(ℂ^1 ← (ℂ^1 ⊗ ℂ^2)) [2,1,2] = TensorMap(ℂ^2 ← (ℂ^1 ⊗ ℂ^2)) [1,2,2] = TensorMap(ℂ^1 ← (ℂ^2 ⊗ ℂ^2)) - [2,2,2] = TensorMap(ℂ^2 ← (ℂ^2 ⊗ ℂ^2))
julia> inner(t1, t2)7.985935936952311

For further in-place and possibly-in-place methods, see VectorInterface.jl

TensorOperations.jl

The TensorOperations.jl interface is also implemented for AbstractBlockTensorMap. In particular, AbstractBlockTensorMap objects can be contracted with other AbstractBlockTensorMap objects, as well as with AbstractTensorMap objects. In order for that mix to work, the AbstractTensorMap objects are automatically converted to AbstractBlockTensorMap objects with a single tensor, i.e. the sum spaces will be a sum of one space. As a consequence, as soon as one of the input tensors is blocked, the output tensor will also be blocked, even though its size might be trivial. In these cases, the function only can be used to retrieve the single element in the BlockTensorMap.

julia> @tensor t3[a; b] := t[a; c d] * conj(t[b; c d])2×2 BlockTensorMap{TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 1, Vector{Float64}}}((ℂ^1 ⊕ ℂ^2) ← (ℂ^1 ⊕ ℂ^2)) with 4 stored entries:
+  [2,2,2]  =  TensorMap(ℂ^2 ← (ℂ^2 ⊗ ℂ^2))
julia> inner(t1, t2)4.980050820138073

For further in-place and possibly-in-place methods, see VectorInterface.jl

TensorOperations.jl

The TensorOperations.jl interface is also implemented for AbstractBlockTensorMap. In particular, AbstractBlockTensorMap objects can be contracted with other AbstractBlockTensorMap objects, as well as with AbstractTensorMap objects. In order for that mix to work, the AbstractTensorMap objects are automatically converted to AbstractBlockTensorMap objects with a single tensor, i.e. the sum spaces will be a sum of one space. As a consequence, as soon as one of the input tensors is blocked, the output tensor will also be blocked, even though its size might be trivial. In these cases, the function only can be used to retrieve the single element in the BlockTensorMap.

julia> @tensor t3[a; b] := t[a; c d] * conj(t[b; c d])2×2 BlockTensorMap{TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 1, Vector{Float64}}}((ℂ^1 ⊕ ℂ^2) ← (ℂ^1 ⊕ ℂ^2)) with 4 stored entries:
   [1,1]  =  TensorMap(ℂ^1 ← ℂ^1)
   [2,1]  =  TensorMap(ℂ^2 ← ℂ^1)
   [1,2]  =  TensorMap(ℂ^1 ← ℂ^2)
   [2,2]  =  TensorMap(ℂ^2 ← ℂ^2)
julia> @tensor t4[a; b] := t[1, :, :][a; c d] * conj(t[1, :, :][b; c d]) # blocktensor * blocktensor = blocktensor1×1 BlockTensorMap{TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 1, Vector{Float64}}}(⊕(ℂ^1) ← ⊕(ℂ^1)) with 1 stored entry: [1,1] = TensorMap(ℂ^1 ← ℂ^1)
julia> t4 isa AbstractBlockTensorMaptrue
julia> only(t4) isa eltype(t4)true
julia> @tensor t5[a; b] := t[1][a; c d] * conj(t[1:1, 1:1, 1:1][b; c d]) # tensor * blocktensor = blocktensor1×1 BlockTensorMap{TensorKit.TensorMap{Float64, TensorKit.ComplexSpace, 1, 1, Vector{Float64}}}(⊕(ℂ^1) ← ⊕(ℂ^1)) with 1 stored entry: - [1,1] = TensorMap(ℂ^1 ← ℂ^1)
julia> t5 isa AbstractBlockTensorMaptrue
julia> only(t5) isa eltype(t5)true

Factorizations

Currently, there is only rudimentary support for factorizations of AbstractBlockTensorMap objects. In particular, the implementations are not yet optimized for performance, and the factorizations are typically carried out by mapping to a dense tensor, and then performing the factorization on that tensor.

Note

Most factorizations do not retain the imposed block structure. In particular, constructions of orthogonal bases will typically mix up the subspaces, such that the resulting vector spaces will be SumSpaces with a single term.

+ [1,1] = TensorMap(ℂ^1 ← ℂ^1)
julia> t5 isa AbstractBlockTensorMaptrue
julia> only(t5) isa eltype(t5)true

Factorizations

Currently, there is only rudimentary support for factorizations of AbstractBlockTensorMap objects. In particular, the implementations are not yet optimized for performance, and the factorizations are typically carried out by mapping to a dense tensor, and then performing the factorization on that tensor.

Note

Most factorizations do not retain the imposed block structure. In particular, constructions of orthogonal bases will typically mix up the subspaces, such that the resulting vector spaces will be SumSpaces with a single term.
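The behavior described in this note can be illustrated with a small sketch. It assumes that TensorKit's tsvd applies to blocktensors as described above (by mapping to a dense tensor internally), and that SumSpace can be constructed directly from its summands; these constructor and function names are assumptions for illustration, not a prescribed workflow:

```julia
using TensorKit, BlockTensorKit

V = SumSpace(ℂ^1, ℂ^2)   # (ℂ^1 ⊕ ℂ^2), assuming this constructor
t = rand(V ← (V ⊗ V))    # dense blocktensor, as in the examples above

# The factorization is carried out by converting to a dense tensor internally.
U, S, Vᴴ, ϵ = tsvd(t)

# The newly generated spaces typically do not retain the block structure:
# they are SumSpaces with a single summand.
space(S, 1)
```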

diff --git a/dev/index.html b/dev/index.html index d27c62f..2d0822a 100644 --- a/dev/index.html +++ b/dev/index.html @@ -1,2 +1,2 @@ -Home · BlockTensorKit.jl

BlockTensorKit.jl

A Julia package for handling arrays-of-tensors, built on top of TensorKit.jl

Package summary

In the context of developing efficient tensor network algorithms, it can sometimes be convenient to write a tensor as a concatenation of other tensors, without explicitly merging them. This is helpful whenever there are some guarantees on the resulting structure, such as sparsity patterns, triangular structures, or just as a way of keeping things organized. One particular example, for which this package is primarily developed, is the construction of Matrix Product Operators (MPOs) that represent a sum of local operators, both for 1-dimensional geometries and for more general tree-like geometries. In those cases, the combination of an upper-triangular blocked structure and efficient usage of the sparsity can not only greatly speed up runtime, but also facilitate rapid development of novel algorithms.

Mathematically speaking, we can consider these blocked tensors as acting on direct sums of vector spaces, where the individual vector spaces are supplied by TensorKit. This leads to a very natural generalization of AbstractTensorMap, which is able to handle arbitrary symmetries.

BlockTensorKit.jl aims to provide a convenient interface to such blocked tensors. In particular, the central types of this package (<:AbstractBlockTensorMap) could be described as having both an AbstractArray-like interface, which allows indexing as well as slicing operations, and an AbstractTensorMap-like interface, allowing linear algebra routines, tensor contraction and tensor factorization. The goal is to abstract away the need to deal with the inner structure of such tensors as much as possible, and to make it possible to replace AbstractTensorMaps with AbstractBlockTensorMaps without having to change the high-level code.

As these kinds of operations typically appear in performance-critical sections of the code, computational efficiency and performance are high on the priority list. As such, a secondary aim of this package is to provide different algorithms that enable maximal usage of sparsity, multithreading, and other tricks to obtain close-to-maximal performance.

Contents of the manual

The manual for this package is separated into four parts. The first part focuses on the space type that underlies these tensors, which contains the necessary information to construct them. This is followed by a section on BlockTensorMap, highlighting the capabilities and interface. Then, we elaborate on SparseBlockTensorMap, the sparse variant. Finally, we collect all docstrings.

+Home · BlockTensorKit.jl

BlockTensorKit.jl

A Julia package for handling arrays-of-tensors, built on top of TensorKit.jl

Package summary

In the context of developing efficient tensor network algorithms, it can sometimes be convenient to write a tensor as a concatenation of other tensors, without explicitly merging them. This is helpful whenever there are some guarantees on the resulting structure, such as sparsity patterns, triangular structures, or just as a way of keeping things organized. One particular example, for which this package is primarily developed, is the construction of Matrix Product Operators (MPOs) that represent a sum of local operators, both for 1-dimensional geometries and for more general tree-like geometries. In those cases, the combination of an upper-triangular blocked structure and efficient usage of the sparsity can not only greatly speed up runtime, but also facilitate rapid development of novel algorithms.

Mathematically speaking, we can consider these blocked tensors as acting on direct sums of vector spaces, where the individual vector spaces are supplied by TensorKit. This leads to a very natural generalization of AbstractTensorMap, which is able to handle arbitrary symmetries.

BlockTensorKit.jl aims to provide a convenient interface to such blocked tensors. In particular, the central types of this package (<:AbstractBlockTensorMap) could be described as having both an AbstractArray-like interface, which allows indexing as well as slicing operations, and an AbstractTensorMap-like interface, allowing linear algebra routines, tensor contraction and tensor factorization. The goal is to abstract away the need to deal with the inner structure of such tensors as much as possible, and to make it possible to replace AbstractTensorMaps with AbstractBlockTensorMaps without having to change the high-level code.
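This dual interface can be sketched as follows (assuming a SumSpace constructor that takes the summands directly; this is an illustrative sketch, not a prescribed workflow):

```julia
using TensorKit, BlockTensorKit

V = SumSpace(ℂ^1, ℂ^2)
t = rand(V ← (V ⊗ V))

# AbstractArray-like: index into the constituent tensors
t[1, 1, 1] isa TensorMap

# AbstractTensorMap-like: linear algebra works as for any TensorMap
s = 2 * t + t
norm(s) ≈ 3 * norm(t)
```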

As these kinds of operations typically appear in performance-critical sections of the code, computational efficiency and performance are high on the priority list. As such, a secondary aim of this package is to provide different algorithms that enable maximal usage of sparsity, multithreading, and other tricks to obtain close-to-maximal performance.

Contents of the manual

The manual for this package is separated into four parts. The first part focuses on the space type that underlies these tensors, which contains the necessary information to construct them. This is followed by a section on BlockTensorMap, highlighting the capabilities and interface. Then, we elaborate on SparseBlockTensorMap, the sparse variant. Finally, we collect all docstrings.

diff --git a/dev/lib/index.html b/dev/lib/index.html index 4348d2a..730a40d 100644 --- a/dev/lib/index.html +++ b/dev/lib/index.html @@ -1,3 +1,3 @@ Library · BlockTensorKit.jl

Library index

BlockTensorKit.AbstractBlockTensorMapType
AbstractBlockTensorMap{E,S,N₁,N₂}

Abstract supertype for tensor maps that have additional block structure, i.e. they act on vector spaces that have a direct sum structure. These behave like AbstractTensorMap but have additional methods to facilitate indexing and manipulation of the block structure.

source
BlockTensorKit.BlockTensorMapType
struct BlockTensorMap{TT<:AbstractTensorMap{E,S,N₁,N₂}} <: AbstractTensorMap{E,S,N₁,N₂}

Dense BlockTensorMap type that stores tensors of type TT in a dense array.

source
BlockTensorKit.SparseBlockTensorMapType
struct SparseBlockTensorMap{TT<:AbstractTensorMap{E,S,N₁,N₂}} <: AbstractBlockTensorMap{E,S,N₁,N₂}

SparseBlockTensorMap type that stores tensors of type TT in a sparse dictionary.

source
BlockTensorKit.SumSpaceType
struct SumSpace{S<:ElementarySpace} <: ElementarySpace

A (lazy) direct sum of elementary vector spaces of type S.

source
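A brief construction sketch (assuming SumSpace can be built directly from its summands, as suggested by the type definition above):

```julia
using TensorKit, BlockTensorKit

V = SumSpace(ℂ^1, ℂ^2)   # lazy direct sum, printed as (ℂ^1 ⊕ ℂ^2)
dim(V)                   # total dimension of the direct sum
```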
BlockTensorKit.droptol!Function
droptol!(t::AbstractBlockTensorMap, tol=eps(real(scalartype(t)))^(3/4))

Remove the tensor entries of a blocktensor that have norm ≤ tol.

source
BlockTensorKit.dropzeros!Method
dropzeros!(t::AbstractBlockTensorMap)

Remove the tensor entries of a blocktensor that have norm 0. Only applicable to sparse blocktensors.

source
BlockTensorKit.eachspaceMethod
eachspace(V::TensorMapSumSpace) -> SumSpaceIndices

Return an object that can be used to obtain the subspaces of a BlockTensorMap.

source
BlockTensorKit.sprandMethod
sprand([rng], T::Type, W::TensorMapSumSpace, p::Real)

Construct a sparse blocktensor with entries compatible with type T and space W. Each entry is nonzero with probability p.

source
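A usage sketch of the signature above (the space W is illustrative, assuming a SumSpace constructor that takes the summands directly):

```julia
using TensorKit, BlockTensorKit

V = SumSpace(ℂ^1, ℂ^2)
W = V ← (V ⊗ V)
s = sprand(Float64, W, 0.5)   # each block is stored with probability 0.5
```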
BlockTensorKit.spzerosMethod
spzeros(T::Type, W::TensorMapSumSpace)
-spzeros(T, W, nonzero_inds)

Construct a sparse blocktensor with entries compatible with type T and space W. By default, the tensor will be empty, but nonzero entries can be specified by passing a tuple of indices nonzero_inds.

source
+spzeros(T, W, nonzero_inds)

Construct a sparse blocktensor with entries compatible with type T and space W. By default, the tensor will be empty, but nonzero entries can be specified by passing a tuple of indices nonzero_inds.

source
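A usage sketch of spzeros (the space W and the exact format of nonzero_inds are assumptions based on the docstring above):

```julia
using TensorKit, BlockTensorKit

V = SumSpace(ℂ^1, ℂ^2)
W = V ← (V ⊗ V)
z = spzeros(Float64, W)                               # no stored entries
z1 = spzeros(Float64, W, (CartesianIndex(1, 1, 1),))  # entry [1,1,1] initialized to zero
```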
BlockTensorKit.sumspacetypeMethod
sumspacetype(::Union{S,Type{S}}) where {S<:ElementarySpace}

Return the type of a SumSpace with elements of type S.

source
diff --git a/dev/sumspaces/index.html b/dev/sumspaces/index.html index 718bc70..3459e4e 100644 --- a/dev/sumspaces/index.html +++ b/dev/sumspaces/index.html @@ -7,4 +7,4 @@ (ℂ^2 ⊗ ℂ^1) ← (ℂ^2 ⊗ ℂ^2) … (ℂ^2 ⊗ ℂ^3) ← (ℂ^2 ⊗ ℂ^2) [:, :, 3, 1] = - (ℂ^2 ⊗ ℂ^1) ← (ℂ^3 ⊗ ℂ^2) … (ℂ^2 ⊗ ℂ^3) ← (ℂ^3 ⊗ ℂ^2) + (ℂ^2 ⊗ ℂ^1) ← (ℂ^3 ⊗ ℂ^2) … (ℂ^2 ⊗ ℂ^3) ← (ℂ^3 ⊗ ℂ^2)