diff --git a/dev/index.html b/dev/index.html
index 2d5fa102..37b06f8e 100644
--- a/dev/index.html
+++ b/dev/index.html
@@ -1,2 +1,2 @@
Home · TensorKit.jl

TensorKit.jl

A Julia package for large-scale tensor computations, with a hint of category theory.

Package summary

TensorKit.jl aims to be a generic package for working with tensors as they appear throughout the physical sciences. TensorKit implements a parametric type Tensor (which is actually a specific case of the type TensorMap) and defines for these types a number of vector space operations (scalar multiplication, addition, norms and inner products), index operations (permutations) and linear algebra operations (multiplication, factorizations). Finally, tensor contractions can be performed using the @tensor macro from TensorOperations.jl.

Currently, most effort is oriented towards tensors as they appear in the context of quantum many body physics and in particular the field of tensor networks. Such tensors often have large dimensions and take on a specific structure when symmetries are present. To deal with generic symmetries, we employ notations and concepts from category theory all the way down to the definition of a tensor.

At the same time, TensorKit.jl focusses on computational efficiency and performance. The underlying storage of a tensor's data can be any DenseArray. Currently, certain operations are already multithreaded, either by distributing the different blocks in case of a structured tensor (i.e. with symmetries) or by using multithreading provided by the package Strided.jl. In the future, we also plan to investigate using CuArrays as underlying storage for the tensors' data, so as to leverage GPUs for the different operations defined on tensors.
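A minimal sketch of this workflow (tensor creation, vector space operations, and an @tensor contraction; the spaces, element type and variable names here are purely illustrative):

```julia
using TensorKit
using LinearAlgebra: norm

# A tensor map from ℂ² to ℂ² ⊗ ℂ², filled with random complex entries
t = TensorMap(randn, ComplexF64, ℂ^2 ⊗ ℂ^2, ℂ^2)

# Vector space operations and norms work out of the box
u = 2 * t + t
norm(u)

# Contraction via the @tensor macro from TensorOperations.jl:
# contract the second and third indices of t against its conjugate
@tensor s[a, b] := t[a, c, d] * conj(t[b, c, d])
```
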

Contents of the manual

Library outline

diff --git a/dev/index/index.html b/dev/index/index.html
index f01b5e67..97439a83 100644
--- a/dev/index/index.html
+++ b/dev/index/index.html
@@ -1,2 +1,2 @@
Index · TensorKit.jl

Index

diff --git a/dev/lib/sectors/index.html b/dev/lib/sectors/index.html
index 3301c592..6c37a10d 100644
--- a/dev/lib/sectors/index.html
+++ b/dev/lib/sectors/index.html
@@ -1,32 +1,32 @@
Symmetry sectors and fusion trees · TensorKit.jl

Symmetry sectors and fusion trees

Type hierarchy

TensorKit.SectorType
abstract type Sector end

Abstract type for representing the (isomorphism classes of) simple objects in (unitary and pivotal) (pre-)fusion categories, e.g. the irreducible representations of a finite or compact group. Subtypes I<:Sector serve as the set of labels of a GradedSpace.

Every new I<:Sector should implement the following methods:

  • one(::Type{I}): unit element of I
  • conj(a::I): $a̅$, conjugate or dual label of $a$
  • ⊗(a::I, b::I): iterable with unique fusion outputs of $a ⊗ b$ (i.e. don't repeat in case of multiplicities)
  • Nsymbol(a::I, b::I, c::I): number of times c appears in a ⊗ b, i.e. the multiplicity
  • FusionStyle(::Type{I}): UniqueFusion(), SimpleFusion() or GenericFusion()
  • BraidingStyle(::Type{I}): Bosonic(), Fermionic(), Anyonic(), ...
  • Fsymbol(a::I, b::I, c::I, d::I, e::I, f::I): F-symbol: scalar (in case of UniqueFusion/SimpleFusion) or matrix (in case of GenericFusion)
  • Rsymbol(a::I, b::I, c::I): R-symbol: scalar (in case of UniqueFusion/SimpleFusion) or matrix (in case of GenericFusion)

and optionally

  • dim(a::I): quantum dimension of sector a
  • frobeniusschur(a::I): Frobenius-Schur indicator of a
  • Bsymbol(a::I, b::I, c::I): B-symbol: scalar (in case of UniqueFusion/SimpleFusion) or matrix (in case of GenericFusion)
  • twist(a::I) -> twist of sector a

and optionally, if FusionStyle(I) isa GenericFusion

  • vertex_ind2label(i::Int, a::I, b::I, c::I) -> a custom label for the ith copy of c appearing in a ⊗ b

Furthermore, iterate and Base.IteratorSize should be made to work for the singleton type SectorValues{I}.

source
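As an illustration of the required interface above, here is a sketch for a hypothetical sector type with ℤ₂-like fusion rules. TensorKit already ships Z2Irrep; the type SwapCharge below is purely illustrative and not part of the package:

```julia
using TensorKit
import TensorKit: ⊗, Nsymbol, Fsymbol, Rsymbol, FusionStyle, BraidingStyle
import Base: one, conj, iterate, length, getindex, IteratorSize

# Hypothetical sector with ℤ₂-like fusion rules, for illustration only
struct SwapCharge <: Sector
    n::Bool
end

one(::Type{SwapCharge}) = SwapCharge(false)                  # unit element
conj(a::SwapCharge) = a                                      # self-dual labels
⊗(a::SwapCharge, b::SwapCharge) = (SwapCharge(a.n ⊻ b.n),)   # single fusion output
Nsymbol(a::SwapCharge, b::SwapCharge, c::SwapCharge) = c.n == (a.n ⊻ b.n)
FusionStyle(::Type{SwapCharge}) = UniqueFusion()
BraidingStyle(::Type{SwapCharge}) = Bosonic()
# Trivial F- and R-symbols, zero whenever the labels are inconsistent
Fsymbol(a::SwapCharge, b::SwapCharge, c::SwapCharge,
        d::SwapCharge, e::SwapCharge, f::SwapCharge) =
    Float64(Nsymbol(a, b, e) && Nsymbol(e, c, d) &&
            Nsymbol(b, c, f) && Nsymbol(a, f, d))
Rsymbol(a::SwapCharge, b::SwapCharge, c::SwapCharge) = Float64(Nsymbol(a, b, c))

# Iteration over the two possible values, so that values(SwapCharge) works
IteratorSize(::Type{SectorValues{SwapCharge}}) = Base.HasLength()
length(::SectorValues{SwapCharge}) = 2
iterate(::SectorValues{SwapCharge}, i = 0) =
    i < 2 ? (SwapCharge(isodd(i)), i + 1) : nothing
getindex(::SectorValues{SwapCharge}, i::Int) = SwapCharge(isodd(i - 1))
TensorKit.findindex(::SectorValues{SwapCharge}, c::SwapCharge) = c.n ? 2 : 1
```

With these definitions, SwapCharge(true) ⊗ SwapCharge(true) yields the trivial sector, and values(SwapCharge) iterates over both labels.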
TensorKit.SectorValuesType
struct SectorValues{I<:Sector}

Singleton type to represent an iterator over the possible values of type I, whose instance is obtained as values(I). For a new I::Sector, the following should be defined

  • Base.iterate(::SectorValues{I}[, state]): iterate over the values
  • Base.IteratorSize(::Type{SectorValues{I}}): HasLength(), SizeUnknown() or IsInfinite() depending on whether the number of values of type I is finite (and sufficiently small) or infinite; for a large number of values, SizeUnknown() is recommended because this will trigger the use of GenericGradedSpace.

If IteratorSize(I) == HasLength(), also the following must be implemented:

  • Base.length(::SectorValues{I}): the number of different values
  • Base.getindex(::SectorValues{I}, i::Int): a mapping between an index i and an instance of I
  • findindex(::SectorValues{I}, c::I): reverse mapping between a value c::I and an index i::Integer ∈ 1:length(values(I))
source
TensorKit.FusionStyleType
FusionStyle(a::Sector) -> ::FusionStyle
FusionStyle(I::Type{<:Sector}) -> ::FusionStyle

Return the type of fusion behavior of sectors of type I, which can be either

  • UniqueFusion(): single fusion output when fusing two sectors;
  • SimpleFusion(): multiple outputs, but every output occurs at most once, also known as multiplicity free (e.g. irreps of $SU(2)$);
  • GenericFusion(): multiple outputs that can occur more than once (e.g. irreps of $SU(3)$).

There is an abstract supertype MultipleFusion of which both SimpleFusion and GenericFusion are subtypes. Furthermore, there is a type alias MultiplicityFreeFusion for those fusion types which do not require multiplicity labels, i.e. MultiplicityFreeFusion = Union{UniqueFusion,SimpleFusion}.

source
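For the sectors defined in TensorKit itself, for example:

```julia
using TensorKit

FusionStyle(Irrep[U₁])       # UniqueFusion(): charges simply add
FusionStyle(Irrep[SU₂])      # SimpleFusion(): several outputs, multiplicity free
FusionStyle(FibonacciAnyon)  # SimpleFusion()

FusionStyle(Irrep[SU₂]) isa MultiplicityFreeFusion  # true
```
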
TensorKit.BraidingStyleType
BraidingStyle(::Sector) -> ::BraidingStyle
BraidingStyle(I::Type{<:Sector}) -> ::BraidingStyle

Return the type of braiding and twist behavior of sectors of type I, which can be either

  • Bosonic(): symmetric braiding with trivial twist (i.e. identity)
  • Fermionic(): symmetric braiding with non-trivial twist (squares to identity)
  • Anyonic(): general $R^{ab}_c$ phase or matrix (depending on SimpleFusion or GenericFusion) and arbitrary twists

Note that Bosonic and Fermionic are subtypes of SymmetricBraiding, which means that braids are in fact equivalent to crossings (i.e. braiding twice is an identity: isone(Rsymbol(b,a,c)*Rsymbol(a,b,c)) == true) and permutations are uniquely defined.

source
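For example, for the sectors shipped with TensorKit:

```julia
using TensorKit

BraidingStyle(Irrep[SU₂])     # Bosonic(): group irreps braid symmetrically
BraidingStyle(FermionParity)  # Fermionic(): symmetric braiding, nontrivial twist
BraidingStyle(FibonacciAnyon) # Anyonic()
```
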
TensorKit.AbstractIrrepType
abstract type AbstractIrrep{G<:Group} <: Sector end

Abstract supertype for sectors which correspond to irreps (irreducible representations) of a group G. As we assume unitary representations, these would be finite groups or compact Lie groups. Note that this could also include projective rather than linear representations.

Actual concrete implementations of those irreps can be obtained as Irrep[G], or via their actual name, which generically takes the form (asciiG)Irrep, i.e. the ASCII spelling of the group name followed by Irrep.

All irreps have BraidingStyle equal to Bosonic() and thus trivial twists.

source
TensorKit.ZNIrrepType
ZNIrrep{N}(n::Integer)
Irrep[ℤ{N}](n::Integer)

Represents irreps of the group $ℤ_N$ for some value of N<64. (We need 2*(N-1) <= 127 in order for a ⊗ b to work correctly.) For N equals 2, 3 or 4, ℤ{N} can be replaced by ℤ₂, ℤ₃, ℤ₄, whereas Parity is a synonym for Irrep{ℤ₂}. An arbitrary Integer n can be provided to the constructor, but only the value mod(n, N) is relevant.

source
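For example:

```julia
using TensorKit

a = Irrep[ℤ₃](2)           # equivalently Z3Irrep(2)
Irrep[ℤ₃](5) == a          # true: only mod(n, N) is relevant
only(a ⊗ a)                # Z3Irrep(1): charges add modulo N
conj(a)                    # Z3Irrep(1): the dual label is the inverse charge
```
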
TensorKit.U1IrrepType
U1Irrep(j::Real)
Irrep[U₁](j::Real)

Represents irreps of the group $U₁$. The irrep is labelled by a charge, which should be an integer for a linear representation. However, it is often useful to allow half integers to represent irreps of U₁ subgroups of SU₂, such as the Sz of a spin-1/2 system. Hence, the charge is stored as a HalfInt from the package HalfIntegers.jl, but can be entered as an arbitrary Real. The sequence of the charges is: 0, 1/2, -1/2, 1, -1, ...

source
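For example:

```julia
using TensorKit

s = U1Irrep(1//2)           # charge stored as a HalfInt
conj(s)                     # U1Irrep(-1/2): the dual has opposite charge
only(s ⊗ U1Irrep(1//2))     # U1Irrep(1): charges add under fusion
```
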
TensorKit.SU2IrrepType
SU2Irrep(j::Real)
Irrep[SU₂](j::Real)

Represents irreps of the group $SU₂$. The irrep is labelled by a half integer j which can be entered as an arbitrary Real, but is stored as a HalfInt from the HalfIntegers.jl package.

source
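For example, the familiar Clebsch-Gordan decomposition of two spin-1/2 irreps:

```julia
using TensorKit

j = SU2Irrep(1//2)
collect(j ⊗ j)       # spin 1/2 ⊗ 1/2 = 0 ⊕ 1
dim(SU2Irrep(1))     # 3, i.e. 2j + 1
```
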
TensorKit.CU1IrrepType
CU1Irrep(j, s = ifelse(j>zero(j), 2, 0))
Irrep[CU₁](j, s = ifelse(j>zero(j), 2, 0))

Represents irreps of the group $U₁ ⋊ C$ ($U₁$ and charge conjugation or reflection), which is also known as just O₂. The irrep is labelled by a positive half integer j (the $U₁$ charge) and an integer s indicating the behaviour under charge conjugation. They take values:

  • if j == 0, s = 0 (trivial charge conjugation) or s = 1 (non-trivial charge conjugation)
  • if j > 0, s = 2 (two-dimensional representation)
source
TensorKit.FermionParityType
FermionParity <: Sector

Represents sectors with fermion parity. The fermion parity is a ℤ₂ quantum number that yields an additional sign when two odd fermions are exchanged.

See also: FermionNumber, FermionSpin

source
TensorKit.FibonacciAnyonType
struct FibonacciAnyon <: Sector
FibonacciAnyon(s::Symbol)

Represents the anyons (isomorphism classes of simple objects) of the Fibonacci fusion category. It can take two values, corresponding to the trivial sector FibonacciAnyon(:I) and the non-trivial sector FibonacciAnyon(:τ) with fusion rules $τ ⊗ τ = 1 ⊕ τ$.

source
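For example, the non-trivial Fibonacci anyon has the golden ratio as quantum dimension:

```julia
using TensorKit

τ = FibonacciAnyon(:τ)
collect(τ ⊗ τ)       # two outputs: the trivial sector and τ itself
dim(τ)               # (1 + √5)/2 ≈ 1.618
```
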
TensorKit.FusionTreeType
struct FusionTree{I, N, M, L, T}

Represents a fusion tree of sectors of type I<:Sector, fusing (or splitting) N uncoupled sectors to a coupled sector. (It actually represents a splitting tree, but fusion tree is the more common term.) The isdual field indicates whether an isomorphism is present (if the corresponding value is true) or not. The field uncoupled contains the sectors coming out of the splitting trees, before the possible 𝑍 isomorphism. This fusion tree has M = max(0, N-2) inner lines. Furthermore, for FusionStyle(I) isa GenericFusion, the L = max(0, N-1) corresponding vertices carry a label of type T. If FusionStyle(I) isa MultiplicityFreeFusion, T = Nothing.

source

Useful constants

TensorKit.IrrepConstant
const Irrep

A constant of a singleton type used as Irrep[G] with G<:Group a type of group, to construct or obtain a concrete subtype of AbstractIrrep{G} that implements the data structure used to represent irreducible representations of the group G.

source

Methods for defining and characterizing Sector subtypes

Base.oneMethod
one(::Sector) -> Sector
one(::Type{<:Sector}) -> Sector

Return the unit element within this type of sector.

source
TensorKit.NsymbolFunction
Nsymbol(a::I, b::I, c::I) where {I<:Sector} -> Integer

Return an Integer representing the number of times c appears in the fusion product a ⊗ b. It could be a Bool if FusionStyle(I) is UniqueFusion() or SimpleFusion().

source
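For example, for SU₂ irreps (SimpleFusion, so Nsymbol returns a Bool):

```julia
using TensorKit

a = SU2Irrep(1//2)
Nsymbol(a, a, SU2Irrep(1))     # true: spin 1 occurs (once) in 1/2 ⊗ 1/2
Nsymbol(a, a, SU2Irrep(3//2))  # false: spin 3/2 is not a fusion output
```
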
TensorKit.FsymbolFunction
Fsymbol(a::I, b::I, c::I, d::I, e::I, f::I) where {I<:Sector}

Return the F-symbol $F^{abc}_d$ that associates the two different fusion orders of sectors a, b and c into an output sector d, using either an intermediate sector $a ⊗ b → e$ or $b ⊗ c → f$:

a-<-μ-<-e-<-ν-<-d                                     a-<-λ-<-d
     ∨       ∨       -> Fsymbol(a,b,c,d,e,f)[μ,ν,κ,λ]      ∨
     b       c                                             f
                                                           v
                                                       b-<-κ
                                                           ∨
                                                           c

If FusionStyle(I) is UniqueFusion or SimpleFusion, the F-symbol is a number. Otherwise it is a rank 4 array of size (Nsymbol(a, b, e), Nsymbol(e, c, d), Nsymbol(b, c, f), Nsymbol(a, f, d)).

source
TensorKit.RsymbolFunction
Rsymbol(a::I, b::I, c::I) where {I<:Sector}

Returns the R-symbol $R^{ab}_c$ that maps between $c → a ⊗ b$ and $c → b ⊗ a$ as in

a -<-μ-<- c                                 b -<-ν-<- c
      ∨          -> Rsymbol(a,b,c)[μ,ν]           v
      b                                           a

If FusionStyle(I) is UniqueFusion() or SimpleFusion(), the R-symbol is a number. Otherwise it is a square matrix with row and column size Nsymbol(a,b,c) == Nsymbol(b,a,c).

source
TensorKit.BsymbolFunction
Bsymbol(a::I, b::I, c::I) where {I<:Sector}

Return the value of $B^{ab}_c$ which appears in transforming a splitting vertex into a fusion vertex using the transformation

a -<-μ-<- c                                                    a -<-ν-<- c
      ∨          -> √(dim(c)/dim(a)) * Bsymbol(a,b,c)[μ,ν]           ∧
      b                                                            dual(b)

If FusionStyle(I) is UniqueFusion() or SimpleFusion(), the B-symbol is a number. Otherwise it is a square matrix with row and column size Nsymbol(a, b, c) == Nsymbol(c, dual(b), a).

source
TensorKit.twistFunction
twist(a::Sector)

Return the twist of a sector a

source
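For example, bosonic sectors have trivial twist, while odd fermion parity twists by a sign:

```julia
using TensorKit

twist(SU2Irrep(1))       # 1: bosonic sectors have trivial twist
twist(FermionParity(1))  # -1: odd fermion parity
```
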
twist(t::AbstractTensorMap, i::Int; inv::Bool=false)
    -> t

Apply a twist to the ith index of t and return the result as a new tensor. If inv=true, use the inverse twist.

See twist! for storing the result in place.

source
Base.isrealMethod
isreal(::Type{<:Sector}) -> Bool

Return whether the topological data (Fsymbol, Rsymbol) of the sector is real; if not, it is complex.

source
TensorKit.vertex_ind2labelFunction
vertex_ind2label(k::Int, a::I, b::I, c::I) where {I<:Sector}

Convert the index k of the fusion vertex (a,b)->c into a label. For FusionStyle(I) == UniqueFusion() or FusionStyle(I) == SimpleFusion(), where every fusion output occurs only once and k == 1, the default is to suppress vertex labels by setting them equal to nothing. For FusionStyle(I) == GenericFusion(), the default is to just use k, unless a specialized method is provided.

source
TensorKit.:⊠Method
⊠(s₁::Sector, s₂::Sector)
deligneproduct(s₁::Sector, s₂::Sector)

Given two sectors s₁ and s₂, which label an isomorphism class of simple objects in the fusion categories $C₁$ and $C₂$, s₁ ⊠ s₂ (obtained as \boxtimes + TAB) labels the isomorphism class of simple objects in the Deligne tensor product category $C₁ ⊠ C₂$.

The Deligne tensor product also works in the type domain and for spaces and tensors. For group representations, we have Irrep[G₁] ⊠ Irrep[G₂] == Irrep[G₁ × G₂].

source
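For example, combining a ℤ₂ irrep with a U₁ irrep into a sector of the product symmetry:

```julia
using TensorKit

s = Z2Irrep(1) ⊠ U1Irrep(1//2)   # a simple object of ℤ₂ ⊠ U₁
one(s) == one(Z2Irrep(1)) ⊠ one(U1Irrep(1//2))  # units combine componentwise
```
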

Methods for manipulating fusion trees or pairs of fusion-splitting trees

The main method for manipulating a fusion-splitting tree pair is

TensorKit.braidMethod
braid(f₁::FusionTree{I}, f₂::FusionTree{I},
         levels1::IndexTuple, levels2::IndexTuple,
         p1::IndexTuple{N₁}, p2::IndexTuple{N₂}) where {I<:Sector, N₁, N₂}
-> <:AbstractDict{Tuple{FusionTree{I, N₁}, FusionTree{I, N₂}}, <:Number}

Input is a fusion-splitting tree pair that describes the fusion of a set of incoming uncoupled sectors to a set of outgoing uncoupled sectors, represented using the splitting tree f₁ and fusion tree f₂, such that the incoming sectors f₂.uncoupled are fused to f₁.coupled == f₂.coupled and then split into the outgoing sectors f₁.uncoupled. Compute new trees and corresponding coefficients obtained from repartitioning and braiding the tree such that sectors p1 become outgoing and sectors p2 become incoming. The uncoupled indices in splitting tree f₁ and fusion tree f₂ have levels (or depths) levels1 and levels2 respectively, which determine how indices braid. In particular, if i and j cross, $τ_{i,j}$ is applied if levels[i] < levels[j] and $τ_{j,i}^{-1}$ if levels[i] > levels[j]. This does not allow encoding the most general braid, but a general braid can be obtained by combining such operations.

source

which, for BraidingStyle(I) isa SymmetricBraiding, simplifies to

TensorKit.permuteMethod
permute(f₁::FusionTree{I}, f₂::FusionTree{I},
         p1::NTuple{N₁, Int}, p2::NTuple{N₂, Int}) where {I, N₁, N₂}
-> <:AbstractDict{Tuple{FusionTree{I, N₁}, FusionTree{I, N₂}}, <:Number}

Input is a double fusion tree that describes the fusion of a set of incoming uncoupled sectors to a set of outgoing uncoupled sectors, represented using the individual trees of outgoing (f₁) and incoming sectors (f₂) respectively (with identical coupled sector f₁.coupled == f₂.coupled). Computes new trees and corresponding coefficients obtained from repartitioning and permuting the tree such that sectors p1 become outgoing and sectors p2 become incoming.

source

These operations are implemented by composing the following more elementary manipulations

TensorKit.braidMethod
braid(f::FusionTree{<:Sector, N}, levels::NTuple{N, Int}, p::NTuple{N, Int})
-> <:AbstractDict{typeof(f), <:Number}

Perform a braiding of the uncoupled indices of the fusion tree f and return the result as a <:AbstractDict of output trees and corresponding coefficients. The braiding is determined by specifying that the new sector at position k corresponds to the sector that was originally at the position i = p[k], and assigning to every index i of the original fusion tree a distinct level or depth levels[i]. This permutation is then decomposed into elementary swaps between neighbouring indices, where the swaps are applied as braids such that if i and j cross, $τ_{i,j}$ is applied if levels[i] < levels[j] and $τ_{j,i}^{-1}$ if levels[i] > levels[j]. This does not allow encoding the most general braid, but a general braid can be obtained by combining such operations.

source
TensorKit.permuteMethod
permute(f::FusionTree, p::NTuple{N, Int}) -> <:AbstractDict{typeof(f), <:Number}

Perform a permutation of the uncoupled indices of the fusion tree f and return the result as a <:AbstractDict of output trees and corresponding coefficients; this requires that BraidingStyle(sectortype(f)) isa SymmetricBraiding.

source
TensorKit.repartitionFunction
repartition(f₁::FusionTree{I, N₁}, f₂::FusionTree{I, N₂}, N::Int) where {I, N₁, N₂}
-> <:AbstractDict{Tuple{FusionTree{I, N}, FusionTree{I, N₁+N₂-N}}, <:Number}

Input is a double fusion tree that describes the fusion of a set of incoming uncoupled sectors to a set of outgoing uncoupled sectors, represented using the individual trees of outgoing (f₁) and incoming sectors (f₂) respectively (with identical coupled sector f₁.coupled == f₂.coupled). Computes new trees and corresponding coefficients obtained from repartitioning the tree by bending incoming to outgoing sectors (or vice versa) in order to have N outgoing sectors.

source
TensorKit.artin_braidFunction
artin_braid(f::FusionTree, i; inv::Bool = false) -> <:AbstractDict{typeof(f), <:Number}

Perform an elementary braid (Artin generator) of neighbouring uncoupled indices i and i+1 on a fusion tree f, and return the result as a dictionary of output trees and corresponding coefficients.

The keyword inv determines whether index i will braid above or below index i+1, i.e. applying artin_braid(f′, i; inv = true) to all the outputs f′ of artin_braid(f, i; inv = false) and collecting the results should yield a single fusion tree with non-zero coefficient, namely f with coefficient 1. This keyword has no effect if BraidingStyle(sectortype(f)) isa SymmetricBraiding.

source
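A minimal sketch of the inverse property described above, again assuming the convenience FusionTree constructor for abelian (unique-fusion) sectors; the coefficient element type is an assumption for illustration.

```julia
using TensorKit

a, b, c = Irrep[U₁](1), Irrep[U₁](0), Irrep[U₁](2)
f = FusionTree((a, b, c), Irrep[U₁](3))   # unique fusion for abelian sectors

# Elementary braid of uncoupled indices 1 and 2.
d = TensorKit.artin_braid(f, 1)

# Undoing it with inv = true should recover f with total coefficient 1.
total = Dict{typeof(f), ComplexF64}()
for (f′, c1) in d
    for (f″, c2) in TensorKit.artin_braid(f′, 1; inv = true)
        total[f″] = get(total, f″, zero(ComplexF64)) + c1 * c2
    end
end
# `total` should contain the single entry f => 1.
```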

Finally, there are some additional manipulations for internal use

TensorKit.insertatFunction
insertat(f₁::FusionTree{I, N₁}, i::Int, f₂::FusionTree{I, N₂})
-> <:AbstractDict{<:FusionTree{I, N₁+N₂-1}, <:Number}

Attach a fusion tree f₂ to the uncoupled leg i of the fusion tree f₁ and bring it into a linear combination of fusion trees in standard form. This requires that f₂.coupled == f₁.uncoupled[i] and f₁.isdual[i] == false.

source
TensorKit.splitFunction
split(f::FusionTree{I, N}, M::Int)
-> (::FusionTree{I, M}, ::FusionTree{I, N-M+1})

Split a fusion tree into two. The first tree has as uncoupled sectors the first M uncoupled sectors of the input tree f, whereas its coupled sector corresponds to the internal sector between uncoupled sectors M and M+1 of the original tree f. The second tree has as its first uncoupled sector that same internal sector of f, followed by the remaining N-M uncoupled sectors of f. It couples to the same sector as f. This operation is the inverse of insertat in the sense that if f₁, f₂ = split(f, M), then f == insertat(f₂, 1, f₁).

source
TensorKit.mergeFunction
merge(f₁::FusionTree{I, N₁}, f₂::FusionTree{I, N₂}, c::I, μ = nothing)
-> <:AbstractDict{<:FusionTree{I, N₁+N₂}, <:Number}

Merge two fusion trees into a linear combination of fusion trees whose uncoupled sectors are those of f₁ followed by those of f₂, and where the two coupled sectors of f₁ and f₂ are further fused to c. In the case of FusionStyle(I) == GenericFusion(), a degeneracy label μ for the fusion of the coupled sectors of f₁ and f₂ to c must also be specified.

source

Vector spaces

Type hierarchy

TensorKit.FieldType
abstract type Field end

Abstract type at the top of the type hierarchy for denoting fields over which vector spaces (or more generally, linear categories) can be defined. Two common fields are ℝ and ℂ, representing the fields of real and complex numbers respectively.

source
TensorKit.VectorSpaceType
abstract type VectorSpace end

Abstract type at the top of the type hierarchy for denoting vector spaces, or, more accurately, 𝕜-linear categories. All instances of subtypes of VectorSpace will represent objects in 𝕜-linear monoidal categories.

source
TensorKit.ElementarySpaceType
abstract type ElementarySpace <: VectorSpace end

Elementary finite-dimensional vector space over a field that can be used as the index space corresponding to the indices of a tensor. ElementarySpace is a supertype for all vector spaces (objects) that can be associated with the individual indices of a tensor, as hinted to by its alias IndexSpace.

Every elementary vector space should respond to the methods conj and dual, returning the complex conjugate space and the dual space respectively. The complex conjugate of the dual space is obtained as dual(conj(V)) === conj(dual(V)). These different spaces should be of the same type, so that a tensor can be defined as an element of a homogeneous tensor product of these spaces.

source
TensorKit.GeneralSpaceType
struct GeneralSpace{𝕜} <: ElementarySpace

A finite-dimensional space over an arbitrary field 𝕜 without additional structure. It is thus characterized by its dimension, and whether or not it is the dual and/or conjugate space. For a real field 𝕜, the space and its conjugate are the same.

source
TensorKit.CartesianSpaceType
struct CartesianSpace <: ElementarySpace

A real Euclidean space ℝ^d, which is therefore self-dual. CartesianSpace has no additional structure and is completely characterised by its dimension d. This is the vector space that is implicitly assumed in most of matrix algebra.

source
TensorKit.ComplexSpaceType
struct ComplexSpace <: ElementarySpace

A standard complex vector space ℂ^d with Euclidean inner product and no additional structure. It is completely characterised by its dimension and whether it is the normal space or its dual (which is canonically isomorphic to the conjugate space).

source
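A quick hypothetical sketch of the elementary spaces above, assuming TensorKit is loaded; ℂ^d and ℝ^d are the documented shorthands for ComplexSpace(d) and CartesianSpace(d).

```julia
using TensorKit

V = ℂ^3             # ComplexSpace(3)
W = ℝ^2             # CartesianSpace(2), self-dual

dim(V)              # total dimension: 3
field(V)            # the underlying field ℂ
V′ = dual(V)        # the dual space, also written V'
dual(V′) == V       # true: taking the dual twice is the identity
```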
TensorKit.GradedSpaceType
struct GradedSpace{I<:Sector, D} <: ElementarySpace
     dims::D
     dual::Bool
end

A complex Euclidean space with a direct sum structure corresponding to labels in a set I, the objects of which have the structure of a monoid with respect to a monoidal product ⊗. In practice, we restrict the label set to be a set of superselection sectors of type I<:Sector, e.g. the set of distinct irreps of a finite or compact group, or the isomorphism classes of simple objects of a unitary and pivotal (pre-)fusion category.

Here dims represents the degeneracy or multiplicity of every sector.

The data structure D of dims will depend on the result of Base.IteratorSize(values(I)); if the result is of type HasLength or HasShape, dims will be stored in an NTuple{N,Int} with N = length(values(I)). This requires that a sector s::I can be transformed into an index via s == getindex(values(I), i) and i == findindex(values(I), s). If Base.IteratorSize(values(I)) results in IsInfinite() or SizeUnknown(), a SectorDict{I,Int} is used to store the non-zero degeneracy dimensions with the corresponding sector as key. The parameter D is hidden from the user and should typically be of no concern.

The concrete type GradedSpace{I,D} with correct D can be obtained as Vect[I], or if I == Irrep[G] for some G<:Group, as Rep[G].

source
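An illustrative sketch (assuming TensorKit is loaded) of constructing a GradedSpace via the Vect constant documented below, here for U(1) charges:

```julia
using TensorKit

# A U(1)-graded space: charge 0 with multiplicity 1, charges ±1 with multiplicity 2.
V = Vect[Irrep[U₁]](0 => 1, 1 => 2, -1 => 2)

dim(V)                   # total dimension: 1 + 2 + 2 = 5
dim(V, Irrep[U₁](1))     # degeneracy of the charge-1 sector: 2
collect(sectors(V))      # sectors with non-zero degeneracy
```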
TensorKit.CompositeSpaceType
abstract type CompositeSpace{S<:ElementarySpace} <: VectorSpace end

Abstract type for composite spaces that are defined in terms of a number of elementary vector spaces of a homogeneous type S<:ElementarySpace.

source
TensorKit.ProductSpaceType
struct ProductSpace{S<:ElementarySpace, N} <: CompositeSpace{S}

A ProductSpace is a tensor product space of N vector spaces of type S<:ElementarySpace. Only tensor products between ElementarySpace objects of the same type are allowed.

source
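A short sketch of building a ProductSpace with the tensor product operator ⊗, assuming TensorKit is loaded:

```julia
using TensorKit

P = ℂ^2 ⊗ ℂ^3 ⊗ ℂ^4   # ProductSpace of three ComplexSpace factors

dims(P)                # (2, 3, 4)
dim(P)                 # 24, the product of the factor dimensions
```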

Useful constants

TensorKit.VectConstant
const Vect

A constant of a singleton type used as Vect[I] with I<:Sector a type of sector, to construct or obtain the concrete type GradedSpace{I,D} instances without having to specify D.

source
TensorKit.RepConstant
const Rep

A constant of a singleton type used as Rep[G] with G<:Group a type of group, to construct or obtain the concrete type GradedSpace{Irrep[G],D} instances without having to specify D. Note that Rep[G] == Vect[Irrep[G]].

See also Irrep and Vect.

source
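The equivalence Rep[G] == Vect[Irrep[G]] noted above can be illustrated as follows (assuming TensorKit is loaded):

```julia
using TensorKit

# Two equivalent ways to build the same SU(2)-graded space:
V1 = Vect[Irrep[SU₂]](0 => 1, 1/2 => 2)
V2 = Rep[SU₂](0 => 1, 1/2 => 2)

V1 == V2   # true: Rep[SU₂] is just Vect[Irrep[SU₂]]
```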
Missing docstring.

Missing docstring for ZNSpace{N}. Check Documenter's build log for details.

TensorKit.Z2SpaceType
struct GradedSpace{I<:Sector, D} <: ElementarySpace
     dims::D
     dual::Bool
end

A complex Euclidean space with a direct sum structure corresponding to labels in a set I, the objects of which have the structure of a monoid with respect to a monoidal product ⊗. In practice, we restrict the label set to be a set of superselection sectors of type I<:Sector, e.g. the set of distinct irreps of a finite or compact group, or the isomorphism classes of simple objects of a unitary and pivotal (pre-)fusion category.

Here dims represents the degeneracy or multiplicity of every sector.

The data structure D of dims will depend on the result of Base.IteratorSize(values(I)); if the result is of type HasLength or HasShape, dims will be stored in an NTuple{N,Int} with N = length(values(I)). This requires that a sector s::I can be transformed into an index via s == getindex(values(I), i) and i == findindex(values(I), s). If Base.IteratorSize(values(I)) results in IsInfinite() or SizeUnknown(), a SectorDict{I,Int} is used to store the non-zero degeneracy dimensions with the corresponding sector as key. The parameter D is hidden from the user and should typically be of no concern.

The concrete type GradedSpace{I,D} with correct D can be obtained as Vect[I], or if I == Irrep[G] for some G<:Group, as Rep[G].

source
TensorKit.Z3SpaceType
struct GradedSpace{I<:Sector, D} <: ElementarySpace
     dims::D
     dual::Bool
end

A complex Euclidean space with a direct sum structure corresponding to labels in a set I, the objects of which have the structure of a monoid with respect to a monoidal product ⊗. In practice, we restrict the label set to be a set of superselection sectors of type I<:Sector, e.g. the set of distinct irreps of a finite or compact group, or the isomorphism classes of simple objects of a unitary and pivotal (pre-)fusion category.

Here dims represents the degeneracy or multiplicity of every sector.

The data structure D of dims will depend on the result of Base.IteratorSize(values(I)); if the result is of type HasLength or HasShape, dims will be stored in an NTuple{N,Int} with N = length(values(I)). This requires that a sector s::I can be transformed into an index via s == getindex(values(I), i) and i == findindex(values(I), s). If Base.IteratorSize(values(I)) results in IsInfinite() or SizeUnknown(), a SectorDict{I,Int} is used to store the non-zero degeneracy dimensions with the corresponding sector as key. The parameter D is hidden from the user and should typically be of no concern.

The concrete type GradedSpace{I,D} with correct D can be obtained as Vect[I], or if I == Irrep[G] for some G<:Group, as Rep[G].

source
TensorKit.Z4SpaceType
struct GradedSpace{I<:Sector, D} <: ElementarySpace
     dims::D
     dual::Bool
end

A complex Euclidean space with a direct sum structure corresponding to labels in a set I, the objects of which have the structure of a monoid with respect to a monoidal product ⊗. In practice, we restrict the label set to be a set of superselection sectors of type I<:Sector, e.g. the set of distinct irreps of a finite or compact group, or the isomorphism classes of simple objects of a unitary and pivotal (pre-)fusion category.

Here dims represents the degeneracy or multiplicity of every sector.

The data structure D of dims will depend on the result of Base.IteratorSize(values(I)); if the result is of type HasLength or HasShape, dims will be stored in an NTuple{N,Int} with N = length(values(I)). This requires that a sector s::I can be transformed into an index via s == getindex(values(I), i) and i == findindex(values(I), s). If Base.IteratorSize(values(I)) results in IsInfinite() or SizeUnknown(), a SectorDict{I,Int} is used to store the non-zero degeneracy dimensions with the corresponding sector as key. The parameter D is hidden from the user and should typically be of no concern.

The concrete type GradedSpace{I,D} with correct D can be obtained as Vect[I], or if I == Irrep[G] for some G<:Group, as Rep[G].

source
TensorKit.U1SpaceType
struct GradedSpace{I<:Sector, D} <: ElementarySpace
     dims::D
     dual::Bool
end

A complex Euclidean space with a direct sum structure corresponding to labels in a set I, the objects of which have the structure of a monoid with respect to a monoidal product ⊗. In practice, we restrict the label set to be a set of superselection sectors of type I<:Sector, e.g. the set of distinct irreps of a finite or compact group, or the isomorphism classes of simple objects of a unitary and pivotal (pre-)fusion category.

Here dims represents the degeneracy or multiplicity of every sector.

The data structure D of dims will depend on the result of Base.IteratorSize(values(I)); if the result is of type HasLength or HasShape, dims will be stored in an NTuple{N,Int} with N = length(values(I)). This requires that a sector s::I can be transformed into an index via s == getindex(values(I), i) and i == findindex(values(I), s). If Base.IteratorSize(values(I)) results in IsInfinite() or SizeUnknown(), a SectorDict{I,Int} is used to store the non-zero degeneracy dimensions with the corresponding sector as key. The parameter D is hidden from the user and should typically be of no concern.

The concrete type GradedSpace{I,D} with correct D can be obtained as Vect[I], or if I == Irrep[G] for some G<:Group, as Rep[G].

source
TensorKit.SU2SpaceType
struct GradedSpace{I<:Sector, D} <: ElementarySpace
     dims::D
     dual::Bool
end

A complex Euclidean space with a direct sum structure corresponding to labels in a set I, the objects of which have the structure of a monoid with respect to a monoidal product ⊗. In practice, we restrict the label set to be a set of superselection sectors of type I<:Sector, e.g. the set of distinct irreps of a finite or compact group, or the isomorphism classes of simple objects of a unitary and pivotal (pre-)fusion category.

Here dims represents the degeneracy or multiplicity of every sector.

The data structure D of dims will depend on the result of Base.IteratorSize(values(I)); if the result is of type HasLength or HasShape, dims will be stored in an NTuple{N,Int} with N = length(values(I)). This requires that a sector s::I can be transformed into an index via s == getindex(values(I), i) and i == findindex(values(I), s). If Base.IteratorSize(values(I)) results in IsInfinite() or SizeUnknown(), a SectorDict{I,Int} is used to store the non-zero degeneracy dimensions with the corresponding sector as key. The parameter D is hidden from the user and should typically be of no concern.

The concrete type GradedSpace{I,D} with correct D can be obtained as Vect[I], or if I == Irrep[G] for some G<:Group, as Rep[G].

source
TensorKit.CU1SpaceType
struct GradedSpace{I<:Sector, D} <: ElementarySpace
     dims::D
     dual::Bool
end

A complex Euclidean space with a direct sum structure corresponding to labels in a set I, the objects of which have the structure of a monoid with respect to a monoidal product ⊗. In practice, we restrict the label set to be a set of superselection sectors of type I<:Sector, e.g. the set of distinct irreps of a finite or compact group, or the isomorphism classes of simple objects of a unitary and pivotal (pre-)fusion category.

Here dims represents the degeneracy or multiplicity of every sector.

The data structure D of dims will depend on the result of Base.IteratorSize(values(I)); if the result is of type HasLength or HasShape, dims will be stored in an NTuple{N,Int} with N = length(values(I)). This requires that a sector s::I can be transformed into an index via s == getindex(values(I), i) and i == findindex(values(I), s). If Base.IteratorSize(values(I)) results in IsInfinite() or SizeUnknown(), a SectorDict{I,Int} is used to store the non-zero degeneracy dimensions with the corresponding sector as key. The parameter D is hidden from the user and should typically be of no concern.

The concrete type GradedSpace{I,D} with correct D can be obtained as Vect[I], or if I == Irrep[G] for some G<:Group, as Rep[G].

source

Methods

Methods often apply similarly to spaces and to the corresponding tensors or tensor maps, e.g.:

TensorKit.fieldFunction
field(V::VectorSpace) -> Field

Return the field type over which a vector space is defined.

source
TensorKit.sectortypeFunction
sectortype(a) -> Type{<:Sector}

Return the type of sector over which object a (e.g. a representation space or a tensor) is defined. Also works in type domain.

source
TensorKit.sectorsFunction
sectors(V::ElementarySpace)

Return an iterator over the different sectors of V.

source
sectors(P::ProductSpace{S, N}) where {S<:ElementarySpace}

Return an iterator over all possible combinations of sectors (represented as an NTuple{N, sectortype(S)}) that can appear within the tensor product space P.

source
TensorKit.hassectorFunction
hassector(V::VectorSpace, a::Sector) -> Bool

Return whether a vector space V has a subspace corresponding to sector a with non-zero dimension, i.e. dim(V, a) > 0.

source
hassector(P::ProductSpace{S, N}, s::NTuple{N, sectortype(S)}) where {S<:ElementarySpace}
--> Bool

Query whether P has a non-zero degeneracy of sector s, representing a combination of sectors on the individual tensor indices.

source
TensorKit.dimFunction
dim(V::VectorSpace) -> Int

Return the total dimension of the vector space V as an Int.

source
TensorKit.dimsFunction
dims(::ProductSpace{S, N}) -> Dims{N} = NTuple{N, Int}

Return the dimensions of the spaces in the tensor product space as a tuple of integers.

source
dims(P::ProductSpace{S, N}, s::NTuple{N, sectortype(S)}) where {S<:ElementarySpace}
--> Dims{N} = NTuple{N, Int}

Return the degeneracy dimensions corresponding to a tuple of sectors s for each of the spaces in the tensor product P.

source
TensorKit.blocksectorsFunction
blocksectors(P::ProductSpace)

Return an iterator over the different unique coupled sector labels, i.e. the different fusion outputs that can be obtained by fusing the sectors present in the different spaces that make up the ProductSpace instance.

source
blocksectors(W::HomSpace)

Return an iterator over the different unique coupled sector labels, i.e. the intersection of the different fusion outputs that can be obtained by fusing the sectors present in the domain, as well as from the codomain.

See also hasblock.

source
TensorKit.blockdimFunction
blockdim(P::ProductSpace, c::Sector)

Return the total dimension of a coupled sector c in the product space, by summing over all dim(P, s) for all tuples of sectors s::NTuple{N, <:Sector} that can fuse to c, counted with the correct multiplicity (i.e. number of ways in which s can fuse to c).

See also hasblock and blocksectors.

source

The following methods act specifically on ElementarySpace spaces:

TensorKit.isdualFunction
isdual(V::ElementarySpace) -> Bool

Return whether an ElementarySpace V is normal or rather a dual space. Always returns false for spaces where V == dual(V).

source
TensorKit.dualFunction
dual(V::VectorSpace) -> VectorSpace

Return the dual space of V; also obtained via V'. This should satisfy dual(dual(V)) == V. It is assumed that typeof(V) == typeof(V').

source
Base.conjFunction
conj(V::S) where {S<:ElementarySpace} -> S

Return the conjugate space of V. This should satisfy conj(conj(V)) == V.

For field(V)==ℝ, conj(V) == V. It is assumed that typeof(V) == typeof(conj(V)).

source
TensorKit.flipFunction
flip(V::S) where {S<:ElementarySpace} -> S

Return a single vector space of type S that has the same value of isdual as dual(V), yet is isomorphic to V rather than to dual(V). The spaces flip(V) and dual(V) only differ in the case of GradedSpace{I}.

source
TensorKit.:⊕Function
⊕(V₁::S, V₂::S, V₃::S...) where {S<:ElementarySpace} -> S

Return the corresponding vector space of type S that represents the direct sum of the spaces V₁, V₂, ... Note that all the individual spaces should have the same value for isdual, as otherwise the direct sum is not defined.

source
Base.oneunitFunction
oneunit(V::S) where {S<:ElementarySpace} -> S

Return the corresponding vector space of type S that represents the trivial one-dimensional space, i.e. the space that is isomorphic to the corresponding field. Note that this is different from one(V::S), which returns the empty product space ProductSpace{S,0}(()).

source
TensorKit.supremumFunction
supremum(V₁::ElementarySpace, V₂::ElementarySpace, V₃::ElementarySpace...)

Return the supremum of a number of elementary spaces, i.e. an instance V::ElementarySpace such that V ≿ V₁, V ≿ V₂, ... and no other W ≺ V has this property. This requires that all arguments have the same value of isdual(), and the return value V will also have that value.

source
TensorKit.infimumFunction
infimum(V₁::ElementarySpace, V₂::ElementarySpace, V₃::ElementarySpace...)

Return the infimum of a number of elementary spaces, i.e. an instance V::ElementarySpace such that V ≾ V₁, V ≾ V₂, ... and no other W ≻ V has this property. This requires that all arguments have the same value of isdual(), and the return value V will also have that value.

source

while the following also work on both ElementarySpace and ProductSpace

TensorKit.fuseFunction
fuse(V₁::S, V₂::S, V₃::S...) where {S<:ElementarySpace} -> S
-fuse(P::ProductSpace{S}) where {S<:ElementarySpace} -> S

Return a single vector space of type S that is isomorphic to the fusion product of the individual spaces V₁, V₂, ..., or the spaces contained in P.

source
TensorKit.:⊗Function
⊗(V₁::S, V₂::S, V₃::S...) where {S<:ElementarySpace} -> S

Create a ProductSpace{S}(V₁, V₂, V₃...) representing the tensor product of several elementary vector spaces. For convenience, Julia's regular multiplication operator * applied to vector spaces has the same effect.

The tensor product structure is preserved, see fuse for returning a single elementary space of type S that is isomorphic to this tensor product.

source
TensorKit.:⊠Function
⊠(s₁::Sector, s₂::Sector)
-deligneproduct(s₁::Sector, s₂::Sector)

Given two sectors s₁ and s₂, which label isomorphism classes of simple objects in fusion categories $C₁$ and $C₂$, s₁ ⊠ s₂ (obtained as \boxtimes+TAB) labels the isomorphism class of simple objects in the Deligne tensor product category $C₁ ⊠ C₂$.

The Deligne tensor product also works in the type domain and for spaces and tensors. For group representations, we have Irrep[G₁] ⊠ Irrep[G₂] == Irrep[G₁ × G₂].

source
⊠(V₁::VectorSpace, V₂::VectorSpace)

Given two vector spaces V₁ and V₂ (ElementarySpace or ProductSpace), or thus, objects of corresponding fusion categories $C₁$ and $C₂$, $V₁ ⊠ V₂$ constructs the Deligne tensor product, an object in $C₁ ⊠ C₂$ which is the natural tensor product of those categories. In particular, the corresponding type of sectors (simple objects) is given by sectortype(V₁ ⊠ V₂) == sectortype(V₁) ⊠ sectortype(V₂) and can be thought of as a tuple of the individual sectors.

The Deligne tensor product also works in the type domain and for sectors and tensors. For group representations, we have Rep[G₁] ⊠ Rep[G₂] == Rep[G₁ × G₂], i.e. these are the natural representation spaces of the direct product of two groups.

source
Base.oneFunction
one(::Sector) -> Sector
-one(::Type{<:Sector}) -> Sector

Return the unit element within this type of sector.

source
one(::S) where {S<:ElementarySpace} -> ProductSpace{S, 0}
-one(::ProductSpace{S}) where {S<:ElementarySpace} -> ProductSpace{S, 0}

Return a tensor product of zero spaces of type S, i.e. this is the unit object under the tensor product operation, such that V ⊗ one(V) == V.

source
TensorKit.ismonomorphicFunction
ismonomorphic(V₁::VectorSpace, V₂::VectorSpace)
-V₁ ≾ V₂

Return whether there exist monomorphisms from V₁ to V₂, i.e. 'injective' morphisms with left inverses.

source
TensorKit.isepimorphicFunction
isepimorphic(V₁::VectorSpace, V₂::VectorSpace)
-V₁ ≿ V₂

Return whether there exist epimorphisms from V₁ to V₂, i.e. 'surjective' morphisms with right inverses.

source
TensorKit.isisomorphicFunction
isisomorphic(V₁::VectorSpace, V₂::VectorSpace)
-V₁ ≅ V₂

Return whether V₁ and V₂ are isomorphic, meaning that there exist isomorphisms from V₁ to V₂, i.e. morphisms with left and right inverses.

source
TensorKit.insertunitFunction
insertunit(P::ProductSpace, i::Int = length(P)+1; dual = false, conj = false)

For P::ProductSpace{S,N}, this adds an extra tensor product factor at position 1 <= i <= N+1 (last position by default) which is just the S-equivalent of the underlying field of scalars, i.e. oneunit(S). With the keyword arguments, one can choose to insert the conjugated or dual space instead, which are all isomorphic to the field of scalars.

source
+end

A complex Euclidean space with a direct sum structure corresponding to labels in a set I, the objects of which have the structure of a monoid with respect to a monoidal product ⊗. In practice, we restrict the label set to be a set of superselection sectors of type I<:Sector, e.g. the set of distinct irreps of a finite or compact group, or the isomorphism classes of simple objects of a unitary and pivotal (pre-)fusion category.

Here dims represents the degeneracy or multiplicity of every sector.

The data structure D of dims will depend on the result of Base.IteratorSize(values(I)); if the result is of type HasLength or HasShape, dims will be stored in a NTuple{N,Int} with N = length(values(I)). This requires that a sector s::I can be transformed into an index via s == getindex(values(I), i) and i == findindex(values(I), s). If Base.IteratorSize(values(I)) results in IsInfinite() or SizeUnknown(), a SectorDict{I,Int} is used to store the non-zero degeneracy dimensions with the corresponding sector as key. The parameter D is hidden from the user and should typically be of no concern.

The concrete type GradedSpace{I,D} with correct D can be obtained as Vect[I], or if I == Irrep[G] for some G<:Group, as Rep[G].

source

Methods

Methods often apply similarly to spaces and to the corresponding tensors or tensor maps, e.g.:

TensorKit.fieldFunction
field(V::VectorSpace) -> Field

Return the field type over which a vector space is defined.

source
TensorKit.sectortypeFunction
sectortype(a) -> Type{<:Sector}

Return the type of sector over which object a (e.g. a representation space or a tensor) is defined. Also works in type domain.

source
TensorKit.sectorsFunction
sectors(V::ElementarySpace)

Return an iterator over the different sectors of V.

source
sectors(P::ProductSpace{S, N}) where {S<:ElementarySpace}

Return an iterator over all possible combinations of sectors (represented as an NTuple{N, sectortype(S)}) that can appear within the tensor product space P.

source
TensorKit.hassectorFunction
hassector(V::VectorSpace, a::Sector) -> Bool

Return whether a vector space V has a subspace corresponding to sector a with non-zero dimension, i.e. dim(V, a) > 0.

source
hassector(P::ProductSpace{S, N}, s::NTuple{N, sectortype(S)}) where {S<:ElementarySpace}
+-> Bool

Query whether P has a non-zero degeneracy of sector s, representing a combination of sectors on the individual tensor indices.

source
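For concreteness, a minimal sketch of these sector queries (assuming TensorKit.jl is installed and loaded; the U₁ charges and degeneracies below are arbitrary example values):

```julia
using TensorKit

# A U(1)-graded space: charge 0 with degeneracy 2, charge 1 with degeneracy 3
V = Vect[Irrep[U₁]](0 => 2, 1 => 3)

collect(sectors(V))         # iterator over the distinct sectors present in V
hassector(V, Irrep[U₁](1))  # true: charge 1 occurs with non-zero degeneracy
hassector(V, Irrep[U₁](2))  # false: charge 2 does not occur in V
```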
TensorKit.dimFunction
dim(V::VectorSpace) -> Int

Return the total dimension of the vector space V as an Int.

source
TensorKit.dimsFunction
dims(::ProductSpace{S, N}) -> Dims{N} = NTuple{N, Int}

Return the dimensions of the spaces in the tensor product space as a tuple of integers.

source
dims(P::ProductSpace{S, N}, s::NTuple{N, sectortype(S)}) where {S<:ElementarySpace}
+-> Dims{N} = NTuple{N, Int}

Return the degeneracy dimensions corresponding to a tuple of sectors s for each of the spaces in the tensor product P.

source
TensorKit.blocksectorsFunction
blocksectors(P::ProductSpace)

Return an iterator over the different unique coupled sector labels, i.e. the different fusion outputs that can be obtained by fusing the sectors present in the different spaces that make up the ProductSpace instance.

source
blocksectors(W::HomSpace)

Return an iterator over the different unique coupled sector labels, i.e. the intersection of the different fusion outputs that can be obtained by fusing the sectors present in the domain, as well as from the codomain.

See also hasblock.

source
TensorKit.blockdimFunction
blockdim(P::ProductSpace, c::Sector)

Return the total dimension of a coupled sector c in the product space, by summing over all dim(P, s) for all tuples of sectors s::NTuple{N, <:Sector} that can fuse to c, counted with the correct multiplicity (i.e. number of ways in which s can fuse to c).

See also hasblock and blocksectors.

source
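A small sketch of blocksectors and blockdim on a product of graded spaces (assuming TensorKit.jl is loaded; the charges are arbitrary example values):

```julia
using TensorKit

V = Vect[Irrep[U₁]](0 => 1, 1 => 1)
P = V ⊗ V

collect(blocksectors(P))    # coupled charges reachable by fusion: 0, 1 and 2
blockdim(P, Irrep[U₁](1))   # total dimension of the coupled charge-1 block,
                            # summing over the tuples (0,1) and (1,0)
```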

The following methods act specifically on ElementarySpace spaces:

TensorKit.isdualFunction
isdual(V::ElementarySpace) -> Bool

Return whether an ElementarySpace V is normal or rather a dual space. Always returns false for spaces where V == dual(V).

source
TensorKit.dualFunction
dual(V::VectorSpace) -> VectorSpace

Return the dual space of V; also obtained via V'. This should satisfy dual(dual(V)) == V. It is assumed that typeof(V) == typeof(V').

source
Base.conjFunction
conj(V::S) where {S<:ElementarySpace} -> S

Return the conjugate space of V. This should satisfy conj(conj(V)) == V.

For field(V)==ℝ, conj(V) == V. It is assumed that typeof(V) == typeof(conj(V)).

source
TensorKit.flipFunction
flip(V::S) where {S<:ElementarySpace} -> S

Return a single vector space of type S that has the same value of isdual as dual(V), yet is isomorphic to V rather than to dual(V). The spaces flip(V) and dual(V) only differ in the case of GradedSpace{I}.

source
TensorKit.:⊕Function
⊕(V₁::S, V₂::S, V₃::S...) where {S<:ElementarySpace} -> S

Return the corresponding vector space of type S that represents the direct sum of the spaces V₁, V₂, ... Note that all the individual spaces should have the same value for isdual, as otherwise the direct sum is not defined.

source
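For example, a sketch of the direct sum (assuming TensorKit.jl is loaded; the dimensions and charges are arbitrary example values):

```julia
using TensorKit

# For plain complex Euclidean spaces, ⊕ simply adds dimensions
ℂ^2 ⊕ ℂ^3 == ℂ^5   # true

# For graded spaces, the degeneracies of each sector are merged
W = Vect[Irrep[U₁]](0 => 1) ⊕ Vect[Irrep[U₁]](1 => 2)
# W has charge 0 with degeneracy 1 and charge 1 with degeneracy 2
```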
Base.oneunitFunction
oneunit(V::S) where {S<:ElementarySpace} -> S

Return the corresponding vector space of type S that represents the trivial one-dimensional space, i.e. the space that is isomorphic to the corresponding field. Note that this is different from one(V::S), which returns the empty product space ProductSpace{S,0}(()).

source
TensorKit.supremumFunction
supremum(V₁::ElementarySpace, V₂::ElementarySpace, V₃::ElementarySpace...)

Return the supremum of a number of elementary spaces, i.e. an instance V::ElementarySpace such that V ≿ V₁, V ≿ V₂, ... and no other W ≺ V has this property. This requires that all arguments have the same value of isdual(), and the return value V will also have that value.

source
TensorKit.infimumFunction
infimum(V₁::ElementarySpace, V₂::ElementarySpace, V₃::ElementarySpace...)

Return the infimum of a number of elementary spaces, i.e. an instance V::ElementarySpace such that V ≾ V₁, V ≾ V₂, ... and no other W ≻ V has this property. This requires that all arguments have the same value of isdual(), and the return value V will also have that value.

source

while the following also work on both ElementarySpace and ProductSpace

TensorKit.fuseFunction
fuse(V₁::S, V₂::S, V₃::S...) where {S<:ElementarySpace} -> S
+fuse(P::ProductSpace{S}) where {S<:ElementarySpace} -> S

Return a single vector space of type S that is isomorphic to the fusion product of the individual spaces V₁, V₂, ..., or the spaces contained in P.

source
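The distinction between ⊗ (which preserves the product structure) and fuse (which collapses it into a single isomorphic space) can be sketched as follows (assuming TensorKit.jl is loaded):

```julia
using TensorKit

V₁, V₂ = ℂ^2, ℂ^3
P = V₁ ⊗ V₂           # a ProductSpace of 2 factors; the structure is preserved
fuse(P) == ℂ^6        # a single elementary space isomorphic to the product
fuse(V₁, V₂) == ℂ^6   # equivalent form acting on the spaces directly
```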
TensorKit.:⊗Function
⊗(V₁::S, V₂::S, V₃::S...) where {S<:ElementarySpace} -> S

Create a ProductSpace{S}(V₁, V₂, V₃...) representing the tensor product of several elementary vector spaces. For convenience, Julia's regular multiplication operator * applied to vector spaces has the same effect.

The tensor product structure is preserved, see fuse for returning a single elementary space of type S that is isomorphic to this tensor product.

source
TensorKit.:⊠Function
⊠(s₁::Sector, s₂::Sector)
+deligneproduct(s₁::Sector, s₂::Sector)

Given two sectors s₁ and s₂, which label isomorphism classes of simple objects in fusion categories $C₁$ and $C₂$, s₁ ⊠ s₂ (obtained as \boxtimes+TAB) labels the isomorphism class of simple objects in the Deligne tensor product category $C₁ ⊠ C₂$.

The Deligne tensor product also works in the type domain and for spaces and tensors. For group representations, we have Irrep[G₁] ⊠ Irrep[G₂] == Irrep[G₁ × G₂].

source
⊠(V₁::VectorSpace, V₂::VectorSpace)

Given two vector spaces V₁ and V₂ (ElementarySpace or ProductSpace), or thus, objects of corresponding fusion categories $C₁$ and $C₂$, $V₁ ⊠ V₂$ constructs the Deligne tensor product, an object in $C₁ ⊠ C₂$ which is the natural tensor product of those categories. In particular, the corresponding type of sectors (simple objects) is given by sectortype(V₁ ⊠ V₂) == sectortype(V₁) ⊠ sectortype(V₂) and can be thought of as a tuple of the individual sectors.

The Deligne tensor product also works in the type domain and for sectors and tensors. For group representations, we have Rep[G₁] ⊠ Rep[G₂] == Rep[G₁ × G₂], i.e. these are the natural representation spaces of the direct product of two groups.

source
Base.oneFunction
one(::Sector) -> Sector
+one(::Type{<:Sector}) -> Sector

Return the unit element within this type of sector.

source
one(::S) where {S<:ElementarySpace} -> ProductSpace{S, 0}
+one(::ProductSpace{S}) where {S<:ElementarySpace} -> ProductSpace{S, 0}

Return a tensor product of zero spaces of type S, i.e. this is the unit object under the tensor product operation, such that V ⊗ one(V) == V.

source
TensorKit.ismonomorphicFunction
ismonomorphic(V₁::VectorSpace, V₂::VectorSpace)
+V₁ ≾ V₂

Return whether there exist monomorphisms from V₁ to V₂, i.e. 'injective' morphisms with left inverses.

source
TensorKit.isepimorphicFunction
isepimorphic(V₁::VectorSpace, V₂::VectorSpace)
+V₁ ≿ V₂

Return whether there exist epimorphisms from V₁ to V₂, i.e. 'surjective' morphisms with right inverses.

source
TensorKit.isisomorphicFunction
isisomorphic(V₁::VectorSpace, V₂::VectorSpace)
+V₁ ≅ V₂

Return whether V₁ and V₂ are isomorphic, meaning that there exist isomorphisms from V₁ to V₂, i.e. morphisms with left and right inverses.

source
TensorKit.insertunitFunction
insertunit(P::ProductSpace, i::Int = length(P)+1; dual = false, conj = false)

For P::ProductSpace{S,N}, this adds an extra tensor product factor at position 1 <= i <= N+1 (last position by default) which is just the S-equivalent of the underlying field of scalars, i.e. oneunit(S). With the keyword arguments, one can choose to insert the conjugated or dual space instead, which are all isomorphic to the field of scalars.

source
diff --git a/dev/lib/tensors/index.html b/dev/lib/tensors/index.html index dd1fdc4a..2c39c93b 100644 --- a/dev/lib/tensors/index.html +++ b/dev/lib/tensors/index.html @@ -1,25 +1,25 @@ -Tensors · TensorKit.jl

Tensors

Type hierarchy

TensorKit.AbstractTensorMapType
abstract type AbstractTensorMap{S<:IndexSpace, N₁, N₂} end

Abstract supertype of all tensor maps, i.e. linear maps between tensor products of vector spaces of type S<:IndexSpace. An AbstractTensorMap maps from an input space of type ProductSpace{S, N₂} to an output space of type ProductSpace{S, N₁}.

source
TensorKit.AbstractTensorType
AbstractTensor{S<:IndexSpace, N} = AbstractTensorMap{S, N, 0}

Abstract supertype of all tensors, i.e. elements in the tensor product space of type ProductSpace{S, N}, built from elementary spaces of type S<:IndexSpace.

An AbstractTensor{S, N} is actually a special case of AbstractTensorMap{S, N, 0}, i.e. a tensor map with only a non-trivial output space.

source
TensorKit.TensorMapType
struct TensorMap{S<:IndexSpace, N₁, N₂, ...} <: AbstractTensorMap{S, N₁, N₂}

Specific subtype of AbstractTensorMap for representing tensor maps (morphisms in a tensor category) whose data is stored in blocks of some subtype of DenseMatrix.

source

Specific TensorMap constructors

TensorKit.idFunction
id([A::Type{<:DenseMatrix} = Matrix{Float64},] space::VectorSpace) -> TensorMap

Construct the identity endomorphism on space space, i.e. return a t::TensorMap with domain(t) == codomain(t) == space, where storagetype(t) = A can be specified.

source
TensorKit.isomorphismFunction
isomorphism([A::Type{<:DenseMatrix} = Matrix{Float64},]
+Tensors · TensorKit.jl

Tensors

Type hierarchy

TensorKit.AbstractTensorMapType
abstract type AbstractTensorMap{S<:IndexSpace, N₁, N₂} end

Abstract supertype of all tensor maps, i.e. linear maps between tensor products of vector spaces of type S<:IndexSpace. An AbstractTensorMap maps from an input space of type ProductSpace{S, N₂} to an output space of type ProductSpace{S, N₁}.

source
TensorKit.AbstractTensorType
AbstractTensor{S<:IndexSpace, N} = AbstractTensorMap{S, N, 0}

Abstract supertype of all tensors, i.e. elements in the tensor product space of type ProductSpace{S, N}, built from elementary spaces of type S<:IndexSpace.

An AbstractTensor{S, N} is actually a special case of AbstractTensorMap{S, N, 0}, i.e. a tensor map with only a non-trivial output space.

source
TensorKit.TensorMapType
struct TensorMap{S<:IndexSpace, N₁, N₂, ...} <: AbstractTensorMap{S, N₁, N₂}

Specific subtype of AbstractTensorMap for representing tensor maps (morphisms in a tensor category) whose data is stored in blocks of some subtype of DenseMatrix.

source

Specific TensorMap constructors

TensorKit.idFunction
id([A::Type{<:DenseMatrix} = Matrix{Float64},] space::VectorSpace) -> TensorMap

Construct the identity endomorphism on space space, i.e. return a t::TensorMap with domain(t) == codomain(t) == space, where storagetype(t) = A can be specified.

source
TensorKit.isomorphismFunction
isomorphism([A::Type{<:DenseMatrix} = Matrix{Float64},]
                 cod::VectorSpace, dom::VectorSpace)
--> TensorMap

Return a t::TensorMap that implements a specific isomorphism between the codomain cod and the domain dom, and for which storagetype(t) can optionally be chosen to be of type A. If the two spaces do not allow for such an isomorphism, and are thus not isomorphic, an error will be thrown. When they are isomorphic, there is no canonical choice for a specific isomorphism, but the current choice is such that isomorphism(cod, dom) == inv(isomorphism(dom, cod)).

See also unitary when InnerProductStyle(cod) === EuclideanProduct().

source
TensorKit.unitaryFunction
unitary([A::Type{<:DenseMatrix} = Matrix{Float64},] cod::VectorSpace, dom::VectorSpace)
--> TensorMap

Return a t::TensorMap that implements a specific unitary isomorphism between the codomain cod and the domain dom, for which spacetype(dom) (== spacetype(cod)) must have an inner product. Furthermore, storagetype(t) can optionally be chosen to be of type A. If the two spaces do not allow for such an isomorphism, and are thus not isomorphic, an error will be thrown. When they are isomorphic, there is no canonical choice for a specific isomorphism, but the current choice is such that unitary(cod, dom) == inv(unitary(dom, cod)) == adjoint(unitary(dom, cod)).

source
TensorKit.isometryFunction
isometry([A::Type{<:DenseMatrix} = Matrix{Float64},] cod::VectorSpace, dom::VectorSpace)
--> TensorMap

Return a t::TensorMap that implements a specific isometry that embeds the domain dom into the codomain cod, and which requires that spacetype(dom) (== spacetype(cod)) has a Euclidean inner product. An isometry t is such that its adjoint t' is the left inverse of t, i.e. t'*t = id(dom), while t*t' is some idempotent endomorphism of cod, i.e. it squares to itself. When dom and cod do not allow for such an isometric inclusion, an error will be thrown.

source

TensorMap operations

Missing docstring.

Missing docstring for permute(t::TensorMap{S}, p1::IndexTuple, p2::IndexTuple) where {S}. Check Documenter's build log for details.

Base.permute!Function
permute!(tdst::AbstractTensorMap{S,N₁,N₂}, tsrc::AbstractTensorMap{S},
+-> TensorMap

Return a t::TensorMap that implements a specific isomorphism between the codomain cod and the domain dom, and for which storagetype(t) can optionally be chosen to be of type A. If the two spaces do not allow for such an isomorphism, and are thus not isomorphic, an error will be thrown. When they are isomorphic, there is no canonical choice for a specific isomorphism, but the current choice is such that isomorphism(cod, dom) == inv(isomorphism(dom, cod)).

See also unitary when InnerProductStyle(cod) === EuclideanProduct().

source
TensorKit.unitaryFunction
unitary([A::Type{<:DenseMatrix} = Matrix{Float64},] cod::VectorSpace, dom::VectorSpace)
+-> TensorMap

Return a t::TensorMap that implements a specific unitary isomorphism between the codomain cod and the domain dom, for which spacetype(dom) (== spacetype(cod)) must have an inner product. Furthermore, storagetype(t) can optionally be chosen to be of type A. If the two spaces do not allow for such an isomorphism, and are thus not isomorphic, an error will be thrown. When they are isomorphic, there is no canonical choice for a specific isomorphism, but the current choice is such that unitary(cod, dom) == inv(unitary(dom, cod)) == adjoint(unitary(dom, cod)).

source
TensorKit.isometryFunction
isometry([A::Type{<:DenseMatrix} = Matrix{Float64},] cod::VectorSpace, dom::VectorSpace)
+-> TensorMap

Return a t::TensorMap that implements a specific isometry that embeds the domain dom into the codomain cod, and which requires that spacetype(dom) (== spacetype(cod)) has a Euclidean inner product. An isometry t is such that its adjoint t' is the left inverse of t, i.e. t'*t = id(dom), while t*t' is some idempotent endomorphism of cod, i.e. it squares to itself. When dom and cod do not allow for such an isometric inclusion, an error will be thrown.

source
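A minimal sketch of these constructors (assuming TensorKit.jl is loaded; the dimensions are arbitrary example values):

```julia
using TensorKit

t = id(ℂ^2)              # identity endomorphism on ℂ^2
u = unitary(ℂ^2, ℂ^2)    # unitary isomorphism; u' * u ≈ id(ℂ^2)
w = isometry(ℂ^3, ℂ^2)   # isometric embedding of ℂ^2 into ℂ^3; w' * w ≈ id(ℂ^2)
# w * w' is an idempotent endomorphism of ℂ^3, i.e. a projector onto the image of w
```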

TensorMap operations

Missing docstring.

Missing docstring for permute(t::TensorMap{S}, p1::IndexTuple, p2::IndexTuple) where {S}. Check Documenter's build log for details.

Base.permute!Function
permute!(tdst::AbstractTensorMap{S,N₁,N₂}, tsrc::AbstractTensorMap{S},
          (p₁, p₂)::Tuple{IndexTuple{N₁},IndexTuple{N₂}}) where {S,N₁,N₂}
-    -> tdst

Write into tdst the result of permuting the indices of tsrc. The codomain and domain of tdst correspond to the indices in p₁ and p₂ of tsrc respectively.

See permute for creating a new tensor and add_permute! for a more general version.

source
TensorKit.braidFunction
braid(f::FusionTree{<:Sector, N}, levels::NTuple{N, Int}, p::NTuple{N, Int})
--> <:AbstractDict{typeof(t), <:Number}

Perform a braiding of the uncoupled indices of the fusion tree f and return the result as a <:AbstractDict of output trees and corresponding coefficients. The braiding is determined by specifying that the new sector at position k corresponds to the sector that was originally at the position i = p[k], and assigning to every index i of the original fusion tree a distinct level or depth levels[i]. This permutation is then decomposed into elementary swaps between neighbouring indices, where the swaps are applied as braids such that if i and j cross, $τ_{i,j}$ is applied if levels[i] < levels[j] and $τ_{j,i}^{-1}$ if levels[i] > levels[j]. This does not allow one to encode the most general braid, but a general braid can be obtained by combining such operations.

source
braid(f₁::FusionTree{I}, f₂::FusionTree{I},
+    -> tdst

Write into tdst the result of permuting the indices of tsrc. The codomain and domain of tdst correspond to the indices in p₁ and p₂ of tsrc respectively.

See permute for creating a new tensor and add_permute! for a more general version.

source
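As an illustrative sketch of index permutation on a plain (symmetry-free) tensor map (assuming TensorKit.jl is loaded and following the permute(t, p1, p2) signature referenced above; the dimensions are arbitrary example values):

```julia
using TensorKit

t = TensorMap(randn, ℂ^2 ⊗ ℂ^3, ℂ^4)   # codomain ℂ^2 ⊗ ℂ^3, domain ℂ^4

# Move index 3 (the domain index) into the codomain and index 2 into the domain
t2 = permute(t, (1, 3), (2,))
space(t2)                               # (ℂ^2 ⊗ ℂ^4) ← ℂ^3
```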
TensorKit.braidFunction
braid(f::FusionTree{<:Sector, N}, levels::NTuple{N, Int}, p::NTuple{N, Int})
+-> <:AbstractDict{typeof(t), <:Number}

Perform a braiding of the uncoupled indices of the fusion tree f and return the result as a <:AbstractDict of output trees and corresponding coefficients. The braiding is determined by specifying that the new sector at position k corresponds to the sector that was originally at the position i = p[k], and assigning to every index i of the original fusion tree a distinct level or depth levels[i]. This permutation is then decomposed into elementary swaps between neighbouring indices, where the swaps are applied as braids such that if i and j cross, $τ_{i,j}$ is applied if levels[i] < levels[j] and $τ_{j,i}^{-1}$ if levels[i] > levels[j]. This does not allow one to encode the most general braid, but a general braid can be obtained by combining such operations.

source
braid(f₁::FusionTree{I}, f₂::FusionTree{I},
         levels1::IndexTuple, levels2::IndexTuple,
         p1::IndexTuple{N₁}, p2::IndexTuple{N₂}) where {I<:Sector, N₁, N₂}
--> <:AbstractDict{Tuple{FusionTree{I, N₁}, FusionTree{I, N₂}}, <:Number}

Input is a fusion-splitting tree pair that describes the fusion of a set of incoming uncoupled sectors to a set of outgoing uncoupled sectors, represented using the splitting tree f₁ and fusion tree f₂, such that the incoming sectors f₂.uncoupled are fused to f₁.coupled == f₂.coupled and then to the outgoing sectors f₁.uncoupled. Compute new trees and corresponding coefficients obtained from repartitioning and braiding the tree such that sectors p1 become outgoing and sectors p2 become incoming. The uncoupled indices in splitting tree f₁ and fusion tree f₂ have levels (or depths) levels1 and levels2 respectively, which determines how indices braid. In particular, if i and j cross, $τ_{i,j}$ is applied if levels[i] < levels[j] and $τ_{j,i}^{-1}$ if levels[i] > levels[j]. This does not allow one to encode the most general braid, but a general braid can be obtained by combining such operations.

source
braid(tsrc::AbstractTensorMap{S}, (p₁, p₂)::Tuple{IndexTuple{N₁},IndexTuple{N₂}}, levels::Tuple;
+-> <:AbstractDict{Tuple{FusionTree{I, N₁}, FusionTree{I, N₂}}, <:Number}

Input is a fusion-splitting tree pair that describes the fusion of a set of incoming uncoupled sectors to a set of outgoing uncoupled sectors, represented using the splitting tree f₁ and fusion tree f₂, such that the incoming sectors f₂.uncoupled are fused to f₁.coupled == f₂.coupled and then to the outgoing sectors f₁.uncoupled. Compute new trees and corresponding coefficients obtained from repartitioning and braiding the tree such that sectors p1 become outgoing and sectors p2 become incoming. The uncoupled indices in splitting tree f₁ and fusion tree f₂ have levels (or depths) levels1 and levels2 respectively, which determines how indices braid. In particular, if i and j cross, $τ_{i,j}$ is applied if levels[i] < levels[j] and $τ_{j,i}^{-1}$ if levels[i] > levels[j]. This does not allow one to encode the most general braid, but a general braid can be obtained by combining such operations.

source
braid(tsrc::AbstractTensorMap{S}, (p₁, p₂)::Tuple{IndexTuple{N₁},IndexTuple{N₂}}, levels::Tuple;
       copy::Bool = false) where {S,N₁,N₂}
-    -> tdst::TensorMap{S,N₁,N₂}

Return tensor tdst obtained by braiding the indices of tsrc. The codomain and domain of tdst correspond to the indices in p₁ and p₂ of tsrc respectively. Here, levels is a tuple of length numind(tsrc) that assigns a level or height to the indices of tsrc, which determines whether they will braid over or under any other index with which they have to change places.

If copy=false, tdst might share data with tsrc whenever possible. Otherwise, a copy is always made.

To braid into an existing destination, see braid! and add_braid!

source
TensorKit.braid!Function
braid!(tdst::AbstractTensorMap{S,N₁,N₂}, tsrc::AbstractTensorMap{S},
        (p₁, p₂)::Tuple{IndexTuple{N₁},IndexTuple{N₂}}, levels::Tuple) where {S,N₁,N₂}
    -> tdst

Write into tdst the result of braiding the indices of tsrc. The codomain and domain of tdst correspond to the indices in p₁ and p₂ of tsrc respectively. Here, levels is a tuple of length numind(tsrc) that assigns a level or height to the indices of tsrc, which determines whether they will braid over or under any other index with which they have to change places.

See braid for creating a new tensor and add_braid! for a more general version.

source
Missing docstring.

Missing docstring for twist. Check Documenter's build log for details.

TensorKit.twist!Function
twist!(t::AbstractTensorMap, i::Int; inv::Bool=false)
    -> t

Apply a twist to the ith index of t, storing the result in t. If inv=true, use the inverse twist.

See twist for creating a new tensor.

source
Missing docstring.

Missing docstring for add!. Check Documenter's build log for details.

Missing docstring.

Missing docstring for trace!. Check Documenter's build log for details.

Missing docstring.

Missing docstring for contract!. Check Documenter's build log for details.

TensorMap factorizations

TensorKit.leftorthFunction
leftorth(t::AbstractTensorMap, (leftind, rightind)::Index2Tuple;
             alg::OrthogonalFactorizationAlgorithm = QRpos()) -> Q, R

Create orthonormal basis Q for indices in leftind, and remainder R such that permute(t, (leftind, rightind)) = Q*R.

If leftind and rightind are not specified, the current partition of left and right indices of t is used. In that case, less memory is allocated if one allows the data in t to be destroyed/overwritten, by using leftorth!(t, alg = QRpos()).

Different algorithms are available, namely QR(), QRpos(), SVD() and Polar(). QR() and QRpos() use a standard QR decomposition, producing an upper triangular matrix R. Polar() produces a Hermitian and positive semidefinite R. QRpos() corrects the standard QR decomposition such that the diagonal elements of R are positive. Only QRpos() and Polar() are unique (no residual freedom) so that they always return the same result for the same input tensor t.

Orthogonality requires InnerProductStyle(t) <: HasInnerProduct, and leftorth(!) is currently only implemented for InnerProductStyle(t) === EuclideanProduct().

source
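As an illustration (a hedged sketch; the spaces and index partition are arbitrary), the isometry property of Q can be checked directly:

```julia
using TensorKit

t = Tensor(randn, ℂ^3 ⊗ ℂ^4 ⊗ ℂ^2)
Q, R = leftorth(t, ((1, 2), (3,)); alg = QRpos())
# Q is an isometry on its domain, and Q*R reproduces the permuted tensor
Q' * Q ≈ id(domain(Q))
Q * R ≈ permute(t, ((1, 2), (3,)))
```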
TensorKit.rightorthFunction
rightorth(t::AbstractTensorMap, (leftind, rightind)::Index2Tuple;
             alg::OrthogonalFactorizationAlgorithm = LQpos()) -> L, Q

Create orthonormal basis Q for indices in rightind, and remainder L such that permute(t, (leftind, rightind)) = L*Q.

If leftind and rightind are not specified, the current partition of left and right indices of t is used. In that case, less memory is allocated if one allows the data in t to be destroyed/overwritten, by using rightorth!(t, alg = LQpos()).

Different algorithms are available, namely LQ(), LQpos(), RQ(), RQpos(), SVD() and Polar(). LQ() and LQpos() produce a lower triangular matrix L and are computed using a QR decomposition of the transpose. RQ() and RQpos() produce an upper triangular remainder L and only work if the total left dimension is smaller than or equal to the total right dimension. LQpos() and RQpos() add an additional correction such that the diagonal elements of L are positive. Polar() produces a Hermitian and positive semidefinite L. Only LQpos(), RQpos() and Polar() are unique (no residual freedom) so that they always return the same result for the same input tensor t.

Orthogonality requires InnerProductStyle(t) <: HasInnerProduct, and rightorth(!) is currently only implemented for InnerProductStyle(t) === EuclideanProduct().

source
TensorKit.leftnullFunction
leftnull(t::AbstractTensor, (leftind, rightind)::Index2Tuple;
             alg::OrthogonalFactorizationAlgorithm = QRpos()) -> N

Create orthonormal basis for the orthogonal complement of the support of the indices in leftind, such that N' * permute(t, (leftind, rightind)) = 0.

If leftind and rightind are not specified, the current partition of left and right indices of t is used. In that case, less memory is allocated if one allows the data in t to be destroyed/overwritten, by using leftnull!(t, alg = QRpos()).

Different algorithms are available, namely QR() (or equivalently, QRpos()), SVD() and SDD(). The first assumes that the matrix is full rank and requires iszero(atol) and iszero(rtol). With SVD() and SDD(), leftnull will use the corresponding singular value decomposition, and one can specify an absolute or relative tolerance for which singular values are to be considered zero, where max(atol, norm(t)*rtol) is used as upper bound.

Orthogonality requires InnerProductStyle(t) <: HasInnerProduct, and leftnull(!) is currently only implemented for InnerProductStyle(t) === EuclideanProduct().

source
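For example (a hedged sketch with arbitrary spaces), N annihilates the permuted tensor and is itself an isometry:

```julia
using TensorKit, LinearAlgebra

t = Tensor(randn, ℂ^3 ⊗ ℂ^4 ⊗ ℂ^2)
N = leftnull(t, ((1, 2), (3,)))
# N spans the orthogonal complement of the range of the permuted tensor
norm(N' * permute(t, ((1, 2), (3,)))) < 1e-12
N' * N ≈ id(domain(N))
```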
TensorKit.rightnullFunction
rightnull(t::AbstractTensor, (leftind, rightind)::Index2Tuple;
             alg::OrthogonalFactorizationAlgorithm = LQ(),
             atol::Real = 0.0,
             rtol::Real = eps(real(float(one(scalartype(t)))))*iszero(atol)) -> N

Create orthonormal basis for the orthogonal complement of the support of the indices in rightind, such that permute(t, (leftind, rightind))*N' = 0.

If leftind and rightind are not specified, the current partition of left and right indices of t is used. In that case, less memory is allocated if one allows the data in t to be destroyed/overwritten, by using rightnull!(t, alg = LQpos()).

Different algorithms are available, namely LQ() (or equivalently, LQpos()), SVD() and SDD(). The first assumes that the matrix is full rank and requires iszero(atol) and iszero(rtol). With SVD() and SDD(), rightnull will use the corresponding singular value decomposition, and one can specify an absolute or relative tolerance for which singular values are to be considered zero, where max(atol, norm(t)*rtol) is used as upper bound.

Orthogonality requires InnerProductStyle(t) <: HasInnerProduct, and rightnull(!) is currently only implemented for InnerProductStyle(t) === EuclideanProduct().

source
TensorKit.tsvdFunction
tsvd(t::AbstractTensorMap, (leftind, rightind)::Index2Tuple;
     trunc::TruncationScheme = notrunc(), p::Real = 2, alg::Union{SVD, SDD} = SDD())
    -> U, S, V, ϵ

Compute the (possibly truncated) singular value decomposition such that norm(permute(t, (leftind, rightind)) - U * S * V) ≈ ϵ, where ϵ thus represents the truncation error.

If leftind and rightind are not specified, the current partition of left and right indices of t is used. In that case, less memory is allocated if one allows the data in t to be destroyed/overwritten, by using tsvd!(t, trunc = notrunc(), p = 2).

A truncation parameter trunc can be specified for the new internal dimension, in which case a truncated singular value decomposition will be computed. Choices are:

  • notrunc(): no truncation (default);
  • truncerr(η::Real): truncates such that the p-norm of the truncated singular values is smaller than η times the p-norm of all singular values;
  • truncdim(χ::Int): truncates such that the equivalent total dimension of the internal vector space is no larger than χ;
  • truncspace(V): truncates such that the dimension of the internal vector space is smaller than that of V in any sector;
  • truncbelow(χ::Real): truncates such that every kept singular value is larger than χ.

The method tsvd also returns the truncation error ϵ, computed as the p-norm of the singular values that were truncated.

The keyword alg can be equal to SVD() or SDD(), corresponding to the underlying LAPACK algorithm that computes the decomposition (_gesvd or _gesdd).

Orthogonality requires InnerProductStyle(t) <: HasInnerProduct, and tsvd(!) is currently only implemented for InnerProductStyle(t) === EuclideanProduct().

source
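A short hedged sketch (arbitrary spaces; for p = 2 the truncation error equals the residual norm exactly):

```julia
using TensorKit, LinearAlgebra

t = Tensor(randn, ℂ^4 ⊗ ℂ^4 ⊗ ℂ^4)
# keep at most 2 singular values; ϵ is the 2-norm of the discarded ones
U, S, Vᴴ, ϵ = tsvd(t, ((1, 2), (3,)); trunc = truncdim(2))
norm(permute(t, ((1, 2), (3,))) - U * S * Vᴴ) ≈ ϵ
```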
TensorKit.eighFunction
eigh(t::AbstractTensorMap, (leftind, rightind)::Index2Tuple) -> D, V

Compute eigenvalue factorization of tensor t as linear map from rightind to leftind. The function eigh assumes that the linear map is hermitian and returns D and V tensors with the same scalartype as t. See eig and eigen for non-hermitian tensors. Hermiticity requires that the tensor acts on inner product spaces, and the current implementation requires InnerProductStyle(t) === EuclideanProduct().

If leftind and rightind are not specified, the current partition of left and right indices of t is used. In that case, less memory is allocated if one allows the data in t to be destroyed/overwritten, by using eigh!(t). Note that the permuted tensor on which eigh! is called should have equal domain and codomain, as otherwise the eigenvalue decomposition is meaningless and cannot satisfy

permute(t, (leftind, rightind)) * V = V * D

See also eigen and eig.

source
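For instance (a minimal sketch; the space and map are arbitrary, and the map is made hermitian by construction):

```julia
using TensorKit

A = TensorMap(randn, ComplexF64, ℂ^3, ℂ^3)
H = (A + A') / 2          # hermitian map with equal domain and codomain
D, V = eigh(H)
H * V ≈ V * D             # defining property of the eigendecomposition
```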
TensorKit.eigFunction
eig(t::AbstractTensor, (leftind, rightind)::Index2Tuple; kwargs...) -> D, V

Compute eigenvalue factorization of tensor t as linear map from rightind to leftind. The function eig assumes that the linear map is not hermitian and returns type-stable, complex-valued D and V tensors for both real and complex valued t. See eigh for hermitian linear maps.

If leftind and rightind are not specified, the current partition of left and right indices of t is used. In that case, less memory is allocated if one allows the data in t to be destroyed/overwritten, by using eig!(t). Note that the permuted tensor on which eig! is called should have equal domain and codomain, as otherwise the eigenvalue decomposition is meaningless and cannot satisfy

permute(t, (leftind, rightind)) * V = V * D

Accepts the same keyword arguments scale, permute and sortby as eigen of dense matrices. See the corresponding documentation for more information.

See also eigen and eigh.

source
LinearAlgebra.eigenFunction
eigen(t::AbstractTensor, (leftind, rightind)::Index2Tuple; kwargs...) -> D, V

Compute eigenvalue factorization of tensor t as linear map from rightind to leftind.

If leftind and rightind are not specified, the current partition of left and right indices of t is used. In that case, less memory is allocated if one allows the data in t to be destroyed/overwritten, by using eigen!(t). Note that the permuted tensor on which eigen! is called should have equal domain and codomain, as otherwise the eigenvalue decomposition is meaningless and cannot satisfy

permute(t, (leftind, rightind)) * V = V * D

Accepts the same keyword arguments scale, permute and sortby as eigen of dense matrices. See the corresponding documentation for more information.

See also eig and eigh.

source

    Introduction

    Before providing a typical "user guide" and discussing the implementation of TensorKit.jl on the next pages, let us discuss some of the rationale behind this package.

    What is a tensor?

    At the very start we should ponder the most suitable and sufficiently general definition of a tensor. A good starting point is the following:

    • A tensor $t$ is an element from the tensor product of $N$ vector spaces $V_1 , V_2, …, V_N$, where $N$ is referred to as the rank or order of the tensor, i.e.

      $t ∈ V_1 ⊗ V_2 ⊗ … ⊗ V_N.$

    If you think of a tensor as an object with indices, a rank $N$ tensor has $N$ indices, where every index is associated with the corresponding vector space in the sense that it labels a particular basis of that space. We will return to index notation at the very end of this manual.

    As the tensor product of vector spaces is itself a vector space, this implies that a tensor behaves as a vector, i.e. tensors from the same tensor product space can be added and multiplied by scalars. The tensor product is only defined for vector spaces over the same field of scalars, e.g. there is no meaning in $ℝ^5 ⊗ ℂ^3$. When all the vector spaces in the tensor product have an inner product, this also implies an inner product for the tensor product space. It is hence clear that the different vector spaces in the tensor product should have some form of homogeneity in their structure, yet they do not need to be all equal and can e.g. have different dimensions. It goes without saying that defining the vector spaces and their properties will be an important part of the definition of a tensor. As a consequence, this also constitutes a significant part of the implementation, and is discussed in the section on Vector spaces.
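    In TensorKit.jl this vector-space behaviour is directly available; a minimal sketch (with arbitrary spaces):

```julia
using TensorKit, LinearAlgebra

W = ℂ^2 ⊗ ℂ^3
t₁ = Tensor(randn, W)
t₂ = Tensor(randn, W)
s = 2 * t₁ + t₂      # tensors in the same space can be added and scaled
α = dot(t₁, t₂)      # inner product inherited from the factor spaces
```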

    Aside from the interpretation of a tensor as a vector, we also want to interpret it as a matrix (or more correctly, a linear map) in order to decompose tensors using linear algebra factorisations (e.g. eigenvalue or singular value decomposition). Henceforth, we use the term "tensor map" as follows:

    • A tensor map $t$ is a linear map from a source or domain $W_1 ⊗ W_2 ⊗ … ⊗ W_{N_2}$ to a target or codomain $V_1 ⊗ V_2 ⊗ … ⊗ V_{N_1}$, i.e.

      $t:W_1 ⊗ W_2 ⊗ … ⊗ W_{N_2} → V_1 ⊗ V_2 ⊗ … ⊗ V_{N_1}.$

    A tensor of rank $N$ is then just a special case of a tensor map with $N_1 = N$ and $N_2 = 0$. A contraction between two tensors is just a composition of linear maps (i.e. matrix multiplication), where the contracted indices correspond to the domain of the first tensor and the codomain of the second tensor.
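    For instance (a hypothetical sketch), composing two tensor maps contracts over their shared space:

```julia
using TensorKit

A = TensorMap(randn, ℂ^2 ⊗ ℂ^3, ℂ^4)   # linear map ℂ^4 → ℂ^2 ⊗ ℂ^3
B = TensorMap(randn, ℂ^4, ℂ^5)          # linear map ℂ^5 → ℂ^4
C = A * B                               # contraction over ℂ^4 is composition
```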

    In order to allow for arbitrary tensor contractions or decompositions, we need to be able to reorganise which vector spaces appear in the domain and the codomain of the tensor map, and in which order. This amounts to defining canonical isomorphisms between the different ways to order and partition the tensor indices (i.e. the vector spaces). For example, a linear map $W → V$ is often denoted as a rank 2 tensor in $V ⊗ W^*$, where $W^*$ corresponds to the dual space of $W$. This simple example introduces two new concepts.

    1. Typical vector spaces can appear in the domain and codomain in different related forms, e.g. as normal space or dual space. In fact, the most generic case is that every vector space $V$ has associated with it a dual space $V^*$, a conjugate space $\overline{V}$ and a conjugate dual space $\overline{V}^*$. The four different vector spaces $V$, $V^*$, $\overline{V}$ and $\overline{V}^*$ correspond to the representation spaces of respectively the fundamental, dual or contragredient, complex conjugate and dual complex conjugate representation of the general linear group $\mathsf{GL}(V)$. In index notation these spaces are denoted with respectively contravariant (upper), covariant (lower), dotted contravariant and dotted covariant indices.

      For real vector spaces, the conjugate (dual) space is identical to the normal (dual) space and we only have upper and lower indices, i.e. this is the setting of e.g. general relativity. For (complex) vector spaces with a sesquilinear inner product $\overline{V} ⊗ V → ℂ$, the inner product allows one to define an isomorphism from the conjugate space to the dual space (known as the Riesz representation theorem in the more general context of Hilbert spaces).

      In particular, in spaces with a Euclidean inner product (the setting of e.g. quantum mechanics), the conjugate and dual space are naturally isomorphic (because the dual and conjugate representation of the unitary group are the same). Again we only need upper and lower indices (or kets and bras).

      Finally, in $ℝ^d$ with a Euclidean inner product, these four different spaces are all equivalent and we only need one type of index. The space is completely characterized by its dimension $d$. This is the setting of much of classical mechanics and we refer to such tensors as cartesian tensors and the corresponding space as cartesian space. These are the tensors that can equally well be represented as multidimensional arrays (i.e. using some AbstractArray{<:Real,N} in Julia) without loss of structure.

      The implementation of all of this is discussed in Vector spaces.

    2. In the generic case, the identification between maps $W → V$ and tensors in $V ⊗ W^*$ is not an equivalence but an isomorphism, which needs to be defined. Similarly, there is an isomorphism between $V ⊗ W$ and $W ⊗ V$ that can be non-trivial (e.g. in the case of fermions / super vector spaces). The correct formalism here is provided by the theory of monoidal categories, which is introduced on the next page. Nonetheless, we try to hide these canonical isomorphisms from the user wherever possible, and one does not need to know category theory to be able to use this package.

    This brings us to our final (yet formal) definition

    • A tensor (map) is a homomorphism between two objects from the category $\mathbf{Vect}$ (or some subcategory thereof). In practice, this will be $\mathbf{FinVect}$, the category of finite dimensional vector spaces. More generally even, our concept of a tensor makes sense, in principle, for any linear (a.k.a. $\mathbf{Vect}$-enriched) monoidal category. We refer to the next page on "Monoidal categories and their properties".

    Symmetries and block sparsity

    Physical problems often have some symmetry, i.e. the setup is invariant under the action of a group $\mathsf{G}$ which acts on the vector spaces $V$ in the problem according to a certain representation. Having quantum mechanics in mind, TensorKit.jl is so far restricted to unitary representations. A general representation space $V$ can be specified as the number of times every irreducible representation (irrep) $a$ of $\mathsf{G}$ appears, i.e.

    $V = \bigoplus_{a} ℂ^{n_a} ⊗ R_a$

    with $R_a$ the space associated with irrep $a$ of $\mathsf{G}$, which itself has dimension $d_a$ (often called the quantum dimension), and $n_a$ the number of times this irrep appears in $V$. If the unitary irrep $a$ for $g ∈ \mathsf{G}$ is given by $u_a(g)$, then there exists a specific basis for $V$ such that the group action of $\mathsf{G}$ on $V$ is given by the unitary representation

    $u(g) = \bigoplus_{a} 𝟙_{n_a} ⊗ u_a(g)$

    with $𝟙_{n_a}$ the $n_a × n_a$ identity matrix. The total dimension of $V$ is given by $∑_a n_a d_a$.
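    As a small illustrative sketch (assuming the SU₂Space constructor from TensorKit), the total-dimension formula can be checked for an $\mathsf{SU}_2$-symmetric space:

```julia
using TensorKit

# n₀ = 2 copies of the trivial irrep (d₀ = 1) and
# n_½ = 1 copy of the spin-1/2 irrep (d_½ = 2)
V = SU₂Space(0 => 2, 1//2 => 1)
dim(V) == 2 * 1 + 1 * 2   # total dimension ∑ₐ nₐ dₐ
```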

    The reason for implementing symmetries is to exploit the computational and memory gains resulting from restricting to tensor maps $t:W_1 ⊗ W_2 ⊗ … ⊗ W_{N_2} → V_1 ⊗ V_2 ⊗ … ⊗ V_{N_1}$ that are equivariant under the symmetry, i.e. that act as intertwiners between the symmetry action on the domain and the codomain. Indeed, such tensors should be block diagonal because of Schur's lemma, but only after we couple the individual irreps in the spaces $W_i$ to a joint irrep, which is then again split into the individual irreps of the spaces $V_i$. The basis change from the tensor product of irreps in the (co)domain to the joint irrep is implemented by a sequence of Clebsch–Gordan coefficients, also known as a fusion (or splitting) tree. We implement the necessary machinery to manipulate these fusion trees under index permutations and repartitions for arbitrary groups $\mathsf{G}$. In particular, this fits with the formalism of monoidal categories, and more specifically fusion categories, and only requires the topological data of the group, i.e. the fusion rules of the irreps, their quantum dimensions and the F-symbol (6j-symbol or more precisely Racah's W-symbol in the case of $\mathsf{SU}_2$). Notably, we don't actually need the Clebsch–Gordan coefficients themselves (but they can be useful for checking purposes).
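    A brief hedged sketch of the resulting block structure (using TensorKit's blocks iterator over an $\mathsf{SU}_2$-equivariant map):

```julia
using TensorKit

V = SU₂Space(0 => 1, 1//2 => 1)
t = TensorMap(randn, V ⊗ V, V)   # SU(2)-equivariant linear map
# the data decomposes into one block per coupled irrep (Schur's lemma)
for (c, b) in blocks(t)
    println(c, " => ", size(b))
end
```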

    Hence, a second major part of TensorKit.jl is the interface and implementation for specifying symmetries, and further details are provided in Sectors, representation spaces and fusion trees.

    +Introduction · TensorKit.jl

    Introduction

    Before providing a typical "user guide" and discussing the implementation of TensorKit.jl on the next pages, let us discuss some of the rationale behind this package.

    What is a tensor?

    At the very start we should ponder about the most suitable and sufficiently general definition of a tensor. A good starting point is the following:

    • A tensor $t$ is an element from the tensor product of $N$ vector spaces $V_1 , V_2, …, V_N$, where $N$ is referred to as the rank or order of the tensor, i.e.

      $t ∈ V_1 ⊗ V_2 ⊗ … ⊗ V_N.$

    If you think of a tensor as an object with indices, a rank $N$ tensor has $N$ indices where every index is associated with the corresponding vector space in that it labels a particular basis in that space. We will return to index notation at the very end of this manual.

    As the tensor product of vector spaces is itself a vector space, this implies that a tensor behaves as a vector, i.e. tensors from the same tensor product space can be added and multiplied by scalars. The tensor product is only defined for vector spaces over the same field of scalars, e.g. there is no meaning in $ℝ^5 ⊗ ℂ^3$. When all the vector spaces in the tensor product have an inner product, this also implies an inner product for the tensor product space. It is hence clear that the different vector spaces in the tensor product should have some form of homogeneity in their structure, yet they do not need to be all equal and can e.g. have different dimensions. It goes without saying that defining the vector spaces and their properties will be an important part of the definition of a tensor. As a consequence, this also constitutes a significant part of the implementation, and is discussed in the section on Vector spaces.

    Aside from the interpretation of a tensor as a vector, we also want to interpret it as a matrix (or more correctly, a linear map) in order to decompose tensors using linear algebra factorisations (e.g. eigenvalue or singular value decomposition). Henceforth, we use the term "tensor map" as follows:

    • A tensor map $t$ is a linear map from a source or domain $W_1 ⊗ W_2 ⊗ … ⊗ W_{N_2}$ to a target or codomain $V_1 ⊗ V_2 ⊗ … ⊗ V_{N_1}$, i.e.

      $t:W_1 ⊗ W_2 ⊗ … ⊗ W_{N_2} → V_1 ⊗ V_2 ⊗ … ⊗ V_{N_1}.$

    A tensor of rank $N$ is then just a special case of a tensor map with $N_1 = N$ and $N_2 = 0$. A contraction between two tensors is just a composition of linear maps (i.e. matrix multiplication), where the contracted indices correspond to the domain of the first tensor and the codomain of the second tensor.
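As a sketch of this point of view (illustrative spaces; composition via * and the @tensor macro are discussed in detail later in this manual):

```julia
using TensorKit, TensorOperations

A = TensorMap(randn, ℂ^2 ⊗ ℂ^3, ℂ^4)  # A : ℂ^4 → ℂ^2 ⊗ ℂ^3
B = TensorMap(randn, ℂ^4, ℂ^5)        # B : ℂ^5 → ℂ^4

C = A * B  # composition A ∘ B : ℂ^5 → ℂ^2 ⊗ ℂ^3; the ℂ^4 factor is contracted

# the same contraction written in index notation
@tensor C′[a, b, c] := A[a, b, k] * B[k, c]
```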

    In order to allow for arbitrary tensor contractions or decompositions, we need to be able to reorganise which vector spaces appear in the domain and the codomain of the tensor map, and in which order. This amounts to defining canonical isomorphisms between the different ways to order and partition the tensor indices (i.e. the vector spaces). For example, a linear map $W → V$ is often denoted as a rank 2 tensor in $V ⊗ W^*$, where $W^*$ corresponds to the dual space of $W$. This simple example introduces two new concepts.

    1. Typical vector spaces can appear in the domain and codomain in different related forms, e.g. as normal space or dual space. In fact, the most generic case is that every vector space $V$ has associated with it a dual space $V^*$, a conjugate space $\overline{V}$ and a conjugate dual space $\overline{V}^*$. The four different vector spaces $V$, $V^*$, $\overline{V}$ and $\overline{V}^*$ correspond to the representation spaces of respectively the fundamental, dual or contragredient, complex conjugate and dual complex conjugate representation of the general linear group $\mathsf{GL}(V)$. In index notation these spaces are denoted with respectively contravariant (upper), covariant (lower), dotted contravariant and dotted covariant indices.

For real vector spaces, the conjugate (dual) space is identical to the normal (dual) space and we only have upper and lower indices, i.e. this is the setting of e.g. general relativity. For (complex) vector spaces with a sesquilinear inner product $\overline{V} ⊗ V → ℂ$, the inner product allows one to define an isomorphism from the conjugate space to the dual space (known as the Riesz representation theorem in the more general context of Hilbert spaces).

      In particular, in spaces with a Euclidean inner product (the setting of e.g. quantum mechanics), the conjugate and dual space are naturally isomorphic (because the dual and conjugate representation of the unitary group are the same). Again we only need upper and lower indices (or kets and bras).

      Finally, in $ℝ^d$ with a Euclidean inner product, these four different spaces are all equivalent and we only need one type of index. The space is completely characterized by its dimension $d$. This is the setting of much of classical mechanics and we refer to such tensors as cartesian tensors and the corresponding space as cartesian space. These are the tensors that can equally well be represented as multidimensional arrays (i.e. using some AbstractArray{<:Real,N} in Julia) without loss of structure.

      The implementation of all of this is discussed in Vector spaces.

2. In the generic case, the identification between maps $W → V$ and tensors in $V ⊗ W^*$ is not an equivalence but an isomorphism, which needs to be defined. Similarly, there is an isomorphism between $V ⊗ W$ and $W ⊗ V$ that can be non-trivial (e.g. in the case of fermions / super vector spaces). The correct formalism here is provided by the theory of monoidal categories, which is introduced on the next page. Nonetheless, we try to hide these canonical isomorphisms from the user wherever possible, and one does not need to know category theory to be able to use this package.
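To make the first point concrete, a small sketch of how these related spaces appear in TensorKit.jl (the details are in the section on Vector spaces):

```julia
using TensorKit

V = ℂ^3          # a complex Euclidean space
V'               # its dual, printed as (ℂ^3)', and distinct from V
dual(V) == V'    # for Euclidean spaces, the adjoint ' coincides with dual

W = ℝ^3          # a cartesian (real Euclidean) space
dual(W) == W     # all four related spaces coincide: only one type of index
```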

This brings us to our final (yet formal) definition:

    • A tensor (map) is a homomorphism between two objects from the category $\mathbf{Vect}$ (or some subcategory thereof). In practice, this will be $\mathbf{FinVect}$, the category of finite dimensional vector spaces. More generally even, our concept of a tensor makes sense, in principle, for any linear (a.k.a. $\mathbf{Vect}$-enriched) monoidal category. We refer to the next page on "Monoidal categories and their properties".

    Symmetries and block sparsity

    Physical problems often have some symmetry, i.e. the setup is invariant under the action of a group $\mathsf{G}$ which acts on the vector spaces $V$ in the problem according to a certain representation. Having quantum mechanics in mind, TensorKit.jl is so far restricted to unitary representations. A general representation space $V$ can be specified as the number of times every irreducible representation (irrep) $a$ of $\mathsf{G}$ appears, i.e.

    $V = \bigoplus_{a} ℂ^{n_a} ⊗ R_a$

    with $R_a$ the space associated with irrep $a$ of $\mathsf{G}$, which itself has dimension $d_a$ (often called the quantum dimension), and $n_a$ the number of times this irrep appears in $V$. If the unitary irrep $a$ for $g ∈ \mathsf{G}$ is given by $u_a(g)$, then there exists a specific basis for $V$ such that the group action of $\mathsf{G}$ on $V$ is given by the unitary representation

    $u(g) = \bigoplus_{a} 𝟙_{n_a} ⊗ u_a(g)$

    with $𝟙_{n_a}$ the $n_a × n_a$ identity matrix. The total dimension of $V$ is given by $∑_a n_a d_a$.
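In TensorKit.jl, such a representation space is specified by listing the degeneracies $n_a$; for example, with SU₂ (a sketch):

```julia
using TensorKit

# V = ℂ^3 ⊗ R_0  ⊕  ℂ^2 ⊗ R_{1/2}  ⊕  ℂ^1 ⊗ R_1
V = SU2Space(0 => 3, 1/2 => 2, 1 => 1)

dim(V)  # total dimension ∑ₐ nₐ dₐ = 3·1 + 2·2 + 1·3 = 10
```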

    The reason for implementing symmetries is to exploit the computation and memory gains resulting from restricting to tensor maps $t:W_1 ⊗ W_2 ⊗ … ⊗ W_{N_2} → V_1 ⊗ V_2 ⊗ … ⊗ V_{N_1}$ that are equivariant under the symmetry, i.e. that act as intertwiners between the symmetry action on the domain and the codomain. Indeed, such tensors should be block diagonal because of Schur's lemma, but only after we couple the individual irreps in the spaces $W_i$ to a joint irrep, which is then again split into the individual irreps of the spaces $V_i$. The basis change from the tensor product of irreps in the (co)domain to the joint irrep is implemented by a sequence of Clebsch–Gordan coefficients, also known as a fusion (or splitting) tree. We implement the necessary machinery to manipulate these fusion trees under index permutations and repartitions for arbitrary groups $\mathsf{G}$. In particular, this fits with the formalism of monoidal categories, and more specifically fusion categories, and only requires the topological data of the group, i.e. the fusion rules of the irreps, their quantum dimensions and the F-symbol (6j-symbol or more precisely Racah's W-symbol in the case of $\mathsf{SU}_2$). In particular, we don't actually need the Clebsch–Gordan coefficients themselves (but they can be useful for checking purposes).
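The block-diagonal structure implied by Schur's lemma can be inspected directly; a sketch using the blocksectors and block accessors described later in this manual:

```julia
using TensorKit

V = SU2Space(0 => 2, 1/2 => 1)
t = TensorMap(randn, V ⊗ V, V)  # an SU₂-equivariant tensor map

for c in blocksectors(t)        # coupled sectors appearing in domain and codomain
    println(c, " => ", size(block(t, c)))  # one dense matrix block per sector
end
```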

    Hence, a second major part of TensorKit.jl is the interface and implementation for specifying symmetries, and further details are provided in Sectors, representation spaces and fusion trees.

FusionTree{Irrep[SU₂]}((1/2, 1/2, 1/2, 1/2, 1/2), 1/2, (true, false, false, true, false), (0, 1/2, 1))
    FusionTree{Irrep[SU₂]}((1/2, 1/2, 1/2, 1/2, 1/2), 1/2, (true, false, false, true, false), (1, 1/2, 0))
    FusionTree{Irrep[SU₂]}((1/2, 1/2, 1/2, 1/2, 1/2), 1/2, (true, false, false, true, false), (1, 1/2, 1))
    FusionTree{Irrep[SU₂]}((1/2, 1/2, 1/2, 1/2, 1/2), 1/2, (true, false, false, true, false), (1, 3/2, 1))
    julia> iter = fusiontrees(ntuple(n->s, 16))TensorKit.FusionTreeIterator{SU2Irrep, 16}((Irrep[SU₂](1/2), Irrep[SU₂](1/2), Irrep[SU₂](1/2), Irrep[SU₂](1/2), Irrep[SU₂](1/2), Irrep[SU₂](1/2), Irrep[SU₂](1/2), Irrep[SU₂](1/2), Irrep[SU₂](1/2), Irrep[SU₂](1/2), Irrep[SU₂](1/2), Irrep[SU₂](1/2), Irrep[SU₂](1/2), Irrep[SU₂](1/2), Irrep[SU₂](1/2), Irrep[SU₂](1/2)), Irrep[SU₂](0), (false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false))
    julia> sum(n->1, iter)1430
    julia> length(iter)1430
    julia> @elapsed sum(n->1, iter)0.030465472
    julia> @elapsed length(iter)8.8205e-5
    julia> s2 = s ⊠ s(Irrep[SU₂](1/2) ⊠ Irrep[SU₂](1/2))
julia> collect(fusiontrees((s2,s2,s2,s2)))4-element Vector{FusionTree{TensorKit.ProductSector{Tuple{SU2Irrep, SU2Irrep}}, 4, 2, 3, Nothing}}:
     FusionTree{Irrep[SU₂ × SU₂]}(((1/2, 1/2), (1/2, 1/2), (1/2, 1/2), (1/2, 1/2)), (0, 0), (false, false, false, false), ((0, 0), (1/2, 1/2)))
     FusionTree{Irrep[SU₂ × SU₂]}(((1/2, 1/2), (1/2, 1/2), (1/2, 1/2), (1/2, 1/2)), (0, 0), (false, false, false, false), ((1, 0), (1/2, 1/2)))
     FusionTree{Irrep[SU₂ × SU₂]}(((1/2, 1/2), (1/2, 1/2), (1/2, 1/2), (1/2, 1/2)), (0, 0), (false, false, false, false), ((0, 1), (1/2, 1/2)))
     FusionTree{Irrep[SU₂ × SU₂]}(((1/2, 1/2), (1/2, 1/2), (1/2, 1/2), (1/2, 1/2)), (0, 0), (false, false, false, false), ((1, 1), (1/2, 1/2)))

     0.618034   0.786151
     0.786151  -0.618034
    julia> Fτ'*Fτ2×2 Matrix{Float64}:
     1.0  0.0
     0.0  1.0
    julia> polar(x) = rationalize.((abs(x), angle(x)/(2pi)))polar (generic function with 1 method)
    julia> Rsymbol(τ,τ,𝟙) |> polar(1//1, 2//5)
    julia> Rsymbol(τ,τ,τ) |> polar(1//1, -3//10)
    julia> twist(τ) |> polar(1//1, -2//5)

        codomain::P1
        domain::P2
    end

and can create it as either domain → codomain or codomain ← domain (where the arrows are obtained as \to+TAB or \rightarrow+TAB, and \leftarrow+TAB, respectively). The reason for first listing the codomain and then the domain will become clear in the section on tensor maps.

    Note that HomSpace is not a subtype of VectorSpace, i.e. we restrict the latter to denote certain categories and their objects, and keep HomSpace distinct. However, HomSpace has a number of properties defined, which we illustrate via examples

    julia> W = ℂ^2 ⊗ ℂ^3 → ℂ^3 ⊗ dual(ℂ^4)(ℂ^3 ⊗ (ℂ^4)') ← (ℂ^2 ⊗ ℂ^3)
    julia> field(W)
    julia> dual(W)((ℂ^3)' ⊗ (ℂ^2)') ← (ℂ^4 ⊗ (ℂ^3)')
    julia> adjoint(W)(ℂ^2 ⊗ ℂ^3) ← (ℂ^3 ⊗ (ℂ^4)')
    julia> spacetype(W)ComplexSpace
    julia> spacetype(typeof(W))ComplexSpace
    julia> W[1]ℂ^3
    julia> W[2](ℂ^4)'
    julia> W[3](ℂ^2)'
    julia> W[4](ℂ^3)'
    julia> dim(W)72

Note that indexing W yields first the spaces in the codomain, followed by the dual of the spaces in the domain. This particular convention is useful in combination with instances of type TensorMap, which represent morphisms living in such a HomSpace. Also note that dim(W) here seems to be the product of the dimensions of the individual spaces, but this is no longer true once symmetries are involved. In all cases, dim(::HomSpace) represents the number of linearly independent morphisms in this space.

    Partial order among vector spaces

Vector spaces of the same spacetype can be given a partial order, based on whether there exist injective morphisms (a.k.a. monomorphisms) or surjective morphisms (a.k.a. epimorphisms) between them. In particular, we define ismonomorphic(V1, V2), with Unicode synonym V1 ≾ V2 (obtained as \precsim+TAB), to express whether there exist monomorphisms in V1→V2. Similarly, we define isepimorphic(V1, V2), with Unicode synonym V1 ≿ V2 (obtained as \succsim+TAB), to express whether there exist epimorphisms in V1→V2. Finally, we define isisomorphic(V1, V2), with Unicode alternative V1 ≅ V2 (obtained as \cong+TAB), to express whether there exist isomorphisms in V1→V2. In particular, V1 ≅ V2 if and only if V1 ≾ V2 && V1 ≿ V2.

For completeness, we also export the strict comparison operators ≺ and ≻ (obtained as \prec+TAB and \succ+TAB), with definitions

    ≺(V1::VectorSpace, V2::VectorSpace) = V1 ≾ V2 && !(V1 ≿ V2)
≻(V1::VectorSpace, V2::VectorSpace) = V1 ≿ V2 && !(V1 ≾ V2)

    However, as we expect these to be less commonly used, no ASCII alternative is provided.

In the context of InnerProductStyle(V) <: EuclideanProduct, V1 ≾ V2 implies that there exist isometries $W:V1 → V2$ such that $W^† ∘ W = \mathrm{id}_{V1}$, while V1 ≅ V2 implies that there exist unitaries $U:V1→V2$ such that $U^† ∘ U = \mathrm{id}_{V1}$ and $U ∘ U^† = \mathrm{id}_{V2}$.

    Note that spaces that are isomorphic are not necessarily equal. One can be a dual space, and the other a normal space, or one can be an instance of ProductSpace, while the other is an ElementarySpace. There will exist (infinitely) many isomorphisms between the corresponding spaces, but in general none of those will be canonical.

There are also a number of convenience functions to create isomorphic spaces. The function fuse(V1, V2, ...) or fuse(V1 ⊗ V2 ⊗ ...) returns an elementary space that is isomorphic to V1 ⊗ V2 ⊗ .... The function flip(V::ElementarySpace) returns a space that is isomorphic to V but has isdual(flip(V)) == isdual(V'), i.e. if V is a normal space then flip(V) is a dual space. flip(V) is different from dual(V) in the case of GradedSpace. It is useful to flip a tensor index from a ket to a bra (or vice versa), by contracting that index with a unitary map from V1 to flip(V1). We refer to Index operations for further information. Some examples:

    julia> ℝ^3 ≾ ℝ^5true
    julia> ℂ^3 ≾ (ℂ^5)'true
    julia> (ℂ^5) ≅ (ℂ^5)'true
    julia> fuse(ℝ^5, ℝ^3)ℝ^15
    julia> fuse(ℂ^3, (ℂ^5)' ⊗ ℂ^2)ℂ^30
    julia> fuse(ℂ^3, (ℂ^5)') ⊗ ℂ^2 ≅ fuse(ℂ^3, (ℂ^5)', ℂ^2) ≅ ℂ^3 ⊗ (ℂ^5)' ⊗ ℂ^2true
    julia> flip(ℂ^4)(ℂ^4)'
    julia> flip(ℂ^4) ≅ ℂ^4true
    julia> flip(ℂ^4) == ℂ^4false

We also define the direct sum of V1 and V2 as V1 ⊕ V2, where ⊕ is obtained by typing \oplus+TAB. This is possible only if isdual(V1) == isdual(V2). With a little pun on Julia Base, oneunit applied to an elementary space (in the value or type domain) returns the one-dimensional space, which is isomorphic to the scalar field of the space itself. Some examples illustrate this better:

    julia> ℝ^5 ⊕ ℝ^3ℝ^8
    julia> ℂ^5 ⊕ ℂ^3ℂ^8
    julia> ℂ^5 ⊕ (ℂ^3)'ERROR: SpaceMismatch("Direct sum of a vector space and its dual does not exist")
    julia> oneunit(ℝ^3)ℝ^1
    julia> ℂ^5 ⊕ oneunit(ComplexSpace)ℂ^6
    julia> oneunit((ℂ^3)')ℂ^1
    julia> (ℂ^5) ⊕ oneunit((ℂ^5))ℂ^6
    julia> (ℂ^5)' ⊕ oneunit((ℂ^5)')ERROR: SpaceMismatch("Direct sum of a vector space and its dual does not exist")

Finally, while spaces have a partial order, there is no unique infimum or supremum of two or more spaces. However, if V1 and V2 are two ElementarySpace instances with isdual(V1) == isdual(V2), then we can define a unique infimum V::ElementarySpace with the same value of isdual that satisfies V ≾ V1 and V ≾ V2, as well as a unique supremum W::ElementarySpace with the same value of isdual that satisfies W ≿ V1 and W ≿ V2. For CartesianSpace and ComplexSpace, this simply amounts to the space with minimal or maximal dimension, i.e.

    julia> infimum(ℝ^5, ℝ^3)ℝ^3
    julia> supremum(ℂ^5, ℂ^3)ℂ^5
    julia> supremum(ℂ^5, (ℂ^3)')ERROR: SpaceMismatch("Supremum of space and dual space does not exist")

    The names infimum and supremum are especially suited in the case of GradedSpace, as the infimum of two spaces might be different from either of those two spaces, and similar for the supremum.


TensorMap(undef, codomain, domain)
    TensorMap(undef, eltype::Type{<:Number}, codomain, domain)

Here, in the first form, f can be any function or object that is called with an argument of type Dims{2} = Tuple{Int,Int} and is such that f((m,n)) creates a DenseMatrix instance with size(f(m,n)) == (m,n). In the second form, f is called as f(eltype,(m,n)). Possibilities for f are randn and rand from Julia Base. TensorKit.jl provides randnormal and randuniform as synonyms for randn and rand, as well as the new function randisometry, alternatively called randhaar, which creates a random isometric m × n matrix w satisfying w'*w ≈ I, distributed according to the Haar measure (this requires m >= n). The third and fourth calling syntaxes use the UndefInitializer from Julia Base and generate a TensorMap with uninitialized data, which could thus contain NaNs.

    In all of these constructors, the last two arguments can be replaced by domain→codomain or codomain←domain, where the arrows are obtained as \rightarrow+TAB and \leftarrow+TAB and create a HomSpace as explained in the section on Spaces of morphisms. Some examples are perhaps in order

julia> t1 = TensorMap(randnormal, ℂ^2 ⊗ ℂ^3, ℂ^2)TensorMap((ℂ^2 ⊗ ℂ^3) ← ℂ^2):
     [:, :, 1] =
      1.2447879199223042  -0.032127203347427635  -0.28865417962648265
     -0.6737027479483797  -0.15285701220900527    0.20845876742024894
     
     [:, :, 2] =
      0.5009109951775879  -1.051436297363579   0.822770127716411
     -1.145492313438669   -1.3014100962702244  0.9682617951619008
    julia> t2 = TensorMap(randisometry, Float32, ℂ^2 ⊗ ℂ^3 ← ℂ^2)ERROR: UndefVarError: `_leftorth!` not defined
    julia> t3 = TensorMap(undef, ℂ^2 → ℂ^2 ⊗ ℂ^3)TensorMap((ℂ^2 ⊗ ℂ^3) ← ℂ^2):
     [:, :, 1] =
     6.90381204142735e-310  6.90381204142735e-310  6.90381204142735e-310
     6.90381204142735e-310  6.90381204142735e-310  6.90381204142735e-310
     
     [:, :, 2] =
     6.90381204142735e-310  6.90381204142735e-310  6.90381204142735e-310
     6.90381204142735e-310  6.90381204142735e-310  6.90381204142735e-310
    julia> domain(t1) == domain(t2) == domain(t3)ERROR: UndefVarError: `t2` not defined
    julia> codomain(t1) == codomain(t2) == codomain(t3)ERROR: UndefVarError: `t2` not defined
    julia> disp(x) = show(IOContext(Core.stdout, :compact=>false), "text/plain", trunc.(x; digits = 3));
    julia> t1[] |> disp2×3×2 StridedViews.StridedView{Float64, 3, Array{Float64, 3}, typeof(identity)}:
     [:, :, 1] =
      1.244  -0.032  -0.288
     -0.673  -0.152   0.208
     
     [:, :, 2] =
      0.5    -1.051  0.822
     -1.145  -1.301  0.968
    julia> block(t1, Trivial()) |> disp6×2 Array{Float64, 2}:
      1.244   0.5
     -0.673  -1.145
     -0.032  -1.051
     -0.152  -1.301
     -0.288   0.822
      0.208   0.968
    julia> reshape(t1[], dim(codomain(t1)), dim(domain(t1))) |> disp6×2 Array{Float64, 2}:
      1.244   0.5
     -0.673  -1.145
     -0.032  -1.051
     -0.152  -1.301
     -0.288   0.822
      0.208   0.968

    Finally, all constructors can also be replaced by Tensor(..., codomain), in which case the domain is assumed to be the empty ProductSpace{S,0}(), which can easily be obtained as one(codomain). Indeed, the empty product space is the unit object of the monoidal category, equivalent to the field of scalars 𝕜, and thus the multiplicative identity (especially since * also acts as tensor product on vector spaces).

The matrices created by f are the matrices $B_c$ discussed above, i.e. those returned by block(t, c). Only numerical matrices of type DenseMatrix are accepted, which in practice just means Julia's intrinsic Matrix{T} for some T<:Number. In the future, we will add support for CuMatrix from CuArrays.jl to harness GPU computing power, and maybe SharedArray from Julia's SharedArrays standard library.

Support for static or sparse data is currently unavailable; if it were implemented, it would lead to new subtypes of AbstractTensorMap which are distinct from TensorMap. Future implementations of e.g. SparseTensorMap or StaticTensorMap could be useful. Furthermore, there could be specific implementations for tensors whose blocks are Diagonal.

    Tensor maps from existing data

    To create a TensorMap with existing data, one can use the aforementioned form but with the function f replaced with the actual data, i.e. TensorMap(data, codomain, domain) or any of its equivalents.

Here, data can be of two types. It can be a dictionary (any Associative subtype) which has blocksectors c of type sectortype(codomain) as keys, and the corresponding matrix blocks as values, i.e. data[c] is some DenseMatrix of size (blockdim(codomain, c), blockdim(domain, c)). This is the form in which the data is stored within the TensorMap objects.
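A minimal sketch of the dictionary form, using the ℤ₂-graded spaces that also appear in the examples below (the block sizes follow from the stated degeneracies):

```julia
using TensorKit

V1 = ℤ₂Space(0 => 3, 1 => 2)
V2 = ℤ₂Space(0 => 2, 1 => 1)

# one DenseMatrix per block sector c, of size (blockdim(V1, c), blockdim(V2, c))
data = Dict(Irrep[ℤ₂](0) => randn(3, 2),
            Irrep[ℤ₂](1) => randn(2, 1))

m = TensorMap(data, V1, V2)
block(m, Irrep[ℤ₂](0)) == data[Irrep[ℤ₂](0)]  # blocks are stored as provided
```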

    For those space types for which a TensorMap can be converted to a plain multidimensional array, the data can also be a general DenseArray, either of rank N₁+N₂ and with matching size (dims(codomain)..., dims(domain)...), or just as a DenseMatrix with size (dim(codomain), dim(domain)). This is true in particular if the sector type is Trivial, e.g. for CartesianSpace or ComplexSpace. Then the data array is just reshaped into matrix form and referred to as such in the resulting TensorMap instance. When spacetype is GradedSpace, the TensorMap constructor will try to reconstruct the tensor data such that the resulting tensor t satisfies data == convert(Array, t). This might not be possible, if the data does not respect the symmetry structure. Let's sketch this with a simple example



    julia> data = zeros(2,2,2,2)2×2×2×2 Array{Float64, 4}:
     [:, :, 1, 1] =
      0.0  0.0
      0.0  0.0
      1.0

Hence, we recognize that the Heisenberg interaction has eigenvalue $-1$ in the coupled spin zero sector (SU2Irrep(0)), and eigenvalue $+1$ in the coupled spin 1 sector (SU2Irrep(1)). Using Irrep[U₁] instead, we observe that both coupled charge U1Irrep(+1) and U1Irrep(-1) have eigenvalue $+1$. The coupled charge U1Irrep(0) sector is two-dimensional, and has an eigenvalue $+1$ and an eigenvalue $-1$.

To construct the proper data in more complicated cases, one has to know where to find each sector in the range 1:dim(V) of every index i with associated space V, as well as the internal structure of the representation space when the corresponding sector c has dim(c)>1, i.e. in the case of FusionStyle(c) isa MultipleFusion. Currently, the only non-abelian sectors are Irrep[SU₂] and Irrep[CU₁], for which the internal structure is the natural one.

There are some tools available to facilitate finding the proper range of sector c in space V, namely axes(V, c). This also works on a ProductSpace, with a tuple of sectors. An example:

    julia> V = SU2Space(0=>3, 1=>2, 2=>1)Rep[SU₂](0=>3, 1=>2, 2=>1)
    julia> P = V ⊗ V ⊗ V(Rep[SU₂](0=>3, 1=>2, 2=>1) ⊗ Rep[SU₂](0=>3, 1=>2, 2=>1) ⊗ Rep[SU₂](0=>3, 1=>2, 2=>1))
    julia> axes(P, (SU2Irrep(1), SU2Irrep(0), SU2Irrep(2)))(4:9, 1:3, 10:14)

    Note that the length of the range is the degeneracy dimension of that sector, times the dimension of the internal representation space, i.e. the quantum dimension of that sector.
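For the example above, this can be verified explicitly (a sketch):

```julia
using TensorKit

V = SU2Space(0 => 3, 1 => 2, 2 => 1)

r = axes(V, SU2Irrep(1))           # the range 4:9 shown above
length(r) == 2 * dim(SU2Irrep(1))  # degeneracy n_a = 2 times quantum dimension d_a = 3
```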

    Constructing similar tensors

    A third way to construct a TensorMap instance is to use Base.similar, i.e.

    similar(t [, T::Type{<:Number}, codomain, domain])

    where T is a possibly different eltype for the tensor data, and codomain and domain optionally define a new codomain and domain for the resulting tensor. By default, these values just take the value from the input tensor t. The result will be a new TensorMap instance, with undef data, but whose data is stored in the same subtype of DenseMatrix (e.g. Matrix or CuMatrix or ...) as t. In particular, this uses the methods storagetype(t) and TensorKit.similarstoragetype(t, T).
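For instance (a sketch; the optional arguments follow the signature above):

```julia
using TensorKit

t = TensorMap(randn, ℂ^2 ⊗ ℂ^3, ℂ^2)

t2 = similar(t)                               # same spaces and eltype, undef data
t3 = similar(t, ComplexF64)                   # same spaces, complex eltype
t4 = similar(t, ComplexF64, ℂ^2 ⊗ ℂ^2, ℂ^2)   # new codomain and domain as well
```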

    Special purpose constructors

Finally, there are methods zero, one, id, isomorphism, unitary and isometry to create specific new tensors. Tensor maps behave as vectors and can be added (if they have the same domain and codomain); zero(t) is the additive identity, i.e. a TensorMap instance where all entries are zero. For a t::TensorMap with domain(t) == codomain(t), i.e. an endomorphism, one(t) creates the identity tensor, i.e. the identity under composition. As discussed in the section on linear algebra operations, we denote composition of tensor maps with the multiplication operator *, such that one(t) is the multiplicative identity. Similarly, it can be created as id(V) with V the relevant vector space, e.g. one(t) == id(domain(t)). The identity tensor is currently represented with dense data, and one can use id(A::Type{<:DenseMatrix}, V) to specify the type of DenseMatrix (and its eltype), e.g. A = Matrix{Float64}. Finally, it often occurs that we want to construct a specific isomorphism between two spaces that are isomorphic but not equal, and for which there is no canonical choice. Hereto, one can use the method u = isomorphism([A::Type{<:DenseMatrix}, ] codomain, domain), which will explicitly check that the domain and codomain are isomorphic, and return an error otherwise. Again, an optional first argument can be given to specify the specific type of DenseMatrix that is currently used to store the rather trivial data of this tensor. If InnerProductStyle(u) <: EuclideanProduct, the same result can be obtained with the method u = unitary([A::Type{<:DenseMatrix}, ] codomain, domain). Note that reversing the domain and codomain yields the inverse morphism, which in the case of EuclideanProduct coincides with the adjoint morphism, i.e. isomorphism(A, domain, codomain) == adjoint(u) == inv(u), where inv and adjoint will be further discussed below. Finally, if two spaces V1 and V2 are such that V2 can be embedded in V1, i.e. 
there exists an inclusion with a left inverse, and furthermore they represent tensor products of some ElementarySpace with EuclideanProduct, the function w = isometry([A::Type{<:DenseMatrix}, ], V1, V2) creates one specific isometric embedding, such that adjoint(w) * w == id(V2) and w * adjoint(w) is some hermitian idempotent (a.k.a. orthogonal projector) acting on V1. An error will be thrown if such a map cannot be constructed for the given domain and codomain.
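    A minimal sketch of these constructors (illustrative only, assuming Euclidean inner products and plain complex spaces `ℂ^n`):

    ```julia
    using TensorKit

    t = TensorMap(randn, ℂ^3, ℂ^3)

    t + zero(t) == t            # zero(t) is the additive identity
    one(t) * t ≈ t              # one(t) is the identity under composition
    one(t) == id(domain(t))     # the same identity, built from the space itself

    # an isomorphism between isomorphic but unequal spaces (both have dim 4):
    u = isomorphism(ℂ^4, ℂ^2 ⊗ ℂ^2)
    u' * u ≈ id(ℂ^2 ⊗ ℂ^2)      # for Euclidean spaces, `unitary` gives the same result

    # an isometric embedding of the smaller space ℂ^3 into ℂ^4:
    w = isometry(ℂ^4, ℂ^3)
    w' * w ≈ id(ℂ^3)            # adjoint(w) * w is the identity on ℂ^3
    ```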

    Let's conclude this section with some examples using GradedSpace.

    julia> V1 = ℤ₂Space(0=>3,1=>2)
    Rep[ℤ₂](0=>3, 1=>2)

    julia> V2 = ℤ₂Space(0=>2,1=>1)
    Rep[ℤ₂](0=>2, 1=>1)

    julia> # First a `TensorMap{ℤ₂Space, 1, 1}`
           m = TensorMap(randn, V1, V2)
    TensorMap(Rep[ℤ₂](0=>3, 1=>2) ← Rep[ℤ₂](0=>2, 1=>1)):
    * Data for sector (Irrep[ℤ₂](0),) ← (Irrep[ℤ₂](0),):
      1.1903120506293698  -1.3972436487354318
     -1.9511465541693898   0.11810612297827527
      1.2609529113481335   1.5305965237545536
    * Data for sector (Irrep[ℤ₂](1),) ← (Irrep[ℤ₂](1),):
     -0.6032916502362278
     -0.21353068411482365

    julia> convert(Array, m) |> disp
    5×3 Array{Float64, 2}:
      1.19   -1.397   0.0
     -1.951   0.118   0.0
      1.26    1.53    0.0
      0.0     0.0    -0.603
      0.0     0.0    -0.213

    julia> # compare with:
           block(m, Irrep[ℤ₂](0)) |> disp
    3×2 Array{Float64, 2}:
      1.19   -1.397
     -1.951   0.118
      1.26    1.53

    julia> block(m, Irrep[ℤ₂](1)) |> disp
    2×1 Array{Float64, 2}:
     -0.603
     -0.213

    julia> # Now a `TensorMap{ℤ₂Space, 2, 2}`
           t = TensorMap(randn, V1 ⊗ V1, V2 ⊗ V2')
    TensorMap((Rep[ℤ₂](0=>3, 1=>2) ⊗ Rep[ℤ₂](0=>3, 1=>2)) ← (Rep[ℤ₂](0=>2, 1=>1) ⊗ Rep[ℤ₂](0=>2, 1=>1)')):
    * Data for sector (Irrep[ℤ₂](0), Irrep[ℤ₂](0)) ← (Irrep[ℤ₂](0), Irrep[ℤ₂](0)):
    [:, :, 1, 1] =
      0.15623302033935416   0.32519372231320987  -0.8191208728633551
     -0.4043321973919456   -0.3487150506612369   -1.9766325021586484
      0.6432468861208477   -0.14281364190769597   0.014443635007373608
    ⋮

    julia> (array = convert(Array, t)) |> disp
    5×5×3×3 Array{Float64, 4}:
    [:, :, 1, 1] =
      0.156   0.325  -0.819   0.0     0.0
     -0.404  -0.348  -1.976   0.0     0.0
      0.643  -0.142   0.014   0.0     0.0
      0.0     0.0     0.0    -0.261   0.391
      0.0     0.0     0.0     0.34    0.537
    ⋮

    julia> d1 = dim(codomain(t))
    25

    julia> d2 = dim(domain(t))
    9

    julia> (matrix = reshape(array, d1, d2)) |> disp
    25×9 Array{Float64, 2}:
      0.156  -0.702   0.0     0.481  -0.049   0.0   0.0   0.0  -1.505
     -0.404   1.303   0.0     1.298   1.714   0.0   0.0   0.0  -1.382
      0.643  -0.736   0.0    -0.213   0.109   0.0   0.0   0.0   0.223
    ⋮

    julia> (u = reshape(convert(Array, unitary(codomain(t), fuse(codomain(t)))), d1, d1)) |> disp
    25×25 Array{Float64, 2}:
     1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
     0.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
     0.0  0.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
    ⋮

    julia> v = reshape(convert(Array, unitary(domain(t), fuse(domain(t)))), d2, d2);

    julia> u'*u ≈ I ≈ v'*v
    true

    julia> (u'*matrix*v) |> disp
    25×9 Array{Float64, 2}:
      0.156  -0.702   0.481  -0.049  -1.505   0.0     0.0     0.0     0.0
     -0.404   1.303   1.298   1.714  -1.382   0.0     0.0     0.0     0.0
      0.643  -0.736  -0.213   0.109   0.223   0.0     0.0     0.0     0.0
    ⋮
      0.0     0.0     0.0     0.0     0.0    -2.022   0.913  -0.387  -0.59
      0.0     0.0     0.0     0.0     0.0    -1.527  -1.104  -0.382   1.375

    julia> # compare with:
           block(t, Z2Irrep(0)) |> disp
    13×5 Array{Float64, 2}:
      0.156  -0.702   0.481  -0.049  -1.505
     -0.404   1.303   1.298   1.714  -1.382
      0.643  -0.736  -0.213   0.109   0.223
    ⋮

    julia> block(t, Z2Irrep(1)) |> disp
    12×4 Array{Float64, 2}:
     -2.64    1.202  -1.13   -1.216
     -0.454  -0.872   0.387  -1.275
    ⋮
    Here, we illustrated some additional concepts. Firstly, note that we can convert a TensorMap to an Array. This only works when sectortype(t) supports fusiontensor, and in particular when BraidingStyle(sectortype(t)) == Bosonic(), e.g. in the case of trivial tensors (the category $\mathbf{Vect}$) and group representations (the category $\mathbf{Rep}_{\mathsf{G}}$, which can be interpreted as a subcategory of $\mathbf{Vect}$). Here, we are in this case with $\mathsf{G} = ℤ₂$. For a TensorMap{S,1,1}, the blocks directly correspond to the diagonal blocks in the block-diagonal structure of its representation as an Array; there is no basis transform in between. This is no longer the case for TensorMap{S,N₁,N₂} with different values of N₁ and N₂. Here, we used the operation fuse(V), which creates an ElementarySpace that is isomorphic to a given space V (of type ProductSpace or ElementarySpace). The specific map between those two spaces, constructed using the method unitary, implements precisely the basis change from the product basis to the coupled basis. In this case, for a group G with FusionStyle(Irrep[G]) isa UniqueFusion, it is a permutation matrix. By specifically choosing V equal to the codomain and domain of t, we can construct the explicit basis transforms that bring t into block-diagonal form.
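    The role of fuse and unitary in this construction can be summarised in a small sketch (illustrative only, assuming a Euclidean inner product):

    ```julia
    using TensorKit

    V = ℤ₂Space(0=>2, 1=>1)
    P = V ⊗ V                  # a ProductSpace
    F = fuse(P)                # an ElementarySpace isomorphic to P

    dim(F) == dim(P)           # true: fusing preserves the total dimension
    u = unitary(F, P)          # basis change: product basis -> coupled basis
    u' * u ≈ id(P)             # u is unitary ...
    u * u' ≈ id(F)             # ... in both directions
    ```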

    Let's repeat the same exercise for I = Irrep[SU₂], which has FusionStyle(I) isa MultipleFusion.

    julia> V1 = SU₂Space(0=>2,1=>1)
    Rep[SU₂](0=>2, 1=>1)

    julia> V2 = SU₂Space(0=>1,1=>1)
    Rep[SU₂](0=>1, 1=>1)

    julia> # First a `TensorMap{SU₂Space, 1, 1}`
           m = TensorMap(randn, V1, V2)
    TensorMap(Rep[SU₂](0=>2, 1=>1) ← Rep[SU₂](0=>1, 1=>1)):
    * Data for fusiontree FusionTree{Irrep[SU₂]}((0,), 0, (false,), ()) ← FusionTree{Irrep[SU₂]}((0,), 0, (false,), ()):
     -1.7558345815622265
     -0.03797957697225863
    * Data for fusiontree FusionTree{Irrep[SU₂]}((1,), 1, (false,), ()) ← FusionTree{Irrep[SU₂]}((1,), 1, (false,), ()):
      0.9608541106703924

    julia> convert(Array, m) |> disp
    5×4 Array{Float64, 2}:
     -1.755   0.0    0.0    0.0
     -0.037   0.0    0.0    0.0
      0.0     0.96   0.0    0.0
      0.0     0.0    0.96   0.0
      0.0     0.0    0.0    0.96

    julia> # compare with:
           block(m, Irrep[SU₂](0)) |> disp
    2×1 Array{Float64, 2}:
     -1.755
     -0.037

    julia> block(m, Irrep[SU₂](1)) |> disp
    1×1 Array{Float64, 2}:
      0.96

    julia> # Now a `TensorMap{SU₂Space, 2, 2}`
           t = TensorMap(randn, V1 ⊗ V1, V2 ⊗ V2')
    TensorMap((Rep[SU₂](0=>2, 1=>1) ⊗ Rep[SU₂](0=>2, 1=>1)) ← (Rep[SU₂](0=>1, 1=>1) ⊗ Rep[SU₂](0=>1, 1=>1)')):
    * Data for fusiontree FusionTree{Irrep[SU₂]}((0, 0), 0, (false, false), ()) ← FusionTree{Irrep[SU₂]}((0, 0), 0, (false, true), ()):
    [:, :, 1, 1] =
     -0.7967802130254111  -0.008354139826547743
      1.2882688397963515  -1.1010745597338922
    * Data for fusiontree FusionTree{Irrep[SU₂]}((1, 1), 0, (false, false), ()) ← FusionTree{Irrep[SU₂]}((0, 0), 0, (false, true), ()):
    [:, :, 1, 1] =
      0.0997651074190818
    ⋮

    julia> (array = convert(Array, t)) |> disp
    5×5×4×4 Array{Float64, 4}:
    [:, :, 1, 1] =
     -0.796  -0.008   0.0     0.0     0.0
      1.288  -1.101   0.0     0.0     0.0
      0.0     0.0     0.0     0.0     0.057
      0.0     0.0     0.0    -0.057   0.0
      0.0     0.0     0.057   0.0     0.0
    ⋮

    julia> d1 = dim(codomain(t))
    25

    julia> d2 = dim(domain(t))
    16

    julia> (matrix = reshape(array, d1, d2)) |> disp
    25×16 Array{Float64, 2}:
     -0.796  0.0  0.0  0.0  0.0  -1.106  0.0  0.0  0.0  0.0  -1.106  0.0  0.0  0.0  0.0  -1.106
      1.288  0.0  0.0  0.0  0.0   1.172  0.0  0.0  0.0  0.0   1.172  0.0  0.0  0.0  0.0   1.172
    ⋮
    julia> (u = reshape(convert(Array, unitary(codomain(t), fuse(codomain(t)))), d1, d1)) |> disp25×25 Array{Float64, 2}: 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 @@ -619,57 +619,57 @@ 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.999 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.707 0.0 0.0 0.0 0.707 0.0 0.0 0.0 0.0 0.577 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.707 0.0 0.0 0.0 0.408 0.0 0.0
    julia> u'*u ≈ I ≈ v'*vtrue
    julia> (u'*matrix*v) |> disp25×16 Array{Float64, 2}: - 0.285 1.848 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 0.0 -0.0 0.0 0.0 - 0.192 1.159 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 0.0 0.0 0.0 0.0 - -1.095 1.545 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 0.0 -0.0 0.0 0.0 - -0.672 0.263 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 0.0 -0.0 0.0 0.0 - -0.754 -1.114 0.0 -0.0 0.0 0.0 -0.0 0.0 0.0 -0.0 0.0 0.0 0.0 -0.0 0.0 0.0 - 0.0 0.0 -0.218 0.0 0.0 -0.71 0.0 0.0 0.323 0.0 0.0 0.0 0.0 0.0 0.0 0.0 - 0.0 0.0 0.0 -0.218 0.0 0.0 -0.71 0.0 0.0 0.323 0.0 0.0 0.0 -0.0 0.0 0.0 - 0.0 0.0 0.0 0.0 -0.218 0.0 0.0 -0.71 0.0 0.0 0.323 0.0 0.0 0.0 0.0 0.0 - 0.0 0.0 -0.568 0.0 0.0 0.232 0.0 0.0 -0.538 0.0 0.0 0.0 -0.0 0.0 0.0 0.0 - 0.0 -0.0 0.0 -0.568 0.0 0.0 0.232 0.0 0.0 -0.538 0.0 0.0 0.0 -0.0 0.0 0.0 - 0.0 0.0 0.0 0.0 -0.568 0.0 0.0 0.232 0.0 0.0 -0.538 0.0 0.0 0.0 -0.0 0.0 - 0.0 0.0 0.6 0.0 0.0 0.966 0.0 0.0 2.352 0.0 0.0 0.0 -0.0 0.0 0.0 0.0 - 0.0 0.0 0.0 0.6 0.0 0.0 0.966 0.0 0.0 2.352 0.0 0.0 0.0 -0.0 0.0 0.0 - 0.0 0.0 0.0 0.0 0.6 0.0 0.0 0.966 0.0 0.0 2.352 0.0 0.0 0.0 -0.0 0.0 - 0.0 0.0 2.139 0.0 0.0 1.368 0.0 0.0 0.095 0.0 0.0 0.0 -0.0 0.0 0.0 0.0 - 0.0 0.0 0.0 2.139 0.0 0.0 1.368 0.0 0.0 0.095 0.0 0.0 0.0 -0.0 0.0 0.0 - 0.0 0.0 0.0 0.0 2.139 0.0 0.0 1.368 0.0 0.0 0.095 0.0 0.0 0.0 -0.0 0.0 - 0.0 0.0 0.902 0.0 0.0 2.532 0.0 0.0 -0.12 0.0 0.0 0.0 0.0 0.0 0.0 0.0 - 0.0 -0.0 0.0 0.902 0.0 0.0 2.532 0.0 0.0 -0.12 0.0 0.0 0.0 0.0 0.0 0.0 - 0.0 0.0 0.0 0.0 0.902 0.0 0.0 2.532 0.0 0.0 -0.12 0.0 0.0 0.0 0.0 0.0 - 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.672 0.0 0.0 0.0 0.0 - 0.0 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 -0.0 0.0 0.0 0.0 0.672 0.0 0.0 0.0 - 0.0 -0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 0.0 0.672 0.0 0.0 - 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 -0.0 0.0 0.0 0.0 0.672 0.0 - 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.672
    julia> # compare with: + -0.796 -1.916 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 + 1.288 2.03 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 0.0 -0.0 0.0 0.0 + -0.008 -0.797 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 + -1.101 0.952 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 + 0.099 -0.866 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 + 0.0 0.0 0.34 0.0 0.0 0.282 0.0 0.0 0.038 0.0 0.0 0.0 0.0 0.0 0.0 0.0 + 0.0 0.0 0.0 0.34 0.0 0.0 0.282 0.0 0.0 0.038 0.0 0.0 0.0 -0.0 0.0 0.0 + 0.0 0.0 0.0 0.0 0.34 0.0 0.0 0.282 0.0 0.0 0.038 0.0 0.0 0.0 0.0 0.0 + 0.0 0.0 -1.615 0.0 0.0 -0.005 0.0 0.0 0.077 0.0 0.0 0.0 0.0 0.0 0.0 0.0 + 0.0 0.0 0.0 -1.615 0.0 0.0 -0.005 0.0 0.0 0.077 0.0 0.0 0.0 0.0 0.0 0.0 + 0.0 0.0 0.0 0.0 -1.615 0.0 0.0 -0.005 0.0 0.0 0.077 0.0 0.0 0.0 0.0 0.0 + 0.0 0.0 0.5 0.0 0.0 -0.363 0.0 0.0 0.598 0.0 0.0 0.0 0.0 0.0 0.0 0.0 + 0.0 -0.0 0.0 0.5 0.0 0.0 -0.363 0.0 0.0 0.598 0.0 0.0 0.0 -0.0 0.0 0.0 + 0.0 0.0 0.0 0.0 0.5 0.0 0.0 -0.363 0.0 0.0 0.598 0.0 0.0 0.0 0.0 0.0 + 0.0 0.0 0.66 0.0 0.0 0.213 0.0 0.0 0.688 0.0 0.0 0.0 0.0 0.0 0.0 0.0 + 0.0 0.0 0.0 0.66 0.0 0.0 0.213 0.0 0.0 0.688 0.0 0.0 0.0 -0.0 0.0 0.0 + 0.0 0.0 0.0 0.0 0.66 0.0 0.0 0.213 0.0 0.0 0.688 0.0 0.0 0.0 0.0 0.0 + 0.0 0.0 -0.239 0.0 0.0 0.037 0.0 0.0 -1.375 0.0 0.0 0.0 0.0 0.0 0.0 0.0 + 0.0 -0.0 0.0 -0.239 0.0 0.0 0.037 0.0 0.0 -1.375 0.0 0.0 0.0 -0.0 0.0 0.0 + 0.0 0.0 0.0 0.0 -0.239 0.0 0.0 0.037 0.0 0.0 -1.375 0.0 0.0 0.0 0.0 0.0 + 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -1.748 0.0 0.0 0.0 0.0 + 0.0 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 0.0 0.0 0.0 0.0 -1.748 0.0 0.0 0.0 + 0.0 -0.0 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 -0.0 0.0 0.0 0.0 -1.748 0.0 0.0 + 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 0.0 0.0 0.0 0.0 -1.748 0.0 + 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -1.748
    julia> # compare with: block(t, SU2Irrep(0)) |> disp5×2 Array{Float64, 2}: - 0.285 1.848 - 0.192 1.159 - -1.095 1.545 - -0.672 0.263 - -0.754 -1.114
    julia> block(t, SU2Irrep(1)) |> disp5×3 Array{Float64, 2}: - -0.218 -0.71 0.323 - -0.568 0.232 -0.538 - 0.6 0.966 2.352 - 2.139 1.368 0.095 - 0.902 2.532 -0.12
    julia> block(t, SU2Irrep(2)) |> disp1×1 Array{Float64, 2}: - 0.672

    Note that the basis transforms u and v are no longer permutation matrices, but are still unitary. Furthermore, note that they render the tensor block diagonal, but that now every element of the diagonal blocks labeled by c comes itself in a tensor product with an identity matrix of size dim(c), i.e. dim(SU2Irrep(1)) = 3 and dim(SU2Irrep(2)) = 5.

    Tensor properties

    Given a t::AbstractTensorMap{S,N₁,N₂}, there are various methods to query its properties. The most important are clearly codomain(t) and domain(t). For t::AbstractTensor{S,N}, i.e. t::AbstractTensorMap{S,N,0}, we can use space(t) as a synonym for codomain(t); for a general AbstractTensorMap this has no meaning. We can, however, query space(t, i), the space associated with the ith index. For i ∈ 1:N₁, this corresponds to codomain(t, i) = codomain(t)[i]. For j = i-N₁ ∈ (1:N₂), this corresponds to dual(domain(t, j)) = dual(domain(t)[j]).

    The total number of indices, i.e. N₁+N₂, is given by numind(t), with N₁ == numout(t) and N₂ == numin(t), the number of outgoing and incoming indices. There are also the unexported methods TensorKit.codomainind(t) and TensorKit.domainind(t) which return the tuples (1, 2, …, N₁) and (N₁+1, …, N₁+N₂), and are useful for internal purposes. The type parameter S<:ElementarySpace can be obtained as spacetype(t); the corresponding sector can be obtained directly as sectortype(t) and is Trivial when S != GradedSpace. The underlying field of the scalars of S can also be obtained directly as field(t). This is different from eltype(t), which returns the type of Number in the tensor data, i.e. the type parameter T in the (subtype of) DenseMatrix{T} in which the matrix blocks are stored. Note that during construction, a (one-time) warning is printed if !(T ⊂ field(S)). The specific DenseMatrix{T} subtype in which the tensor data is stored is obtained as storagetype(t). Each of the methods numind, numout, numin, TensorKit.codomainind, TensorKit.domainind, spacetype, sectortype, field, eltype and storagetype also works in the type domain, i.e. they are encoded in typeof(t).

    Finally, there are methods to probe the data, which we already encountered. blocksectors(t) returns an iterator over the different coupled sectors that can be obtained from fusing the uncoupled sectors available in the domain, but they must also be obtained from fusing the uncoupled sectors available in the codomain (i.e. it is the intersection of both blocksectors(codomain(t)) and blocksectors(domain(t))). For a specific sector c ∈ blocksectors(t), block(t, c) returns the corresponding data. Both are obtained together with blocks(t), which returns an iterator over the pairs c=>block(t, c). Furthermore, there is fusiontrees(t) which returns an iterator over splitting-fusion tree pairs (f₁,f₂), for which the corresponding data is given by t[f₁,f₂] (i.e. using Base.getindex).

    Let's again illustrate these methods with an example, continuing with the tensor t from the previous example

    julia> typeof(t)TensorMap{GradedSpace{SU2Irrep, TensorKit.SortedVectorDict{SU2Irrep, Int64}}, 2, 2, SU2Irrep, TensorKit.SortedVectorDict{SU2Irrep, Matrix{Float64}}, FusionTree{SU2Irrep, 2, 0, 1, Nothing}, FusionTree{SU2Irrep, 2, 0, 1, Nothing}}
    julia> codomain(t)(Rep[SU₂](0=>2, 1=>1) ⊗ Rep[SU₂](0=>2, 1=>1))
    julia> domain(t)(Rep[SU₂](0=>1, 1=>1) ⊗ Rep[SU₂](0=>1, 1=>1)')
    julia> space(t,1)Rep[SU₂](0=>2, 1=>1)
    julia> space(t,2)Rep[SU₂](0=>2, 1=>1)
    julia> space(t,3)Rep[SU₂](0=>1, 1=>1)'
    julia> space(t,4)Rep[SU₂](0=>1, 1=>1)
    julia> numind(t)4
    julia> numout(t)2
    julia> numin(t)2
    julia> spacetype(t)GradedSpace{SU2Irrep, TensorKit.SortedVectorDict{SU2Irrep, Int64}}
    julia> sectortype(t)SU2Irrep
    julia> field(t)
    julia> eltype(t)Float64
    julia> storagetype(t)Matrix{Float64} (alias for Array{Float64, 2})
    julia> blocksectors(t)3-element Vector{SU2Irrep}: + -0.796 -1.916 + 1.288 2.03 + -0.008 -0.797 + -1.101 0.952 + 0.099 -0.866
    julia> block(t, SU2Irrep(1)) |> disp5×3 Array{Float64, 2}: + 0.34 0.282 0.038 + -1.615 -0.005 0.077 + 0.5 -0.363 0.598 + 0.66 0.213 0.688 + -0.239 0.037 -1.375
    julia> block(t, SU2Irrep(2)) |> disp1×1 Array{Float64, 2}: + -1.748

    Note that the basis transforms u and v are no longer permutation matrices, but are still unitary. Furthermore, note that they render the tensor block diagonal, but that now every element of the diagonal blocks labeled by c comes itself in a tensor product with an identity matrix of size dim(c), i.e. dim(SU2Irrep(1)) = 3 and dim(SU2Irrep(2)) = 5.

    Tensor properties

    Given a t::AbstractTensorMap{S,N₁,N₂}, there are various methods to query its properties. The most important are clearly codomain(t) and domain(t). For t::AbstractTensor{S,N}, i.e. t::AbstractTensorMap{S,N,0}, we can use space(t) as a synonym for codomain(t); for a general AbstractTensorMap this has no meaning. We can, however, query space(t, i), the space associated with the ith index. For i ∈ 1:N₁, this corresponds to codomain(t, i) = codomain(t)[i]. For j = i-N₁ ∈ (1:N₂), this corresponds to dual(domain(t, j)) = dual(domain(t)[j]).

    The total number of indices, i.e. N₁+N₂, is given by numind(t), with N₁ == numout(t) and N₂ == numin(t), the number of outgoing and incoming indices. There are also the unexported methods TensorKit.codomainind(t) and TensorKit.domainind(t) which return the tuples (1, 2, …, N₁) and (N₁+1, …, N₁+N₂), and are useful for internal purposes. The type parameter S<:ElementarySpace can be obtained as spacetype(t); the corresponding sector can be obtained directly as sectortype(t) and is Trivial when S != GradedSpace. The underlying field of the scalars of S can also be obtained directly as field(t). This is different from eltype(t), which returns the type of Number in the tensor data, i.e. the type parameter T in the (subtype of) DenseMatrix{T} in which the matrix blocks are stored. Note that during construction, a (one-time) warning is printed if !(T ⊂ field(S)). The specific DenseMatrix{T} subtype in which the tensor data is stored is obtained as storagetype(t). Each of the methods numind, numout, numin, TensorKit.codomainind, TensorKit.domainind, spacetype, sectortype, field, eltype and storagetype also works in the type domain, i.e. they are encoded in typeof(t).

    Finally, there are methods to probe the data, which we already encountered. blocksectors(t) returns an iterator over the different coupled sectors that can be obtained from fusing the uncoupled sectors available in the domain, but they must also be obtained from fusing the uncoupled sectors available in the codomain (i.e. it is the intersection of both blocksectors(codomain(t)) and blocksectors(domain(t))). For a specific sector c ∈ blocksectors(t), block(t, c) returns the corresponding data. Both are obtained together with blocks(t), which returns an iterator over the pairs c=>block(t, c). Furthermore, there is fusiontrees(t) which returns an iterator over splitting-fusion tree pairs (f₁,f₂), for which the corresponding data is given by t[f₁,f₂] (i.e. using Base.getindex).

    Let's again illustrate these methods with an example, continuing with the tensor t from the previous example

    julia> typeof(t)TensorMap{GradedSpace{SU2Irrep, TensorKit.SortedVectorDict{SU2Irrep, Int64}}, 2, 2, SU2Irrep, TensorKit.SortedVectorDict{SU2Irrep, Matrix{Float64}}, FusionTree{SU2Irrep, 2, 0, 1, Nothing}, FusionTree{SU2Irrep, 2, 0, 1, Nothing}}
    julia> codomain(t)(Rep[SU₂](0=>2, 1=>1) ⊗ Rep[SU₂](0=>2, 1=>1))
    julia> domain(t)(Rep[SU₂](0=>1, 1=>1) ⊗ Rep[SU₂](0=>1, 1=>1)')
    julia> space(t,1)Rep[SU₂](0=>2, 1=>1)
    julia> space(t,2)Rep[SU₂](0=>2, 1=>1)
    julia> space(t,3)Rep[SU₂](0=>1, 1=>1)'
    julia> space(t,4)Rep[SU₂](0=>1, 1=>1)
    julia> numind(t)4
    julia> numout(t)2
    julia> numin(t)2
    julia> spacetype(t)GradedSpace{SU2Irrep, TensorKit.SortedVectorDict{SU2Irrep, Int64}}
    julia> sectortype(t)SU2Irrep
    julia> field(t)
    julia> eltype(t)Float64
    julia> storagetype(t)Matrix{Float64} (alias for Array{Float64, 2})
    julia> blocksectors(t)3-element Vector{SU2Irrep}: 0 1 2
    julia> blocks(t)TensorKit.SortedVectorDict{SU2Irrep, Matrix{Float64}} with 3 entries: - 0 => [0.285766 1.84816; 0.192969 1.15964; … ; -0.672128 0.263903; -0.754236 -… - 1 => [-0.218551 -0.710503 0.323256; -0.568237 0.232134 -0.538105; … ; 2.13966… - 2 => [0.672948;;]
    julia> block(t, first(blocksectors(t)))5×2 Matrix{Float64}: - 0.285766 1.84816 - 0.192969 1.15964 - -1.09535 1.54522 - -0.672128 0.263903 - -0.754236 -1.11481
    julia> fusiontrees(t)TensorKit.TensorKeyIterator{SU2Irrep, FusionTree{SU2Irrep, 2, 0, 1, Nothing}, FusionTree{SU2Irrep, 2, 0, 1, Nothing}}(TensorKit.SortedVectorDict{SU2Irrep, Dict{FusionTree{SU2Irrep, 2, 0, 1, Nothing}, UnitRange{Int64}}}(0 => Dict(FusionTree{Irrep[SU₂]}((0, 0), 0, (false, false), ()) => 1:4, FusionTree{Irrep[SU₂]}((1, 1), 0, (false, false), ()) => 5:5), 1 => Dict(FusionTree{Irrep[SU₂]}((1, 1), 1, (false, false), ()) => 5:5, FusionTree{Irrep[SU₂]}((1, 0), 1, (false, false), ()) => 1:2, FusionTree{Irrep[SU₂]}((0, 1), 1, (false, false), ()) => 3:4), 2 => Dict(FusionTree{Irrep[SU₂]}((1, 1), 2, (false, false), ()) => 1:1)), TensorKit.SortedVectorDict{SU2Irrep, Dict{FusionTree{SU2Irrep, 2, 0, 1, Nothing}, UnitRange{Int64}}}(0 => Dict(FusionTree{Irrep[SU₂]}((0, 0), 0, (false, true), ()) => 1:1, FusionTree{Irrep[SU₂]}((1, 1), 0, (false, true), ()) => 2:2), 1 => Dict(FusionTree{Irrep[SU₂]}((0, 1), 1, (false, true), ()) => 2:2, FusionTree{Irrep[SU₂]}((1, 1), 1, (false, true), ()) => 3:3, FusionTree{Irrep[SU₂]}((1, 0), 1, (false, true), ()) => 1:1), 2 => Dict(FusionTree{Irrep[SU₂]}((1, 1), 2, (false, true), ()) => 1:1)))
    julia> f1, f2 = first(fusiontrees(t))(FusionTree{Irrep[SU₂]}((0, 0), 0, (false, false), ()), FusionTree{Irrep[SU₂]}((0, 0), 0, (false, true), ()))
    julia> t[f1,f2]2×2×1×1 StridedViews.StridedView{Float64, 4, Matrix{Float64}, typeof(identity)}: -[:, :, 1, 1] = - 0.285766 -1.09535 - 0.192969 -0.672128

    Reading and writing tensors: Dict conversion

    There are no custom or dedicated methods for reading, writing or storing TensorMaps, however, there is the possibility to convert a t::AbstractTensorMap into a Dict, simply as convert(Dict, t). The backward conversion convert(TensorMap, dict) will return a tensor that is equal to t, i.e. t == convert(TensorMap, convert(Dict, t)).

    This conversion relies on the fact that the string representation of objects such as VectorSpace, FusionTree or Sector constitutes valid code to recreate the object. Hence, we store information about the domain and codomain of the tensor, and the sector associated with each data block, as a String obtained with repr. This provides the flexibility to change the internal structure of such objects without breaking the ability to load older data files. The resulting dictionary can then be stored using any of the available Julia packages such as JLD.jl, JLD2.jl, BSON.jl, JSON.jl, ...

    Vector space and linear algebra operations

    AbstractTensorMap instances t represent linear maps, i.e. homomorphisms in a 𝕜-linear category, just like matrices. To a large extent, they follow the interface of Matrix in Julia's LinearAlgebra standard library. Many methods from LinearAlgebra are (re)exported by TensorKit.jl, and can thus be used without loading LinearAlgebra explicitly. In all of the following methods, the implementation acts directly on the underlying matrix blocks (typically using the same method) and never needs to perform any basis transforms.

    In particular, AbstractTensorMap instances can be composed, provided the domain of the first object coincides with the codomain of the second. Composing tensor maps uses the regular multiplication symbol as in t = t1*t2, which is also used for matrix multiplication. TensorKit.jl also supports (and exports) the mutating method mul!(t, t1, t2). We can then also try to invert a tensor map using inv(t), though this can only exist if the domain and codomain are isomorphic, which can e.g. be checked as fuse(codomain(t)) == fuse(domain(t)). If the inverse is composed with another tensor t2, we can use the syntax t1\t2 or t2/t1. However, this syntax also accepts instances t1 whose domain and codomain are not isomorphic, and then amounts to pinv(t1), the Moore-Penrose pseudoinverse. This, however, is only really justified as minimizing the least squares problem if InnerProductStyle(t) <: EuclideanProduct.

    AbstractTensorMap instances themselves behave as vectors (i.e. they are 𝕜-linear), so they can be multiplied by scalars and, if they live in the same space, i.e. have the same domain and codomain, they can be added to each other. There is also a zero(t), the additive identity, which produces a zero tensor with the same domain and codomain as t. In addition, TensorMap supports basic Julia methods such as fill! and copyto!, as well as copy(t) to create a copy with independent data. Aside from basic + and * operations, TensorKit.jl reexports a number of efficient in-place methods from LinearAlgebra, such as axpy! (for y ← α * x + y), axpby! (for y ← α * x + β * y), lmul! and rmul! (for y ← α*y and y ← y*α, which are typically the same) and mul!, which can also be used for out-of-place scalar multiplication y ← α*x.

    For t::AbstractTensorMap{S} where InnerProductStyle(S) <: EuclideanProduct, we can compute norm(t), and for two such instances, the inner product dot(t1, t2), provided t1 and t2 have the same domain and codomain. Furthermore, there is normalize(t) and normalize!(t) to return a scaled version of t with unit norm. These operations should also exist for InnerProductStyle(S) <: HasInnerProduct, but require an interface for defining a custom inner product in these spaces. Currently, there is no concrete subtype of HasInnerProduct that is not an EuclideanProduct. In particular, CartesianSpace, ComplexSpace and GradedSpace all have InnerProductStyle(V) <: EuclideanProduct.

    With tensors that have InnerProductStyle(t) <: EuclideanProduct, there is an associated adjoint operation, given by adjoint(t) or simply t', such that domain(t') == codomain(t) and codomain(t') == domain(t). Note that for an instance t::TensorMap{S,N₁,N₂}, t' is simply stored in a wrapper called AdjointTensorMap{S,N₂,N₁}, which is another subtype of AbstractTensorMap. This should be mostly invisible to the user, as all methods should work for this type as well. It can be hard to reason about the index order of t': index i of t appears in t' at index position j = TensorKit.adjointtensorindex(t, i), where the latter method is typically not necessary and hence unexported. There is also a plural TensorKit.adjointtensorindices to convert multiple indices at once. Note that, because the adjoint interchanges domain and codomain, we have space(t', j) == space(t, i)'.

    AbstractTensorMap instances can furthermore be tested for exact (t1 == t2) or approximate (t1 ≈ t2) equality, though the latter requires that norm can be computed.

    When tensor map instances are endomorphisms, i.e. they have the same domain and codomain, there is a multiplicative identity which can be obtained as one(t) or one!(t), where the latter overwrites the contents of t. The multiplicative identity on a space V can also be obtained using id(A, V) as discussed above, such that for a general homomorphism t′, we have t′ == id(codomain(t′))*t′ == t′*id(domain(t′)). Returning to the case of endomorphisms t, we can compute the trace via tr(t) and exponentiate them using exp(t), or, if the contents of t can be destroyed in the process, exp!(t). Furthermore, there are a number of tensor factorizations for both endomorphisms and general homomorphisms, which we discuss below.

    Finally, there are a number of operations that also belong in this paragraph because of their analogy to common matrix operations. The tensor product of two TensorMap instances t1 and t2 is obtained as t1 ⊗ t2 and results in a new TensorMap with codomain(t1⊗t2) = codomain(t1) ⊗ codomain(t2) and domain(t1⊗t2) = domain(t1) ⊗ domain(t2). If we have two TensorMap{S,N,1} instances t1 and t2 with the same codomain, we can combine them in a way that is analogous to hcat, i.e. we stack them such that the new tensor catdomain(t1, t2) also has the same codomain, but has a domain which is domain(t1) ⊕ domain(t2). Similarly, if t1 and t2 are of type TensorMap{S,1,N} and have the same domain, the operation catcodomain(t1, t2) results in a new tensor with the same domain and a codomain given by codomain(t1) ⊕ codomain(t2), which is the analogue of vcat. Note that the direct sum only makes sense between ElementarySpace objects, i.e. there is no way to give a tensor product meaning to a direct sum of tensor product spaces.

    Time for some more examples:

    julia> t == t + zero(t) == t*id(domain(t)) == id(codomain(t))*ttrue
    julia> t2 = TensorMap(randn, ComplexF64, codomain(t), domain(t));
    julia> dot(t2, t)4.707367430497651 + 36.60163520410839im
    julia> tr(t2'*t)4.70736743049765 + 36.60163520410839im
    julia> dot(t2, t) ≈ dot(t', t2')true
    julia> dot(t2, t2)50.4234702402534 + 0.0im
    julia> norm(t2)^250.42347024025339
    julia> t3 = copyto!(similar(t, ComplexF64), t);ERROR: MethodError: no method matching copyto!(::TensorMap{GradedSpace{SU2Irrep, TensorKit.SortedVectorDict{SU2Irrep, Int64}}, 2, 2, SU2Irrep, TensorKit.SortedVectorDict{SU2Irrep, Matrix{ComplexF64}}, FusionTree{SU2Irrep, 2, 0, 1, Nothing}, FusionTree{SU2Irrep, 2, 0, 1, Nothing}}, ::TensorMap{GradedSpace{SU2Irrep, TensorKit.SortedVectorDict{SU2Irrep, Int64}}, 2, 2, SU2Irrep, TensorKit.SortedVectorDict{SU2Irrep, Matrix{Float64}}, FusionTree{SU2Irrep, 2, 0, 1, Nothing}, FusionTree{SU2Irrep, 2, 0, 1, Nothing}}) + 0 => [-0.79678 -1.91688; 1.28827 2.03044; … ; -1.10107 0.95279; 0.0997651 -0.… + 1 => [0.340179 0.282955 0.0387263; -1.6155 -0.00570418 0.0772881; … ; 0.66065… + 2 => [-1.74848;;]
    julia> block(t, first(blocksectors(t)))5×2 Matrix{Float64}: + -0.79678 -1.91688 + 1.28827 2.03044 + -0.00835414 -0.797736 + -1.10107 0.95279 + 0.0997651 -0.866169
    julia> fusiontrees(t)TensorKit.TensorKeyIterator{SU2Irrep, FusionTree{SU2Irrep, 2, 0, 1, Nothing}, FusionTree{SU2Irrep, 2, 0, 1, Nothing}}(TensorKit.SortedVectorDict{SU2Irrep, Dict{FusionTree{SU2Irrep, 2, 0, 1, Nothing}, UnitRange{Int64}}}(0 => Dict(FusionTree{Irrep[SU₂]}((0, 0), 0, (false, false), ()) => 1:4, FusionTree{Irrep[SU₂]}((1, 1), 0, (false, false), ()) => 5:5), 1 => Dict(FusionTree{Irrep[SU₂]}((1, 1), 1, (false, false), ()) => 5:5, FusionTree{Irrep[SU₂]}((1, 0), 1, (false, false), ()) => 1:2, FusionTree{Irrep[SU₂]}((0, 1), 1, (false, false), ()) => 3:4), 2 => Dict(FusionTree{Irrep[SU₂]}((1, 1), 2, (false, false), ()) => 1:1)), TensorKit.SortedVectorDict{SU2Irrep, Dict{FusionTree{SU2Irrep, 2, 0, 1, Nothing}, UnitRange{Int64}}}(0 => Dict(FusionTree{Irrep[SU₂]}((0, 0), 0, (false, true), ()) => 1:1, FusionTree{Irrep[SU₂]}((1, 1), 0, (false, true), ()) => 2:2), 1 => Dict(FusionTree{Irrep[SU₂]}((0, 1), 1, (false, true), ()) => 2:2, FusionTree{Irrep[SU₂]}((1, 1), 1, (false, true), ()) => 3:3, FusionTree{Irrep[SU₂]}((1, 0), 1, (false, true), ()) => 1:1), 2 => Dict(FusionTree{Irrep[SU₂]}((1, 1), 2, (false, true), ()) => 1:1)))
    julia> f1, f2 = first(fusiontrees(t))(FusionTree{Irrep[SU₂]}((0, 0), 0, (false, false), ()), FusionTree{Irrep[SU₂]}((0, 0), 0, (false, true), ()))
    julia> t[f1,f2]2×2×1×1 StridedViews.StridedView{Float64, 4, Matrix{Float64}, typeof(identity)}: +[:, :, 1, 1] = + -0.79678 -0.00835414 + 1.28827 -1.10107

    Reading and writing tensors: Dict conversion

    There are no custom or dedicated methods for reading, writing or storing TensorMaps, however, there is the possibility to convert a t::AbstractTensorMap into a Dict, simply as convert(Dict, t). The backward conversion convert(TensorMap, dict) will return a tensor that is equal to t, i.e. t == convert(TensorMap, convert(Dict, t)).

    This conversion relies on the fact that the string representation of objects such as VectorSpace, FusionTree or Sector constitutes valid code to recreate the object. Hence, we store information about the domain and codomain of the tensor, and the sector associated with each data block, as a String obtained with repr. This provides the flexibility to change the internal structure of such objects without breaking the ability to load older data files. The resulting dictionary can then be stored using any of the available Julia packages such as JLD.jl, JLD2.jl, BSON.jl, JSON.jl, ...
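    The round trip described above can be sketched as follows. This is a minimal illustration; the particular spaces are an arbitrary choice, and the storage package mentioned in the comment is only one of the options listed above:

    ```julia
    using TensorKit

    # build a small example tensor over plain complex spaces
    t = TensorMap(randn, ComplexF64, ℂ^2 ⊗ ℂ^3, ℂ^2)

    # forward conversion: structural information is stored as strings via repr
    dict = convert(Dict, t)

    # backward conversion reconstructs a tensor equal to the original
    t′ = convert(TensorMap, dict)
    @assert t == t′

    # `dict` now contains only basic types and can be written to disk
    # with e.g. JLD2.jl, BSON.jl or JSON.jl
    ```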

    Vector space and linear algebra operations

    AbstractTensorMap instances t represent linear maps, i.e. homomorphisms in a 𝕜-linear category, just like matrices. To a large extent, they follow the interface of Matrix in Julia's LinearAlgebra standard library. Many methods from LinearAlgebra are (re)exported by TensorKit.jl, and can thus be used without loading LinearAlgebra explicitly. In all of the following methods, the implementation acts directly on the underlying matrix blocks (typically using the same method) and never needs to perform any basis transforms.

    In particular, AbstractTensorMap instances can be composed, provided the domain of the first object coincides with the codomain of the second. Composing tensor maps uses the regular multiplication symbol as in t = t1*t2, which is also used for matrix multiplication. TensorKit.jl also supports (and exports) the mutating method mul!(t, t1, t2). We can then also try to invert a tensor map using inv(t), though this can only exist if the domain and codomain are isomorphic, which can e.g. be checked as fuse(codomain(t)) == fuse(domain(t)). If the inverse is composed with another tensor t2, we can use the syntax t1\t2 or t2/t1. However, this syntax also accepts instances t1 whose domain and codomain are not isomorphic, and then amounts to pinv(t1), the Moore-Penrose pseudoinverse. This, however, is only really justified as minimizing the least squares problem if InnerProductStyle(t) <: EuclideanProduct.
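    A small sketch of composition and inversion, using arbitrarily chosen spaces:

    ```julia
    using TensorKit

    V = ℂ^3
    t1 = TensorMap(randn, ComplexF64, V ⊗ V, V)  # map (V ⊗ V) ← V
    t2 = TensorMap(randn, ComplexF64, V, V ⊗ V)  # map V ← (V ⊗ V)

    t = t1 * t2            # composition: domain(t1) == codomain(t2)

    s = TensorMap(randn, ComplexF64, V, V)       # domain and codomain isomorphic
    @assert s * inv(s) ≈ id(V)

    # s \ t2 and t2 / s compose with inv(s) when it exists;
    # for non-isomorphic domain and codomain they fall back to pinv
    ```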

    AbstractTensorMap instances themselves behave as vectors (i.e. they are 𝕜-linear), so they can be multiplied by scalars and, if they live in the same space, i.e. have the same domain and codomain, they can be added to each other. There is also a zero(t), the additive identity, which produces a zero tensor with the same domain and codomain as t. In addition, TensorMap supports basic Julia methods such as fill! and copyto!, as well as copy(t) to create a copy with independent data. Aside from basic + and * operations, TensorKit.jl reexports a number of efficient in-place methods from LinearAlgebra, such as axpy! (for y ← α * x + y), axpby! (for y ← α * x + β * y), lmul! and rmul! (for y ← α*y and y ← y*α, which are typically the same) and mul!, which can also be used for out-of-place scalar multiplication y ← α*x.
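    The vector-space behaviour and the reexported in-place methods can be sketched as follows (the spaces are an illustrative choice):

    ```julia
    using TensorKit

    V = ℂ^2
    x = TensorMap(randn, Float64, V ⊗ V, V)
    y = TensorMap(randn, Float64, V ⊗ V, V)

    z = 2 * x + y        # out-of-place: scalar multiplication and addition
    axpy!(2, x, y)       # in-place: y ← 2*x + y
    @assert y ≈ z

    rmul!(y, 0.5)        # in-place scaling: y ← y * 0.5
    @assert y ≈ 0.5 * z
    ```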

    For t::AbstractTensorMap{S} where InnerProductStyle(S) <: EuclideanProduct, we can compute norm(t), and for two such instances, the inner product dot(t1, t2), provided t1 and t2 have the same domain and codomain. Furthermore, there is normalize(t) and normalize!(t) to return a scaled version of t with unit norm. These operations should also exist for InnerProductStyle(S) <: HasInnerProduct, but require an interface for defining a custom inner product in these spaces. Currently, there is no concrete subtype of HasInnerProduct that is not an EuclideanProduct. In particular, CartesianSpace, ComplexSpace and GradedSpace all have InnerProductStyle(V) <: EuclideanProduct.

    With tensors that have InnerProductStyle(t) <: EuclideanProduct, there is an associated adjoint operation, given by adjoint(t) or simply t', such that domain(t') == codomain(t) and codomain(t') == domain(t). Note that for an instance t::TensorMap{S,N₁,N₂}, t' is simply stored in a wrapper called AdjointTensorMap{S,N₂,N₁}, which is another subtype of AbstractTensorMap. This should be mostly invisible to the user, as all methods should work for this type as well. It can be hard to reason about the index order of t': index i of t appears in t' at index position j = TensorKit.adjointtensorindex(t, i), where the latter method is typically not necessary and hence unexported. There is also a plural TensorKit.adjointtensorindices to convert multiple indices at once. Note that, because the adjoint interchanges domain and codomain, we have space(t', j) == space(t, i)'.
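    A quick sketch of the adjoint bookkeeping described above (spaces chosen arbitrarily):

    ```julia
    using TensorKit

    t = TensorMap(randn, ComplexF64, ℂ^2 ⊗ ℂ^3, ℂ^4)

    # the adjoint interchanges domain and codomain
    @assert domain(t') == codomain(t)
    @assert codomain(t') == domain(t)

    # index i of t sits at position j of t', with the space dualized
    for i in 1:numind(t)
        j = TensorKit.adjointtensorindex(t, i)
        @assert space(t', j) == space(t, i)'
    end
    ```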

    AbstractTensorMap instances can furthermore be tested for exact (t1 == t2) or approximate (t1 ≈ t2) equality, though the latter requires that norm can be computed.

    When tensor map instances are endomorphisms, i.e. they have the same domain and codomain, there is a multiplicative identity which can be obtained as one(t) or one!(t), where the latter overwrites the contents of t. The multiplicative identity on a space V can also be obtained using id(A, V) as discussed above, such that for a general homomorphism t′, we have t′ == id(codomain(t′))*t′ == t′*id(domain(t′)). Returning to the case of endomorphisms t, we can compute the trace via tr(t) and exponentiate them using exp(t), or, if the contents of t can be destroyed in the process, exp!(t). Furthermore, there are a number of tensor factorizations for both endomorphisms and general homomorphisms, which we discuss below.
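    For endomorphisms, the identity, trace and exponential behave as one would expect from matrices; a minimal sketch on an arbitrarily chosen space:

    ```julia
    using TensorKit

    V = ℂ^2 ⊗ ℂ^2
    t = TensorMap(randn, ComplexF64, V, V)   # endomorphism: domain == codomain

    @assert one(t) * t ≈ t ≈ t * one(t)      # multiplicative identity
    @assert tr(one(t)) ≈ dim(V)              # trace of the identity is the dimension
    e = exp(t)                               # exponentiation acts on the matrix blocks
    ```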

Finally, there are a number of operations that also belong in this paragraph because of their analogy to common matrix operations. The tensor product of two TensorMap instances t1 and t2 is obtained as t1 ⊗ t2 and results in a new TensorMap with codomain(t1⊗t2) = codomain(t1) ⊗ codomain(t2) and domain(t1⊗t2) = domain(t1) ⊗ domain(t2). If we have two TensorMap{S,N,1} instances t1 and t2 with the same codomain, we can combine them in a way that is analogous to hcat, i.e. we stack them such that the new tensor catdomain(t1, t2) also has the same codomain, but has a domain which is domain(t1) ⊕ domain(t2). Similarly, if t1 and t2 are of type TensorMap{S,1,N} and have the same domain, the operation catcodomain(t1, t2) results in a new tensor with the same domain and a codomain given by codomain(t1) ⊕ codomain(t2), which is the analogue of vcat. Note that a direct sum only makes sense between ElementarySpace objects, i.e. there is no way to give a tensor product meaning to a direct sum of tensor product spaces.
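A small sketch of catdomain and the tensor product for plain complex spaces (no symmetries); the names below are local to this example:

```julia
using TensorKit

t1 = TensorMap(randn, ℂ^3, ℂ^2)   # a TensorMap{S,1,1} with a single domain space
t2 = TensorMap(randn, ℂ^3, ℂ^4)   # same codomain, different domain

t12 = catdomain(t1, t2)            # analogue of hcat
codomain(t12) == codomain(t1)                          # true
dim(domain(t12)) == dim(domain(t1)) + dim(domain(t2))  # true: ℂ^2 ⊕ ℂ^4 ≅ ℂ^6

tt = t1 ⊗ t2                       # tensor product of maps
codomain(tt) == codomain(t1) ⊗ codomain(t2)            # true
```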

    Time for some more examples:

julia> t == t + zero(t) == t*id(domain(t)) == id(codomain(t))*t
true
julia> t2 = TensorMap(randn, ComplexF64, codomain(t), domain(t));
julia> dot(t2, t)
8.669597850207957 - 11.088338435399876im
julia> tr(t2'*t)
8.669597850207957 - 11.088338435399876im
julia> dot(t2, t) ≈ dot(t', t2')
true
julia> dot(t2, t2)
50.002018582494216 + 0.0im
julia> norm(t2)^2
50.00201858249421
julia> t3 = copyto!(similar(t, ComplexF64), t);
ERROR: MethodError: no method matching copyto!(::TensorMap{GradedSpace{SU2Irrep, TensorKit.SortedVectorDict{SU2Irrep, Int64}}, 2, 2, SU2Irrep, TensorKit.SortedVectorDict{SU2Irrep, Matrix{ComplexF64}}, FusionTree{SU2Irrep, 2, 0, 1, Nothing}, FusionTree{SU2Irrep, 2, 0, 1, Nothing}}, ::TensorMap{GradedSpace{SU2Irrep, TensorKit.SortedVectorDict{SU2Irrep, Int64}}, 2, 2, SU2Irrep, TensorKit.SortedVectorDict{SU2Irrep, Matrix{Float64}}, FusionTree{SU2Irrep, 2, 0, 1, Nothing}, FusionTree{SU2Irrep, 2, 0, 1, Nothing}})
Closest candidates are: copyto!(!Matched::AbstractArray, ::Any) [...]

    braid(t::AbstractTensorMap{S,N₁,N₂}, levels::NTuple{N₁+N₂,Int},
          p1::NTuple{N₁′,Int}, p2::NTuple{N₂′,Int})

    and

    permute(t::AbstractTensorMap{S,N₁,N₂},
             p1::NTuple{N₁′,Int}, p2::NTuple{N₂′,Int}; copy = false)

    both of which return an instance of AbstractTensorMap{S,N₁′,N₂′}.

    In these methods, p1 and p2 specify which of the original tensor indices ranging from 1 to N₁+N₂ make up the new codomain (with N₁′ spaces) and new domain (with N₂′ spaces). Hence, (p1..., p2...) should be a valid permutation of 1:(N₁+N₂). Note that, throughout TensorKit.jl, permutations are always specified using tuples of Ints, for reasons of type stability. For braid, we also need to specify levels or depths for each of the indices of the original tensor, which determine whether indices will braid over or underneath each other (use the braiding or its inverse). We refer to the section on manipulating fusion trees for more details.

    When BraidingStyle(sectortype(t)) isa SymmetricBraiding, we can use the simpler interface of permute, which does not require the argument levels. permute accepts a keyword argument copy. When copy == true, the result will be a tensor with newly allocated data that can independently be modified from that of the input tensor t. When copy takes the default value false, permute can try to return the result in a way that it shares its data with the input tensor t, though this is only possible in specific cases (e.g. when sectortype(S) == Trivial and (p1..., p2...) = (1:(N₁+N₂)...)).
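A small sketch of permute for a tensor without symmetries, where (as illustrated further below) it reduces to permutedims on the underlying data; the names are local to this example:

```julia
using TensorKit

t = TensorMap(randn, ℂ^2 ⊗ ℂ^3, ℂ^4)        # indices 1, 2 in the codomain; index 3 in the domain
t′ = permute(t, (3, 1), (2,); copy = true)   # new codomain: (ℂ^4)' ⊗ ℂ^2, new domain: ℂ^3

space(t′, 1) == space(t, 3)                                     # true
convert(Array, t′) ≈ permutedims(convert(Array, t), (3, 1, 2))  # true
```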

    Both braid and permute come in a version where the result is stored in an already existing tensor, i.e. braid!(tdst, tsrc, levels, p1, p2) and permute!(tdst, tsrc, p1, p2).

Another operation that belongs under index manipulations is taking the transpose of a tensor, i.e. LinearAlgebra.transpose(t) and LinearAlgebra.transpose!(tdst, tsrc), both of which are reexported by TensorKit.jl. Note that transpose(t) is not simply equal to reshuffling domain and codomain with braid(t, (1:(N₁+N₂)...,), reverse(domainind(tsrc)), reverse(codomainind(tsrc))). Indeed, the graphical representation (where we draw the codomain and domain as a single object) makes clear that this would introduce an additional (inverse) twist, which is then compensated for in the transpose implementation.

    transpose

    In categorical language, the reason for this extra twist is that we use the left coevaluation $η$, but the right evaluation $\tilde{ϵ}$, when repartitioning the indices between domain and codomain.

    There are a number of other index related manipulations. We can apply a twist (or inverse twist) to one of the tensor map indices via twist(t, i; inv = false) or twist!(t, i; inv = false). Note that the latter method does not store the result in a new destination tensor, but just modifies the tensor t in place. Twisting several indices simultaneously can be obtained by using the defining property

    $θ_{V⊗W} = τ_{W,V} ∘ (θ_W ⊗ θ_V) ∘ τ_{V,W} = (θ_V ⊗ θ_W) ∘ τ_{W,V} ∘ τ_{V,W}.$

but this is currently not implemented explicitly.

For all sector types I with BraidingStyle(I) == Bosonic(), all twists are 1 and thus have no effect. Let us start with some examples, in which we illustrate that, although permute might act highly non-trivially on the fusion trees and on the corresponding data, after conversion to a regular Array (when possible), it just acts like permutedims:

julia> domain(t) → codomain(t)
(Rep[SU₂](0=>2, 1=>1) ⊗ Rep[SU₂](0=>2, 1=>1)) ← (Rep[SU₂](0=>1, 1=>1) ⊗ Rep[SU₂](0=>1, 1=>1)')
julia> ta = convert(Array, t);
julia> t′ = permute(t, (1,2,3,4));
julia> domain(t′) → codomain(t′)
(Rep[SU₂](0=>2, 1=>1) ⊗ Rep[SU₂](0=>2, 1=>1) ⊗ Rep[SU₂](0=>1, 1=>1)' ⊗ Rep[SU₂](0=>1, 1=>1)) ← ProductSpace{GradedSpace{SU2Irrep, TensorKit.SortedVectorDict{SU2Irrep, Int64}}, 0}()
julia> convert(Array, t′) ≈ ta
true
julia> t′′ = permute(t, (4,2,3),(1,));
julia> domain(t′′) → codomain(t′′)
(Rep[SU₂](0=>1, 1=>1) ⊗ Rep[SU₂](0=>2, 1=>1) ⊗ Rep[SU₂](0=>1, 1=>1)') ← Rep[SU₂](0=>2, 1=>1)'
julia> convert(Array, t′′) ≈ permutedims(ta, (4,2,3,1))
true
julia> m
TensorMap(Rep[SU₂](0=>2, 1=>1) ← Rep[SU₂](0=>1, 1=>1)):
* Data for fusiontree FusionTree{Irrep[SU₂]}((0,), 0, (false,), ()) ← FusionTree{Irrep[SU₂]}((0,), 0, (false,), ()):
 -1.7558345815622265
 -0.03797957697225863
* Data for fusiontree FusionTree{Irrep[SU₂]}((1,), 1, (false,), ()) ← FusionTree{Irrep[SU₂]}((1,), 1, (false,), ()):
 0.9608541106703924
julia> transpose(m)
TensorMap(Rep[SU₂](0=>1, 1=>1)' ← Rep[SU₂](0=>2, 1=>1)'):
* Data for fusiontree FusionTree{Irrep[SU₂]}((0,), 0, (true,), ()) ← FusionTree{Irrep[SU₂]}((0,), 0, (true,), ()):
 -1.7558345815622265  -0.03797957697225863
* Data for fusiontree FusionTree{Irrep[SU₂]}((1,), 1, (true,), ()) ← FusionTree{Irrep[SU₂]}((1,), 1, (true,), ()):
 0.9608541106703923
julia> convert(Array, transpose(t)) ≈ permutedims(ta, (4,3,2,1))
true
julia> dot(t2, t) ≈ dot(transpose(t2), transpose(t))
true
julia> transpose(transpose(t)) ≈ t
true
julia> twist(t, 3) ≈ t  # as twist acts trivially for
true
julia> BraidingStyle(sectortype(t))
Bosonic()

Note that transpose acts like one would expect on a TensorMap{S,1,1}. On a TensorMap{S,N₁,N₂}, because transpose replaces the codomain with the dual of the domain, which has its tensor product operation reversed, this in the end amounts to a complete reversal of all tensor indices when representing it as a plain multi-dimensional Array. Also, note that we have not defined the conjugation of TensorMap instances. One definition that one could think of is conj(t) = adjoint(transpose(t)). However, note that codomain(adjoint(transpose(t))) == domain(transpose(t)) == dual(codomain(t)) and similarly domain(adjoint(transpose(t))) == dual(domain(t)), where the dual of a ProductSpace is composed of the duals of the ElementarySpace instances, in reverse order of the tensor product. This might be very confusing, and as such we leave tensor conjugation undefined. However, note that we have a conjugation syntax within the context of tensor contractions.

To show the effect of twist, we now consider a type of sector I for which BraidingStyle(I) != Bosonic(). In particular, we use FibonacciAnyon. We cannot convert the resulting TensorMap to an Array, so we have to rely on indirect tests to verify our results.

julia> V1 = GradedSpace{FibonacciAnyon}(:I=>3,:τ=>2)
Vect[FibonacciAnyon](:I=>3, :τ=>2)
julia> V2 = GradedSpace{FibonacciAnyon}(:I=>2,:τ=>1)
Vect[FibonacciAnyon](:I=>2, :τ=>1)
julia> m = TensorMap(randn, Float32, V1, V2)
TensorMap(Vect[FibonacciAnyon](:I=>3, :τ=>2) ← Vect[FibonacciAnyon](:I=>2, :τ=>1)):
* Data for fusiontree FusionTree{FibonacciAnyon}((:I,), :I, (false,), ()) ← FusionTree{FibonacciAnyon}((:I,), :I, (false,), ()):
 -0.8153963f0 + 0.0f0im    1.0127101f0 + 0.0f0im
  2.5010397f0 + 0.0f0im    1.5999309f0 + 0.0f0im
 -0.23346955f0 + 0.0f0im   0.28401253f0 + 0.0f0im
* Data for fusiontree FusionTree{FibonacciAnyon}((:τ,), :τ, (false,), ()) ← FusionTree{FibonacciAnyon}((:τ,), :τ, (false,), ()):
 -1.1678278f0 + 0.0f0im
 -1.1499968f0 + 0.0f0im
julia> transpose(m)
TensorMap(Vect[FibonacciAnyon](:I=>2, :τ=>1)' ← Vect[FibonacciAnyon](:I=>3, :τ=>2)'):
* Data for fusiontree FusionTree{FibonacciAnyon}((:I,), :I, (true,), ()) ← FusionTree{FibonacciAnyon}((:I,), :I, (true,), ()):
 -0.8153963f0 + 0.0f0im   2.5010397f0 + 0.0f0im   -0.23346955f0 + 0.0f0im
  1.0127101f0 + 0.0f0im   1.5999309f0 + 0.0f0im    0.28401253f0 + 0.0f0im
* Data for fusiontree FusionTree{FibonacciAnyon}((:τ,), :τ, (true,), ()) ← FusionTree{FibonacciAnyon}((:τ,), :τ, (true,), ()):
 -1.1678278f0 + 0.0f0im  -1.1499968f0 + 0.0f0im
julia> twist(braid(m, (1,2), (2,), (1,)), 1)
ERROR: AssertionError: length(levels) == numind(t)
julia> t1 = TensorMap(randn, V1*V2', V2*V1);
julia> t2 = TensorMap(randn, ComplexF64, V1*V2', V2*V1);
julia> dot(t1, t2) ≈ dot(transpose(t1), transpose(t2))
true
julia> transpose(transpose(t1)) ≈ t1
true

A final operation that one might expect in this section is to fuse or join indices, and its inverse, to split a given index into two or more indices. For a plain tensor (i.e. with sectortype(t) == Trivial), this would amount to the equivalent of reshape on the multidimensional data. However, this represents only one possibility, as there is no canonically unique way to embed the tensor product of two spaces V₁ ⊗ V₂ in a new space V = fuse(V₁⊗V₂). Such a mapping can always be accompanied by a basis transform. However, one particular choice is provided by the function isomorphism, or for EuclideanProduct spaces, unitary. Hence, we can join or fuse two indices of a tensor by first constructing u = unitary(fuse(space(t, i) ⊗ space(t, j)), space(t, i) ⊗ space(t, j)) and then contracting this map with indices i and j of t, as explained in the section on contracting tensors. Note, however, that a typical algorithm is not expected to often need to fuse and split indices, as e.g. tensor factorizations can easily be applied without needing to reshape or fuse indices first, as explained in the next section.
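The fusing recipe above can be sketched as follows for a plain tensor. This is a non-authoritative sketch that assumes the @tensor macro (reexported from TensorOperations.jl) as introduced later in this manual; all names are local to the example:

```julia
using TensorKit, LinearAlgebra  # @tensor is reexported from TensorOperations.jl

t = TensorMap(randn, ℂ^2 ⊗ ℂ^3, ℂ^4)
V12 = fuse(space(t, 1) ⊗ space(t, 2))        # ℂ^6
u = unitary(V12, space(t, 1) ⊗ space(t, 2))  # unitary implementing the fusion

# contract u with indices 1 and 2 of t, fusing them into a single index a
@tensor tfused[a, b] := u[a, i, j] * t[i, j, b]

norm(tfused) ≈ norm(t)   # true, since u is unitary
```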

    Tensor factorizations

    Eigenvalue decomposition

As tensors are linear maps, they have various kinds of factorizations. Endomorphisms, i.e. tensor maps t with codomain(t) == domain(t), have an eigenvalue decomposition. For this, we overload both LinearAlgebra.eigen(t; kwargs...) and LinearAlgebra.eigen!(t; kwargs...), where the latter destroys t in the process. The keyword arguments are the same as those accepted by LinearAlgebra.eigen(!) for matrices. The result is returned as D, V = eigen(t), such that t*V ≈ V*D. For a given t::TensorMap{S,N,N}, V is a TensorMap{S,N,1}, whose codomain corresponds to that of t, but whose domain is a single space S (or more correctly a ProductSpace{S,1}) that corresponds to fuse(codomain(t)). The eigenvalues are encoded in D, a TensorMap{S,1,1}, whose domain and codomain correspond to the domain of V. Indeed, we cannot reasonably associate a tensor product structure with the different eigenvalues. Note that D stores the eigenvalues on the diagonal of a (collection of) DenseMatrix instance(s), as there is currently no dedicated DiagonalTensorMap or diagonal storage support.

We also define LinearAlgebra.ishermitian(t), which can only return true for instances of AbstractEuclideanTensorMap. In all other cases, as the inner product is not defined, there is no notion of hermiticity (i.e. we are not working in a †-category). For instances of EuclideanTensorMap, we also define and export the routines eigh and eigh!, which compute the eigenvalue decomposition under the guarantee (not checked) that the map is hermitian. Hence, eigenvalues will be real and V will be unitary with eltype(V) == eltype(t). We also define and export eig and eig!, which similarly assume that the TensorMap is not hermitian (hence this does not require EuclideanTensorMap), and always return complex-valued eigenvalues and eigenvectors. Like for matrices, LinearAlgebra.eigen is type unstable and checks hermiticity at run time, then falls back to either eig or eigh.

    Orthogonal factorizations

Other factorizations that are provided by TensorKit.jl are orthogonal or unitary in nature, and thus always require an AbstractEuclideanTensorMap. However, they don't require equal domain and codomain. Let us first discuss the singular value decomposition, for which we define and export the methods tsvd and tsvd! (where, as always, the latter destroys the input).

    U, Σ, Vʰ, ϵ = tsvd(t; trunc = notrunc(), p::Real = 2,
                             alg::OrthogonalFactorizationAlgorithm = SDD())

This computes a (possibly truncated) singular value decomposition of t::TensorMap{S,N₁,N₂} (with InnerProductStyle(t) <: EuclideanProduct), such that norm(t - U*Σ*Vʰ) ≈ ϵ, where U::TensorMap{S,N₁,1}, Σ::TensorMap{S,1,1}, Vʰ::TensorMap{S,1,N₂} and ϵ::Real. U is an isometry, i.e. U'*U approximates the identity, whereas U*U' is an idempotent (squares to itself). The same holds for adjoint(Vʰ). The domain of U equals the domain and codomain of Σ and the codomain of Vʰ. In the case of trunc = notrunc() (the default value, see below), this space is given by min(fuse(codomain(t)), fuse(domain(t))). The singular values are contained in Σ and are stored on the diagonal of a (collection of) DenseMatrix instance(s), similar to the eigenvalues before.

    The keyword argument trunc provides a way to control the truncation, and is connected to the keyword argument p. The default value notrunc() implies no truncation, and thus ϵ = 0. Other valid options are

Furthermore, the alg keyword can be either SVD() or SDD() (default), which corresponds to two different algorithms in LAPACK to compute singular value decompositions. The default value SDD() uses a divide-and-conquer algorithm and is typically the fastest, but can lose some accuracy. The SVD() method uses a QR-iteration scheme and can be more accurate, but is typically slower. Since Julia 1.3, these two algorithms are also available in the LinearAlgebra standard library, where they are specified as LinearAlgebra.DivideAndConquer() and LinearAlgebra.QRIteration().

    Note that we defined the new method tsvd (truncated or tensor singular value decomposition), rather than overloading LinearAlgebra.svd. We (will) also support LinearAlgebra.svd(t) as alternative for tsvd(t; trunc = notrunc()), but note that the return values are then given by U, Σ, V = svd(t) with V = adjoint(Vʰ).

    We also define the following pair of orthogonal factorization algorithms, which are useful when one is not interested in truncating a tensor or knowing the singular values, but only in its image or coimage.

    Furthermore, we can compute an orthonormal basis for the orthogonal complement of the image and of the co-image (i.e. the kernel) with the following methods:

    Note that the methods leftorth, rightorth, leftnull and rightnull also come in a form with exclamation mark, i.e. leftorth!, rightorth!, leftnull! and rightnull!, which destroy the input tensor t.
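As a brief sketch of leftorth and leftnull for a plain tensor (assuming Euclidean complex spaces; the names are local to this example):

```julia
using TensorKit, LinearAlgebra

t = TensorMap(randn, ℂ^4 ⊗ ℂ^2, ℂ^3)

Q, R = leftorth(t)             # QR-like factorization with Q an isometry
Q * R ≈ t                      # true
Q' * Q ≈ id(domain(Q))         # true

N = leftnull(t)                # orthonormal basis for the complement of the image
norm(N' * t) < 1e-10           # true: N is orthogonal to the image of t
N' * N ≈ id(domain(N))         # true
```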

Factorizations for custom index bipartitions

    Finally, note that each of the factorizations take a single argument, the tensor map t, and a number of keyword arguments. They perform the factorization according to the given codomain and domain of the tensor map. In many cases, we want to perform the factorization according to a different bipartition of the indices. When BraidingStyle(sectortype(t)) isa SymmetricBraiding, we can immediately specify an alternative bipartition of the indices of t in all of these methods, in the form

factorize(t::AbstractTensorMap, pleft::NTuple{N₁′,Int}, pright::NTuple{N₂′,Int}; kwargs...)

    where pleft will be the indices in the codomain of the new tensor map, and pright the indices of the domain. Here, factorize is any of the methods LinearAlgebra.eigen, eig, eigh, tsvd, LinearAlgebra.svd, leftorth, rightorth, leftnull and rightnull. This signature does not allow for the exclamation mark, because it amounts to

    factorize!(permute(t, pleft, pright; copy = true); kwargs...)

    where permute was introduced and discussed in the previous section. When the braiding is not symmetric, the user should manually apply braid to bring the tensor map in proper form before performing the factorization.
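As a sketch of this bipartition syntax for a plain tensor (symmetric braiding; names local to the example), factorizing across a custom bipartition agrees with first permuting explicitly:

```julia
using TensorKit

t = TensorMap(randn, ℂ^2 ⊗ ℂ^3, ℂ^4 ⊗ ℂ^2)

# SVD across the bipartition (1,3) ← (2,4) instead of the native (1,2) ← (3,4)
U, Σ, Vʰ, ϵ = tsvd(t, (1, 3), (2, 4))

# equivalent to factorizing an explicitly permuted copy
U′, Σ′, Vʰ′, ϵ′ = tsvd!(permute(t, (1, 3), (2, 4); copy = true))
Σ ≈ Σ′   # true
```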

    Some examples to conclude this section

julia> V1 = SU₂Space(0=>2,1/2=>1)
Rep[SU₂](0=>2, 1/2=>1)
julia> V2 = SU₂Space(0=>1,1/2=>1,1=>1)
Rep[SU₂](0=>1, 1/2=>1, 1=>1)
julia> t = TensorMap(randn, V1 ⊗ V1, V2);
julia> U, S, W = tsvd(t);
julia> t ≈ U * S * W
true
julia> D, V = eigh(t'*t);
julia> D ≈ S*S
true
julia> U'*U ≈ id(domain(U))
true
julia> S
TensorMap(Rep[SU₂](0=>1, 1/2=>1, 1=>1) ← Rep[SU₂](0=>1, 1/2=>1, 1=>1)):
* Data for fusiontree FusionTree{Irrep[SU₂]}((0,), 0, (false,), ()) ← FusionTree{Irrep[SU₂]}((0,), 0, (false,), ()):
 3.036281180421673
* Data for fusiontree FusionTree{Irrep[SU₂]}((1/2,), 1/2, (false,), ()) ← FusionTree{Irrep[SU₂]}((1/2,), 1/2, (false,), ()):
 1.7401751352978876
* Data for fusiontree FusionTree{Irrep[SU₂]}((1,), 1, (false,), ()) ← FusionTree{Irrep[SU₂]}((1,), 1, (false,), ()):
 2.013202384544521
julia> Q, R = leftorth(t; alg = Polar());
julia> isposdef(R)
true
julia> Q ≈ U*W
true
julia> R ≈ W'*S*W
true
julia> U2, S2, W2, ε = tsvd(t; trunc = truncspace(V1));
julia> W2*W2' ≈ id(codomain(W2))
true
julia> S2
TensorMap(Rep[SU₂](0=>1, 1/2=>1) ← Rep[SU₂](0=>1, 1/2=>1)):
* Data for fusiontree FusionTree{Irrep[SU₂]}((0,), 0, (false,), ()) ← FusionTree{Irrep[SU₂]}((0,), 0, (false,), ()):
 3.036281180421673
* Data for fusiontree FusionTree{Irrep[SU₂]}((1/2,), 1/2, (false,), ()) ← FusionTree{Irrep[SU₂]}((1/2,), 1/2, (false,), ()):
 1.7401751352978876
julia> ε ≈ norm(block(S, Irrep[SU₂](1)))*sqrt(dim(Irrep[SU₂](1)))
true
julia> L, Q = rightorth(t, (1,), (2,3));
julia> codomain(L), domain(L), domain(Q)
(ProductSpace(Rep[SU₂](0=>2, 1/2=>1)), ProductSpace(Rep[SU₂](0=>2, 1/2=>1)), (Rep[SU₂](0=>2, 1/2=>1)' ⊗ Rep[SU₂](0=>1, 1/2=>1, 1=>1)))
julia> Q*Q'
TensorMap(Rep[SU₂](0=>2, 1/2=>1) ← Rep[SU₂](0=>2, 1/2=>1)):
* Data for fusiontree FusionTree{Irrep[SU₂]}((0,), 0, (false,), ()) ← FusionTree{Irrep[SU₂]}((0,), 0, (false,), ()):
 1.0000000000000002     8.203534838953819e-17
 8.203534838953819e-17  1.0
* Data for fusiontree FusionTree{Irrep[SU₂]}((1/2,), 1/2, (false,), ()) ← FusionTree{Irrep[SU₂]}((1/2,), 1/2, (false,), ()):
 1.0000000000000004
julia> P = Q'*Q;
julia> P ≈ P*P
true
julia> t′ = permute(t, (1,), (2,3));
julia> t′ ≈ t′ * P
true

    Bosonic tensor contractions and tensor networks

One of the most important operations with tensor maps is to compose them, more generally known as contracting them. As mentioned in the section on category theory, a typical composition of maps in a ribbon category can graphically be represented as a planar arrangement of the morphisms (i.e. tensor maps, boxes with lines emanating from top and bottom, corresponding to source and target, i.e. domain and codomain), where the lines connecting the sources and targets of the different morphisms should be thought of as ribbons, which can braid over or underneath each other, and which can twist. Technically, we can embed this diagram in $ℝ × [0,1]$ and attach all the unconnected line endings corresponding to objects in the source at some position $(x,0)$ for $x∈ℝ$, and all line endings corresponding to objects in the target at some position $(x,1)$. The resulting morphism is then invariant under what is known as framed three-dimensional isotopy, i.e. three-dimensional rearrangements of the morphism that respect the rules of boxes connected by ribbons whose open endings are kept fixed. Such a two-dimensional diagram cannot easily be encoded in a single line of code.

    However, things simplify when the braiding is symmetric (such that over- and under- crossings become equivalent, i.e. just crossings), and when twists, i.e. self-crossings in this case, are trivial. This amounts to BraidingStyle(I) == Bosonic() in the language of TensorKit.jl, and is true for any subcategory of $\mathbf{Vect}$, i.e. ordinary tensors, possibly with some symmetry constraint. The case of $\mathbf{SVect}$ and its subcategories, and more general categories, are discussed below.

In the case of trivial twists, we can deform the diagram such that we first combine every morphism with a number of coevaluations $η$ so as to represent it as a tensor, i.e. with a trivial domain. We can then rearrange the morphisms to all be lined up horizontally, where the original morphism compositions are now being performed by evaluations $ϵ$. This process will generate a number of crossings and twists, where the latter can be omitted because they act trivially. Similarly, double crossings can also be omitted. As a consequence, the diagram, or the morphism it represents, is completely specified by the tensors it is composed of, which indices between the different tensors are connected via the evaluation $ϵ$, and which indices make up the source and target of the resulting morphism. If we also compose the resulting morphism with coevaluations so that it has a trivial domain, we just have one type of unconnected lines, henceforth called open indices. We sketch such a rearrangement in the following picture

    tensor unitary

Hence, we can now specify such a tensor diagram, henceforth called a tensor contraction or also tensor network, using a one-dimensional syntax that mimics abstract index notation and specifies which indices are connected by the evaluation map using Einstein's summation convention. Indeed, for BraidingStyle(I) == Bosonic(), such a tensor contraction can take the same format as if all tensors were just multi-dimensional arrays. For this, we rely on the interface provided by the package TensorOperations.jl.

    The above picture would be encoded as

    @tensor E[a,b,c,d,e] := A[v,w,d,x]*B[y,z,c,x]*C[v,e,y,b]*D[a,w,z]

    or

    @tensor E[:] := A[1,2,-4,3]*B[4,5,-3,3]*C[1,-5,4,-2]*D[-1,2,5]

    where the latter syntax is known as NCON-style, and labels the unconnected or outgoing indices with negative integers, and the contracted indices with positive integers.

A number of remarks are in order. TensorOperations.jl accepts both integers and any valid variable name as a dummy label for indices, and everything between [ ] is not resolved in the current context but interpreted as a dummy label. Here, we label the indices of a TensorMap, like A::TensorMap{S,N₁,N₂}, in a linear fashion, where the first position corresponds to the first space in codomain(A), and so forth, up to position N₁. Index N₁+1 then corresponds to the first space in domain(A). However, because we have applied the coevaluation $η$, it actually corresponds to the corresponding dual space, in accordance with the interface of space(A, i) that we introduced above, and as indicated by the dotted box around $A$ in the above picture. The same holds for the other tensor maps. Note that our convention also requires that we braid indices that we brought from the domain to the codomain, and so this is only unambiguous for a symmetric braiding, where there is a unique way to permute the indices.

With the current syntax, we create a new object E because we use the definition operator :=. Furthermore, with the current syntax, it will be a Tensor, i.e. it will have a trivial domain, and correspond to the dotted box in the picture above, rather than the actual morphism E. We can also directly define E with the correct codomain and domain by instead using

    @tensor E[a b c;d e] := A[v,w,d,x]*B[y,z,c,x]*C[v,e,y,b]*D[a,w,z]

    or

    @tensor E[(a,b,c);(d,e)] := A[v,w,d,x]*B[y,z,c,x]*C[v,e,y,b]*D[a,w,z]

where the latter syntax can also be used when the codomain is empty. When using the assignment operator =, the TensorMap E is assumed to exist and the contents will be written to the currently allocated memory. Note that for existing tensors, both on the left hand side and the right hand side, trying to specify the indices in the domain and the codomain separately using the above syntax has no effect, as the bipartition of indices is already fixed by the existing object. Hence, if E has been created by the previous line of code, all of the following lines are now equivalent

    @tensor E[(a,b,c);(d,e)] = A[v,w,d,x]*B[y,z,c,x]*C[v,e,y,b]*D[a,w,z]
    + 1.0000000000000004
    julia> P = Q'*Q;
    julia> P ≈ P*Ptrue
    julia> t′ = permute(t, (1,), (2,3));
    julia> t′ ≈ t′ * Ptrue

    Bosonic tensor contractions and tensor networks

    One of the most important operation with tensor maps is to compose them, more generally known as contracting them. As mentioned in the section on category theory, a typical composition of maps in a ribbon category can graphically be represented as a planar arrangement of the morphisms (i.e. tensor maps, boxes with lines eminating from top and bottom, corresponding to source and target, i.e. domain and codomain), where the lines connecting the source and targets of the different morphisms should be thought of as ribbons, that can braid over or underneath each other, and that can twist. Technically, we can embed this diagram in $ℝ × [0,1]$ and attach all the unconnected line endings corresponding objects in the source at some position $(x,0)$ for $x∈ℝ$, and all line endings corresponding to objects in the target at some position $(x,1)$. The resulting morphism is then invariant under what is known as framed three-dimensional isotopy, i.e. three-dimensional rearrangements of the morphism that respect the rules of boxes connected by ribbons whose open endings are kept fixed. Such a two-dimensional diagram cannot easily be encoded in a single line of code.

However, things simplify when the braiding is symmetric (such that over- and under-crossings become equivalent, i.e. just crossings), and when twists, i.e. self-crossings in this case, are trivial. This amounts to BraidingStyle(I) == Bosonic() in the language of TensorKit.jl, and is true for any subcategory of $\mathbf{Vect}$, i.e. ordinary tensors, possibly with some symmetry constraint. The case of $\mathbf{SVect}$ and its subcategories, and more general categories, is discussed below.

In the case of trivial twists, we can deform the diagram such that we first combine every morphism with a number of coevaluations $η$ so as to represent it as a tensor, i.e. with a trivial domain. We can then rearrange the morphisms to all be lined up horizontally, where the original morphism compositions are now being performed by evaluations $ϵ$. This process will generate a number of crossings and twists, where the latter can be omitted because they act trivially. Similarly, double crossings can also be omitted. As a consequence, the diagram, or the morphism it represents, is completely specified by the tensors it is composed of, which indices of the different tensors are connected via the evaluation $ϵ$, and which indices make up the source and target of the resulting morphism. If we also compose the resulting morphism with coevaluations so that it has a trivial domain, we have just one type of unconnected lines, henceforth called open indices. We sketch such a rearrangement in the following picture

    tensor unitary

Hence, we can now specify such a tensor diagram, henceforth called a tensor contraction or also tensor network, using a one-dimensional syntax that mimics abstract index notation and specifies which indices are connected by the evaluation map using Einstein's summation convention. Indeed, for BraidingStyle(I) == Bosonic(), such a tensor contraction can take the same format as if all tensors were just multi-dimensional arrays. For this, we rely on the interface provided by the package TensorOperations.jl.

    The above picture would be encoded as

    @tensor E[a,b,c,d,e] := A[v,w,d,x]*B[y,z,c,x]*C[v,e,y,b]*D[a,w,z]

    or

    @tensor E[:] := A[1,2,-4,3]*B[4,5,-3,3]*C[1,-5,4,-2]*D[-1,2,5]

    where the latter syntax is known as NCON-style, and labels the unconnected or outgoing indices with negative integers, and the contracted indices with positive integers.
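To make the summation explicit, here is a toy, pure-Base-Julia version of the same network with plain arrays in place of TensorMaps, where we hypothetically let every index run over 1:2 (the @tensor macro itself uses far more efficient pairwise contractions instead of naive nested sums):

```julia
# A toy, plain-array sketch of the network above; all indices run over 1:2.
dims = 2
A = randn(dims, dims, dims, dims)   # indices (v, w, d, x)
B = randn(dims, dims, dims, dims)   # indices (y, z, c, x)
C = randn(dims, dims, dims, dims)   # indices (v, e, y, b)
D = randn(dims, dims, dims)         # indices (a, w, z)

# E[a,b,c,d,e] = Σ_{v,w,x,y,z} A[v,w,d,x] * B[y,z,c,x] * C[v,e,y,b] * D[a,w,z]
E = [sum(A[v,w,d,x] * B[y,z,c,x] * C[v,e,y,b] * D[a,w,z]
         for v in 1:dims, w in 1:dims, x in 1:dims, y in 1:dims, z in 1:dims)
     for a in 1:dims, b in 1:dims, c in 1:dims, d in 1:dims, e in 1:dims]

size(E) == (2, 2, 2, 2, 2)  # true: the five open indices (a, b, c, d, e)
```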

A number of remarks are in order. TensorOperations.jl accepts both integers and any valid variable name as dummy label for indices, and everything in between [ ] is not resolved in the current context but interpreted as a dummy label. Here, we label the indices of a TensorMap, like A::TensorMap{S,N₁,N₂}, in a linear fashion, where the first position corresponds to the first space in codomain(A), and so forth, up to position N₁. Index N₁+1 then corresponds to the first space in domain(A). However, because we have applied the coevaluation $η$, it actually corresponds to the dual of that space, in accordance with the interface of space(A, i) that we introduced above, and as indicated by the dotted box around $A$ in the above picture. The same holds for the other tensor maps. Note that our convention also requires that we braid indices that we brought from the domain to the codomain, and so this is only unambiguous for a symmetric braiding, where there is a unique way to permute the indices.

With the current syntax, we create a new object E because we use the definition operator :=. Furthermore, with the current syntax, it will be a Tensor, i.e. it will have a trivial domain, and correspond to the dotted box in the picture above, rather than the actual morphism E. We can also directly define E with the correct codomain and domain by instead using

    @tensor E[a b c;d e] := A[v,w,d,x]*B[y,z,c,x]*C[v,e,y,b]*D[a,w,z]

    or

    @tensor E[(a,b,c);(d,e)] := A[v,w,d,x]*B[y,z,c,x]*C[v,e,y,b]*D[a,w,z]

where the latter syntax can also be used when the codomain is empty. When using the assignment operator =, the TensorMap E is assumed to exist and the contents will be written to the currently allocated memory. Note that for existing tensors, both on the left hand side and right hand side, trying to specify the indices in the domain and the codomain separately using the above syntax has no effect, as the bipartition of indices is already fixed by the existing object. Hence, if E has been created by the previous line of code, all of the following lines are now equivalent

    @tensor E[(a,b,c);(d,e)] = A[v,w,d,x]*B[y,z,c,x]*C[v,e,y,b]*D[a,w,z]
     @tensor E[a,b,c,d,e] = A[v w d;x]*B[(y,z,c);(x,)]*C[v e y; b]*D[a,w,z]
     @tensor E[a b; c d e] = A[v; w d x]*B[y,z,c,x]*C[v,e,y,b]*D[a w;z]

    and none of those will or can change the partition of the indices of E into its codomain and its domain.

Two final remarks are in order. Firstly, the order of the tensors appearing on the right hand side is irrelevant, as we can reorder them by using the allowed moves of the Penrose graphical calculus, which yields some crossings and a twist. As the latter is trivial, it can be omitted, and we just use the same rules to evaluate the newly ordered tensor network. For the particular case of matrix-matrix multiplication, which also captures more general settings by appropriately combining spaces into a single line, we indeed find

    tensor contraction reorder

or thus, the following two lines of code yield the same result

    @tensor C[i,j] := B[i,k]*A[k,j]
     @tensor C[i,j] := A[k,j]*B[i,k]
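With plain arrays standing in for the TensorMaps, one can check directly that both index expressions denote the same sum, and hence the same matrix product (a toy check, independent of TensorKit):

```julia
# Both orderings of the factors spell out the same sum, i.e. B*A.
A = randn(4, 5)
B = randn(3, 4)
C1 = [sum(B[i,k] * A[k,j] for k in 1:4) for i in 1:3, j in 1:5]  # B[i,k]*A[k,j]
C2 = [sum(A[k,j] * B[i,k] for k in 1:4) for i in 1:3, j in 1:5]  # A[k,j]*B[i,k]
C1 ≈ C2 ≈ B * A  # true
```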

Reordering of tensors can be used internally by the @tensor macro to evaluate the contraction in a more efficient manner. In particular, the NCON-style of specifying the contraction gives the user control over the order, and there are other macros, such as @tensoropt, that try to automate this process. There is also an @ncon macro and an ncon function, and we recommend reading the manual of TensorOperations.jl to learn more about the possibilities and how they work.

A final remark involves the use of adjoints of tensors. The current framework is such that the user should not be too worried about the actual bipartition into codomain and domain of a given TensorMap instance. Indeed, for factorizations one just specifies the requested bipartition via the factorize(t, pleft, pright) interface, and for tensor contractions the @tensor macro figures out the correct manipulations automatically. However, when wanting to use the adjoint of an instance t::TensorMap{S,N₁,N₂}, the resulting adjoint(t) is an AbstractTensorMap{S,N₂,N₁} and one needs to know the values of N₁ and N₂ to know exactly where the ith index of t will end up in adjoint(t), and hence to know and understand the index order of t'. Within the @tensor macro, one can instead use conj() on the whole index expression so as to be able to use the original index ordering of t. Indeed, for matrices, or thus TensorMap{S,1,1} instances, this yields exactly the equivalence one expects, namely equivalence between the following two expressions.

    @tensor C[i,j] := B'[i,k]*A[k,j]
@tensor C[i,j] := conj(B[k,i])*A[k,j]

    For e.g. an instance A::TensorMap{S,3,2}, the following two syntaxes have the same effect within an @tensor expression: conj(A[a,b,c,d,e]) and A'[d,e,a,b,c].
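For ordinary complex matrices this equivalence can be checked directly with Base Julia (a toy example, not using TensorKit):

```julia
# Adjoint indexing is conjugation plus index reversal, which is exactly
# the equivalence between the two expressions above.
B = randn(ComplexF64, 3, 3)
all(B'[i,k] == conj(B[k,i]) for i in 1:3, k in 1:3)  # true
```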

    Some examples:

    Fermionic tensor contractions

    TODO

    Anyonic tensor contractions

    TODO



    Tutorial

Before discussing at length all aspects of this package, both its usage and implementation, we start with a short tutorial to sketch the main capabilities. To this end, we begin by loading TensorKit.jl

    julia> using TensorKit

    Cartesian tensors

    The most important objects in TensorKit.jl are tensors, which we now create with random (normally distributed) entries in the following manner

julia> A = Tensor(randn, ℝ^3 ⊗ ℝ^2 ⊗ ℝ^4)TensorMap((ℝ^3 ⊗ ℝ^2 ⊗ ℝ^4) ← ProductSpace{CartesianSpace, 0}()):
[:, :, 1] =
 -0.9337910682247635     0.7312571071025481
  0.08975011073521032   -0.8599073963298248
  0.005316931776289221  -0.062290318525665696

[:, :, 2] =
 -0.06535975764489176  -0.8462748628662249
 -0.9153504276007219    0.017820304617742144
 -0.19096760861122627  -1.041919932486446

[:, :, 3] =
  0.11052773377033744  1.087295076284188
  0.5166007841445812   2.1755517543521914
 -0.15632504197492023  1.7147369662797365

[:, :, 4] =
 0.2075726537719507   0.7369960358542204
 1.271860012778008    1.3314353485639887
 1.0346676775626442  -2.6210298948544275

Note that we entered the tensor size not as plain dimensions, but by specifying the vector space associated with these tensor indices, in this case ℝ^n, which can be obtained by typing \bbR+TAB. The tensor then lives in the tensor product of the different spaces, which we can obtain by typing ⊗ (i.e. \otimes+TAB), although for simplicity the usual multiplication sign * also does the job. Note also that A is printed as an instance of a parametric type TensorMap, which we will discuss below and which contains Tensor as a special case.

    Briefly sidetracking into the nature of ℝ^n:

    julia> V = ℝ^3ℝ^3
    julia> typeof(V)CartesianSpace
    julia> V == CartesianSpace(3)true
    julia> supertype(CartesianSpace)ElementarySpace
    julia> supertype(ElementarySpace)VectorSpace

i.e. ℝ^n can also be created without Unicode using the longer syntax CartesianSpace(n). It is a subtype of ElementarySpace{ℝ}, with a standard (Euclidean) inner product over the real numbers. Furthermore,

    julia> W = ℝ^3 ⊗ ℝ^2 ⊗ ℝ^4(ℝ^3 ⊗ ℝ^2 ⊗ ℝ^4)
    julia> typeof(W)ProductSpace{CartesianSpace, 3}
    julia> supertype(ProductSpace)CompositeSpace
    julia> supertype(CompositeSpace)VectorSpace

    i.e. the tensor product of a number of CartesianSpaces is some generic parametric ProductSpace type, specifically ProductSpace{CartesianSpace,N} for the tensor product of N instances of CartesianSpace.

Tensors are themselves vectors (but not Vectors), so we can compute linear combinations, provided they live in the same space.

    julia> B = Tensor(randn, ℝ^3 * ℝ^2 * ℝ^4);
julia> C = 0.5*A + 2.5*BTensorMap((ℝ^3 ⊗ ℝ^2 ⊗ ℝ^4) ← ProductSpace{CartesianSpace, 0}()):
[:, :, 1] =
  0.9039747818788164   -0.05102349748879037
  0.09598505592954483  -4.0612902142084994
 -5.3539859502641605   -5.7005468039484635

[:, :, 2] =
  0.3273891662545269  4.587150095045647
 -2.109267740268719   1.6888566678056043
 -1.809245442808028   1.367675196333943

[:, :, 3] =
  1.877514754480465     4.861351043344545
  1.2283840218210829    2.044294772229089
 -0.00107869796741987  -2.5731431411295302

[:, :, 4] =
 -5.185218994022165   0.673594716620794
  1.9406287551807755  0.9109586574721811
  2.4163851241313283  -3.0777869292273055

Given that they behave as vectors, they also have a scalar product and norm, which they inherit from the Euclidean inner product on the individual ℝ^n spaces:

    julia> scalarBA = dot(B,A)2.7192758502836525
    julia> scalarAA = dot(A,A)25.914831272069257
    julia> normA² = norm(A)^225.91483127206926

    If two tensors live on different spaces, these operations have no meaning and are thus not allowed

    julia> B′ = Tensor(randn, ℝ^4 * ℝ^2 * ℝ^3);
    julia> space(B′) == space(A)false
    julia> C′ = 0.5*A + 2.5*B′ERROR: SpaceMismatch("(ℝ^3 ⊗ ℝ^2 ⊗ ℝ^4) ← ProductSpace{CartesianSpace, 0}() ≠ (ℝ^4 ⊗ ℝ^2 ⊗ ℝ^3) ← ProductSpace{CartesianSpace, 0}()")
    julia> scalarBA′ = dot(B′,A)ERROR: SpaceMismatch("(ℝ^4 ⊗ ℝ^2 ⊗ ℝ^3) ← ProductSpace{CartesianSpace, 0}() ≠ (ℝ^3 ⊗ ℝ^2 ⊗ ℝ^4) ← ProductSpace{CartesianSpace, 0}()")

However, in this particular case, we can reorder the indices of B′ to match the space of A, using the routine permute (we deliberately choose not to overload permutedims from Julia Base, for reasons that become clear below):

    julia> space(permute(B′,(3,2,1))) == space(A)true
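The plain-array analogue of this reordering uses permutedims from Julia Base, with the sizes playing the role of the spaces (a toy comparison, independent of TensorKit):

```julia
# permutedims reorders the indices, after which the sizes match again.
A  = randn(3, 2, 4)
B′ = randn(4, 2, 3)
size(permutedims(B′, (3, 2, 1))) == size(A)  # true
```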

We can contract two tensors using the Einstein summation convention, via the interface from TensorOperations.jl. TensorKit.jl reexports the @tensor macro

    julia> @tensor D[a,b,c,d] := A[a,b,e]*B[d,c,e]TensorMap((ℝ^3 ⊗ ℝ^2 ⊗ ℝ^2 ⊗ ℝ^3) ← ProductSpace{CartesianSpace, 0}()):
[:, :, 1, 1] =
 -0.88003363663867   -0.4875641030178916
 -2.396820691722583  -1.6999890046741921
 -2.32747959960867    6.610706782065022

[:, :, 2, 1] =
  0.24086020011361714  0.1498850626591955
 -0.8019976359573299   4.098868398086565
 -0.5273237707730981   0.5638772757423685

[:, :, 1, 2] =
 0.1753050208239493  1.380560300022431
 1.4707652738635522  1.5096829706004853
 0.6055814735030067  -0.015429059436456234

[:, :, 2, 2] =
  1.3750943579055317   -1.1425508901033594
 -0.4230408017962221    2.224011664598274
 -0.09436315002804743  -0.21071131673660676

[:, :, 1, 3] =
 2.20668350881224    -0.3933459094023428
 1.41723278982338     2.9087369019888833
 0.9006515600160763  -1.0904088849329243

[:, :, 2, 3] =
  1.7698367483112827  -4.310618220790501
 -2.5030079958115357  -1.9629790034646797
 -0.6732305142659513  -1.1460040742147755
    julia> @tensor d = A[a,b,c]*A[a,b,c]25.91483127206926
    julia> d ≈ scalarAA ≈ normA²true

We hope that the index convention is clear. The := is used to create a new tensor D; without the colon, the result would be written into an existing tensor D, which in this case would yield an error as no tensor D exists. If the contraction yields a scalar, regular assignment with = can be used.

    Finally, we can factorize a tensor, creating a bipartition of a subset of its indices and its complement. With a plain Julia Array, one would apply permutedims and reshape to cast the array into a matrix before applying e.g. the singular value decomposition. With TensorKit.jl, one just specifies which indices go to the left (rows) and right (columns)

    julia> U, S, Vd = tsvd(A, (1,3), (2,));
    julia> @tensor A′[a,b,c] := U[a,c,d]*S[d,e]*Vd[e,b];
    julia> A ≈ A′true
julia> UTensorMap((ℝ^3 ⊗ ℝ^4) ← ℝ^2):
[:, :, 1] =
  0.16414378274671185   -0.18367882485527215  …   0.1591346071349083
 -0.18749380178300293    0.008893485133626545     0.28257108612986304
 -0.013575258310157655  -0.2255366275949465      -0.5756606672010504

[:, :, 2] =
  0.4184076890127368     0.03962758060924269  …  -0.10337358151174884
 -0.03111574899263898    0.4181995558842374      -0.5967070861676086
 -0.0017125777519560075  0.09929713509696532     -0.4427420331236168
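For a plain Julia Array, the workflow that tsvd(A, (1,3), (2,)) abstracts can be sketched with permutedims, reshape and svd from the LinearAlgebra standard library (a toy analogue, without the space bookkeeping TensorKit adds):

```julia
# Bring indices (1,3) to the front, reshape to a matrix, and factorize.
using LinearAlgebra
A = randn(3, 2, 4)
M = reshape(permutedims(A, (1, 3, 2)), 3 * 4, 2)  # rows ↔ indices (1,3), columns ↔ index 2
F = svd(M)
M ≈ F.U * Diagonal(F.S) * F.Vt  # true: the factors reproduce the matrix
# Undoing the reshape and the permutation recovers the original array:
permutedims(reshape(F.U * Diagonal(F.S) * F.Vt, 3, 4, 2), (1, 3, 2)) ≈ A  # true
```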

Note that the tsvd routine returns the decomposition of the linear map as three factors, U, S and Vd, each of them a TensorMap, such that Vd is already what is commonly called V'. Furthermore, observe that U is printed differently than A, i.e. as a TensorMap((ℝ^3 ⊗ ℝ^4) ← ProductSpace(ℝ^2)). What this means is that tensors (or more appropriately, TensorMap instances) in TensorKit.jl are always considered to be linear maps between two ProductSpace instances, with

    julia> codomain(U)(ℝ^3 ⊗ ℝ^4)
    julia> domain(U)ProductSpace(ℝ^2)
    julia> codomain(A)(ℝ^3 ⊗ ℝ^2 ⊗ ℝ^4)
    julia> domain(A)ProductSpace{CartesianSpace, 0}()

    Hence, a Tensor instance such as A is just a specific case of TensorMap with an empty domain, i.e. a ProductSpace{CartesianSpace,0} instance. In particular, we can represent a vector v and matrix m as

julia> v = Tensor(randn, ℝ^3)TensorMap(ℝ^3 ← ProductSpace{CartesianSpace, 0}()):
 0.35824661862600293
 0.6139588872474919
 0.13970590616022677
julia> m1 = TensorMap(randn, ℝ^4, ℝ^3)TensorMap(ℝ^4 ← ℝ^3):
  1.2983612106135594    0.6078837595250391  -0.08686144012843619
  0.2967466498277512   -0.5062033082047673   0.06620930495614559
 -0.36872285712971214   2.5158784579599542  -2.4952937336119043
  0.540460336305756     0.6981998190405294  -0.5822437119267259
julia> m2 = TensorMap(randn, ℝ^4 → ℝ^2) # alternative syntax for TensorMap(randn, ℝ^2, ℝ^4)TensorMap(ℝ^2 ← ℝ^4):
  0.4513912362224189   -0.007069848555095417  …  -0.6093597449359544
 -0.25774027680842676   0.09778440081393554       0.15728158360953584
julia> w = m1 * v # matrix vector productTensorMap(ℝ^4 ← ProductSpace{CartesianSpace, 0}()):
  0.8262140938277608
 -0.19522970499187461
  1.0639449495320048
  0.5409411865767159
julia> m3 = m2*m1 # matrix matrix productTensorMap(ℝ^2 ← ℝ^3):
  0.47879619340648283  -1.6769767852318853   1.8320991978885603
 -0.6535839273240747    2.8578601102744035  -2.9927641271147016

Note that for the construction of m1, in accordance with how one specifies the dimensions of a matrix (e.g. randn(4,3)), the first space is the codomain and the second the domain. This is somewhat opposite to the general notation for a function f:domain→codomain, so we also support this more mathematical notation, as illustrated in the construction of m2. In fact, there is a third syntax which mixes both and reads as TensorMap(randn, codomain←domain).

    This 'matrix vector' or 'matrix matrix' product can be computed between any two TensorMap instances for which the domain of the first matches with the codomain of the second, e.g.

julia> v′ = v ⊗ vTensorMap((ℝ^3 ⊗ ℝ^3) ← ProductSpace{CartesianSpace, 0}()):
 0.1283406397569648    0.21994869533179737  0.050049168483982914
 0.21994869533179737   0.3769455152301785   0.08577368268803535
 0.050049168483982914  0.08577368268803535  0.019517740216050086
julia> m1′ = m1 ⊗ m1TensorMap((ℝ^4 ⊗ ℝ^4) ← (ℝ^3 ⊗ ℝ^3)):
[:, :, 1, 1] =
  1.6857418332259075    0.385284339515877    …   0.7017127365345527
  0.385284339515877     0.08805857418399397      0.1603797941635128
 -0.47873545516382354  -0.10941727256815861     -0.19928007936794342
  0.7017127365345527    0.1603797941635128       0.29209737511973083

[:, :, 2, 1] =
  0.7892526939292516   0.1803874691237537  …   0.3285370611077099
 -0.6572347400573304  -0.1502141358414893     -0.27358281019143477
  3.266519000433461    0.7465785037734252      1.3597325174934436
  0.9065155622996298   0.2071884572106192      0.3773493090072625

[:, :, 3, 1] =
 -0.11277752456079361  -0.025775841357327228  …  -0.04694516314381691
  0.08596359333674353   0.019647389433160127      0.0357835032231688
 -3.2397925928087807   -0.7404700557855136       -1.3486072904495352
 -0.7559626506893163   -0.17277887089753016      -0.31467963235982993

[:, :, 1, 2] =
  0.7892526939292516  -0.6572347400573304   …   0.9065155622996298
  0.1803874691237537  -0.1502141358414893       0.2071884572106192
 -0.2241406366148233   0.18664873008977406     -0.257442232124072
  0.3285370611077099  -0.27358281019143477      0.3773493090072625

[:, :, 2, 2] =
  0.3695226650942956   -0.30771277007552605  …   0.42442433089805903
 -0.30771277007552605   0.25624178923745067     -0.3534310581862859
  1.5293616555327552   -1.2735459984604376       1.7565858840756063
  0.42442433089805903  -0.3534310581862859       0.48748298730822803

[:, :, 3, 2] =
 -0.05280165878303289   0.04396954834844473  …  -0.06064664177927393
  0.04024756121228159  -0.0335153692027392       0.04622732473918008
 -1.5168485359072759    1.2631259428969714      -1.7422136332607987
 -0.353936496565832     0.29473369315873216     -0.4065224543047261

[:, :, 1, 3] =
 -0.11277752456079361    0.08596359333674353   …  -0.7559626506893163
 -0.025775841357327228   0.019647389433160127     -0.17277887089753016
  0.032027798378558424  -0.024412884092002412      0.2146865650074314
 -0.04694516314381691    0.0357835032231688       -0.31467963235982993

[:, :, 2, 3] =
 -0.05280165878303289   0.04024756121228159  …  -0.353936496565832
  0.04396954834844473  -0.0335153692027392       0.29473369315873216
 -0.21853282604651092   0.16657456405566792     -1.464854412119091
 -0.06064664177927393   0.04622732473918008     -0.4065224543047261

[:, :, 3, 3] =
  0.007544909781185905  -0.005751035578393614  …   0.05057452732368175
 -0.005751035578393614   0.004383672062775885     -0.038549951481754774
  0.21674480724499243   -0.1652116637638697        1.4528690858056938
  0.05057452732368175   -0.038549951481754774      0.3390077400782121
julia> w′ = m1′ * v′TensorMap((ℝ^4 ⊗ ℝ^4) ← ProductSpace{CartesianSpace, 0}()):
  0.6826297288396279   -0.16130153379812273  …   0.446933232281595
 -0.16130153379812273   0.038114637711214386    -0.10560778827332681
  0.8790463123602081   -0.20771365862472824      0.5755316434521467
  0.446933232281595    -0.10560778827332681      0.2926173673350253
    julia> w′ ≈ w ⊗ wtrue
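For plain matrices and vectors the same factor-wise action of the tensor product can be checked with kron from LinearAlgebra (kron's index ordering convention differs from ⊗ on TensorMaps, but the mixed-product property holds either way):

```julia
# The tensor (Kronecker) product of maps acts factor by factor.
using LinearAlgebra
m1 = randn(4, 3)
v  = randn(3)
kron(m1, m1) * kron(v, v) ≈ kron(m1 * v, m1 * v)  # true
```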

    Another example involves checking that U from the singular value decomposition is a unitary, or at least a left isometric tensor

    julia> codomain(U)(ℝ^3 ⊗ ℝ^4)
    julia> domain(U)ProductSpace(ℝ^2)
    julia> space(U)(ℝ^3 ⊗ ℝ^4) ← ℝ^2
    julia> U'*U # should be the identity on the corresponding domain = codomainTensorMap(ℝ^2 ← ℝ^2): - 0.9999999999999998 1.6357591794641708e-17 - 1.6357591794641708e-17 0.9999999999999999
    julia> U'*U ≈ one(U'*U)true
    julia> P = U*U' # should be a projectorTensorMap((ℝ^3 ⊗ ℝ^4) ← (ℝ^3 ⊗ ℝ^4)): + 1.9909955654737352 -2.126682783505595 … -0.6747101169308918 + -2.126682783505595 2.27161714475389 0.7206919063089711 + -0.5548686967007261 0.5926832409085988 0.18803433303637718 + -0.6747101169308918 0.7206919063089711 0.2286462862013356
    julia> w′ = m1′ * v′TensorMap((ℝ^4 ⊗ ℝ^4) ← ProductSpace{CartesianSpace, 0}()):
     0.0025905080514568857  0.04238488615873531  …  0.038304149290390964
     0.0423848861587353     0.6934850380714461      0.6267176070606612
     0.06645226107934754    1.087266074797453       0.9825861485494594
     0.03830414929039094    0.626717607060661       0.5663784183320415
    julia> w′ ≈ w ⊗ wtrue

    Another example involves checking that U from the singular value decomposition is a unitary, or at least a left isometric tensor

    julia> codomain(U)(ℝ^3 ⊗ ℝ^4)
    julia> domain(U)ProductSpace(ℝ^2)
    julia> space(U)(ℝ^3 ⊗ ℝ^4) ← ℝ^2
    julia> U'*U # should be the identity on the corresponding domain = codomainTensorMap(ℝ^2 ← ℝ^2):
     0.9999999999999997      1.0363599713755635e-16
     1.0363599713755635e-16  0.9999999999999999
    julia> U'*U ≈ one(U'*U)true
    julia> P = U*U' # should be a projectorTensorMap((ℝ^3 ⊗ ℝ^4) ← (ℝ^3 ⊗ ℝ^4)):
    [:, :, 1, 1] =
      0.09968710373387348  -0.014757104444291215  …   0.10607392371915292
      0.039156831227209275  0.20017269040918037      -0.06596043920691529
     -0.057544730994209    -0.04483395132601036      -0.08024721946375761

    [:, :, 2, 1] =
      0.039156831227209275  -0.009154694818973614  …  -0.004831213246152848
      0.14578448780959172   -0.008026934098692411      0.01928043263296211
     -0.3139760983778262    -0.11272706395129121      -0.08811061446024099

    [:, :, 3, 1] =
     -0.057544730994209    0.016021985695092056  …   0.04266028833435426
     -0.3139760983778262   0.07806881576184091      -0.06289503108373042
      0.6842575651241326   0.238407528066866         0.17276638652914159

    [:, :, 1, 2] =
     -0.014757104444291215   0.0022710352229713547  …  -0.014505193060326535
     -0.009154694818973614  -0.027400901852525045       0.00860068813217963
      0.016021985695092056   0.009086386428266766       0.013336627821973146

    [:, :, 2, 2] =
      0.20017269040918037   -0.027400901852525045  …   0.24389492069481336
     -0.008026934098692411   0.45953109632943534      -0.16247796916128185
      0.07806881576184091   -0.026821493686314837     -0.12353294267677752

    [:, :, 3, 2] =
     -0.04483395132601036   0.009086386428266766  …  -0.013791700070588888
     -0.11272706395129121  -0.026821493686314837     -0.003295701343235278
      0.238407528066866     0.08954174568051444       0.07736741650039798

    [:, :, 1, 3] =
      0.10170358777498369   -0.014129189175520657  …   0.12104684525982119
      0.003973903132361896   0.22812745050686567      -0.07976130331034108
      0.021673355520852092  -0.019500727334971085     -0.0662588462591374

    [:, :, 2, 3] =
      0.04950579890136365  -0.008314324440743901  …   0.03902860450504955
      0.05772532479181951   0.07397108405177266      -0.01949151109875041
     -0.11410883158059201  -0.05018616205855642      -0.056463436360192694

    [:, :, 3, 3] =
      0.08815584677697005   -0.011967502058495326  …   0.10879327298736226
     -0.007411543098754831   0.20495297645613866      -0.07289847092617228
      0.04304300853091787   -0.008984650170658064     -0.052721548551213436

    [:, :, 1, 4] =
      0.10607392371915292   -0.014505193060326535  …   0.12944882578819475
     -0.004831213246152848   0.24389492069481336      -0.08629921020475591
      0.04266028833435426   -0.013791700070588888     -0.0652109237118893

    [:, :, 2, 4] =
     -0.06596043920691529   0.00860068813217963   …  -0.08629921020475591
      0.01928043263296211  -0.16247796916128185       0.05930412165558971
     -0.06289503108373042  -0.003295701343235278      0.033487222187889715

    [:, :, 3, 4] =
     -0.08024721946375761   0.013336627821973146  …  -0.0652109237118893
     -0.08811061446024099  -0.12353294267677752       0.033487222187889715
      0.17276638652914159   0.07736741650039798       0.08915582794276464
    julia> P*P ≈ Ptrue
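    The same two checks can be sketched with plain numpy arrays (a hypothetical stand-in for the session above, not TensorKit's API): viewing a map ℝ^2 → ℝ^3 ⊗ ℝ^4 as a 12×2 matrix, the thin-SVD factor U satisfies U'U = 𝟙 and UU' is an orthogonal projector.

    ```python
    import numpy as np

    # Minimal numpy sketch (assumed shapes, not TensorKit code): a map
    # R^2 -> R^3 (x) R^4 flattened to a 12 x 2 matrix.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 4, 2)).reshape(12, 2)

    # Thin SVD: U has orthonormal columns, i.e. it is a left isometry.
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    assert np.allclose(U.T @ U, np.eye(2))  # U'U is the identity on the domain

    # P = U U' projects onto the column space of U: P P = P.
    P = U @ U.T
    assert np.allclose(P @ P, P)
    ```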

    Here, the adjoint of a TensorMap results in a new tensor map (actually a simple wrapper of type AdjointTensorMap <: AbstractTensorMap) with domain and codomain interchanged.

    Our original tensor A living in ℝ^3 * ℝ^2 * ℝ^4 is isomorphic to e.g. a linear map ℝ^4 → ℝ^3 * ℝ^2. This is where the full power of permute emerges. It allows one to specify a permutation where some indices go to the codomain, and others go to the domain, as in

    julia> A2 = permute(A,(1,2),(3,))TensorMap((ℝ^3 ⊗ ℝ^2) ← ℝ^4):
     [:, :, 1] =
      1.0555813089269042   0.4376422399575223
      0.9871406423620411  -0.9453234658997305
     -1.8885500237866142   2.2436909048773024

     [:, :, 2] =
     -0.17100541362569313  -0.03601530560342866
      1.7391800449333275    1.6211959694028348
     -0.8923333445063663    0.6180770601076007

     [:, :, 3] =
      0.9189928980552804   0.7547089887486599
      0.6922728429011329  -0.11062011029822173
      0.7489141686664806   0.747184674033249

     [:, :, 4] =
      0.9190764612794894   0.8640402412092449
     -0.5000561792274693  -0.676735850458127
     -1.0981789885761908   0.1325316358646223
    julia> codomain(A2)(ℝ^3 ⊗ ℝ^2)
    julia> domain(A2)ProductSpace(ℝ^4)
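    In plain array language, this permute-and-regroup is just a transpose followed by a reshape; a minimal numpy sketch (hypothetical shapes matching the example above, not TensorKit code):

    ```python
    import numpy as np

    # A rank-3 array standing in for a tensor in R^3 (x) R^2 (x) R^4.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 2, 4))

    # Regrouped as a linear map R^4 -> R^3 (x) R^2, i.e. a 6 x 4 matrix.
    A2 = A.reshape(6, 4)

    # Analogue of tsvd(A, (1,3), (2,)): put indices (1,3) in the codomain and
    # index 2 in the domain, then take an ordinary matrix SVD.
    M = A.transpose(0, 2, 1).reshape(12, 2)
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    assert np.allclose(U @ np.diag(S) @ Vt, M)
    ```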

    In fact, tsvd(A, (1,3),(2,)) is a shorthand for tsvd(permute(A,(1,3),(2,))), where tsvd(A::TensorMap) will just compute the singular value decomposition according to the given codomain and domain of A.

    Note, finally, that the @tensor macro treats all indices on the same footing and thus does not distinguish between codomain and domain. The linear numbering is first all indices in the codomain, followed by all indices in the domain. However, when @tensor creates a new tensor (i.e. when using :=), the default syntax always creates a Tensor, i.e. with all indices in the codomain.

    julia> @tensor A′[a,b,c] := U[a,c,d]*S[d,e]*Vd[e,b];
    julia> codomain(A′)(ℝ^3 ⊗ ℝ^2 ⊗ ℝ^4)
    julia> domain(A′)ProductSpace{CartesianSpace, 0}()
    julia> @tensor A2′[(a,b);(c,)] := U[a,c,d]*S[d,e]*Vd[e,b];
    julia> codomain(A2′)(ℝ^3 ⊗ ℝ^2)
    julia> domain(A2′)ProductSpace(ℝ^4)
    julia> @tensor A2′′[a b; c] := U[a,c,d]*S[d,e]*Vd[e,b];
    julia> A2 ≈ A2′ == A2′′true

    As illustrated for A2′ and A2′′, additional syntax is available that enables one to immediately specify the desired codomain and domain indices.

    Complex tensors

    For applications in e.g. quantum physics, we of course want to work with complex tensors. Trying to create a complex-valued tensor with CartesianSpace indices is of course somewhat contrived and prints a (one-time) warning

    julia> A = Tensor(randn, ComplexF64, ℝ^3 ⊗ ℝ^2 ⊗ ℝ^4)┌ Warning: scalartype(data) = ComplexF64 ⊈ ℝ)
     └ @ TensorKit ~/work/TensorKit.jl/TensorKit.jl/src/tensors/tensor.jl:33
     TensorMap((ℝ^3 ⊗ ℝ^2 ⊗ ℝ^4) ← ProductSpace{CartesianSpace, 0}()):
     [:, :, 1] =
        0.5946887399738913 + 0.6675887990589415im  …     -0.527946175636018 - 0.06731730775648363im
      0.015930492858489735 + 0.4831419320834282im     -0.053402147646534025 - 0.0561467865751868im
        1.5764479905168631 - 1.8100523037602432im         0.4155961323333514 + 1.0115246243511382im

     [:, :, 2] =
        -0.7655737452849927 + 2.233291896593319im    …  0.16290418824851766 + 0.1603863374653683im
     -0.0021993642390281333 - 0.541464558294894im        -0.9696340120500261 - 0.3785799641248994im
        0.33677452194892643 - 0.11033049789112175im       -0.916686446428361 - 0.6454694662469286im

     [:, :, 3] =
     -0.7266007844434653 - 0.9377318160747717im   …  -0.18240669099780665 + 0.31126214516983997im
      1.0290814449066523 + 0.0796463119999713im        0.1824460305306445 - 0.6694201718489im
     -0.1610663659174546 - 0.12411694341813634im       0.2672848847072118 + 0.5053640893295271im

     [:, :, 4] =
       -0.746642227564477 - 0.8175041317521696im  …  0.11545729510477612 - 0.8586936955821053im
     -0.25818590570084854 - 0.588125926590225im       1.0583195442997078 - 0.5539042287850574im
       0.9562618253534529 - 0.3827272778366046im     0.06235784288495693 + 0.4309376850055296im


    although most of the above operations will work in the expected way (at your own risk). Indeed, we instead want to work with complex vector spaces

    julia> A = Tensor(randn, ComplexF64, ℂ^3 ⊗ ℂ^2 ⊗ ℂ^4)TensorMap((ℂ^3 ⊗ ℂ^2 ⊗ ℂ^4) ← ProductSpace{ComplexSpace, 0}()):
     [:, :, 1] =
      0.6360953113896072 - 0.42311290672728985im  …    0.1570153759532642 - 0.7653088634301265im
      -0.483327293881031 - 0.36064693583380586im     -0.46706430545953775 - 0.12241013274409804im
     -0.4110136064000504 + 0.48666169978947477im      -0.9132820820342664 + 0.4058254810622393im

     [:, :, 2] =
       0.2688234466396421 - 0.117910900929527im    …  -0.45742432051133924 + 1.0775410357878128im
       0.6308364891597142 - 0.012618460489161279im    -0.49722770224474466 + 0.8245577130872748im
      -0.4702750709003257 - 0.44949872621634973im       1.0013008321976582 + 0.08502274362938406im

     [:, :, 3] =
       0.09200370269634629 - 0.2784084868982801im  …   0.8785344873253728 + 1.6470142692938077im
      -0.43325099856671045 + 0.4411770736318087im     -0.5939585156967321 + 0.9618993791273983im
       0.48661689920050094 + 0.7149720369816228im     -0.8973179182074638 + 1.0188439958666178im

     [:, :, 4] =
       0.3290006620699988 - 0.6589707748041261im  …  -0.8743981995860999 - 0.05263128376931622im
      -0.2531860830875862 + 0.1333312756710722im     -0.4565715958028171 - 0.021448275305631113im
       0.6271760454576515 + 0.5563981534986161im     -0.11785589794339588 + 0.07132611936933032im


    where ℂ is obtained as \bbC+TAB and we also have the non-Unicode alternative ℂ^n == ComplexSpace(n). Most functionality works exactly the same

    julia> B = Tensor(randn, ℂ^3 * ℂ^2 * ℂ^4);
    julia> C = im*A + (2.5-0.8im)*BTensorMap((ℂ^3 ⊗ ℂ^2 ⊗ ℂ^4) ← ProductSpace{ComplexSpace, 0}()):
    [:, :, 1] =
      0.9710664343921469 + 0.46075018253685296im  …  -0.14951203555028447 + 0.4497580636269957im
     -3.5476625608682055 + 0.7673317450636127im       -1.8510940059480794 + 0.16445701892195913im
     -3.1429431697473373 + 0.4389964639864658im      -4.3604462945494555 + 0.35219657828164297im

    [:, :, 2] =
      -1.444339049842035 + 0.768743430886542im   …   1.8230268482617547 - 1.3856060434072008im
     -3.6828781185353137 + 1.8133953944475463im     -1.9645957913101577 - 0.13241551721342204im
      -3.881409386965615 + 0.9156155253179034im       4.631867625603313 - 0.5081040859568049im

    [:, :, 3] =
     -0.3669707090646305 + 0.29852504540447766im  …  -0.40434002166997796 + 0.4808787280857472im
      0.9444066127172247 - 0.8766377781984012im        1.2677895279446614 - 1.3074589659597913im
     -2.9233095390050297 + 1.1932848998479912im       -4.479241921893334 + 0.21000941812108564im

    [:, :, 4] =
     -2.9450940643272983 + 1.4823014105920547im  …  -1.8849108779948303 - 0.25438470782157296im
       4.664581374259645 - 1.7885181310654157im      1.0108213701437525 - 0.7731709861510159im
      -6.541670752484209 + 2.542463277133041im      -2.3595342358741838 + 0.6143706993381571im
    julia> scalarBA = dot(B,A)2.985473949112455 + 1.386313453398504im
    julia> scalarAA = dot(A,A)17.554703252196177 + 0.0im
    julia> normA² = norm(A)^217.55470325219618
    julia> U,S,Vd = tsvd(A,(1,3),(2,));
    julia> @tensor A′[a,b,c] := U[a,c,d]*S[d,e]*Vd[e,b];
    julia> A′ ≈ Atrue
    julia> permute(A,(1,3),(2,)) ≈ U*S*Vdtrue

    However, trying the following

    julia> @tensor D[a,b,c,d] := A[a,b,e]*B[d,c,e]ERROR: SpaceMismatch("(ℂ^3 ⊗ ℂ^2) ← ((ℂ^3)' ⊗ (ℂ^2)') ≠ (ℂ^3 ⊗ ℂ^2) ← (ℂ^4)' * ℂ^4 ← ((ℂ^3)' ⊗ (ℂ^2)')")
    julia> @tensor d = A[a,b,c]*A[a,b,c]ERROR: SpaceMismatch("ProductSpace{ComplexSpace, 0}() ← ProductSpace{ComplexSpace, 0}() ≠ ProductSpace{ComplexSpace, 0}() ← ((ℂ^3)' ⊗ (ℂ^2)' ⊗ (ℂ^4)') * (ℂ^3 ⊗ ℂ^2 ⊗ ℂ^4) ← ProductSpace{ComplexSpace, 0}()")

    we obtain SpaceMismatch errors. The reason for this is that, with ComplexSpace, an index in a space ℂ^n can only be contracted with an index in the dual space dual(ℂ^n) == (ℂ^n)'. Because of the complex Euclidean inner product, the dual space is equivalent to the complex conjugate space, but not to the space itself.

    julia> dual(ℂ^3) == conj(ℂ^3) == (ℂ^3)'true
    julia> (ℂ^3)' == ℂ^3false
    julia> @tensor d = conj(A[a,b,c])*A[a,b,c]17.554703252196177 + 0.0im
    julia> d ≈ normA²true
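    In flat-array terms, the legal contraction is the one carrying a complex conjugate, which is nothing but the Euclidean inner product; a small numpy sketch (not TensorKit code):

    ```python
    import numpy as np

    # A random complex rank-3 array standing in for the tensor A above.
    rng = np.random.default_rng(2)
    A = rng.standard_normal((3, 2, 4)) + 1j * rng.standard_normal((3, 2, 4))

    # conj(A[a,b,c]) * A[a,b,c]: contract the conjugate with the tensor itself.
    d = np.sum(np.conj(A) * A)

    assert np.allclose(d, np.linalg.norm(A) ** 2)  # equals the squared norm
    assert np.allclose(d, np.vdot(A, A))           # vdot conjugates its first argument
    ```

    Contracting A with itself without the conjugate is not invariant under unitary basis changes, which is exactly what the SpaceMismatch above flags.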

    This might seem overly strict or puristic, but we believe that it can help to catch errors, e.g. unintended contractions. In particular, contracting two indices both living in ℂ^n would represent an operation that is not invariant under arbitrary unitary basis changes.

    It also makes clear the isomorphism between linear maps ℂ^n → ℂ^m and tensors in ℂ^m ⊗ (ℂ^n)':

    julia> m = TensorMap(randn, ComplexF64, ℂ^3, ℂ^4)TensorMap(ℂ^3 ← ℂ^4):
     -0.8219742432307734 - 1.2332332025906438im  …    1.0246126441205075 - 1.0945929515320691im
     -1.1509342155978473 - 0.582669479697431im       -0.8220121812156309 - 1.1614509892514933im
     -0.4570938877797472 - 0.6918481702603796im     -0.12750445690112763 - 0.3006833740743877im
    julia> m2 = permute(m, (1,2), ())TensorMap((ℂ^3 ⊗ (ℂ^4)') ← ProductSpace{ComplexSpace, 0}()):
     -0.8219742432307734 - 1.2332332025906438im  …    1.0246126441205075 - 1.0945929515320691im
     -1.1509342155978473 - 0.582669479697431im       -0.8220121812156309 - 1.1614509892514933im
     -0.4570938877797472 - 0.6918481702603796im     -0.12750445690112763 - 0.3006833740743877im
    julia> codomain(m2)(ℂ^3 ⊗ (ℂ^4)')
    julia> space(m, 1)ℂ^3
    julia> space(m, 2)(ℂ^4)'

    Hence, spaces become their corresponding dual space if they are 'permuted' from the domain to the codomain or vice versa. Also, spaces in the domain are reported as their dual when probing them with space(A, i). Generalizing matrix-vector and matrix-matrix multiplication to arbitrary tensor contractions requires that the two indices to be contracted have spaces which are each other's dual. Knowing this, all the other functionality of tensors with CartesianSpace indices remains the same for tensors with ComplexSpace indices.

    Symmetries

    So far, the functionality that we have illustrated seems to be just a convenience (or inconvenience?) wrapper around dense multidimensional arrays, e.g. Julia's Base Array. More power becomes visible when involving symmetries. By symmetries, we mean that there is some symmetry action defined on every vector space associated with each of the indices of a TensorMap, and the TensorMap is then required to be equivariant, i.e. it acts as an intertwiner between the tensor product representation on the domain and that on the codomain. By Schur's lemma, this means that the tensor is block diagonal in some basis corresponding to the irreducible representations obtained by combining the different representations on the different spaces in the domain or codomain. For Abelian symmetries, this does not require a basis change, and it just imposes that the tensor has some block sparsity. Let's clarify all of this with some examples.
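    The equivariance condition and the resulting block structure can be sketched in the simplest case with numpy (an illustrative toy, not TensorKit code): let ℤ₂ act on a 5-dimensional space with three charge-0 and two charge-1 basis vectors, and project a random matrix onto the commutant by group averaging.

    ```python
    import numpy as np

    # Z_2 acts as +1 on the first three (charge-0) and -1 on the last two
    # (charge-1) basis vectors.
    g = np.diag([1.0, 1.0, 1.0, -1.0, -1.0])

    rng = np.random.default_rng(3)
    T = rng.standard_normal((5, 5))

    # Averaging over the group {1, g} projects onto the equivariant maps.
    T_sym = (T + g @ T @ g) / 2
    assert np.allclose(g @ T_sym, T_sym @ g)  # T_sym is an intertwiner

    # Schur's lemma (abelian case): the blocks mixing different charges vanish.
    assert np.allclose(T_sym[:3, 3:], 0)
    assert np.allclose(T_sym[3:, :3], 0)
    ```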

    We start with a simple $ℤ₂$ symmetry:

    julia> V1 = ℤ₂Space(0=>3,1=>2)Rep[ℤ₂](0=>3, 1=>2)
    julia> dim(V1)5
    julia> V2 = ℤ₂Space(0=>1,1=>1)Rep[ℤ₂](0=>1, 1=>1)
    julia> dim(V2)2
    julia> A = Tensor(randn, V1*V1*V2')TensorMap((Rep[ℤ₂](0=>3, 1=>2) ⊗ Rep[ℤ₂](0=>3, 1=>2) ⊗ Rep[ℤ₂](0=>1, 1=>1)') ← ProductSpace{GradedSpace{Z2Irrep, Tuple{Int64, Int64}}, 0}()):
    * Data for sector (Irrep[ℤ₂](1), Irrep[ℤ₂](1), Irrep[ℤ₂](0)) ← ():
    [:, :, 1] =
     -0.016276359769990013  -0.22426320958670015
      0.932421323624112     -0.4269009488072073
    * Data for sector (Irrep[ℤ₂](0), Irrep[ℤ₂](1), Irrep[ℤ₂](1)) ← ():
    [:, :, 1] =
      3.7677128285742683   0.7496038597987614
     -0.9649460831908412  -0.2432394923566399
     -0.7095836975501318  -1.2197090432139792
    * Data for sector (Irrep[ℤ₂](1), Irrep[ℤ₂](0), Irrep[ℤ₂](1)) ← ():
    [:, :, 1] =
      1.2334169105718025   1.2935723652456423  -1.7171923926754455
     -0.66727836353175    -0.4042825938543303  -0.27379193407261493
    * Data for sector (Irrep[ℤ₂](0), Irrep[ℤ₂](0), Irrep[ℤ₂](0)) ← ():
    [:, :, 1] =
     -1.0549883952675942    -0.17705954204486954  -0.7392954791383336
     -0.013444212406976965  -0.07827844206366598   0.14229223745118705
     -0.8215983774093304    -1.765275844451808     0.3608688769559288
    julia> convert(Array, A)5×5×2 Array{Float64, 3}:
    [:, :, 1] =
     -1.05499    -0.17706    -0.739295   0.0         0.0
     -0.0134442  -0.0782784   0.142292   0.0         0.0
     -0.821598   -1.76528     0.360869   0.0         0.0
      0.0         0.0         0.0       -0.0162764  -0.224263
      0.0         0.0         0.0        0.932421   -0.426901

    [:, :, 2] =
      0.0        0.0        0.0        3.76771   0.749604
      0.0        0.0        0.0       -0.964946 -0.243239
      0.0        0.0        0.0       -0.709584 -1.21971
      1.23342    1.29357   -1.71719    0.0       0.0
     -0.667278  -0.404283  -0.273792   0.0       0.0


    Here, we create a 5-dimensional space V1, which has a three-dimensional subspace associated with charge 0 (the trivial irrep of $ℤ₂$) and a two-dimensional subspace with charge 1 (the non-trivial irrep). Similarly for V2, where both subspaces are one-dimensional. Representing the tensor as a dense Array, we see that it is zero in those regions where the charges don't add to zero (modulo 2). Of course, the Tensor(Map) type in TensorKit.jl won't store these zero blocks, and only stores the non-zero information, which we can recognize also in the full Array representation.
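    The sparsity pattern in the dense Array above can be predicted directly from the charges; a short numpy sketch of the charge bookkeeping (not TensorKit code):

    ```python
    import numpy as np

    # Charges of the basis vectors of V1 (0=>3, 1=>2) and V2 (0=>1, 1=>1).
    q1 = np.array([0, 0, 0, 1, 1])
    q2 = np.array([0, 1])

    # An invariant tensor in V1 * V1 * V2' can only be nonzero where the
    # charges add up to zero modulo 2.
    allowed = (q1[:, None, None] + q1[None, :, None] + q2[None, None, :]) % 2 == 0

    assert allowed[0, 0, 0]       # 0 + 0 + 0 = 0 mod 2: may be nonzero
    assert not allowed[0, 3, 0]   # 0 + 1 + 0 = 1 mod 2: forced to zero
    assert allowed.sum() == 25    # 25 of the 5*5*2 = 50 entries survive
    ```

    The 25 surviving entries are exactly the 3×3 + 2×2 and 3×2 + 2×3 blocks visible in the convert(Array, A) output.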

    From there on, the resulting tensors support all of the same operations as the ones we encountered in the previous examples.

    julia> B = Tensor(randn, V1'*V1*V2);
    julia> @tensor C[a,b] := A[a,c,d]*B[c,b,d]TensorMap((Rep[ℤ₂](0=>3, 1=>2) ⊗ Rep[ℤ₂](0=>3, 1=>2)) ← ProductSpace{GradedSpace{Z2Irrep, Tuple{Int64, Int64}}, 0}()): * Data for sector (Irrep[ℤ₂](0), Irrep[ℤ₂](0)) ← (): - 2.9886837970566917 -0.06096937787982587 0.9157166142421359 - 3.312704327647421 -2.3318376647601506 -0.1840745475341887 - 2.0471621022045454 -0.826355884967673 0.8947895971887824 + -1.471014089422895 3.0079054303937305 5.408340135969764 + 0.7989913681559327 -0.9211539574925491 -0.7987132378539428 + 3.1003544346558787 -1.7068789964630664 1.84161163662961 * Data for sector (Irrep[ℤ₂](1), Irrep[ℤ₂](1)) ← (): - 0.14977845160722608 0.8631659108765608 - 2.2324220776931205 -4.645545487606566
    julia> U,S,V = tsvd(A,(1,3),(2,));
    julia> U'*U # should be the identity on the corresponding domain = codomainTensorMap(Rep[ℤ₂](0=>3, 1=>2)' ← Rep[ℤ₂](0=>3, 1=>2)'): + 0.32087709173966333 0.8048437765067722 + 1.0047716459441896 0.22304719798728437
    julia> U,S,V = tsvd(A,(1,3),(2,));
    julia> U'*U # should be the identity on the corresponding domain = codomainTensorMap(Rep[ℤ₂](0=>3, 1=>2)' ← Rep[ℤ₂](0=>3, 1=>2)'): * Data for sector (Irrep[ℤ₂](0),) ← (Irrep[ℤ₂](0),): - 0.9999999999999998 -2.4944528621697335e-16 -1.7673901546078634e-16 - -2.4944528621697335e-16 0.9999999999999999 -4.1455122537420226e-16 - -1.7673901546078634e-16 -4.1455122537420226e-16 0.9999999999999998 + 1.0 -2.7210449897618077e-17 -2.0152082047835797e-16 + -2.7210449897618077e-17 0.9999999999999999 5.4493707325782766e-17 + -2.0152082047835797e-16 5.4493707325782766e-17 0.9999999999999992 * Data for sector (Irrep[ℤ₂](1),) ← (Irrep[ℤ₂](1),): - 1.0000000000000004 2.782702734116087e-17 - 2.782702734116087e-17 1.0000000000000002
    julia> U'*U ≈ one(U'*U)true
    julia> P = U*U' # should be a projectorTensorMap((Rep[ℤ₂](0=>3, 1=>2) ⊗ Rep[ℤ₂](0=>1, 1=>1)') ← (Rep[ℤ₂](0=>3, 1=>2) ⊗ Rep[ℤ₂](0=>1, 1=>1)')): + 0.999999999999999 -6.287702082082968e-17 + -6.287702082082968e-17 0.9999999999999994
    julia> U'*U ≈ one(U'*U)true
    julia> P = U*U' # should be a projectorTensorMap((Rep[ℤ₂](0=>3, 1=>2) ⊗ Rep[ℤ₂](0=>1, 1=>1)') ← (Rep[ℤ₂](0=>3, 1=>2) ⊗ Rep[ℤ₂](0=>1, 1=>1)')): * Data for sector (Irrep[ℤ₂](0), Irrep[ℤ₂](0)) ← (Irrep[ℤ₂](0), Irrep[ℤ₂](0)): [:, :, 1, 1] = - 0.3611324961155829 - 0.3252605027183733 - 0.33279833515276475 + 0.8121120505519245 + -0.04905650114731836 + -0.06480180805813751 [:, :, 2, 1] = - 0.3252605027183733 - 0.8337535666138889 - -0.16362728685776187 + -0.04905650114731836 + 0.006091087246304535 + 0.013350571601239479 [:, :, 3, 1] = - 0.33279833515276475 - -0.16362728685776187 - 0.7747306719613185 + -0.06480180805813751 + 0.013350571601239479 + 0.9767161922463586 * Data for sector (Irrep[ℤ₂](1), Irrep[ℤ₂](1)) ← (Irrep[ℤ₂](0), Irrep[ℤ₂](0)): [:, :, 1, 1] = - 0.07356256872533529 - -0.09356943057634225 + 0.002290133387554271 + 0.3820667592933458 [:, :, 2, 1] = - -0.0622379040445127 - 0.04654800145669955 + -0.055124846516257486 + -0.020747509058314142 [:, :, 3, 1] = - 0.1832315357450075 - 0.0584862488737674 + 0.0025090770257625438 + 0.1354911971049322 * Data for sector (Irrep[ℤ₂](0), Irrep[ℤ₂](0)) ← (Irrep[ℤ₂](1), Irrep[ℤ₂](1)): [:, :, 1, 1] = - 0.07356256872533529 - -0.0622379040445127 - 0.1832315357450075 + 0.002290133387554271 + -0.055124846516257486 + 0.0025090770257625438 [:, :, 2, 1] = - -0.09356943057634225 - 0.04654800145669955 - 0.0584862488737674 + 0.3820667592933458 + -0.020747509058314142 + 0.1354911971049322 * Data for sector (Irrep[ℤ₂](1), Irrep[ℤ₂](1)) ← (Irrep[ℤ₂](1), Irrep[ℤ₂](1)): [:, :, 1, 1] = - 0.0459167286363731 - -0.030815235824331454 + 0.9968072430564545 + -0.011501059288173944 [:, :, 2, 1] = - -0.030815235824331454 - 0.9844665366728361 + -0.011501059288173944 + 0.2082734268989573 * Data for sector (Irrep[ℤ₂](0), Irrep[ℤ₂](1)) ← (Irrep[ℤ₂](0), Irrep[ℤ₂](1)): [:, :, 1, 1] = - 0.13221486239164112 - -0.2664859920552033 - 0.08836522531208783 + 0.8623561950021744 + -0.2190320237837236 + -0.12400255993122096 [:, :, 2, 1] = - -0.2664859920552033 - 
0.5418000865275286 - -0.1571982630454507 + -0.2190320237837236 + 0.057285250648071474 + 0.0662711159463551 [:, :, 3, 1] = - 0.08836522531208783 - -0.1571982630454507 - 0.15238036745310238 + -0.12400255993122096 + 0.0662711159463551 + 0.7495353642481801 * Data for sector (Irrep[ℤ₂](1), Irrep[ℤ₂](0)) ← (Irrep[ℤ₂](0), Irrep[ℤ₂](1)): [:, :, 1, 1] = - -0.159725831352555 - 0.10197330212871325 + 0.00414551004992616 + 0.2352214709113129 [:, :, 2, 1] = - 0.3595354965636637 - -0.15251525042060068 + 0.006073623772732197 + -0.0399984193307365 [:, :, 3, 1] = - 0.06108457377447538 - 0.30481069365392655 + 0.14935273288135809 + 0.3816508450038151 * Data for sector (Irrep[ℤ₂](0), Irrep[ℤ₂](1)) ← (Irrep[ℤ₂](1), Irrep[ℤ₂](0)): [:, :, 1, 1] = - -0.159725831352555 - 0.3595354965636637 - 0.06108457377447538 + 0.00414551004992616 + 0.006073623772732197 + 0.14935273288135809 [:, :, 2, 1] = - 0.10197330212871325 - -0.15251525042060068 - 0.30481069365392655 + 0.2352214709113129 + -0.0399984193307365 + 0.3816508450038151 * Data for sector (Irrep[ℤ₂](1), Irrep[ℤ₂](0)) ← (Irrep[ℤ₂](1), Irrep[ℤ₂](0)): [:, :, 1, 1] = - 0.4948105128705678 - 0.30242954481925083 + 0.030749077032802177 + 0.08627432014218323 [:, :, 2, 1] = - 0.30242954481925083 - 0.6787941707571608
    julia> P*P ≈ Ptrue

    We also support other abelian symmetries, e.g.

    julia> V = U₁Space(0=>2,1=>1,-1=>1)Rep[U₁](0=>2, 1=>1, -1=>1)
    julia> dim(V)4
    julia> A = TensorMap(randn, V*V, V)TensorMap((Rep[U₁](0=>2, 1=>1, -1=>1) ⊗ Rep[U₁](0=>2, 1=>1, -1=>1)) ← Rep[U₁](0=>2, 1=>1, -1=>1)): + 0.08627432014218323 + 0.3000741130687704
    julia> P*P ≈ Ptrue

    We also support other abelian symmetries, e.g.

    julia> V = U₁Space(0=>2,1=>1,-1=>1)Rep[U₁](0=>2, 1=>1, -1=>1)
    julia> dim(V)4
    julia> A = TensorMap(randn, V*V, V)TensorMap((Rep[U₁](0=>2, 1=>1, -1=>1) ⊗ Rep[U₁](0=>2, 1=>1, -1=>1)) ← Rep[U₁](0=>2, 1=>1, -1=>1)): * Data for sector (Irrep[U₁](0), Irrep[U₁](0)) ← (Irrep[U₁](0),): [:, :, 1] = - -0.7796006134746782 2.5666291542742377 - -0.05959789046634579 0.08386488615767418 + -0.3723141472492799 0.40077714153442173 + -0.03376708855530123 -0.7623071425386879 [:, :, 2] = - 1.1155472043514039 -3.0011207644747406 - 0.786646273425902 0.30760254244997703 + -1.5292678482460462 -1.9208580355327698 + -1.0353366467050513 -3.214581158946452 * Data for sector (Irrep[U₁](-1), Irrep[U₁](1)) ← (Irrep[U₁](0),): [:, :, 1] = - 0.4423142752147809 + 0.4708133948423085 [:, :, 2] = - -1.3098256504238615 + -0.1610427364221391 * Data for sector (Irrep[U₁](1), Irrep[U₁](-1)) ← (Irrep[U₁](0),): [:, :, 1] = - 0.8769673379847988 + 0.13245570545301524 [:, :, 2] = - -0.5778257241200374 + 0.9162854629339018 * Data for sector (Irrep[U₁](1), Irrep[U₁](0)) ← (Irrep[U₁](1),): [:, :, 1] = - -0.399879062353704 0.2295257111272168 + 1.2210523555773007 -0.8333863379306162 * Data for sector (Irrep[U₁](0), Irrep[U₁](1)) ← (Irrep[U₁](1),): [:, :, 1] = - 1.4172706633607914 - -0.034028454429488486 + -0.04838421634999537 + 0.6353116924290141 * Data for sector (Irrep[U₁](-1), Irrep[U₁](0)) ← (Irrep[U₁](-1),): [:, :, 1] = - 2.0938037703289956 1.0321696314368398 + 1.2747796361807346 -0.6318399972016648 * Data for sector (Irrep[U₁](0), Irrep[U₁](-1)) ← (Irrep[U₁](-1),): [:, :, 1] = - 0.10639547137046734 - 1.3546117223872571
    julia> dim(A)20
    julia> convert(Array, A)4×4×4 Array{Float64, 3}: + -0.858077313563977 + 0.8260714618277442
    julia> dim(A)20
    julia> convert(Array, A)4×4×4 Array{Float64, 3}: [:, :, 1] = - -0.779601 2.56663 0.0 0.0 - -0.0595979 0.0838649 0.0 0.0 - 0.0 0.0 0.0 0.876967 - 0.0 0.0 0.442314 0.0 + -0.372314 0.400777 0.0 0.0 + -0.0337671 -0.762307 0.0 0.0 + 0.0 0.0 0.0 0.132456 + 0.0 0.0 0.470813 0.0 [:, :, 2] = - 1.11555 -3.00112 0.0 0.0 - 0.786646 0.307603 0.0 0.0 - 0.0 0.0 0.0 -0.577826 - 0.0 0.0 -1.30983 0.0 + -1.52927 -1.92086 0.0 0.0 + -1.03534 -3.21458 0.0 0.0 + 0.0 0.0 0.0 0.916285 + 0.0 0.0 -0.161043 0.0 [:, :, 3] = - 0.0 0.0 1.41727 0.0 - 0.0 0.0 -0.0340285 0.0 - -0.399879 0.229526 0.0 0.0 - 0.0 0.0 0.0 0.0 + 0.0 0.0 -0.0483842 0.0 + 0.0 0.0 0.635312 0.0 + 1.22105 -0.833386 0.0 0.0 + 0.0 0.0 0.0 0.0 [:, :, 4] = - 0.0 0.0 0.0 0.106395 - 0.0 0.0 0.0 1.35461 - 0.0 0.0 0.0 0.0 - 2.0938 1.03217 0.0 0.0
    julia> V = Rep[U₁×ℤ₂]((0, 0) => 2, (1, 1) => 1, (-1, 0) => 1)Rep[U₁ × ℤ₂]((0, 0)=>2, (-1, 0)=>1, (1, 1)=>1)
    julia> dim(V)4
    julia> A = TensorMap(randn, V*V, V)TensorMap((Rep[U₁ × ℤ₂]((0, 0)=>2, (-1, 0)=>1, (1, 1)=>1) ⊗ Rep[U₁ × ℤ₂]((0, 0)=>2, (-1, 0)=>1, (1, 1)=>1)) ← Rep[U₁ × ℤ₂]((0, 0)=>2, (-1, 0)=>1, (1, 1)=>1)): + 0.0 0.0 0.0 -0.858077 + 0.0 0.0 0.0 0.826071 + 0.0 0.0 0.0 0.0 + 1.27478 -0.63184 0.0 0.0
    julia> V = Rep[U₁×ℤ₂]((0, 0) => 2, (1, 1) => 1, (-1, 0) => 1)Rep[U₁ × ℤ₂]((0, 0)=>2, (-1, 0)=>1, (1, 1)=>1)
    julia> dim(V)4
    julia> A = TensorMap(randn, V*V, V)TensorMap((Rep[U₁ × ℤ₂]((0, 0)=>2, (-1, 0)=>1, (1, 1)=>1) ⊗ Rep[U₁ × ℤ₂]((0, 0)=>2, (-1, 0)=>1, (1, 1)=>1)) ← Rep[U₁ × ℤ₂]((0, 0)=>2, (-1, 0)=>1, (1, 1)=>1)): * Data for sector ((Irrep[U₁](0) ⊠ Irrep[ℤ₂](0)), (Irrep[U₁](0) ⊠ Irrep[ℤ₂](0))) ← ((Irrep[U₁](0) ⊠ Irrep[ℤ₂](0)),): [:, :, 1] = - 0.10852501168330857 -0.2186897057746766 - -0.5659891356991952 0.5496505105029061 + 1.1465647656550364 0.2583160689439612 + 1.6050290767959954 -0.013567462825044745 [:, :, 2] = - -0.39272327299725784 -0.9506806714563776 - 0.761601611338074 1.0587819718000901 + 0.22135918574380217 2.3024932417758346 + -1.241666106785227 0.5828997320698889 * Data for sector ((Irrep[U₁](-1) ⊠ Irrep[ℤ₂](0)), (Irrep[U₁](0) ⊠ Irrep[ℤ₂](0))) ← ((Irrep[U₁](-1) ⊠ Irrep[ℤ₂](0)),): [:, :, 1] = - 0.12668478135635008 0.46361275725091816 + 0.10515329149455717 -1.2487283668376319 * Data for sector ((Irrep[U₁](0) ⊠ Irrep[ℤ₂](0)), (Irrep[U₁](-1) ⊠ Irrep[ℤ₂](0))) ← ((Irrep[U₁](-1) ⊠ Irrep[ℤ₂](0)),): [:, :, 1] = - 0.84274417059752 - 0.3210823255979409 + -0.87315257512941 + 1.4969614880455315 * Data for sector ((Irrep[U₁](1) ⊠ Irrep[ℤ₂](1)), (Irrep[U₁](0) ⊠ Irrep[ℤ₂](0))) ← ((Irrep[U₁](1) ⊠ Irrep[ℤ₂](1)),): [:, :, 1] = - -0.9719938122517562 0.3001067490232932 + 0.21207836828930193 -0.4102889617266814 * Data for sector ((Irrep[U₁](0) ⊠ Irrep[ℤ₂](0)), (Irrep[U₁](1) ⊠ Irrep[ℤ₂](1))) ← ((Irrep[U₁](1) ⊠ Irrep[ℤ₂](1)),): [:, :, 1] = - -1.0441565950410723 - -0.637572020803191
    julia> dim(A)16
    julia> convert(Array, A)4×4×4 Array{Float64, 3}: + -0.2946668319482426 + -0.09934234365384438
    julia> dim(A)16
    julia> convert(Array, A)4×4×4 Array{Float64, 3}: [:, :, 1] = - 0.108525 -0.21869 0.0 0.0 - -0.565989 0.549651 0.0 0.0 - 0.0 0.0 0.0 0.0 - 0.0 0.0 0.0 0.0 + 1.14656 0.258316 0.0 0.0 + 1.60503 -0.0135675 0.0 0.0 + 0.0 0.0 0.0 0.0 + 0.0 0.0 0.0 0.0 [:, :, 2] = - -0.392723 -0.950681 0.0 0.0 - 0.761602 1.05878 0.0 0.0 - 0.0 0.0 0.0 0.0 - 0.0 0.0 0.0 0.0 + 0.221359 2.30249 0.0 0.0 + -1.24167 0.5829 0.0 0.0 + 0.0 0.0 0.0 0.0 + 0.0 0.0 0.0 0.0 [:, :, 3] = - 0.0 0.0 0.842744 0.0 - 0.0 0.0 0.321082 0.0 - 0.126685 0.463613 0.0 0.0 - 0.0 0.0 0.0 0.0 + 0.0 0.0 -0.873153 0.0 + 0.0 0.0 1.49696 0.0 + 0.105153 -1.24873 0.0 0.0 + 0.0 0.0 0.0 0.0 [:, :, 4] = - 0.0 0.0 0.0 -1.04416 - 0.0 0.0 0.0 -0.637572 - 0.0 0.0 0.0 0.0 - -0.971994 0.300107 0.0 0.0

    Here, the dim of a TensorMap returns the number of linearly independent components, i.e. the number of non-zero entries in the case of an abelian symmetry. Also note that we can use × (obtained as \times+TAB) to combine different symmetry groups. The general space associated with symmetries is a GradedSpace, which is parametrized to the type of symmetry. For a group G, the fully specified type can be obtained as Rep[G], while for more general sectortypes I it can be constructed as Vect[I]. Furthermore, ℤ₂Space (also Z2Space as non-Unicode alternative) and U₁Space (or U1Space) are just convenient synonyms, e.g.

    julia> Rep[U₁](0=>3,1=>2,-1=>1) == U1Space(-1=>1,1=>2,0=>3)true
    julia> V = U₁Space(1=>2,0=>3,-1=>1)Rep[U₁](0=>3, 1=>2, -1=>1)
    julia> for s in sectors(V) + 0.0 0.0 0.0 -0.294667 + 0.0 0.0 0.0 -0.0993423 + 0.0 0.0 0.0 0.0 + 0.212078 -0.410289 0.0 0.0

    Here, the dim of a TensorMap returns the number of linearly independent components, i.e. the number of non-zero entries in the case of an abelian symmetry. Also note that we can use × (obtained as \times+TAB) to combine different symmetry groups. The general space associated with symmetries is a GradedSpace, which is parametrized to the type of symmetry. For a group G, the fully specified type can be obtained as Rep[G], while for more general sectortypes I it can be constructed as Vect[I]. Furthermore, ℤ₂Space (also Z2Space as non-Unicode alternative) and U₁Space (or U1Space) are just convenient synonyms, e.g.

    julia> Rep[U₁](0=>3,1=>2,-1=>1) == U1Space(-1=>1,1=>2,0=>3)true
    julia> V = U₁Space(1=>2,0=>3,-1=>1)Rep[U₁](0=>3, 1=>2, -1=>1)
    julia> for s in sectors(V) @show s, dim(V, s) end(s, dim(V, s)) = (Irrep[U₁](0), 3) (s, dim(V, s)) = (Irrep[U₁](1), 2) (s, dim(V, s)) = (Irrep[U₁](-1), 1)
    julia> U₁Space(-1=>1,0=>3,1=>2) == GradedSpace(Irrep[U₁](1)=>2, Irrep[U₁](0)=>3, Irrep[U₁](-1)=>1)
    true

    julia> supertype(GradedSpace)
    ElementarySpace

    Note that GradedSpace is not immediately parameterized by some group G, but rather by the set of irreducible representations of G, denoted as Irrep[G]. Indeed, GradedSpace also supports a grading that is derived from the fusion ring of a (unitary) pre-fusion category. Also note that the order in which the charges and their corresponding subspace dimensionality are specified is irrelevant, and that the charges, henceforth called sectors (a more general name for charges or quantum numbers), are of a specific type, in this case Irrep[U₁] == U1Irrep. However, the Vect[I] constructor automatically converts the keys in the list of Pairs it receives to the correct type. Alternatively, we can directly create the sectors of the correct type and use the generic GradedSpace constructor. We can probe the subspace dimension of a certain sector s in a space V with dim(V, s). Finally, note that GradedSpace is also a subtype of EuclideanSpace, which implies that it still has the standard Euclidean inner product and that all representations are assumed to be unitary.

    TensorKit.jl also allows for non-abelian symmetries such as SU₂. In this case, the vector space is characterized via the spin quantum number (i.e. the irrep label of SU₂) for each of its subspaces, and is created using SU₂Space (or SU2Space, Rep[SU₂], or Vect[Irrep[SU₂]]).

    julia> V = SU₂Space(0=>2,1/2=>1,1=>1)
    Rep[SU₂](0=>2, 1/2=>1, 1=>1)

    julia> dim(V)
    7

    julia> V == Vect[Irrep[SU₂]](0=>2, 1=>1, 1//2=>1)
    true
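The total dimension dim(V) = 7 arises because each spin-j sector contributes its degeneracy times the irrep dimension 2j+1. A plain-Julia sketch of that bookkeeping (no TensorKit required; the sector list is written out by hand for this V):

```julia
# spin j => degeneracy, as in V = SU₂Space(0=>2, 1/2=>1, 1=>1)
degeneracies = Dict(0//1 => 2, 1//2 => 1, 1//1 => 1)

# each spin-j subspace carries an irrep of dimension 2j + 1
total = sum(deg * Int(2j + 1) for (j, deg) in degeneracies)

println(total)  # 2*1 + 1*2 + 1*3 = 7
```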

    Note that now V has a two-dimensional subspace with spin zero, and one-dimensional subspaces with spin 1/2 and with spin 1. However, a subspace with spin j carries an additional (2j+1)-dimensional degeneracy on which the irreducible representation acts. This brings the total dimension to 2*1 + 1*2 + 1*3 = 7. Creating a tensor with SU₂ symmetry yields

    julia> A = TensorMap(randn, V*V, V)
    TensorMap((Rep[SU₂](0=>2, 1/2=>1, 1=>1) ⊗ Rep[SU₂](0=>2, 1/2=>1, 1=>1)) ← Rep[SU₂](0=>2, 1/2=>1, 1=>1)):
    * Data for fusiontree FusionTree{Irrep[SU₂]}((0, 0), 0, (false, false), ()) ← FusionTree{Irrep[SU₂]}((0,), 0, (false,), ()):
    [:, :, 1] =
     -0.22943313128065412  -0.05026711556183826
     -0.1627162505640535    0.37042946585392456

    [:, :, 2] =
     -0.5011698333360459  -0.14847554237704644
     -1.0894459201144657  -1.1741598979517747
    * Data for fusiontree FusionTree{Irrep[SU₂]}((1, 1), 0, (false, false), ()) ← FusionTree{Irrep[SU₂]}((0,), 0, (false,), ()):
    [:, :, 1] =
     0.9711030201002975

    [:, :, 2] =
     2.153781723767041
    * Data for fusiontree FusionTree{Irrep[SU₂]}((1/2, 1/2), 0, (false, false), ()) ← FusionTree{Irrep[SU₂]}((0,), 0, (false,), ()):
    [:, :, 1] =
     -0.3103253471982209

    [:, :, 2] =
     1.4549390402233526
    * Data for fusiontree FusionTree{Irrep[SU₂]}((0, 1/2), 1/2, (false, false), ()) ← FusionTree{Irrep[SU₂]}((1/2,), 1/2, (false,), ()):
    [:, :, 1] =
     0.6060699168775527
     1.6971848453120675
    * Data for fusiontree FusionTree{Irrep[SU₂]}((1/2, 1), 1/2, (false, false), ()) ← FusionTree{Irrep[SU₂]}((1/2,), 1/2, (false,), ()):
    [:, :, 1] =
     -0.1305863794908599
    * Data for fusiontree FusionTree{Irrep[SU₂]}((1/2, 0), 1/2, (false, false), ()) ← FusionTree{Irrep[SU₂]}((1/2,), 1/2, (false,), ()):
    [:, :, 1] =
     -0.6731190569189784  -0.01868679810032286
    * Data for fusiontree FusionTree{Irrep[SU₂]}((1, 1/2), 1/2, (false, false), ()) ← FusionTree{Irrep[SU₂]}((1/2,), 1/2, (false,), ()):
    [:, :, 1] =
     -1.5868995727149084
    * Data for fusiontree FusionTree{Irrep[SU₂]}((1, 1), 1, (false, false), ()) ← FusionTree{Irrep[SU₂]}((1,), 1, (false,), ()):
    [:, :, 1] =
     -0.07766401863982358
    * Data for fusiontree FusionTree{Irrep[SU₂]}((1, 0), 1, (false, false), ()) ← FusionTree{Irrep[SU₂]}((1,), 1, (false,), ()):
    [:, :, 1] =
     0.37975671868103783  -0.2578506811217786
    * Data for fusiontree FusionTree{Irrep[SU₂]}((1/2, 1/2), 1, (false, false), ()) ← FusionTree{Irrep[SU₂]}((1,), 1, (false,), ()):
    [:, :, 1] =
     -0.557782105058753
    * Data for fusiontree FusionTree{Irrep[SU₂]}((0, 1), 1, (false, false), ()) ← FusionTree{Irrep[SU₂]}((1,), 1, (false,), ()):
    [:, :, 1] =
      0.28326619432191347
     -0.31971288378879126

    julia> dim(A)
    24

    julia> convert(Array, A)
    7×7×7 Array{Float64, 3}:
    [:, :, 1] =
     -0.229433  -0.0502671  0.0       0.0        0.0       0.0        0.0
     -0.162716   0.370429   0.0       0.0        0.0       0.0        0.0
      0.0        0.0        0.0      -0.219433   0.0       0.0        0.0
      0.0        0.0        0.219433  0.0        0.0       0.0        0.0
      0.0        0.0        0.0       0.0        0.0       0.0        0.560667
      0.0        0.0        0.0       0.0        0.0      -0.560667   0.0
      0.0        0.0        0.0       0.0        0.560667  0.0        0.0
    [:, :, 2] =
     -0.50117  -0.148476  0.0      0.0     0.0      0.0       0.0
     -1.08945  -1.17416   0.0      0.0     0.0      0.0       0.0
      0.0       0.0       0.0      1.0288  0.0      0.0       0.0
      0.0       0.0      -1.0288   0.0     0.0      0.0       0.0
      0.0       0.0       0.0      0.0     0.0      0.0       1.24349
      0.0       0.0       0.0      0.0     0.0     -1.24349   0.0
      0.0       0.0       0.0      0.0     1.24349  0.0       0.0
    [:, :, 3] =
      0.0        0.0         0.60607   0.0      0.0       0.0         0.0
      0.0        0.0         1.69718   0.0      0.0       0.0         0.0
     -0.673119  -0.0186868   0.0       0.0      0.0      -0.0753941   0.0
      0.0        0.0         0.0       0.0      0.106623  0.0         0.0
      0.0        0.0         0.0      -1.2957   0.0       0.0         0.0
      0.0        0.0         0.916197  0.0      0.0       0.0         0.0
      0.0        0.0         0.0       0.0      0.0       0.0         0.0
    [:, :, 4] =
      0.0        0.0         0.0        0.60607  0.0       0.0         0.0
      0.0        0.0         0.0        1.69718  0.0       0.0         0.0
      0.0        0.0         0.0        0.0      0.0       0.0        -0.106623
     -0.673119  -0.0186868   0.0        0.0      0.0       0.0753941   0.0
      0.0        0.0         0.0        0.0      0.0       0.0         0.0
      0.0        0.0         0.0       -0.916197 0.0       0.0         0.0
      0.0        0.0         1.2957     0.0      0.0       0.0         0.0
    [:, :, 5] =
      0.0        0.0        0.0        0.0   0.283266   0.0         0.0
      0.0        0.0        0.0        0.0  -0.319713   0.0         0.0
      0.0        0.0       -0.557782   0.0   0.0        0.0         0.0
      0.0        0.0        0.0        0.0   0.0        0.0         0.0
      0.379757  -0.257851   0.0        0.0   0.0       -0.0549168   0.0
      0.0        0.0        0.0        0.0   0.0549168  0.0         0.0
      0.0        0.0        0.0        0.0   0.0        0.0         0.0
    [:, :, 6] =
      0.0        0.0        0.0        0.0        0.0        0.283266   0.0
      0.0        0.0        0.0        0.0        0.0       -0.319713   0.0
      0.0        0.0        0.0       -0.394412   0.0        0.0        0.0
      0.0        0.0       -0.394412   0.0        0.0        0.0        0.0
      0.0        0.0        0.0        0.0        0.0        0.0       -0.0549168
      0.379757  -0.257851   0.0        0.0        0.0        0.0        0.0
      0.0        0.0        0.0        0.0        0.0549168  0.0        0.0
    [:, :, 7] =
      0.0        0.0        0.0   0.0        0.0   0.0        0.283266
      0.0        0.0        0.0   0.0        0.0   0.0       -0.319713
      0.0        0.0        0.0   0.0        0.0   0.0        0.0
      0.0        0.0        0.0  -0.557782   0.0   0.0        0.0
      0.0        0.0        0.0   0.0        0.0   0.0        0.0
      0.0        0.0        0.0   0.0        0.0   0.0       -0.0549168
      0.379757  -0.257851   0.0   0.0        0.0   0.0549168  0.0

    julia> norm(A) ≈ norm(convert(Array, A))
    true

    In this case, the full Array representation of the tensor again contains many zeros, but it is less obvious to recognize the dense blocks, as there are additional zeros and the numbers in the original tensor data do not match those in the Array. The reason is of course that the original tensor data now needs to be transformed with a construction known as fusion trees, which are built from the Clebsch-Gordan coefficients of the group. Indeed, note that the non-zero blocks are also no longer labeled by a list of sectors, but by pairs of fusion trees. This will be explained further in the manual. However, the Clebsch-Gordan coefficients of the group are only needed to actually convert a tensor to an Array. For working with tensors with SU₂Space indices, e.g. contracting or factorizing them, the Clebsch-Gordan coefficients are never needed explicitly. Instead, recoupling relations are used to symbolically manipulate the basis of fusion trees, and this only requires what is known as the topological data of the group (or its representation theory).

    In fact, this formalism extends beyond the case of group representations on vector spaces, and can also deal with super vector spaces (to describe fermions) and more general (unitary) fusion categories. Preliminary support for these generalizations is present in TensorKit.jl and will be extended in the near future.

    All of these concepts will be explained throughout the remainder of this manual, including several details regarding their implementation. However, to just use tensors and their manipulations (contractions, factorizations, ...) in higher-level algorithms (e.g. tensor network algorithms), one does not need to know or understand most of these details, and one can immediately refer to the general interface of the TensorMap type, discussed on the last page. Adhering to this interface should yield code and algorithms that are oblivious to the underlying symmetries and can thus work with arbitrary symmetric tensors.
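As a minimal sketch of what such symmetry-oblivious code can look like: the helper below (a hypothetical name, not part of TensorKit) is written purely against the TensorMap interface, using tsvd with a truncdim truncation as introduced above, and the very same function then applies unchanged to ℤ₂-, U₁- or SU₂-symmetric tensors.

```julia
using TensorKit

# Hypothetical helper (a sketch, not TensorKit API): compress a TensorMap by
# keeping at most χ singular values, for any symmetry its spaces may carry.
function truncate_rank(t::TensorMap, χ::Int)
    U, S, V = tsvd(t; trunc = truncdim(χ))
    return U * S * V
end

# The same call works for a ℤ₂-symmetric tensor ...
A₂ = TensorMap(randn, ℤ₂Space(0=>3, 1=>2) ⊗ ℤ₂Space(0=>3, 1=>2), ℤ₂Space(0=>2, 1=>1))
truncate_rank(A₂, 2)

# ... and, unchanged, for an SU₂-symmetric one.
A₃ = TensorMap(randn, SU₂Space(0=>2, 1/2=>1) ⊗ SU₂Space(0=>2, 1/2=>1), SU₂Space(0=>1, 1/2=>1))
truncate_rank(A₃, 2)
```

The symmetry never appears in the function body; it lives entirely in the spaces of the tensors passed in.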