Add documentation #17

Merged · 10 commits · Nov 30, 2024
Changes from all commits
3 changes: 2 additions & 1 deletion .github/workflows/CI.yml
@@ -38,4 +38,5 @@ jobs:
group: "${{ matrix.group }}"
julia-version: "${{ matrix.version }}"
os: "${{ matrix.os }}"
secrets:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
36 changes: 36 additions & 0 deletions .github/workflows/Documentation.yml
@@ -0,0 +1,36 @@
name: Documentation

on:
push:
branches:
- 'master'
- 'main'
- 'release-'
tags: '*'
pull_request:
workflow_dispatch:

jobs:
build:
runs-on: ${{ matrix.os }}
strategy:
matrix:
version:
- '1' # automatically expands to the latest stable 1.x release of Julia
os:
- ubuntu-latest
arch:
- x64
steps:
- uses: actions/checkout@v4
- uses: julia-actions/setup-julia@latest
with:
version: ${{ matrix.version }}
arch: ${{ matrix.arch }}
- name: Install dependencies
run: julia --project=docs/ -e 'using Pkg; Pkg.develop(PackageSpec(path=pwd())); Pkg.instantiate()'
- name: Build and deploy
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # For authentication with GitHub Actions token
DOCUMENTER_KEY: ${{ secrets.DOCUMENTER_KEY }} # For authentication with SSH deploy key
run: julia --project=docs/ docs/make.jl
6 changes: 4 additions & 2 deletions .gitignore
@@ -1,2 +1,4 @@
Manifest.toml
.vscode/
.DS_Store
docs/build/
4 changes: 4 additions & 0 deletions docs/Project.toml
@@ -0,0 +1,4 @@
[deps]
BlockTensorKit = "5f87ffc2-9cf1-4a46-8172-465d160bd8cd"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
TensorKit = "07d1fe3e-3e46-537d-9eac-e9e13d0d4cec"
25 changes: 25 additions & 0 deletions docs/make.jl
@@ -0,0 +1,25 @@
using Documenter
using BlockTensorKit

pages = [
"Home" => "index.md",
"Manual" => ["SumSpace" => "sumspaces.md", "BlockTensors" => "blocktensors.md"],
"Library" => "lib.md",
]

makedocs(;
modules=[BlockTensorKit],
sitename="BlockTensorKit.jl",
authors="Lukas Devos",
warnonly=[:missing_docs, :cross_references],
format=Documenter.HTML(;
prettyurls=get(ENV, "CI", nothing) == "true",
mathengine=MathJax(),
repolink="https://github.com/lkdvos/BlockTensorKit.jl.git",
),
pages=pages,
pagesonly=true,
repo="github.com/lkdvos/BlockTensorKit.jl.git",
)

deploydocs(; repo="github.com/lkdvos/BlockTensorKit.jl.git", push_preview=true)
71 changes: 55 additions & 16 deletions docs/src/blocktensor.md → docs/src/blocktensors.md
@@ -25,58 +25,97 @@ SparseBlockTensorMap{TT}(undef, space::TensorMapSumSpace)

where `TT<:AbstractTensorMap` is the type of the individual tensors, and `space` is the `TensorMapSumSpace` that defines the structure of the block tensor.

!!! note

    In rare cases, `undef_blocks` can also be used, which won't allocate the component tensors.
    In these cases it is left up to the user to not access elements before they are allocated.

Similarly, they can be initialized from a list of tensors:

```julia
BlockTensorMap{TT}(tensors::AbstractArray{AbstractTensorMap,N}, space::TensorMapSumSpace)
SparseBlockTensorMap{TT}(tensors::Dict{CartesianIndex{N},AbstractTensorMap}, space::TensorMapSumSpace)
```

Typically though, the most convenient way of obtaining a block tensor is by using one of `zeros`, `rand` or `randn`, as well as their sparse counterparts `spzeros` or `sprand`.

```@repl blocktensors
using TensorKit, BlockTensorKit
using BlockTensorKit: ⊕
V = ℂ^1 ⊕ ℂ^2;
W = V * V → V;
t = rand(W)
eltype(t)
s = sprand(W, 0.5)
eltype(s)
```

!!! note

    In analogy to `TensorKit`, most of the functionality that requires a `space` object can equally well be called in terms of `codomain(space), domain(space)`, if that is more convenient.

### Indexing

For indexing operators, `AbstractBlockTensorMap` behaves like an `AbstractArray{AbstractTensorMap}`, and the individual tensors can be accessed via the `getindex` and `setindex!` functions.
In particular, the `getindex` function returns a `TT` object, and the `setindex!` function expects a `TT` object.

Both linear and cartesian indexing styles are supported.

```@repl blocktensors
t[1] isa eltype(t)
t[1] == t[1, 1, 1]
t[2] = 3 * t[2]
s[1] isa eltype(t)
s[1] == s[1, 1, 1]
s[1] += 2 * s[1]
```

Slicing operations are also supported, and the `AbstractBlockTensorMap` can be sliced in the same way as an `AbstractArray{AbstractTensorMap}`.
There is, however, one elementary difference: since the slices still contain tensors with the same number of legs, there can be no reduction in the number of dimensions.
In particular, in contrast to `AbstractArray`, scalar dimensions are not discarded, and as a result, linear index slicing is not allowed.

```@repl blocktensors
ndims(t[1, 1, :]) == 3
ndims(t[:, 1:2, [1, 1]]) == 3
t[1:2] # error
```

### VectorInterface.jl

As part of the `TensorKit` interface, `AbstractBlockTensorMap` also implements `VectorInterface`.
This means that you can efficiently add, scale, and compute the inner product of `AbstractBlockTensorMap` objects.

```@repl blocktensors
t1, t2 = rand!(similar(t)), rand!(similar(t));
add(t1, t2, rand())
scale(t1, rand())
inner(t1, t2)
```

For further in-place and possibly-in-place methods, see [`VectorInterface.jl`](https://github.com/Jutho/VectorInterface.jl).

### TensorOperations.jl

The `TensorOperations.jl` interface is also implemented for `AbstractBlockTensorMap`.
In particular, the `AbstractBlockTensorMap` can be contracted with other `AbstractBlockTensorMap` objects, as well as with `AbstractTensorMap` objects.
In order for that mix to work, the `AbstractTensorMap` objects are automatically converted to `AbstractBlockTensorMap` objects with a single tensor, i.e. the sum spaces will be a sum of one space.

As a consequence, as soon as one of the input tensors is blocked, the output tensor will also be blocked, even though its size might be trivial.
In these cases, `only` can be used to retrieve the single element in the `BlockTensorMap`.

```@repl blocktensors
@tensor t3[a; b] := t[a; c d] * conj(t[b; c d])
@tensor t4[a; b] := t[1, :, :][a; c d] * conj(t[1, :, :][b; c d]) # blocktensor * blocktensor = blocktensor
t4 isa AbstractBlockTensorMap
only(t4) isa eltype(t4)
@tensor t5[a; b] := t[1][a; c d] * conj(t[1:1, 1:1, 1:1][b; c d]) # tensor * blocktensor = blocktensor
t5 isa AbstractBlockTensorMap
only(t5) isa eltype(t5)
```

### Factorizations

Currently, there is only rudimentary support for factorizations of `AbstractBlockTensorMap` objects.
In particular, the implementations are not yet optimized for performance, and the factorizations are typically carried out by mapping to a dense tensor, and then performing the factorization on that tensor.
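As a minimal sketch (assuming the `tsvd` factorization from TensorKit is also available for block tensors, carried out internally via the dense conversion described above; the exact return values are TensorKit's, not specific to this package):

```julia
using TensorKit, BlockTensorKit
using BlockTensorKit: ⊕

V = ℂ^1 ⊕ ℂ^2
W = V * V → V
t = rand(W)

# assumed: the block tensor is densified before the factorization runs
U, S, Vd = tsvd(t)
```

Because of the dense round-trip, this is expected to be slower than a factorization on an equally sized plain `TensorMap`.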

!!! note

    Most factorizations do not retain the additional imposed block structure. In particular, constructions of orthogonal bases will typically mix up the subspaces, and as such the resulting vector spaces will be `SumSpace`s of a single term.
5 changes: 3 additions & 2 deletions docs/src/index.md
@@ -3,7 +3,8 @@
*A Julia package for handling arrays-of-tensors, built on top of [TensorKit.jl](https://github.com/Jutho/TensorKit.jl)*

```@meta
CurrentModule = BlockTensorKit
DocTestSetup = :(using TensorKit, BlockTensorKit)
```

## Package summary
@@ -33,4 +34,4 @@ Finally, we collect all docstrings.

```@contents
Pages = ["sumspaces.md", "blocktensor.md", "sparseblocktensor.md", "lib.md"]
```
4 changes: 2 additions & 2 deletions docs/src/lib.md
@@ -1,5 +1,5 @@
## Library index

```@autodocs
Modules = [BlockTensorKit]
```
60 changes: 47 additions & 13 deletions docs/src/sumspaces.md
@@ -7,30 +7,63 @@ These spaces are a natural extension of the `TensorKit` vector spaces, and you c

In `BlockTensorKit`, we provide a type `SumSpace` that allows you to define such direct sums.
They can be defined either directly via the constructor, or by using the `⊕` operator.
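For instance (a sketch; the vararg `SumSpace` constructor shown here is an assumption based on the description above):

```julia
using TensorKit, BlockTensorKit
using BlockTensorKit: ⊕

V1 = SumSpace(ℂ^1, ℂ^2)  # direct constructor (assumed vararg signature)
V2 = ℂ^1 ⊕ ℂ^2           # the same space, built with the ⊕ operator
# both are lazy direct sums that remember their individual components
```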

In order for the direct sum to be well-defined, all components must have the same value of `isdual`.

Essentially, that is all there is to it, and you can now use these `SumSpace` objects much in the same way as you would use an `IndexSpace` object in `TensorKit`.
In particular, it adheres to the interface of `ElementarySpace`, which means that you can query the properties as you would expect.

!!! note

    The operator `⊕` is used in both TensorKit and BlockTensorKit, and therefore it must be explicitly imported to avoid name clashes.
    Both functions achieve almost the same thing, as `BlockTensorKit.⊕` can be thought of as a _lazy_ version of `TensorKit.⊕`.

```@repl sumspaces
using TensorKit, BlockTensorKit
using BlockTensorKit: ⊕
V = ℂ^1 ⊕ ℂ^2 ⊕ ℂ^3
ℂ^2 ⊕ (ℂ^2)' ⊕ ℂ^2 # error
dim(V)
isdual(V)
isdual(V')
field(V)
spacetype(V)
InnerProductStyle(V)
```

The main difference is that the object retains the information about the individual spaces, and you can query them by indexing into the object.

```@repl sumspaces
length(V)
V[1]
```

### `ProductSumSpace` and `TensorMapSumSpace`

Because these objects are naturally `ElementarySpace` objects, they can be used in the construction of `ProductSpace` and `HomSpace` objects, and in particular, they can be used to define the spaces of `TensorMap` objects.
Additionally, when mixing spaces and their sumspaces, all components are promoted to `SumSpace` instances.

```@repl sumspaces
V1 = ℂ^1 ⊕ ℂ^2 ⊕ ℂ^3
V2 = ℂ^2
V1 ⊗ V2 ⊗ V1' == V1 * V2 * V1' == ProductSpace(V1,V2,V1') == ProductSpace(V1,V2) ⊗ V1'
V1^3
dim(V1 ⊗ V2)
dims(V1 ⊗ V2)
dual(V1 ⊗ V2)
spacetype(V1 ⊗ V2)
spacetype(typeof(V1 ⊗ V2))
```

```@repl sumspaces
W = V1 → V2
field(W)
dual(W)
adjoint(W)
spacetype(W)
spacetype(typeof(W))
W[1]
W[2]
dim(W)
```

### `SumSpaceIndices`
@@ -39,6 +72,7 @@ Finally, since the `SumSpace` object is the underlying structure of a blocked te
For this, we provide the `SumSpaceIndices` object, which can be used to efficiently iterate over the indices of the individual spaces.
In particular, we expose the `eachspace` function, similar to `eachindex`, to obtain such an iterator.

```@repl sumspaces
W = V1 * V2 → V2 * V1
eachspace(W)
```
8 changes: 4 additions & 4 deletions src/tensors/abstractblocktensor/conversion.jl
@@ -1,8 +1,8 @@
# Conversion
# ----------
function Base.convert(::Type{T}, t::AbstractBlockTensorMap) where {T<:TensorMap}
cod = ProductSpace{spacetype(t),numout(t)}(join.(codomain(t).spaces))
dom = ProductSpace{spacetype(t),numin(t)}(join.(domain(t).spaces))
cod = ProductSpace{spacetype(t),numout(t)}(oplus.(codomain(t).spaces))
dom = ProductSpace{spacetype(t),numin(t)}(oplus.(domain(t).spaces))

tdst = similar(t, cod ← dom)
for (f₁, f₂) in fusiontrees(tdst)
@@ -13,8 +13,8 @@ function Base.convert(::Type{TensorMap}, t::AbstractBlockTensorMap)
end
# disambiguate
function Base.convert(::Type{TensorMap}, t::AbstractBlockTensorMap)
cod = ProductSpace{spacetype(t),numout(t)}(join.(codomain(t).spaces))
dom = ProductSpace{spacetype(t),numin(t)}(join.(domain(t).spaces))
cod = ProductSpace{spacetype(t),numout(t)}(oplus.(codomain(t).spaces))
dom = ProductSpace{spacetype(t),numin(t)}(oplus.(domain(t).spaces))

tdst = similar(t, cod ← dom)
for (f₁, f₂) in fusiontrees(tdst)
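A sketch of how this conversion is used (an assumption based on the `oplus` calls above: `convert(TensorMap, t)` fuses each `SumSpace` leg into a single dense space and copies the blocks over):

```julia
using TensorKit, BlockTensorKit
using BlockTensorKit: ⊕

V = ℂ^1 ⊕ ℂ^2
t = rand(V * V → V)         # block tensor over sum spaces
td = convert(TensorMap, t)  # dense TensorMap; each leg fused into a single space
```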