Useful Resources
Other useful resources for learning about tensor networks include (but are certainly not limited to):
Getting Started with Numerics
On this page, there are some links with relevant information for getting started with numerical computing. We point to some references for the Julia programming language, as well as some resources for learning about version control software.
Version control software is a tool used in software development (and sometimes in other fields) to manage and track changes made to a project’s source code, documents, or any other set of files. It allows multiple contributors to work collaboratively on a project, keeping a history of changes, and facilitating the organization and synchronization of different versions of the project. The most popular version control system is git, which is a free tool developed by Linus Torvalds in 2005, and has become the de facto standard in the software development industry.
Again, multiple resources are available for learning about git. From the official website, the book Pro Git is a good place to start. For a more dynamic approach, you can learn git through this interactive tutorial.
In order to get started with Julia, there are many resources already available. The official documentation is a good place to start, and a full getting started exposition can be found for example here. There is also a learning page that has tutorials on different topics, a list of books, and much more.
Additionally, there is an active forum for asking questions, as well as a Slack channel and a Stack Overflow page.
Julia has a very active open-source community, and many packages are available for different purposes. These typically have their own documentation, and are hosted on GitHub. An (incomplete) list of packages that are relevant for this course is given below:
Also check out the GitHub page for our organization, which hosts and/or links many of the relevant software repositories.
There are many additional software libraries available for tensor network computations, or more generally for quantum physics research. Below you can find an incomplete list of some of these.
Fock Space and Second Quantisation
When working with basis vectors using the occupation number representation, we might consider dropping the overall constraint \(\sum_{j=1}^L n_j = N\). This amounts to working in a larger Hilbert space, which is known as the Fock space, and consists of the direct sum of all physical (symmetrised or antisymmetrised) Hilbert spaces \(\mathbb{H}^{(N)}\) for different particle numbers \(N\), going all the way from \(N=0\):
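In symbols, this direct-sum construction takes the standard form (the symbol for the Fock space is our notational choice, not fixed by the text):

```latex
\mathbb{H}^{\mathrm{Fock}} = \bigoplus_{N=0}^{\infty} \mathbb{H}^{(N)}
  = \mathbb{H}^{(0)} \oplus \mathbb{H}^{(1)} \oplus \mathbb{H}^{(2)} \oplus \cdots
```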
In the case of fermions and with a finite-dimensional single-particle Hilbert space \(\mathbb{H}^{(1)} \cong \mathbb{C}^L\), the upper limit in the direct sum is \(N=L\), i.e. there are no states with \(N > L\) and so the associated Hilbert spaces are zero-dimensional. This direct sum furthermore also contains the case \(N=0\), which we have not discussed before. In the previous subsection we started the construction of \(\mathbb{H}^{(N)}\) from a given single particle Hilbert space \(\mathbb{H}^{(1)}\). When there are no particles in the system, there is only a single state in which it can be, corresponding to having all occupation numbers \(n_j = 0\) for all \(j\). Hence, for \(N=0\) particles, the Hilbert space \(\mathbb{H}^{(0)}\) is spanned by a single state, which we typically denote as \(\ket{\Omega}=\ket{0,0,\dots,0}\) and refer to as the vacuum state. Note that this vacuum state is normalised, and is thus very different from an actual zero vector of the vector space, which has norm zero.
The Fock space becomes a Hilbert space simply by incorporating the inner product from each of its summands. States within the different summands of this direct sum are defined to be orthogonal, i.e. \(\braket{\varphi^{(M)} \vert \psi^{(N)}}=0\) for all \(M\)-particle states \(\ket{\varphi^{(M)}}\) and \(N\)-particle states \(\ket{\psi^{(N)}}\) with \(M \neq N\).
The main benefit of using the formalism of second quantisation is not the removal of the overall particle number constraint, but the ease of working with operators, in particular to describe (interacting) Hamiltonians. In first quantisation, we need to specify a Hamiltonian for a particular number of particles, i.e. the number of particles is an external parameter of the system. Using the Fock space, we can now define operators in such a way that their action is immediately defined for states with an arbitrary number of particles, including even states which are superpositions over different particle numbers.
To this end, we first introduce operators that enable us to connect the different particle number sectors, by creating (adding) or annihilating (removing) particles in the system. In particular, we denote with \(\hat{a}_j^+\) the operator that adds a new particle in the mode \(j\) of the system and with \(\hat{a}_j^-\) the operator that removes a particle in mode \(j\) from the system. As both operators turn out to be related via the adjoint, i.e. \(\braket{\Phi| \hat{a}_j^+ \Psi} = \braket{\hat{a}_j^- \Phi | \Psi}\), we use the simpler notation \(\hat{a}_j\) for the annihilation operator and \(\hat{a}_j^\dagger\) for the creation operator. Constructing these operators in a mathematically precise way is actually somewhat tedious (but see Wikipedia). We just summarise their main properties. In particular, we want the (anti)symmetrised states to satisfy
It is immediately clear that, because of the (anti)symmetry, this requires that
From the normalisation of these states, it also follows that
With respect to the normalised basis vectors, using the occupation representation, we have
which can be summarised using
It then follows easily that the operator \(\hat{n}_j = \hat{a}_j^\dagger \hat{a}_j\) satisfies
and thus measures the number of particles in mode \(j\). The operators \(\hat{n}_j\) are referred to as number operators. The total number of particles can then be measured using
but the Fock space does of course contain states which are superpositions over different particle numbers (and which are thus not eigenstates of \(\hat{N}\)).
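For reference, the standard forms of these relations (a common convention, consistent with the surrounding definitions) are:

```latex
[\hat{a}_j, \hat{a}_k^\dagger] = \delta_{j,k}, \quad [\hat{a}_j, \hat{a}_k] = 0
  \qquad \text{(bosons)},
\\
\{\hat{a}_j, \hat{a}_k^\dagger\} = \delta_{j,k}, \quad \{\hat{a}_j, \hat{a}_k\} = 0
  \qquad \text{(fermions)},
\\
\hat{n}_j \ket{n_1, \ldots, n_L} = n_j \ket{n_1, \ldots, n_L},
  \qquad \hat{N} = \sum_{j=1}^{L} \hat{n}_j .
```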
Furthermore, by studying how single particle states \(\ket{j} \equiv \hat{a}_j^\dagger \ket{\Omega}\) change under a change of single particle basis, or thus, a transformation to a new set of modes, we can deduce how the associated creation and annihilation operators transform. Suppose we have a different single-particle basis, which for clarity we label with Greek letters \(\kappa = 1,\ldots,L\). We then find
from which we obtain
Note that the transformation matrix \(\braket{j \vert \kappa}\) between two orthonormal bases corresponds to a unitary matrix. These transformation rules will be employed often, for example to switch between a position and momentum space representation.
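Concretely, with \(\ket{\kappa} = \sum_j \ket{j}\braket{j \vert \kappa}\), the standard transformation rules (consistent with the unitarity statement above) read:

```latex
\hat{a}_\kappa^\dagger = \sum_{j=1}^{L} \braket{j \vert \kappa}\, \hat{a}_j^\dagger,
\qquad
\hat{a}_\kappa = \sum_{j=1}^{L} \overline{\braket{j \vert \kappa}}\, \hat{a}_j .
```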
Note
The bosonic creation and annihilation operators are of course reminiscent of the operators introduced for diagonalising the single particle harmonic oscillator model. Indeed, out of the bosonic creation and annihilation operators associated to every mode \(j\) we can build two Hermitian operators
which then satisfy the well known commutation relations \(\left[\hat{X}_j, \hat{P}_k\right] = \mathrm{i} \delta_{j,k}\). In second quantisation, the Fock space of bosons built from a single particle system with \(L\) modes can equivalently be thought of as a regular tensor product space of \(L\) distinguishable quantum particles moving on the real line, or technically, as \(\left(L^2(\mathbb{R})\right)^{\otimes L}\).
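One standard normalisation (an assumption of this note; the text does not fix the convention) that reproduces \(\left[\hat{X}_j, \hat{P}_k\right] = \mathrm{i}\,\delta_{j,k}\) is:

```latex
\hat{X}_j = \frac{1}{\sqrt{2}}\left(\hat{a}_j + \hat{a}_j^\dagger\right),
\qquad
\hat{P}_j = \frac{1}{\mathrm{i}\sqrt{2}}\left(\hat{a}_j - \hat{a}_j^\dagger\right).
```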
Note
For fermions, we can also construct Hermitian operators out of the creation and annihilation operators, which we denote as
In this case, we find that they satisfy the anticommutation relation
so that the \(\hat{\eta}^{(1)}\) type operators and \(\hat{\eta}^{(2)}\) type operators behave similarly. In that case, one often uses a different notation by setting
and thus \(\{\hat{\chi}_k, \hat{\chi}_l\} = \delta_{k,l}\) for all \(k, l = 1,\ldots, 2L\). These Hermitian fermionic operators are referred to as Majorana operators.
Note furthermore that the Fock space of fermions built from a single particle system with \(L\) modes looks remarkably like a system of \(L\) qubits, i.e. the tensor product \((\mathbb{C}^2)^{\otimes L}\). While this is true for how the occupation number basis vectors are labelled, one important fact is that the operators \(\hat{a}_j\) and \(\hat{a}_j^\dagger\) should not be thought of as local operators that act nontrivially on the single site \(j\) and as the identity elsewhere, since they do not mutually commute, but rather anticommute. It is possible to map these fermionic creation and annihilation operators to ‘nonlocal’ qubit operators using the Jordan-Wigner transformation.
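As a small illustration of this correspondence, the following sketch builds Jordan-Wigner representations of \(\hat{a}_j\) on a few qubits and checks the canonical anticommutation relations numerically. The convention \(\ket{0}\) = empty, \(\ket{1}\) = occupied, and the use of NumPy, are assumptions of this example, not fixed by the text:

```python
import numpy as np

# Pauli z and identity, plus the single-site annihilation operator on a qubit
# (convention: |0> = empty mode, |1> = occupied mode)
I2 = np.eye(2)
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
a = np.array([[0.0, 1.0], [0.0, 0.0]])  # a|1> = |0>, a|0> = 0

def kron_all(ops):
    """Kronecker product of a list of 2x2 matrices."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def annihilation(j, L):
    """Jordan-Wigner form of a_j on L qubits: a string of Z's before site j."""
    return kron_all([Z] * j + [a] + [I2] * (L - j - 1))

L = 3
ops = [annihilation(j, L) for j in range(L)]

# check the canonical anticommutation relations {a_j, a_k^dagger} = delta_{jk}
for j in range(L):
    for k in range(L):
        anti = ops[j] @ ops[k].conj().T + ops[k].conj().T @ ops[j]
        expected = np.eye(2**L) if j == k else np.zeros((2**L, 2**L))
        assert np.allclose(anti, expected)
print("CAR satisfied")
```

Without the Z-strings, operators on different sites would commute instead of anticommute, which is exactly the "nonlocal" feature the text points out.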
With these creation and annihilation operators, we can now represent general operators in a way that does not depend on the precise number of particles in the system. The simplest case are ‘single-particle’ operators, i.e. operators that were defined with respect to the single-particle Hilbert space \(\mathbb{H}^{(1)}\). The easiest among these are operators which are diagonal with respect to the chosen single-particle basis. In that case, every particle in one of the eigenmodes of the single-particle operator contributes the associated eigenvalue. Hence, the many-body representation of such an operator is given by
However, we can easily transform away from the basis of eigenmodes to a general set of modes, and then find
Vice versa, if you are given an operator in which every term contains exactly one creation and one annihilation operator, then it is especially easy to diagonalise this operator, since one only needs to diagonalise the corresponding single-particle version of the operator. When the Hamiltonian of the many-body system is of this form, the system is said to be free or noninteracting.
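A minimal numerical sketch of this statement: diagonalising the \(L \times L\) single-particle matrix of a hopping (free-fermion) Hamiltonian immediately yields the many-body spectrum by filling modes. The chain length, hopping amplitude and the specific Hamiltonian are illustrative assumptions, not taken from the text:

```python
import numpy as np

# single-particle hopping matrix h for H = -t * sum_j (a_j^dag a_{j+1} + h.c.)
# on an open chain of L sites (t and L are illustrative choices)
L, t = 6, 1.0
h = np.zeros((L, L))
for j in range(L - 1):
    h[j, j + 1] = h[j + 1, j] = -t

# diagonalising the L x L matrix h yields the single-particle eigenmodes;
# the full many-body spectrum follows by filling these modes
eps, U = np.linalg.eigh(h)

# ground-state energy of the N-fermion sector: sum of the N lowest eigenvalues
N = 3
E0 = np.sum(np.sort(eps)[:N])
print(E0)
```

The point is that only an \(L \times L\) problem is solved, never the exponentially large many-body matrix.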
Note
There is a larger class of operators that can easily be diagonalised, namely operators where every term is quadratic in the creation and annihilation operators. This means that every term contains either a creation and an annihilation operator, or two creation operators, or two annihilation operators. Such Hamiltonians are said to be quadratic or Gaussian, and can be diagonalised using a Bogoliubov transformation.
Similarly, there exist two-particle operators, in particular, typical interaction terms in the Hamiltonian such as the Coulomb interaction between electrons. Such operators take the form
and can be translated to act on the full Fock space as
As soon as operators of this type are present in the Hamiltonian (which thus contain more than two creation or annihilation operators), it becomes impossible to diagonalise the Hamiltonian based on a simple calculation in the single-particle Hilbert space, and the exponentially large many-body Hilbert space needs to be considered.
The Hilbert Space of Many-Body Physics
All of the previous axioms remain valid for a composite system consisting of several quantum degrees of freedom. However, we need to know how to describe the state of the system, and thus more specifically, how to define the Hilbert space associated to such a system. It turns out that quantum mechanics forces us to distinguish two cases.
Consider a quantum system composed out of two subsystems, which we call \(A\) and \(B\), sometimes referred to as Alice and Bob in quantum information contexts. These can themselves already be many-body systems. Suppose we know the Hilbert space \(\mathbb{H}^A\) in which to describe states of subsystem \(A\) when considered as an isolated system on its own, and analogously for \(\mathbb{H}^B\). Now consider both systems together, but where they do not interact, so that we can still treat them independently. In particular, we can prepare subsystem \(A\) in a state \(\ket{\psi^A}\) and subsystem \(B\) in a state \(\ket{\varphi^B}\). We should also be able to describe these two independent subsystems jointly, so that there must exist a map from the two arguments \((\ket{\psi^A}, \ket{\varphi^B}) \in \mathbb{H}^A \times \mathbb{H}^B\) to a single state which we denote as \(\ket{ \psi^A} \otimes \ket{\varphi^B}\) and that lives in a joint Hilbert space \(\mathbb{H}^{AB}\) that we have yet to determine.
Now, it makes sense that, if we build superpositions in one of the two subsystems, while keeping the other fixed, this also corresponds to a superposition in the joint description of both systems together. This leads to
and similarly
Hence, the Hilbert space \(\mathbb{H}^{AB}\) that we are trying to construct must contain all states \(\ket{\psi^A} \otimes \ket{\varphi^B}\) for all \(\ket{\psi^A} \in \mathbb{H}^A\) and all \(\ket{\varphi^B} \in \mathbb{H}^B\), and all possible linear combinations thereof (in order to be a vector space), but in such a way that the above equalities hold. This construction, which can be made mathematically precise, is known as the tensor product of vector spaces \(\mathbb{H}^{AB} = \mathbb{H}^A \otimes \mathbb{H}^B\).
We have also denoted the output of the map from two states \((\ket{\psi^A}, \ket{\varphi^B}) \in \mathbb{H}^A \times \mathbb{H}^B\) to \(\mathbb{H}^A \otimes \mathbb{H}^B\) using the same tensor product symbol, and refer to such a state as a (tensor) product state \(\ket{ \psi^A} \otimes \ket{\varphi^B}\). Importantly, however, the tensor product space \(\mathbb{H}^A \otimes \mathbb{H}^B\) certainly contains vectors which are not product states, such as
This forms the basis for quantum correlations and the concept of (quantum) entanglement, which will be a fundamental property of quantum many-body systems. That the Hilbert space of a composite system is given by the tensor product of the individual Hilbert spaces is often introduced as a separate axiom. The deductive (but informal) argument just given can however be turned into a proof that depends only on the axioms given above (in fact only on the first two).
As expected (and required), it can be shown that the tensor product of two Hilbert spaces is again a Hilbert space, if we define its inner product in the following way. We first define the inner product for product states as
and then extend this definition by linearity (in the second argument and antilinearity in the first argument).
In practice, given two finite-dimensional Hilbert spaces \(\mathbb{H}^A \cong \mathbb{C}^{d^A}\) and \(\mathbb{H}^B \cong \mathbb{C}^{d^B}\) with a basis \(\{ \ket{j}, j=1,\dots, d^A\}\) and \(\{\ket{k}, k=1,\dots, d^B\}\), the tensor product space is spanned by a basis composed of all products
and thus has dimension \(d^A \cdot d^B\). A general state \(\ket{\Psi} \in \mathbb{H}^{A}\otimes \mathbb{H}^B\) can then be expanded as
The expansion coefficients \(\Psi_{jk}\) thus have two indices, and it is often useful to think of them as a matrix. Note that we will almost always use this product basis, also referred to as the computational basis, for working with tensor product spaces. However, one can certainly also use more complicated basis choices, where the basis vectors are not simple product states. One well-known choice that you might remember from your quantum mechanics course is in the case of two spin-1/2 systems. If we denote the basis for a single spin-1/2 system as \(\{\ket{\uparrow},\ket{\downarrow}\}\), then the product basis for a system consisting of two spin-1/2 systems is given by \(\{\ket{\uparrow,\uparrow}, \ket{\downarrow,\uparrow}, \ket{\uparrow,\downarrow}, \ket{\downarrow,\downarrow}\}\). However, in the context of spin coupling (see Section on Symmetries), one also uses the coupled basis
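The matrix view of \(\Psi_{jk}\) can be probed numerically: a two-qubit state is a product state exactly when its coefficient matrix has rank one. A small sketch (the specific states and the use of NumPy are illustrative assumptions):

```python
import numpy as np

# coefficients Psi_{jk} of a two-qubit state, viewed as a 2 x 2 matrix
# product state |0> ⊗ (|0> + |1>)/sqrt(2):
product = np.outer([1, 0], [1, 1]) / np.sqrt(2)
# Bell state (|00> + |11>)/sqrt(2), which is not a product state:
bell = np.array([[1, 0], [0, 1]]) / np.sqrt(2)

# a state is a product state exactly when its coefficient matrix has rank one
print(np.linalg.matrix_rank(product))  # 1
print(np.linalg.matrix_rank(bell))     # 2
```

The number of nonzero singular values of \(\Psi_{jk}\) (here, the rank) is exactly what the Schmidt decomposition counts, and rank greater than one signals entanglement.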
Note that we also use the same tensor product notation as an operation to map operators from the subsystems into operators acting on the full tensor product Hilbert space. In particular, the process of measuring operator \(\hat{A}\) in subsystem \(A\) and simultaneously operator \(\hat{B}\) in subsystem \(B\) is associated with an operator \(\hat{A}\otimes \hat{B}\) acting on \(\mathbb{H}^A \otimes \mathbb{H}^B\), the action of which is first defined on the product states as
and then extended by linearity. It furthermore holds that
With respect to a product basis, the matrix representation of \(\left(\hat{A} \otimes \hat{B}\right)\) is given by the Kronecker product.
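This Kronecker-product representation can be checked directly. A quick sketch (the particular operators and vectors are arbitrary example choices):

```python
import numpy as np

# illustrative operators and (unnormalised) states on two qubits
A = np.array([[0, 1], [1, 0]])   # Pauli x
B = np.array([[1, 0], [0, -1]])  # Pauli z
u = np.array([1, 2])
v = np.array([3, 4])

# (A ⊗ B)(u ⊗ v) = (A u) ⊗ (B v): the Kronecker product is the matrix
# representation of the operator tensor product in the product basis
lhs = np.kron(A, B) @ np.kron(u, v)
rhs = np.kron(A @ u, B @ v)
assert np.allclose(lhs, rhs)
print(lhs)
```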
When we are only interested in an operator \(\hat{O}\) acting on subsystem \(A\) without doing anything on subsystem \(B\), we should create the operator \(\hat{O} \otimes \hat{1}_B\), with \(\hat{1}_B\) the identity operator of the Hilbert space \(\mathbb{H}^B\). Often, we will omit this explicit tensor product with the identity operator, and simply use some notation which indicates that an operator acts on a certain subsystem, such as \(\hat{O}^{(A)} = \hat{O} \otimes \hat{1}_B\). This also makes it explicit that operators defined on different subsystems, when lifted to act on the full Hilbert space, commute, i.e.
The tensor product construction extends readily to systems with multiple subsystems. Consider for example a system consisting of qubits, where every individual qubit has an associated Hilbert space \(\mathbb{C}^2\) with basis denoted as \(\{\ket{0},\ket{1}\}\). The Hilbert space \(\mathbb{H}^N\) of \(N\) qubits is then spanned by a computational basis which we can denote as
Hence, the Hilbert space has dimension \(2^N\), and a general state \(\ket{\Psi}\) has expansion coefficients
which can be interpreted as a single vector of length \(2^N\), or as an \(N\)-dimensional tensor, where every tensor index ranges over the two values 0 and 1. This exponential increase of the Hilbert space dimension with the number of particles is exactly why the quantum many-body problem is so difficult, but also essential for providing a quantum computer with its speed-up. It is exactly this type of quantum state, living in a many-body Hilbert space that is composed of many tensor product factors, that we will represent as a tensor network.
Finally, we also have to specify the Hamiltonian of a many-body system. It typically takes the form of a sum of terms, where every individual term acts nontrivially on only a few subsystems. One important example that will reappear throughout these tutorials is the “Quantum Ising Model with transverse magnetic field”, which acts on a system composed of qubits or spin-1/2 particles, and is defined as
Here, the summation variables \(i\) and \(j\) correspond to the sites of a lattice. The notation \(\sum_{\langle i,j \rangle}\) denotes a sum over pairs of neighbouring lattice sites \(i\) and \(j\). The second sum contains terms \(\sigma^x_i\) which act nontrivially only on the site \(i\), and as the identity operator elsewhere. If, for example, we enumerate the sites from \(1\) to \(N\), it would act as
with \(\sigma^x = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\) the Pauli x matrix, and \(1\) the \(2 \times 2\) unit matrix. The first set of terms in \(\hat{H}\) acts nontrivially on two sites, and is defined analogously, using the Pauli z matrices \(\sigma^z = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}\).
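For small \(N\) this construction can be carried out explicitly with Kronecker products. The sketch below builds a transverse-field Ising Hamiltonian on a short open chain; the sign conventions and couplings \(J\), \(h\) are illustrative choices, since the text does not fix them:

```python
import numpy as np
from functools import reduce

# Pauli matrices and identity
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
id2 = np.eye(2)

def site_op(op, i, N):
    """Operator acting as `op` on site i and as the identity elsewhere."""
    return reduce(np.kron, [op if j == i else id2 for j in range(N)])

def ising_hamiltonian(N, J=1.0, h=1.0):
    """H = -J sum_<i,j> sz_i sz_j - h sum_i sx_i on an open chain of N sites."""
    H = np.zeros((2**N, 2**N))
    for i in range(N - 1):          # nearest-neighbour pairs <i, i+1>
        H -= J * site_op(sz, i, N) @ site_op(sz, i + 1, N)
    for i in range(N):              # transverse-field terms
        H -= h * site_op(sx, i, N)
    return H

H = ising_hamiltonian(3)
assert np.allclose(H, H.T)  # Hermitian (real symmetric here)
print(H.shape)
```

This brute-force construction scales as \(2^N \times 2^N\) and becomes unusable beyond a couple of dozen sites, which is precisely the bottleneck tensor networks address.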
The tensor product construction needs to be revised when discussing the Hilbert space of a system composed of identical particles. Consider for example a system made out of \(N\) identical particles. To every individual particle we can associate a particular Hilbert space, which we denote as \(\mathbb{H}^{(1)}\), for example \(\mathbb{H}^{(1)} = L^2(\mathbb{R})\) for a particle moving on the real line, or \(\mathbb{H}^{(1)} = \mathbb{C}^L\) for a particle living on the sites of a chain of length \(L\).
If we temporarily assign each of the \(N\) particles a label \(n=1, \dots, N\), then the Hilbert space of the composite system would be given by the \(N\)-fold tensor product \(\widetilde{\mathbb{H}}^{(N)} = \left(\mathbb{H}^{(1)}\right)^{\otimes N}\). However, for identical particles, our labeling is completely arbitrary. For the case of \(N=2\) particles on a chain of \(L\) sites, we cannot distinguish between the state \(\ket{j_1, j_2}\), where particle \(1\) is on site \(j_1\) and particle \(2\) is on site \(j_2\), versus the state \(\ket{j_2, j_1}\), where site \(j_1\) is occupied by the particle that we gave label \(2\) and site \(j_2\) is occupied by the particle with label \(1\). A general redefinition of the particle labels amounts to a permutation, and we have to require that no physical measurement can distinguish between such permutations. Hence, this permutation invariance does not behave like a regular symmetry (unlike e.g. rotation symmetry, where one can still construct observables along preferred directions so that they can detect rotations).
We are forced to restrict our tensor product Hilbert space \(\left(\mathbb{H}^{(1)}\right)^{\otimes N}\) to the subspace \(\mathbb{H}^{(N)}\) of physical states which are not affected by acting with such permutations. Note that, due to the fact that quantum states actually correspond to rays of vectors, it is still allowed that the vectors in \(\mathbb{H}^{(N)}\) pick up a phase factor when applying certain permutations. It is a result in the representation theory of the permutation group that there are only two possibilities. Either the phase factor is always absent (or thus 1), or the phase factor is (-1) for odd permutations and (+1) for even permutations, i.e. the phase factor equals the sign(ature) of the permutation. Identical particles for which the phase factor is always one are known as bosons, whereas those with the nontrivial phase factor choice correspond to fermions. Indeed, the nontrivial phase factor automatically gives rise to Pauli’s exclusion principle: two fermions cannot be in the same quantum state, since \(P_{12} \ket{j_1,j_2} = \ket{j_2,j_1} = -\ket{j_1,j_2}\) and for \(j_1=j_2\) we would thus find \(\ket{j,j} = -\ket{j,j}\).
Bosons are thus described by states which are symmetric under permutations, whereas fermions are described by states which are called antisymmetric. We can define an operator on \(\widetilde{\mathbb{H}}^{(N)} = \left(\mathbb{H}^{(1)}\right)^{\otimes N}\) that maps any given state onto such an (anti)symmetric state, namely by first defining its action on product states as
and then extending it by linearity. Here, \(S_N\) is the symmetric group containing all permutations \(\sigma\) of \(N\) elements, where the permutation \(\sigma\) is a bijective map from integers \(j \in \{1,\dots,N\}\) to a new number \(\sigma(j) \in \{1,\dots,N\}\). The sign(ature) \(\epsilon_\sigma\) of the permutation takes the value \(+1\) or \(-1\), depending on whether the permutation \(\sigma\) can be obtained by composing an even or odd number of elementary transpositions. An elementary transposition \(\tau_{i,j}\) is a permutation which only interchanges the two numbers \(i\) and \(j \neq i\):
Note that \(\hat{S}^{\pm}\) does not necessarily yield a normalised state, and can indeed even map a state to zero, in order to give rise to Pauli’s exclusion principle: \(\hat{S}^-\ket{j,j} = 0\). The image of \(\hat{S}^{\pm}\) contains all states with the proper behaviour under relabeling permutations, and thus corresponds to the physical Hilbert space for bosons or fermions:
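The antisymmetriser \(\hat{S}^-\) can be sketched numerically by summing signed axis permutations of a coefficient tensor. A \(1/N!\) prefactor is used here (the \(1/\sqrt{N!}\) choice only changes norms); the mode count is an illustrative assumption:

```python
import numpy as np
from itertools import permutations
from math import factorial

def antisymmetrise(tensor):
    """Apply S^- to an N-particle coefficient tensor: average over all
    permutations of the particle slots, weighted by the permutation sign."""
    N = tensor.ndim
    out = np.zeros_like(tensor, dtype=float)
    for sigma in permutations(range(N)):
        # sign of sigma via its number of inversions
        inversions = sum(1 for i in range(N) for j in range(i + 1, N)
                         if sigma[i] > sigma[j])
        out += (-1.0) ** inversions * np.transpose(tensor, sigma)
    return out / factorial(N)

L = 3
ket = np.zeros((L, L)); ket[0, 0] = 1.0     # |j, j>: two particles in one mode
assert np.allclose(antisymmetrise(ket), 0)  # Pauli exclusion: S^- |j,j> = 0

ket2 = np.zeros((L, L)); ket2[0, 1] = 1.0   # |j_1, j_2> with j_1 != j_2
print(antisymmetrise(ket2))                 # (|j_1,j_2> - |j_2,j_1>) / 2
```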
Note that in this case, the physical Hilbert space is not a tensor product. However, we can think of it as a subspace of an auxiliary Hilbert space, \(\widetilde{\mathbb{H}}^{(N)}\), which is a tensor product. The restriction to this subspace can thus be thought of as a constraint, and the same scenario happens in other constrained quantum systems. The most notable example is that of quantum gauge theories, where there is an extensive set of constraints, namely that physical quantum states need to be gauge invariant.
Now consider a single particle Hilbert space \(\mathbb{H}^{(1)}\) with an orthonormal basis \(\{\ket{j}, j=1,\ldots,L\}\), for example where \(\ket{j}\) corresponds to the particle being positioned on site \(j\) of a lattice with \(L\) sites. We also refer to these single particle states as modes. To construct a basis for \(\mathbb{H}^{(N)}\), we can start from the tensor product basis of \(\widetilde{\mathbb{H}}^{(N)}\) and apply \(\hat{S}^{\pm}\) to each of its \(L^N\) elements. Let us henceforth denote these states as
The application of \(\hat{S}^{\pm}\) will create certain linear dependences. In particular, states \( \ket{j_1,j_2, \ldots, j_N}\) that contain the same set of modes \(j_k\), i.e. for which the \(j_k\)’s are related by a permutation, are equal (up to a sign in the case of \(\hat{S}^-\)). We can thus select a single state by ordering the \(j_k\) arguments. Furthermore, in the case of \(\hat{S}^{-}\), the state is mapped to zero as soon as two \(j_k\) values coincide, so we can eliminate such states. If we thus restrict the set to states \(\ket{j_1,j_2,\ldots ,j_N}\) which are such that the modes are ordered as \(j_1 < j_2 < \ldots < j_N\) (for fermions) or \(j_1 \leq j_2 \leq \ldots \leq j_N\) (for bosons), then we have a linearly independent set of states. For fermions, this implies in particular that we need to have \(N \leq L\): there cannot be more fermions in the system than there are linearly independent modes (single particle states).
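The dimensions of these \(N\)-particle sectors follow by counting the ordered mode tuples just described. A stdlib-only sketch (the values of \(L\) and \(N\) are illustrative choices):

```python
from itertools import combinations, combinations_with_replacement
from math import comb

L, N = 4, 2  # illustrative mode and particle numbers

# fermions: strictly ordered modes j_1 < ... < j_N  ->  binomial(L, N) states
fermion_basis = list(combinations(range(1, L + 1), N))
assert len(fermion_basis) == comb(L, N)

# bosons: weakly ordered modes j_1 <= ... <= j_N  ->  binomial(L+N-1, N) states
boson_basis = list(combinations_with_replacement(range(1, L + 1), N))
assert len(boson_basis) == comb(L + N - 1, N)

print(len(fermion_basis), len(boson_basis))  # 6 10
```

In particular, `combinations` returns an empty basis whenever \(N > L\), matching the statement that there cannot be more fermions than modes.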
Finally, one can wonder about the normalisation of these states. For fermions, the superposition created by \(\hat{S}^-\) contains \(N!\) terms, which are mutually orthogonal, so that the resulting state is normalised, because of the \(1/\sqrt{N!}\) prefactor in the definition of \(\hat{S}^{-}\). More generally, one then finds
For bosons, the situation is more complicated in the case that some \(j_k\) values coincide. Some of the \(N!\) terms created by \(\hat{S}^+\) are then equal and contribute differently to the norm. If we denote with \(n_1, n_2, \ldots, n_L\) the number of \(j\) values that equal the value \(1, 2, \ldots, L\), i.e. the number of particles in mode \(1, 2, \ldots, L\), then we find
This more general expression is also valid for fermions, where every \(n_j\) is restricted to be zero or one. In fact, the values \(n_j\) for \(j=1,\ldots,L\) completely characterise the state, and can thus be used to relabel the basis. Instead of specifying the mode \(j_k\) that each particle \(k=1,\ldots,N\) occupies (where the labeling of the particles is arbitrary because they are identical), we can move to a mode-based description and thus specify the number of particles in each mode, also known as the mode occupation number. We can then refer to the basis vectors as
where \(n_j = 0, 1\) (fermions) or \(n_j = 0,1,2, \ldots\) (bosons) and furthermore \(\sum_{j=1}^{L} n_j = N\). Furthermore, we define these states to be normalised to 1, i.e. we absorb a suitable normalisation factor when defining \(\ket{n_1, n_2, \ldots, n_L}\) in terms of the construction above.
This way of labelling the basis states is again reminiscent of a tensor product structure, i.e. we could think of \(\ket{n_1, n_2, \ldots, n_L}\) as the tensor product of states \(\ket{n_j}\) associated to every mode, where the Hilbert space associated with such a mode is two-dimensional in the case of fermions, or infinite-dimensional in the case of bosons. However, there is still a global constraint \(\sum_{j=1}^{L} n_j = N\), so that we cannot let the different \(n_j\) values vary completely independently from each other. Furthermore, some caution is now needed as to what it means to have operators acting on these different “mode Hilbert spaces”. The correct formalism is that of second quantisation, which we introduce next.
Note
In many applications, people do still work with the framework of first quantisation, and consider \(N\)-particle states constructed by symmetrising or antisymmetrising the tensor product of \(N\) single-particle states, in a so-called independent particle model or approximation. Such states are quite cumbersome to work with. As can already be seen, the antisymmetric case is slightly easier and is known as a Slater determinant. Indeed, the antisymmetrisation formula is reminiscent of the Leibniz formula of a determinant, and for example the inner product between two Slater determinants constructed from \(\{\ket{\psi_n},n=1,\ldots,N\}\) and \(\{\ket{\varphi_n},n=1,\ldots,N\}\) is given by the determinant of the matrix containing all overlaps \(\braket{\varphi_m \vert \psi_n}\). Slater determinants form the basis of Hartree-Fock theory for approximating the state of electrons in an atom or molecule.
The bosonic version occurs in the context of Bose-Einstein condensation and cold atom systems more generally. In that case, the inner product between two such states gives rise to a determinant-like formula, but without the minus signs. This construction is known as the permanent, but unlike the determinant it is very hard to compute in general and really requires explicitly summing up all \(N!\) terms.
Interesting States and Observables in Quantum Many-Body Physics
Having introduced the Hilbert space and Hamiltonian of quantum many-body systems, we still need to define which states we are actually interested in, and which type of observables we want to compute for such states. So far, we have only mentioned that isolated systems have a quantum state which corresponds to a vector (or rather a ray of vectors) in its Hilbert space \(\mathbb{H}\). Before answering this question, we first need to generalise our concept of a quantum state.
More abstractly and generally, the quantum state of a system can be introduced as a map from observables (operators) to numbers (expectation values). This is typically denoted as \(\rho: \mathrm{End}(\mathbb{H}) \to \mathbb{C} : \hat{A} \mapsto \rho(\hat{A})\). Here, \(\mathrm{End}(\mathbb{H})\) is the set of linear operators (a.k.a. endomorphisms) on \(\mathbb{H}\). This set is itself a vector space, as we can consider linear combinations of linear operators. Furthermore, as we can compose two linear operators and obtain a new linear operator, we have a product operation, which makes \(\mathrm{End}(\mathbb{H})\) into an algebra. Finally, we have defined the concept of the adjoint of an operator, which in mathematics terminology gives \(\mathrm{End}(\mathbb{H})\) the structure of a \(C^\ast\)-algebra.
The map \(\rho\) that represents a quantum state should have a number of properties that generalise those of the case we have encountered so far, where \(\rho(\hat{A}) = \frac{\braket{\Psi\vert \hat{A} \vert \Psi}}{\braket{\Psi | \Psi}}\). In particular, this map is linear with respect to linear combinations of operators. This implies that it can be written as \(\rho(\hat{A}) = \mathrm{Tr}\left[\hat{\rho}\hat{A}\right]\), where \(\hat{\rho}\) is now itself an element of \(\mathrm{End}(\mathbb{H})\) (technically, \(\rho\) is an element from the dual space of \(\mathrm{End}(\mathbb{H})\)). Furthermore, we must have that our state gives rise to nonnegative and normalised probabilities, which implies that
\(\rho(\hat{1}) = \mathrm{Tr}\left[\hat{\rho}\right] = 1\)
\(\rho(\hat{P}) \geq 0\) for any projector, and more generally, for any positive definite -operator \(\hat{P}\). This implies that the associated operator \(\hat{\rho}\), known as the -density operator or density matrix (when expressed with respect to a chosen basis), -is itself a positive (and thus self-adjoint) operator, which is furthermore normalised -to have trace one.
The particular case where the state of the system was given by a vector -\(\ket{\Psi}\in\mathbb{H}\) corresponds to -\(\hat{\rho}=\frac{\ket{\Psi}\bra{\Psi}}{\braket{\Psi\vert \Psi}}\) and thus satisfies -\(\hat{\rho}^2=\hat{\rho}\), i.e. \(\hat{\rho}\) is itself a projector. Such states are called -pure states. All density operators which do not have this property are called mixed -states.
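These defining properties are easy to verify numerically. The following sketch (Python/NumPy, purely as an illustration; the chosen states are arbitrary examples) contrasts a pure and a mixed qubit state:

```python
import numpy as np

# A pure qubit state |Psi> (arbitrary example) and its density matrix
psi = np.array([1.0, 1.0j]) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())

# A mixed state: equal-weight convex combination of |0><0| and |1><1|
rho_mixed = 0.5 * np.diag([1.0, 0.0]) + 0.5 * np.diag([0.0, 1.0])

# Both are self-adjoint, positive and normalised to unit trace ...
for rho in (rho_pure, rho_mixed):
    assert np.allclose(rho, rho.conj().T)
    assert np.all(np.linalg.eigvalsh(rho) >= -1e-12)
    assert np.isclose(np.trace(rho).real, 1.0)

# ... but only the pure state is a projector, rho^2 = rho
purity_pure = np.trace(rho_pure @ rho_pure).real    # equals 1
purity_mixed = np.trace(rho_mixed @ rho_mixed).real # smaller than 1
```

The purity \(\mathrm{Tr}[\hat{\rho}^2]\) thus distinguishes the two cases: it equals one exactly for pure states and drops below one for mixed states.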
Being a positive operator, any density operator admits a spectral decomposition of the form
where the states \(\{\ket{\Phi_n}\}\) form an orthonormal set and the eigenvalues \(p_n\) satisfy \(\sum_{n} p_n = 1\) and \(p_n \geq 0\) (which together also yield \(p_n \leq 1\)).
Mixed states arise in the quantum world in two scenarios:
If the system is not isolated, but rather a subsystem of a larger system, interacting with its complement therein. This is discussed in the next section.
Even for an isolated system, it can happen that the state is not exactly known and one must deal with classical uncertainty and probability. Indeed, a mixed state can be interpreted as a statistical ensemble. If the system can be prepared in different (not necessarily orthogonal) states \(\{\ket{\Psi_1}, \ket{\Psi_2}, \ldots\}\) with probabilities \(p_1, p_2, \ldots\) that sum up to one, then the state of the system is given by
Note that this does not necessarily correspond to the spectral decomposition of \(\hat{\rho}\), as the states \(\ket{\Psi_i}\) are not necessarily orthogonal. It is nonetheless a valid density operator. More generally, given two density operators \(\hat{\rho}_1\) and \(\hat{\rho}_2\), any convex combination \(\hat{\rho} = p \hat{\rho}_1 + (1-p) \hat{\rho}_2\) with \(0 \leq p \leq 1\) is a valid density operator.
To a mixed state, we can associate the Von Neumann entropy
with \(p_n\) the eigenvalues of \(\hat{\rho}\). For a pure state, the Von Neumann entropy evaluates to zero (using \(\lim_{x\to 0} x \log x = 0\)). The maximal value of the Von Neumann entropy is obtained when all values \(p_n\) are equal, so that \(\hat{\rho} \sim \hat{1}\). Because of normalisation, we then have \(p_n = 1/d\) with \(d\) the Hilbert space dimension and thus obtain
In a many-body system, the Hilbert space dimension scales exponentially with the number of sites or number of degrees of freedom in the system. If we call this quantity the “volume” of the system, then we can conclude that the maximal value of the Von Neumann entropy is proportional to the volume of the system.
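As a concrete check of these statements, a minimal NumPy sketch (illustrative only; the dimension \(d = 4\) is an arbitrary choice) computes the Von Neumann entropy of a pure and of a maximally mixed state:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log rho] = -sum_n p_n log p_n, using 0 log 0 = 0."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]               # drop (numerically) zero eigenvalues
    return -np.sum(p * np.log(p))

d = 4
rho_pure = np.diag([1.0, 0.0, 0.0, 0.0])   # projector onto a single state
rho_max = np.eye(d) / d                    # maximally mixed state

S_pure = von_neumann_entropy(rho_pure)     # 0 for a pure state
S_max = von_neumann_entropy(rho_max)       # log(d), the maximal value
```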
Depending on the context, the interpretation and meaning of the Von Neumann entropy can differ, as we discuss below.
Consider a bipartite system composed of two subsystems \(A\) and \(B\), with thus \(\mathbb{H} = \mathbb{H}^{(A)} \otimes \mathbb{H}^{(B)}\). Now suppose that we are only interested in measuring observables that act non-trivially on subsystem \(A\). This might be the case if subsystem \(A\) is the actual quantum system that we want to model, but it is not isolated and instead interacts with an environment, corresponding to subsystem \(B\). With the axioms so far, we are forced to include the environment in our discussion: only the combined system plus environment can be assigned a pure state \(\ket{\Psi}\). However, this seems like complete overkill, as the environment might extend to the whole universe, so that it is impossible to know the complete state \(\ket{\Psi}\). Since we are only interested in observables that act nontrivially on subsystem \(A\), i.e. all observables that we want to measure take the form \(\hat{O}^A = \hat{O} \otimes \hat{1}_B\), we expect that a reduced and simplified description must exist.
Let us now assume that the Hilbert space of subsystem \(A\) is spanned by a basis -\(\{\ket{\psi_k}, k=1,\dots, d^A\}\) and the Hilbert space of subsystem \(B\) is spanned by a -basis \(\{\ket{\varphi_l}, l=1,\ldots, d^B\}\). A reduced description for the system \(A\) can -be obtained by observing that we can write
Hence, subsystem \(A\) can be described in terms of a mixed state that is obtained as
This construction is known as a partial trace and the resulting mixed state of subsystem \(A\) as the reduced density operator. It is based on the fact that by using a tensor product basis for the joint Hilbert space \(\mathbb{H} = \mathbb{H}^{(A)} \otimes \mathbb{H}^{(B)}\), a trace operation leads to a double sum, namely one over all basis vectors for \(\mathbb{H}^A\) and one over all basis vectors for \(\mathbb{H}^B\). Hence, the complete trace can be interpreted as the composition of two partial traces, one over subsystem \(A\) and one over subsystem \(B\). If all relevant operators act trivially on \(B\), the partial trace over \(B\) can be performed directly on the state \(\hat{\rho}^{(AB)} = \ket{\Psi}\bra{\Psi}\) and gives rise to the reduced density matrix \(\hat{\rho}^{(A)}\). Some notes are in order.
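The partial trace is straightforward to implement numerically by reshaping the density matrix into its tensor factors. The sketch below (Python/NumPy, with a random example state) also verifies the defining property \(\mathrm{Tr}[\hat{\rho}^{(A)} \hat{O}] = \mathrm{Tr}[\hat{\rho}^{(AB)} (\hat{O} \otimes \hat{1}_B)]\):

```python
import numpy as np

rng = np.random.default_rng(0)
dA, dB = 2, 3   # arbitrary illustrative dimensions

# Random pure state |Psi> on H = H_A ⊗ H_B
psi = rng.normal(size=dA * dB) + 1j * rng.normal(size=dA * dB)
psi /= np.linalg.norm(psi)
rho_AB = np.outer(psi, psi.conj())

# Partial trace over B: reshape to (dA, dB, dA, dB) and contract the B indices
rho_A = np.einsum('ajbj->ab', rho_AB.reshape(dA, dB, dA, dB))
assert np.isclose(np.trace(rho_A).real, 1.0)

# Defining property: Tr[rho_A O] = Tr[rho_AB (O ⊗ 1_B)] for any observable O on A
O = rng.normal(size=(dA, dA))
O = O + O.T
lhs = np.trace(rho_A @ O)
rhs = np.trace(rho_AB @ np.kron(O, np.eye(dB)))
assert np.isclose(lhs, rhs)
```

Note that the index ordering in the `reshape` and in `np.kron` must match the convention used to flatten the tensor product basis.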
While we made reference to a specific tensor product basis to define this construction, the concepts of reduced density operator and partial trace do not depend on the specific choice of basis for \(\mathbb{H}^{(A)}\) and \(\mathbb{H}^{(B)}\). The construction only requires a tensor product basis to expose the tensor product structure of \(\mathbb{H}\).
We have assumed that the total system is described by a pure state \(\hat{\rho}^{(AB)} = \ket{\Psi}\bra{\Psi}\). However, for the construction of the reduced density operator as \(\hat{\rho}^{(A)} = \mathrm{Tr}_B \hat{\rho}^{(AB)}\) this is not necessary.
We can expand the whole construction with respect to an explicitly chosen basis. If \(\ket{\Psi} = \sum_{k=1}^{d^A} \sum_{l=1}^{d^B} \Psi_{k,l} \ket{k}\otimes \ket{l}\), we find
and
If the reduced density operator \(\hat{\rho}^A\) is pure, this indicates that the state \(\ket{\Psi}\) was itself a tensor product. In all other cases, the subsystems \(A\) and \(B\) are said to be entangled. This entanglement can be quantified by computing the Von Neumann entropy \(S(\hat{\rho}^A)\), which is then called the entanglement entropy of the combined system \(A\) and \(B\). That this entropy is indeed a property of how both subsystems are entangled follows from the fact that \(S(\hat{\rho}^A) = S(\hat{\rho}^B)\), i.e. it doesn’t matter whether the Von Neumann entropy is computed from the reduced density operator of subsystem \(A\) or of subsystem \(B\). This is only true if the total system is in a pure state \(\ket{\Psi}\). When the total system is also in a mixed state, because of classical randomness, it is harder to differentiate between true quantum entanglement and classical probability.
To conclude, we analyse the case where the combined system is in a pure state in a bit more detail. We again expand \(\ket{\Psi}\) with respect to the tensor product basis as
and interpret its expansion coefficients \(\Psi_{k,l}\) as the entries of a \(d^A \times d^B\) matrix \(C\). The reduced density operators can now be written as
Hence, the reduced density matrices for subsystems \(A\) and \(B\) are related by the fact that they correspond to the two different ways in which we can multiply the matrix \(C\) with its Hermitian conjugate \(C^\dagger\). It is a well-known result from linear algebra that for two matrices \(A \in \mathbb{C}^{d_1 \times d_2}\) and \(B \in \mathbb{C}^{d_2 \times d_1}\), the square matrices \(A B \in \mathbb{C}^{d_1 \times d_1}\) and \(BA \in \mathbb{C}^{d_2 \times d_2}\) have the same set of nonzero eigenvalues, counted with degeneracy. If \(d_1 \neq d_2\), the larger of the two matrices will have additional zero eigenvalues. This result already proves the equality \(S(\hat{\rho}^{(A)}) = S(\hat{\rho}^{(B)})\).
However, we can make this even more explicit. We can decompose the matrix \(C \in \mathbb{C}^{d^A \times d^B}\) as \(C = U S V^\dagger\) with \(U\) and \(V\) unitary matrices (of size \(d^A \times d^A\) and \(d^B \times d^B\) respectively), and \(S\) a \(d^A \times d^B\) matrix which only has nonzero entries on the diagonal. Furthermore, the nonzero entries of \(S\) can be chosen positive and in descending order. This decomposition is known as the singular value decomposition. For further reference, we denote the diagonal elements of \(S\) as \(s_i = S_{i,i}\) for \(i=1,\ldots, \mathrm{min}(d^A,d^B)\).
The unitary matrices \(U\) and \(V\) can be interpreted as basis transforms in the subsystems \(A\) and \(B\) respectively, i.e. they define a new basis (which is thus specific to the chosen state \(\ket{\Psi}\)), which we denote as \(\{\ket{\psi^{(A)}_k}, k= 1,\ldots, d^A\}\) and \(\{\ket{\psi^{(B)}_l}, l= 1,\ldots, d^B\}\). We can then write
This particular way of writing the bipartite state \(\ket{\Psi}\) is known as the Schmidt decomposition. As a result, the reduced density matrices appear immediately in diagonalised form. The singular values \(s_i\), or rather their squares \(p_i = (s_i)^2\), are referred to as Schmidt coefficients, and together make up the entanglement spectrum. The entanglement entropy is then given by
In all of this, it is clear that subsystems \(A\) and \(B\) were treated on equal footing, and in fact it does not matter which of the two is chosen to probe the entanglement structure of the state.
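Numerically, the Schmidt decomposition is exactly a singular value decomposition of the coefficient matrix \(C\). An illustrative NumPy sketch (with arbitrary dimensions \(d^A = 3\), \(d^B = 5\) and a random state) verifies that both reduced density matrices share the entanglement spectrum \(p_i = s_i^2\):

```python
import numpy as np

rng = np.random.default_rng(1)
dA, dB = 3, 5   # arbitrary illustrative dimensions

# Coefficient matrix C (entries Psi_{k,l}) of a random bipartite pure state
C = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
C /= np.linalg.norm(C)

# Schmidt decomposition = singular value decomposition of C
s = np.linalg.svd(C, compute_uv=False)   # singular values s_i, descending
p = s**2                                 # Schmidt coefficients p_i = s_i^2
assert np.isclose(p.sum(), 1.0)

S = -np.sum(p * np.log(p))               # entanglement entropy

def entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return -np.sum(lam * np.log(lam))

# rho_A = C C^dagger and rho_B = C^dagger C share the nonzero spectrum {p_i}
assert np.isclose(entropy(C @ C.conj().T), S)
assert np.isclose(entropy(C.conj().T @ C), S)
assert S <= np.log(min(dA, dB)) + 1e-12
```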
If the entanglement entropy evaluates to zero, the two subsystems \(A\) and \(B\) are said to be unentangled, and the state \(\ket{\Psi}\) actually factorises as a tensor product \(\ket{\psi^A} \otimes \ket{\psi^B}\). As soon as the entanglement entropy is nonzero, the two subsystems are entangled. As stated above, the entropy is upper bounded, in this case by the logarithm of the smallest of the two Hilbert space dimensions \(d^A\) or \(d^B\). Thus, if \(d^A \leq d^B\), we find that the entanglement entropy satisfies
It turns out that states that are randomly selected from the Hilbert space typically have an entanglement entropy that is close to maximal.
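This can be checked by sampling. The sketch below draws a Gaussian random state on two 32-dimensional subsystems and compares its entanglement entropy to the maximum \(\log d^A\); the small average deficit of roughly \(d^A/(2 d^B)\) is Page's result, quoted here as an aside:

```python
import numpy as np

rng = np.random.default_rng(2)
dA = dB = 32   # e.g. two blocks of five qubits each (arbitrary choice)

# Gaussian random pure state on the bipartite system, as a coefficient matrix
C = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
C /= np.linalg.norm(C)

p = np.linalg.svd(C, compute_uv=False) ** 2   # entanglement spectrum
S = -np.sum(p * np.log(p))

# Close to, though strictly below, the maximal value log(dA)
ratio = S / np.log(dA)
```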
With the concept of mixed states and quantum entanglement at hand, we can now discuss physically interesting states. Firstly, when considering a system that is in contact with a large environment that acts as a heat bath at temperature \(T\), it will equilibrate. The state of the system at equilibrium is then given by the so-called Gibbs state
where the normalisation factor
is typically referred to as the partition function. Here, \(\beta = \frac{1}{k_B T}\) with \(T\) the temperature and \(k_B\) Boltzmann’s constant. Henceforth, we simply refer to \(\beta\) as the inverse temperature.
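For a small system, the Gibbs state can be constructed directly from the eigendecomposition of \(\hat{H}\). The sketch below uses a hypothetical two-site transverse-field Ising Hamiltonian as an arbitrary example and checks the limits \(\beta \to 0\) and \(\beta \to \infty\) discussed below:

```python
import numpy as np

def gibbs_state(H, beta):
    """rho = exp(-beta H) / Tr[exp(-beta H)], via the eigendecomposition of H."""
    E, V = np.linalg.eigh(H)
    w = np.exp(-beta * (E - E.min()))   # shift energies for numerical stability
    return (V * (w / w.sum())) @ V.conj().T

# Hypothetical two-site transverse-field Ising Hamiltonian (arbitrary example)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])
H = -np.kron(sz, sz) - (np.kron(sx, np.eye(2)) + np.kron(np.eye(2), sx))

rho_hot = gibbs_state(H, 0.0)    # infinite temperature: rho = identity / d
rho_cold = gibbs_state(H, 50.0)  # near zero temperature: ground state projector

assert np.allclose(rho_hot, np.eye(4) / 4)
E0 = np.linalg.eigvalsh(H)[0]
assert np.isclose(np.trace(rho_cold @ H).real, E0)
```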
The Von Neumann entropy of the Gibbs state corresponds to the thermodynamic notion of entropy. In a many-body system, the thermal entropy at finite temperature will be extensive and thus scale with the volume of the system, just like the energy expectation value, so that together the free energy \(E - T S\) is minimised.
At infinite temperature (\(\beta=0\)), we obtain \(\hat{\rho} \sim \hat{1}\) and the Von Neumann entropy reaches its upper bound. At zero temperature (\(\beta \to +\infty\)), we obtain \(\hat{\rho} \sim \hat{P}_0\), with \(\hat{P}_0\) the projector onto the eigenspace of lowest energy. Hence, in that case the Von Neumann entropy is given by \(\log(d_0)\), with \(d_0\) the degeneracy of the lowest energy level, i.e. the dimension of the ground state subspace. Most quantum lattice systems have a single or at least a small number of linearly independent ground states, so that \(d_0\) is a small number independent of the system size. However, there are also cases where the number of ground states scales exponentially with the system size, and the thermal entropy remains extensive at zero temperature. This then constitutes a violation of the infamous third law of thermodynamics.
While the heat bath or environment with which the system interacts is in practice typically much “larger” (in terms of number of degrees of freedom and thus Hilbert space dimension), we can use the property that any mixed state can be obtained as the reduced density operator from a pure state in a Hilbert space that is the tensor product of two copies of the system’s Hilbert space, or thus, where the environment is exactly as large as the system. Writing a mixed state \(\hat{\rho}\) as the reduced density operator of a pure state \(\ket{\Psi}\) in a Hilbert space \(\mathbb{H} = \mathbb{H}^S \otimes \mathbb{H}^E\) is known as a purification. With respect to the purification, all expectation values can be obtained as
which can be an advantage if one has an efficient (mathematical, computational, …) formalism for working with pure states. We can thus always construct such a purification by just taking the environment to be a copy of the system. Note that the environment in this construction is merely an auxiliary tool, and has no physical meaning or relation to the actual environment. If the Hilbert space of the system is described by a basis \(\{\ket{j}, j=1,\dots, d\}\), then we can first build a purification of the infinite temperature state as
In particular, if the system is a many-body system and \(\ket{j}\) is itself already a tensor product basis state \(\ket{j_1} \otimes \ket{j_2} \otimes \ldots\), we can organise the environment so that matching tensor product factors between system and environment are taken together. This has the advantage that the infinite temperature state can be written as
and still has an overall tensor product structure. A purification of the finite temperature state can then be obtained as
This expression now looks remarkably similar to how to compute a time-evolved state, by replacing \(-\mathrm{i} t \mapsto -\beta/2\). Hence, methods that solve Schrödinger’s equation and are sufficiently general to also work with imaginary values of the time coordinate can be used to prepare thermal states.
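This recipe is easy to check on a small example: represent the purification by its coefficient matrix (system index times environment index), start from the identity matrix (the unnormalised infinite temperature purification \(\sum_j \ket{j}\otimes\ket{j}\)), and apply \(\mathrm{e}^{-\beta \hat{H}/2}\) to the system factor. A NumPy sketch with a random example Hamiltonian:

```python
import numpy as np

rng = np.random.default_rng(3)
d, beta = 4, 1.3   # arbitrary illustrative dimension and inverse temperature

# Hypothetical random Hamiltonian on a d-dimensional system
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (A + A.conj().T) / 2

# Coefficient matrix of the purification (row: system, column: environment);
# the identity encodes the infinite temperature purification sum_j |j>|j>
Phi = np.eye(d, dtype=complex)

# Imaginary time evolution exp(-beta H / 2) acts on the system factor only
E, V = np.linalg.eigh(H)
Phi = (V * np.exp(-beta * E / 2)) @ V.conj().T @ Phi
Phi /= np.linalg.norm(Phi)

# Tracing out the environment recovers the Gibbs state exp(-beta H) / Z
rho = Phi @ Phi.conj().T
w = np.exp(-beta * E)
rho_gibbs = (V * (w / w.sum())) @ V.conj().T
assert np.allclose(rho, rho_gibbs)
```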
Note
Purifications of thermal states are also referred to as thermofield double states, especially in the high energy physics literature, i.e. in the context of quantum field theory, holography and quantum gravity.
As quantum effects are most pronounced at zero temperature, we typically assume to be operating in this regime. It then follows that we are mostly interested in the lowest energy states of the Hamiltonian, and in particular in the ground state(s).
A trivial but nonetheless important property of ground states is that they can easily be characterised as the states that minimise the expectation value \(\braket{\Psi \vert \hat{H} | \Psi}\). Indeed, this forms the basis for the variational principle. If we have a set of trial states, in which there are a number of free parameters, then we can construct an approximation to the ground state by ‘simply’ minimising the energy expectation value of the Hamiltonian with respect to these free parameters. How good this approximation is in practice depends on the properties of both the Hamiltonian and the trial states. However, one way to quantify the quality of the ground state approximation is by computing the energy variance
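The sketch below illustrates both the variational principle and the energy variance for a hypothetical diagonal Hamiltonian (chosen purely for transparency):

```python
import numpy as np

# Hypothetical Hamiltonian, already diagonal for simplicity: energies 0, 1, 2, 3
H = np.diag([0.0, 1.0, 2.0, 3.0])

e0 = np.array([1.0, 0.0, 0.0, 0.0])       # exact ground state
trial = np.array([1.0, 0.3, 0.0, 0.0])    # imperfect variational trial state
trial /= np.linalg.norm(trial)

def energy(H, psi):
    return psi.conj() @ H @ psi

def energy_variance(H, psi):
    """<H^2> - <H>^2; vanishes exactly iff psi is an eigenstate of H."""
    e = energy(H, psi)
    return (psi.conj() @ H @ H @ psi - e**2).real

# The variational principle: any trial state has energy >= E0 = 0
assert energy(H, trial) >= energy(H, e0)

var_exact = energy_variance(H, e0)     # 0: eigenstates have zero variance
var_trial = energy_variance(H, trial)  # > 0 quantifies the approximation error
```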
The Hamiltonians for quantum lattice systems that we are interested in will typically contain a sum of terms where every individual term acts nontrivially only on a small patch of the lattice. For such Hamiltonians, the low-energy states have a special property. To see this, we consider arbitrary bipartitions of the system, where one subsystem corresponds to a (connected) region of sites, whereas the complement, i.e. the remaining sites in the lattice, makes up the second subsystem. The ground state will in general not factorise into a tensor product, as it contains correlations and entanglement between these two (arbitrarily chosen) subsystems. As pointed out above, the maximal value of the entanglement entropy is given by the logarithm of the smallest of the two Hilbert space dimensions. Assuming that our chosen region of sites is smaller than its complement, its Hilbert space dimension will scale exponentially with the number of sites in that region, i.e. with its volume. Hence, the entanglement entropy computed for such a bipartition has an upper bound that is proportional to the volume of the subsystem. As was also stated above, random states typically satisfy this upper bound. However, the special property of low-energy states of locally interacting Hamiltonians is exactly that they have much less entanglement. They typically have an entanglement entropy that only scales with the common area between the subsystem and its complement. This scaling behaviour is referred to as the area law of entanglement entropy and provides the key motivation for approximating such low-energy states using a tensor network decomposition. It indicates that the most important quantum correlations are short-range (just like the interactions that generate them), and are thus situated across the boundary connecting the subsystem and its complement. However, this does not exclude that there is also a small amount of nontrivial long-range correlations in the system.
Aside from the ground state, one might also be interested in the first excited states, as these will be important for understanding how the system at zero temperature reacts to external perturbations. In a macroscopically large many-body system, one should not expect that the energy spectrum consists of a number of discrete levels with gaps in between. The lowest-energy excited states in a quantum lattice system can typically be given a particle-like interpretation, as is well known from quantum field theory. They correspond to small bumps of energy, i.e. they can be thought of as perturbations of the ground state in a small region, and thus have an energy cost that does not scale with the system size. However, because of kinetic energy-like terms, actual eigenstates will not correspond to having this energy bump in a localised region, but will rather be in a superposition where the “particle” is delocalised. In particular, in the case of a translation invariant system, the eigenstates will also be momentum eigenstates, and thus describe a particle that is in a momentum superposition across the lattice. The energy (surplus) of such a particle-like excitation will thus be a number \(\epsilon(k)\) that is of order \(1\), independent of the system size, and that depends on the specific momentum. If \(\epsilon(k)\) is everywhere lower bounded by some value \(\Delta\) (again independent of system size), then the system is said to be gapped. However, in some systems, \(\epsilon(k)\) can become zero for particular values of \(k\). Such systems are called gapless, and they often correspond to phase transition points, where the nature of the ground state radically changes if the parameters in the Hamiltonian are varied.
The energy spectrum of a quantum lattice system will then consist of one or a few ground states, the energy of which is an extensive number that is most easily expressed as some energy density per site. To study the excited states, it is then convenient to shift the energy scale such that the ground state energy is zero. The lowest excited states will then correspond to particles which can be created at a certain momentum. In an energy-momentum diagram, their dispersion relation \(\epsilon(k)\) will appear as an isolated band. Note that a system can have different types of such particle-like excitations, each with their own dispersion relation. Higher up in the energy spectrum we start to obtain regions corresponding to states with two or more particles that are travelling independently of each other. For such states, the energy can be obtained simply as the sum of the energies of the individual particles in the state, and since the relative momentum of the particles can change while keeping the total momentum fixed, the energy of such states can also vary continuously (at least in the thermodynamic limit).
Aside from low-energy eigenstates of the Hamiltonian, we are often also interested in states that have a non-trivial time dependence. One particular use case is where one starts from the ground state \(\ket{\Psi_0}\) of a certain Hamiltonian \(\hat{H}_0\), and then some parameters in the Hamiltonian are suddenly changed, so that the Hamiltonian now corresponds to a new operator \(\hat{H}_1\). This sudden change is reminiscent of quenching a system, and such a setup is called a global quench. We then want to compute
With respect to \(\hat{H}_1\), the state \(\ket{\Psi_0}\) will no longer be a ground state and most likely not even be an eigenstate. However, it will have a certain (extensive) energy expectation value that is preserved throughout the evolution.
In terms of entanglement and correlations, even when \(\ket{\Psi_0}\) is a state with an area law entanglement scaling (because it is a low-energy state of another local Hamiltonian, namely \(\hat{H}_0\)), the entanglement in the state will grow rapidly with time. Indeed, the bipartite entanglement entropy will tend to grow linearly with time. From the perspective of a given subsystem, its entropy will grow from an initial value proportional to the area of the subsystem, until it saturates at a value that is proportional to the volume of the subsystem. This process is known as thermalisation, as it turns out that at that point, the subsystem is locally indistinguishable from a Gibbs state with a temperature set by the energy density of the initial state. The subsystem has thermalised with respect to its complement behaving as an environment or heat bath. The larger the subsystem, the longer it will take before thermalisation is complete, and the overall state of the global system remains a pure state, albeit a highly entangled one.
There are a number of typical observables that we want to measure for a given quantum state (ground state or thermal state) of a quantum lattice system. The first are operators which have the same structure as the Hamiltonian, in being given by a sum of terms where every individual term acts nontrivially only on a single site, or a small patch of neighbouring sites. Furthermore, these terms all act identically, except that they are translated to the different patches that make up the lattice. Typical examples include the energy itself or specific contributions to it (kinetic energy, interaction energy, … ) as well as the total number of particles, a total magnetisation, and other similar quantities.
Let us use the transverse field Ising model as an example. An interesting quantity in the transverse field Ising model is the longitudinal magnetisation, given by \(\hat{S}^z=\frac{1}{2} \sum_{n} \sigma^z_n\). The expectation value of such operators is extensive, and so we are typically interested in the associated density, which is obtained by dividing out the volume factor. With respect to a translation-invariant state, this is equivalent to simply measuring the expectation value of a single term, i.e. \(\frac{1}{2} \sigma^z\) in the case of the longitudinal magnetisation, which is thus a local operator. The precise position where it acts does then not really matter.
Such quantities come up, for example, when measuring potential symmetry breaking, where an operator that should have zero expectation value given the symmetries of the Hamiltonian actually acquires a nonzero expectation value. Indeed, given that the Ising Hamiltonian has the property that \(\left[\hat{H},\hat{U}\right] = 0\), where \(\hat{U} = \bigotimes_{n} \sigma^x_n\), we also expect the ground state \(\ket{\Psi_0}\) to satisfy \(\hat{U} \ket{\Psi_0} \sim \ket{\Psi_0}\), where the proportionality factor can only be plus or minus one, due to \(\hat{U}^2 = \hat{1}\). The magnetisation in the \(z\)-direction, on the other hand, satisfies \(\hat{U}^\dagger \hat{S}^z \hat{U} = - \hat{S}^z\), so that we expect
Indeed, if the ground state is unique, this is precisely what happens. However, it can happen that there are multiple linearly independent ground states. In that case, the restriction of \(\hat{S}^z\) to the ground state subspace can be nontrivial, and there exist specific ground state choices for which the expectation value is nonzero. For reasons that go beyond what can be explained here, it is typically these states that are easiest to create or approximate (they have lower entanglement). Operators that can characterise the presence of symmetry breaking in this way are referred to as order parameters.
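These symmetry properties can be verified explicitly for a small transverse-field Ising chain. The sketch below (NumPy; \(N = 3\) sites, \(g = 0.5\) and open boundary conditions are arbitrary illustrative choices) checks that \(\hat{U}\) commutes with \(\hat{H}\), that \(\hat{S}^z\) flips sign under \(\hat{U}\), and that the finite-size ground state, which is unique, indeed has \(\braket{\hat{S}^z} = 0\):

```python
import numpy as np

# Transverse-field Ising chain: H = -sum_n sz_n sz_{n+1} - g sum_n sx_n
N, g = 3, 0.5
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])
id2 = np.eye(2)

def op_at(op, n):
    """Embed a single-site operator at site n of the N-site chain."""
    out = np.array([[1.0]])
    for k in range(N):
        out = np.kron(out, op if k == n else id2)
    return out

H = sum(-op_at(sz, n) @ op_at(sz, n + 1) for n in range(N - 1))
H = H + sum(-g * op_at(sx, n) for n in range(N))

# The global spin flip U = sx ⊗ sx ⊗ sx commutes with H ...
U = op_at(sx, 0) @ op_at(sx, 1) @ op_at(sx, 2)
assert np.allclose(H @ U, U @ H)

# ... while the order parameter Sz = (1/2) sum_n sz_n flips sign under U
Sz = 0.5 * sum(op_at(sz, n) for n in range(N))
assert np.allclose(U.T @ Sz @ U, -Sz)

# The finite-size ground state is unique, so <Sz> vanishes
E, V = np.linalg.eigh(H)
psi0 = V[:, 0]
m = psi0 @ Sz @ psi0
```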
Nonetheless, the use of local operators to probe the ground state properties is somewhat limited. A different class of observables that are of typical interest are (static) correlation functions, which take the form
where \(\hat{A}_i\) is a local operator acting on or in the neighbourhood of site (or unit cell) \(i\) and \(\hat{B}_j\) is a local operator acting on or in the neighbourhood of site \(j\). Typically, the operators \(\hat{A}\) and \(\hat{B}\) are chosen such that their local expectation value is zero. If this is not the case, we can subtract these local expectation values by redefining
We are then interested in the dependence of this correlation function on the positions \(i\) and \(j\). In particular, in a translation invariant system, it is only the relative lattice vector from site \(i\) to site \(j\) on which this quantity depends. It can be proven that if \(\ket{\Psi}\) is the unique ground state of a gapped local Hamiltonian, then the asymptotic behaviour of such correlation functions is that they decay exponentially in the distance between the two sites. This exponential thus defines a length scale \(\xi\) via \(\exp(-d/\xi)\) with \(d\) the relevant distance. The length scale \(\xi\) is known as the correlation length of the system.
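Exponential decay can already be observed in small exact-diagonalisation experiments. The sketch below computes the connected \(\sigma^z\) correlator in the gapped (paramagnetic) phase of a hypothetical \(N = 10\) transverse-field Ising chain with \(g = 2\) (arbitrary illustrative choices, open boundary conditions); roughly constant successive ratios \(C(d+1)/C(d)\) signal \(C(d) \sim \mathrm{e}^{-d/\xi}\):

```python
import numpy as np

# Gapped (paramagnetic) transverse-field Ising chain, N = 10 sites, g = 2.0
N, g = 10, 2.0
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])
id2 = np.eye(2)

def op_at(op, n):
    """Embed a single-site operator at site n of the N-site chain."""
    out = np.array([[1.0]])
    for k in range(N):
        out = np.kron(out, op if k == n else id2)
    return out

H = sum(-op_at(sz, n) @ op_at(sz, n + 1) for n in range(N - 1))
H = H + sum(-g * op_at(sx, n) for n in range(N))
E, V = np.linalg.eigh(H)
psi0 = V[:, 0]

# Connected correlator C(d) = <sz_i sz_{i+d}> - <sz_i><sz_{i+d}>, with i = 2
i = 2
zi = op_at(sz, i)
C = []
for d in range(1, 6):
    zj = op_at(sz, i + d)
    czz = psi0 @ (zi @ (zj @ psi0))
    C.append(czz - (psi0 @ (zi @ psi0)) * (psi0 @ (zj @ psi0)))
C = np.array(C)

# Roughly constant ratios C(d+1)/C(d) ~ exp(-1/xi) indicate exponential decay
ratios = C[1:] / C[:-1]
```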
When the system is gapless, static correlation functions still go to zero in the limit of infinite separation distance, but decay as an algebraic function of the distance, i.e. they give rise to power laws. The exponents that appear in these power laws typically have universal values that are set by general properties such as the number of spatial dimensions, the global symmetries in the system, etc. In particular, for the case of one-dimensional systems, there is a rich literature and a well-developed framework for analysing such gapless systems using methods from conformal field theory.
Finally, in systems with potential symmetry breaking, the static correlation function of the order parameter with itself is an extremely useful diagnostic. In particular, when the system has symmetry breaking, the large-distance limit of the correlation function does not vanish and the system is said to contain long-range order. Unlike the expectation value of the local order parameter, which is nonzero only for particularly chosen symmetry-breaking ground states and still zero for other choices of ground states, the value of the correlation function and its large-distance limit are insensitive to the specific ground state from the ground subspace with which they are computed. For the transverse-field Ising model, symmetry breaking will thus be present whenever
does not decay to zero in the limit of large distance between sites \(i\) and \(j\). The limiting value of this correlation function can then be interpreted as \(m^2\), i.e. the square of the local magnetisation that would be measured in some states of the ground subspace.
Strictly speaking, the ground state static correlation function does not provide information about excited states or other dynamical information of the Hamiltonian. In most physical systems, however, it does provide some qualitative information. Since the static correlation function, considered as a matrix with rows \(i\) and columns \(j\), has a particular structure resulting from translation invariance, it can be diagonalised by a (multidimensional) discrete Fourier transform. The resulting eigenvalues depend on the lattice momentum \(\kappa\) and are known as the static structure factor \(S(\kappa)\). In particular, in the case of a gapped system with a unique ground state, these values are well defined for all \(\kappa\). Nonetheless, it can be argued (using different techniques) that maxima of \(S(\kappa)\) will correspond to momenta where the single-particle excitations have minima in their dispersion relations. For critical systems, \(S(\kappa)\) can also have algebraic divergences, whereas in the case of long-range order, \(S(\kappa)\) will contain a Dirac-delta type of divergence, typically at zero momentum, unless there is some spatially repeating pattern in the way the symmetry is broken (and thus also translation invariance is broken).
More detailed quantitative information about the spectrum of excited states is contained in the time-dependent correlation function
On the first line, we have used the operator \(\hat{A}(t) = \mathrm{e}^{+\mathrm{i} t \hat{H}}\hat{A} \mathrm{e}^{-\mathrm{i} t \hat{H}}\) in the Heisenberg picture. In going from the second to the third line, we have used that \(\ket{\Psi_0}\) is the ground state of \(\hat{H}\) with ground state energy \(E_0\). Again, this quantity will depend on the relative lattice vector connecting sites \(i\) and \(j\) in a translation-invariant system. Once again, we can diagonalise the spatial dependence using a multidimensional discrete Fourier transform. If we now furthermore also perform a Fourier transform of the time dependence into frequency space, we obtain the dynamical structure factor given by
Here, \(\hat{A}(\kappa)\) and \(\hat{B}(\kappa)\) correspond to the discrete Fourier transforms of \(\hat{A}_i\) and \(\hat{B}_j\), which amounts to taking a momentum superposition. As a consequence, \(\hat{B}(\kappa) \ket{\Psi_0}\) is a state with definite momentum \(\kappa\) (provided the ground state is translation invariant), and thus only has overlap with excited states \(\ket{\Psi_n}\) with momentum \(\kappa\). Because of the factor \(\delta(\omega - (E_n -E_0))\), the dynamical structure factor \(S^{A,B}(\kappa, \omega)\) can be nonzero only if there exist eigenstates with momentum \(\kappa\) and excitation energy \(\omega\) in the spectrum of the Hamiltonian \(\hat{H}\). By studying \(S^{A,B}(\kappa, \omega)\) for different choices of operators \(\hat{A}\) and \(\hat{B}\), we can detect all eigenstates and map out the full (low-energy) spectrum of \(\hat{H}\).
-Quantum Mechanics and its Postulates
-While the energy levels of the hydrogen atom played an important role in the historical development of quantum mechanics, it became almost immediately clear that the true challenge is in applying the laws of quantum mechanics to systems with many interacting particles or fields. Note that the formalism of quantum mechanics, and in particular its postulates, is generically valid and not restricted to the description of a single particle. Quantum field theory also follows these postulates and is thus not a generalisation of quantum mechanics, but rather a specific case of it. These postulates characterise the mathematical model by which quantum mechanics describes physical systems, and more specifically how it represents states, observables, measurements and dynamics. We briefly reiterate these postulates, and base our discussion on the wonderful lecture notes “Quantum Information and Computation” by John Preskill.
-The state of an isolated quantum system is associated to a ray of vectors in a complex Hilbert space \(\mathbb{H}\).
-A Hilbert space is an inner product space that is complete with respect to the metric induced by the inner product. Let us unpack this definition:
-\(\mathbb{H}\) is a vector space, in this case over the complex numbers. We will denote elements of this vector space with Dirac’s ket notation \(\ket{\psi}\). In particular, we can build linear combinations
-for all \(a, b \in \mathbb{C}\) and all \(\ket{\psi_1}, \ket{\psi_2} \in \mathbb{H}\).
-\(\mathbb{H}\) has an inner product, which maps two vectors \(\ket{\psi}\) and \(\ket{\varphi}\) onto a scalar \(\braket{\varphi|\psi} \in \mathbb{C}\) with the properties of
-Linearity: \(\bra{\varphi} ( a \ket{\psi_1} + b \ket{\psi_2}) = a \braket{ \varphi | \psi_1} + b \braket{ \varphi | \psi_2}\)
Skew-symmetry: \(\braket{ \varphi | \psi} = \braket{ \psi | \varphi}^\ast\)
-Positivity: \(\braket{ \psi | \psi} \geq 0\) with equality only if \(\ket{\psi} = 0\).
This last property enables us to define a norm \(\lVert \psi \rVert = \lVert \ket{\psi} \rVert = \sqrt{\braket{\psi|\psi}}\), which satisfies the known properties \(\lVert \psi \rVert = 0 \Leftrightarrow \ket{\psi} = 0\), \(\lVert a \ket{\psi} \rVert = \vert a\vert \lVert \psi \rVert\), and the triangle inequality \(\lVert \ket{\varphi} + \ket{\psi} \rVert \leq \lVert \varphi \rVert + \lVert \psi \rVert\).
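These properties of the inner product and the induced norm are straightforward to verify numerically; a minimal sketch (Python/NumPy, with arbitrary random vectors standing in for \(\ket{\varphi}\) and \(\ket{\psi}\)) checks homogeneity, the triangle inequality and skew-symmetry:

```python
import numpy as np

rng = np.random.default_rng(0)
phi = rng.normal(size=4) + 1j * rng.normal(size=4)   # |varphi>
psi = rng.normal(size=4) + 1j * rng.normal(size=4)   # |psi>

def norm(v):
    """||v|| = sqrt(<v|v>), real and nonnegative."""
    return np.sqrt(np.vdot(v, v).real)

a = 2.0 - 1.0j
homogeneous = np.isclose(norm(a * psi), abs(a) * norm(psi))
triangle = norm(phi + psi) <= norm(phi) + norm(psi)
# Skew-symmetry: <varphi|psi> = <psi|varphi>*
skew = np.isclose(np.vdot(phi, psi), np.conj(np.vdot(psi, phi)))
```

Note that `np.vdot` conjugates its first argument, matching the physicists' convention \(\braket{\varphi|\psi}\) used here.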
-The final property of metric completeness is a technical requirement that is only relevant in infinite-dimensional Hilbert spaces. Firstly, a metric is a notion of distance between the elements of \(\mathbb{H}\), which is provided by the norm of the difference, i.e. \(d(\varphi, \psi) = \lVert \varphi - \psi \rVert\).
-Completeness of the metric is a specific property that guarantees that certain (Cauchy) sequences of vectors have a limit value that also exists in \(\mathbb{H}\). This is necessary to make sense of e.g. Fourier series.
-The state of a quantum system is associated to a ray of vectors, which is the one-dimensional space \(\{ a \ket{\psi} , \forall a \in \mathbb{C}\}\) spanned by a single (nonzero) vector \(\ket{\psi} \in \mathbb{H}\). We will describe the state of the system using a single representative \(\ket{\psi}\) of this ray, which we typically choose such that \(\braket{ \psi | \psi} = 1\). However, this does not fix the representative completely, as we can still add arbitrary phases \(\exp(\mathrm{i}\alpha)\), i.e. \(\ket{\psi}\) and \(\mathrm{e}^{\mathrm{i}\alpha} \ket{\psi}\) describe the same state.
-The best known Hilbert space from your courses on single-particle quantum mechanics is probably \(L^2(\mathbb{R}^n)\), the Hilbert space for a single quantum particle moving in the \(n\)-dimensional coordinate space \(\mathbb{R}^n\) (typically \(n=1,2,3\)). This Hilbert space corresponds to the space of all square-integrable functions \(\psi:\mathbb{R}^n \to \mathbb{C}: x \mapsto \psi(x)\), and the inner product is given by
-However, this is already a complicated Hilbert space from a technical perspective. Hilbert spaces can also be finite-dimensional, i.e. \(\mathbb{C}^d\), the space of column vectors of length \(d\), with the standard Euclidean inner product
-These Hilbert spaces will be very important in our discussion. The simplest nontrivial case corresponds to \(d=2\), and the associated quantum system is known under various names. It is often referred to as a qubit in the context of quantum information theory. There are various ways in which qubits can be physically realised. Another common example of a two-dimensional Hilbert space is for describing the spin degree of freedom of an electron, or of another particle with spin quantum number 1/2. Such a reduced description (forgetting about the position) is possible if the electron is localised in space, for example when it is strongly bound to an atom.
-If we do want to describe a particle that moves in space, we might also consider it to exist only at discrete positions in space, i.e. on a lattice. For example, on a one-dimensional lattice (a.k.a. a chain) with \(L\) sites, the Hilbert space would also correspond to \(\mathbb{H} = \mathbb{C}^L\), and the standard basis vectors \(\vert j \rangle, j=1,\dots,L\) correspond to the state of the system if the particle is exactly localised on site \(j\). We can also consider infinitely large lattices, e.g. the one-dimensional chain where there is a site associated with every \(j \in \mathbb{Z}\) (or the \(n\)-dimensional hypercubic lattice \(\mathbb{Z}^n\)). The resulting Hilbert space is then spanned by the states \(\vert j \rangle\) for all \(j \in \mathbb{Z}\), and is thus infinite-dimensional, but with a straightforward countably infinite basis.
-Of course, our goal is to find the Hilbert space of a many-body system. We return to this question below and devote a complete section to it.
-Physical observables of the system correspond to self-adjoint (a.k.a. Hermitian) linear operators on the Hilbert space \(\mathbb{H}\).
-An operator \(\hat{A}\) on \(\mathbb{H}\) is a linear map \(\hat{A}:\mathbb{H} \to \mathbb{H}\), i.e. a map from vectors to vectors that satisfies
-The adjoint of an operator \(\hat{A}\) is a new operator \(\hat{A}^\dagger\) that is constructed such that
-for all \(\ket{\varphi}, \ket{\psi} \in \mathbb{H}\) and where \(\vert \hat{A}\psi \rangle = \hat{A} \ket{\psi}\). This definition implies that \((a_1 \hat{A}_1 + a_2 \hat{A}_2)^\dagger = a_1^\ast \hat{A}_1^\dagger + a_2^\ast \hat{A}_2^\dagger\) and \((\hat{A}_1 \hat{A}_2)^\dagger = \hat{A}_2^\dagger \hat{A}_1^\dagger\).
-A self-adjoint operator is an operator such that \(\hat{A}^\dagger = \hat{A}\), or thus
-for all \(\ket{\varphi}, \ket{\psi} \in \mathbb{H}\). Linear combinations of self-adjoint operators with real coefficients are again self-adjoint. The composition \(\hat{A}_1 \hat{A}_2\) of two self-adjoint linear operators is self-adjoint if and only if
-i.e. if the operators also commute. Self-adjoint operators have real eigenvalues, and eigenvectors associated to distinct eigenvalues are orthogonal. In a finite-dimensional Hilbert space, self-adjoint operators admit a spectral decomposition
-where \(\hat{P}_n\) is the spectral projector onto the eigenspace associated with \(\lambda_n\). The spectral projectors satisfy \(\hat{P}_n \hat{P}_m = \delta_{n,m} \hat{P}_n\), \(\hat{P}_n^\dagger = \hat{P}_n\) and \(\sum_{n} \hat{P}_n = \mathbb{1}\), the identity operator. If \(\lambda_n\) has a one-dimensional eigenspace spanned by the eigenvector \(\vert\phi_n\rangle\), then
-where the denominator can be omitted if the eigenvector is normalised.
-In the language of matrices, these properties can be rephrased as follows: with respect to an orthonormal basis, a self-adjoint operator is represented by a Hermitian matrix. Such a matrix can be diagonalised by a unitary transformation, i.e. we can construct a complete basis consisting of eigenvectors. With respect to this basis, the self-adjoint operator is represented by a diagonal matrix with real values on the diagonal.
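The spectral decomposition is easy to make concrete in this matrix language. The following sketch (Python/NumPy for illustration; the random \(4\times 4\) operator is an arbitrary choice) builds the rank-1 projectors \(\hat{P}_n\) from the eigenvectors and checks the resolution of the identity and the reconstruction \(\hat{A} = \sum_n \lambda_n \hat{P}_n\):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2          # a random self-adjoint (Hermitian) operator

lam, U = np.linalg.eigh(A)        # real eigenvalues, unitary matrix of eigenvectors

# Rank-1 spectral projectors P_n = |phi_n><phi_n| (eigenvalues generic, nondegenerate)
P = [np.outer(U[:, n], U[:, n].conj()) for n in range(4)]

resolution = sum(P)                                   # sum_n P_n = identity
reconstruction = sum(l * p for l, p in zip(lam, P))   # sum_n lambda_n P_n = A
```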
-Given an observable to which we associate the operator \(\hat{A}\), we now need to prescribe the result of measuring this observable for a system that is in a state \(\ket{\psi}\). The most compact way of describing the result is by stating that the expectation value \(\braket{\hat{A}}\) (= the mean value of the measurement when averaging over an ensemble of identical copies of the system) is given by
-By exploiting the fact that this also prescribes the expectation value of all higher moments \(\braket{\hat{A}^k}\), this determines the full probability distribution of the measurement outcome, and yields the more familiar result: the only possible measurement outcomes are given by the eigenvalues \(\lambda_n\) of \(\hat{A}\), and for a system in state \(\ket{\psi}\) (now assumed normalised), the probability of obtaining \(\lambda_n\) is given by \(p_n = \braket{\psi \vert \hat{P}_n \vert \psi}\) with \(\hat{P}_n\) the spectral projector from above. In the case that \(\lambda_n\) has a single (linearly independent) eigenvector \(\ket{\phi_n}\) (also assumed normalised), this amounts to \(p_n = \vert \braket{\phi_n|\psi}\vert^2\).
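As a small numerical check of this statement (a NumPy sketch with an arbitrary random observable and state), the Born probabilities \(p_n = |\braket{\phi_n|\psi}|^2\) sum to one and reproduce the expectation value \(\braket{\psi|\hat{A}|\psi} = \sum_n p_n \lambda_n\):

```python
import numpy as np

rng = np.random.default_rng(2)
# A random normalised state and a random Hermitian observable on C^3
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = (M + M.conj().T) / 2

lam, U = np.linalg.eigh(A)
p = np.abs(U.conj().T @ psi)**2   # Born probabilities p_n = |<phi_n|psi>|^2

expectation = (psi.conj() @ A @ psi).real
```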
-There is a second part to the measurement postulate, which states that, if the measurement is immediately repeated (without intermediate dynamics, as described by the next postulate), then the same measurement outcome is obtained. Because the measurement outcome with respect to the initial state \(\ket{\psi}\) is probabilistic and can yield different results, this requires that the state is changed after the first measurement. This is the well-known collapse of the wave function. More specifically, if a measurement of the observable \(\hat{A}\) is performed on a system in state \(\ket{\psi}\) and the measurement value \(\lambda_n\) is obtained, then the state of the system changes to
-Note that the denominator cannot vanish, as otherwise the probability of having obtained measurement outcome \(\lambda_n\) would have been zero in the first place.
-During time intervals without measurements, the state of an isolated quantum system evolves unitarily according to the (first order linear) differential equation
-known as the Schrödinger equation, where \(\hat{H}(t)\) is the Hamiltonian of the system, which may itself be time-dependent. In the case of a time-independent Hamiltonian, we can define the evolution operator
-which relates states at different times via \(\ket{\psi(t)} = \hat{U}(t, t') \ket{\psi(t')}\) and is clearly a unitary operator. Note that we need to know the Hamiltonian of a system in order to even start thinking about modelling its quantum properties. We will always assume that the Hamiltonian is given. In practice, however, the situation can be much more complicated. Typically, we want to build only an effective quantum description of the system (e.g. only the electrons, only certain electrons, \(\ldots\)) and not start all the way down at the level of fundamental particles and the standard model (which is itself also only an effective model, valid up to some energy scale).
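For a time-independent Hamiltonian, these properties of the evolution operator are straightforward to verify numerically. The sketch below (Python/SciPy; the single-spin Hamiltonian and its coefficients are arbitrary illustrative choices) checks unitarity and the composition property \(\hat{U}(t_1+t_2) = \hat{U}(t_1)\hat{U}(t_2)\):

```python
import numpy as np
from scipy.linalg import expm

# A single-spin Hamiltonian with arbitrary (illustrative) coefficients
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.3 * sx + 0.7 * sz

U = expm(-1j * 1.3 * H)           # U(t, 0) = exp(-i t H) for t = 1.3

unitary = np.allclose(U.conj().T @ U, np.eye(2))

# Composition: U(t1 + t2) = U(t1) U(t2) for a time-independent Hamiltonian
U1 = expm(-1j * 0.5 * H)
U2 = expm(-1j * 0.8 * H)
composes = np.allclose(U, U1 @ U2)
```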
-Quantum-to-Classical Mapping
-In this final section, we introduce a general technique that essentially enables us to map any quantum lattice system in \(d\) dimensions to a classical partition function in \(d+1\) dimensions, up to some caveats that we will return to at the end of this section.
-Remember that thermal expectation values are given by
-with the thermal partition function \(Z(\beta)\) given by
-The ground state physics is encoded in the limit \(\beta \to \infty\). Note that, if the system has a unique ground state \(\ket{\Psi_0}\), we can obtain it by starting from an essentially random state \(\ket{\Phi}\) and evolving it in imaginary time \(\tau\) (obtained via the substitution \(t \to -\mathrm{i}\tau\)) for a sufficiently long time
-Expanding the initial state \(\ket{\Phi}\) in the energy eigenbasis of \(\hat{H}\), we see that the only condition is that it is not orthogonal to the ground state (subspace). In addition, the ground state will be well approximated if \(\tau \Delta E \gg 1\), with \(\Delta E = E_1 - E_0\) the energy gap. This imaginary-time evolution also forms the basic ingredient of several numerical algorithms for approximating ground states of quantum many-body systems, often in combination with the Suzuki-Trotter decomposition which is introduced below.
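This projection onto the ground state can be demonstrated in a few lines. The sketch below (Python/SciPy for illustration; the two-site transverse-field Ising Hamiltonian and the value of \(\tau\) are arbitrary choices) applies \(\mathrm{e}^{-\tau \hat{H}}\) to a random state and compares the result with the exact ground state:

```python
import numpy as np
from scipy.linalg import expm

# Two-site transverse-field Ising Hamiltonian (parameters chosen arbitrarily)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)
H = -np.kron(sz, sz) - 0.5 * (np.kron(sx, I2) + np.kron(I2, sx))

E, V = np.linalg.eigh(H)
gs = V[:, 0]                      # exact (unique) ground state

rng = np.random.default_rng(3)
phi = rng.normal(size=4)          # essentially random initial state
phi /= np.linalg.norm(phi)

tau = 30.0                        # chosen so that tau * (E_1 - E_0) >> 1
psi = expm(-tau * H) @ phi        # imaginary-time evolution
psi /= np.linalg.norm(psi)

overlap = abs(gs @ psi)           # approaches 1 as tau grows
```

The excited-state components are suppressed by factors \(\mathrm{e}^{-\tau(E_n - E_0)}\), which is exactly the \(\tau \Delta E \gg 1\) condition stated above.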
-Using this approach, the following expression for the ground state expectation value of an operator \(\hat{O}\) is obtained
-This expression can be compared to the thermal expectation value with \(\beta = 2\tau\); the only difference is in the boundary conditions.
-For a quantum many-body system, computing this exponential is as hard as determining the full diagonalisation of the Hamiltonian, which is impossible due to the exponentially large Hilbert space. If the Hamiltonian is a sum of local terms, each of these terms can be exponentiated easily, but for arbitrary \(\tau\) there is no simple relationship between \(\exp(-\tau \sum_{i} \hat{h}_i)\) and the individual \(\exp(-\tau \hat{h}_i)\), unless the different \(\hat{h}_i\) commute. However, for an infinitesimal time step \(\epsilon\), we can use the Baker-Campbell-Hausdorff formula (or better yet, the Zassenhaus formula) to obtain
-This then leads to the Suzuki-Trotter decomposition
-The product in the final expression requires choosing a specific order, exactly because the terms \(\hat{h}_i\), and thus also the factors \(\mathrm{e}^{-\frac{\tau}{M} \hat{h}_i}\), do not commute. The approximation and error term are valid for arbitrary choices of ordering, but different orderings are not equivalent, and particular choices can be more suitable for particular purposes. Furthermore, note that splitting the time interval \([0,\tau]\) into small segments \(\epsilon = \tau/M\) is also the starting point for deriving a path integral representation of the quantum partition function. The next step is to insert a resolution of the identity between the \(M\) different factors, where the labels of the basis will behave as classical degrees of freedom. For obtaining a path integral, the basis should be labelled by a number of continuous degrees of freedom, which can then become continuous functions of time in the limit \(\epsilon\to 0\). Here, instead, we will keep \(\epsilon\) small but finite, and use a discrete basis.
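The error scaling of this first-order splitting can be checked directly. The sketch below (Python/SciPy; two noncommuting \(2\times 2\) terms with arbitrary coefficients) compares the exact exponential with the Trotterised product for increasing \(M\), and checks that halving \(\epsilon = \tau/M\) roughly halves the error:

```python
import numpy as np
from scipy.linalg import expm

# Two noncommuting local terms (2x2 toy example, coefficients arbitrary)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
h1, h2 = sz, 0.8 * sx

tau = 1.0
exact = expm(-tau * (h1 + h2))

def trotter(M):
    """First-order Trotter approximation with M steps of size eps = tau/M."""
    eps = tau / M
    step = expm(-eps * h1) @ expm(-eps * h2)
    return np.linalg.matrix_power(step, M)

errors = [np.linalg.norm(trotter(M) - exact) for M in (10, 20, 40)]
# First order in eps: doubling M should roughly halve the error
ratio = errors[0] / errors[1]
```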
-Let us start with a quantum system in \(d=0\) dimensions, i.e. a small number of spins; in particular, a single spin described by the Hamiltonian
-While we could in principle exponentiate \(\hat{H}\) directly, as it is a \(2 \times 2\) matrix, we will treat it using the Suzuki-Trotter decomposition. Throughout the remainder of this section, we will use the \(\sigma^z\) basis, which we denote as \(\ket{1} = \ket{\uparrow}\) and \(\ket{-1} = \ket{\downarrow}\). Inserting resolutions of the identity, we write
-with \(s_{M+1} = s_1\), \(M\epsilon =\beta\), and where
-where the parameters in the last line are given by \(K = -\frac{1}{2}\log \tanh(\epsilon h_x)\), \(h = \epsilon h_z\) and \(f_0 = \frac{1}{2}\log[\cosh(\epsilon h_x)\sinh(\epsilon h_x)]\).
-We thus obtain
-the partition function of the one-dimensional classical Ising model with periodic boundary conditions. Indeed, \(\braket{s_{i}|\mathrm{e}^{-\epsilon H}|s_{i+1}}\) corresponds exactly to the transfer matrix, and diagonalising the transfer matrix is the most straightforward approach to solving the one-dimensional classical Ising model.
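For the special case \(h_z = 0\), this correspondence can be verified directly. The sketch below (Python/SciPy; field strength, step size and number of time slices are arbitrary choices) checks that the matrix elements of \(\mathrm{e}^{-\epsilon H}\) reproduce the classical Boltzmann weights \(\mathrm{e}^{f_0 + K s s'}\) with the parameters quoted above, and hence that the quantum and classical partition functions agree:

```python
import numpy as np
from scipy.linalg import expm

# Single spin in a transverse field (h_z = 0 for simplicity): H = -h_x sigma^x
sx = np.array([[0., 1.], [1., 0.]])
hx, eps, M = 0.7, 0.1, 50

T = expm(eps * hx * sx)           # transfer matrix <s|e^{-eps H}|s'>

# Classical Ising weights exp(f0 + K s s') with the parameters derived above
K = -0.5 * np.log(np.tanh(eps * hx))
f0 = 0.5 * np.log(np.cosh(eps * hx) * np.sinh(eps * hx))
s = np.array([1., -1.])           # basis ordering |1>, |-1>
T_classical = np.exp(f0 + K * np.outer(s, s))

# Quantum partition function = classical 1D Ising partition function
Z_quantum = np.trace(np.linalg.matrix_power(T, M))
Z_classical = np.trace(np.linalg.matrix_power(T_classical, M))
```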
-We now apply the same approach to the Ising model with both transverse and longitudinal fields in \(d\) dimensions, on a hypercubic lattice. We separate the Hamiltonian in two parts according to
-Note that \(\hat{H}_1\) and \(\hat{H}_2\) each contain mutually commuting terms, but of course do not commute with each other. We follow the same strategy, and will in every (imaginary) time step introduce a resolution of the identity using the tensor product \(\sigma^z\) basis. We now denote the basis at time step \(k\) as \(\ket{\{s_{i,k}\}}\), where \(i\) labels a site in the \(d\)-dimensional lattice hosting the quantum degrees of freedom, and \(k\) labels points along the imaginary time axis, which emerges as a new dimension in the problem. We find
-as \(\hat{H}_1\) is diagonal in this basis, and
-with \(K_\perp = -\frac{1}{2}\log \tanh(\epsilon h_x)\) as before. Here, we have now ignored an overall proportionality factor, which is irrelevant when using the partition function to compute expectation values. With this, we find
-with \(K_{\parallel} = \epsilon J\) and \(h = \epsilon h_z\). We thus find the partition function of the classical Ising model in \(d+1\) dimensions with anisotropic interaction strengths, periodic boundary conditions in the imaginary time direction, and a number of sites in the time direction given by \(M = \beta/\epsilon\). Hence, the ground state regime \(\beta\to \infty\) corresponds to the thermodynamic limit in this additional time direction of the corresponding classical system, so that there are many similarities (or actually, equivalences) between quantum phenomena in \(d\) dimensions and classical phenomena in \(D=d+1\) dimensions. On the other hand, when the quantum system is at a finite temperature \(\beta\), the additional dimension is finite and never in the thermodynamic limit. In that case, this extra dimension cannot cause new non-analyticities in the partition function, and finite-temperature quantum systems in \(d\) dimensions are very similar to classical systems in \(d\) dimensions.
-This quantum-to-classical mapping can also be inverted. Taking a codimension-\(1\) slice out of a \(D\)-dimensional classical partition function, one obtains a transfer matrix which can be interpreted as the exponential of a quantum Hamiltonian acting on the Hilbert space of a \(d=D-1\) dimensional quantum system. Hence, methods for targeting quantum ground states in \(d\) dimensions can also be used to study problems in \((d+1)\)-dimensional classical statistical mechanics.
-It is clear that the quantum-to-classical mapping is not specific to the quantum Ising model and can be applied to any Hamiltonian. The path integral representation of the partition function fits within the same scheme, and only differs in the fact that the limit \(\epsilon\to 0\) is taken, such that the additional dimension becomes continuous. This is particularly natural if the spatial dimensions of the quantum system are also continuous, i.e. if we have a quantum field theory. In this case, a \(D=d+1\) dimensional classical statistical field theory is obtained. For relativistic quantum field theories, where it is common practice to explicitly count the time dimension together with the space dimensions, imaginary time evolution leads to an action, which is equivalently a Hamiltonian of a classical field theory in \(D=d+1\) spatial dimensions, with full Euclidean invariance.
-One might thus wonder if there is anything new to be learned from studying quantum ground states. First of all, there is one important catch which we have overlooked so far: there is no guarantee that the above procedure yields a classical partition function with Boltzmann weights that are positive, or even real. While this may seem like a technical detail, it is of major importance. The quantum-to-classical mapping is the basis of the quantum Monte Carlo method, one of the most successful numerical methods for studying quantum many-body systems: one maps the quantum problem to a classical partition function and then uses one of the many flavours of Monte Carlo sampling. However, with non-positive Boltzmann weights, the interpretation as a probability distribution is lost, and no efficient sampling procedure can be designed, as contributions with opposite signs cancel each other. This is known as the sign problem.
-Secondly, for many non-relativistic quantum systems, the anisotropy between the imaginary time direction and the spatial dimensions in the corresponding classical system cannot be ignored, even at the critical point. In those cases, critical correlations behave differently in the spatial and the time direction, which is characterised by a dynamical critical exponent \(z\). The case \(z=1\) corresponds to a critical point with (emergent) rotation/Lorentz invariance between time and space.
-A final reason to study quantum systems directly is that certain concepts are more natural in that setting. In particular, over the last 15 years, ideas from quantum information theory, and in particular the concept of entanglement, have made their way into the standard toolbox to study and characterize quantum many-body systems.
-(Multi-) Linear Algebra
- -Contents
-This lecture covers some basic linear algebra concepts and operations that will serve as the foundation for most of what follows. The goal is to provide some intuitive understanding of the concepts, without insisting on too much mathematical rigour. The most important aim is to introduce and define the concept of a tensor, without resorting to the usual mathematical definition, which is not very intuitive.
-Simultaneously, the lecture also showcases some of the features of TensorKit.jl, a Julia package that is extremely well-suited for demonstrating the concepts that are discussed.
-using TensorKit
-
Before discussing tensor networks, it is necessary to understand what tensors are. Furthermore, before really understanding tensors, it is instructive to reiterate some basic concepts of linear algebra for the case of vectors and matrices, which are nothing but specific cases of tensors. In fact, many of the concepts and ideas that are introduced and discussed are defined in terms of thinking of tensors as vectors or matrices.
-In what follows, vectors and matrices will be thought of from the viewpoint of computers, where they are represented using regular one- and two-dimensional arrays of either real or complex numbers. Nevertheless, much of the discussion can be readily generalized to arbitrary vector spaces and linear maps.
-In general, a vector is an object in a vector space, which can be described by a list of numbers that correspond to the components of the vector in some basis. For example, a vector in a two-dimensional space is in its most general form described by \(\vec{v} = \left[v_1, v_2\right]^T\).
-As a reminder, the defining properties of vector spaces make sure that the following operations are well-defined:
-Vectors can be added together, i.e. \(\vec{v} + \vec{w}\) is a vector.
Vectors can be multiplied by scalars, i.e. \(\alpha \vec{v}\) is a vector.
These operations behave as expected, i.e. there is a notion of associativity, commutativity, and distributivity.
Given two such vector spaces (not necessarily distinct) it is possible to define a linear map between them, which is just a function that preserves the vector space structure. In other words, a linear map \(A \colon V \rightarrow W\) maps vectors from one vector space \(V\) to another vector space \(W\). Because of the structure of vector spaces, and the requirement of linearity, such a map is completely determined by its action on the basis vectors of \(V\). This leads in a very natural way to the notion of a matrix by considering the following construction, where \(v_i\) are the components of \(\vec{v}\) and \(w_i\) are the components of \(\vec{w}\):
-where \(A_{ij}\) are the components of the matrix \(A\) in these bases. In other words, the abstract notion of a linear map between vector spaces can be represented by a concrete matrix, and the action of the map is the usual matrix product.
-In particular, it is instructive to think of the columns of the matrix \(A\) as labelling the components of the input vector space, also called the domain, while the rows label the components of the output vector space, or codomain.
-In the context of Julia, we can create vector spaces, vectors and matrices through a syntax that follows this very closely:
-V = ℂ^2 # type as \bbC<TAB>
-W = ComplexSpace(3) # equivalent to ℂ^3
-
-A = TensorMap(rand, Float64, W ← V) # ← as \leftarrow<TAB>
-v = Tensor(rand, Float64, V)
-w = A * v
-
-w[1] ≈ A[1,1] * v[1] + A[1,2] * v[2]
-
true
-
Note
-For linear maps, both notations \(V \rightarrow W\) and \(W \leftarrow V\) are used to denote their codomain and domain. The choice of notation is mostly a matter of taste, as left to right might seem more conventional for a language that reads from left to right, while right to left is more natural from the mathematical point of view, where matrices typically act on vectors from the left. In TensorKit, both notations are supported through the → and ← operators, and a Unicode-less version is also available, which defaults to ←.
-Thus, the following are all equivalent:
A = TensorMap(rand, Float64, V → W)
-A = TensorMap(rand, Float64, W ← V)
-A = TensorMap(rand, Float64, W, V)
-
Using the same logic as above, it is possible to generalize the notion of a linear map by making use of the tensor product, which is nothing but an operation that can combine two vector spaces \(V\) and \(W\) into a new vector space \(V \otimes W\). The tensor product is defined in such a way that the combination of vectors from the original vector spaces preserves a natural notion of linearity, i.e. the following equality holds for all vectors \(v \in V\), \(w \in W\), and scalars \(\lambda\):
-λ = rand()
-(λ * v) ⊗ w ≈ v ⊗ (λ * w) ≈ λ * (v ⊗ w)
-
true
-
This new vector space can be equipped with a canonical basis, which is constructed by taking the tensor product of the basis vectors of the original vector spaces. For example, if \(V\) and \(W\) are two-dimensional vector spaces with basis vectors \(v_i\) and \(w_j\), respectively, then the basis vectors of \(V \otimes W\) are given by \(v_i \otimes w_j\). In other words, the vectors in \(V \otimes W\) are linear combinations of all combinations of the basis vectors of \(V\) and \(W\).
-When considering how to represent a vector in this new vector space, it can be written as a list of numbers that correspond to the components of the vector in that basis. For example, a vector in \(V \otimes W\) is described by:
-t = Tensor(rand, Float64, V ⊗ W)
-t[] # shorthand for extracting the multi-dimensional array of components
-
2×3 StridedViews.StridedView{Float64, 2, Matrix{Float64}, typeof(identity)}:
- 0.526571 0.413696 0.719603
- 0.487646 0.238492 0.286419
-
Here, the tentative name \(t\) was used to denote that this is in fact a tensor, where \(t_{i_1i_2}\) are the components of that tensor \(t\) in the basis \(v_{i_1} \otimes w_{i_2}\). Because of the induced structure of the tensor product, it is more natural and very common to express this object not just as a list of numbers, but by reshaping that list into a matrix. In this case, the components of the \(i_1\)-th row correspond to basis vectors that are built from \(v_{i_1}\), and similarly the \(i_2\)-th column corresponds to basis vectors that are built from \(w_{i_2}\).
-As the tensor product can be generalized to more than two vector spaces, this finally leads to the general definition of a tensor as an element of the vector space that is built up from the tensor product of an arbitrary number of vector spaces. Additionally, the components of these objects are then naturally laid out in a multi-dimensional array, which is then by a slight misuse of terminology also called a tensor.
-Note
-The reshaping operation of components from a list of numbers into a multi-dimensional array is a mapping between linear indices \(I\) and Cartesian indices \(i_1, i_2, \cdots, i_N\). This is a very common and useful trick which allows reinterpreting tensors as vectors, or vice versa.
-LinearIndices((1:2, 1:3))
-
2×3 LinearIndices{2, Tuple{UnitRange{Int64}, UnitRange{Int64}}}:
- 1 3 5
- 2 4 6
-
collect(CartesianIndices((1:2, 1:3))) # collect to force printing
-
2×3 Matrix{CartesianIndex{2}}:
- CartesianIndex(1, 1) CartesianIndex(1, 2) CartesianIndex(1, 3)
- CartesianIndex(2, 1) CartesianIndex(2, 2) CartesianIndex(2, 3)
-
Due to the fact that the tensor product of vector spaces is a vector space in and of itself, it is again possible to define linear maps between such vector spaces. Keeping in mind the definition of a linear map from (8.1), the columns now label components of the input vector space, while the rows label components of the output vector space. Now, however, the components of the input and output vector spaces are themselves comprised of a combination of basis vectors from the original vector spaces. If a linear order of these combinations can be established, the linear map can again be represented by a matrix:
-V1 = ℂ^2
-V2 = ℂ^2
-W1 = ℂ^2
-W2 = ℂ^2
-
-A = TensorMap(rand, Float64, W1 ⊗ W2 ← V1 ⊗ V2)
-v = Tensor(rand, Float64, V1 ⊗ V2)
-w = A * v
-w[] ≈ reshape(reshape(A[], 4, 4) * reshape(v[], 4), 2, 2)
-
true
-
The attentive reader might have already noted that the definition of a linear map as a matrix strongly resembles the definition of a vector in a tensor product vector space. This is not a coincidence, and in fact the two can easily be identified by considering the following identification (isomorphism):
-A = TensorMap(rand, Float64, W ← V)
-B = Tensor(rand, Float64, W ⊗ V')
-space(A, 2) == space(B, 2)
-
true
-
Note
-For finite-dimensional real or complex vector spaces without additional structure, this isomorphism is trivial and is just the reshaping operation of the components of a vector into a matrix. However, note that this is a choice, which is not unique, and already differs for row- and column-major order. In a more general setting, the identification between \(V \otimes W^*\) and \(V \leftarrow W\) is not an equivalence but an isomorphism. This means that it is still possible to relate one object to the other, but the operation is not necessarily trivial.
-The entire discussion can be summarized in the following equivalent definitions of a tensor:
-A tensor is an element of a tensor product of vector spaces, which can be represented as a multi-dimensional array of numbers that indicate the components along the constituent basis vectors. Thus, a tensor is vector-like.
A tensor is a multi-linear map between vector spaces, which can be represented as a matrix that represents the action of the map on the basis vectors of the input vector space. Thus, a tensor is matrix-like.
The equivalence of these two definitions leads to the lifting of many important facets of linear algebra to the multi-linear setting.
-Symmetries in Quantum Many-Body Physics
-The goal of this section is to give a very gentle introduction to the concept of symmetries in quantum many-body physics, and the notion of symmetric tensors. The general mathematical framework of symmetries in physics (or at least the framework we will restrict ourselves to) is that of group and representation theory. Our goal is not to take this framework as a given and illustrate it, but rather to first discuss a couple of important applications of symmetries in the context of some concrete models, and gradually build up to the more general framework. We will finish our discussion with an outlook to generalizations of the framework presented here. It goes without saying that we will only scratch the surface of this vast topic. The interested reader is referred to the immense literature on this topic, or to a more specialized course.
-Recall the one-dimensional transverse field Ising model defined above. Its degrees of freedom are qubits ordered on a one-dimensional lattice, and its Hamiltonian reads
-Let us simply consider periodic boundary conditions. Besides the obvious translation symmetry, which we will discuss below, this model is also invariant under flipping all spins simultaneously in the Z-direction, i.e. in the Pauli Z basis: \(\ket{\uparrow}\leftrightarrow\ket{\downarrow}\). That this operation constitutes a symmetry is clear from the Hamiltonian, as the energy of the first term only depends on neighbouring spins being (anti-)aligned, which is clearly spin-flip invariant. The second term is trivially invariant, as it models an external magnetic field orthogonal to the Z-direction.
-This spin flip is “implemented”, or more correctly “represented”, by the unitary operator \(P=\bigotimes_i \sigma^x_i\). Notice that \(P^2=1\), in accordance with our intuition that flipping all the spins twice is equivalent to leaving all spins untouched. The fact that this operator represents a symmetry of the model then translates to \([H,P]=0\), or equivalently \(P^\dagger HP=H\). Notice that the identity operator is also trivially a symmetry (of every model), and thus the set \(\{1,P\}\) is closed under taking products.
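Both properties, \(P^2 = 1\) and \([H, P] = 0\), can be checked explicitly for a small chain. The sketch below (Python/NumPy; the chain length \(N=3\) and field strength are arbitrary choices) builds the transverse-field Ising Hamiltonian and the spin-flip operator as explicit matrices:

```python
import numpy as np
from functools import reduce

# Transverse-field Ising chain on N = 3 sites with periodic boundary conditions
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)
N, hx = 3, 0.5

def site_op(single, site):
    """Embed a single-site operator at position `site` of the N-site chain."""
    return reduce(np.kron, [single if i == site else I2 for i in range(N)])

H = -sum(site_op(sz, i) @ site_op(sz, (i + 1) % N) for i in range(N))
H = H - hx * sum(site_op(sx, i) for i in range(N))

P = reduce(np.kron, [sx] * N)     # global spin flip P = X (x) X (x) X

commutes = np.allclose(H @ P, P @ H)
involution = np.allclose(P @ P, np.eye(2**N))
```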
-Even though the Hamiltonian has this symmetry regardless of the value of the parameter \(h_x\), you might know from a previous course that the ground state or ground state subspace is not necessarily invariant under the symmetry, a phenomenon known as spontaneous symmetry breaking (SSB), or symmetry breaking for short. Let us investigate the ground state subspace of the transverse field Ising model in the extremal cases of vanishing and infinite magnetic field.
\(h_x\rightarrow \infty\): In this case the model effectively reduces to a paramagnet. The unique ground state is the product state \(\ket{\Psi_+}=\ket{+}^{\otimes N}\), where \(\ket{+}=\frac{1}{\sqrt{2}}(\ket{\uparrow}+\ket{\downarrow})\) is the unique eigenvalue-1 eigenvector of \(\sigma^x\). Notice that this state is invariant under the symmetry operator \(P\): \(P\ket{\Psi_+}=\ket{\Psi_+}\). In other words, the ground state in this case is symmetric. For reasons mentioned below, this state is also considered to be disordered.
\(h_x=0\): In this case the energy is minimized by aligning all the spins, and the model behaves as a classical ferromagnet. Obviously, two distinct ground states are \(\ket{\Psi_\uparrow}=\ket{\uparrow\uparrow...\uparrow}\) and \(\ket{\Psi_\downarrow}=\ket{\downarrow\downarrow...\downarrow}\). Contrary to the previous case, these states span a two-dimensional ground state subspace, and they are not symmetric. In fact, under the action of \(P\) they get mapped onto each other: \(P\ket{\Psi_\uparrow}=\ket{\Psi_\downarrow}\) and vice versa. The ground state in this case is thus symmetry broken, or ordered.
Since the ground state degeneracy is necessarily an integer, it is clear that it cannot change smoothly from two to one as the magnetic field is slowly turned on from \(h_x = 0\) to \(h_x \rightarrow \infty\). Therefore the Ising model at small \(h_x\) and at large \(h_x\) is said to belong to different phases, and for some finite value of \(h_x\) a phase transition, where the ground state degeneracy changes abruptly, is expected to take place. As it turns out, this change happens at \(h_x = 1\), at which point the Ising model becomes critical.
Inspired by the credo of symmetry, we can introduce a local operator which probes the phase and can witness the phase transition. In the case of the Ising model this order parameter is the magnetisation \(O=\sum_i\sigma^z_i\). It is clear that this order parameter anticommutes with the symmetry, \(P^\dagger OP=-O\), from which it follows that in the symmetric phase the expectation value of the order parameter vanishes, \(\braket{\Psi_+|O|\Psi_+}=0\), while in the ferromagnetic phase \(\braket{\Psi_\uparrow|O|\Psi_\uparrow}>0\) and \(\braket{\Psi_\downarrow|O|\Psi_\downarrow}<0\). Notice however that for the latter we could also have chosen the ground state \(\ket{\Psi_\uparrow}+\ket{\Psi_\downarrow}\), in which case the expectation value of \(O\) becomes 0. So it seems that the expectation value of the order parameter is ill-defined in this phase. This can be remedied by first adding a small symmetry breaking term \(\lambda\sum_i\sigma^z_i\) to the Hamiltonian which, depending on the sign of \(\lambda\), selects one of the ground states \(\ket{\Psi_{\uparrow/\downarrow}}\), after which the limit \(\lambda\rightarrow 0\) is taken.
The synopsis of this example is thus the following. Symmetries in quantum many-body physics (but also in single-particle quantum mechanics) are represented by unitary operators which are closed under multiplication. Depending on the parameters in the Hamiltonian, part of these symmetries can be broken by the ground state subspace, and this pattern of symmetry breaking is a hallmark feature of different phases of the model. Different phases can be probed by a local order parameter which does not commute with the symmetries. This paradigm of classifying phases based on symmetry principles was first put forward by Landau [Landau, 1937], and since then bears his name.
You might remember Noether’s theorem from a course on field theory. It states that every continuous symmetry of a system (in field theory most often defined via its Lagrangian) gives rise to a conserved current. In the context of quantum physics Noether’s theorem becomes almost trivial: every operator that commutes with the Hamiltonian has a conserved expectation value:
The proof is left as a simple exercise.
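As a hint, the computation is a one-liner (setting \(\hbar = 1\)); it uses nothing beyond the Schrödinger equation \(i\partial_t\ket{\psi(t)} = H\ket{\psi(t)}\):

```latex
\frac{\mathrm{d}}{\mathrm{d}t}\braket{\psi(t)|O|\psi(t)}
= i\braket{\psi(t)|HO|\psi(t)} - i\braket{\psi(t)|OH|\psi(t)}
= i\braket{\psi(t)|[H,O]|\psi(t)} ,
```

which vanishes precisely when \([H,O]=0\).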
The simplest example of this principle is obviously the Hamiltonian itself, which trivially commutes with itself. The consequence is that the expectation value of the total energy is conserved.
Another example is that of translation symmetry. Translation symmetry is implemented by the operator \(T\) that acts on local operators \(O_i\) via \(T^\dagger O_iT=O_{i+1}\). Since for a system with \(N\) sites we obviously have the identity \(T^N=1\), and \(T\) is unitary, the eigenvalues of \(T\) are phases \(\exp(2\pi ip/N)\), where the quantum number \(p=0,1,...,N-1\) is the momentum. By virtue of Noether’s theorem, translation invariance is understood to give rise to conservation of momentum, and thus momentum acts as a good quantum number for the eigenstates of translationally invariant models.
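As a sanity check, one can construct \(T\) explicitly for a few qubits as a permutation matrix and verify that \(T^N=1\) and that its spectrum consists of \(N\)-th roots of unity. The `translation` helper below is our own, written purely for illustration:

```julia
using LinearAlgebra

# translation operator on N qubits: cyclically shifts the sites (helper, ours)
function translation(N)
    D = 2^N
    T = zeros(D, D)
    for s in 0:(D - 1)
        bits = digits(s, base = 2, pad = N)      # little-endian site occupations
        t = sum(circshift(bits, 1)[k] * 2^(k - 1) for k in 1:N)
        T[t + 1, s + 1] = 1.0                    # basis state s ↦ shifted state t
    end
    return T
end

N = 3
T = translation(N)
@assert T' * T ≈ I(2^N)                     # T is unitary (a permutation)
@assert T^N ≈ I(2^N)                        # T^N = 1
λ = eigvals(T)
@assert all(abs.(λ .^ N .- 1) .< 1e-9)      # eigenvalues are exp(2πip/N)
```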
Let us consider another non-trivial example to illustrate the implications of this theorem. Recall the spin-\(s\) XXZ Heisenberg model, whose Hamiltonian reads
The spin operators are \(2s + 1\)-dimensional and satisfy the \(\mathfrak{su}(2)\) commutation relations
Let us define the total spin
From a direct computation it follows that when \(\Delta=1\), where the model reduces to the Heisenberg XXX model, \(H\) commutes with all \(S^a\): \([H,S^a]=0\), \(a=x,y,z\). However, when \(\Delta\neq 1\) only the Z component \(S^z\) commutes with \(H\): \([H, S^z]=0\). Notice the difference with the Ising model, where the same symmetry was present for all values of \(h_x\).
This means that in the \(\Delta=1\) case the Hamiltonian is symmetric under the full \(SU(2)\) (half-integer \(s\)) or \(SO(3)\) (integer \(s\)) symmetry (see below), whereas when \(\Delta\neq 1\) only an \(SO(2)\simeq U(1)\) symmetry generated by \(S^z\) is retained. If \(H\) commutes with \(S^z\), it automatically also commutes with \(\exp(i\theta S^z)\), \(\theta\in[0,2\pi)\). This operator has an interpretation as a rotation around the Z-axis over an angle \(\theta\).
According to Noether, the Heisenberg model thus has conserved quantities associated with these operators. Regardless of \(\Delta\), the Z component of the total spin is conserved, and for \(\Delta=1\) all components of the total spin are conserved. In particular, this means that the eigenvalue \(M_z\) of \(S^z\) and the eigenvalue \(S(S+1)\) of \(\vec{S}\cdot\vec{S}\) are good quantum numbers to label the eigenstates of the Heisenberg Hamiltonian.
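These commutation relations can again be checked numerically on a small spin-1/2 chain. In the sketch below (pure Julia; the overall normalization of the Hamiltonian and the `site` helper are our own choices and do not affect the commutators), the \(U(1)\) symmetry survives for any \(\Delta\), while the full \(SU(2)\) symmetry only appears at \(\Delta = 1\):

```julia
using LinearAlgebra

# spin-1/2 operators Sᵃ = σᵃ/2
Sx = [0.0 1.0; 1.0 0.0] / 2
Sy = [0.0 -1.0im; 1.0im 0.0] / 2
Sz = [1.0 0.0; 0.0 -1.0] / 2
id2 = Matrix(1.0I, 2, 2)
site(op, i, N) = kron([j == i ? op : id2 for j in 1:N]...)

# XXZ chain with periodic boundary conditions (normalization is our choice)
xxz(N, Δ) = sum(site(Sx, i, N) * site(Sx, mod1(i + 1, N), N) +
                site(Sy, i, N) * site(Sy, mod1(i + 1, N), N) +
                Δ * site(Sz, i, N) * site(Sz, mod1(i + 1, N), N) for i in 1:N)

N = 3
Sztot = sum(site(Sz, i, N) for i in 1:N)
Sxtot = sum(site(Sx, i, N) for i in 1:N)

H = xxz(N, 0.5)                                # Δ ≠ 1: only U(1) survives
@assert norm(H * Sztot - Sztot * H) < 1e-12    # [H, Sᶻ] = 0 for any Δ
@assert norm(H * Sxtot - Sxtot * H) > 1e-6     # [H, Sˣ] ≠ 0 when Δ ≠ 1

H1 = xxz(N, 1.0)                               # Δ = 1: full SU(2)
@assert norm(H1 * Sxtot - Sxtot * H1) < 1e-12  # all components conserved
```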
Motivated by the examples above, we will gently introduce some notions of group and representation theory that form the backbone of a general theory of symmetries.
Roughly speaking, a group \(G\) is a set of symmetry operators together with a multiplication rule on how to compose them. Let us motivate the definition one step at a time.
First of all, notice that a model can have a finite or infinite (discrete or continuous) number of symmetries. Clearly, the spin flip symmetry of the Ising model consists of only one non-trivial symmetry operation, namely flipping all spins. The operator carrying out this transformation is \(P=\bigotimes_i\sigma^x_i\). The XXZ model however has a continuous symmetry, namely rotations around the Z-axis, implemented via \(\exp(i\theta S^z)\), \(\theta\in[0,2\pi)\), where we should really think of every value of \(\theta\) as labeling a different symmetry operation.
These symmetries can be composed or multiplied to form a new symmetry operation. Take for example flipping all the spins. Flipping all spins twice results in not flipping any spins at all, which is trivially also a symmetry of the Hamiltonian. Next, consider also the \(U(1)\) symmetry of the XXZ model. First rotating over \(\theta_2\) and then over \(\theta_1\) gives a new rotation over \(\theta_1+\theta_2\): \(\exp(i\theta_1 S^z)\exp(i\theta_2 S^z)=\exp(i(\theta_1+\theta_2) S^z)\). This leads to the first part of the definition of what a group is.
A group \(G\) is a set \(G=\{g_1,g_2,...\}\) endowed with a multiplication \(G\times G\rightarrow G\). There exists an identity \(1\in G\) for the multiplication such that \(1g=g1=g, \forall g\in G\).
Note that this multiplication is not necessarily abelian. A simple example is the full \(SU(2)\) symmetry of the XXX model defined above. However, the composition of symmetries is still associative:
For all group elements \(g,h,k\) we have that \(g(hk)=(gh)k\).
A property we would also like to formalize is the fact that every symmetry transformation can be undone. Take for example a \(U(1)\) rotation \(\exp(i\theta S^z)\): if we compose it with the opposite rotation \(\exp(i(2\pi-\theta) S^z)\) we get the identity. Hence:
Every group element \(g\) has a unique inverse \(g^{-1}\): \(gg^{-1}=g^{-1}g=1\).
Together, 1., 2. and 3. constitute the definition of a group. Before mentioning some examples, let us also introduce the concept of a subgroup. As the name suggests, a subgroup is a subset of a group which itself constitutes a group. Note for example that a rotation over \(\pi\), \(\exp(i\pi S^z)\), together with the identity, generates a two-element subgroup of \(\{\exp(i\theta S^z)\,|\,\theta\in[0,2\pi)\}\).
The concept of subgroups lies at the heart of symmetry breaking. Recall that in the ferromagnetic phase, the Ising model breaks the spin flip symmetry. In Landau’s paradigm we say that the pattern of symmetry breaking is \(\mathbb{Z}_2\rightarrow \{1\}\) (see below for an explanation of the notation). In other words, the full symmetry group (\(\mathbb{Z}_2\)) is broken in the ferromagnetic phase to a subgroup (the trivial group). More generally, a theory with a \(G\) symmetry can undergo a pattern of symmetry breaking \(G\rightarrow H\), where \(H\) is a subgroup of \(G\). The meaning of this symbolic expression is that the ground states keep an \(H\) symmetry, and the ground state degeneracy is \(|G|/|H|\).
The trivial group is a group with only one element, which is then automatically also the identity, and a trivial multiplication law. Above, it was denoted by \(\{1\}\).
\(\mathbb{Z}_N\) is the additive group of integers modulo \(N\). The group elements are the integers \(\{0,1,...,N-1\}\) and the group multiplication is addition modulo \(N\); hence it is clearly a finite group. In particular, the spin flip symmetry from above corresponds to the group \(\mathbb{Z}_2\). Notice that \(\mathbb{Z}_N\) is abelian for all \(N\).
Another abelian group is \(U(1)\). This group is defined as \(U(1)=\left\{z\in\mathbb{C}:|z|^2 = 1\right\}\), with group multiplication the multiplication of complex numbers. Note that we encountered this group in the XXZ model as the rotations around the Z-axis: \(\{\exp(i\theta S^z)\,|\,\theta\in[0,2\pi)\}\).
\(SU(2)\) is the group of unimodular unitary \(2\times 2\) matrices:
The group multiplication is given by matrix multiplication. Similarly, one defines \(SU(N),N\geq 2\). Note that none of these groups are abelian.
The 3D rotation group or special orthogonal group \(SO(3)\) is the group of real \(3\times 3\) orthogonal matrices with unit determinant:
Similarly, one defines \(SO(N),N\geq 2\). Note that only \(SO(2)\) is abelian.
In the above examples, we were dealing with the question of which symmetry transformations leave the Hamiltonian (and, in the absence of symmetry breaking, also the ground states) invariant. These symmetry transformations were implemented (represented) by invertible linear operators (non-singular matrices) that form a closed set under multiplication. This multiplication structure is what we identified as a group. What we can now do is take a group as given, and ask which linear transformations we can come up with that multiply according to these multiplication rules. This is exactly the underlying idea of representation theory, which deals with the question of how groups can act linearly on vector spaces.
This immediately raises a plethora of questions, such as whether we can classify all representations (up to some kind of equivalence), whether there exist ‘minimal’ representations, and how we can construct new representations from known ones. A minimal answer to these questions is the goal of this section.
For the sake of these notes, a representation of a group \(G\) is thus a set of matrices indexed by the group elements, \(\{X_g|g\in G\}\), that multiply according to the multiplication rule of \(G\):
Note that the identity is always mapped to the identity matrix!
We call the dimension of the matrices \(X_g\) the dimension of the representation.
Every group can be trivially represented by mapping every group element to the ‘matrix’ (1). Obviously, this representation is one-dimensional and is called the trivial representation.
Probably the simplest non-trivial representation is the representation of \(\mathbb{Z}_2\) that maps the non-trivial element to \(-1\). Concretely, \(X_0=1, X_1=-1\), and indeed \(X_1X_1=(-1)^2=X_0\). This representation is called the sign representation.
Let us construct a two-dimensional representation of \(\mathbb{Z}_2\). Since the Pauli matrix \(\sigma^x\) (as any other Pauli matrix) squares to the identity, \(\sigma^x\) together with the two-dimensional identity matrix constitutes a two-dimensional representation of \(\mathbb{Z}_2\). In the notation from above, \(X_0=\mathbb{I}_2\), \(X_1=\sigma^x\). This representation is called the regular representation of \(\mathbb{Z}_2\).
Given a representation \(\{X_g|g\in G\}\), the complex conjugate representation \(\bar X\) is defined as \(\bar X=\{\bar X_g|g\in G\}\), which satisfies the defining property of representations via \(\bar X_g\bar X_h= \overline{X_gX_h}=\bar X_{gh}\).
Given two representations of \(G\), \(X\equiv\{X_g|g\in G\}\) and \(Y\equiv\{Y_g|g\in G\}\), there are two obvious ways to construct a new representation.
The first one is the tensor product representation defined via the Kronecker product of matrices:
You should check that these still satisfy the defining property of a representation. The dimension of the tensor product is the product of the dimensions of the two representations \(X\) and \(Y\).
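The fact that the Kronecker product preserves the representation property is just the mixed-product identity \((A \otimes B)(C \otimes D) = AC \otimes BD\), which can be checked directly (random matrices stand in for generic representation matrices):

```julia
using LinearAlgebra

# mixed-product property: (X_g ⊗ Y_g)(X_h ⊗ Y_h) = X_g X_h ⊗ Y_g Y_h
Xg, Xh = randn(2, 2), randn(2, 2)
Yg, Yh = randn(3, 3), randn(3, 3)
@assert kron(Xg, Yg) * kron(Xh, Yh) ≈ kron(Xg * Xh, Yg * Yh)
```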
The other one is the direct sum:
Its dimension is the sum of the dimensions of \(X\) and \(Y\).
It is clear that physical observables should not depend on any choice of basis. Therefore, two representations are (unitarily) equivalent when there is a unitary basis transformation \(U\) such that \(X_g' =UX_gU^\dagger\). Note that \(U\) is independent of \(g\).
Consider again the two-dimensional regular representation of \(\mathbb{Z}_2\) from above. The basis transformation
shows that this representation is equivalent to one where the non-trivial element of \(\mathbb{Z}_2\) is represented by \(H\sigma^x H^\dagger=\sigma^z\). This illustrates that the regular representation is equivalent to the direct sum of the trivial representation and the sign representation!
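This is a two-line numerical check (pure Julia; `Had` denotes the Hadamard matrix of the example):

```julia
using LinearAlgebra

σx = [0.0 1.0; 1.0 0.0]
σz = [1.0 0.0; 0.0 -1.0]
Had = [1.0 1.0; 1.0 -1.0] / sqrt(2)   # Hadamard basis transformation

@assert Had * Had' ≈ I(2)             # the transformation is unitary
@assert Had * σx * Had' ≈ σz          # X₁ ↦ diag(1, -1): trivial ⊕ sign
```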
The crux of this example is the following. Some representations can, by an appropriate choice of basis, be brought into a form where all \(X_g\) are simultaneously block-diagonal:
These blocks correspond to invariant subspaces of the representation, i.e. subspaces that transform amongst themselves under the action of the group.
An irreducible representation, irrep for short, can then be defined as a representation that cannot be brought into a (non-trivial) block-diagonal form by any change of basis.
It can be shown that every finite group has a finite number of irreps. The sum of the squares of their dimensions is equal to the number of elements in the group: \(\sum_\alpha d_\alpha^2=|G|\), where the sum is over all irreps labeled by \(\alpha\), and \(d_\alpha\) denotes their respective dimensions.
One of the key questions of representation theory is what the irreps of a given group are, and how the tensor product of irreps (which is in general not an irrep!) decomposes into a direct sum of irreps. The latter decompositions are sometimes known as the fusion rules. The basis transformations that reduce a given representation into a direct sum of irreps are sometimes called the Clebsch-Gordan coefficients, and for some groups they are known explicitly. Before discussing the example of \(SU(2)\), let us first state the most important result in representation theory, which is due to Schur.
[Schur’s lemma] If a matrix \(Y\) commutes with all representation matrices of an irreducible representation of a group \(G\), i.e. \(X_gY=YX_g\) \(\forall g\in G\), then \(Y\) is proportional to the identity matrix.
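Schur’s lemma can be illustrated numerically. On \(\mathbb{C}^2\), the matrices generated by \(\sigma^x\) and \(\sigma^z\) act irreducibly, so the space of matrices \(Y\) commuting with both should be one-dimensional and spanned by the identity. The sketch below is our own construction, using the vectorization identity \(\mathrm{vec}(AYB) = (B^T \otimes A)\,\mathrm{vec}(Y)\) to turn the commutation constraints into a nullspace problem:

```julia
using LinearAlgebra

σx = [0.0 1.0; 1.0 0.0]
σz = [1.0 0.0; 0.0 -1.0]
id2 = Matrix(1.0I, 2, 2)

# linear constraints Yσ - σY = 0 for σ ∈ {σx, σz}, acting on vec(Y)
M = vcat((kron(transpose(σ), id2) - kron(id2, σ) for σ in (σx, σz))...)
ns = nullspace(M)

@assert size(ns, 2) == 1             # the commutant is one-dimensional ...
Y = reshape(ns[:, 1], 2, 2)
@assert Y ≈ Y[1, 1] * id2            # ... and spanned by the identity
```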
The answer to the questions posed above is very well understood for the case of \(SU(2)\). You probably know the answer from a previous course on quantum mechanics.
The irreps of \(SU(2)\) can be labeled by the spin, let us call it \(s\), which takes values \(s=0,1/2,1,3/2,...\). The dimension of the spin-\(s\) representation is equal to \(2s+1\), so there is exactly one irrep of every dimension. The spin \(s=0\) irrep corresponds to the trivial representation.
The fusion rules can be summarized as
For example: \(\frac{1}{2}\otimes\frac{1}{2}\simeq 0\oplus 1\). The Clebsch-Gordan coefficients for \(SU(2)\) have been computed analytically, and for low-dimensional irreps have been tabulated, for example here.
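The fusion rules above are straightforward to encode. The snippet below (a standalone helper of our own, `su2_fusion`, not the TensorKit API) lists the spins appearing in \(s_1 \otimes s_2\) and checks that the dimensions add up:

```julia
# SU(2) fusion rules: s₁ ⊗ s₂ ≅ |s₁-s₂| ⊕ (|s₁-s₂|+1) ⊕ … ⊕ (s₁+s₂)
su2_fusion(s1, s2) = collect(abs(s1 - s2):1:(s1 + s2))

@assert su2_fusion(1//2, 1//2) == [0, 1]        # ½ ⊗ ½ ≅ 0 ⊕ 1
@assert su2_fusion(1, 1//2) == [1//2, 3//2]

# total dimensions agree, since dim(s) = 2s + 1
@assert sum(2s + 1 for s in su2_fusion(1//2, 1//2)) == 2 * 2
@assert sum(2s + 1 for s in su2_fusion(1, 1)) == 3 * 3
```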
In physics we are often dealing with tensors that transform according to the tensor product representation of a given group \(G\). A symmetric tensor can then be understood as a tensor that transforms trivially under the action of \(G\), or more concretely under the tensor product representation \(X\otimes\bar Y\otimes\bar Z\):
This has strong implications for the structure of the tensor \(T\). Notice that we didn’t assume the representations \(X,Y\) and \(Z\) to be irreducible. As we argued above, an appropriate change of basis can bring the representations \(X,Y\) and \(Z\) into block-diagonal form, where every block corresponds to an irrep of the group and every block can appear multiple times; the number of times it appears is called the multiplicity of that irrep in the representation. Schur’s lemma then implies that in this basis the tensor becomes block-diagonal. In an appropriate matricization of \(T\) we can thus write \(T=\bigoplus_c B_c\otimes\mathbb{I}_c\), where the direct sum over \(c\) represents the decomposition of \(X\otimes\bar Y\otimes\bar Z\) into irreps \(c\) that can appear multiple times. In other words, the generic symmetric tensor \(T\) can be stored much more efficiently by only keeping track of the different blocks \(B_c\).
TensorKit is particularly well suited for dealing with symmetric tensors. TensorKit does exactly what was described in the previous paragraph: it keeps track of the block structure of the symmetric tensor, thereby drastically reducing the amount of memory needed to store these objects, and it is able to manipulate them efficiently by maximally exploiting their structure.
As a simple exercise, let us construct a rank-3 \(SU(2)\)-symmetric tensor as above. For example, the spin \(1/2\) and spin \(1\) representations can be constructed via, respectively,
using TensorKit

s = SU₂Space(1/2 => 1)
l = SU₂Space(1 => 1)

Rep[SU₂](1=>1)
Here, => 1 essentially means that we consider only one copy (direct summand) of these representations. If we wanted to consider the direct sum \(\frac{1}{2}\oplus\frac{1}{2}\) we would write
ss = SU₂Space(1/2 => 2)

Rep[SU₂](1/2=>2)
A symmetric tensor can now be constructed as
A = TensorMap(l ← s ⊗ s)

TensorMap(Rep[SU₂](1=>1) ← (Rep[SU₂](1/2=>1) ⊗ Rep[SU₂](1/2=>1))):
* Data for fusiontree FusionTree{Irrep[SU₂]}((1,), 1, (false,), ()) ← FusionTree{Irrep[SU₂]}((1/2, 1/2), 1, (false, false), ()):
[:, :, 1] =
 0.0
This tensor then has, by construction, the symmetry property that it transforms trivially under \(1\otimes\bar{\frac{1}{2}}\otimes\bar{\frac{1}{2}}\). The blocks can be inspected by calling blocks on the tensor, and we can also check that the dimensions of the domain and codomain are as expected:
@assert dim(domain(A)) == 4
@assert dim(codomain(A)) == 3
blocks(A)

TensorKit.SortedVectorDict{SU2Irrep, Matrix{Float64}} with 1 entry:
 1 => [0.0;;]
We see that this tensor has one block that we can fill up with some data of our liking. Let us consider another example:
B = TensorMap(s ← s ⊗ s)
blocks(B)

TensorKit.SortedVectorDict{SU2Irrep, Matrix{Float64}}()
This tensor does not have any blocks! This is compatible with the fact that two spin \(1/2\)’s cannot fuse to a third spin \(1/2\). Finally, let us consider a tensor with more blocks:
C = TensorMap(ss ← ss)
blocks(C)

TensorKit.SortedVectorDict{SU2Irrep, Matrix{Float64}} with 1 entry:
 1/2 => [2.3342e-313 6.15379e-313; 3.39519e-313 6.92768e-310]
This tensor has four non-trivial entries.
Let us conclude with an outlook and some generalizations.
Besides the “global” symmetries we considered here, you might also be familiar with gauge symmetries from another course. Gauge theories are ubiquitous in physics and describe a plethora of interesting physical phenomena. Gauge symmetries should however not be thought of as actual symmetries transforming physically different states into each other; rather, they describe a redundancy in the description of the system. Nevertheless, group theory also lies at the heart of these theories.
In this brief overview we mostly neglected spatial symmetries. Spatial symmetries can be understood as transformations that translate, rotate or reflect the lattice. These kinds of symmetries thus don’t act “on site” anymore. The full classification of spatial symmetry groups is notoriously rich and beautiful, especially in higher dimensions, and exploiting them in algorithms can result in tremendous speedups and improved stability. We already encountered the example of translation symmetry. One of the benefits of exploiting this symmetry in tensor networks is e.g. that if the ground state of an infinite one-dimensional model does not break translation invariance, this ground state can be well modelled by a uniform matrix product state, a matrix product state consisting of one tensor repeated indefinitely.
Inspired by the discovery of topological phases of matter and their anyonic excitations, there has been a growing fascination with the exploration of non-invertible, or categorical, symmetries. Although they are beyond the scope of these notes, these categorical symmetries are not described by groups but by more general and intricate algebraic structures called fusion categories, of which (finite) groups and their representations are specific examples. For an example of how spin chains with categorical symmetries can be constructed, see for example [Feiguin et al., 2007]. TensorKit allows for an efficient construction and storage of tensors which are symmetric with respect to these more general kinds of symmetries.
Tensor Network States
After our introduction to quantum many-body systems and tensor networks, we move on to considering how tensor networks can characterize many-body systems. We start with a constructive approach to approximating an arbitrary quantum state by a tensor network state. We then qualify in what settings such a representation is efficient, and introduce several classes of tensor network states used in different settings. We end this section by broadly commenting on how efficient manipulations of tensor network states can be used to simulate quantum systems.
Consider a quantum many-body system consisting of physical spins with a local Hilbert space \( \mathbb H_i = \mathbb C^d \) of dimension \( d \), which we will call the physical dimension, located at every site \( i \) of some lattice \( \Lambda \). This gives rise to a total Hilbert space \( \mathbb H = \bigotimes_{i = 1}^{N} \mathbb H_i = \left( \mathbb C^d \right)^{\otimes N}\), where \( N = |\Lambda| \) is the total number of sites in the lattice. A general quantum state in this many-body Hilbert space can be represented in terms of a set of \(d^N\) complex coefficients \(C_{s_1,s_2,...,s_N} \in \mathbb C\), where \(s_i\in \{0,...,d-1\}\), with respect to the computational basis as
The exponential increase in the number of coefficients with the system size means that it is entirely impossible to store the full state vector of a quantum system of any reasonable size in this way. For example, a system of \(N=100\) spins with \(d=2\) has \(2^{100} \approx 10^{30}\) coefficients, far more than could ever be stored on any conceivable classical computer.
Instead of directly storing this full state vector, we can alternatively parametrize it as a tensor network. Consider for example the case \(N=4\). We can then represent the state vector as a tensor \(C_{s_1,s_2,s_3,s_4}\) with four indices, where each index corresponds to a physical spin. The full state is then recovered as
We can now split the full tensor \(C\) into separate components by consecutively applying the SVD between pairs of physical indices. For example, splitting out the first index we can rewrite \(C\) as
In this expression we can interpret \(L^{(1)}\) as a \(d \times D\) matrix, \(\lambda^{(1)}\) as a \(D \times D\) matrix and \(R^{(1)}\) as a \(D \times d^{N-1}\) matrix. The horizontal edge in this diagram is called a virtual bond, and the dimension \(D\) of this bond is called the bond dimension. The bond dimension is a measure of the entanglement in the state, and in this case encodes the amount of entanglement between the first site and the rest of the system. So far we have not actually done anything significant, since this decomposition of \(C\) actually increased the total number of required coefficients instead of reducing it. The key point is that we can reduce the number of parameters by truncating \(\lambda^{(1)}\) to keep only the \(D\) largest singular values. This results in a low-rank approximation of the original state, where the quality of the approximation is controlled by the chosen final bond dimension \(D\).
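The truncation step can be made concrete with plain LinearAlgebra (the function name `truncate_svd` is our own). By the Eckart–Young theorem, the squared Frobenius error of the rank-\(D\) truncation equals the sum of the squares of the discarded singular values:

```julia
using LinearAlgebra

# keep only the D largest singular values of the matricized state (helper, ours)
function truncate_svd(C, D)
    U, S, V = svd(C)
    D = min(D, length(S))
    return U[:, 1:D], S[1:D], V[:, 1:D]
end

C = randn(2, 8)                  # e.g. N = 4 qubits with the first site split off
L1, λ1, V1 = truncate_svd(C, 1)
C1 = L1 * Diagonal(λ1) * V1'     # rank-1 (bond dimension D = 1) approximation

# truncation error = sum of squares of the discarded singular values
@assert norm(C - C1)^2 ≈ sum(svdvals(C)[2:end] .^ 2)
```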
By repeatedly applying this procedure, grouping and splitting indices in the resulting diagrams and absorbing the bond tensors \(\lambda^{(i)}\) into the site tensors, we can decompose \(C\) into a tensor network of any geometry. For example, we can approximate \(C\) as the contraction of a square network to end up with a tensor network state of the form
In words, this expression means that for every basis state \( \ket{s_1,s_2,s_3,s_4} \) its corresponding coefficient in the superposition is obtained by indexing all of the physical legs pointing downward according to the corresponding physical basis state and contracting the resulting network.
We can therefore parametrize an arbitrary quantum state in terms of a set of local tensors \(A^{(i)}\), where each of these tensors encodes a number of parameters that is polynomial in its physical dimension \(d\) and bond dimensions \(D\) (which can in principle be different for every virtual bond). For a general quantum state however, a good tensor network state approximation requires a bond dimension which scales exponentially with the system size, meaning that we have not actually gained anything in terms of efficiency. However, it turns out that for many physically relevant states the bond dimension can be bounded by a constant independent of the system size, in which case the tensor network representation leads to an exponential reduction in the number of variational parameters.
To see why this is the case, let us study the entanglement entropy of a tensor network state. Consider the following two-dimensional network, where all physical indices have a dimension \(d\) and we assume all virtual bonds have the same dimension \(D\),
We now want to quantify the entanglement between the shaded region \( \mathcal A\) and the rest of the system for this specific state. To this end, we first recall the formula for the bipartite entanglement entropy Eq. (6.1), and note that the number of terms in this expression is determined by the number of non-zero Schmidt coefficients, which is referred to as the Schmidt rank. Looking back at our initial decomposition of the full state tensor \(C\) by splitting out its first index above, we see that the Schmidt rank is precisely given by the bond dimension \(D\) across this cut. From this, you should be able to convince yourself that the maximal entanglement entropy across this cut is determined by the bond dimension as \(S \sim \log(D)\). Extending this line of reasoning to our question of the entanglement between the region \( \mathcal A\) and the rest of the system, we see that each virtual leg connecting \(\mathcal A\) to the rest of the system can contribute a term \(\log(D)\) to the entanglement entropy. Therefore we arrive at
where \( \partial \mathcal A \) is the size of the boundary of \(\mathcal A\) (which in this two-dimensional case is its perimeter).
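The statement that a single cut virtual bond contributes at most \(\log D\) is easy to verify from the Schmidt coefficients. In the sketch below, `entropy` is our own helper computing the von Neumann entropy from a matricization of the state across the cut:

```julia
using LinearAlgebra

# von Neumann entanglement entropy from the Schmidt (singular) values (helper, ours)
function entropy(ψmat)
    p = svdvals(ψmat) .^ 2            # Schmidt probabilities
    p = p[p .> 1e-14]                 # drop numerical zeros
    return -sum(p .* log.(p))
end

D = 4
ψmax = Matrix(1.0I, D, D) / sqrt(D)   # maximally entangled across the cut
@assert entropy(ψmax) ≈ log(D)        # saturates the bound S = log D

ψprod = zeros(D, D); ψprod[1, 1] = 1.0
@assert abs(entropy(ψprod)) < 1e-12   # product state: no entanglement
```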
Clearly, this tensor network state naturally obeys an area law for its entanglement entropy. In our discussion of the low-temperature properties of quantum many-body systems, however, we have already seen that low-energy states of locally interacting Hamiltonians obey exactly such an area law. It is this fact, that tensor network states inherently encode area-law entanglement, that makes them so well suited for representing low-energy states of quantum systems. They can only target a tiny corner of the full exponentially large Hilbert space, but this corner is precisely where the most relevant physics happens. This observation has given rise to a large family of tensor network states which allow for an efficient parametrization of states with varying geometries.
Note
An equally important feature of tensor networks is that, aside from providing an efficient parametrization of states, they also allow for efficient manipulations of these states. This means that they can be used to compute interesting features of quantum systems, and can be optimized to target states of specific interest such as ground states and low-lying excitations. For all of the network geometries depicted above there exist corresponding algorithms that put them to efficient use, some of which will be highlighted in future sections of this tutorial.
Tensor Network Theory
In this lecture we will introduce the basic concepts of tensor network theory. We will start with a brief overview of the history of tensor networks and their relevance to modern physics. We will then introduce the graphical notation that is often used to simplify expressions, and discuss the relevant operations and decompositions, along with their computational complexity and their relevance to quantum many-body physics.
This discussion is largely based on [Bridgeman and Chubb, 2017].
This lecture also serves as a brief introduction to TensorOperations.jl, and showcases some more features of TensorKit.jl as well. Note that TensorKit already re-exports the @tensor macro from TensorOperations, so it is not necessary to import it separately if TensorKit is already loaded.
using TensorKit
using Test # for showcase testing
The history of tensor networks is a fascinating journey through the evolution of profound theoretical ideas, as well as the development of computational methods and tools. These ideas have been developed in a variety of contexts, but have been especially relevant to the study of quantum physics and machine learning.
Early Foundations:
The roots of tensor networks can be traced back to the early development of linear algebra and matrix notation in the 19th century, pioneered by mathematicians like Arthur Cayley and James Sylvester.
The concept of tensors as multi-dimensional arrays of numbers began to emerge in the late 19th and early 20th centuries.
Matrix Product States and DMRG:
The birth of modern tensor network theory can be attributed to the introduction of MPS in the 1960s (?).
One of the earliest, and still most widely used tensor network algorithm is DMRG. It was developed by Steven White in 1992, and provides one of the most efficient methods for simulating one-dimensional quantum many-body systems.
Quantum Information Theory:
In the 1980s and 1990s, the field of quantum information theory began to emerge.
Concepts such as quantum entanglement and quantum information became central to the study of quantum many-body systems.
Higher-Dimensional Tensor Networks:
As the field progressed, tensor network methods were extended to higher-dimensional systems, leading to the emergence of more general tensor network states (TNS).
Two-dimensional tensor networks such as Projected Entangled Pair States (PEPS) and Multi-scale Entanglement Renormalization Ansatz (MERA) were introduced in the early 2000s.
Tensor Networks in other disciplines:
Many of the concepts and methods developed in the context of tensor networks have been applied to other disciplines, one of the most prominent being machine learning.
Unsurprisingly, they also play a central role in quantum computing, where tensor network algorithms provide a natural language to explore quantum circuit simulations.
Ongoing Research and Applications:
Tensor network theory continues to be a vibrant and evolving field with ongoing research in various directions, such as the development of efficient tensor contraction algorithms, the application of tensor networks for understanding quantum phases of matter, the development of tensor network algorithms for quantum computing, and the application of tensor networks to machine learning.
One of the main advantages of tensor networks is that they admit a very intuitive graphical -notation, which greatly simplifies the expressions involving numerous indices. This notation -is based on the idea of representing a single tensor as a node in a graph, where the indices -of the tensor are depicted by legs sticking out of it, one for each vector space. As an -example, a rank-four tensor \(R\) can be represented as:
- -In this notation, the individual components of the tensor can be recovered by fixing the open legs of a diagram to specific values; the resulting diagram then represents a scalar. For example, the component \(R_{i_1,i_2,i_3,i_4}\) is given by:
-Because of the isomorphism (8.5), the legs of the tensor can be freely moved around, as long as their order is preserved. In some contexts the shape of the node and the direction of its legs can imply certain properties, such as making an explicit distinction between the isomorphic representations, but in what follows we will not make this distinction.
-Furthermore, this naturally gives a notion of grouping and splitting of indices, which is -just a reinterpretation of a set of neighbouring vector spaces as a single vector space, and -the inverse operation. For example, the following diagrams are equivalent:
- -Owing to the freedom in choice of basis, the precise details of grouping and splitting are -not unique. One specific choice of convention is the tensor product basis, which is -precisely the one we have used in the discussion of multi-linear algebra. More concretely, -one choice that is often used is the Kronecker product, which in the setting of -column-major ordering is given explicitly by grouping indices as follows:
-Here \(d_i\) is the dimension of the corresponding vector space, and \(I\) is the resulting -linear index. Note again that so long as the chosen convention is consistent, the precise -method of grouping and splitting is immaterial.
-This can be conveniently illustrated by the reshape
function in Julia, which performs
-exactly this operation. For simple arrays, this operation does nothing but change the size
-property of the data structure, as the underlying data necessarily needs to be stored in a
-linear order in memory, as computer addresses are linear. Because of this, in tensor
-networks, these operations are typically left implicit.
A = reshape(1:(2^4), (2, 2, 2, 2))
-B = reshape(A, (4, 2, 2))
-C = reshape(A, (2, 4, 2))
-# ...
-
2×4×2 reshape(::UnitRange{Int64}, 2, 4, 2) with eltype Int64:
-[:, :, 1] =
- 1 3 5 7
- 2 4 6 8
-
-[:, :, 2] =
- 9 11 13 15
- 10 12 14 16
-
Of course, in order to really consider a tensor network, it is necessary to consider -diagrams that consist of multiple tensors, or in other words of multiple nodes. The simplest -such diagram represents the outer product of two tensors. This is represented by two -tensors being placed next to each other. The value of the resulting network is simply the -product of the constituents. For example, the outer product of a rank three tensor \(A\) and a -rank two tensor \(B\) is given by:
- -More complicated diagrams can be constructed by joining some of the legs of the constituent tensors. In a manner similar to the conventional Einstein notation, this implies a summation over the corresponding indices.
-If two legs from a single tensor are joined, this signifies a (partial) trace of a tensor -over these indices. For example, the trace of a rank three tensor \(A\) over two of its -indices is given by:
- -In this notation, the cyclic property of the trace follows by sliding one of the matrices around the loop of the diagram. As this only changes the placement of the tensors in the network, and not the value, this yields a graphical proof of \(\text{Tr}(AB) = \text{Tr}(BA)\).
- -The most common tensor operation used is contraction, which is the joining of legs from -different tensors. This can equivalently be thought of as a tensor product followed by a -trace. For example, the contraction between two pairs of indices of two rank-three tensors -is drawn as:
- -Familiar examples of contraction are vector inner products, matrix-vector multiplication, -matrix-matrix multiplication, and matrix traces.
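As a plain-Julia sanity check (no packages required), matrix-matrix multiplication is exactly a pairwise contraction over the shared index:

```julia
A = rand(3, 4)
B = rand(4, 5)

# C[i, j] = sum_k A[i, k] * B[k, j]: a contraction over the shared index k
C = [sum(A[i, k] * B[k, j] for k in 1:4) for i in 1:3, j in 1:5]
```

The explicit index sum reproduces `A * B`; the same pattern (summing over joined legs while leaving free indices open) covers inner products, matrix-vector products, and traces.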
- -Combining the operations defined above, it is possible to construct arbitrarily complicated -tensor networks, which can then be evaluated by a sequence of pair-wise operations. The -result then reduces to a tensor which has a rank equal to the number of open legs in the -network. For example, the following diagram represents a generic tensor network:
- -In order to evaluate such networks, it is necessary to define a notational convention for -specifying a network with text. One of the most common conventions is that of -Einstein notation, where each index of a -tensor is assigned a label, and repeated labels are implicitly summed over. For example, the outer product, trace, and inner product can respectively be obtained as:
-A = rand(2, 2, 2)
-B = rand(2, 2)
-@tensor C[i, j, k, l, m] := A[i, j, k] * B[l, m]
-@tensor D[i] := A[i, j, j]
-@tensor E[i, j, l] := A[i, j, k] * B[l, k]
-size(C), size(D), size(E)
-
((2, 2, 2, 2, 2), (2,), (2, 2, 2))
-
Note
-The @tensor
macro can be used to either create new tensors, using the :=
assignment, or
-to copy data into existing tensors using =
. In the latter case, the tensor must already
-exist and have the right dimensions, but less additional memory is allocated.
This notation is very useful indeed, but quickly becomes unwieldy when one wishes to specify -in what order the pairwise operations should be carried out. Thus, in the same spirit but -with a minor modification, the NCON notation was -introduced. In this notation, the indices of a tensor are assigned integers, and pairwise -operations happen in increasing order. Similarly, negative integers are assigned to open -legs, which determine their resulting position. For example, the diagram above -can be written as:
-B = rand(2, 2, 2, 2)
-C = rand(2, 2, 2, 2, 2)
-D = rand(2, 2, 2)
-E = rand(2, 2)
-F = rand(2, 2)
-@tensor begin
- A[-1, -2] := B[-1, 1, 2, 3] * C[3, 5, 6, 7, -2] * D[2, 4, 5] * E[1, 4] * F[6, 7]
-end
-
2×2 Matrix{Float64}:
- 2.66961 2.70201
- 2.73193 2.83059
-
While tensor networks are defined in such a way that their values are independent of the -order of pairwise operations, the computational complexity of evaluating a network can vary -wildly based on the chosen order. Even for simple matrix-matrix-vector multiplication, the -problem can easily be illustrated by considering the following two equivalent operations:
-If both \(A\) and \(B\) are square matrices of size \(N \times N\), and \(v\) and \(w\) are vectors of -length \(N\), the first operation requires \(2N^2\) floating point operations (flops), while the -second requires \(N^3 + N^2\) flops. This is a substantial difference, and it is clear that -the first operation is to be preferred.
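This cost difference is easy to observe with plain Julia arrays; both orders give the same result, but the flop counts differ as described above:

```julia
N = 500
A, B = rand(N, N), rand(N, N)
v = rand(N)

w1 = A * (B * v)   # two matrix-vector products: ~2N^2 flops
w2 = (A * B) * v   # matrix-matrix product first: ~N^3 + N^2 flops
```

Timing the two alternatives (for example with `@time`) makes the asymptotic difference apparent already at moderate `N`.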
-More generally, the number of flops required for contracting a pair of tensors can be determined by noting that the number of elements to compute is equal to the product of the dimensions of the open indices, and that the number of flops required per element is equal to the product of the dimensions of the contracted indices. It is therefore typically most efficient to minimize the surface area of contraction, which boils down to the heuristic of minimizing the number of legs that are cut, also known as bubbling.
-Many networks admit both efficient and inefficient contraction orders, and often it is infeasible to compute the optimal order. Take for example a ladder-shaped network, which is of particular relevance in the context of Matrix Product States. Here we can highlight a few possible contraction orders, for which we leave it as an exercise to determine the computational complexity:
- - -Determining the optimal order is however a problem that is known to be NP-hard, and thus no algorithm exists that can efficiently compute optimal orders for larger networks. Nevertheless, efficient implementations allow finding optimal orders for networks of up to 30-40 tensors [Pfeifer et al., 2014], while for larger networks other methods can be used to determine good (not necessarily optimal) contraction orders.
-TensorOperations comes with some built-in tools for facilitating this process, and in
-particular the opt
keyword can be used to enable the use of the algorithm from
-[Pfeifer et al., 2014]. Because this uses the Julia macro system, this can be done at
-compile time; in other words, the optimal order only needs to be computed once.
@tensor opt=true begin
- A[i, j] := B[i, α, β, γ] * C[γ, ϵ, ζ, η, j] * D[β, δ, ϵ] * E[α, δ] * F[ζ, η]
-end
-
2×2 Matrix{Float64}:
- 2.66961 2.70201
- 2.73193 2.83059
-
Linear maps admit various kinds of factorizations, which are instrumental in a variety of applications. They can be used to generate orthogonal bases, to find low-rank approximations, or to find eigenvalues and eigenvectors. In the context of tensors, the established theory for factorizations of matrices can be generalized by interpreting tensors as linear maps: after partitioning the constituent vector spaces into a codomain and a domain, the corresponding matrix factorization can be applied, after which everything carries over. Thus, the only additional information that is required is the specification of this partition. In this section we will discuss the most common factorizations of tensors, but the reasoning can be generalized to any factorization of linear maps.
-S1 = ℂ^2 ⊗ ℂ^2 ⊗ ℂ^2
-S2 = ℂ^2 ⊗ ℂ^3
-
(ℂ^2 ⊗ ℂ^3)
-
The Eigen decomposition of a matrix -\(A\) is a factorization of the form:
-where \(V\) is a matrix of eigenvectors, and \(\Lambda\) is a diagonal matrix of eigenvalues. In particular, the set of eigenvectors forms a basis for all possible products \(Ax\), which is the same as the image of the corresponding matrix transformation. For normal matrices, these eigenvectors can be made orthogonal, and the resulting decomposition is also called the spectral decomposition.
-The eigenvalue decomposition mostly finds its use in the context of linear equations of the form:
-where \(v\) is an eigenvector of \(A\) with eigenvalue \(\lambda\).
-For tensors, the eigenvalue decomposition is defined similarly, and the equivalent equation -is diagrammatically represented as:
- -A = TensorMap(randn, ComplexF64, S1, S1) # codomain and domain equal for eigendecomposition
-D, V = eig(A)
-@test A * V ≈ V * D
-
Test Passed
-
The -Singular Value Decomposition -(SVD) can be seen as a generalization of the eigendecomposition of a square normal matrix to -any rectangular matrix \(A\). Specifically, it is a factorization of the form -\(A = U \Sigma V^\dagger\) where \(U\) and \(V\) are isometric matrices -(\(U^\dagger U = V^\dagger V = \mathbb{1}\)), and \(\Sigma\) is a diagonal matrix of singular -values. The SVD is typically used to find low-rank approximations for matrices, and it was -shown [Eckart and Young, 1936] that the best rank-\(k\) approximation is given by the -SVD, where \(\Sigma\) is truncated to the first (largest) \(k\) singular values.
-Again, a tensorial version is defined by first grouping indices to form a matrix, and then -applying the SVD to that matrix.
- - -A = TensorMap(randn, ComplexF64, S1, S2)
-partition = ((1, 2), (3, 4, 5))
-U, S, V = tsvd(A, partition...)
-@test permute(A, partition) ≈ U * S * V
-@test U' * U ≈ id(domain(U))
-@test V * V' ≈ id(codomain(V))
-
Test Passed
-
The polar decomposition of a square -matrix \(A\) is a factorization of the form \(A = UP\), where \(U\) is a semi-unitary matrix and \(P\) is -a positive semi-definite Hermitian matrix. It can be interpreted as decomposing a linear -transformation into a rotation/reflection \(U\), combined with a scaling \(P\). The polar -decomposition is unique for all matrices that are full rank.
- -A = TensorMap(randn, ComplexF64, S1, S2)
-partition = ((1, 2), (3, 4, 5))
-Q, P = leftorth(A, partition...; alg=Polar())
-@test permute(A, partition) ≈ Q * P
-@test Q * Q' ≈ id(codomain(Q))
-@test (Q * Q')^2 ≈ (Q * Q')
-
Test Passed
-
The QR decomposition is a factorization of -the form \(A = QR\), where \(Q\) is an orthogonal matrix and \(R\) is an upper triangular matrix. -It is typically used to solve linear equations of the form \(Ax = b\), which admits a solution -of the form \(x = R^{-1} Q^\dagger b\). Here \(R^{-1}\) is particularly easy to compute because -of the triangular structure (for example by Gaussian elimination). Additionally, for -overdetermined linear systems, the QR decomposition can be used to find the least-squares -solution.
- - -A = TensorMap(randn, ComplexF64, S1, S2)
-partition = ((1, 2), (3, 4, 5))
-Q, R = leftorth(A, partition...; alg=QR())
-@test permute(A, partition) ≈ Q * R
-@test Q' * Q ≈ id(domain(Q))
-
Test Passed
-
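As a plain-Julia illustration of the least-squares use case mentioned above, an overdetermined system can be solved via the QR decomposition:

```julia
using LinearAlgebra

A = rand(8, 3)   # overdetermined: more equations than unknowns
b = rand(8)

F = qr(A)
# x = R^{-1} Q' b; the triangular solve with R is cheap
x = F.R \ (Matrix(F.Q)' * b)
```

This matches Julia's built-in least-squares solve `A \ b`.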
The QR decomposition is unique up to a diagonal matrix of phases, and can thus be made -unique by requiring that the diagonal elements of \(R\) are positive. This variant is often -called QRpos. Additional variants exist that are flipped and/or transposed, such as the RQ, -QL, and LQ decompositions.
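In plain Julia, the QRpos convention amounts to absorbing the signs (for real matrices; phases in the complex case) of the diagonal of \(R\). The sketch below illustrates this uniqueness fix; it is not TensorKit's implementation:

```julia
using LinearAlgebra

A = rand(5, 5)
F = qr(A)
Q, R = Matrix(F.Q), F.R

# Absorb the signs of diag(R): Q*S and S*R with S^2 = 1 leave the product unchanged
S = Diagonal(sign.(diag(R)))
Qpos, Rpos = Q * S, S * R
```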
-Note
-Often it is useful to make a distinction between factorizations that are rank revealing, and factorizations that are not. A factorization is rank revealing if the rank of the matrix can be determined from the factorization. For example, the SVD is rank revealing, while the QR decomposition is not. The trade-off is that the SVD is substantially more expensive, which is why the QR decomposition is often preferred in practice.
-Finally, the nullspace of a matrix \(A\) is the set of vectors \(x\) such that \(Ax = 0\). This is -typically determined via the SVD, where the nullspace is given by the right singular vectors -corresponding to zero singular values.
-A = TensorMap(randn, ComplexF64, S1, S2)
-partition = ((1, 2, 3), (4, 5))
-N = leftnull(A, partition...)
-@test norm(N' * permute(A, partition)) ≈ 0 atol=1e-14
-@test N' * N ≈ id(domain(N))
-
Test Passed
-
In this lecture we have introduced the basic concepts of tensor network theory. We have -defined tensors and the operations that are commonly performed, as well as the graphical -notation that is used to represent them. We have also discussed the computational complexity -of tensor networks, and the importance of finding efficient contraction orders. Finally, we -have discussed the most common tensor factorizations, and how they can be used.
-A Simple Tensor Network Algorithm
- -Having introduced tensor networks in general, with a focus on the case of MPS, we now turn -to the question of how to use them to solve specific problems. While a large number of -tensor network algorithms have been developed, many of them more advanced and/or efficient -than the ones we will discuss here, we will focus on a few simple algorithms that are easy -to understand and implement. Importantly, these algorithms are also the building blocks of -more advanced algorithms, for example in higher spatial dimensions.
-Effectively, we have already seen how to use MPS to compute expectation values or correlation functions, or to derive all kinds of properties. Here, we focus on how to obtain the desired MPS in the first place. In other words, given a certain problem, how can we optimize an MPS, or a more general tensor network, to solve it?
-As a first example, let us consider the problem of simulating a quantum system. We can formalize this idea as follows: given a Hamiltonian \(H\) and some initial state \(\ket{\psi_0}\) at time \(t=0\), is there a way to compute the time-evolved state \(\ket{\psi(t)} = e^{-i H t} \ket{\psi_0}\) at some later time \(t\)?
-In general, this is a very hard problem. For example, one could naively try to -compute the matrix exponential, -but this quickly becomes prohibitively expensive, as the dimension of the Hamiltonian scales -exponentially with the number of particles. However, for physically relevant systems the -Hamiltonian does not consist of a random matrix, but rather exhibits additional structure -that can be used to simplify the problem.
-A particularly powerful example can be found for systems with local interactions, where the -Hamiltonian is of the form:
-where \(h_{ij}\) denotes a local operator, acting only on a small number of sites. In this case, although \(e^{-i H t}\) is infeasible to compute, each of the constituent terms acts only on a much smaller subsystem, and therefore \(e^{-ih_{ij}t}\) can be computed efficiently. However, as these terms generally do not commute, we cannot simply apply them one after the other. Instead, we can use the first-order Suzuki-Trotter decomposition to approximate the time-evolution operator, which states that for any two Hermitian operators \(A\) and \(B\), and any real number \(\Delta t\), we have:
-If we now split the full time interval \(t\) into \(m\) steps, we obtain the approximation
-where the approximation error can be managed by choosing a sufficiently large \(m\).
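The first-order nature of this approximation can be checked numerically. The sketch below (plain Julia, random non-commuting Hermitian matrices) verifies that the single-step error scales as \(O(\Delta t^2)\), so halving the step size reduces it roughly fourfold:

```julia
using LinearAlgebra

# Two random Hermitian matrices that generally do not commute
A = let X = rand(ComplexF64, 4, 4); X + X' end
B = let X = rand(ComplexF64, 4, 4); X + X' end

# Single-step first-order Trotter error
err(dt) = opnorm(exp(-im * (A + B) * dt) - exp(-im * A * dt) * exp(-im * B * dt))

ratio = err(0.02) / err(0.01)   # close to 4 for an O(dt^2) error
```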
-Note
-There actually exist entire families of such exponential product approximations up to a -given order [Hatano and Suzuki, 2005]. For our purposes however, it is sufficient to -illustrate a simulation procedure using this first-order approximation.
-We can put the discussion above into practice by applying it to the example of a nearest-neighbour Hamiltonian on a one-dimensional lattice:
-where \(N\) is the number of sites and we are assuming periodic boundary conditions. We now -want to simulate the dynamics of this Hamiltonian in an efficient way using the -aforementioned approximation Eq. (14.1). The simplest way to do this is to -split the local terms into two groups, where terms within a group commute with each other, -but not with terms in the other group. For example, we could split the Hamiltonian into even -(\(H_e\)) and odd terms (\(H_o\)):
-It is a simple exercise to show that the local terms within a group commute, as they act on non-overlapping sites. Therefore, if we can find an MPS representation of the initial state, the procedure for simulating the time evolution is as follows:
- -This procedure does not solve the problem as-is, as evaluating this network exactly would still require a bond dimension which grows exponentially with the number of layers \(m\). Instead, we can retain an efficient description by locally truncating the bond dimension, computing an SVD and retaining only the largest \(\chi\) singular values.
- -Another important problem in quantum physics is the determination of the groundstate of a given Hamiltonian. Again, this can be made more formal as follows: given a Hamiltonian \(H\), is there a way to find the state \(\ket{\psi_0}\) that minimizes the expectation value \(\bra{\psi} H \ket{\psi}\)?
-In fact, this problem faces the same difficulty as the one discussed above, namely that the -naive solution strategy involves finding the eigenvector of the Hamiltonian matrix with the -smallest eigenvalue, which again scales exponentially with the number of particles. However, -as before, we can exploit the structure of the Hamiltonian to find a more efficient -solution.
-In fact, the problem of finding groundstates can be mapped to the problem of simulating dynamics, by making use of a trick known as imaginary time evolution. The idea is to consider the time evolution operator \(e^{-i H t}\), but to replace the real time \(t\) by an imaginary time \(\tau = i t\). If we now consider the limit \(\tau \to \infty\) and deal with the normalization appropriately, we can see that applying the evolution operator to a state \(\ket{\psi_0}\) will effectively project it onto its lowest-energy eigenstate, as all other eigenstates will be damped out exponentially. In other words, we can find the groundstate of a Hamiltonian by simulating its dynamics for a sufficiently long imaginary time.
-where we have made use of the fact that all but the first term in the sum are damped out. In -this regard, the groundstate search problem can also be tackled with the TEBD algorithm -discussed above, by simply replacing the real time \(t\) by an imaginary time \(\tau\) and -continuing time-evolution until convergence is reached.
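The mechanism is easy to demonstrate on a toy example, with a small Hermitian matrix standing in for the Hamiltonian (plain Julia, using the exact matrix exponential rather than a Trotterized MPS evolution):

```julia
using LinearAlgebra

# A small Hermitian "Hamiltonian"
H = [1.0  0.5  0.0;
     0.5 -1.0  0.3;
     0.0  0.3  0.5]

# Imaginary-time evolution: repeatedly apply exp(-H Δτ) and renormalize
function imaginary_time_groundstate(H; Δτ=0.1, steps=2000)
    U = exp(-H * Δτ)
    ψ = normalize(rand(size(H, 1)))
    for _ in 1:steps
        ψ = normalize(U * ψ)
    end
    return ψ
end

ψ0 = imaginary_time_groundstate(H)
E0 = ψ0' * H * ψ0   # converges to the smallest eigenvalue of H
```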
-We have now seen a first example of algorithms that can be used for optimizing tensor -networks, either to simulate dynamics or to find groundstates. We conclude by mentioning -that this is only the tip of the iceberg, and that there exist many more algorithms that can -be used to solve a variety of problems.
-Outlook
-To close out this lecture, we briefly comment on the higher-dimensional generalizations of the TEBD procedure and the difficulties this brings with it. For local quantum Hamiltonians in higher dimensions we can follow a similar procedure, where we split the full Hamiltonian into a sum of parts that each contain only non-overlapping local terms. Time evolution can then be simulated by applying a similar sequence of layers, where in each layer we evolve with all local operators in a given Hamiltonian part in parallel.
-The problem with this approach however is that the local update step shown above is ill-conditioned for higher-dimensional networks if the full quantum state is not taken into account for the truncation. Indeed, while in the one-dimensional case the rest of the network surrounding the sites we want to update can be taken into account exactly by working in an appropriate gauge, this is not possible in general. Consider for example a general network where we want to apply some update to the central site,
- -Since this network contains loops, there is no way to exactly capture the surrounding -network in general. One instead has to resort to approximation techniques for the -environments of a given update site, where the quality of the environment approximations -directly affects the stability of the local update. The simplest way of doing this is to use -the so-called simple update procedure [Jiang et al., 2008] where all loops in the -network are simply ignored and the environment is approximated by a product state,
- -More accurate results can be obtained by taking into account the full quantum state of the -system in each local update by means of the full update procedure -[Jordan et al., 2008]. However, this gain in accuracy comes with a substantial -increase in computational cost due to the full environment approximation at each step.
-Infinite Matrix Product States
- -This section discusses matrix product states (MPS) in the thermodynamic limit and their -properties. Our discussion is mostly based on the excellent review -[Vanderstraeten et al., 2019], which provides a thorough technical overview of -tangent-space methods for uniform MPS. The formal exposition is supplemented with some very -basic code examples on working with infinite MPS using -MPSKit.jl at the end of this section. For more -details on the numerical implementation of routines for uniform MPS we refer to the Julia -version of the tutorials on uniform MPS, -which is again based on [Vanderstraeten et al., 2019].
-Contents
-The finite MPS representation introduced in the previous section can be readily extended to the thermodynamic limit by constructing a quantum state of an infinite spin system as a product of an infinite chain of tensors. For infinite systems which are invariant under translations, it is natural to also impose translation invariance on the corresponding MPS. This leads to a uniform MPS which has the same tensor \(A^{(i)} := A\) at every site, where \(A\) again has a physical dimension \(d\) and bond dimension \(D\). In diagrammatic notation, a uniform MPS can be represented as
- -Note
-In some cases, instead of assuming an MPS has the same tensor at each site it is more -natural to use a state with a non-trivial repeating unit cell. A uniform MPS with a unit -cell of size three would for example correspond to the state
- -While we will restrict our discussion to MPS with a single-site unit cell, most concepts and -techniques apply just as well to the multi-site unit cell case.
-One of the central objects when working with MPS in the thermodynamic limit is the transfer operator or -transfer matrix, defined in our case as
- -The transfer matrix corresponds to an operator acting on the space of \(D\times D\) matrices, -and can be interpreted as a 4-leg tensor \(\mathbb C^D \otimes \mathbb C^D \leftarrow \mathbb -C^D \otimes \mathbb C^D\). The transfer matrix can be shown to be a completely positive map, -such that its leading eigenvalue is a positive number. The eigenvalues of the transfer -matrix characterize the normalization and correlation length of a uniform MPS, while its -eigenvectors can be used to evaluate expectation values of local observables.
-The norm of a uniform MPS corresponds to a contraction of the form
- -Clearly, this norm is nothing more than an infinite product of MPS transfer matrices defined -above. Consider the spectral decomposition of the \(n\)th power \(\mathbb E^n\),
- -where \(l\) and \(r\) are the left and right fixed points which correspond to the largest -magnitude eigenvalue \(\lambda_0\) of \(\mathbb E\),
-and the \(\lambda_i\) represent the remaining eigenvalues of smaller magnitude, where in writing the spectral decomposition we have implicitly assumed that the fixed points are properly normalized as
- -Taking the limit \(n \to \infty\) of this spectral decomposition, it follows that the infinite product of transfer matrices reduces to a projector onto the fixed points corresponding to the leading eigenvalue \(\lambda_0\),
- -To ensure a properly normalized state we should therefore rescale the leading eigenvalue -\(\lambda_0\) to one by rescaling the MPS tensor as \(A \leftarrow A / \sqrt{\lambda_0}\).
-With these properties in place, the norm of an MPS reduces to the overlap between the boundary vectors and the fixed points. Since the boundary vectors have no effect on the bulk properties of the MPS, we can always choose them such that the MPS is properly normalized as \( \left \langle \psi(\bar{A})\middle | \psi(A) \right \rangle = 1\).
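These statements can be checked with plain Julia arrays, representing the MPS tensor as one \(D \times D\) matrix per physical index and the transfer matrix as a \(D^2 \times D^2\) matrix (a sketch of the convention, not MPSKit's internal representation):

```julia
using LinearAlgebra

D, d = 4, 2
A = [rand(ComplexF64, D, D) for _ in 1:d]   # one D×D matrix per physical index s

# Transfer matrix E = Σ_s A_s ⊗ conj(A_s), acting on the space of D×D matrices
transfer(A) = sum(kron(a, conj(a)) for a in A)

λ0 = maximum(abs, eigvals(transfer(A)))   # leading eigenvalue (positive for a CP map)

# Rescale A ← A / √λ0 so the transfer matrix has leading eigenvalue one
Anorm = [a / sqrt(λ0) for a in A]
```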
-The fixed points of the transfer matrix can for example be used to compute expectation values of -operators. Suppose we wish to evaluate expectation values of an extensive operator,
-If we assume that each \(O_n\) acts on a single site and we are working with a properly -normalized MPS, translation invariance dictates that the expectation value of \(O\) is given -by the contraction
- -In the uniform gauge, we can use the fixed points of the transfer matrix to contract -everything to the left and to the right of the operator, such that we are left with the -contraction
- -Correlation functions are computed similarly. Let us look at
-where \(m\) and \(n\) are arbitrary locations in the chain and, because of translation invariance, the correlation function only depends on the difference \(m-n\). Again, we contract everything to the left and right of the operators by inserting the fixed points \(l\) and \(r\), so that
- -From this expression, we learn that it is the transfer matrix that determines the -correlations in the ground state. Indeed, if we again use the spectral decomposition of the -transfer matrix, recalling that now \(\lambda_0 = 1\), we can see that the correlation -function reduces to
- -The first part is just the product of the expectation values of \(O^\alpha\) and \(O^\beta\), called the disconnected part of the correlation function, and the rest is an exponentially decaying part. This expression implies that connected correlation functions of an MPS always decay exponentially, which is one of the reasons why MPS generally have a harder time dealing with critical states. The correlation length \(\xi\) is determined by the second largest eigenvalue of the transfer matrix \(\lambda_1\) as \(\xi = -1/\log|\lambda_1|\).
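Numerically, the relation between the subleading eigenvalue and the correlation length is a one-liner (using a hypothetical subleading eigenvalue magnitude, with the leading eigenvalue normalized to one):

```julia
λ1 = 0.8                 # hypothetical subleading eigenvalue magnitude (λ0 = 1)
ξ = -1 / log(abs(λ1))    # correlation length: |λ1|^n = exp(-n / ξ)
```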
-Note
-The subleading eigenvalues of the transfer matrix typically also have a physical meaning, because they correspond to subleading correlations in the system. For example, by focusing on eigenvalues in a specific symmetry sector one can target the correlations associated with excitations corresponding to that particular symmetry. The subleading eigenvalues also play a crucial role in the powerful technique of finite entanglement scaling for infinite MPS [Rams et al., 2018]. Using this framework we can accurately capture critical phenomena using MPS, despite the ansatz inherently having exponentially decaying correlations.
-While a given MPS tensor \(A\) corresponds to a unique state \(\left | \psi(A) \right \rangle\), -the converse is not true, as different tensors may give rise to the same state. This is -easily seen by noting that the gauge transform
- -leaves the physical state invariant. We may use this freedom in parametrization to impose -canonical forms on the MPS tensor \(A\).
-We start by considering the left-orthonormal form of an MPS, which is defined in terms of -a tensor \(A_L\) that satisfies the condition
- -We can find the gauge transform \(L\) that brings \(A\) into this form
- -using an iterative procedure based on the QR decomposition, where starting from some initial guess \(L^0\) we repeatedly perform the QR-based update
- -This iterative procedure is bound to converge to a fixed point for which -\(L^{(i+1)}=L^{(i)}=L\) and \(A_L\) is left orthonormal by construction:
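A plain-array sketch of this iteration is given below, with the physical index grouped with the left virtual index before each QR step; the conventions here are illustrative and not those of MPSKit:

```julia
using LinearAlgebra

# Find L and a left-orthonormal A_L with L A ∝ A_L L by repeated QR updates.
# Indices of A: left virtual × physical × right virtual.
function left_orthonormalize(A::Array{<:Number,3}; tol=1e-8, maxiter=100_000)
    D, d = size(A, 1), size(A, 2)
    L = Matrix{eltype(A)}(I, D, D)
    for _ in 1:maxiter
        # Absorb L, group (left, physical), and QR-decompose
        F = qr(reshape(L * reshape(A, D, d * D), D * d, D))
        AL, Lnew = Matrix(F.Q), F.R
        # Fix the phase freedom (positive diagonal of R) so the fixed point is unique
        p = Diagonal(diag(Lnew) ./ abs.(diag(Lnew)))
        AL, Lnew = AL * p, p' * Lnew
        Lnew /= norm(Lnew)
        norm(Lnew - L) < tol && return reshape(AL, D, d, D), Lnew
        L = Lnew
    end
    error("left_orthonormalize: no convergence")
end

AL, L = left_orthonormalize(rand(ComplexF64, 4, 2, 4))
M = reshape(AL, 4 * 2, 4)   # M' * M ≈ I: Σ_s A_L[s]† A_L[s] = 1
```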
- -Note that this left gauge choice still leaves room for unitary gauge transformations
- -which can be used to bring the right fixed point \(r\) into diagonal form. Similarly, we can -find the gauge transform that brings \(A\) into right-orthonormal form
- -such that
- -and the left fixed point \(l\) is diagonal. A right-orthonormal tensor \(A_R\) and a matrix \(R\) -such that \(A R = R A_R\) can be found using a similar iterative procedure.
-Finally, we can define a mixed gauge for the uniform MPS by choosing one site, the ‘center -site’, and bringing all tensors to the left of it in the left-orthonormal form and all the -tensors to the right of it in the right-orthonormal form. Defining a new tensor \(A_C\) on the -center site, we obtain the form
- -By contrast, the original representation using the same tensor at every site is commonly -referred to as the uniform gauge. The mixed gauge has an intuitive interpretation. -Defining \(C = LR\), this tensor then implements the gauge transform that maps the -left-orthonormal tensor to the right-orthonormal one, thereby defining the center-site -tensor \(A_C\):
This relation is called the mixed gauge condition and allows us to freely move the center
tensor \(A_C\) through the MPS, linking the left- and right-orthonormal tensors.
Finally, we may bring \(C\) into diagonal form by performing a singular value decomposition
\(C = USV^\dagger\) and absorbing \(U\) and \(V^\dagger\) into the definition of \(A_L\) and
\(A_R\) using the residual unitary gauge freedom
In the mixed gauge, we can locate the center site where the operator acts; everything to its
left and right then contracts to the identity, and we arrive at the particularly simple
expression for the expectation value
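Concretely, once everything to the left and right has contracted to the identity, the expectation value reduces to a single contraction of \(A_C\) with the operator. A minimal NumPy sketch, with \(A_C\) stored as a (D, d, D) array and \(\sigma_z\) as an example operator:

```python
import numpy as np

rng = np.random.default_rng(1)
D, d = 4, 2
AC = rng.standard_normal((D, d, D)) + 1j * rng.standard_normal((D, d, D))
AC /= np.linalg.norm(AC)  # in the mixed gauge, <psi|psi> = ||A_C||^2 = 1

Z = np.array([[1.0, 0.0], [0.0, -1.0]])  # a local operator, here sigma_z

# <psi|O|psi> reduces to contracting A_C, O and the conjugate of A_C
ev = np.einsum('asb,st,atb->', AC, Z, AC.conj())

# sanity check: the identity operator reproduces the norm
assert np.isclose(np.einsum('asb,st,atb->', AC, np.eye(d), AC.conj()), 1.0)
```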
- -The mixed canonical form with a diagonal \(C\) now allows to straightforwardly write down a -Schmidt decomposition of the state across an arbitrary bond in the chain
where the states \(\left | \psi^i_L(A_L) \right \rangle\) and \(\left | \psi^i_R(A_R) \right
\rangle\) are orthogonal states on half the lattice. The diagonal elements \(C_i\) are exactly
the Schmidt coefficients of any bipartition of the MPS, and as such determine its bipartite
entanglement entropy
The mixed canonical form also enables efficient truncation of an MPS. The sum in the above
Schmidt decomposition can be truncated, giving rise to a new MPS with a reduced bond
dimension for that bond. This truncation is optimal in the sense that the overlap between the
original and the truncated MPS is maximized. To arrive at a translation-invariant truncated
MPS, we can truncate the columns of the absorbed isometries \(U\) and \(V^\dagger\)
correspondingly, thereby transforming every tensor \(A_L\) or \(A_R\). The truncated MPS in
the mixed gauge is then given by
- -We note that the resulting state based on this local truncation is not guaranteed to -correspond to the MPS with a lower bond dimension that is globally optimal. This would -require a variational optimization of the cost function.
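Both the entanglement entropy and the truncation error follow directly from the diagonal of \(C\). A small sketch with a made-up Schmidt spectrum:

```python
import numpy as np

# hypothetical Schmidt spectrum from the diagonal of C, normalized so sum C_i^2 = 1
C = np.array([0.8, 0.5, 0.3, 0.1, 0.05])
C /= np.linalg.norm(C)
p = C**2

# bipartite entanglement entropy S = -sum_i C_i^2 log C_i^2
S = -np.sum(p * np.log(p))

# truncating to the k largest Schmidt values incurs a two-norm error
k = 3
error = np.sqrt(np.sum(p[k:]))

assert S > 0 and error < 1
```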
-MPSKit.InfiniteMPS
The Julia package MPSKit.jl provides many tools
for working with infinite MPS. Without going into much detail, we can already check some
aspects of our discussion above with this numerical implementation.
We can construct an MPSKit.InfiniteMPS by specifying the physical and virtual vector spaces
of the MPS. We will use standard complex vector spaces as specified by a
TensorKit.ComplexSpace, and choose a physical dimension \(d = 3\) and bond dimension
\(D = 5\).
using MPSKit, TensorKit
-
-d = 3 # physical dimension
-D = 5 # bond dimension
-mps = InfiniteMPS(ℂ^d, ℂ^D)
-
single site InfiniteMPS:
-│ ⋮
-│ CR[1]: TensorMap(ℂ^5 ← ℂ^5)
-├── AL[1]: TensorMap((ℂ^5 ⊗ ℂ^3) ← ℂ^5)
-│ ⋮
-
The infinite MPS is automatically stored in the mixed canonical form introduced above. For -example, we can check that its normalization is indeed characterized by the center gauge -tensors \(A_C\) and \(C\).
-using LinearAlgebra
-
-@show norm(mps)
-@show norm(mps.AC[1])
-@show norm(mps.CR[1]);
-
norm(mps) = 0.9999999999999999
-norm(mps.AC[1]) = 0.9999999999999999
-norm(mps.CR[1]) = 1.0
-
We can also explicitly verify the mixed gauge conditions on \(A_L\), \(A_R\), \(A_C\) and
\(C\) by evaluating the corresponding tensor network diagrams using the
TensorOperations.@tensor macro.
using TensorOperations
-
-@tensor AL_id[-1; -2] := mps.AL[1][1 2; -2] * conj(mps.AL[1][1 2; -1])
-@tensor AR_id[-1; -2] := mps.AR[1][-1 1; 2] * conj(mps.AR[1][-2 1; 2])
-
-@assert AL_id ≈ id(space(mps.AL[1], 3)') "AL not in left-orthonormal form!"
-@assert AR_id ≈ id(space(mps.AR[1], 1)) "AR not in right-orthonormal form!"
-
-@tensor LHS[-1 -2; -3] := mps.AL[1][-1 -2; 1] * mps.CR[1][1; -3]
-@tensor RHS[-1 -2; -3] := mps.CR[1][-1; 1] * mps.AR[1][1 -2; -3]
-
-@assert LHS ≈ RHS && RHS ≈ mps.AC[1] "Center gauge MPS tensor not consistent!"
-
We can also easily evaluate the expectation value of local operators
-O = TensorMap(randn, ℂ^d ← ℂ^d)
-expectation_value(mps, O)
-
1-element Vector{ComplexF64}:
- 0.12559307141051923 + 0.10955622901083523im
-
as well as compute the correlation length encoded in the MPS.
-correlation_length(mps)
-
0.40097713703530957
-
MPSKit.jl exports a variety of infinite MPS algorithms, some of which will be discussed in -the next section.
-Matrix Product Operators and Applications
If Matrix Product States are a tensor network way of representing quantum states in one
dimension, we can similarly use tensor networks to represent the operators that act on
these states. Matrix Product Operators (MPOs) form a structured and convenient description
of such operators, one that can capture most (if not all) relevant operators. Additionally,
they also form a natural way of representing the transfer matrix of a 2D statistical
mechanical system, and can even be used to study higher-dimensional systems by mapping them
to quasi-1D systems.
-In this lecture, we will discuss the construction of MPOs, as well as showcase their use -through MPSKit.jl and -MPSKitModels.jl.
-using TensorKit
-using MPSKit
-using MPSKitModels
-
In general, an MPO is a chain of tensors, where each tensor has two physical indices and two -virtual indices:
- -Before discussing one-dimensional transfer matrices, let us first consider how partition -functions of two-dimensional classical many-body systems can be naturally represented as a -tensor network. To this end, consider the partition function of the -classical Ising model,
-where \(s_i\) denotes a configuration of spins, and \(H(\{s_i\})\) is the corresponding -energy, as determined by the Hamiltonian:
-where the first sum is over nearest neighbors.
-As the expression for the partition function is an exponential of a sum, we can also write -it as a product of exponentials, which can be reduced to the following network:
- -Here, the black dots at the vertices represent Kronecker \(\delta\)-tensors,
- -and the matrices \(t\) encode the Boltzmann weights associated to each nearest-neighbor interaction,
It is then simple, albeit somewhat involved, to check that contracting this network gives
rise to the partition function, where the sum over all configurations is converted into the
summations in the contractions of the network. Finally, it is more common to absorb the edge
tensors into the vertex tensors by explicitly contracting them, such that the remaining
network consists of tensors at the vertices only:
- -Note
Because there are two edges per vertex, an intuitive way of absorbing the edge tensors is to
absorb, for example, the left and bottom edge tensors into the vertex tensor. However, this
leads to a slightly asymmetric form, and more commonly the square root \(q\) of the Boltzmann
matrices is taken, such that each vertex tensor absorbs such a factor from each of the
edges, resulting in a rotation-invariant form.
- -In particular, the construction of the operator that makes up the MPO can be achieved in a -few lines of code, through the use of TensorKit:
-β = 1.0
-
-# construct edge tensors
-t = TensorMap(ComplexF64[exp(β) exp(-β); exp(-β) exp(β)], ℂ^2, ℂ^2)
-q = sqrt(t)
-
-# construct vertex tensors
-δ = TensorMap(zeros, ComplexF64, ℂ^2 ⊗ ℂ^2, ℂ^2 ⊗ ℂ^2)
-δ[1, 1, 1, 1] = 1.0
-δ[2, 2, 2, 2] = 1.0
-
-# absorb edge tensors
-@tensor O[-1 -2; -3 -4] := δ[1 2; 3 4] * q[-1; 1] * q[-2; 2] * q[3; -3] * q[4; -4]
-
TensorMap((ℂ^2 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^2)):
-[:, :, 1, 1] =
- 7.321388457312336 + 0.0im 0.49999999999999895 + 0.0im
- 0.49999999999999895 + 0.0im 0.06766764161830616 + 0.0im
-
-[:, :, 2, 1] =
- 0.49999999999999906 + 0.0im 0.06766764161830614 + 0.0im
- 0.06766764161830614 + 0.0im 0.49999999999999906 + 0.0im
-
-[:, :, 1, 2] =
- 0.499999999999999 + 0.0im 0.06766764161830616 + 0.0im
- 0.06766764161830616 + 0.0im 0.49999999999999906 + 0.0im
-
-[:, :, 2, 2] =
- 0.06766764161830616 + 0.0im 0.499999999999999 + 0.0im
- 0.499999999999999 + 0.0im 7.321388457312336 + 0.0im
-
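We can cross-check this construction against a brute-force sum over spin configurations. The following NumPy sketch (independent of the Julia code above) builds the same vertex tensor and contracts a small 2×2 torus of them:

```python
import itertools
import numpy as np

beta = 1.0
t = np.array([[np.exp(beta), np.exp(-beta)],
              [np.exp(-beta), np.exp(beta)]])

# matrix square root of the (symmetric, positive definite) Boltzmann matrix
w, V = np.linalg.eigh(t)
q = V @ np.diag(np.sqrt(w)) @ V.T
assert np.allclose(q @ q, t)

# vertex tensor: Kronecker delta with a factor q absorbed on each leg
O = np.einsum('us,ls,sd,sr->uldr', q, q, q, q)

# contract a 2x2 torus of vertex tensors
Z_tn = np.einsum('abcd,edfb,cgah,fheg->', O, O, O, O)

# brute-force partition function: 8 bonds on the 2x2 torus (each pair of
# neighbours is connected twice because of the periodic wrapping)
Z_exact = sum(
    np.exp(beta * 2 * (s00 * s01 + s10 * s11 + s00 * s10 + s01 * s11))
    for s00, s01, s10, s11 in itertools.product([1, -1], repeat=4)
)
assert np.isclose(Z_tn, Z_exact)
```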
In order to then evaluate the partition function, we can use the
Transfer-matrix method, which is a
technique that splits the two-dimensional network into rows (or columns) of so-called
transfer matrices, which are already represented as MPOs. In fact, this method has even led
to the famous exact solution of the two-dimensional Ising model by Onsager
[Onsager, 1944].
In the context of tensor networks, this technique is even useful beyond exactly solvable
cases, as efficient algorithms exist to determine the product of an MPO with an MPS in an
approximate manner. This allows us to efficiently split the computation of the partition
function into a sequence of one-dimensional contractions, thus reducing the complexity of the
problem by solving it layer by layer.
-Importantly, this technique is not limited to finite systems, and in fact allows for the -computation of the partition function of systems directly in the thermodynamic limit, -alleviating the need to consider finite-size effects and extrapolation techniques. The key -insight that allows for this is that the partition function may be written as
-where \(T\) is the row-to-row transfer matrix, and \(N\) is the number of rows (or columns) in -the network. If we then consider the spectral decomposition of the transfer matrix, we can -easily show that as the number of rows goes to infinity, the largest eigenvalue of the -transfer matrix dominates, and the partition function is given by
-where \(\lambda_{\mathrm{max}}\) is the largest eigenvalue of the transfer matrix, and -\(\ket{\psi}\) is the corresponding (MPS) eigenvector. In other words, the partition function -can be computed if it is possible to find the largest eigenvalue of the transfer matrix, for -which efficient algorithms exist.
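The dominance of the largest eigenvalue is easy to see numerically. The following sketch uses a small dense positive matrix as a stand-in for the transfer matrix and checks that the free energy per row converges to \(\log \lambda_{\mathrm{max}}\):

```python
import numpy as np

rng = np.random.default_rng(2)
# a random positive matrix as a stand-in transfer matrix; Perron-Frobenius
# guarantees a unique dominant eigenvalue
T = np.abs(rng.standard_normal((6, 6))) + 1e-3
lam = np.max(np.abs(np.linalg.eigvals(T)))

# Z = Tr(T^N), computed stably by rescaling with the dominant eigenvalue
N = 200
logZ = N * np.log(lam) + np.log(np.trace(np.linalg.matrix_power(T / lam, N)))

# the free energy per row converges to log(lambda_max)
assert np.isclose(logZ / N, np.log(lam), atol=1e-6)
```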
-For example, one can resort to many types of boundary MPS techniques -[Zauner-Stauber et al., 2018], which are a generic class of algorithms to -numerically solve these kinds of problems. In particular, they all rely on an efficient way -of finding an (approximate) solution to the following problem:
In order to compute relevant quantities for such systems, we can verify that the expectation
value of an operator \(O\) is given by weighing the value of that operator for a given
microstate with the probability of that microstate:
For a local operator \(O_i\), this can again be written as a tensor network, where a single
Kronecker tensor at a vertex is replaced with a tensor measuring the operator, after which
the remaining edge tensors are absorbed:
- -For example, in the case of the magnetisation \(O = \sigma_z\), the tensor \(M\) can be -explicitly constructed as follows:
-Z = TensorMap(ComplexF64[1.0 0.0; 0.0 -1.0], ℂ^2, ℂ^2)
-@tensor M[-1 -2; -3 -4] := δ[1 2; 3 4] * Z[4; 5] * q[-1; 1] * q[-2; 2] * q[3; -3] * q[5; -4]
-
TensorMap((ℂ^2 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^2)):
-[:, :, 1, 1] =
- 7.3210757428908035 + 0.0im 0.4953999296304103 + 0.0im
- 0.4953999296304103 + 0.0im -1.0097568878600302e-17 + 0.0im
-
-[:, :, 2, 1] =
- 0.4953999296304104 + 0.0im -3.8901691517473414e-18 + 0.0im
- -3.8901691517473414e-18 + 0.0im -0.49539992963041035 + 0.0im
-
-[:, :, 1, 2] =
- 0.49539992963041035 + 0.0im 3.993515140140208e-18 + 0.0im
- 3.993515140140208e-18 + 0.0im -0.4953999296304104 + 0.0im
-
-[:, :, 2, 2] =
- 3.993515140140208e-18 + 0.0im -0.4953999296304103 + 0.0im
- -0.4953999296304103 + 0.0im -7.3210757428908035 + 0.0im
-
Using this network, the expectation value can be computed by first contracting the top and -bottom part, replacing them by their fixed-point MPS representations, and then contracting -the remaining MPS-MPO-MPS sandwich. This is achieved by similarly contracting the left and -right part, replacing them by their fixed-point tensors, which are commonly called the -environments \(G_L\) and \(G_R\), respectively. The final resulting network is then just a -local network, which can be contracted efficiently.
- -Note
This process of sequentially reducing the dimensionality of the network can even be further
extended, where 3D systems can be studied by first determining a 2D boundary PEPS, for which
a 1D boundary MPS can be determined, which admits 0D boundary tensors. These kinds of
algorithms are commonly referred to as boundary methods.
-For quantum systems in one spatial dimension, the construction of MPOs boils down to the -ability to write a sum of local operators in MPO-form. The resulting operator has a very -specific structure, and is often referred to as a Jordan block MPO.
-For example, if we consider the -Transverse-field Ising model,
-it can be represented as an MPO through the (operator-valued) matrix,
-along with the boundary vectors,
-The Hamiltonian on \(N\) sites is then given by the contraction
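To make this concrete, the sketch below (a NumPy illustration, assuming the sign convention \(H = -J\sum_i Z_i Z_{i+1} - h \sum_i X_i\)) builds such a Jordan block MPO and checks that contracting it reproduces the dense Hamiltonian on a small chain:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
J, h = 1.0, 0.5

# Jordan block MPO for H = -J sum_i Z_i Z_{i+1} - h sum_i X_i
W = np.zeros((3, 3, 2, 2))
W[0, 0] = I2
W[0, 1] = -J * Z
W[0, 2] = -h * X
W[1, 2] = Z
W[2, 2] = I2

def mpo_to_dense(W, N):
    """Contract v_l W ... W v_r into a dense (2^N, 2^N) Hamiltonian."""
    M = W[0]                              # left boundary vector selects row 0
    for _ in range(N - 1):
        M = np.einsum('aij,abkl->bikjl', M, W)
        M = M.reshape(3, M.shape[1] * 2, M.shape[3] * 2)
    return M[2]                           # right boundary selects the last column

def kron_chain(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

N = 3
H = sum(-J * kron_chain([Z if k in (n, n + 1) else I2 for k in range(N)])
        for n in range(N - 1))
H = H + sum(-h * kron_chain([X if k == n else I2 for k in range(N)])
            for n in range(N))

assert np.allclose(mpo_to_dense(W, N), H)
```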
-Note
-While the above example can be constructed from building blocks that are strictly local -operators, this is not always the case, especially when symmetries are involved. In those -cases, the elements of the matrix \(W\) have additional virtual legs that are contracted -between different sites.
An intuitive approach to constructing such MPOs is to consider the sum of local
terms as generated by a
finite-state machine. This is a
mathematical model of computation that consists of a finite set of states, and a set of
transitions between those states. In the context of MPOs, this is realised by associating
each virtual level with a state, and each transition then corresponds to applying a local
operator. In that regard, the MPO is a representation of the state of the finite-state
machine, and the matrix \(W\) is the transition matrix of the machine.
-In general, the matrix \(W\) can then be thought of as a block matrix with entries
-which corresponds to the finite-state diagram:
- -It can then be shown that this MPO generates all single-site local operators \(D\), two-site -operators \(CB\), three-site operators \(CAB\), and so on. In other words, the MPO is a -representation of the sum of all local operators, and by carefully extending the structure -of the blocks \(A\), \(B\), \(C\), and \(D\), it is possible to construct MPOs that represent sums -of generic local terms, and even approximate long-range interactions by a sum of -exponentials.
To gain a bit more understanding of this, we can use the following code to reconstruct the
total sum of local terms, starting from the Jordan block MPO construction:
-using Symbolics
-
-L = 4
-# generate W matrices
-@variables A[1:L] B[1:L] C[1:L] D[1:L]
-Ws = map(1:L) do l
- return [1 C[l] D[l]
- 0 A[l] B[l]
- 0 0 1]
-end
-
-# generate boundary vectors
-Vₗ = [1, 0, 0]'
-Vᵣ = [0, 0, 1]
-
-# expand the MPO
-expand(Vₗ * prod(Ws) * Vᵣ)
-
In order to compute expectation values of such MPOs, we can use the same technique as -before, and sandwich the MPO between two MPSs.
However, care must be taken when the goal is to determine a local expectation value density,
as this is not necessarily well-defined. In fact, the MPO represents the sum of all local
terms, and sandwiching it will always lead to the total energy. In order to consistently
define local contributions, a choice must be made as to how to distribute this among the
sites. For example, even in the case of two-site local operators, it is unclear whether this
local expectation value should be attributed to the left or the right site, or even split
between both sites. In the implementation of MPSKit, the chosen convention is to distribute
the expectation value evenly among its starting and ending point, in order to not overcount
contributions of long-range interactions.
-Typically this is achieved by renormalizing the environment tensors in a particular way, -such that then local expectation values can be obtained by either contracting the first row -of \(W\) with the right regularized environment, or the last column of \(W\) with the left -regularized environment. This respectively yields the expectation value of all terms -starting at that site, or all terms ending at that site.
Again, it can prove instructive to write this out explicitly for some small examples to gain
some intuition. Doing this programmatically, we get all terms starting at some site as
follows:
-Ws_reg_right = Ws .- Ref([1 0 0; 0 0 0; 0 0 0])
-expand(Vₗ * Ws_reg_right[end-2] * Ws_reg_right[end-1] * Ws_reg_right[end] * Vᵣ)
-
and similarly all terms ending at some site as follows:
-Ws_reg_left = Ws .- Ref([0 0 0; 0 0 0; 0 0 1])
-expand(Vₗ * Ws_reg_left[1] * Ws_reg_left[2] * Ws_reg_left[3] * Vᵣ)
-
In the thermodynamic limit, the same MPO construction can be used to represent the infinite -sum of local terms. However, special care must be taken when considering expectation values, -as now only local expectation values are well-defined, and the total energy diverges with -the system size.
-This is achieved by considering the same regularization of the environment tensors, such -that the divergent parts are automatically removed. This construction can be found in more -detail in [Hubig et al., 2017].
-Finally, it is worth noting that the MPO construction can also be used to study -two-dimensional systems, by mapping them to quasi-one-dimensional systems. This is typically -achieved by imposing periodic boundary conditions in one of the spatial directions, and then -snaking an MPS through the resulting lattice. In effect, this leads to a one-dimensional -model with longer-range interactions, which can then be studied using the standard MPS -techniques. However, the -no free lunch theorem applies here as -well, and the resulting model will typically require a bond dimension that grows -exponentially with the periodic system size, in order to achieve the area law of -entanglement in two-dimensional systems.
-@mpoham
Macro
While the above construction of MPOs is quite general, it is also quite cumbersome to
construct manually, especially when dealing with complicated lattices or non-trivial unit
cells. To this end, the package
MPSKitModels.jl offers a convenient way of
constructing these MPOs automatically, by virtue of the @mpoham macro. This macro allows
for the construction of MPOs by specifying the local operators that are present in the
Hamiltonian, and the lattice on which they act. For example, we can construct the MPO for
the Heisenberg model with nearest- or next-nearest-neighbor interactions as follows:
J₁ = 1.2
-SS = S_exchange() # predefined operator in MPSKitModels
-
-lattice = InfiniteChain(1)
-H₁ = @mpoham begin
- sum(J₁ * SS{i, j} for (i, j) in nearest_neighbours(lattice))
-end
-
MPOHamiltonian{ComplexSpace, TrivialTensorMap{ComplexSpace, 2, 2, Matrix{ComplexF64}}, ComplexF64}(MPSKit.SparseMPOSlice{ComplexSpace, TrivialTensorMap{ComplexSpace, 2, 2, Matrix{ComplexF64}}, ComplexF64}[[TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 1.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 1.0 + 0.0im
- TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.6 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 2] =
- 0.0 + 0.0im 0.6 + 0.0im
-
-[:, :, 2, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 3] =
- -0.42426406871192834 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 3] =
- 0.0 + 0.0im 0.42426406871192845 + 0.0im
- TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-; TensorMap((ℂ^3 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- TensorMap((ℂ^3 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- TensorMap((ℂ^3 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 1.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- -0.7071067811865476 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- 1.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.7071067811865475 + 0.0im
-; TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
- TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 1.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 1.0 + 0.0im
-]])
-
lattice = InfiniteCylinder(4)
-H₂ = @mpoham begin
- sum(J₁ * SS{i, j} for (i, j) in nearest_neighbours(lattice))
-end
-
MPOHamiltonian{ComplexSpace, TrivialTensorMap{ComplexSpace, 2, 2, Matrix{ComplexF64}}, ComplexF64}(MPSKit.SparseMPOSlice{ComplexSpace, TrivialTensorMap{ComplexSpace, 2, 2, Matrix{ComplexF64}}, ComplexF64}[[TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 1.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 1.0 + 0.0im
- TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.6 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 2] =
- 0.0 + 0.0im 0.6 + 0.0im
-
-[:, :, 2, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 3] =
- -0.42426406871192834 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 3] =
- 0.0 + 0.0im 0.42426406871192845 + 0.0im
- … TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.6 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 2] =
- 0.0 + 0.0im 0.6 + 0.0im
-
-[:, :, 2, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 3] =
- -0.42426406871192834 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 3] =
- 0.0 + 0.0im 0.42426406871192845 + 0.0im
- TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-; TensorMap((ℂ^3 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- TensorMap((ℂ^3 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- … TensorMap((ℂ^3 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- TensorMap((ℂ^3 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 1.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- -0.7071067811865476 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- 1.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.7071067811865475 + 0.0im
-; … ; TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
- … TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
- TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-; TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
- … TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
- TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 1.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 1.0 + 0.0im
-], [TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 1.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 1.0 + 0.0im
- TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
- … TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
- TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-; TensorMap((ℂ^3 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- TensorMap((ℂ^3 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^3)):
-[:, :, 1, 1] =
- 1.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 1.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
- 1.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 1.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 1.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 1.0 + 0.0im
- … TensorMap((ℂ^3 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- TensorMap((ℂ^3 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-; … ; TensorMap((ℂ^3 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- TensorMap((ℂ^3 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- … TensorMap((ℂ^3 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^3)):
-[:, :, 1, 1] =
- 1.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 1.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
- 1.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 2] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 1.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 1.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 3] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 1.0 + 0.0im
- TensorMap((ℂ^3 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-; TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 1, 2] =
- ⋮ (output truncated: the remaining sparse-MPO slices are further TensorMap blocks of the
-same form, almost all of them zero)
-]])
-
-J₂ = 0.8
-lattice = InfiniteCylinder(4)
-H₃ = @mpoham begin
- sum(J₁ * SS{i, j} for (i, j) in nearest_neighbours(lattice)) +
- sum(J₂ * SS{i, j} for (i, j) in next_nearest_neighbours(lattice))
-end
-
-MPOHamiltonian{ComplexSpace, TrivialTensorMap{ComplexSpace, 2, 2, Matrix{ComplexF64}}, ComplexF64}(
- ⋮ (output truncated: sparse MPO slices printed as TensorMap blocks, containing identity
-blocks together with the weighted spin-coupling entries)
-]])
-
-In conclusion, Matrix Product Operators are a powerful tool to represent quantum operators as well as transfer matrices. They allow for efficient and versatile expressions of expectation values, and form the building block for many tensor network algorithms, both in (1+1) and (2+0) dimensions, as well as in higher dimensions.
-Matrix Product States
-Having introduced tensor network states in general in the previous section, we now turn to the specific case of a single spatial dimension. In this setting we will discuss matrix product states and their properties.
-We work in the same setting as the section on tensor network states, where now our physical spins of local physical dimension \(d\) are laid out on a linear chain with \(N\) sites. As before, we can decompose the full tensor of coefficients \(C\) of a given quantum state \(\ket{\psi}\) into a network of local tensors by consecutive SVDs, where each time we only keep some suitable number of singular values \(D_i\) for every cut. For the case of \(N=4\) we can explicitly carry out this full procedure in the following way:
-Absorbing the bond tensors \(\lambda^{(i)}\) into the neighboring site tensor, we end up with a matrix product state (MPS),
-Once again, the horizontal edges connecting the different site tensors are called virtual bonds, and the dimension \(D_i\) of these bonds is called the bond dimension. This bond dimension controls the precision of our low-rank approximation, where in the limit of increasing bond dimension an MPS can approximate any quantum state to arbitrary precision. However, in accordance with our previous discussion we expect to be able to describe low-energy states of local Hamiltonians using a maximal bond dimension \(D\) that scales with the boundary of the system, which in one dimension is just constant in the system size.
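The successive-SVD construction described above can be sketched numerically. The following is an illustrative NumPy stand-in (the tutorial itself works in Julia; all tensor names and shapes here are assumptions for the sketch): it splits a random four-site coefficient tensor into MPS tensors by repeated SVDs, keeps all singular values so the decomposition is exact, and checks that contracting the chain reproduces the original state.

```python
import numpy as np

d, N = 2, 4
rng = np.random.default_rng(0)
C = rng.normal(size=(d,) * N) + 1j * rng.normal(size=(d,) * N)

tensors = []
R = C.reshape(1, d**N)                 # remainder, with a trivial left virtual leg
for site in range(N - 1):
    Dl = R.shape[0]
    # split (left virtual leg + this site's physical leg) from the rest of the chain
    U, s, Vh = np.linalg.svd(R.reshape(Dl * d, -1), full_matrices=False)
    tensors.append(U.reshape(Dl, d, -1))   # site tensor with legs (D_left, d, D_right)
    R = np.diag(s) @ Vh                    # absorb singular values towards the right
tensors.append(R.reshape(R.shape[0], d, 1))  # last site gets a trivial right leg

# contract the chain back together and compare with the original coefficients
psi = tensors[0]
for A in tensors[1:]:
    psi = np.tensordot(psi, A, axes=(psi.ndim - 1, 0))
assert np.allclose(psi.reshape((d,) * N), C)
```

Truncating each `s` to its largest \(D\) values instead of keeping all of them turns this exact rewriting into the low-rank approximation discussed above.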
-To see where the ‘matrix product’ in the name comes from, we can write out such a state in a more explicit form as
-Interpreting the \(A_{s_i}^{(i)}\) as \(D \times D\) matrices, we indeed recognize that the corresponding coefficient in the computational basis is given by their matrix product. This becomes even more explicit if we consider periodic boundary conditions, identifying site \(N+1\) with site \(1\), where we can write a corresponding MPS
-as
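The periodic case can be made concrete in a minimal NumPy sketch (illustrative, with random tensors standing in for an actual state): each basis coefficient is the trace of a product of the \(D \times D\) matrices selected by the physical indices.

```python
import numpy as np

d, D, N = 2, 3, 4
rng = np.random.default_rng(1)
# one collection of D x D matrices per site: A[i][s] is the matrix for physical index s
A = [rng.normal(size=(d, D, D)) for _ in range(N)]

def coefficient(spins):
    """Coefficient <s1 s2 s3 s4|psi> of a periodic MPS as a trace of a matrix product."""
    M = A[0][spins[0]]
    for i in range(1, N):
        M = M @ A[i][spins[i]]
    return np.trace(M)

# cross-check one coefficient against an explicit contraction of the ring of tensors
s = (0, 1, 1, 0)
brute = np.einsum('ab,bc,cd,da->', A[0][s[0]], A[1][s[1]], A[2][s[2]], A[3][s[3]])
assert np.isclose(coefficient(s), brute)
```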
-This class of states can be used as a variational ansatz to, for example, perform time evolution or find ground states within the manifold of MPS of a given bond dimension \(D\). Before moving on to MPS algorithms, we first summarize some of the key properties that make these states so easy to work with.
-Note
-Alternatively, one could introduce MPS as a so-called projected entangled-pair state (PEPS) in the following way. Consider again our chain of length \(N\), where we now place two ancillary spins of dimension \(D\) on every site. We then maximally entangle each of these ancillary spins with the corresponding spin on the neighboring site, resulting in a chain of entangled pairs of the form \(\ket{\phi}=\sum_{j=0}^{D-1}\ket{j,j}\). Finally, we project the two spins at each site onto the local physical Hilbert space of dimension \(d\), resulting in a state
-If we write the projectors \(\mathcal{P}^{(i)}: \mathbb{C}^D \otimes \mathbb{C}^D \rightarrow \mathbb{C}^d\) as
-you should be able to see that the resulting state is precisely an MPS with tensors \(A^{(i)}\). This construction originates from ideas in quantum information theory regarding entanglement and teleportation. While it may seem a bit involved, this construction has a very natural generalization to two and more spatial dimensions, where it has been put to extensive use.
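As a sanity check of this construction, the following hedged NumPy sketch builds the maximally entangled ancilla pairs explicitly for \(N = 3\) sites with periodic boundaries, applies a random projector at every site (standing in for the \(\mathcal{P}^{(i)}\)), and confirms that the resulting coefficients agree with the matrix-product (trace) formula.

```python
import numpy as np

d, D, N = 2, 3, 3
rng = np.random.default_rng(2)
P = [rng.normal(size=(d, D, D)) for _ in range(N)]  # projectors P[i][s, left, right]

# maximally entangled pair |phi> = sum_j |j, j> between neighbouring ancillas
phi = np.eye(D)  # phi[right ancilla of site i, left ancilla of site i+1]

# state of the six ancillas: pairs (R1,L2), (R2,L3), (R3,L1),
# stored with index order (L1, R1, L2, R2, L3, R3)
pairs = np.einsum('ab,cd,ef->fabcde', phi, phi, phi)

# project each ancilla pair onto the physical spin: coefficients C[s1,s2,s3]
C = np.einsum('iab,jcd,kef,abcdef->ijk', P[0], P[1], P[2], pairs)

# the same coefficients from the matrix-product (trace) formula
C_mps = np.einsum('iab,jbc,kca->ijk', P[0], P[1], P[2])
assert np.allclose(C, C_mps)
```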
-As a first property we study the entanglement structure of an MPS. Consider an MPS with a -fixed bond dimension \(D\). We can choose any bond of the MPS to make a bipartition of the -system, where all sites to the left of the bond belong to one subsystem and all sites to the -right of the bond belong to the other subsystem. Performing an SVD across this bond, -just as we have done in our initial construction, gives us a corresponding set of singular -values \(s_i\). As we have seen before, the squares of these singular values -are the Schmidt coefficients of the bipartition, and make up the entanglement spectrum of -the MPS. The entanglement spectrum of a given low-energy state encodes many interesting -properties of the corresponding system, and can for example be used to recognize symmetries -and detect phase transitions.
-The same Schmidt coefficients can be used to compute the bipartite entanglement entropy -across the cut as
For one-dimensional states which obey an area law, the maximum entropy across a cut is bounded by a constant. Again we see that by increasing the bond dimension sufficiently, we will always be able to saturate this value of \(S\), confirming our previous statement that MPS faithfully capture these states. Furthermore, even for states that do not satisfy an area law, such as critical states, carefully relating entanglement properties such as the entanglement entropy to the correlation length (which are then both controlled by the bond dimension) allows one to derive many key properties of the critical system.
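To make this concrete, here is a small plain-Julia sketch (using only `LinearAlgebra`, independent of MPSKit; the state and the cut are arbitrary choices) that extracts the Schmidt coefficients of a bipartition via an SVD and evaluates the entanglement entropy:

```julia
using LinearAlgebra

# Bipartition a random normalized state on N = 8 sites of dimension d = 2 down the middle
d, N = 2, 8
ψ = normalize(randn(ComplexF64, d^N))

# Schmidt decomposition across the central cut: reshape into a matrix and take the SVD
M = reshape(ψ, d^(N ÷ 2), d^(N ÷ 2))
s = svdvals(M)          # singular values sᵢ
p = s .^ 2              # Schmidt coefficients, Σᵢ sᵢ² = 1

# Bipartite entanglement entropy S = -Σᵢ sᵢ² log sᵢ²
S = -sum(x -> x ≤ 0 ? zero(x) : x * log(x), p)
```

For a random state \(S\) will lie close to its maximal value \(\frac{N}{2}\log d\); an MPS of bond dimension \(D\) across the cut instead caps the number of nonzero Schmidt coefficients at \(D\).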
Not all distinct MPS describe different physical states. One can perform a set of transformations on an MPS that change the local tensors but leave the physical state unchanged. These transformations are called gauge transformations. The freedom to choose a particular MPS representation when describing a given physical state is also called a gauge freedom. The gauge transformations that leave an MPS invariant are given by basis transforms on the virtual level. Given such a basis transform \(M\) as a \(D\times D\) invertible matrix, it is easy to see that the following procedure relates two different MPS which encode the same physical state,
- -where the \(B^{(i)}\) tensors are obtained by absorbing \(M^{-1}\) and \(M\) on the left and right -virtual index of \(A^{(i)}\) respectively.
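The gauge freedom is easy to verify numerically. The following plain-Julia sketch (dimensions and tensors are arbitrary choices, not MPSKit code) checks that absorbing \(M^{-1}\) and \(M\) on the virtual legs leaves every coefficient of a small periodic MPS unchanged:

```julia
using LinearAlgebra

d, D = 2, 3
A = [randn(ComplexF64, D, D) for _ in 1:d]   # A[s] is the matrix A_s
M = randn(ComplexF64, D, D)                  # invertible with probability one
B = [inv(M) * A[s] * M for s in 1:d]         # absorb M⁻¹ and M on the virtual legs

# Coefficients of a periodic 3-site MPS: ψ_{s₁s₂s₃} = Tr(A_{s₁} A_{s₂} A_{s₃})
coeff(T, s) = tr(prod(T[sᵢ] for sᵢ in s))

for s in Iterators.product(1:d, 1:d, 1:d)
    @assert coeff(A, s) ≈ coeff(B, s)        # the physical state is unchanged
end
```

The invariance follows from cyclicity of the trace: every internal \(M M^{-1}\) pair cancels.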
-Since a physical state can be described by many different MPS by virtue of the gauge -freedom, one can wonder if the same state can be described by two MPS that are not related -by a gauge transform. This question is answered by the fundamental theorem of MPS, which -states that any two translationally invariant MPS are equal if and only if their tensors are -related by a gauge transform. So the gauge freedom is the only freedom we have in describing -a physical state by an MPS.
From a more practical point of view, this gauge freedom is exploited in various algorithms by making use of so-called canonical forms. One common canonical form, for example, is the left-canonical form, where we gauge transform a given MPS tensor \(A\) to a different tensor \(A_L\) which satisfies the condition \(\sum_{s_i=0}^{d-1}(A_L)^\dagger_{s_i}(A_L)_{s_i} = \mathbb 1_{D\times D}\), or in pictures,
A similar form can be imposed to obtain the right-canonical form. In a finite MPS, we can bring the tensors into left- or right-canonical form as we wish by factorizing them into appropriate isometries using the QR or polar decomposition. This moving of the gauge center has proved essential in finite MPS algorithms, as we will see later.
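A minimal sketch of imposing the left-canonical form with a QR factorization (plain Julia, with the left virtual and physical legs fused into the row index; dimensions are arbitrary):

```julia
using LinearAlgebra

d, D = 2, 4
# MPS tensor with the left virtual and physical legs fused: a (D*d) × D matrix
A = randn(ComplexF64, D * d, D)

# QR factorization A = A_L R: A_L is an isometry, R is absorbed into the tensor to the right
F = qr(A)
A_L, R = Matrix(F.Q), F.R

# Left-canonical condition Σₛ (A_L)ₛ† (A_L)ₛ = 𝟙
@assert A_L' * A_L ≈ Matrix(I, D, D)
@assert A_L * R ≈ A          # the physical state is untouched
```

Sweeping this factorization site by site through the chain (and its mirror image with an LQ/RQ factorization) produces the left- and right-canonical tensors used below.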
-Another key feature of MPS is that expectation values of local operators can be computed -efficiently. Consider an operator \(O_i\) acting on site \(i\) of a state given by an MPS -\(\ket{\psi[A]}\). To calculate this expectation value we introduce the object
- -which we will call the \(O\)-transfer matrix. In the case of \(O=\mathbb 1\) we just call it the -transfer matrix. For a finite MPS with periodic boundary conditions, this can be written -succinctly as
-This quantity can be evaluated -efficiently by contracting the corresponding diagram from end to end.
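As a consistency check, the following sketch (plain Julia, arbitrary small dimensions) verifies that the norm of a periodic MPS computed by brute force agrees with the trace of the \(N\)-th power of the transfer matrix:

```julia
using LinearAlgebra

d, D, N = 2, 2, 4
A = [randn(ComplexF64, D, D) for _ in 1:d]

# Transfer matrix 𝔼 = Σₛ Āₛ ⊗ Aₛ acting on the doubled virtual space
E = sum(kron(conj(A[s]), A[s]) for s in 1:d)

# Norm of a periodic N-site MPS computed two ways:
# brute force, Σₛ |Tr(A_{s₁}⋯A_{s_N})|², versus the transfer matrix, Tr(𝔼ᴺ)
brute = sum(abs2(tr(prod(A[sᵢ] for sᵢ in s)))
            for s in Iterators.product(ntuple(_ -> 1:d, N)...))
@assert brute ≈ tr(E^N)
```

The brute-force sum contains \(d^N\) terms, while the transfer-matrix contraction costs only \(N\) multiplications of \(D^2 \times D^2\) matrices; this gap is exactly why end-to-end contraction is efficient.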
The method to calculate expectation values can easily be generalized to calculate two-point correlation functions of an MPS. Consider two operators \(O_i\) and \(Q_j\) acting on sites \(i\) and \(j\) respectively. The correlator between these two operators is denoted by \(\braket{\psi|O_i Q_j|\psi}\), and corresponds to the contraction
- -Adopting the transfer matrix notation this becomes
-The dependence of the correlator on \(i\) and \(j\), for large separations between the two, is -dominated by the factor \(\mathbb{E}\), taken to some large power. This operator is in fact a -completely positive map (CP map), and one can always normalize the state \(\ket{\psi}\) such -that the dominant eigenvalue of \(\mathbb{E}\) is 1 and the others lie within the unit disk. -Making use of the spectral decomposition, we can show that the exponential dependence on the -eigenvalues corresponds to an exponential decay in the correlation functions, meaning that -an MPS always has exponentially decaying correlations. We will return to this point in more -detail when considering infinite systems in the next section.
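The following sketch illustrates this point: diagonalizing the transfer matrix built from a random tensor gives a spectrum whose subleading eigenvalue sets a finite correlation length (after normalizing the leading eigenvalue to 1; all dimensions are arbitrary choices):

```julia
using LinearAlgebra

d, D = 2, 4
A = [randn(ComplexF64, D, D) for _ in 1:d]
E = sum(kron(conj(A[s]), A[s]) for s in 1:d)   # transfer matrix

# Sorted eigenvalue magnitudes; after normalizing λ₁ → 1, the subleading
# eigenvalue governs the decay C(r) ~ (λ₂/λ₁)ʳ = e^{-r/ξ}
λ = sort(abs.(eigvals(E)); rev=true)
ξ = -1 / log(λ[2] / λ[1])
```

Since the gap between \(\lambda_1\) and \(\lambda_2\) is generically finite at finite \(D\), the correlation length \(\xi\) is finite as well, which is the statement that an MPS always has exponentially decaying correlations.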
-MPSKit.FiniteMPS
To be added.
-Fixed-Point algorithms
- -In this section we introduce two algorithms for approximating the ground state of local gapped Hamiltonians using matrix product state techniques. Approximating ground states in a variational manner boils down to minimizing
-over a restricted class of states \(D\). For simplicity, we will assume the Hamiltonian under consideration has an MPO representation of the form
which can encode interactions of arbitrary range, as discussed in the previous section. In this formulation, approximating the ground state of \(H\) is equivalent to finding the MPS fixed point of the MPO Hamiltonian corresponding to the eigenvalue \(\Lambda\) with the smallest real part,
- -In the algorithms discussed below we optimize over matrix product states of a fixed finite bond dimension. In the first algorithm known as DMRG (density matrix renormalization group) the states we consider are finite MPS, whereas the second algorithm VUMPS (variational uniform matrix product state algorithm), as the name suggests, optimizes over uniform MPS. Hence, VUMPS enables direct optimization in the thermodynamic limit, without breaking translation invariance.
Our exposition of DMRG closely follows the one in [Bridgeman and Chubb, 2017], and that of VUMPS closely follows the excellent set of lecture notes [Vanderstraeten et al., 2019].
-Starting from a random MPS ansatz, DMRG tries to approximate the ground state by sequentially optimizing over all the MPS tensors one by one and sweeping through the chain, until convergence is reached. Let us discuss this algorithm in a bit more detail step by step.
-Let us consider a random ansatz, by taking random tensors \(\{A_1,A_2,...,A_L\}\), \(L\) being the number of sites. Fixing all tensors but the one at site \(i\), the local tensor \(A_i\) is updated according to
Though seemingly daunting, we can turn this problem into a simple eigenvalue problem by making full use of the mixed gauge. By bringing all tensors to the right of \(A_i\) into right-canonical form and those to the left into left-canonical form, the denominator simply becomes \(\braket{A_i|A_i}\) and the update reduces to
- - -Here the effective Hamiltonian \(\mathcal H_i\), defined as
encodes the effect of the full system Hamiltonian on the current center site \(i\). The variational problem of the local update can then be solved by finding the eigenvector of \(\mathcal{H}_i\) corresponding to the smallest real eigenvalue. This is repeated for every site while sweeping back and forth through the chain, each time moving the orthogonality center of the MPS. At each update step a large part of the effective Hamiltonian can simply be reused, making the routine very efficient. Notice however that DMRG manifestly breaks translation invariance by updating one tensor at a time. As we will see, VUMPS does not suffer from this artefact.
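The essential step of each local update can be mimicked with a dense toy problem (a sketch only; in a real DMRG code \(\mathcal H_i\) is never built as a dense matrix but applied to the center tensor as a linear map, and the extremal eigenvector is found iteratively):

```julia
using LinearAlgebra

# Toy version of the local DMRG update: the effective Hamiltonian ℋᵢ acts on the
# vectorized center tensor Aᵢ (of dimension D·d·D in an actual MPS code)
n = 48
X = randn(ComplexF64, n, n)
H_eff = Hermitian(X + X')

# Update Aᵢ to the eigenvector with the smallest eigenvalue
vals, vecs = eigen(H_eff)
λ₀, A_new = vals[1], vecs[:, 1]

# Variational principle: λ₀ lies below every Rayleigh quotient ⟨v|ℋᵢ|v⟩
v = normalize(randn(ComplexF64, n))
@assert λ₀ ≤ real(v' * H_eff * v)
```

Each such update can only lower (or keep) the energy, which is why the sweeping procedure converges monotonically.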
From this brief explanation it should be clear that DMRG is a surprisingly simple algorithm. Nevertheless, DMRG has proven itself time and time again, and it is the most successful algorithm for variationally approximating the ground state of local gapped (1+1)d Hamiltonians. DMRG is implemented in MPSKit and can be called with DMRG().
Let us illustrate the use of DMRG in MPSKit by approximating the ground state of the transverse-field Ising model. The Ising model is implemented in MPSKitModels as follows
-where we are free to choose the parameters \(J\), \(h_x\) and \(h_z\), and \(X\) and \(Z\) are the generators of \(\mathfrak{su}(2)\), and thus differ from the usual Pauli matrices by a factor of \(\frac{1}{2}\).
Let us consider 16 lattice sites, bond dimension 12, and open boundary conditions, and stick to the default critical values \(h_x=0.5\) and \(h_z=0\). Finding the ground state using DMRG then only takes a handful of iterations!
-using TensorKit, MPSKit, MPSKitModels
-
-d = 2 # Physical dimension
-L = 16 # Length spin chain
-D = 12 # Bond dimension
-
-H = transverse_field_ising()
-
-algorithm = DMRG(); # Summon DMRG
-Ψ = FiniteMPS(L, ℂ^d, ℂ^D) # Random MPS ansatz with bond dimension D
-Ψ₀,_ = find_groundstate(Ψ, H, algorithm);
-
┌ Info: DMRG iteration:
-│ iter = 1
-│ ϵ = 0.0005326848329962604
-│ λ = -20.01638790045522 - 1.69387690237017e-16im
-└ Δt = 0.646787812
-┌ Info: DMRG iteration:
-│ iter = 2
-│ ϵ = 1.3447595970721666e-7
-│ λ = -20.016387900460202 + 2.3832857257682006e-17im
-└ Δt = 0.082970472
-
┌ Info: DMRG iteration:
-│ iter = 3
-│ ϵ = 3.2547213592364735e-8
-│ λ = -20.016387900460266 - 5.933982876424318e-16im
-└ Δt = 0.022579385
-┌ Info: DMRG iteration:
-│ iter = 4
-│ ϵ = 1.2051191981316597e-8
-│ λ = -20.016387900460273 - 8.779908258550268e-17im
-└ Δt = 0.024247745
-┌ Info: DMRG iteration:
-│ iter = 5
-│ ϵ = 4.468305529377547e-9
-│ λ = -20.01638790046028 - 2.2329965386914683e-15im
-└ Δt = 0.015626563
-┌ Info: DMRG iteration:
-│ iter = 6
-│ ϵ = 1.6576613024032395e-9
-│ λ = -20.01638790046028 + 1.9513234593767897e-15im
-└ Δt = 0.018433804
-┌ Info: DMRG iteration:
-│ iter = 7
-│ ϵ = 6.154532192441363e-10
-│ λ = -20.016387900460302 + 1.923899804039612e-16im
-└ Δt = 0.013056537
-┌ Info: DMRG iteration:
-│ iter = 8
-│ ϵ = 2.2868822183759553e-10
-│ λ = -20.016387900460288 - 4.195021525164557e-17im
-└ Δt = 0.011804665
-
┌ Info: DMRG iteration:
-│ iter = 9
-│ ϵ = 8.504255231816188e-11
-│ λ = -20.016387900460266 + 9.980536216141623e-17im
-└ Δt = 0.01104208
-┌ Info: DMRG iteration:
-│ iter = 10
-│ ϵ = 3.164944418490319e-11
-│ λ = -20.016387900460277 + 3.711808572814786e-17im
-└ Δt = 0.010065283
-┌ Info: DMRG iteration:
-│ iter = 11
-│ ϵ = 1.1787761332126287e-11
-│ λ = -20.016387900460273 + 8.868584677716098e-16im
-└ Δt = 0.009222207
-┌ Info: DMRG iteration:
-│ iter = 12
-│ ϵ = 4.392374800389212e-12
-│ λ = -20.016387900460266 + 1.2190353089600007e-15im
-└ Δt = 0.008492783
-┌ Info: DMRG iteration:
-│ iter = 13
-│ ϵ = 1.6394607745955364e-12
-│ λ = -20.016387900460273 - 5.28239256777398e-16im
-└ Δt = 0.007997797
-┌ Info: DMRG iteration:
-│ iter = 14
-│ ϵ = 6.110346266128261e-13
-│ λ = -20.016387900460266 + 1.532956536656155e-16im
-└ Δt = 0.007693257
-┌ Info: DMRG summary:
-│ ϵ = 2.0e-12
-│ λ = -20.016387900460266 + 1.532956536656155e-16im
-└ Δt = 1.144352462
-
As mentioned above, VUMPS optimizes uniform MPS directly in the thermodynamic limit. Since the total energy becomes unbounded in this limit, our objective should rather be to minimize the energy density. When working in the mixed gauge, this minimization problem can be represented diagrammatically as
- -where we have introduced the left- and right fixed points \(F_L\) and \(F_R\) defined as
- -which obey the normalization condition
The VUMPS algorithm offers the advantage of global optimization by design, since the algorithm, contrary to DMRG, does not rely on individual updates of local tensors.
Given a Hamiltonian of the form mentioned above and an initial random uniform MPS defined by \(\{A_L, A_R, C\}\), VUMPS approximates the ground state by finding an approximate solution to the fixed-point equations
-A detailed derivation that these equations characterize the variational minimum in the manifold of uniform MPS is beyond the scope of these notes, but see [Vanderstraeten et al., 2019].
In these equations, the effective Hamiltonians \(H_{A_C}\) and \(H_{C}\), acting on \(A_C\) and \(C\) respectively, are given by
The last equation then simply states that \(C\) intertwines the left- and right-orthonormal forms of the tensor \(A\).
-Let us now explain step-by-step how VUMPS finds an approximate solution to the fixed-point equations in an iterative way.
We initialize the algorithm with the random guess \(\{A_L, A_R, C\}\), and choose a tolerance \(\eta\).
We first solve the first two eigenvalue equations
-using for example an Arnoldi algorithm with the previous approximations of \(A_C\) and \(C\) as initial guess. This yields two tensors \(\tilde A_C\) and \(\tilde C\).
-From \(\tilde A_C\) and \(\tilde C\) we compute \(\tilde A_L\) and \(\tilde A_R\) that minimize following two-norms
and thus approximately solve the last equation. Note that the minima are taken over left- and right-isometric matrices respectively. We comment below on the analytic solution of these equations and on how this solution can be approximated efficiently.
-Update \(A_L\leftarrow\tilde A_L\), \(A_R\leftarrow\tilde A_R\) and \(C\leftarrow\tilde C\).
Evaluate \(\epsilon=\max(\epsilon_L,\epsilon_R)\) and repeat until \(\epsilon\) is below the tolerance \(\eta\).
Let us finally comment on solving the minimization problem to approximate \(\tilde A_{L/R}\).
A beautiful result in linear algebra states that the minimum is exactly given by \(\tilde A_L=U_lV_l^\dagger\), where \(U_l\) and \(V_l\) are the isometries arising from the singular value decomposition \(\tilde A_C\tilde C^\dagger=U_l\Sigma_lV_l^\dagger\), and similarly \(\tilde A_R=U_rV_r^\dagger\), where \(\tilde C^\dagger\tilde A_C=U_r\Sigma_rV_r^\dagger\). Even though this approach works well during the first iteration steps, it might not be the best solution close to convergence. When approaching the exact solution \(A^s_C=A^s_LC=CA^s_R\), the singular values in \(\Sigma_{l/r}\) become very small, so that in finite precision arithmetic the singular vectors in the isometries \(U_{l/r}\) and \(V_{l/r}\) are poor approximations of the exact singular vectors. A robust and close to optimal solution turns out to be
-where the \(U\)’s are the unitaries appearing in the polar decomposition of
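In code, this robust update amounts to keeping only the isometric factor of a polar decomposition, which can itself be computed from an SVD. A plain-Julia sketch (arbitrary dimensions; the analogous right update follows by mirroring):

```julia
using LinearAlgebra

# Polar decomposition via the SVD: M = W P with W an isometry and P ⪰ 0
function polar(M)
    F = svd(M)
    return F.U * F.Vt, F.V * Diagonal(F.S) * F.Vt
end

D, d = 4, 2
A_C = randn(ComplexF64, D * d, D)   # center tensor, (left ⊗ physical) × right grouping
C   = randn(ComplexF64, D, D)

# Robust update: Ã_L is the isometric factor of the polar decomposition of A_C C†
A_L, _ = polar(A_C * C')
@assert A_L' * A_L ≈ Matrix(I, D, D)    # A_L is a left isometry
```

Unlike \(U_l V_l^\dagger\) from the SVD of \(\tilde A_C \tilde C^\dagger\) taken factor by factor, the polar factor is insensitive to the ordering of nearly degenerate small singular values, which is what makes it stable near convergence.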
Let us demonstrate the algorithm using MPSKit by estimating the ground state energy density of the spin-1 XXX model. The VUMPS algorithm is called in the same way as we called DMRG. We initialize a random MPS with bond dimension 12 and physical dimension 3 (because the spin-1 representation of SU(2) is \(2\cdot1+1=3\)-dimensional). Obviously we don't have to specify a system size, because we work directly in the thermodynamic limit.
-H = heisenberg_XYZ()
-
-Ψ = InfiniteMPS(ℂ^3, ℂ^D)
-algorithm = VUMPS()
-Ψ₀, envs = find_groundstate(Ψ, H, algorithm);
-
┌ Info: VUMPS iteration:
-│ iter = 1
-│ ϵ = 0.4719404564358398
-│ λ = -0.09959695122131748 - 1.3040636633559757e-16im
-└ Δt = 0.053156709
-┌ Info: VUMPS iteration:
-│ iter = 2
-│ ϵ = 0.4506152025399737
-│ λ = -0.5583932234087311 + 5.553135352688308e-17im
-└ Δt = 0.010223283
-┌ Info: VUMPS iteration:
-│ iter = 3
-│ ϵ = 0.3629368844908232
-│ λ = -0.993574501246331 + 7.795711024065943e-17im
-└ Δt = 0.024041981
-┌ Info: VUMPS iteration:
-│ iter = 4
-│ ϵ = 0.13370307456046374
-│ λ = -1.3549047523508608 + 6.284940422381764e-17im
-└ Δt = 0.007571753
-┌ Info: VUMPS iteration:
-│ iter = 5
-│ ϵ = 0.01582598186552914
-│ λ = -1.4008384693488047 - 1.9982973682606948e-16im
-└ Δt = 0.007492494
-┌ Info: VUMPS iteration:
-│ iter = 6
-│ ϵ = 0.007459679693948124
-│ λ = -1.4012988366021029 - 5.764860989061647e-17im
-└ Δt = 0.008910909
-┌ Info: VUMPS iteration:
-│ iter = 7
-│ ϵ = 0.00399154578462199
-│ λ = -1.4013561841111188 - 2.1883569012788706e-17im
-└ Δt = 0.008854833
-┌ Info: VUMPS iteration:
-│ iter = 8
-│ ϵ = 0.0011455497777277282
-│ λ = -1.4013783791455072 - 5.5190095310059743e-17im
-└ Δt = 0.016109313
-┌ Info: VUMPS iteration:
-│ iter = 9
-│ ϵ = 0.000401541135649868
-│ λ = -1.4013803528079771 - 9.954377398999163e-17im
-└ Δt = 0.006348695
-┌ Info: VUMPS iteration:
-│ iter = 10
-│ ϵ = 0.00013514788004829803
-│ λ = -1.4013806079316597 - 2.3677164203916684e-17im
-└ Δt = 0.006413757
-
┌ Info: VUMPS iteration:
-│ iter = 11
-│ ϵ = 4.793582818793851e-5
-│ λ = -1.4013806388259225 + 9.400567752639473e-17im
-└ Δt = 0.006579898
-┌ Info: VUMPS iteration:
-│ iter = 12
-│ ϵ = 1.6853849012249328e-5
-│ λ = -1.4013806428928055 + 4.663267041884612e-17im
-└ Δt = 0.006457859
-┌ Info: VUMPS iteration:
-│ iter = 13
-│ ϵ = 6.14302972102875e-6
-│ λ = -1.40138064343216 - 3.9411673633208964e-18im
-└ Δt = 0.011694586
-┌ Info: VUMPS iteration:
-│ iter = 14
-│ ϵ = 2.2191341201556632e-6
-│ λ = -1.401380643506309 - 2.71145561760739e-17im
-└ Δt = 0.006486793
-┌ Info: VUMPS iteration:
-│ iter = 15
-│ ϵ = 8.247539552814379e-7
-│ λ = -1.4013806435165632 + 6.27746085727569e-17im
-└ Δt = 0.006606467
-┌ Info: VUMPS iteration:
-│ iter = 16
-│ ϵ = 3.0381270697001354e-7
-│ λ = -1.4013806435180085 - 7.360988577709973e-17im
-└ Δt = 0.006506651
-┌ Info: VUMPS iteration:
-│ iter = 17
-│ ϵ = 1.1439909425967689e-7
-│ λ = -1.4013806435182143 + 4.740638298943028e-17im
-└ Δt = 0.006377809
-┌ Info: VUMPS iteration:
-│ iter = 18
-│ ϵ = 4.274157397995229e-8
-│ λ = -1.4013806435182454 + 8.230437511059935e-17im
-└ Δt = 0.011077852
-┌ Info: VUMPS iteration:
-│ iter = 19
-│ ϵ = 1.6237952207617785e-8
-│ λ = -1.4013806435182474 + 1.7286444964567578e-17im
-└ Δt = 0.006469751
-┌ Info: VUMPS iteration:
-│ iter = 20
-│ ϵ = 6.130775774164611e-9
-│ λ = -1.401380643518249 - 4.388913559151589e-17im
-└ Δt = 0.006485772
-┌ Info: VUMPS iteration:
-│ iter = 21
-│ ϵ = 2.3439697211408005e-9
-│ λ = -1.4013806435182496 + 9.076610585685075e-17im
-└ Δt = 0.006400072
-┌ Info: VUMPS iteration:
-│ iter = 22
-│ ϵ = 8.921580512534823e-10
-│ λ = -1.4013806435182496 - 3.543113964500985e-18im
-└ Δt = 0.007674595
-┌ Info: VUMPS iteration:
-│ iter = 23
-│ ϵ = 3.427344780625515e-10
-│ λ = -1.401380643518248 + 1.6912865276329293e-17im
-└ Δt = 0.011840929
-
┌ Info: VUMPS iteration:
-│ iter = 24
-│ ϵ = 1.3129414037177357e-10
-│ λ = -1.4013806435182494 - 2.264590234517779e-18im
-└ Δt = 0.006903282
-┌ Info: VUMPS iteration:
-│ iter = 25
-│ ϵ = 5.0630289785976875e-11
-│ λ = -1.401380643518249 - 1.3308398136231881e-17im
-└ Δt = 0.006563748
-┌ Info: VUMPS iteration:
-│ iter = 26
-│ ϵ = 1.949807197019253e-11
-│ λ = -1.401380643518248 + 5.248240045237138e-17im
-└ Δt = 0.006484149
-┌ Info: VUMPS iteration:
-│ iter = 27
-│ ϵ = 7.542652840093537e-12
-│ λ = -1.4013806435182508 + 7.973535265547894e-17im
-└ Δt = 0.006532749
-┌ Info: VUMPS iteration:
-│ iter = 28
-│ ϵ = 2.9171689579777305e-12
-│ λ = -1.4013806435182499 - 5.080907190674386e-18im
-└ Δt = 0.010962064
-┌ Info: VUMPS iteration:
-│ iter = 29
-│ ϵ = 1.1310794158535638e-12
-│ λ = -1.4013806435182494 + 5.464103894193135e-17im
-└ Δt = 0.005739366
-┌ Info: VUMPS iteration:
-│ iter = 30
-│ ϵ = 4.3952525290723206e-13
-│ λ = -1.401380643518249 - 1.8882969228324614e-17im
-└ Δt = 0.005644267
-┌ Info: VUMPS summary:
-│ ϵ = 4.3952525290723206e-13
-│ λ = -1.401380643518249 - 1.8882969228324614e-17im
-└ Δt = 1.166622834
-
It takes about 30 iterations and a second or two to reach convergence. Let us gauge how well the ground state energy density was approximated by calling
-expectation_value(Ψ₀, H)
-
1-element PeriodicArray{ComplexF64, 1}:
- -1.4013806435182494 - 3.681340623886102e-17im
-
The value we obtain here is to be compared with the quasi-exact value -1.401 484 038 971 2(2) obtained in [Haegeman et al., 2011]. As you can see, even with such a small bond dimension we can easily approximate the ground state energy up to 3 decimals.
-Time Evolution
In this segment of the tutorial, we delve into some time evolution techniques for MPS. In particular, we will focus on the TDVP and time evolution MPO methods. Another method, (i)TEBD, has already been explained in an earlier section. Following this, we briefly explain how imaginary time evolution can be used to find ground states and how thermal density matrices can be simulated using MPS. At the end of this section we offer some basic code examples.
In the case of quantum many-body systems, time evolution amounts to solving the time-dependent Schrödinger equation
-for a given Hamiltonian \(\hat{H}\) with initial condition \(\ket{\Psi_0}=\ket{\Psi(t_0)}\). For a time independent Hamiltonian the solution is given by
-By approximating the time evolution operator \(U\) in the Tensor Network language, we can also study real-time dynamics.
-Note
One should keep in mind that time evolution in general increases the entanglement of the state, so that in practice time evolution can only be done with tensor networks for relatively modest times. For example, in the case of a quench, the entanglement of 1D systems grows as \(S \sim t\) (see [Calabrese and Cardy, 2005]), so that the bond dimension must grow as \(D \sim \exp(at)\) in order to accurately follow the dynamics.
-Contents
The time-dependent variational principle (TDVP) is an old concept, originally developed by Dirac and Frenkel in the 1930s. The idea is to solve the Schrödinger equation by minimizing
In the case of MPS, we can parametrize the state \(\ket{\Psi(t)}\) by a set of time-dependent matrices \(\{A_1(t),A_2(t),\dots A_N(t)\}\) (where \(N\) is the system size for finite MPS or the size of the unit cell for infinite MPS). In other words, the state \(\ket{\Psi(t)}\) lives in a manifold determined by these matrices, the MPS manifold. Geometrically, the solution of the minimization problem is given by the projection of the right-hand side of the Schrödinger equation onto the MPS manifold
where \(\hat{P}_{T\ket{\Psi(A)}}\) is the operator that projects the state onto the tangent space. As a consequence, the time-evolving state never leaves the MPS manifold, and the parametrization in terms of \(A(t)\) makes sense. One can in principle work out the above equation at the level of the \(A\) matrices and try to solve it directly. This gives a complicated set of (non-linear) equations that can be solved by one's favourite finite difference scheme, but it requires the inversion of matrices with small singular values (and thus suffers numerical instabilities) [Haegeman et al., 2011], [Haegeman et al., 2016]. Instead, it turns out that a natural and inversion-free way of solving this equation is possible if we use the gauge freedom of MPS.
-For a finite MPS, one can show that in the mixed gauge the action of the projection operator onto \(\hat{H}\ket{\Psi(A)}\) is given by [Vanderstraeten et al., 2019]
The projector action consists of two sums: one where an effective Hamiltonian \(\hat{H}_{\text{eff}}^{A_C}\) acts on the \(A_C\) on site \(n\), and one where \(\hat{H}_{\text{eff}}^{C}\) acts on the bond tensor \(C\) to the right of it. The effective Hamiltonians are given by
- -Thanks to the decomposition of \(\hat{P}_{T\ket{\Psi(A)}} \hat{H}\ket{\Psi(A)}\) (17.2) now resembles an ODE of the form
This type of ODE can be solved by a splitting method [Lubich et al., 2015], i.e. we solve \(\frac{d}{dt} Y = A(Y)\) and \(\frac{d}{dt} Y = B(Y)\) separately and then combine the two results to obtain an approximate solution to (17.3). Applying this idea to (17.2), we thus need to solve equations of the form
-and
-These can be further simplified by noting that we can put all the time dependence inside one tensor, which we choose to be either \(A_C(n)\) or \(C(n)\). It is then sufficient to solve
-and
for each site \(n\) separately. These can be integrated exactly to give
-and
A natural way to combine the separate solutions is to perform a sweep-like update. Starting from the first site we do:
-TDVP algorithm for finite MPS
- -At the end of the chain one only updates the \(A_C\) since there is no \(C\) there.
Doing the above left-to-right sweep gives a first order integrator, i.e. we have solved the time evolution up to errors of order \(\mathcal{O}(dt^2)\). Since the terms can be solved in any order, we can also perform a reverse sweep, i.e. working from right to left. Combining this with the left-to-right sweep yields a second order integrator (because the reverse sweep is the adjoint of the forward sweep).
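The orders quoted here can be checked on a toy two-matrix splitting problem (the matrices are arbitrary non-commuting choices, unrelated to any particular Hamiltonian):

```julia
using LinearAlgebra

# Splitting e^{τ(A+B)} into e^{τA} e^{τB}: the plain (one-direction) splitting
# carries an O(τ²) error per step, the symmetrized splitting an O(τ³) error
A = [0.0 1.0; -1.0 0.0]
B = [0.0 0.0; 1.0 0.0]

err(τ)  = opnorm(exp(τ * (A + B)) - exp(τ * A) * exp(τ * B))
errS(τ) = opnorm(exp(τ * (A + B)) - exp(τ / 2 * A) * exp(τ * B) * exp(τ / 2 * A))

# Halving τ reduces the error by ≈ 4 (first order) resp. ≈ 8 (second order)
@assert isapprox(err(0.02) / err(0.01), 4; atol=0.5)
@assert isapprox(errS(0.02) / errS(0.01), 8; atol=1.0)
```

The symmetrized product here plays the same role as the forward-plus-reverse TDVP sweep: composing a step with its adjoint cancels the leading error term.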
For an infinite MPS one could also do a sweep-like update until some convergence criterion is met, to obtain new tensors \(\{A_L,C,A_C,A_R\}\). However, this can be costly since one has to iterate until convergence. Instead, we can exploit the translational invariance of the system by demanding that \(C=\tilde{C}\). Since \(\tilde{C}=\exp(idt \hat{H}_{\text{eff}}^{C}) C(n,t+dt)\), we can turn things around and find
Given the newly found \(C(n,t+dt)\) and \(A_C(n,t+dt)\), one can determine a new \(A_L\), giving an MPS at time \(t+dt\).
-Note
Unlike other time evolution methods, TDVP retains some of the physical properties of the Schrödinger equation it is trying to solve. First of all, it acts trivially on (numerical) eigenstates of \(\hat{H}\), since then \(\hat{H}_{\text{eff}}^{A_C}[A_C] \propto A_C\) and the whole MPS picks up a phase equal to \(e^{-i dt E}\). In addition, it conserves energy and is time-reversible for time-independent Hamiltonians [Vanderstraeten et al., 2019].
Perhaps the most natural way to perform the time evolution would be to write the time evolution operator as an MPO. The evolved state would then simply be the contraction of this MPO with an MPS (see ref). The exponential implementing the time evolution can be approximated up to any order by its truncated Taylor series
The MPO approximation of the time evolution operator then boils down to implementing powers of \(\hat{H}\) in an efficient (i.e. with the lowest possible MPO bond dimension) and size-extensive way [Damme et al., 2023]. For example, for an MPO Hamiltonian of the form
- -which corresponds to the Hamiltonian
-The first order approximation of \(U=\exp(-\tau\hat{H})\) is given in MPO form by
Doing the matrix multiplication (and remembering that the boundary conditions of the MPO are such that we need to track the upper-left entry) we find
-as desired.
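The accuracy of the truncated Taylor series can be illustrated on a small dense example (a two-site transverse-field Ising matrix, chosen purely for illustration):

```julia
using LinearAlgebra

# Toy Hamiltonian: H = -Z₁Z₂ - 0.5 (X₁ + X₂) on two sites
X = [0.0 1.0; 1.0 0.0]
Z = [1.0 0.0; 0.0 -1.0]
I2 = Matrix(1.0I, 2, 2)
H = -kron(Z, Z) - 0.5 * (kron(X, I2) + kron(I2, X))

# First order truncation U₁ = 1 - τH deviates from e^{-τH} at order τ²,
# second order truncation U₂ = 1 - τH + τ²H²/2 at order τ³
err(τ)  = opnorm(exp(-τ * H) - (I - τ * H))
err2(τ) = opnorm(exp(-τ * H) - (I - τ * H + τ^2 * H^2 / 2))

# Halving τ reduces the errors by ≈ 4 and ≈ 8 respectively
@assert isapprox(err(0.02) / err(0.01), 4; atol=0.5)
@assert isapprox(err2(0.02) / err2(0.01), 8; atol=1.0)
```

In an MPO implementation the matrices \(1\), \(H\), \(H^2\), … are of course never formed densely; the construction above encodes the same truncation directly in the MPO tensors.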
-The trick for generating the first order approximation involves removing the third “level” from the MPO form of H and multiplying with the appropriate factor of τ. This can be visualised as follows
- -This method can be extended to any desired order in \(\tau\) as outlined in [Damme et al., 2023].
Besides simulating dynamics, any time evolution method can also be used to find the ground state of \(\hat{H}\) by taking \(t\) to be imaginary. The basis for this idea is the fact that
-where \(\ket{\Psi}\) is any initial state not orthogonal to the ground state. Indeed expanding the initial state in the eigenbasis of \(\hat{H}\) we have \(\ket{\Psi} = \sum_i c_i \ket{E_i}\) with \(\ket{E_i}\) the eigenstate corresponding to energy \(E_i\) with ordering \(E_0 < E_1 < E_2 < \dots\). Then
When taking the limit \(\tau\to+\infty\), the slowest-decaying exponential is that of \(E_0\). In this way the ground state gets projected out of the initial state. Demanding that the state stays normalized gives
-which gives the ground state up to an irrelevant phase factor.
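This projection mechanism is easy to demonstrate on a small dense Hamiltonian (here a two-site transverse-field Ising matrix, chosen for illustration):

```julia
using LinearAlgebra

# Toy Hamiltonian: H = -Z₁Z₂ - 0.5 (X₁ + X₂) on two sites
X = [0.0 1.0; 1.0 0.0]
Z = [1.0 0.0; 0.0 -1.0]
I2 = Matrix(1.0I, 2, 2)
H = -kron(Z, Z) - 0.5 * (kron(X, I2) + kron(I2, X))

E, V = eigen(Symmetric(H))
gs = V[:, 1]                          # exact, non-degenerate ground state

ψ = normalize(ones(4))                # initial state with ⟨E₀|ψ⟩ ≠ 0
ψτ = normalize(exp(-20.0 * H) * ψ)    # e^{-τH}|ψ⟩ / ‖e^{-τH}|ψ⟩‖ with τ = 20

@assert abs(dot(gs, ψτ)) > 1 - 1e-8   # projected onto the ground state, up to a phase
```

The excited-state components are suppressed as \(e^{-\tau(E_i - E_0)}\), so the convergence rate is set by the energy gap.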
It is possible to use time evolution methods to construct thermal density operators, i.e. \(\rho = \frac{1}{Z}e^{-\beta \hat{H}}\) with \(\beta=1/T\) and \(Z\) a normalization constant. The idea here is to write \(\rho\) as an MPO
- -with the constraint that
Here the triangles represent an isometry that fuses the two legs together into a bigger leg. This particular form ensures that \(\rho\) is a positive semi-definite operator and thus physical [Verstraete et al., 2004]. Note that for \(d_k=1\) we obtain the density matrix of a pure state. We can represent \(\rho\) as the density matrix of a pure state (i.e. an MPS) by introducing ancillas \(\{\ket{a_k}\}\) so that
- -where the thicker physical legs indicate that they contain both the \(s\) and \(a\) degrees of freedom. One immediately sees that \(\rho=\text{Tr}_a({\ket{\Psi}\bra{\Psi}})\). The thermal density operators \(\rho(\beta)\) for any \(\beta\) can then be found by starting from the \(\beta=0\) state \(\rho(0)=\mathbf{1}\) and performing imaginary time evolution
-with \(\Delta \tau = \frac{\beta}{2M}\) [Verstraete et al., 2004].
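The purification idea can be verified exactly on a single site (a dense toy example; in the MPS setting the same evolution is applied to the purified tensor network):

```julia
using LinearAlgebra

# Start from maximally entangled pairs (β = 0, so ρ ∝ 𝟙), evolve the physical
# leg in imaginary time, and trace out the ancilla to recover ρ(β) = e^{-βH}/Z
n, β = 4, 0.7
A = randn(n, n)
H = Symmetric(A + A')                 # random real Hamiltonian

Φ = vec(Matrix(1.0I, n, n))           # Σⱼ |j⟩ ⊗ |j⟩ (unnormalized)
Ψ = kron(exp(-β * Matrix(H) / 2), Matrix(1.0I, n, n)) * Φ
Ψ ./= norm(Ψ)                         # purified thermal state |Ψ(β)⟩

Ψmat = reshape(Ψ, n, n)               # (ancilla, physical) amplitudes
ρ = Ψmat' * Ψmat                      # reduced density matrix of the system

ρ_exact = exp(-β * Matrix(H))
ρ_exact ./= tr(ρ_exact)
@assert ρ ≈ ρ_exact
```

Note that only \(e^{-\beta H/2}\) acts on the purification: the other half of \(e^{-\beta H}\) is generated automatically when forming \(\ket{\Psi}\bra{\Psi}\), which is the reason for the step \(\Delta\tau = \beta/2M\) above.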
-MPSKit.timestep,make_time_mpo
Below is some code showing how MPSKit and MPSKitModels can be used out-of-the-box to perform time evolution.
-using TensorKit,MPSKit,MPSKitModels
-using Plots
-
H₀ = transverse_field_ising(;J=1.0,g=0.0);
-
-#Create a random MPS with physical bond dimension d=2 and virtual D=10 and optimize it
-Ψ = InfiniteMPS([2],[10]);
-(gs,envs) = find_groundstate(Ψ,H₀,VUMPS(;verbose=false));
-
-#Let's check some expectation values
-sz_gs = expectation_value(gs,σᶻ()) # we have found the |↑↑...↑> or |↓↓...↓> state
-E_gs = expectation_value(gs,H₀,envs)
-
-# time evolution Hamiltonian
-Ht = transverse_field_ising(;J=1.0,g=0.25);
-Ebefore = real(expectation_value(gs,Ht)[1])
-dt = 0.1
-
-# Let's do one time step with TDVP
-alg = TDVP();
-envs = environments(gs,Ht);
-(Ψt,envs) = timestep(gs,Ht,dt,alg,envs);
-szt_tdvp = real(expectation_value(Ψt,σᶻ())[1]);
-Et_tdvp = real(expectation_value(Ψt,Ht,envs)[1]);
-
-# let's make a first order time evolution mpo out of Ht
-Ht_mpo = make_time_mpo(Ht, dt, TaylorCluster{1}());
-
-(Ψt,_) = approximate(gs, (Ht_mpo, gs), VUMPS(; verbose=false));
-szt_tmpo = real(expectation_value(Ψt,σᶻ())[1]);
-Et_tmpo = real(expectation_value(Ψt,Ht)[1]);
-
-@show szt_tdvp-szt_tmpo
-@show Ebefore-Et_tmpo
-@show Ebefore-Et_tdvp;
-
szt_tdvp - szt_tmpo = 2.0289947257001728e-5
-Ebefore - Et_tmpo = 1.284094658693391e-5
-Ebefore - Et_tdvp = -2.21243089626455e-7
-
We see that \(\langle σᶻ(t) \rangle\) for both methods after one timestep is reasonably close, but that the energy (density) is conserved to better precision by the TDVP method. If we were to do the time evolution for many timesteps and for different orders of the time evolution MPO, we would end up with the following plot
We clearly see that increasing the order of the time evolution MPO improves the result.
-# We can also find the groundstate using imaginary time evolution
-H = transverse_field_ising(;J=1.0,g=0.35);
-
-# Here we will do 1 iteration and see that the energy has dropped
-Ψ = InfiniteMPS([2],[10]);
-Ψenv = environments(Ψ,H) ;
-Ebefore = real(expectation_value(Ψ,H,envs)[1]);
-(Ψ,Ψenv) = timestep(Ψ,H,-1im*dt,TDVP(),Ψenv);
-Eafter = real(expectation_value(Ψ,H,Ψenv)[1]);
-
-Eafter < Ebefore
-
true
-
If we were to do this for many iterations, the energy of the evolved state would eventually reach that of the ground state, as shown below.
-Finite Entanglement Scaling
- -A Symmetric Tensor Deep Dive: Constructing Your First Tensor Map
- -In this tutorial, we will demonstrate how to construct specific TensorMap
s which are
-relevant to some common physical systems, with an increasing degree of complexity. We will
-assume the reader has gone through the tutorial sections on
-tensor network theory and
-symmetries in tensor networks. In going through these examples we aim
to provide a relatively gentle introduction to the meaning of
-symmetry sectors and
-vector spaces within the
-context of TensorKit.jl,
-how to initialize a TensorMap
over a given vector space
-and finally how to manually set the data of a symmetric TensorMap
. We will keep our
-discussion as intuitive and simple as possible, only adding as many technical details as
-strictly necessary to understand each example. When considering a different physical system
of interest, you should then be able to adapt these recipes and the intuition behind them
-to your specific problem at hand.
Note
-Many of these examples are already implemented in the MPSKitModels.jl package, in which case we basically provide a narrated walk-through of the corresponding code.
-using LinearAlgebra
-using TensorKit
-using MPSKitModels
-using WignerSymbols
-using SUNRepresentations
-using Test # for showcase testing
-
As the most basic example, we will consider the 1-dimensional transverse-field Ising model, whose Hamiltonian is given by
-Here, \(X_i\) and \(Z_i\) are the Pauli operators acting on site \(i\), and the first sum runs over pairs of nearest neighbors \(\langle i, j \rangle\). This model has a global \(\mathbb{Z}_2\) symmetry, as it is invariant under the transformation \(U H U^\dagger = H\), where the symmetry transformation \(U\) is given by a global spin flip,
-We will circle back to the implications of this symmetry later.
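This invariance can be checked directly with plain Julia arrays (a quick sanity check, independent of the tensor-network machinery introduced below):

```julia
using LinearAlgebra

# Pauli matrices as plain arrays
X = ComplexF64[0 1; 1 0]
Z = ComplexF64[1 0; 0 -1]
id2 = Matrix{ComplexF64}(I, 2, 2)

# global spin flip on two sites: U = X ⊗ X
U = kron(X, X)

# two-site Hamiltonian terms
ZZ = kron(Z, Z)
Xsum = kron(X, id2) + kron(id2, X)

# U H U† = H holds term by term, since X Z X = -Z flips the sign twice in ZZ
@show U * ZZ * U' ≈ ZZ      # true
@show U * Xsum * U' ≈ Xsum  # true
```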
-As a warmup, we will implement the Hamiltonian (18.1) in the standard way, by encoding the matrix elements of the single-site operators \(X\) and \(Z\) into an array of complex numbers and then combining them in a suitable way to get the Hamiltonian terms. Instead of using plain Julia arrays, we will use a representation in terms of TensorMaps over complex vector spaces. These will essentially just be wrappers around base arrays at this point, but their construction requires some consideration of the notion of spaces, which generalize the notion of size for arrays. Each of the operators \(X\) and \(Z\) acts on a local 2-dimensional complex vector space. In the context of TensorKit.jl such a space can be represented as ComplexSpace(2), or using the convenient shorthand ℂ^2. A single-site Pauli operator maps from a domain physical space to a codomain physical space, and can therefore be represented as an instance of TensorMap(..., ℂ^2 ← ℂ^2). The corresponding data can then be filled in by hand according to the familiar Pauli matrices in the following way:
# initialize numerical data for Pauli matrices
-x_mat = ComplexF64[0 1; 1 0]
-z_mat = ComplexF64[1 0; 0 -1]
-
-# construct physical Hilbert space
-V = ℂ^2
-
-# construct the physical operators as TensorMaps
-X = TensorMap(x_mat, V ← V)
-Z = TensorMap(z_mat, V ← V)
-
-# combine single-site operators into two-site operator
-ZZ = Z ⊗ Z
-
We can easily verify that our operators have the desired form by checking their data in the computational basis:
-ZZ
-
TensorMap((ℂ^2 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^2)):
-[:, :, 1, 1] =
- 1.0 + 0.0im 0.0 + 0.0im
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- -1.0 + 0.0im -0.0 + 0.0im
-
-[:, :, 1, 2] =
- 0.0 + 0.0im -1.0 + 0.0im
- 0.0 + 0.0im -0.0 + 0.0im
-
-[:, :, 2, 2] =
- 0.0 + 0.0im -0.0 + 0.0im
- -0.0 + 0.0im 1.0 - 0.0im
-
X
-
TensorMap(ℂ^2 ← ℂ^2):
- 0.0 + 0.0im 1.0 + 0.0im
- 1.0 + 0.0im 0.0 + 0.0im
-
Note
-In order to combine these local operators into a concrete Hamiltonian that can be used in MPSKit.jl, we can make use of the convenient @mpoham macro exported by MPSKitModels.jl. For an infinite translation-invariant Ising chain, we can use the following piece of code, which produces the Hamiltonian as an MPOHamiltonian (see MPSKit.jl for details on this format).
lattice = InfiniteChain(1)
-H = @mpoham begin
- sum(nearest_neighbours(lattice)) do (i, j)
- return ZZ{i,j}
- end + sum(vertices(lattice)) do i
- return X{i}
- end
-end
-
MPSKit.MPOHamiltonian{ComplexSpace, TrivialTensorMap{ComplexSpace, 2, 2, Matrix{ComplexF64}}, ComplexF64}(MPSKit.SparseMPOSlice{ComplexSpace, TrivialTensorMap{ComplexSpace, 2, 2, Matrix{ComplexF64}}, ComplexF64}[[TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 1.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 1.0 + 0.0im
- TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- -1.4142135623730945 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 1.414213562373095 + 0.0im
- TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 1.0 + 0.0im
-
-[:, :, 2, 1] =
- 1.0 + 0.0im 0.0 + 0.0im
-; TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- -0.7071067811865476 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.7071067811865475 + 0.0im
-; TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 0.0 + 0.0im
- TensorMap((ℂ^1 ⊗ ℂ^2) ← (ℂ^2 ⊗ ℂ^1)):
-[:, :, 1, 1] =
- 1.0 + 0.0im 0.0 + 0.0im
-
-[:, :, 2, 1] =
- 0.0 + 0.0im 1.0 + 0.0im
-]])
-
-Let us now return to the global \(\mathbb{Z}_2\) invariance of the Hamiltonian (18.1), and consider what this implies for its local terms \(ZZ\) and \(X\). Representing these operators as TensorMaps, the invariance of \(H\) under a global \(\mathbb{Z}_2\) transformation implies the following identities for the local tensors:
Recalling the discussion on symmetries in tensor networks, we recognize that these identities precisely mean that these local tensors transform trivially under a tensor product representation of \(\mathbb{Z}_2\). This implies that, in an appropriate basis for the local physical vector space, our local tensors would become block-diagonal, where each so-called matrix block is labeled by a \(\mathbb{Z}_2\) irrep. From the same discussion, we recall that the appropriate local basis transformation is precisely the one that brings the local representation \(X\) into block-diagonal form. Clearly, this transformation is nothing more than the Hadamard transformation, which maps the computational basis of \(Z\) eigenstates \(\{\ket{\uparrow}, \ket{\downarrow}\}\) to that of the \(X\) eigenstates \(\{\ket{+}, \ket{-}\}\), defined as \(\ket{+} = \frac{\ket{\uparrow} + \ket{\downarrow}}{\sqrt{2}}\) and \(\ket{-} = \frac{\ket{\uparrow} - \ket{\downarrow}}{\sqrt{2}}\). In the current context, this basis is referred to as the irrep basis of \(\mathbb{Z}_2\), where the local basis state \(\ket{+}\) corresponds to the trivial representation of \(\mathbb{Z}_2\) while \(\ket{-}\) corresponds to the sign representation.
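As a small matrix-level check: conjugating with the Hadamard matrix indeed diagonalizes \(X\), with the \(+1\) (trivial irrep) and \(-1\) (sign irrep) eigenvalues on the diagonal.

```julia
using LinearAlgebra

# The Hadamard matrix maps the Z eigenbasis {|↑⟩, |↓⟩} to the X eigenbasis
# {|+⟩, |−⟩}, bringing the symmetry generator X into diagonal form.
Had = ComplexF64[1 1; 1 -1] / sqrt(2)
X = ComplexF64[0 1; 1 0]

@show Had * X * Had' ≈ ComplexF64[1 0; 0 -1]   # true
```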
-Next, let’s make the statement that ‘the matrix blocks of the local tensors are labeled by \(\mathbb{Z}_2\) irreps’ more concrete. To this end, consider the action of \(ZZ\) in the irrep basis, which is given by the four nonzero matrix elements
-If we denote the trivial \(\mathbb{Z}_2\) irrep by \('0'\), corresponding to a local \(\ket{+}\) state, and the sign irrep by \('1'\), corresponding to a local \(\ket{-}\) state, and recall that in this notation the fusion rules of \(\mathbb{Z}_2\) are given by addition modulo 2, we can associate each of the above matrix elements to a so-called fusion tree of \(\mathbb{Z}_2\) irreps with a corresponding coefficient of 1,
-From this we can observe our previous statement very clearly: the \(ZZ\) operator indeed consists of two distinct two-dimensional matrix blocks, each of which is labeled by the value of the coupled irrep on the middle line of each fusion tree. The first block corresponds to the even coupled irrep ‘0’, and acts within the two-dimensional subspace spanned by \(\{\ket{+,+}, \ket{-,-}\}\), while the second block corresponds to the odd coupled irrep ‘1’, and acts within the two-dimensional subspace spanned by \(\{\ket{+,-}, \ket{-,+}\}\). In TensorKit.jl, this block-diagonal structure of a symmetric tensor is explicitly encoded into its representation as a TensorMap, where only the matrix blocks corresponding to each coupled irrep are stored.
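The irrep-flipping structure of these blocks can also be seen at the matrix level: under the two-site Hadamard transformation, \(Z\) turns into \(X\), so in the irrep basis \(ZZ\) acts as \(X \otimes X\), flipping both local irrep labels while conserving the coupled \(\mathbb{Z}_2\) charge.

```julia
using LinearAlgebra

# In the irrep (Hadamard) basis Z becomes X, so ZZ acts as X ⊗ X:
# it flips the irrep label on both sites, preserving the total Z2 charge.
Had = ComplexF64[1 1; 1 -1] / sqrt(2)
X = ComplexF64[0 1; 1 0]
Z = ComplexF64[1 0; 0 -1]

H2 = kron(Had, Had)
@show H2 * kron(Z, Z) * H2' ≈ kron(X, X)   # true
```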
For our current purposes, however, we never really need to explicitly consider these matrix blocks. Indeed, when constructing a TensorMap it is sufficient to set its data by manually assigning a matrix element to each fusion tree of the form above, labeled by a given tensor product of irreps. This matrix element is then automatically inserted into the appropriate matrix block. So, for the purpose of this tutorial we will interpret a symmetric TensorMap simply as a list of fusion trees, to each of which corresponds a certain reduced matrix element.
Note
-In general, such a reduced matrix element is not necessarily a scalar, but rather an array whose size is determined by the degeneracy of the irreps in the codomain and domain of the fusion tree. For this reason, a reduced matrix element associated to a given fusion tree is also referred to as an array block. In the following we will use the terms ‘reduced matrix element’, ‘array block’ or just ‘block’ interchangeably. However, it should be remembered that these are distinct from the matrix blocks in the block-diagonal decomposition of the tensor.
-This view of the underlying symmetry structure in terms of fusion trees and corresponding array blocks is a very convenient way of working with the TensorMap type. Consider a generic fusion tree of the form
which can be used to label a block of a TensorMap corresponding to a two-site operator. This object should actually be seen as a pair of fusion trees. The first member of the pair, related to the codomain of the TensorMap, is referred to as the splitting tree, and encodes how the coupled charge \(c\) splits into the uncoupled charges \(s_1\) and \(s_2\). The second member of the pair, related to the domain of the TensorMap, is referred to as the fusion tree, and encodes how the uncoupled charges \(f_1\) and \(f_2\) fuse to the coupled charge \(c\). Both the splitting and fusion tree can be represented as a TensorKit.FusionTree instance. You will find such a FusionTree has the following properties encoded into its fields:
uncoupled::NTuple{N,I}: a list of N uncoupled charges of type I<:Sector
coupled::I: a single coupled charge of type I<:Sector
isdual::NTuple{N,Bool}: a list of booleans indicating whether the corresponding uncoupled charge is dual
innerlines::NTuple{M,I}: a list of inner lines of type I<:Sector, of length M = N - 2
vertices::NTuple{L,T}: a list of fusion vertex labels of type T, of length L = N - 1
For our current application only uncoupled and coupled are relevant, since \(\mathbb{Z}_2\) irreps are self-dual and have Abelian fusion rules. We will come back to the other properties when discussing more involved applications. Given some TensorMap, the method TensorKit.fusiontrees(t::TensorMap) returns an iterator over all pairs of splitting and fusion trees that label the blocks of t.
We can now put this into practice by directly constructing the \(ZZ\) operator in the irrep basis as a \(\mathbb{Z}_2\)-symmetric TensorMap. We will do this in three steps:
First, we construct the physical space at each site as a \(\mathbb{Z}_2\)-graded vector space.
Then, we initialize an empty TensorMap with the correct domain and codomain vector spaces built from the previously constructed physical space.
Finally, we iterate over all splitting and fusion tree pairs and manually fill in the corresponding nonzero blocks of the operator.
After the basis transformation to the irrep basis, we can view the two-dimensional complex physical vector space we started with as being spanned by the trivial and sign irreps of \(\mathbb{Z}_2\). In the language of TensorKit.jl, this can be implemented as a Z2Space, an alias for a \(\mathbb{Z}_2\)-graded vector space Vect[Z2Irrep], which contains the trivial irrep Z2Irrep(0) with degeneracy 1 and the sign irrep Z2Irrep(1) with degeneracy 1. We can define this space in the following way and check its dimension:
V = Z2Space(0 => 1, 1 => 1)
-dim(V)
-
2
-
Given this physical space, we can initialize the \(ZZ\) operator as an empty TensorMap with the appropriate structure.
ZZ = TensorMap(zeros, ComplexF64, V ⊗ V ← V ⊗ V)
-
TensorMap((Rep[ℤ₂](0=>1, 1=>1) ⊗ Rep[ℤ₂](0=>1, 1=>1)) ← (Rep[ℤ₂](0=>1, 1=>1) ⊗ Rep[ℤ₂](0=>1, 1=>1))):
-* Data for sector (Irrep[ℤ₂](0), Irrep[ℤ₂](0)) ← (Irrep[ℤ₂](0), Irrep[ℤ₂](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[ℤ₂](1), Irrep[ℤ₂](1)) ← (Irrep[ℤ₂](0), Irrep[ℤ₂](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[ℤ₂](0), Irrep[ℤ₂](0)) ← (Irrep[ℤ₂](1), Irrep[ℤ₂](1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[ℤ₂](1), Irrep[ℤ₂](1)) ← (Irrep[ℤ₂](1), Irrep[ℤ₂](1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[ℤ₂](1), Irrep[ℤ₂](0)) ← (Irrep[ℤ₂](1), Irrep[ℤ₂](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[ℤ₂](0), Irrep[ℤ₂](1)) ← (Irrep[ℤ₂](1), Irrep[ℤ₂](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[ℤ₂](1), Irrep[ℤ₂](0)) ← (Irrep[ℤ₂](0), Irrep[ℤ₂](1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[ℤ₂](0), Irrep[ℤ₂](1)) ← (Irrep[ℤ₂](0), Irrep[ℤ₂](1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-
The output of this command again demonstrates the underlying structure of a symmetric tensor. We see that all eight valid fusion trees with two incoming irreps and two outgoing irreps of the type above are listed with their corresponding block data. Each of these blocks is an array of shape \((1, 1, 1, 1)\), since each irrep occurring in the space \(V\) has degeneracy 1. Using the fusiontrees method and the fact that we can index a TensorMap using a splitting/fusion tree pair, we can now fill in the nonzero blocks of the operator by observing that the \(ZZ\) operator flips the irreps of the uncoupled charges in the domain with respect to the codomain, as shown in the diagrams above. Flipping a given Z2Irrep can be implemented by fusing it with the sign irrep Z2Irrep(1), giving:
flip_charge(charge::Z2Irrep) = only(charge ⊗ Z2Irrep(1))
-for (s, f) in fusiontrees(ZZ)
- if s.uncoupled == map(flip_charge, f.uncoupled)
- ZZ[s, f] .= 1
- end
-end
-ZZ
-
TensorMap((Rep[ℤ₂](0=>1, 1=>1) ⊗ Rep[ℤ₂](0=>1, 1=>1)) ← (Rep[ℤ₂](0=>1, 1=>1) ⊗ Rep[ℤ₂](0=>1, 1=>1))):
-* Data for sector (Irrep[ℤ₂](0), Irrep[ℤ₂](0)) ← (Irrep[ℤ₂](0), Irrep[ℤ₂](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[ℤ₂](1), Irrep[ℤ₂](1)) ← (Irrep[ℤ₂](0), Irrep[ℤ₂](0)):
-[:, :, 1, 1] =
- 1.0 + 0.0im
-* Data for sector (Irrep[ℤ₂](0), Irrep[ℤ₂](0)) ← (Irrep[ℤ₂](1), Irrep[ℤ₂](1)):
-[:, :, 1, 1] =
- 1.0 + 0.0im
-* Data for sector (Irrep[ℤ₂](1), Irrep[ℤ₂](1)) ← (Irrep[ℤ₂](1), Irrep[ℤ₂](1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[ℤ₂](1), Irrep[ℤ₂](0)) ← (Irrep[ℤ₂](1), Irrep[ℤ₂](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[ℤ₂](0), Irrep[ℤ₂](1)) ← (Irrep[ℤ₂](1), Irrep[ℤ₂](0)):
-[:, :, 1, 1] =
- 1.0 + 0.0im
-* Data for sector (Irrep[ℤ₂](1), Irrep[ℤ₂](0)) ← (Irrep[ℤ₂](0), Irrep[ℤ₂](1)):
-[:, :, 1, 1] =
- 1.0 + 0.0im
-* Data for sector (Irrep[ℤ₂](0), Irrep[ℤ₂](1)) ← (Irrep[ℤ₂](0), Irrep[ℤ₂](1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-
Indeed, the resulting TensorMap exactly encodes the matrix elements of the \(ZZ\) operator shown in the diagrams above. The \(X\) operator can be constructed in a similar way. Since it is by definition diagonal in the irrep basis, with blocks directly corresponding to the trivial and sign irreps, its construction is particularly simple:
X = TensorMap(zeros, ComplexF64, V ← V)
-for (s, f) in fusiontrees(X)
- if only(f.uncoupled) == Z2Irrep(0)
- X[s, f] .= 1
- else
- X[s, f] .= -1
- end
-end
-X
-
TensorMap(Rep[ℤ₂](0=>1, 1=>1) ← Rep[ℤ₂](0=>1, 1=>1)):
-* Data for sector (Irrep[ℤ₂](0),) ← (Irrep[ℤ₂](0),):
- 1.0 + 0.0im
-* Data for sector (Irrep[ℤ₂](1),) ← (Irrep[ℤ₂](1),):
- -1.0 + 0.0im
-
Given these local operators, we can use them to construct the full manifestly \(\mathbb{Z}_2\)-symmetric Hamiltonian.
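A possible sketch of this last step (mirroring the earlier @mpoham construction; it assumes TensorKit.jl, MPSKit.jl and MPSKitModels.jl are loaded, and repeats the operator construction from above so the snippet is self-contained):

```julia
using TensorKit, MPSKit, MPSKitModels

# Z2-graded physical space and the symmetric ZZ and X operators from above
V = Z2Space(0 => 1, 1 => 1)

ZZ = TensorMap(zeros, ComplexF64, V ⊗ V ← V ⊗ V)
flip_charge(c::Z2Irrep) = only(c ⊗ Z2Irrep(1))
for (s, f) in fusiontrees(ZZ)
    s.uncoupled == map(flip_charge, f.uncoupled) && (ZZ[s, f] .= 1)
end

X = TensorMap(zeros, ComplexF64, V ← V)
for (s, f) in fusiontrees(X)
    X[s, f] .= only(f.uncoupled) == Z2Irrep(0) ? 1 : -1
end

# combine the symmetric terms into the full Hamiltonian, as before
lattice = InfiniteChain(1)
H = @mpoham begin
    sum(nearest_neighbours(lattice)) do (i, j)
        return ZZ{i, j}
    end + sum(vertices(lattice)) do i
        return X{i}
    end
end
```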
-Note
-An important observation is that when explicitly imposing the \(\mathbb{Z}_2\) symmetry we directly constructed the full \(ZZ\) operator as a single symmetric tensor. This is in contrast to the case without symmetries, where we constructed a single-site \(Z\) operator and then combined two of these into a two-site operator. Clearly this can no longer be done when imposing \(\mathbb{Z}_2\), since a single \(Z\) is not invariant under conjugation with the symmetry operator \(X\). One might wonder whether it is still possible to construct a two-site Hamiltonian term by combining local objects. This is possible if one introduces an auxiliary index on the local tensors that carries a non-trivial charge. The intuition behind this will become more clear in the next example.
-For our next example we will consider the Bose-Hubbard model, which describes interacting bosons on a lattice. The Hamiltonian of this model is given by
-This Hamiltonian is defined on the Fock space associated to a chain of bosons, where the action of the bosonic creation, annihilation and number operators \(a^+\), \(a^-\) and \(N = a^+ a^-\) in the local occupation number basis is given by
-Their bosonic nature can be summarized by the familiar commutation relations
-This Hamiltonian is invariant under conjugation by the global particle number operator, \(U H U^\dagger = H\), where
-This invariance corresponds to a \(\mathrm{U}(1)\) particle number symmetry, which can again be manifestly imposed when constructing the Hamiltonian terms as TensorMaps. From the representation theory of \(\mathrm{U}(1)\) we know that its irreps are all one-dimensional and can be labeled by integers, where the fusion of two irreps is given by addition.
We recall from our discussion on the \(\mathbb{Z}_2\)-symmetric Ising model that, in order to construct the Hamiltonian terms as symmetric tensors, we should work in the irrep basis where the symmetry transformation is block-diagonal. In the current case, the symmetry operation is the particle number operator, which is already diagonal in the occupation number basis. Therefore, we don’t need an additional local basis transformation this time, and can just observe that each local basis state can be identified with the \(\mathrm{U}(1)\) irrep associated to the corresponding occupation number.
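As a plain-array warmup (independent of TensorKit; the cutoff value mirrors the one used below), we can write the truncated operators in the occupation number basis and check that \(N\) is indeed already diagonal, while the commutation relation is only violated at the cutoff:

```julia
using LinearAlgebra

# Dense matrices for a⁻, a⁺ and N in the occupation number basis with a
# cutoff of 5 bosons per site; column n+1 corresponds to the state |n⟩.
cutoff = 5
d = cutoff + 1
a⁻ = zeros(d, d)
for n in 1:cutoff
    a⁻[n, n + 1] = sqrt(n)    # a⁻|n⟩ = √n |n-1⟩
end
a⁺ = Matrix(a⁻')              # a⁺|n⟩ = √(n+1) |n+1⟩
N = a⁺ * a⁻

@show isdiag(N)                     # true: no basis change needed for U(1)
@show diag(N) ≈ 0:cutoff            # true: N counts the occupation number
@show a⁻ * a⁺ - a⁺ * a⁻ ≈ I(d)      # false: the cutoff breaks [a⁻, a⁺] = 1 at n = 5
```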
-Following the same approach as before, we first write down the action of the Hamiltonian terms in the irrep basis:
-It is then a simple observation that these matrix elements are exactly captured by the following \(\mathrm{U}(1)\) fusion trees with corresponding block values:
-This gives us all the information necessary to construct the corresponding TensorMaps. We follow the same steps as outlined in the previous example, starting with the construction of the physical space. This will now be a \(\mathrm{U}(1)\)-graded vector space U1Space, where each basis state \(\ket{n}\) in the occupation number basis is represented by the corresponding \(\mathrm{U}(1)\) irrep U1Irrep(n) with degeneracy 1. While this physical space is in principle infinite-dimensional, we will impose a cutoff in occupation number at a maximum of 5 bosons per site, giving a 6-dimensional vector space:
cutoff = 5
-V = U1Space(n => 1 for n in 0:cutoff)
-
Rep[U₁](0=>1, 1=>1, 2=>1, 3=>1, 4=>1, 5=>1)
-
We can now initialize the \(a^+ a^-\), \(a^- a^+\) and \(N\) operators as empty TensorMaps with the correct domain and codomain vector spaces, and fill in the nonzero blocks associated to the fusion trees shown above. To do this we need access to the integer label of the \(\mathrm{U}(1)\) irreps in the fusion and splitting trees, which can be accessed through the charge field of the U1Irrep type.
a⁺a⁻ = TensorMap(zeros, ComplexF64, V ⊗ V ← V ⊗ V)
-for (s, f) in fusiontrees(a⁺a⁻)
- if s.uncoupled[1] == only(f.uncoupled[1] ⊗ U1Irrep(1)) && s.uncoupled[2] == only(f.uncoupled[2] ⊗ U1Irrep(-1))
- a⁺a⁻[s, f] .= sqrt(s.uncoupled[1].charge * f.uncoupled[2].charge)
- end
-end
-a⁺a⁻
-
TensorMap((Rep[U₁](0=>1, 1=>1, 2=>1, 3=>1, 4=>1, 5=>1) ⊗ Rep[U₁](0=>1, 1=>1, 2=>1, 3=>1, 4=>1, 5=>1)) ← (Rep[U₁](0=>1, 1=>1, 2=>1, 3=>1, 4=>1, 5=>1) ⊗ Rep[U₁](0=>1, 1=>1, 2=>1, 3=>1, 4=>1, 5=>1))):
-* Data for sector (Irrep[U₁](0), Irrep[U₁](0)) ← (Irrep[U₁](0), Irrep[U₁](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](1), Irrep[U₁](0)) ← (Irrep[U₁](1), Irrep[U₁](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](0), Irrep[U₁](1)) ← (Irrep[U₁](1), Irrep[U₁](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](1), Irrep[U₁](0)) ← (Irrep[U₁](0), Irrep[U₁](1)):
-[:, :, 1, 1] =
- 1.0 + 0.0im
-* Data for sector (Irrep[U₁](0), Irrep[U₁](1)) ← (Irrep[U₁](0), Irrep[U₁](1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](2), Irrep[U₁](0)) ← (Irrep[U₁](2), Irrep[U₁](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](1), Irrep[U₁](1)) ← (Irrep[U₁](2), Irrep[U₁](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](0), Irrep[U₁](2)) ← (Irrep[U₁](2), Irrep[U₁](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](2), Irrep[U₁](0)) ← (Irrep[U₁](1), Irrep[U₁](1)):
-[:, :, 1, 1] =
- 1.4142135623730951 + 0.0im
-* Data for sector (Irrep[U₁](1), Irrep[U₁](1)) ← (Irrep[U₁](1), Irrep[U₁](1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](0), Irrep[U₁](2)) ← (Irrep[U₁](1), Irrep[U₁](1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](2), Irrep[U₁](0)) ← (Irrep[U₁](0), Irrep[U₁](2)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](1), Irrep[U₁](1)) ← (Irrep[U₁](0), Irrep[U₁](2)):
-[:, :, 1, 1] =
- 1.4142135623730951 + 0.0im
-* Data for sector (Irrep[U₁](0), Irrep[U₁](2)) ← (Irrep[U₁](0), Irrep[U₁](2)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](0), Irrep[U₁](3)) ← (Irrep[U₁](0), Irrep[U₁](3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](3), Irrep[U₁](0)) ← (Irrep[U₁](0), Irrep[U₁](3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](2), Irrep[U₁](1)) ← (Irrep[U₁](0), Irrep[U₁](3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](1), Irrep[U₁](2)) ← (Irrep[U₁](0), Irrep[U₁](3)):
-[:, :, 1, 1] =
- 1.7320508075688772 + 0.0im
-* Data for sector (Irrep[U₁](0), Irrep[U₁](3)) ← (Irrep[U₁](3), Irrep[U₁](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](3), Irrep[U₁](0)) ← (Irrep[U₁](3), Irrep[U₁](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](2), Irrep[U₁](1)) ← (Irrep[U₁](3), Irrep[U₁](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](1), Irrep[U₁](2)) ← (Irrep[U₁](3), Irrep[U₁](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](0), Irrep[U₁](3)) ← (Irrep[U₁](2), Irrep[U₁](1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](3), Irrep[U₁](0)) ← (Irrep[U₁](2), Irrep[U₁](1)):
-[:, :, 1, 1] =
- 1.7320508075688772 + 0.0im
-* Data for sector (Irrep[U₁](2), Irrep[U₁](1)) ← (Irrep[U₁](2), Irrep[U₁](1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](1), Irrep[U₁](2)) ← (Irrep[U₁](2), Irrep[U₁](1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](0), Irrep[U₁](3)) ← (Irrep[U₁](1), Irrep[U₁](2)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](3), Irrep[U₁](0)) ← (Irrep[U₁](1), Irrep[U₁](2)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](2), Irrep[U₁](1)) ← (Irrep[U₁](1), Irrep[U₁](2)):
-[:, :, 1, 1] =
- 2.0 + 0.0im
-* Data for sector (Irrep[U₁](1), Irrep[U₁](2)) ← (Irrep[U₁](1), Irrep[U₁](2)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](2), Irrep[U₁](2)) ← (Irrep[U₁](2), Irrep[U₁](2)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](3), Irrep[U₁](1)) ← (Irrep[U₁](2), Irrep[U₁](2)):
-[:, :, 1, 1] =
- 2.449489742783178 + 0.0im
-* Data for sector (Irrep[U₁](0), Irrep[U₁](4)) ← (Irrep[U₁](2), Irrep[U₁](2)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](1), Irrep[U₁](3)) ← (Irrep[U₁](2), Irrep[U₁](2)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](4), Irrep[U₁](0)) ← (Irrep[U₁](2), Irrep[U₁](2)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](2), Irrep[U₁](2)) ← (Irrep[U₁](3), Irrep[U₁](1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](3), Irrep[U₁](1)) ← (Irrep[U₁](3), Irrep[U₁](1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](0), Irrep[U₁](4)) ← (Irrep[U₁](3), Irrep[U₁](1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](1), Irrep[U₁](3)) ← (Irrep[U₁](3), Irrep[U₁](1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](4), Irrep[U₁](0)) ← (Irrep[U₁](3), Irrep[U₁](1)):
-[:, :, 1, 1] =
- 2.0 + 0.0im
-* Data for sector (Irrep[U₁](2), Irrep[U₁](2)) ← (Irrep[U₁](0), Irrep[U₁](4)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](3), Irrep[U₁](1)) ← (Irrep[U₁](0), Irrep[U₁](4)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](0), Irrep[U₁](4)) ← (Irrep[U₁](0), Irrep[U₁](4)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](1), Irrep[U₁](3)) ← (Irrep[U₁](0), Irrep[U₁](4)):
-[:, :, 1, 1] =
- 2.0 + 0.0im
-* Data for sector (Irrep[U₁](4), Irrep[U₁](0)) ← (Irrep[U₁](0), Irrep[U₁](4)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](2), Irrep[U₁](2)) ← (Irrep[U₁](1), Irrep[U₁](3)):
-[:, :, 1, 1] =
- 2.449489742783178 + 0.0im
-* Data for sector (Irrep[U₁](3), Irrep[U₁](1)) ← (Irrep[U₁](1), Irrep[U₁](3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](0), Irrep[U₁](4)) ← (Irrep[U₁](1), Irrep[U₁](3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](1), Irrep[U₁](3)) ← (Irrep[U₁](1), Irrep[U₁](3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](4), Irrep[U₁](0)) ← (Irrep[U₁](1), Irrep[U₁](3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](2), Irrep[U₁](2)) ← (Irrep[U₁](4), Irrep[U₁](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](3), Irrep[U₁](1)) ← (Irrep[U₁](4), Irrep[U₁](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](0), Irrep[U₁](4)) ← (Irrep[U₁](4), Irrep[U₁](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](1), Irrep[U₁](3)) ← (Irrep[U₁](4), Irrep[U₁](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](4), Irrep[U₁](0)) ← (Irrep[U₁](4), Irrep[U₁](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](5), Irrep[U₁](0)) ← (Irrep[U₁](5), Irrep[U₁](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](2), Irrep[U₁](3)) ← (Irrep[U₁](5), Irrep[U₁](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](0), Irrep[U₁](5)) ← (Irrep[U₁](5), Irrep[U₁](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](4), Irrep[U₁](1)) ← (Irrep[U₁](5), Irrep[U₁](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](3), Irrep[U₁](2)) ← (Irrep[U₁](5), Irrep[U₁](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](1), Irrep[U₁](4)) ← (Irrep[U₁](5), Irrep[U₁](0)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](5), Irrep[U₁](0)) ← (Irrep[U₁](2), Irrep[U₁](3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](2), Irrep[U₁](3)) ← (Irrep[U₁](2), Irrep[U₁](3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](0), Irrep[U₁](5)) ← (Irrep[U₁](2), Irrep[U₁](3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](4), Irrep[U₁](1)) ← (Irrep[U₁](2), Irrep[U₁](3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](3), Irrep[U₁](2)) ← (Irrep[U₁](2), Irrep[U₁](3)):
-[:, :, 1, 1] =
- 3.0 + 0.0im
-* Data for sector (Irrep[U₁](1), Irrep[U₁](4)) ← (Irrep[U₁](2), Irrep[U₁](3)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](5), Irrep[U₁](0)) ← (Irrep[U₁](0), Irrep[U₁](5)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](2), Irrep[U₁](3)) ← (Irrep[U₁](0), Irrep[U₁](5)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](0), Irrep[U₁](5)) ← (Irrep[U₁](0), Irrep[U₁](5)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](4), Irrep[U₁](1)) ← (Irrep[U₁](0), Irrep[U₁](5)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](3), Irrep[U₁](2)) ← (Irrep[U₁](0), Irrep[U₁](5)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](1), Irrep[U₁](4)) ← (Irrep[U₁](0), Irrep[U₁](5)):
-[:, :, 1, 1] =
- 2.23606797749979 + 0.0im
-* Data for sector (Irrep[U₁](5), Irrep[U₁](0)) ← (Irrep[U₁](4), Irrep[U₁](1)):
-[:, :, 1, 1] =
- 2.23606797749979 + 0.0im
-* Data for sector (Irrep[U₁](2), Irrep[U₁](3)) ← (Irrep[U₁](4), Irrep[U₁](1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](0), Irrep[U₁](5)) ← (Irrep[U₁](4), Irrep[U₁](1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](4), Irrep[U₁](1)) ← (Irrep[U₁](4), Irrep[U₁](1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](3), Irrep[U₁](2)) ← (Irrep[U₁](4), Irrep[U₁](1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](1), Irrep[U₁](4)) ← (Irrep[U₁](4), Irrep[U₁](1)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](5), Irrep[U₁](0)) ← (Irrep[U₁](3), Irrep[U₁](2)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](2), Irrep[U₁](3)) ← (Irrep[U₁](3), Irrep[U₁](2)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](0), Irrep[U₁](5)) ← (Irrep[U₁](3), Irrep[U₁](2)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](4), Irrep[U₁](1)) ← (Irrep[U₁](3), Irrep[U₁](2)):
-[:, :, 1, 1] =
- 2.8284271247461903 + 0.0im
-* Data for sector (Irrep[U₁](3), Irrep[U₁](2)) ← (Irrep[U₁](3), Irrep[U₁](2)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](1), Irrep[U₁](4)) ← (Irrep[U₁](3), Irrep[U₁](2)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](5), Irrep[U₁](0)) ← (Irrep[U₁](1), Irrep[U₁](4)):
-[:, :, 1, 1] =
- 0.0 + 0.0im
-* Data for sector (Irrep[U₁](2), Irrep[U₁](3)) ← (Irrep[U₁](1), Irrep[U₁](4)):
-[:, :, 1, 1] =
- 2.8284271247461903 + 0.0im
-* Data for sector (Irrep[U₁](0), Irrep[U₁](5)) ← (Irrep[U₁](1), Irrep[U₁](4)):
[:, :, 1, 1] =
 0.0 + 0.0im
Nonzero 1×1 sector blocks of the remaining output (all other sector blocks are 0.0 + 0.0im):
* (Irrep[U₁](4), Irrep[U₁](2)) ← (Irrep[U₁](3), Irrep[U₁](3)): 3.4641016151377544 + 0.0im
* (Irrep[U₁](3), Irrep[U₁](3)) ← (Irrep[U₁](2), Irrep[U₁](4)): 3.4641016151377544 + 0.0im
* (Irrep[U₁](2), Irrep[U₁](4)) ← (Irrep[U₁](1), Irrep[U₁](5)): 3.1622776601683795 + 0.0im
* (Irrep[U₁](5), Irrep[U₁](1)) ← (Irrep[U₁](4), Irrep[U₁](2)): 3.1622776601683795 + 0.0im
* (Irrep[U₁](3), Irrep[U₁](4)) ← (Irrep[U₁](2), Irrep[U₁](5)): 3.872983346207417 + 0.0im
* (Irrep[U₁](5), Irrep[U₁](2)) ← (Irrep[U₁](4), Irrep[U₁](3)): 3.872983346207417 + 0.0im
* (Irrep[U₁](4), Irrep[U₁](3)) ← (Irrep[U₁](3), Irrep[U₁](4)): 4.0 + 0.0im
* (Irrep[U₁](4), Irrep[U₁](4)) ← (Irrep[U₁](3), Irrep[U₁](5)): 4.47213595499958 + 0.0im
* (Irrep[U₁](5), Irrep[U₁](3)) ← (Irrep[U₁](4), Irrep[U₁](4)): 4.47213595499958 + 0.0im
* (Irrep[U₁](5), Irrep[U₁](4)) ← (Irrep[U₁](4), Irrep[U₁](5)): 5.0 + 0.0im
a⁻a⁺ = TensorMap(zeros, ComplexF64, V ⊗ V ← V ⊗ V)
for (s, f) in fusiontrees(a⁻a⁺)
    if s.uncoupled[1] == only(f.uncoupled[1] ⊗ U1Irrep(-1)) &&
       s.uncoupled[2] == only(f.uncoupled[2] ⊗ U1Irrep(1))
        a⁻a⁺[s, f] .= sqrt(f.uncoupled[1].charge * s.uncoupled[2].charge)
    end
end
a⁻a⁺
TensorMap((Rep[U₁](0=>1, 1=>1, 2=>1, 3=>1, 4=>1, 5=>1) ⊗ Rep[U₁](0=>1, 1=>1, 2=>1, 3=>1, 4=>1, 5=>1)) ← (Rep[U₁](0=>1, 1=>1, 2=>1, 3=>1, 4=>1, 5=>1) ⊗ Rep[U₁](0=>1, 1=>1, 2=>1, 3=>1, 4=>1, 5=>1))):
Nonzero 1×1 sector blocks (all other sector blocks are 0.0 + 0.0im):
* (Irrep[U₁](0), Irrep[U₁](1)) ← (Irrep[U₁](1), Irrep[U₁](0)): 1.0 + 0.0im
* (Irrep[U₁](1), Irrep[U₁](1)) ← (Irrep[U₁](2), Irrep[U₁](0)): 1.4142135623730951 + 0.0im
* (Irrep[U₁](0), Irrep[U₁](2)) ← (Irrep[U₁](1), Irrep[U₁](1)): 1.4142135623730951 + 0.0im
* (Irrep[U₁](2), Irrep[U₁](1)) ← (Irrep[U₁](3), Irrep[U₁](0)): 1.7320508075688772 + 0.0im
* (Irrep[U₁](1), Irrep[U₁](2)) ← (Irrep[U₁](2), Irrep[U₁](1)): 2.0 + 0.0im
* (Irrep[U₁](0), Irrep[U₁](3)) ← (Irrep[U₁](1), Irrep[U₁](2)): 1.7320508075688772 + 0.0im
* (Irrep[U₁](1), Irrep[U₁](3)) ← (Irrep[U₁](2), Irrep[U₁](2)): 2.449489742783178 + 0.0im
* (Irrep[U₁](2), Irrep[U₁](2)) ← (Irrep[U₁](3), Irrep[U₁](1)): 2.449489742783178 + 0.0im
* (Irrep[U₁](0), Irrep[U₁](4)) ← (Irrep[U₁](1), Irrep[U₁](3)): 2.0 + 0.0im
* (Irrep[U₁](3), Irrep[U₁](1)) ← (Irrep[U₁](4), Irrep[U₁](0)): 2.0 + 0.0im
* (Irrep[U₁](4), Irrep[U₁](1)) ← (Irrep[U₁](5), Irrep[U₁](0)): 2.23606797749979 + 0.0im
* (Irrep[U₁](1), Irrep[U₁](4)) ← (Irrep[U₁](2), Irrep[U₁](3)): 2.8284271247461903 + 0.0im
* (Irrep[U₁](3), Irrep[U₁](2)) ← (Irrep[U₁](4), Irrep[U₁](1)): 2.8284271247461903 + 0.0im
* (Irrep[U₁](2), Irrep[U₁](3)) ← (Irrep[U₁](3), Irrep[U₁](2)): 3.0 + 0.0im
* (Irrep[U₁](0), Irrep[U₁](5)) ← (Irrep[U₁](1), Irrep[U₁](4)): 2.23606797749979 + 0.0im
* (Irrep[U₁](4), Irrep[U₁](2)) ← (Irrep[U₁](5), Irrep[U₁](1)): 3.1622776601683795 + 0.0im
* (Irrep[U₁](2), Irrep[U₁](4)) ← (Irrep[U₁](3), Irrep[U₁](3)): 3.4641016151377544 + 0.0im
* (Irrep[U₁](1), Irrep[U₁](5)) ← (Irrep[U₁](2), Irrep[U₁](4)): 3.1622776601683795 + 0.0im
* (Irrep[U₁](3), Irrep[U₁](3)) ← (Irrep[U₁](4), Irrep[U₁](2)): 3.4641016151377544 + 0.0im
* (Irrep[U₁](3), Irrep[U₁](4)) ← (Irrep[U₁](4), Irrep[U₁](3)): 4.0 + 0.0im
* (Irrep[U₁](2), Irrep[U₁](5)) ← (Irrep[U₁](3), Irrep[U₁](4)): 3.872983346207417 + 0.0im
* (Irrep[U₁](4), Irrep[U₁](3)) ← (Irrep[U₁](5), Irrep[U₁](2)): 3.872983346207417 + 0.0im
* (Irrep[U₁](4), Irrep[U₁](4)) ← (Irrep[U₁](5), Irrep[U₁](3)): 4.47213595499958 + 0.0im
* (Irrep[U₁](3), Irrep[U₁](5)) ← (Irrep[U₁](4), Irrep[U₁](4)): 4.47213595499958 + 0.0im
* (Irrep[U₁](4), Irrep[U₁](5)) ← (Irrep[U₁](5), Irrep[U₁](4)): 5.0 + 0.0im
N = TensorMap(zeros, ComplexF64, V ← V)
for (s, f) in fusiontrees(N)
    N[s, f] .= f.uncoupled[1].charge
end
N
TensorMap(Rep[U₁](0=>1, 1=>1, 2=>1, 3=>1, 4=>1, 5=>1) ← Rep[U₁](0=>1, 1=>1, 2=>1, 3=>1, 4=>1, 5=>1)):
* Data for sector (Irrep[U₁](0),) ← (Irrep[U₁](0),):
 0.0 + 0.0im
* Data for sector (Irrep[U₁](1),) ← (Irrep[U₁](1),):
 1.0 + 0.0im
* Data for sector (Irrep[U₁](2),) ← (Irrep[U₁](2),):
 2.0 + 0.0im
* Data for sector (Irrep[U₁](3),) ← (Irrep[U₁](3),):
 3.0 + 0.0im
* Data for sector (Irrep[U₁](4),) ← (Irrep[U₁](4),):
 4.0 + 0.0im
* Data for sector (Irrep[U₁](5),) ← (Irrep[U₁](5),):
 5.0 + 0.0im
Just as in the \(\mathbb{Z}_2\) case, it is clear that we cannot directly construct the creation and annihilation operators as instances of a TensorMap(..., V ← V), since they are not invariant under conjugation by the symmetry operator. However, it is possible to construct them as TensorMaps using an auxiliary vector space, based on the following intuition. The creation operator \(a^+\) violates particle number conservation by mapping the occupation number \(n\) to \(n + 1\). From the point of view of representation theory, this process can be thought of as the fusion of a U1Irrep(n) with a U1Irrep(1), naturally giving the fusion product U1Irrep(n + 1). This means we can represent \(a^+\) as a TensorMap(..., V ← V ⊗ A), where the auxiliary vector space A contains the \(+1\) irrep with degeneracy 1, A = U1Space(1 => 1). Similarly, the decrease in occupation number when acting with \(a^-\) can be thought of as the splitting of a U1Irrep(n) into a U1Irrep(n - 1) and a U1Irrep(1), leading to a representation in terms of a TensorMap(..., A ⊗ V ← V). Based on these observations, we can represent the matrix elements (18.4) as blocks labeled by the \(\mathrm{U}(1)\) fusion trees. We can then combine these operators to get the appropriate Hamiltonian terms.
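The \(\sqrt{n}\) matrix elements underlying these blocks can be cross-checked with plain dense matrices on a truncated occupation basis. This is a standalone Python sketch, outside the TensorKit framework used in this tutorial; the matmul helper is purely illustrative:

```python
import math

d = 6  # truncated occupation numbers n = 0, ..., 5, as in the example above

# dense matrices <m|.|n>: a⁺|n> = sqrt(n+1)|n+1> and a⁻|n> = sqrt(n)|n-1>
a_plus = [[math.sqrt(n + 1) if m == n + 1 else 0.0 for n in range(d)] for m in range(d)]
a_minus = [[math.sqrt(n) if m == n - 1 else 0.0 for n in range(d)] for m in range(d)]

def matmul(A, B):
    """Naive dense matrix product, enough for this check."""
    return [[sum(A[i][k] * B[k][j] for k in range(d)) for j in range(d)] for i in range(d)]

# a⁺a⁻ is diagonal with entries n: the particle-number operator N
N = matmul(a_plus, a_minus)
print([round(N[n][n], 10) for n in range(d)])  # → [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
```

The off-diagonal products, such as a⁺ applied twice, reproduce exactly the sector values shown in the TensorMap printouts above.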
Note
Although we have made a suggestive distinction between the 'left' and 'right' versions of the operators \(a_L^\pm\) and \(a_R^\pm\), one can actually be obtained from the other by permuting the physical and auxiliary indices of the corresponding TensorMaps. This permutation has no effect on the actual array blocks of the tensors due to the bosonic braiding style of \(\mathrm{U}(1)\) irreps, so the left and right operators can in essence be seen as the 'same' tensors. This is no longer the case when considering fermionic systems, where permuting indices can in fact change the array blocks, as we will see next. As a consequence, it is much less clear how to construct two-site symmetric operators in terms of local symmetric objects.
The explicit construction then looks something like
A = U1Space(1 => 1)

Rep[U₁](1=>1)
a⁺ = TensorMap(zeros, ComplexF64, V ← V ⊗ A)
for (s, f) in fusiontrees(a⁺)
    a⁺[s, f] .= sqrt(f.uncoupled[1].charge + 1)
end
a⁺
TensorMap(Rep[U₁](0=>1, 1=>1, 2=>1, 3=>1, 4=>1, 5=>1) ← (Rep[U₁](0=>1, 1=>1, 2=>1, 3=>1, 4=>1, 5=>1) ⊗ Rep[U₁](1=>1))):
* Data for sector (Irrep[U₁](1),) ← (Irrep[U₁](0), Irrep[U₁](1)):
[:, :, 1] =
 1.0 + 0.0im
* Data for sector (Irrep[U₁](2),) ← (Irrep[U₁](1), Irrep[U₁](1)):
[:, :, 1] =
 1.4142135623730951 + 0.0im
* Data for sector (Irrep[U₁](3),) ← (Irrep[U₁](2), Irrep[U₁](1)):
[:, :, 1] =
 1.7320508075688772 + 0.0im
* Data for sector (Irrep[U₁](4),) ← (Irrep[U₁](3), Irrep[U₁](1)):
[:, :, 1] =
 2.0 + 0.0im
* Data for sector (Irrep[U₁](5),) ← (Irrep[U₁](4), Irrep[U₁](1)):
[:, :, 1] =
 2.23606797749979 + 0.0im
a⁻ = TensorMap(zeros, ComplexF64, A ⊗ V ← V)
for (s, f) in fusiontrees(a⁻)
    a⁻[s, f] .= sqrt(f.uncoupled[1].charge)
end
a⁻
TensorMap((Rep[U₁](1=>1) ⊗ Rep[U₁](0=>1, 1=>1, 2=>1, 3=>1, 4=>1, 5=>1)) ← Rep[U₁](0=>1, 1=>1, 2=>1, 3=>1, 4=>1, 5=>1)):
* Data for sector (Irrep[U₁](1), Irrep[U₁](0)) ← (Irrep[U₁](1),):
[:, :, 1] =
 1.0 + 0.0im
* Data for sector (Irrep[U₁](1), Irrep[U₁](1)) ← (Irrep[U₁](2),):
[:, :, 1] =
 1.4142135623730951 + 0.0im
* Data for sector (Irrep[U₁](1), Irrep[U₁](2)) ← (Irrep[U₁](3),):
[:, :, 1] =
 1.7320508075688772 + 0.0im
* Data for sector (Irrep[U₁](1), Irrep[U₁](3)) ← (Irrep[U₁](4),):
[:, :, 1] =
 2.0 + 0.0im
* Data for sector (Irrep[U₁](1), Irrep[U₁](4)) ← (Irrep[U₁](5),):
[:, :, 1] =
 2.23606797749979 + 0.0im
It is then simple to check that this is indeed what we expect.
@tensor a⁺a⁻_bis[-1 -2; -3 -4] := a⁺[-1; -3 1] * a⁻[1 -2; -4]
@tensor a⁻a⁺_bis[-1 -2; -3 -4] := a⁻[1 -1; -3] * a⁺[-2; -4 1]
@tensor N_bis[-1; -2] := a⁺[-1; 1 2] * a⁻[2 1; -2]

@test a⁺a⁻_bis ≈ a⁺a⁻ atol=1e-14
@test a⁻a⁺_bis ≈ a⁻a⁺ atol=1e-14
@test N_bis ≈ N atol=1e-14

Test Passed
Note
From the construction of the Hamiltonian operators in terms of creation and annihilation operators, we clearly see that they are invariant under a transformation \(a^\pm \to e^{\pm i\theta} a^\pm\). More generally, any invertible transformation on the auxiliary space leaves the resulting contraction unchanged. This ambiguity in the definition shows that one should really always think in terms of the fully symmetric products of \(a^+\) and \(a^-\) rather than in terms of these operators themselves. In particular, one can always decompose such a symmetric product into the form above by means of an SVD.
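The invariance mentioned in this note is easy to verify with the same kind of dense truncated-boson matrices. Again a standalone Python sketch (the scale helper is illustrative): multiplying \(a^\pm\) by opposite phases leaves the product \(a^+ a^-\) untouched.

```python
import cmath
import math

d = 6
theta = 0.7  # an arbitrary gauge angle

a_plus = [[math.sqrt(n + 1) if m == n + 1 else 0.0 for n in range(d)] for m in range(d)]
a_minus = [[math.sqrt(n) if m == n - 1 else 0.0 for n in range(d)] for m in range(d)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(d)) for j in range(d)] for i in range(d)]

def scale(z, A):
    return [[z * A[i][j] for j in range(d)] for i in range(d)]

# gauge-transformed operators a± → e^{±iθ} a±
ap_t = scale(cmath.exp(1j * theta), a_plus)
am_t = scale(cmath.exp(-1j * theta), a_minus)

# the symmetric product a⁺a⁻ is unchanged: the phases cancel
orig = matmul(a_plus, a_minus)
trans = matmul(ap_t, am_t)
assert all(abs(orig[i][j] - trans[i][j]) < 1e-12 for i in range(d) for j in range(d))
```

The same cancellation happens for any invertible transformation on the one-dimensional auxiliary space, which here is just a nonzero complex number and its inverse.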
While we have already covered quite a lot of ground towards understanding symmetric tensors in terms of fusion trees and corresponding blocks, the symmetries considered so far have been quite 'simple', in the sense that sectors corresponding to irreps of \(\mathbb{Z}_2\) and \(\mathrm{U}(1)\) have Abelian fusion rules and bosonic exchange statistics. This means that the fusion of two irreps always gives a single irrep as the fusion product, and that exchanging two irreps in a tensor product is trivial. In practice, this implies that for tensors with these symmetries the fusion trees are completely fixed by the uncoupled charges, which uniquely define both the inner lines and the coupled charge, and that tensor indices can be permuted freely without any 'strange' side effects.
In the following we will consider examples with fermionic and even anyonic exchange statistics, and non-Abelian fusion rules. In going through these examples it will become clear that the fusion trees labeling the blocks of a symmetric tensor encode more information than just a set of labels.
As a simple example we will consider the Kitaev chain, which describes a chain of interacting spinless fermions with nearest-neighbor hopping and pairing terms. The Hamiltonian of this model is given by

where \(N_i = c_i^+ c_i^-\) is the local particle number operator. As opposed to the previous case, the fermionic creation and annihilation operators now satisfy the anticommutation relations

These relations justify the choice of the relative minus sign in the hopping and pairing terms. Indeed, since fermionic operators on different sites always anticommute, these relative minus signs are needed to ensure that the Hamiltonian is Hermitian, since \(\left( c_i^+ c_j^- \right)^\dagger = c_j^+ c_i^- = - c_i^- c_j^+\) and \(\left( c_i^+ c_j^+ \right)^\dagger = c_j^- c_i^- = - c_i^- c_j^-\). The anticommutation relations also naturally restrict the local occupation number to be 0 or 1, leading to a well-defined notion of fermion parity. The local fermion-parity operator is related to the fermion number operator as \(Q_i = (-1)^{n_i}\), and is diagonal in the occupation number basis. The Hamiltonian (18.5) is invariant under conjugation by the global fermion-parity operator, \(Q H Q^\dagger = H\), where

This fermion parity symmetry, which we will denote as \(f\mathbb{Z}_2\), is a \(\mathbb{Z}_2\)-like symmetry in the sense that it has a trivial representation, which we call even and again denote by '0', and a sign representation which we call odd and denote by '1'. The fusion rules of these irreps are the same as for \(\mathbb{Z}_2\). Similar to the previous case, the local symmetry operator \(Q_i\) is already diagonal, so the occupation number basis coincides with the irrep basis and we don't need an additional basis transform. The important difference with a regular \(\mathbb{Z}_2\) symmetry is that the irreps of \(f\mathbb{Z}_2\) have fermionic braiding statistics, in the sense that exchanging two odd irreps gives rise to a minus sign.
In TensorKit.jl, an \(f\mathbb{Z}_2\)-graded vector space is represented as a Vect[FermionParity] space, where a given \(f\mathbb{Z}_2\) irrep can be represented as a FermionParity sector instance. Using the simplest instance of a vector space containing a single even and odd irrep, we can already demonstrate the corresponding fermionic braiding behavior by performing a permutation on a simple TensorMap.
V = Vect[FermionParity](0 => 1, 1 => 1)
t = TensorMap(ones, ComplexF64, V ← V ⊗ V)
TensorMap(Vect[FermionParity](0=>1, 1=>1) ← (Vect[FermionParity](0=>1, 1=>1) ⊗ Vect[FermionParity](0=>1, 1=>1))):
* Data for sector (FermionParity(0),) ← (FermionParity(0), FermionParity(0)):
[:, :, 1] =
 1.0 + 0.0im
* Data for sector (FermionParity(0),) ← (FermionParity(1), FermionParity(1)):
[:, :, 1] =
 1.0 + 0.0im
* Data for sector (FermionParity(1),) ← (FermionParity(1), FermionParity(0)):
[:, :, 1] =
 1.0 + 0.0im
* Data for sector (FermionParity(1),) ← (FermionParity(0), FermionParity(1)):
[:, :, 1] =
 1.0 + 0.0im
permute(t, ((1,), (3, 2)))
TensorMap(Vect[FermionParity](0=>1, 1=>1) ← (Vect[FermionParity](0=>1, 1=>1) ⊗ Vect[FermionParity](0=>1, 1=>1))):
* Data for sector (FermionParity(0),) ← (FermionParity(0), FermionParity(0)):
[:, :, 1] =
 1.0 + 0.0im
* Data for sector (FermionParity(0),) ← (FermionParity(1), FermionParity(1)):
[:, :, 1] =
 -1.0 + 0.0im
* Data for sector (FermionParity(1),) ← (FermionParity(1), FermionParity(0)):
[:, :, 1] =
 1.0 + 0.0im
* Data for sector (FermionParity(1),) ← (FermionParity(0), FermionParity(1)):
[:, :, 1] =
 1.0 + 0.0im
In other words, when exchanging the two vector spaces in the domain, the block of the TensorMap for which both corresponding irreps are odd picks up a minus sign, exactly as we would expect for fermionic charges.
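The sign pattern in this output follows the graded swap rule of \(f\mathbb{Z}_2\): exchanging sectors with parities \(p_1\) and \(p_2\) multiplies the block by \((-1)^{p_1 p_2}\), so only the odd-odd block flips sign. A minimal standalone sketch of this rule in Python (the dictionary-of-blocks representation is a simplification, not TensorKit's data layout):

```python
# graded swap for fermion parities p in {0, 1}: only odd x odd anticommutes
def swap_sign(p1, p2):
    return -1 if p1 == 1 and p2 == 1 else 1

# the four 1x1 blocks of t, labeled by the domain parities (p1, p2), all set to 1.0
blocks = {(0, 0): 1.0, (1, 1): 1.0, (1, 0): 1.0, (0, 1): 1.0}

# permuting the two domain spaces reorders the labels and multiplies by the swap sign
permuted = {(p2, p1): swap_sign(p1, p2) * v for (p1, p2), v in blocks.items()}
assert permuted[(1, 1)] == -1.0  # only the odd-odd block picks up a minus sign
assert permuted[(0, 0)] == 1.0 and permuted[(0, 1)] == 1.0 and permuted[(1, 0)] == 1.0
```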
We can directly construct the Hamiltonian terms as symmetric TensorMaps using the same procedure as before, starting from their matrix elements in the occupation number basis. However, in this case we should be a bit more careful about the precise definition of the basis states in composite systems. Indeed, the tensor product structure of fermionic systems is inherently tricky to deal with, and should ideally be treated in the context of super vector spaces. For two sites, we can define the following basis states on top of the fermionic vacuum \(\ket{00}\):
This definition, in combination with the anticommutation relations above, gives rise to the nonzero matrix elements
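\[
\bra{10} c_1^+ c_2^- \ket{01} = 1, \qquad
\bra{01} c_1^- c_2^+ \ket{10} = -1, \qquad
\bra{11} c_1^+ c_2^+ \ket{00} = 1, \qquad
\bra{00} c_1^- c_2^- \ket{11} = -1.
\]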
While the signs in these expressions may seem a little unintuitive at first sight, they are essential to the fermionic nature of the system. Indeed, if we for example work out the matrix element of \(c_1^- c_2^+\) we find
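\[
c_1^- c_2^+ \ket{10} = c_1^- c_2^+ c_1^+ \ket{00} = - c_1^- c_1^+ c_2^+ \ket{00} = - c_2^+ \ket{00} = -\ket{01},
\]
where the minus sign arises from anticommuting \(c_2^+\) past \(c_1^+\).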
Once we have these matrix elements, the hard part is done, and we can naively associate these to the following \(f\mathbb{Z}_2\) fusion trees with corresponding block values,
Given this information, we can go through the same procedure again to construct the \(c^+ c^-\), \(c^- c^+\) and \(N\) operators as TensorMaps over \(f\mathbb{Z}_2\)-graded vector spaces.
V = Vect[FermionParity](0 => 1, 1 => 1)
Vect[FermionParity](0=>1, 1=>1)
c⁺c⁻ = TensorMap(zeros, ComplexF64, V ⊗ V ← V ⊗ V)
odd = FermionParity(1)
for (s, f) in fusiontrees(c⁺c⁻)
    if s.uncoupled[1] == odd && f.uncoupled[2] == odd && f.coupled == odd
        c⁺c⁻[s, f] .= 1
    end
end
c⁺c⁻
TensorMap((Vect[FermionParity](0=>1, 1=>1) ⊗ Vect[FermionParity](0=>1, 1=>1)) ← (Vect[FermionParity](0=>1, 1=>1) ⊗ Vect[FermionParity](0=>1, 1=>1))):
* Data for sector (FermionParity(0), FermionParity(0)) ← (FermionParity(0), FermionParity(0)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for sector (FermionParity(1), FermionParity(1)) ← (FermionParity(0), FermionParity(0)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for sector (FermionParity(0), FermionParity(0)) ← (FermionParity(1), FermionParity(1)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for sector (FermionParity(1), FermionParity(1)) ← (FermionParity(1), FermionParity(1)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for sector (FermionParity(1), FermionParity(0)) ← (FermionParity(1), FermionParity(0)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for sector (FermionParity(0), FermionParity(1)) ← (FermionParity(1), FermionParity(0)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for sector (FermionParity(1), FermionParity(0)) ← (FermionParity(0), FermionParity(1)):
[:, :, 1, 1] =
 1.0 + 0.0im
* Data for sector (FermionParity(0), FermionParity(1)) ← (FermionParity(0), FermionParity(1)):
[:, :, 1, 1] =
 0.0 + 0.0im
c⁻c⁺ = TensorMap(zeros, ComplexF64, V ⊗ V ← V ⊗ V)
for (s, f) in fusiontrees(c⁻c⁺)
    if f.uncoupled[1] == odd && s.uncoupled[2] == odd && f.coupled == odd
        c⁻c⁺[s, f] .= -1
    end
end
c⁻c⁺
TensorMap((Vect[FermionParity](0=>1, 1=>1) ⊗ Vect[FermionParity](0=>1, 1=>1)) ← (Vect[FermionParity](0=>1, 1=>1) ⊗ Vect[FermionParity](0=>1, 1=>1))):
* Data for sector (FermionParity(0), FermionParity(0)) ← (FermionParity(0), FermionParity(0)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for sector (FermionParity(1), FermionParity(1)) ← (FermionParity(0), FermionParity(0)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for sector (FermionParity(0), FermionParity(0)) ← (FermionParity(1), FermionParity(1)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for sector (FermionParity(1), FermionParity(1)) ← (FermionParity(1), FermionParity(1)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for sector (FermionParity(1), FermionParity(0)) ← (FermionParity(1), FermionParity(0)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for sector (FermionParity(0), FermionParity(1)) ← (FermionParity(1), FermionParity(0)):
[:, :, 1, 1] =
 -1.0 + 0.0im
* Data for sector (FermionParity(1), FermionParity(0)) ← (FermionParity(0), FermionParity(1)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for sector (FermionParity(0), FermionParity(1)) ← (FermionParity(0), FermionParity(1)):
[:, :, 1, 1] =
 0.0 + 0.0im
c⁺c⁺ = TensorMap(zeros, ComplexF64, V ⊗ V ← V ⊗ V)
odd = FermionParity(1)
for (s, f) in fusiontrees(c⁺c⁺)
    if s.uncoupled[1] == odd && f.uncoupled[1] != odd && f.coupled != odd
        c⁺c⁺[s, f] .= 1
    end
end
c⁺c⁺
TensorMap((Vect[FermionParity](0=>1, 1=>1) ⊗ Vect[FermionParity](0=>1, 1=>1)) ← (Vect[FermionParity](0=>1, 1=>1) ⊗ Vect[FermionParity](0=>1, 1=>1))):
* Data for sector (FermionParity(0), FermionParity(0)) ← (FermionParity(0), FermionParity(0)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for sector (FermionParity(1), FermionParity(1)) ← (FermionParity(0), FermionParity(0)):
[:, :, 1, 1] =
 1.0 + 0.0im
* Data for sector (FermionParity(0), FermionParity(0)) ← (FermionParity(1), FermionParity(1)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for sector (FermionParity(1), FermionParity(1)) ← (FermionParity(1), FermionParity(1)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for sector (FermionParity(1), FermionParity(0)) ← (FermionParity(1), FermionParity(0)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for sector (FermionParity(0), FermionParity(1)) ← (FermionParity(1), FermionParity(0)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for sector (FermionParity(1), FermionParity(0)) ← (FermionParity(0), FermionParity(1)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for sector (FermionParity(0), FermionParity(1)) ← (FermionParity(0), FermionParity(1)):
[:, :, 1, 1] =
 0.0 + 0.0im
c⁻c⁻ = TensorMap(zeros, ComplexF64, V ⊗ V ← V ⊗ V)
for (s, f) in fusiontrees(c⁻c⁻)
    if s.uncoupled[1] != odd && f.uncoupled[2] == odd && f.coupled != odd
        c⁻c⁻[s, f] .= -1
    end
end
c⁻c⁻
TensorMap((Vect[FermionParity](0=>1, 1=>1) ⊗ Vect[FermionParity](0=>1, 1=>1)) ← (Vect[FermionParity](0=>1, 1=>1) ⊗ Vect[FermionParity](0=>1, 1=>1))):
* Data for sector (FermionParity(0), FermionParity(0)) ← (FermionParity(0), FermionParity(0)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for sector (FermionParity(1), FermionParity(1)) ← (FermionParity(0), FermionParity(0)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for sector (FermionParity(0), FermionParity(0)) ← (FermionParity(1), FermionParity(1)):
[:, :, 1, 1] =
 -1.0 + 0.0im
* Data for sector (FermionParity(1), FermionParity(1)) ← (FermionParity(1), FermionParity(1)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for sector (FermionParity(1), FermionParity(0)) ← (FermionParity(1), FermionParity(0)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for sector (FermionParity(0), FermionParity(1)) ← (FermionParity(1), FermionParity(0)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for sector (FermionParity(1), FermionParity(0)) ← (FermionParity(0), FermionParity(1)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for sector (FermionParity(0), FermionParity(1)) ← (FermionParity(0), FermionParity(1)):
[:, :, 1, 1] =
 0.0 + 0.0im
N = TensorMap(zeros, ComplexF64, V ← V)
for (s, f) in fusiontrees(N)
    N[s, f] .= f.coupled == odd ? 1 : 0
end
N
TensorMap(Vect[FermionParity](0=>1, 1=>1) ← Vect[FermionParity](0=>1, 1=>1)):
* Data for sector (FermionParity(0),) ← (FermionParity(0),):
 0.0 + 0.0im
* Data for sector (FermionParity(1),) ← (FermionParity(1),):
 1.0 + 0.0im
Note
Working with fermionic systems is inherently tricky, as can already be seen from something as simple as computing matrix elements of fermionic operators. Similarly, while constructing symmetric tensors that correspond to the symmetric Hamiltonian terms was still quite straightforward, it is far less clear how to construct these terms as contractions of local symmetric tensors representing individual creation and annihilation operators. While such a decomposition can in principle always be obtained using a (now explicitly fermionic) SVD, manually constructing such tensors as we did in the bosonic case is far from trivial. Trying this would be a good exercise in working with fermionic symmetries, but it is not something we will do here.
We will now move on to systems which have more complicated non-Abelian symmetries. For a non-Abelian symmetry group \(G\), the fact that its elements do not all commute has a profound impact on its representation theory. In particular, the irreps of such a group can be higher-dimensional, and the fusion of two irreps can give rise to multiple different irreps. On the one hand, this means that fusion trees of these irreps are no longer completely determined by the uncoupled charges. Indeed, in this case some of the internal structure of the FusionTree type we have ignored before will become relevant (we will give an example below). On the other hand, it follows that fusion trees of irreps now not only label blocks, but also encode a certain nontrivial symmetry structure. We will make this statement more precise in the following, but the fact that this is necessary is quite intuitive. If we recall our original statement that symmetric tensors consist of blocks associated to fusion trees which carry irrep labels, then for higher-dimensional irreps the corresponding fusion trees must encode some additional information that implicitly takes into account the internal structure of the representation spaces. In particular, this means that the conversion of an operator, given its matrix elements in the irrep basis, to the blocks of the corresponding symmetric TensorMap is less straightforward, since it requires an understanding of exactly what this implied internal structure is. Therefore, some more discussion is required before we can actually move on to an example.
We’ll start by discussing the general structure of a TensorMap which is symmetric under a non-Abelian group symmetry. We then give an example based on \(\mathrm{SU}(2)\), where we construct the Heisenberg Hamiltonian using two different approaches. Finally, we show how the more intuitive approach can be used to obtain an elegant generalization to the \(\mathrm{SU}(N)\)-symmetric case.
Let us recall some basics of representation theory first. Consider a group \(G\) and a corresponding representation space \(V\), such that every element \(g \in G\) can be realized as a unitary operator \(U_g : V \to V\). Let \(h\) be a TensorMap whose domain and codomain are given by the tensor product of two of these representation spaces. Recall that, by definition, the statement that ‘\(h\) is symmetric under \(G\)’ means that
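\[
(U_g \otimes U_g) \, h \, (U_g \otimes U_g)^\dagger = h
\]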
for every \(g \in G\). If we label the irreducible representations of \(G\) by \(l\), then any representation space can be decomposed into a direct sum of irreducible representations, \(V = \bigoplus_l V^{(l)}\), in such a way that \(U_g\) is block-diagonal, where each block is labeled by a particular irrep \(l\). For each irrep space \(V^{(l)}\) we can define an orthonormal basis labeled as \(\ket{l, m}\), where the auxiliary label \(m\) can take \(\text{dim}\left( V^{(l)} \right)\) different values. Since we know that tensors are multilinear maps over tensor product spaces, it is natural to consider the tensor product of representation spaces in more detail.
From the representation theory of groups, it is known that the product of two irreps can in turn be decomposed into a direct sum of irreps, \(V^{(l_1)} \otimes V^{(l_2)} \cong \bigoplus_{k} V^{(k)}\). The precise nature of this decomposition, also referred to as the Clebsch-Gordan problem, is given by the so-called Clebsch-Gordan coefficients, which we will denote as \(C^{k}_{l_1,l_2}\). This set of coefficients can be interpreted as a \(\text{dim}\left( V^{(l_1)} \right) \times \text{dim}\left( V^{(l_2)} \right) \times \text{dim}\left( V^{(k)} \right)\) array that encodes how a basis state \(\ket{k,n} \in V^{(k)}\) corresponding to some term in the direct sum can be decomposed into a linear combination of basis vectors \(\ket{l_1,m_1} \otimes \ket{l_2,m_2}\) of the tensor product space:
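\[
\ket{k, n} = \sum_{m_1, m_2} \left( C^{k}_{l_1, l_2} \right)_{m_1 m_2 n} \ket{l_1, m_1} \otimes \ket{l_2, m_2}.
\]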
These recoupling coefficients turn out to be essential to the structure of symmetric tensors, which can be best understood in the context of the Wigner-Eckart theorem. This theorem implies that for any TensorMap \(h\) that is symmetric under \(G\), its matrix elements in the tensor product irrep basis are given by the product of Clebsch-Gordan coefficients, which characterize the coupling of the basis states in the domain and codomain, and a so-called reduced matrix element, which only depends on the irrep labels. Concretely, the matrix element \(\bra{l_1,m_1} \otimes \bra{l_2,m_2} h \ket{l_3,m_3} \otimes \ket{l_4,m_4}\) is given by
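\[
\sum_{k} \sum_{n} \left( C^{k}_{l_1, l_2} \right)^*_{m_1 m_2 n} \; h_{\text{red}}(l_1, l_2; k; l_3, l_4) \; \left( C^{k}_{l_3, l_4} \right)_{m_3 m_4 n}.
\]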
Here, the sum runs over all possible irreps \(k\) in the fusion product \(l_3 \otimes l_4\) and over all basis states \(\ket{k,n}\) of \(V^{(k)}\). The reduced matrix elements \(h_{\text{red}}\) are independent of the basis state labels and only depend on the irrep labels themselves. Each reduced matrix element should be interpreted as being labeled by an irrep fusion tree,
The fusion tree in turn implies the Clebsch-Gordan coefficients: \(C^{k}_{l_1,l_2}\) and the conjugate coefficients \({C^{\dagger}}_{k}^{l_1,l_2}\) encode, respectively, the splitting (decomposition) of the coupled basis state \(\ket{k,n}\) into the codomain basis states \(\ket{l_1,m_1} \otimes \ket{l_2,m_2}\), and the coupling of the domain basis states \(\ket{l_3,m_3} \otimes \ket{l_4,m_4}\) into the coupled basis state \(\ket{k,n}\).
The Wigner-Eckart theorem dictates that this structure in terms of Clebsch-Gordan coefficients is necessary to ensure that the corresponding tensor is symmetric. It is precisely this structure that is inherently encoded in the fusion tree part of a symmetric TensorMap. In particular, the array block value associated to each fusion tree in a symmetric tensor is precisely the reduced matrix element in the Clebsch-Gordan decomposition.
As a small demonstration of this fact, we can make a simple \(\mathrm{SU}(2)\)-symmetric tensor with trivial block values and verify that its implied symmetry structure exactly corresponds to the expected Clebsch-Gordan coefficients. In TensorKit.jl, an \(\mathrm{SU}(2)\)-graded vector space is represented as an SU2Space, where a given \(\mathrm{SU}(2)\) irrep can be represented as an SU2Irrep instance of integer or half-integer spin as encoded in its j field. If we construct a TensorMap whose symmetry structure corresponds to the coupling of two spin-\(\frac{1}{2}\) irreps to a spin-\(1\) irrep, we can then convert it to a plain array and compare it to the \(\mathrm{SU}(2)\) Clebsch-Gordan coefficients exported by the WignerSymbols.jl package.
V1 = SU2Space(1 => 1)
V2 = SU2Space(1//2 => 1)
t = TensorMap(ones, ComplexF64, V1 ← V2 ⊗ V2)
TensorMap(Rep[SU₂](1=>1) ← (Rep[SU₂](1/2=>1) ⊗ Rep[SU₂](1/2=>1))):
* Data for fusiontree FusionTree{Irrep[SU₂]}((1,), 1, (false,), ()) ← FusionTree{Irrep[SU₂]}((1/2, 1/2), 1, (false, false), ()):
[:, :, 1] =
 1.0 + 0.0im
ta = convert(Array, t)
3×2×2 Array{ComplexF64, 3}:
[:, :, 1] =
 1.0+0.0im  0.0+0.0im
 0.0+0.0im  0.707107+0.0im
 0.0+0.0im  0.0+0.0im

[:, :, 2] =
 0.0+0.0im       0.0+0.0im
 0.707107+0.0im  0.0+0.0im
 0.0+0.0im       1.0+0.0im
The conversion gives us a \(3 \times 2 \times 2\) array, which exactly corresponds to the size of the \(C^{1}_{\frac{1}{2},\frac{1}{2}}\) Clebsch-Gordan array. In order to explicitly compare whether the entries match, we need to know the ordering of basis states assumed by TensorKit.jl when converting the tensor to its matrix elements in the irrep basis. For \(\mathrm{SU}(2)\) the irrep basis is ordered in ascending magnetic quantum number \(m\), which gives us the map \(m = i - (l+1)\) from an array index \(i\) to the corresponding magnetic quantum number for the spin-\(l\) irrep.
for i1 in 1:dim(V1), i2 in 1:dim(V2), i3 in 1:dim(V2)
    # map basis state index to magnetic quantum number
    m1 = i1 - (1 + 1)
    m2 = i2 - (1//2 + 1)
    m3 = i3 - (1//2 + 1)
    @test ta[i1, i2, i3] ≈ clebschgordan(1//2, m2, 1//2, m3, 1, m1)
end
Based on this discussion, we can quantify the aforementioned ‘difficulties’ in the inverse operation of what we just demonstrated: converting a given operator to a symmetric TensorMap given only its matrix elements in the irrep basis. Indeed, it is now clear that this precisely requires isolating the reduced matrix elements introduced above. Given the matrix elements of the operator in the irrep basis, this can in general be done by solving the system of equations implied by the Clebsch-Gordan decomposition. A simpler way to achieve the same thing is to make use of the fact that the Clebsch-Gordan tensors form a complete orthonormal basis on the coupled space. Indeed, by projecting out the appropriate Clebsch-Gordan coefficients and using their orthogonality relations, we can construct a diagonal operator on each coupled irrep space \(V^{(k)}\). Each of these diagonal operators is proportional to the identity, where the proportionality factor is precisely the reduced matrix element associated to the corresponding irrep fusion tree.
This procedure works for any group symmetry; all we need are the matrix elements of the operator in the irrep basis and the Clebsch-Gordan coefficients. In the following we demonstrate this explicit procedure for the particular example of \(G = \mathrm{SU}(2)\). However, it should be noted that for general groups the Clebsch-Gordan coefficients may not be as easy to compute (in general, no closed formulas exist). In addition, the procedure for manually projecting out the reduced matrix elements requires being particularly careful about the correspondence between the basis states used to define the original matrix elements and those implied by the Clebsch-Gordan coefficients. Therefore, it is often easier to directly construct the symmetric tensor based on some representation theory, as we will see below.
Consider the spin-1 Heisenberg model with Hamiltonian
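\[
H = \sum_{\langle i, j \rangle} \vec{S}_i \cdot \vec{S}_j,
\]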
where \(\vec{S} = (S^x, S^y, S^z)\) are the spin operators. The physical Hilbert space at each site is the three-dimensional spin-1 irrep of \(\mathrm{SU}(2)\). Each two-site exchange operator \(\vec{S}_i \cdot \vec{S}_j\) in the sum commutes with a global transformation \(g \in \mathrm{SU}(2)\), so that it satisfies the above symmetry condition. Therefore, we can represent it as an \(\mathrm{SU}(2)\)-symmetric TensorMap, as long as we can isolate its reduced matrix elements.
In order to apply the above procedure, we first require the matrix elements in the irrep basis. These can be constructed as a \(3 \times 3 \times 3 \times 3\) array SS_arr using the familiar representation of the \(\mathrm{SU}(2)\) generators in the spin-1 representation, with respect to the \(\{\ket{1,-1}, \ket{1,0}, \ket{1,1}\}\) basis.
Sx = 1 / sqrt(2) * ComplexF64[0 1 0; 1 0 1; 0 1 0]
Sy = 1 / sqrt(2) * ComplexF64[0 1im 0; -1im 0 1im; 0 -1im 0]
Sz = ComplexF64[-1 0 0; 0 0 0; 0 0 1]

@tensor SS_arr[-1 -2; -3 -4] := Sx[-1; -3] * Sx[-2; -4] + Sy[-1; -3] * Sy[-2; -4] + Sz[-1; -3] * Sz[-2; -4]
3×3×3×3 Array{ComplexF64, 4}:
[:, :, 1, 1] =
 1.0+0.0im  0.0+0.0im  0.0+0.0im
 0.0+0.0im  0.0+0.0im  0.0+0.0im
 0.0+0.0im  0.0+0.0im  0.0+0.0im

[:, :, 2, 1] =
 0.0+0.0im  1.0+0.0im  0.0+0.0im
 0.0+0.0im  0.0+0.0im  0.0+0.0im
 0.0+0.0im  0.0+0.0im  0.0+0.0im

[:, :, 3, 1] =
 0.0+0.0im   0.0+0.0im  0.0+0.0im
 0.0+0.0im   1.0+0.0im  0.0+0.0im
 -1.0+0.0im  0.0+0.0im  0.0+0.0im

[:, :, 1, 2] =
 0.0+0.0im  0.0+0.0im  0.0+0.0im
 1.0+0.0im  0.0+0.0im  0.0+0.0im
 0.0+0.0im  0.0+0.0im  0.0+0.0im

[:, :, 2, 2] =
 0.0+0.0im  0.0+0.0im  1.0+0.0im
 0.0+0.0im  0.0+0.0im  0.0+0.0im
 1.0+0.0im  0.0+0.0im  0.0+0.0im

[:, :, 3, 2] =
 0.0+0.0im  0.0+0.0im  0.0+0.0im
 0.0+0.0im  0.0+0.0im  1.0+0.0im
 0.0+0.0im  0.0+0.0im  0.0+0.0im

[:, :, 1, 3] =
 0.0+0.0im  0.0+0.0im  -1.0+0.0im
 0.0+0.0im  1.0+0.0im  0.0+0.0im
 0.0+0.0im  0.0+0.0im  0.0+0.0im

[:, :, 2, 3] =
 0.0+0.0im  0.0+0.0im  0.0+0.0im
 0.0+0.0im  0.0+0.0im  0.0+0.0im
 0.0+0.0im  1.0+0.0im  0.0+0.0im

[:, :, 3, 3] =
 0.0+0.0im  0.0+0.0im  0.0+0.0im
 0.0+0.0im  0.0+0.0im  0.0+0.0im
 0.0+0.0im  0.0+0.0im  1.0+0.0im
The next step is to project out the reduced matrix elements by taking the overlap with the appropriate Clebsch-Gordan coefficients, according to the Clebsch-Gordan decomposition given above. In our current case of a spin-1 physical space we have \(l_1 = l_2 = l_3 = l_4 = 1\), and the coupled irrep \(k\) can therefore take the values \(0, 1, 2\). The reduced matrix element for a given \(k\) can then be implemented in the following way:
function get_reduced_element(k)
    # construct Clebsch-Gordan coefficients for coupling 1 ⊗ 1 to k
    CG = zeros(ComplexF64, 3, 3, 2 * k + 1)
    for m1 in -k:k, m2 in -1:1, m3 in -1:1
        CG[m2 + 2, m3 + 2, m1 + k + 1] = clebschgordan(1, m2, 1, m3, k, m1)
    end

    # project out diagonal matrix on coupled irrep space
    @tensor reduced_matrix[-1; -2] := CG[1 2; -1] * SS_arr[1 2; 3 4] * conj(CG[3 4; -2])

    # check that it is proportional to the identity
    @assert isapprox(reduced_matrix, reduced_matrix[1, 1] * I; atol=1e-12)

    # return the proportionality factor
    return reduced_matrix[1, 1]
end

get_reduced_element (generic function with 1 method)
If we use this to compute the reduced matrix elements for \(k = 0, 1, 2\),
get_reduced_element(0)

-1.9999999999999993 + 0.0im

get_reduced_element(1)

-1.0 + 0.0im

get_reduced_element(2)

1.0 + 0.0im
we can read off the entries
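\[
h_{\text{red}}(0) = -2, \qquad h_{\text{red}}(1) = -1, \qquad h_{\text{red}}(2) = 1.
\]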
These can then be used to construct the symmetric TensorMap representing the exchange interaction:
V = SU2Space(1 => 1)
SS = TensorMap(zeros, ComplexF64, V ⊗ V ← V ⊗ V)
for (s, f) in fusiontrees(SS)
    k = Int(f.coupled.j)
    SS[s, f] .= get_reduced_element(k)
end
SS
TensorMap((Rep[SU₂](1=>1) ⊗ Rep[SU₂](1=>1)) ← (Rep[SU₂](1=>1) ⊗ Rep[SU₂](1=>1))):
* Data for fusiontree FusionTree{Irrep[SU₂]}((1, 1), 0, (false, false), ()) ← FusionTree{Irrep[SU₂]}((1, 1), 0, (false, false), ()):
[:, :, 1, 1] =
 -1.9999999999999993 + 0.0im
* Data for fusiontree FusionTree{Irrep[SU₂]}((1, 1), 1, (false, false), ()) ← FusionTree{Irrep[SU₂]}((1, 1), 1, (false, false), ()):
[:, :, 1, 1] =
 -1.0 + 0.0im
* Data for fusiontree FusionTree{Irrep[SU₂]}((1, 1), 2, (false, false), ()) ← FusionTree{Irrep[SU₂]}((1, 1), 2, (false, false), ()):
[:, :, 1, 1] =
 1.0 + 0.0im
As noted above, the explicit procedure of projecting out the reduced matrix elements from the action of an operator in the irrep basis can be a bit cumbersome for more complicated groups. However, using some basic representation theory we can bypass this step altogether for the Heisenberg model. First, we rewrite the exchange interaction in the following way:
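\[
\vec{S}_i \cdot \vec{S}_j = \frac{1}{2} \left[ \left( \vec{S}_i + \vec{S}_j \right)^2 - \vec{S}_i^2 - \vec{S}_j^2 \right].
\]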
Here, \(\vec{S}_i\) and \(\vec{S}_j\) are spin operators on the physical irrep, while the total spin operator \(\vec{S}_i + \vec{S}_j\) can be decomposed onto the different coupled irreps \(k\). It is a well-known fact that the quadratic sum of the generators of \(\mathrm{SU}(2)\), often referred to as the quadratic Casimir, commutes with all generators. By Schur’s lemma, it must then act proportionally to the identity on every irrep, where the corresponding eigenvalue is determined by the spin irrep label. In particular, we have for each irrep \(l\)
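\[
\vec{S}^2 \ket{l, m} = l(l+1) \ket{l, m}.
\]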
It then follows from Eq. (18.6) that the reduced matrix elements of the exchange interaction are completely determined by the eigenvalue of the quadratic Casimir on the uncoupled and coupled irreps. Indeed, to each fusion tree we can associate a well-defined value
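\[
\frac{1}{2} \left[ k(k+1) - l_3(l_3+1) - l_4(l_4+1) \right].
\]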
This gives us all we need to directly construct the exchange interaction as a symmetric TensorMap,
V = SU2Space(1 => 1)
SS = TensorMap(zeros, ComplexF64, V ⊗ V ← V ⊗ V)
for (s, f) in fusiontrees(SS)
    l3 = f.uncoupled[1].j
    l4 = f.uncoupled[2].j
    k = f.coupled.j
    SS[s, f] .= (k * (k + 1) - l3 * (l3 + 1) - l4 * (l4 + 1)) / 2
end
SS
TensorMap((Rep[SU₂](1=>1) ⊗ Rep[SU₂](1=>1)) ← (Rep[SU₂](1=>1) ⊗ Rep[SU₂](1=>1))):
* Data for fusiontree FusionTree{Irrep[SU₂]}((1, 1), 0, (false, false), ()) ← FusionTree{Irrep[SU₂]}((1, 1), 0, (false, false), ()):
[:, :, 1, 1] =
 -2.0 + 0.0im
* Data for fusiontree FusionTree{Irrep[SU₂]}((1, 1), 1, (false, false), ()) ← FusionTree{Irrep[SU₂]}((1, 1), 1, (false, false), ()):
[:, :, 1, 1] =
 -1.0 + 0.0im
* Data for fusiontree FusionTree{Irrep[SU₂]}((1, 1), 2, (false, false), ()) ← FusionTree{Irrep[SU₂]}((1, 1), 2, (false, false), ()):
[:, :, 1, 1] =
 1.0 + 0.0im
which gives exactly the same result as the previous approach.
Note
This last construction of the exchange interaction immediately generalizes to any value of the physical spin. All we need to do is fill in the appropriate values for the uncoupled irreps \(l_1\), \(l_2\), \(l_3\) and \(l_4\).
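For instance, the spin-\(\frac{1}{2}\) exchange interaction follows from exactly the same loop (a sketch along the lines of the code above; the name SS_half is introduced here purely for illustration):

```julia
V = SU2Space(1//2 => 1)  # spin-1/2 physical space
SS_half = TensorMap(zeros, ComplexF64, V ⊗ V ← V ⊗ V)
for (s, f) in fusiontrees(SS_half)
    l3 = f.uncoupled[1].j
    l4 = f.uncoupled[2].j
    k = f.coupled.j
    # same Casimir-based reduced matrix elements as in the spin-1 case
    SS_half[s, f] .= (k * (k + 1) - l3 * (l3 + 1) - l4 * (l4 + 1)) / 2
end
```

This reproduces the familiar eigenvalues of \(\vec{S}_i \cdot \vec{S}_j\): \(-\frac{3}{4}\) on the singlet (\(k = 0\)) block and \(\frac{1}{4}\) on the triplet (\(k = 1\)) block.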
We end this subsection with some comments on the generalization of the above discussion to \(\mathrm{SU}(N)\). As foreshadowed above, the irreps of \(\mathrm{SU}(N)\) in general have an even more complicated structure. In particular, they can admit so-called fusion multiplicities, where the fusion of two irreps not only can have multiple distinct outcomes, but two irreps can even fuse to a given irrep in multiple inequivalent ways. We can demonstrate this behavior for the adjoint representation of \(\mathrm{SU}(3)\). For this we can use the SUNRepresentations.jl package, which provides an interface for working with irreps of \(\mathrm{SU}(N)\) and their Clebsch-Gordan coefficients. A particular representation is represented by an SUNIrrep{N}, which can be used with TensorKit.jl. The eight-dimensional adjoint representation of \(\mathrm{SU}(3)\) is given by
l = SU3Irrep("8")
Irrep[SU₃]("8")
If we look at the possible outcomes of fusing two adjoint irreps, we find the by now familiar non-Abelian fusion behavior,
collect(l ⊗ l)

5-element Vector{SU3Irrep}:
 "1"
 "27"
 "10"
 "8"
 "10⁺"
However, this particular fusion has multiplicities, since the adjoint irrep can actually fuse to itself in two distinct ways. The full decomposition of this fusion product is given by
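\[
8 \otimes 8 \cong 1 \oplus 8 \oplus 8 \oplus 10 \oplus \overline{10} \oplus 27.
\]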
This fusion multiplicity can be detected by using the Nsymbol method from TensorKit.jl to inspect the number of times l appears in the fusion product l ⊗ l,
Nsymbol(l, l, l)
2
When working with irreps with fusion multiplicities, each FusionTree carries additional vertices labels, which specify which of the distinct fusion vertices is being referred to. We will return to this at the end of this section.
Given the generators \(T^k\) of \(\mathrm{SU}(N)\), we can define a generalized Heisenberg model using a similar exchange interaction, giving the Hamiltonian
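\[
H = \sum_{\langle i, j \rangle} \sum_{k} T_i^k T_j^k.
\]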
For a particular choice of physical irrep, the exchange interaction can again be constructed as a symmetric TensorMap by first rewriting it as
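\[
\sum_k T_i^k T_j^k = \frac{1}{2} \sum_k \left[ \left( T_i^k + T_j^k \right)^2 - \left( T_i^k \right)^2 - \left( T_j^k \right)^2 \right].
\]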
For any \(N\), the quadratic Casimir \(\sum_k T^k T^k\) commutes with all \(\mathrm{SU}(N)\) generators, meaning it has a well-defined eigenvalue in each irrep. This observation then immediately gives the reduced matrix elements of the exchange interaction as
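\[
\frac{1}{2} \left[ c(k) - c(l_3) - c(l_4) \right],
\]
where \(c(l)\) denotes the Casimir eigenvalue in irrep \(l\).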
Using these to directly construct the corresponding symmetric TensorMap is much simpler than going through the explicit projection procedure using Clebsch-Gordan coefficients.
For the particular example of \(\mathrm{SU}(3)\), the generators are given by \(T^k = \frac{1}{2} \lambda^k\), where \(\lambda^k\) are the Gell-Mann matrices. Each irrep can be labeled as \(l = D(p,q)\), where \(p\) and \(q\) are referred to as the Dynkin labels. The eigenvalue of the quadratic Casimir for a given irrep is given by Freudenthal’s formula,
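\[
c\big( D(p, q) \big) = \frac{1}{3} \left( p^2 + q^2 + p q + 3p + 3q \right).
\]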
Using SUNRepresentations.jl, we can compute the Casimir eigenvalue as
function casimir(l::SU3Irrep)
    p, q = dynkin_label(l)
    return (p^2 + q^2 + 3 * p + 3 * q + p * q) / 3
end

casimir (generic function with 1 method)
If we use the adjoint representation of \(\mathrm{SU}(3)\) as the physical space, the Heisenberg exchange interaction can then be constructed as
V = Vect[SUNIrrep{3}](SU3Irrep("8") => 1)
TT = TensorMap(zeros, ComplexF64, V ⊗ V ← V ⊗ V)
for (s, f) in fusiontrees(TT)
    l3 = f.uncoupled[1]
    l4 = f.uncoupled[2]
    k = f.coupled
    TT[s, f] .= (casimir(k) - casimir(l3) - casimir(l4)) / 2
end
TT
TensorMap((Rep[SU₃]("8"=>1) ⊗ Rep[SU₃]("8"=>1)) ← (Rep[SU₃]("8"=>1) ⊗ Rep[SU₃]("8"=>1))):
* Data for fusiontree FusionTree{Irrep[SU₃]}(("8", "8"), "1", (false, false), (), (1,)) ← FusionTree{Irrep[SU₃]}(("8", "8"), "1", (false, false), (), (1,)):
[:, :, 1, 1] =
 -3.0 + 0.0im
* Data for fusiontree FusionTree{Irrep[SU₃]}(("8", "8"), "8", (false, false), (), (2,)) ← FusionTree{Irrep[SU₃]}(("8", "8"), "8", (false, false), (), (2,)):
[:, :, 1, 1] =
 -1.5 + 0.0im
* Data for fusiontree FusionTree{Irrep[SU₃]}(("8", "8"), "8", (false, false), (), (1,)) ← FusionTree{Irrep[SU₃]}(("8", "8"), "8", (false, false), (), (2,)):
[:, :, 1, 1] =
 -1.5 + 0.0im
* Data for fusiontree FusionTree{Irrep[SU₃]}(("8", "8"), "8", (false, false), (), (2,)) ← FusionTree{Irrep[SU₃]}(("8", "8"), "8", (false, false), (), (1,)):
[:, :, 1, 1] =
 -1.5 + 0.0im
* Data for fusiontree FusionTree{Irrep[SU₃]}(("8", "8"), "8", (false, false), (), (1,)) ← FusionTree{Irrep[SU₃]}(("8", "8"), "8", (false, false), (), (1,)):
[:, :, 1, 1] =
 -1.5 + 0.0im
* Data for fusiontree FusionTree{Irrep[SU₃]}(("8", "8"), "10⁺", (false, false), (), (1,)) ← FusionTree{Irrep[SU₃]}(("8", "8"), "10⁺", (false, false), (), (1,)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for fusiontree FusionTree{Irrep[SU₃]}(("8", "8"), "10", (false, false), (), (1,)) ← FusionTree{Irrep[SU₃]}(("8", "8"), "10", (false, false), (), (1,)):
[:, :, 1, 1] =
 0.0 + 0.0im
* Data for fusiontree FusionTree{Irrep[SU₃]}(("8", "8"), "27", (false, false), (), (1,)) ← FusionTree{Irrep[SU₃]}(("8", "8"), "27", (false, false), (), (1,)):
[:, :, 1, 1] =
 1.0 + 0.0im
Circling back to our earlier remark, we clearly see that the fusion trees of this tensor indeed have non-trivial vertex labels.
f = collect(fusiontrees(TT))[3][2]

FusionTree{Irrep[SU₃]}(("8", "8"), "8", (false, false), (), (2,))

f.vertices

(2,)
Note
While we have given an explicit example using \(\mathrm{SU}(3)\) with the adjoint irrep on the physical level, the same construction holds for general \(\mathrm{SU}(N)\) with arbitrary physical irreps. All we require is the expression for the eigenvalues of the quadratic Casimir in each irrep.
While we have focused exclusively on group-like symmetries in our discussion so far, the framework of symmetric tensors actually extends beyond groups to so-called categorical symmetries. These are quite exotic symmetries characterized in terms of the topological data of a unitary fusion category. While the precise details of all the terms in these statements fall beyond the scope of this tutorial, we can give a simple example of a Hamiltonian model with a categorical symmetry, called the golden chain.
This is a one-dimensional system defined as a spin chain, where each physical ‘spin’ corresponds to a so-called Fibonacci anyon. There are two anyon types, which we will denote as \(1\) and \(\tau\). They obey the fusion rules
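\[
1 \otimes 1 = 1, \qquad 1 \otimes \tau = \tau \otimes 1 = \tau, \qquad \tau \otimes \tau = 1 \oplus \tau.
\]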
The Hilbert space of a chain of Fibonacci anyons is not a regular tensor product space, but rather a constrained Hilbert space where the only allowed basis states are labeled by valid Fibonacci fusion configurations. In the golden chain model, we define a nearest-neighbor Hamiltonian on this Hilbert space by imposing an energy penalty when two neighboring anyons fuse to a \(\tau\) anyon.
Even just writing down an explicit expression for this interaction on such a constrained Hilbert space is not entirely straightforward. However, using the framework of symmetric tensors it can actually be constructed explicitly in a very straightforward way. Indeed, TensorKit.jl supports a dedicated FibonacciAnyon sector type which can be used to construct precisely such a constrained Fibonacci-graded vector space. A Hamiltonian which favors neighboring anyons fusing to the vacuum can be constructed as a TensorMap on the product space of two Fibonacci-graded physical spaces
V = Vect[FibonacciAnyon](:τ => 1)
Vect[FibonacciAnyon](:τ=>1)
and assigning the following nonzero block value to the two-site fusion trees
This allows us to define this, at first sight exotic and complicated, Hamiltonian in a few simple lines of code,
h = TensorMap(ones, V ⊗ V ← V ⊗ V)
for (s, f) in fusiontrees(h)
    h[s, f] .= f.coupled == FibonacciAnyon(:I) ? -1 : 0
end
h
TensorMap((Vect[FibonacciAnyon](:τ=>1) ⊗ Vect[FibonacciAnyon](:τ=>1)) ← (Vect[FibonacciAnyon](:τ=>1) ⊗ Vect[FibonacciAnyon](:τ=>1))):
* Data for fusiontree FusionTree{FibonacciAnyon}((:τ, :τ), :I, (false, false), ()) ← FusionTree{FibonacciAnyon}((:τ, :τ), :I, (false, false), ()):
[:, :, 1, 1] =
 -1.0 + 0.0im
* Data for fusiontree FusionTree{FibonacciAnyon}((:τ, :τ), :τ, (false, false), ()) ← FusionTree{FibonacciAnyon}((:τ, :τ), :τ, (false, false), ()):
[:, :, 1, 1] =
 0.0 + 0.0im
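Since each of the two fusion channels carries a single \(1 \times 1\) block, the two-site term is easy to read off: in the two-dimensional fusion basis (vacuum channel, then \(\tau\) channel) it acts as \(\mathrm{diag}(-1, 0)\). A small numpy sketch of this observation (our own illustration, independent of TensorKit):

```python
import numpy as np

# In the two-dimensional fusion basis (vacuum channel, tau channel) the
# nearest-neighbour term constructed above acts as diag(-1, 0): an
# energy of -1 for the vacuum channel and 0 for the tau channel.
h_fusion_basis = np.diag([-1.0, 0.0])

# Equivalently h = -P_1, minus the projector onto the vacuum channel.
P_vac = np.diag([1.0, 0.0])
assert np.allclose(h_fusion_basis, -P_vac)
```

Writing the term as minus a projector makes clear that it penalizes (relative to the vacuum channel) any pair of neighbors fusing to \(\tau\), exactly as described above.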
Note
In the previous section we stressed the role of Clebsch-Gordan coefficients in the structure of symmetric tensors, and how they can be used to map between the representation of an operator in the irrep basis and its symmetric tensor representation. For categorical symmetries such as the Fibonacci anyons, however, there are no Clebsch-Gordan coefficients. The ‘matrix elements of the operator in the irrep basis’ are therefore not well-defined, meaning that a Fibonacci-symmetric tensor cannot be converted to a plain array in a meaningful way.
Todo: Add a section on product symmetries and how to work with them; discuss the Hubbard model with \(f\mathbb{Z}_2 \boxtimes \mathrm{U}(1) \boxtimes \mathrm{SU}(2)\) as an example.
Todo: Add a section on classical \(O(N)\) models to illustrate the (‘Fourier’) transformation from the group element to the irrep basis for continuous symmetries.
References
Jacob C Bridgeman and Christopher T Chubb. Hand-waving and interpretive dance: an introductory course on tensor networks. Journal of Physics A: Mathematical and Theoretical, 50(22):223001, 2017. arXiv:1603.03039, doi:10.1088/1751-8121/aa6dc3.
Pasquale Calabrese and John Cardy. Evolution of entanglement entropy in one-dimensional systems. Journal of Statistical Mechanics: Theory and Experiment, 2005(04):P04010, 2005. arXiv:cond-mat/0503393, doi:10.1088/1742-5468/2005/04/p04010.
Maarten Van Damme, Jutho Haegeman, Ian McCulloch, and Laurens Vanderstraeten. Efficient higher-order matrix product operators for time evolution. 2023. arXiv:2302.14181.
Carl Eckart and Gale Young. The approximation of one matrix by another of lower rank. Psychometrika, 1(3):211–218, Sep 1936. doi:10.1007/BF02288367.
Adrian Feiguin, Simon Trebst, Andreas W. W. Ludwig, Matthias Troyer, Alexei Kitaev, Zhenghan Wang, and Michael H. Freedman. Interacting anyons in topological quantum liquids: the golden chain. Physical Review Letters, 98(16):160409, 2007. arXiv:cond-mat/0612341, doi:10.1103/PhysRevLett.98.160409.
Jutho Haegeman, J. Ignacio Cirac, Tobias J. Osborne, Iztok Pižorn, Henri Verschelde, and Frank Verstraete. Time-dependent variational principle for quantum lattices. Physical Review Letters, 107(7):070601, 2011. arXiv:1103.0936, doi:10.1103/PhysRevLett.107.070601.
Jutho Haegeman, Christian Lubich, Ivan Oseledets, Bart Vandereycken, and Frank Verstraete. Unifying time evolution and optimization with matrix product states. Physical Review B, 94:165116, 2016. arXiv:1408.5056, doi:10.1103/PhysRevB.94.165116.
Naomichi Hatano and Masuo Suzuki. Finding Exponential Product Formulas of Higher Orders. In Quantum Annealing and Other Optimization Methods, volume 679, pages 37–68. 2005. arXiv:math-ph/0506007, doi:10.1007/11526216_2.
C. Hubig, I. P. McCulloch, and U. Schollwöck. Generic construction of efficient matrix product operators. Physical Review B, 95:035129, Jan 2017. doi:10.1103/PhysRevB.95.035129.
H. C. Jiang, Z. Y. Weng, and T. Xiang. Accurate determination of tensor network state of quantum lattice models in two dimensions. Physical Review Letters, 101(9):090603, 2008. arXiv:0806.3719, doi:10.1103/PhysRevLett.101.090603.
J. Jordan, R. Orús, G. Vidal, F. Verstraete, and J. I. Cirac. Classical simulation of infinite-size quantum lattice systems in two spatial dimensions. Physical Review Letters, 101(25):250602, 2008. arXiv:cond-mat/0703788, doi:10.1103/PhysRevLett.101.250602.
L. D. Landau. On the theory of phase transitions. Journal of Experimental and Theoretical Physics, 7:19–32, 1937. doi:10.1016/B978-0-08-010586-4.50034-1.
Christian Lubich, Ivan V. Oseledets, and Bart Vandereycken. Time integration of tensor trains. SIAM Journal on Numerical Analysis, 53(2):917–941, 2015. arXiv:1407.2042, doi:10.1137/140976546.
Lars Onsager. Crystal statistics. I. A two-dimensional model with an order-disorder transition. Physical Review, 65:117–149, 1944. doi:10.1103/PhysRev.65.117.
Robert N. C. Pfeifer, Jutho Haegeman, and Frank Verstraete. Faster identification of optimal contraction sequences for tensor networks. Physical Review E, 90:033315, 2014. arXiv:1304.6112, doi:10.1103/PhysRevE.90.033315.
Marek M. Rams, Piotr Czarnik, and Lukasz Cincio. Precise extrapolation of the correlation function asymptotics in uniform tensor network states with application to the Bose-Hubbard and XXZ models. Physical Review X, 8(4):041033, 2018. arXiv:1801.08554, doi:10.1103/PhysRevX.8.041033.
Laurens Vanderstraeten, Jutho Haegeman, and Frank Verstraete. Tangent-space methods for uniform matrix product states. SciPost Physics Lecture Notes, 007, 2019. arXiv:1810.07006, doi:10.21468/SciPostPhysLectNotes.7.
F. Verstraete, J. J. García-Ripoll, and J. I. Cirac. Matrix product density operators: simulation of finite-temperature and dissipative systems. Physical Review Letters, 93:207204, 2004. arXiv:cond-mat/0406426, doi:10.1103/PhysRevLett.93.207204.
V. Zauner-Stauber, L. Vanderstraeten, M. T. Fishman, F. Verstraete, and J. Haegeman. Variational optimization algorithms for uniform matrix product states. Physical Review B, 97(4):045145, 2018. arXiv:1701.07035, doi:10.1103/PhysRevB.97.045145.
!important;background-color:var(--sd-color-primary) !important;border-color:var(--sd-color-primary) !important;border-width:1px !important;border-style:solid !important}.sd-btn-primary:hover,.sd-btn-primary:focus{color:var(--sd-color-primary-text) !important;background-color:var(--sd-color-primary-highlight) !important;border-color:var(--sd-color-primary-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-primary{color:var(--sd-color-primary) !important;border-color:var(--sd-color-primary) !important;border-width:1px !important;border-style:solid !important}.sd-btn-secondary,.sd-btn-outline-secondary:hover,.sd-btn-outline-secondary:focus{color:var(--sd-color-secondary-text) !important;background-color:var(--sd-color-secondary) !important;border-color:var(--sd-color-secondary) !important;border-width:1px !important;border-style:solid !important}.sd-btn-secondary:hover,.sd-btn-secondary:focus{color:var(--sd-color-secondary-text) !important;background-color:var(--sd-color-secondary-highlight) !important;border-color:var(--sd-color-secondary-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-secondary{color:var(--sd-color-secondary) !important;border-color:var(--sd-color-secondary) !important;border-width:1px !important;border-style:solid !important}.sd-btn-success,.sd-btn-outline-success:hover,.sd-btn-outline-success:focus{color:var(--sd-color-success-text) !important;background-color:var(--sd-color-success) !important;border-color:var(--sd-color-success) !important;border-width:1px !important;border-style:solid !important}.sd-btn-success:hover,.sd-btn-success:focus{color:var(--sd-color-success-text) !important;background-color:var(--sd-color-success-highlight) !important;border-color:var(--sd-color-success-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-success{color:var(--sd-color-success) 
!important;border-color:var(--sd-color-success) !important;border-width:1px !important;border-style:solid !important}.sd-btn-info,.sd-btn-outline-info:hover,.sd-btn-outline-info:focus{color:var(--sd-color-info-text) !important;background-color:var(--sd-color-info) !important;border-color:var(--sd-color-info) !important;border-width:1px !important;border-style:solid !important}.sd-btn-info:hover,.sd-btn-info:focus{color:var(--sd-color-info-text) !important;background-color:var(--sd-color-info-highlight) !important;border-color:var(--sd-color-info-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-info{color:var(--sd-color-info) !important;border-color:var(--sd-color-info) !important;border-width:1px !important;border-style:solid !important}.sd-btn-warning,.sd-btn-outline-warning:hover,.sd-btn-outline-warning:focus{color:var(--sd-color-warning-text) !important;background-color:var(--sd-color-warning) !important;border-color:var(--sd-color-warning) !important;border-width:1px !important;border-style:solid !important}.sd-btn-warning:hover,.sd-btn-warning:focus{color:var(--sd-color-warning-text) !important;background-color:var(--sd-color-warning-highlight) !important;border-color:var(--sd-color-warning-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-warning{color:var(--sd-color-warning) !important;border-color:var(--sd-color-warning) !important;border-width:1px !important;border-style:solid !important}.sd-btn-danger,.sd-btn-outline-danger:hover,.sd-btn-outline-danger:focus{color:var(--sd-color-danger-text) !important;background-color:var(--sd-color-danger) !important;border-color:var(--sd-color-danger) !important;border-width:1px !important;border-style:solid !important}.sd-btn-danger:hover,.sd-btn-danger:focus{color:var(--sd-color-danger-text) !important;background-color:var(--sd-color-danger-highlight) !important;border-color:var(--sd-color-danger-highlight) 
!important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-danger{color:var(--sd-color-danger) !important;border-color:var(--sd-color-danger) !important;border-width:1px !important;border-style:solid !important}.sd-btn-light,.sd-btn-outline-light:hover,.sd-btn-outline-light:focus{color:var(--sd-color-light-text) !important;background-color:var(--sd-color-light) !important;border-color:var(--sd-color-light) !important;border-width:1px !important;border-style:solid !important}.sd-btn-light:hover,.sd-btn-light:focus{color:var(--sd-color-light-text) !important;background-color:var(--sd-color-light-highlight) !important;border-color:var(--sd-color-light-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-light{color:var(--sd-color-light) !important;border-color:var(--sd-color-light) !important;border-width:1px !important;border-style:solid !important}.sd-btn-muted,.sd-btn-outline-muted:hover,.sd-btn-outline-muted:focus{color:var(--sd-color-muted-text) !important;background-color:var(--sd-color-muted) !important;border-color:var(--sd-color-muted) !important;border-width:1px !important;border-style:solid !important}.sd-btn-muted:hover,.sd-btn-muted:focus{color:var(--sd-color-muted-text) !important;background-color:var(--sd-color-muted-highlight) !important;border-color:var(--sd-color-muted-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-muted{color:var(--sd-color-muted) !important;border-color:var(--sd-color-muted) !important;border-width:1px !important;border-style:solid !important}.sd-btn-dark,.sd-btn-outline-dark:hover,.sd-btn-outline-dark:focus{color:var(--sd-color-dark-text) !important;background-color:var(--sd-color-dark) !important;border-color:var(--sd-color-dark) !important;border-width:1px !important;border-style:solid !important}.sd-btn-dark:hover,.sd-btn-dark:focus{color:var(--sd-color-dark-text) 
!important;background-color:var(--sd-color-dark-highlight) !important;border-color:var(--sd-color-dark-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-dark{color:var(--sd-color-dark) !important;border-color:var(--sd-color-dark) !important;border-width:1px !important;border-style:solid !important}.sd-btn-black,.sd-btn-outline-black:hover,.sd-btn-outline-black:focus{color:var(--sd-color-black-text) !important;background-color:var(--sd-color-black) !important;border-color:var(--sd-color-black) !important;border-width:1px !important;border-style:solid !important}.sd-btn-black:hover,.sd-btn-black:focus{color:var(--sd-color-black-text) !important;background-color:var(--sd-color-black-highlight) !important;border-color:var(--sd-color-black-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-black{color:var(--sd-color-black) !important;border-color:var(--sd-color-black) !important;border-width:1px !important;border-style:solid !important}.sd-btn-white,.sd-btn-outline-white:hover,.sd-btn-outline-white:focus{color:var(--sd-color-white-text) !important;background-color:var(--sd-color-white) !important;border-color:var(--sd-color-white) !important;border-width:1px !important;border-style:solid !important}.sd-btn-white:hover,.sd-btn-white:focus{color:var(--sd-color-white-text) !important;background-color:var(--sd-color-white-highlight) !important;border-color:var(--sd-color-white-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-white{color:var(--sd-color-white) !important;border-color:var(--sd-color-white) !important;border-width:1px !important;border-style:solid 
!important}.sd-stretched-link::after{position:absolute;top:0;right:0;bottom:0;left:0;z-index:1;content:""}.sd-hide-link-text{font-size:0}.sd-octicon,.sd-material-icon{display:inline-block;fill:currentColor;vertical-align:middle}.sd-avatar-xs{border-radius:50%;object-fit:cover;object-position:center;width:1rem;height:1rem}.sd-avatar-sm{border-radius:50%;object-fit:cover;object-position:center;width:3rem;height:3rem}.sd-avatar-md{border-radius:50%;object-fit:cover;object-position:center;width:5rem;height:5rem}.sd-avatar-lg{border-radius:50%;object-fit:cover;object-position:center;width:7rem;height:7rem}.sd-avatar-xl{border-radius:50%;object-fit:cover;object-position:center;width:10rem;height:10rem}.sd-avatar-inherit{border-radius:50%;object-fit:cover;object-position:center;width:inherit;height:inherit}.sd-avatar-initial{border-radius:50%;object-fit:cover;object-position:center;width:initial;height:initial}.sd-card{background-clip:border-box;background-color:var(--sd-color-card-background);border:1px solid var(--sd-color-card-border);border-radius:.25rem;color:var(--sd-color-card-text);display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;min-width:0;position:relative;word-wrap:break-word}.sd-card>hr{margin-left:0;margin-right:0}.sd-card-hover:hover{border-color:var(--sd-color-card-border-hover);transform:scale(1.01)}.sd-card-body{-ms-flex:1 1 auto;flex:1 1 auto;padding:1rem 1rem}.sd-card-title{margin-bottom:.5rem}.sd-card-subtitle{margin-top:-0.25rem;margin-bottom:0}.sd-card-text:last-child{margin-bottom:0}.sd-card-link:hover{text-decoration:none}.sd-card-link+.card-link{margin-left:1rem}.sd-card-header{padding:.5rem 1rem;margin-bottom:0;background-color:var(--sd-color-card-header);border-bottom:1px solid var(--sd-color-card-border)}.sd-card-header:first-child{border-radius:calc(0.25rem - 1px) calc(0.25rem - 1px) 0 0}.sd-card-footer{padding:.5rem 1rem;background-color:var(--sd-color-card-footer);border-top:1px solid 
var(--sd-color-card-border)}.sd-card-footer:last-child{border-radius:0 0 calc(0.25rem - 1px) calc(0.25rem - 1px)}.sd-card-header-tabs{margin-right:-0.5rem;margin-bottom:-0.5rem;margin-left:-0.5rem;border-bottom:0}.sd-card-header-pills{margin-right:-0.5rem;margin-left:-0.5rem}.sd-card-img-overlay{position:absolute;top:0;right:0;bottom:0;left:0;padding:1rem;border-radius:calc(0.25rem - 1px)}.sd-card-img,.sd-card-img-bottom,.sd-card-img-top{width:100%}.sd-card-img,.sd-card-img-top{border-top-left-radius:calc(0.25rem - 1px);border-top-right-radius:calc(0.25rem - 1px)}.sd-card-img,.sd-card-img-bottom{border-bottom-left-radius:calc(0.25rem - 1px);border-bottom-right-radius:calc(0.25rem - 1px)}.sd-cards-carousel{width:100%;display:flex;flex-wrap:nowrap;-ms-flex-direction:row;flex-direction:row;overflow-x:hidden;scroll-snap-type:x mandatory}.sd-cards-carousel.sd-show-scrollbar{overflow-x:auto}.sd-cards-carousel:hover,.sd-cards-carousel:focus{overflow-x:auto}.sd-cards-carousel>.sd-card{flex-shrink:0;scroll-snap-align:start}.sd-cards-carousel>.sd-card:not(:last-child){margin-right:3px}.sd-card-cols-1>.sd-card{width:90%}.sd-card-cols-2>.sd-card{width:45%}.sd-card-cols-3>.sd-card{width:30%}.sd-card-cols-4>.sd-card{width:22.5%}.sd-card-cols-5>.sd-card{width:18%}.sd-card-cols-6>.sd-card{width:15%}.sd-card-cols-7>.sd-card{width:12.8571428571%}.sd-card-cols-8>.sd-card{width:11.25%}.sd-card-cols-9>.sd-card{width:10%}.sd-card-cols-10>.sd-card{width:9%}.sd-card-cols-11>.sd-card{width:8.1818181818%}.sd-card-cols-12>.sd-card{width:7.5%}.sd-container,.sd-container-fluid,.sd-container-lg,.sd-container-md,.sd-container-sm,.sd-container-xl{margin-left:auto;margin-right:auto;padding-left:var(--sd-gutter-x, 0.75rem);padding-right:var(--sd-gutter-x, 0.75rem);width:100%}@media(min-width: 576px){.sd-container-sm,.sd-container{max-width:540px}}@media(min-width: 768px){.sd-container-md,.sd-container-sm,.sd-container{max-width:720px}}@media(min-width: 
992px){.sd-container-lg,.sd-container-md,.sd-container-sm,.sd-container{max-width:960px}}@media(min-width: 1200px){.sd-container-xl,.sd-container-lg,.sd-container-md,.sd-container-sm,.sd-container{max-width:1140px}}.sd-row{--sd-gutter-x: 1.5rem;--sd-gutter-y: 0;display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;margin-top:calc(var(--sd-gutter-y) * -1);margin-right:calc(var(--sd-gutter-x) * -0.5);margin-left:calc(var(--sd-gutter-x) * -0.5)}.sd-row>*{box-sizing:border-box;flex-shrink:0;width:100%;max-width:100%;padding-right:calc(var(--sd-gutter-x) * 0.5);padding-left:calc(var(--sd-gutter-x) * 0.5);margin-top:var(--sd-gutter-y)}.sd-col{flex:1 0 0%;-ms-flex:1 0 0%}.sd-row-cols-auto>*{flex:0 0 auto;width:auto}.sd-row-cols-1>*{flex:0 0 auto;-ms-flex:0 0 auto;width:100%}.sd-row-cols-2>*{flex:0 0 auto;-ms-flex:0 0 auto;width:50%}.sd-row-cols-3>*{flex:0 0 auto;-ms-flex:0 0 auto;width:33.3333333333%}.sd-row-cols-4>*{flex:0 0 auto;-ms-flex:0 0 auto;width:25%}.sd-row-cols-5>*{flex:0 0 auto;-ms-flex:0 0 auto;width:20%}.sd-row-cols-6>*{flex:0 0 auto;-ms-flex:0 0 auto;width:16.6666666667%}.sd-row-cols-7>*{flex:0 0 auto;-ms-flex:0 0 auto;width:14.2857142857%}.sd-row-cols-8>*{flex:0 0 auto;-ms-flex:0 0 auto;width:12.5%}.sd-row-cols-9>*{flex:0 0 auto;-ms-flex:0 0 auto;width:11.1111111111%}.sd-row-cols-10>*{flex:0 0 auto;-ms-flex:0 0 auto;width:10%}.sd-row-cols-11>*{flex:0 0 auto;-ms-flex:0 0 auto;width:9.0909090909%}.sd-row-cols-12>*{flex:0 0 auto;-ms-flex:0 0 auto;width:8.3333333333%}@media(min-width: 576px){.sd-col-sm{flex:1 0 0%;-ms-flex:1 0 0%}.sd-row-cols-sm-auto{flex:1 0 auto;-ms-flex:1 0 auto;width:100%}.sd-row-cols-sm-1>*{flex:0 0 auto;-ms-flex:0 0 auto;width:100%}.sd-row-cols-sm-2>*{flex:0 0 auto;-ms-flex:0 0 auto;width:50%}.sd-row-cols-sm-3>*{flex:0 0 auto;-ms-flex:0 0 auto;width:33.3333333333%}.sd-row-cols-sm-4>*{flex:0 0 auto;-ms-flex:0 0 auto;width:25%}.sd-row-cols-sm-5>*{flex:0 0 auto;-ms-flex:0 0 auto;width:20%}.sd-row-cols-sm-6>*{flex:0 0 
auto;-ms-flex:0 0 auto;width:16.6666666667%}.sd-row-cols-sm-7>*{flex:0 0 auto;-ms-flex:0 0 auto;width:14.2857142857%}.sd-row-cols-sm-8>*{flex:0 0 auto;-ms-flex:0 0 auto;width:12.5%}.sd-row-cols-sm-9>*{flex:0 0 auto;-ms-flex:0 0 auto;width:11.1111111111%}.sd-row-cols-sm-10>*{flex:0 0 auto;-ms-flex:0 0 auto;width:10%}.sd-row-cols-sm-11>*{flex:0 0 auto;-ms-flex:0 0 auto;width:9.0909090909%}.sd-row-cols-sm-12>*{flex:0 0 auto;-ms-flex:0 0 auto;width:8.3333333333%}}@media(min-width: 768px){.sd-col-md{flex:1 0 0%;-ms-flex:1 0 0%}.sd-row-cols-md-auto{flex:1 0 auto;-ms-flex:1 0 auto;width:100%}.sd-row-cols-md-1>*{flex:0 0 auto;-ms-flex:0 0 auto;width:100%}.sd-row-cols-md-2>*{flex:0 0 auto;-ms-flex:0 0 auto;width:50%}.sd-row-cols-md-3>*{flex:0 0 auto;-ms-flex:0 0 auto;width:33.3333333333%}.sd-row-cols-md-4>*{flex:0 0 auto;-ms-flex:0 0 auto;width:25%}.sd-row-cols-md-5>*{flex:0 0 auto;-ms-flex:0 0 auto;width:20%}.sd-row-cols-md-6>*{flex:0 0 auto;-ms-flex:0 0 auto;width:16.6666666667%}.sd-row-cols-md-7>*{flex:0 0 auto;-ms-flex:0 0 auto;width:14.2857142857%}.sd-row-cols-md-8>*{flex:0 0 auto;-ms-flex:0 0 auto;width:12.5%}.sd-row-cols-md-9>*{flex:0 0 auto;-ms-flex:0 0 auto;width:11.1111111111%}.sd-row-cols-md-10>*{flex:0 0 auto;-ms-flex:0 0 auto;width:10%}.sd-row-cols-md-11>*{flex:0 0 auto;-ms-flex:0 0 auto;width:9.0909090909%}.sd-row-cols-md-12>*{flex:0 0 auto;-ms-flex:0 0 auto;width:8.3333333333%}}@media(min-width: 992px){.sd-col-lg{flex:1 0 0%;-ms-flex:1 0 0%}.sd-row-cols-lg-auto{flex:1 0 auto;-ms-flex:1 0 auto;width:100%}.sd-row-cols-lg-1>*{flex:0 0 auto;-ms-flex:0 0 auto;width:100%}.sd-row-cols-lg-2>*{flex:0 0 auto;-ms-flex:0 0 auto;width:50%}.sd-row-cols-lg-3>*{flex:0 0 auto;-ms-flex:0 0 auto;width:33.3333333333%}.sd-row-cols-lg-4>*{flex:0 0 auto;-ms-flex:0 0 auto;width:25%}.sd-row-cols-lg-5>*{flex:0 0 auto;-ms-flex:0 0 auto;width:20%}.sd-row-cols-lg-6>*{flex:0 0 auto;-ms-flex:0 0 auto;width:16.6666666667%}.sd-row-cols-lg-7>*{flex:0 0 auto;-ms-flex:0 0 
auto;width:14.2857142857%}.sd-row-cols-lg-8>*{flex:0 0 auto;-ms-flex:0 0 auto;width:12.5%}.sd-row-cols-lg-9>*{flex:0 0 auto;-ms-flex:0 0 auto;width:11.1111111111%}.sd-row-cols-lg-10>*{flex:0 0 auto;-ms-flex:0 0 auto;width:10%}.sd-row-cols-lg-11>*{flex:0 0 auto;-ms-flex:0 0 auto;width:9.0909090909%}.sd-row-cols-lg-12>*{flex:0 0 auto;-ms-flex:0 0 auto;width:8.3333333333%}}@media(min-width: 1200px){.sd-col-xl{flex:1 0 0%;-ms-flex:1 0 0%}.sd-row-cols-xl-auto{flex:1 0 auto;-ms-flex:1 0 auto;width:100%}.sd-row-cols-xl-1>*{flex:0 0 auto;-ms-flex:0 0 auto;width:100%}.sd-row-cols-xl-2>*{flex:0 0 auto;-ms-flex:0 0 auto;width:50%}.sd-row-cols-xl-3>*{flex:0 0 auto;-ms-flex:0 0 auto;width:33.3333333333%}.sd-row-cols-xl-4>*{flex:0 0 auto;-ms-flex:0 0 auto;width:25%}.sd-row-cols-xl-5>*{flex:0 0 auto;-ms-flex:0 0 auto;width:20%}.sd-row-cols-xl-6>*{flex:0 0 auto;-ms-flex:0 0 auto;width:16.6666666667%}.sd-row-cols-xl-7>*{flex:0 0 auto;-ms-flex:0 0 auto;width:14.2857142857%}.sd-row-cols-xl-8>*{flex:0 0 auto;-ms-flex:0 0 auto;width:12.5%}.sd-row-cols-xl-9>*{flex:0 0 auto;-ms-flex:0 0 auto;width:11.1111111111%}.sd-row-cols-xl-10>*{flex:0 0 auto;-ms-flex:0 0 auto;width:10%}.sd-row-cols-xl-11>*{flex:0 0 auto;-ms-flex:0 0 auto;width:9.0909090909%}.sd-row-cols-xl-12>*{flex:0 0 auto;-ms-flex:0 0 auto;width:8.3333333333%}}.sd-col-auto{flex:0 0 auto;-ms-flex:0 0 auto;width:auto}.sd-col-1{flex:0 0 auto;-ms-flex:0 0 auto;width:8.3333333333%}.sd-col-2{flex:0 0 auto;-ms-flex:0 0 auto;width:16.6666666667%}.sd-col-3{flex:0 0 auto;-ms-flex:0 0 auto;width:25%}.sd-col-4{flex:0 0 auto;-ms-flex:0 0 auto;width:33.3333333333%}.sd-col-5{flex:0 0 auto;-ms-flex:0 0 auto;width:41.6666666667%}.sd-col-6{flex:0 0 auto;-ms-flex:0 0 auto;width:50%}.sd-col-7{flex:0 0 auto;-ms-flex:0 0 auto;width:58.3333333333%}.sd-col-8{flex:0 0 auto;-ms-flex:0 0 auto;width:66.6666666667%}.sd-col-9{flex:0 0 auto;-ms-flex:0 0 auto;width:75%}.sd-col-10{flex:0 0 auto;-ms-flex:0 0 auto;width:83.3333333333%}.sd-col-11{flex:0 0 
auto;-ms-flex:0 0 auto;width:91.6666666667%}.sd-col-12{flex:0 0 auto;-ms-flex:0 0 auto;width:100%}.sd-g-0,.sd-gy-0{--sd-gutter-y: 0}.sd-g-0,.sd-gx-0{--sd-gutter-x: 0}.sd-g-1,.sd-gy-1{--sd-gutter-y: 0.25rem}.sd-g-1,.sd-gx-1{--sd-gutter-x: 0.25rem}.sd-g-2,.sd-gy-2{--sd-gutter-y: 0.5rem}.sd-g-2,.sd-gx-2{--sd-gutter-x: 0.5rem}.sd-g-3,.sd-gy-3{--sd-gutter-y: 1rem}.sd-g-3,.sd-gx-3{--sd-gutter-x: 1rem}.sd-g-4,.sd-gy-4{--sd-gutter-y: 1.5rem}.sd-g-4,.sd-gx-4{--sd-gutter-x: 1.5rem}.sd-g-5,.sd-gy-5{--sd-gutter-y: 3rem}.sd-g-5,.sd-gx-5{--sd-gutter-x: 3rem}@media(min-width: 576px){.sd-col-sm-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto}.sd-col-sm-1{-ms-flex:0 0 auto;flex:0 0 auto;width:8.3333333333%}.sd-col-sm-2{-ms-flex:0 0 auto;flex:0 0 auto;width:16.6666666667%}.sd-col-sm-3{-ms-flex:0 0 auto;flex:0 0 auto;width:25%}.sd-col-sm-4{-ms-flex:0 0 auto;flex:0 0 auto;width:33.3333333333%}.sd-col-sm-5{-ms-flex:0 0 auto;flex:0 0 auto;width:41.6666666667%}.sd-col-sm-6{-ms-flex:0 0 auto;flex:0 0 auto;width:50%}.sd-col-sm-7{-ms-flex:0 0 auto;flex:0 0 auto;width:58.3333333333%}.sd-col-sm-8{-ms-flex:0 0 auto;flex:0 0 auto;width:66.6666666667%}.sd-col-sm-9{-ms-flex:0 0 auto;flex:0 0 auto;width:75%}.sd-col-sm-10{-ms-flex:0 0 auto;flex:0 0 auto;width:83.3333333333%}.sd-col-sm-11{-ms-flex:0 0 auto;flex:0 0 auto;width:91.6666666667%}.sd-col-sm-12{-ms-flex:0 0 auto;flex:0 0 auto;width:100%}.sd-g-sm-0,.sd-gy-sm-0{--sd-gutter-y: 0}.sd-g-sm-0,.sd-gx-sm-0{--sd-gutter-x: 0}.sd-g-sm-1,.sd-gy-sm-1{--sd-gutter-y: 0.25rem}.sd-g-sm-1,.sd-gx-sm-1{--sd-gutter-x: 0.25rem}.sd-g-sm-2,.sd-gy-sm-2{--sd-gutter-y: 0.5rem}.sd-g-sm-2,.sd-gx-sm-2{--sd-gutter-x: 0.5rem}.sd-g-sm-3,.sd-gy-sm-3{--sd-gutter-y: 1rem}.sd-g-sm-3,.sd-gx-sm-3{--sd-gutter-x: 1rem}.sd-g-sm-4,.sd-gy-sm-4{--sd-gutter-y: 1.5rem}.sd-g-sm-4,.sd-gx-sm-4{--sd-gutter-x: 1.5rem}.sd-g-sm-5,.sd-gy-sm-5{--sd-gutter-y: 3rem}.sd-g-sm-5,.sd-gx-sm-5{--sd-gutter-x: 3rem}}@media(min-width: 768px){.sd-col-md-auto{-ms-flex:0 0 auto;flex:0 0 
auto;width:auto}.sd-col-md-1{-ms-flex:0 0 auto;flex:0 0 auto;width:8.3333333333%}.sd-col-md-2{-ms-flex:0 0 auto;flex:0 0 auto;width:16.6666666667%}.sd-col-md-3{-ms-flex:0 0 auto;flex:0 0 auto;width:25%}.sd-col-md-4{-ms-flex:0 0 auto;flex:0 0 auto;width:33.3333333333%}.sd-col-md-5{-ms-flex:0 0 auto;flex:0 0 auto;width:41.6666666667%}.sd-col-md-6{-ms-flex:0 0 auto;flex:0 0 auto;width:50%}.sd-col-md-7{-ms-flex:0 0 auto;flex:0 0 auto;width:58.3333333333%}.sd-col-md-8{-ms-flex:0 0 auto;flex:0 0 auto;width:66.6666666667%}.sd-col-md-9{-ms-flex:0 0 auto;flex:0 0 auto;width:75%}.sd-col-md-10{-ms-flex:0 0 auto;flex:0 0 auto;width:83.3333333333%}.sd-col-md-11{-ms-flex:0 0 auto;flex:0 0 auto;width:91.6666666667%}.sd-col-md-12{-ms-flex:0 0 auto;flex:0 0 auto;width:100%}.sd-g-md-0,.sd-gy-md-0{--sd-gutter-y: 0}.sd-g-md-0,.sd-gx-md-0{--sd-gutter-x: 0}.sd-g-md-1,.sd-gy-md-1{--sd-gutter-y: 0.25rem}.sd-g-md-1,.sd-gx-md-1{--sd-gutter-x: 0.25rem}.sd-g-md-2,.sd-gy-md-2{--sd-gutter-y: 0.5rem}.sd-g-md-2,.sd-gx-md-2{--sd-gutter-x: 0.5rem}.sd-g-md-3,.sd-gy-md-3{--sd-gutter-y: 1rem}.sd-g-md-3,.sd-gx-md-3{--sd-gutter-x: 1rem}.sd-g-md-4,.sd-gy-md-4{--sd-gutter-y: 1.5rem}.sd-g-md-4,.sd-gx-md-4{--sd-gutter-x: 1.5rem}.sd-g-md-5,.sd-gy-md-5{--sd-gutter-y: 3rem}.sd-g-md-5,.sd-gx-md-5{--sd-gutter-x: 3rem}}@media(min-width: 992px){.sd-col-lg-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto}.sd-col-lg-1{-ms-flex:0 0 auto;flex:0 0 auto;width:8.3333333333%}.sd-col-lg-2{-ms-flex:0 0 auto;flex:0 0 auto;width:16.6666666667%}.sd-col-lg-3{-ms-flex:0 0 auto;flex:0 0 auto;width:25%}.sd-col-lg-4{-ms-flex:0 0 auto;flex:0 0 auto;width:33.3333333333%}.sd-col-lg-5{-ms-flex:0 0 auto;flex:0 0 auto;width:41.6666666667%}.sd-col-lg-6{-ms-flex:0 0 auto;flex:0 0 auto;width:50%}.sd-col-lg-7{-ms-flex:0 0 auto;flex:0 0 auto;width:58.3333333333%}.sd-col-lg-8{-ms-flex:0 0 auto;flex:0 0 auto;width:66.6666666667%}.sd-col-lg-9{-ms-flex:0 0 auto;flex:0 0 auto;width:75%}.sd-col-lg-10{-ms-flex:0 0 auto;flex:0 0 
auto;width:83.3333333333%}.sd-col-lg-11{-ms-flex:0 0 auto;flex:0 0 auto;width:91.6666666667%}.sd-col-lg-12{-ms-flex:0 0 auto;flex:0 0 auto;width:100%}.sd-g-lg-0,.sd-gy-lg-0{--sd-gutter-y: 0}.sd-g-lg-0,.sd-gx-lg-0{--sd-gutter-x: 0}.sd-g-lg-1,.sd-gy-lg-1{--sd-gutter-y: 0.25rem}.sd-g-lg-1,.sd-gx-lg-1{--sd-gutter-x: 0.25rem}.sd-g-lg-2,.sd-gy-lg-2{--sd-gutter-y: 0.5rem}.sd-g-lg-2,.sd-gx-lg-2{--sd-gutter-x: 0.5rem}.sd-g-lg-3,.sd-gy-lg-3{--sd-gutter-y: 1rem}.sd-g-lg-3,.sd-gx-lg-3{--sd-gutter-x: 1rem}.sd-g-lg-4,.sd-gy-lg-4{--sd-gutter-y: 1.5rem}.sd-g-lg-4,.sd-gx-lg-4{--sd-gutter-x: 1.5rem}.sd-g-lg-5,.sd-gy-lg-5{--sd-gutter-y: 3rem}.sd-g-lg-5,.sd-gx-lg-5{--sd-gutter-x: 3rem}}@media(min-width: 1200px){.sd-col-xl-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto}.sd-col-xl-1{-ms-flex:0 0 auto;flex:0 0 auto;width:8.3333333333%}.sd-col-xl-2{-ms-flex:0 0 auto;flex:0 0 auto;width:16.6666666667%}.sd-col-xl-3{-ms-flex:0 0 auto;flex:0 0 auto;width:25%}.sd-col-xl-4{-ms-flex:0 0 auto;flex:0 0 auto;width:33.3333333333%}.sd-col-xl-5{-ms-flex:0 0 auto;flex:0 0 auto;width:41.6666666667%}.sd-col-xl-6{-ms-flex:0 0 auto;flex:0 0 auto;width:50%}.sd-col-xl-7{-ms-flex:0 0 auto;flex:0 0 auto;width:58.3333333333%}.sd-col-xl-8{-ms-flex:0 0 auto;flex:0 0 auto;width:66.6666666667%}.sd-col-xl-9{-ms-flex:0 0 auto;flex:0 0 auto;width:75%}.sd-col-xl-10{-ms-flex:0 0 auto;flex:0 0 auto;width:83.3333333333%}.sd-col-xl-11{-ms-flex:0 0 auto;flex:0 0 auto;width:91.6666666667%}.sd-col-xl-12{-ms-flex:0 0 auto;flex:0 0 auto;width:100%}.sd-g-xl-0,.sd-gy-xl-0{--sd-gutter-y: 0}.sd-g-xl-0,.sd-gx-xl-0{--sd-gutter-x: 0}.sd-g-xl-1,.sd-gy-xl-1{--sd-gutter-y: 0.25rem}.sd-g-xl-1,.sd-gx-xl-1{--sd-gutter-x: 0.25rem}.sd-g-xl-2,.sd-gy-xl-2{--sd-gutter-y: 0.5rem}.sd-g-xl-2,.sd-gx-xl-2{--sd-gutter-x: 0.5rem}.sd-g-xl-3,.sd-gy-xl-3{--sd-gutter-y: 1rem}.sd-g-xl-3,.sd-gx-xl-3{--sd-gutter-x: 1rem}.sd-g-xl-4,.sd-gy-xl-4{--sd-gutter-y: 1.5rem}.sd-g-xl-4,.sd-gx-xl-4{--sd-gutter-x: 1.5rem}.sd-g-xl-5,.sd-gy-xl-5{--sd-gutter-y: 
3rem}.sd-g-xl-5,.sd-gx-xl-5{--sd-gutter-x: 3rem}}.sd-flex-row-reverse{flex-direction:row-reverse !important}details.sd-dropdown{position:relative}details.sd-dropdown .sd-summary-title{font-weight:700;padding-right:3em !important;-moz-user-select:none;-ms-user-select:none;-webkit-user-select:none;user-select:none}details.sd-dropdown:hover{cursor:pointer}details.sd-dropdown .sd-summary-content{cursor:default}details.sd-dropdown summary{list-style:none;padding:1em}details.sd-dropdown summary .sd-octicon.no-title{vertical-align:middle}details.sd-dropdown[open] summary .sd-octicon.no-title{visibility:hidden}details.sd-dropdown summary::-webkit-details-marker{display:none}details.sd-dropdown summary:focus{outline:none}details.sd-dropdown .sd-summary-icon{margin-right:.5em}details.sd-dropdown .sd-summary-icon svg{opacity:.8}details.sd-dropdown summary:hover .sd-summary-up svg,details.sd-dropdown summary:hover .sd-summary-down svg{opacity:1;transform:scale(1.1)}details.sd-dropdown .sd-summary-up svg,details.sd-dropdown .sd-summary-down svg{display:block;opacity:.6}details.sd-dropdown .sd-summary-up,details.sd-dropdown .sd-summary-down{pointer-events:none;position:absolute;right:1em;top:1em}details.sd-dropdown[open]>.sd-summary-title .sd-summary-down{visibility:hidden}details.sd-dropdown:not([open])>.sd-summary-title .sd-summary-up{visibility:hidden}details.sd-dropdown:not([open]).sd-card{border:none}details.sd-dropdown:not([open])>.sd-card-header{border:1px solid var(--sd-color-card-border);border-radius:.25rem}details.sd-dropdown.sd-fade-in[open] summary~*{-moz-animation:sd-fade-in .5s ease-in-out;-webkit-animation:sd-fade-in .5s ease-in-out;animation:sd-fade-in .5s ease-in-out}details.sd-dropdown.sd-fade-in-slide-down[open] summary~*{-moz-animation:sd-fade-in .5s ease-in-out,sd-slide-down .5s ease-in-out;-webkit-animation:sd-fade-in .5s ease-in-out,sd-slide-down .5s ease-in-out;animation:sd-fade-in .5s ease-in-out,sd-slide-down .5s 
ease-in-out}.sd-col>.sd-dropdown{width:100%}.sd-summary-content>.sd-tab-set:first-child{margin-top:0}@keyframes sd-fade-in{0%{opacity:0}100%{opacity:1}}@keyframes sd-slide-down{0%{transform:translate(0, -10px)}100%{transform:translate(0, 0)}}.sd-tab-set{border-radius:.125rem;display:flex;flex-wrap:wrap;margin:1em 0;position:relative}.sd-tab-set>input{opacity:0;position:absolute}.sd-tab-set>input:checked+label{border-color:var(--sd-color-tabs-underline-active);color:var(--sd-color-tabs-label-active)}.sd-tab-set>input:checked+label+.sd-tab-content{display:block}.sd-tab-set>input:not(:checked)+label:hover{color:var(--sd-color-tabs-label-hover);border-color:var(--sd-color-tabs-underline-hover)}.sd-tab-set>input:focus+label{outline-style:auto}.sd-tab-set>input:not(.focus-visible)+label{outline:none;-webkit-tap-highlight-color:transparent}.sd-tab-set>label{border-bottom:.125rem solid transparent;margin-bottom:0;color:var(--sd-color-tabs-label-inactive);border-color:var(--sd-color-tabs-underline-inactive);cursor:pointer;font-size:var(--sd-fontsize-tabs-label);font-weight:700;padding:1em 1.25em .5em;transition:color 250ms;width:auto;z-index:1}html .sd-tab-set>label:hover{color:var(--sd-color-tabs-label-active)}.sd-col>.sd-tab-set{width:100%}.sd-tab-content{box-shadow:0 -0.0625rem var(--sd-color-tabs-overline),0 .0625rem var(--sd-color-tabs-underline);display:none;order:99;padding-bottom:.75rem;padding-top:.75rem;width:100%}.sd-tab-content>:first-child{margin-top:0 !important}.sd-tab-content>:last-child{margin-bottom:0 !important}.sd-tab-content>.sd-tab-set{margin:0}.sd-sphinx-override,.sd-sphinx-override *{-moz-box-sizing:border-box;-webkit-box-sizing:border-box;box-sizing:border-box}.sd-sphinx-override p{margin-top:0}:root{--sd-color-primary: #007bff;--sd-color-secondary: #6c757d;--sd-color-success: #28a745;--sd-color-info: #17a2b8;--sd-color-warning: #f0b37e;--sd-color-danger: #dc3545;--sd-color-light: #f8f9fa;--sd-color-muted: #6c757d;--sd-color-dark: 