Action of generic linear transformation `f` not in accord with `f.matrix()` #465
Comments
Thanks for the report. It would be helpful in future if you could upload notebooks to gist.github.com so that I can view them without downloading, unzipping, and running the notebook. I've done it for you again at https://gist.github.com/eric-wieser/784c026fa0292d4d33e0de969961f19f.
I think the underlying confusion is what matrices and their multiplication mean in a non-orthogonal metric. A similar issue comes from trying to associate 1-vectors with column matrices. Clearly, (3) and (4) are not equivalent, so the question is: did we define the dot product wrong on our matrix representation, or did we choose the wrong matrix representation in the first place? Which of (i), (ii), and (iii) do we want to declare incorrect definitions?
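For concreteness, here is a minimal SymPy sketch (toy numbers of my own, not taken from the notebook) of the confusion above: once the basis is non-orthonormal, the naive column-matrix product a^T b no longer agrees with the metric-aware product a^T g b.

```python
import sympy as sp

# Hypothetical 2-D example: a non-orthonormal basis whose covariant
# metric g_{ij} = e_i . e_j is not the identity matrix.
g = sp.Matrix([[1, sp.Rational(1, 2)],
               [sp.Rational(1, 2), 1]])

a = sp.Matrix([1, 2])   # coordinates a^i of vector A
b = sp.Matrix([3, 1])   # coordinates b^j of vector B

naive = (a.T * b)[0]        # column-matrix "dot product" a^T b
metric = (a.T * g * b)[0]   # metric-aware product a^T g b

print(naive, metric)        # 5 vs 17/2 -- they disagree
```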
In answer to eric-wieser: The matrix product to the immediate right of equivalence (ii) incorrectly represents A \cdot B. The value of A \cdot B should not depend on the basis used, but the expressions (4) do so depend.

The correct expression comes from thinking of A \cdot B as arising from a covector (A \cdot)(-) acting on vector B. Call that covector "Alpha", with an uppercase "A". Let a = [a^i] and b = [b^j] be the n x 1 matrices representing vectors A and B with respect to basis e_1, ..., e_n. Let alpha = [a_j] = [a^i g_{ij}] be the 1 x n matrix representing the covector Alpha. (I'm using the summation convention, so a^i g_{ij} has an understood summation over the repeated index i.) And let g = [g_{ij}] = [e_i \cdot e_j] be the covariant metric tensor matrix representing the metric tensor with respect to the basis. Then A \cdot B = (a^i e_i) \cdot (b^j e_j) = a^i (e_i \cdot e_j) b^j = a^i g_{ij} b^j = a^T g b. Alternately, A \cdot B = (a^i g_{ij}) b^j = a_j b^j = alpha b. These formulas are independent of the basis used. In conclusion, the matrix representation of the inner product A \cdot B should be a^T g b, or alpha b where alpha = a^T g.

Eric, do you think the problem indicated in my post is arising from A \cdot B being mistakenly calculated with the matrix product a^T b instead of a^T g b? If so, where? I'm pretty sure of my mathematics; it's coding at which I'm an amateur.
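The two claims above (that alpha b equals a^T g b, and that the scalar is basis-independent) can be checked mechanically. A sketch with made-up numbers, using the standard transformation rules a -> P^{-1} a and g -> P^T g P under a change of basis with invertible matrix P:

```python
import sympy as sp

g = sp.Matrix([[1, sp.Rational(1, 2)],
               [sp.Rational(1, 2), 1]])   # covariant metric [g_ij]
a = sp.Matrix([1, 2])                     # column [a^i]
b = sp.Matrix([3, 1])                     # column [b^j]

alpha = a.T * g                           # 1 x n matrix [a_j] for the covector
dot = (a.T * g * b)[0]
assert dot == (alpha * b)[0]              # the two formulas agree

# Change of basis: coordinates transform as a -> P^{-1} a,
# metric components as g -> P^T g P.  The scalar is unchanged.
P = sp.Matrix([[2, 1], [1, 1]])
a2, b2 = P.inv() * a, P.inv() * b
g2 = P.T * g * P
print(dot, (a2.T * g2 * b2)[0])           # same value in both bases
```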
This is related to #461. Herein I will use tensor notation, with indexes placed covariantly (subscript level) and contravariantly (superscript level); in contrast, GAlgebra writes all indexes at subscript level.

Attached is a zip file which contains a Jupyter notebook, a PDF of the notebook, and the unofficial GAlgebra module gprinter.py, which is used by the notebook. I made some slight modifications to the method `matrix(self)` in the most recent release of lt.py; the modifications are described at the start of the notebook. Comments below are made with the modified method implemented. The modified method worked for all test cases on which I tried it. However, testing revealed a new problem.

Let's distinguish between a DESIRED linear transformation f, with matrix [ {f^i}_j ], and the ACTUAL transformation F, with matrix [ {F^i}_j ], yielded by instantiation. The matrices are defined by the actions of the transformations on the basis vectors, specifically f(e_j) = \sum_{i=1}^n {f^i}_j e_i and F(e_j) = \sum_{i=1}^n {F^i}_j e_i.

Specifically, take `F = GA.lt('f')` (the lowercase "f" is intentional) to be a GENERIC transformation. Then `F.matrix()` returns a SymPy matrix [ {f^i}_j ] (note the lowercase "f"), not the actual matrix [ {F^i}_j ] of F. The entries {f^i}_j are SymPy symbols. Use the matrix [ {f^i}_j ] returned by `F.matrix()` to define the linear transformation f, which we'll call the DESIRED transformation. Then f and F are the same if and only if f(e_j) = F(e_j) for each basis vector e_j, i.e. if and only if {f^i}_j = {F^i}_j for all i and j.

Investigation shows that instead one has {F^i}_j = \sum_{k=1}^n {f^i}_k g^{kj}, which is equivalent to F(e_j) = f(e^j). Notice that the free index j on the left side is at subscript level, while on the right side it is at superscript level. Consequently the two transformations f and F will be equal only when the metric is the Euclidean metric and the basis is orthonormal.

The above-described discrepancy between F and `F.matrix()` does not occur when F is a SPECIFIC linear transformation, i.e. one instantiated by a command of the form `F = GA.lt(a_list_of_lists)`. At a guess, that the problem manifests for generic transformations but not for specific ones might have its source in different instantiation processes for the two kinds of transformation.

I think I've accurately described the problem, but my meager coding skills aren't up to identifying where and how in lt.py the problem arises; at a guess, it's in the code for the instantiation of GENERIC linear transformations.
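The claimed relation {F^i}_j = \sum_k {f^i}_k g^{kj} can be checked symbolically without GAlgebra. Below is a sketch of my own construction (taking g^{kj} to be the entries of the inverse of the covariant metric matrix, so the relation reads F = f * g^{-1} in matrix form, g being symmetric), showing that F collapses to f exactly in the orthonormal case:

```python
import sympy as sp

# Symbolic matrix [ {f^i}_j ] of a generic transformation.
n = 2
f = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'f_{i}{j}'))

g_euclid = sp.eye(n)                          # orthonormal basis
g_skew = sp.Matrix([[1, sp.Rational(1, 2)],
                    [sp.Rational(1, 2), 1]])  # non-orthonormal basis

# Matrix form of {F^i}_j = sum_k {f^i}_k g^{kj}, with [g^{kj}] = g^{-1}.
F_euclid = f * g_euclid.inv()
F_skew = f * g_skew.inv()

print(F_euclid == f)   # True: f and F coincide for the Euclidean metric
print(F_skew == f)     # False: they differ once g is not the identity
```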
Greg Grunberg (Greg1950)
GAlgebra's matrix() method.zip