
actions of generic linear transformations; matrices of linear transformations #461

Open
Greg1950 opened this issue Sep 24, 2020 · 4 comments

@Greg1950 (Contributor) commented Sep 24, 2020

Attached is a zipped version of *a correction to GAlgebra's matrix() method.ipynb*, a Jupyter notebook; the zip file also contains Alan Bromborsky's gprinter.py, which I use in the notebook to create output.

Previously I reported that the `.matrix()` method was returning a nonstandard matrix representation of linear transformations. In the last Markdown cell of the notebook I indicate a code modification that corrects that behavior. The modification has been checked for both specific and generic transformations on Euclidean and Minkowskian scalar product spaces, and when the generating bases for such spaces are orthonormal, orthogonal, or oblique.

However, when testing I discovered what I regard as a problem in GAlgebra's treatment of generic linear transformations. One would expect the instantiation `f = Ga.lt('f')` to produce a transformation f on the geometric algebra `Ga` whose action on basis vector $e_j$ takes the form

$$ e_j \mapsto f_{1j} e_1 + f_{2j} e_2 + \cdots + f_{nj} e_n $$

However, such is not the case. Somehow the metric tensor (or perhaps the reciprocal metric tensor) is getting mixed into the expression to the right of the $\mapsto$ symbol. This is demonstrated near the end of the notebook. I have not puzzled out the pattern by which the $f_{ij}$'s and the metric tensor components are mixed.
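For concreteness, here is a minimal sympy sketch (not using GAlgebra itself; the symbol names are illustrative) of the standard convention the expectation above describes: column $j$ of the matrix of $f$ holds the coefficients of $f(e_j)$, with no metric factors involved.

```python
from sympy import Matrix, symbols

# Hypothetical 2-D illustration of the standard convention: entry
# (i, j) of the matrix of f is f_ij, so column j lists the
# coefficients of f(e_j) -- the metric plays no role here.
f11, f12, f21, f22 = symbols('f11 f12 f21 f22')
F = Matrix([[f11, f12], [f21, f22]])

e1 = Matrix([1, 0])  # coordinate column of basis vector e_1
print(F * e1)        # Matrix([[f11], [f21]]), i.e. f11*e1 + f21*e2
```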

It would be nice if one could include MathJax expressions in these posts, as one can do in a Jupyter notebook's Markdown cells. Or maybe one can and I just don't know about it. Although I've been studying geometric algebra for over a decade, I'm a newbie at coding and use of GitHub.

Greg Grunberg (Greg1950)

a correction to GAlgebra's matrix() method.zip

@eric-wieser (Member) commented Sep 24, 2020

> It would be nice if one could include MathJax expressions in these posts

There's unfortunately no easy way to put LaTeX in GitHub issues. My usual strategy is to paste a screen clipping, which on Windows I capture with Windows + Shift + S.

What you can do is upload the ipynb as a gist, like I've done here: https://gist.github.com/eric-wieser/c389a726a8b86f7e9d1586089a4fc5dc


@eric-wieser (Member) commented:

I think your suggested change of

```diff
                 self.mat = Dictionary_to_Matrix(self.lt_dict, self.Ga) * self.Ga.g
-                return self.mat
+                return self.mat * self.Ga.g_inv
```

would solve the problem - but it can be simplified to

```diff
-                self.mat = Dictionary_to_Matrix(self.lt_dict, self.Ga) * self.Ga.g
+                self.mat = Dictionary_to_Matrix(self.lt_dict, self.Ga)
                 return self.mat
```

I do wonder why @abrombo added this multiplication in the first place. Perhaps there's an interpretation that we're missing.

@Greg1950 (Contributor, Author) commented:

On its face, post-multiplication of a matrix $M$ (standing for `self.mat`) by the metric tensor $G$ (standing for `self.Ga.g`), followed by a second post-multiplication by the reciprocal metric tensor $G^{-1}$ (standing for `self.Ga.g_inv`), would seem unnecessary. Mathematically one would have $(M G) G^{-1} = M (G G^{-1}) = M I = M$. So why not drop the two multiplications and just return $M$ (i.e. `self.mat`), as @eric-wieser suggested?

Answer: I had already tried what Eric suggested before making the post, and the result was strange: `f.matrix()` would return a matrix $N$ with a pending transpose operation indicated, i.e. something that looked like $N^T$, where the superscript $^T$ stands for transposition. I haven't a clue as to why that happens. Carrying out the transposition manually then gave the correct answer, but one shouldn't have to do that manually. It should be done automatically before the method call `f.matrix()` returns its answer.

Performing the two post-multiplications results in a correct return matrix without the pending transposition operation, so that's what I did in my suggested code change. If something simpler can be done which forces the transposition, I'd say go with it.
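The cancellation at the heart of this argument can be checked symbolically. A minimal sympy sketch (independent of GAlgebra; the symbol names are illustrative), using a generic matrix $M$ and a generic symmetric metric $G$ assumed invertible:

```python
from sympy import Matrix, simplify, symbols

# Symbolic check of the identity (M G) G^{-1} = M for a generic
# 2x2 matrix M and a generic symmetric (assumed invertible) metric G.
a, b, c = symbols('a b c')
m11, m12, m21, m22 = symbols('m11 m12 m21 m22')
G = Matrix([[a, b], [b, c]])
M = Matrix([[m11, m12], [m21, m22]])

print(simplify((M * G) * G.inv() - M))  # zero matrix
```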

One possible "something simpler" has just occurred to me, so I haven't yet tested it:

  1. Somehow (I don't know how; I'll have to search the documentation) extract the dimension n of the scalar product space that generates the geometric algebra `self.Ga`.
  2. Change the current code line `self.mat = Dictionary_to_Matrix(self.lt_dict, self.Ga) * self.Ga.g` to `self.mat = Dictionary_to_Matrix(self.lt_dict, self.Ga)`.
  3. Then use `return self.mat * eye(n)` in both places where the line `return self.mat` currently occurs.

I suspect the multiplication by the identity matrix will force the transposition to be carried out before the return is made, and a single multiplication by the identity matrix will be less computationally intensive than two successive multiplications by `self.Ga.g` and `self.Ga.g_inv`.

@Greg1950 reopened this Sep 24, 2020
@eric-wieser (Member) commented:

> But one shouldn't have to do the transposition manually. It should be done automatically before a method call f.matrix() returns its answer.

You can make this go away by calling `.matrix().doit()`. I've fixed this already in 32f311d
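A small sympy sketch (independent of GAlgebra) of the mechanism at work here: sympy's `Transpose` wrapper stays unevaluated until `.doit()` is called, which is the pending transpose Greg observed.

```python
from sympy import Matrix, Transpose

# Transpose(...) is held unevaluated as a matrix expression;
# .doit() resolves it to an explicit matrix -- the same fix as
# calling .matrix().doit() above.
M = Matrix([[1, 2], [3, 4]])
expr = Transpose(M)   # displays as an unevaluated transpose
print(expr.doit())    # Matrix([[1, 3], [2, 4]])
```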

You might want to switch to using the unreleased version of galgebra so you get this change, which you can do with

```shell
pip uninstall galgebra
pip install https://github.com/pygae/galgebra/archive/master.zip
```

This will also get you the `L(x)`-less printing you asked for in a previous issue.
