If you happen to be working with the Finite Element Method (FEM) solving for bouncy 🏈 things, we have a convenient tool 🎁 that we’d like to share.
🎓 For Scholars
The content of this section is described in the supplementary PDF.
Please cite our paper, not the link to this repository, if you wish to mention this article.
Just to avoid confusion: this analysis is not mentioned (or even implied) in the main paper, but its full contents do exist in the supplementary PDF.
🏁 The Final Results
Since the materials below are lengthy, we provide a summary of the final results. If you're excited, read on:
The eigen-filtered force jacobian of any isotropic energy expressed purely in terms of its singular values can be computed in closed form, provided that $\frac{\partial \Psi}{\partial \sigma}$ and $\frac{\partial^2 \Psi}{\partial \sigma^2}$ are symbolically known.
If the energy is expressed in terms of invariants, the eigen-filtered force jacobian can be obtained in closed form using only $\frac{\partial \Psi}{\partial I_k}$ and $\frac{\partial^2 \Psi}{\partial I_k \partial I_l}$, where $I_k$ are the Cauchy-Green invariants or those from Smith et al. [1].
Our analysis can be used to re-derive exactly the same system from Smith et al. [1].
📖 What's the Story?
If you are simulating elasticity implicitly, you'll probably need both the force and the force jacobian of a potential energy ⚡ with respect to 🎢 strain.
For example, let $\Psi(F)$ denote a hyper-elasticity potential energy with respect to a strain tensor $F$.
$F$ is a $3 \times 3$ matrix for solids and a $3 \times 2$ matrix for shells. It is alternatively called the deformation gradient.
The force jacobian refers to $\frac{\partial^2 \Psi}{\partial f_i \partial f_j}$, where $f_i$ is a single entry of the matrix $F$.
🚧 What's the Problem?
We could manage 😖 to calculate the force $\frac{\partial \Psi}{\partial f_i}$, but the force jacobian $\frac{\partial^2 \Psi}{\partial f_i \partial f_j}$ is so hard 😵💫 that it drives us crazy! 🤯
But what a bummer! 😰 The reality is cruel 💀; what we actually need is the 🔍 eigen-decomposed force jacobian, not simply the force jacobian! 😭
The eigen decomposition is necessary to project the jacobian onto the positive semi-definite cone, ensuring that Newton's method actually descends the energy to be minimized. 🤔
A brute-force solution using numerical factorization is too 🐌 slow (it's a dense $9 \times 9$ matrix!).
Symbolic math software 💥 explodes.
🚧 Dead end 🚧.
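For concreteness, the brute-force route we just ruled out looks something like this (a NumPy sketch of ours, not code from this repository): numerically eigendecompose the dense $9 \times 9$ jacobian and clamp its negative eigenvalues.

import numpy as np

def project_psd(H):
    # Dense symmetric eigendecomposition: O(n^3), per element, per Newton step.
    lam, Q = np.linalg.eigh(H)
    # Clamp negative eigenvalues to zero and reassemble Q diag(lam) Q^T.
    return (Q * np.maximum(lam, 0.0)) @ Q.T

Fine for prototyping, but far too slow to run for every element in every iteration.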
Note
Some readers may instead opt for position-based dynamics or projective dynamics, which do not require the force jacobian.
However, these first-order convergent methods are not suitable for highly stiff materials or high-resolution meshes due to their slow convergence 🐢.
If you’re skeptical, try it yourself.
They are best suited for real-time applications, such as 🎮 video games.
🤔 Why Not Just Use Past Literature?
We know that there is excellent literature out there 📚 but it has ⛔ limitations.
The most popular one [1] heavily depends on invariants, but not all isotropic energies can be clearly expressed with them, or at least not without considerable human effort.
For example, how can we write a flavor of Ogden energy $\sum_k \left( \lambda_1^{0.5^k} + \lambda_2^{0.5^k} + \lambda_3^{0.5^k} - 3 \right)$ using invariants?
Note
A practical example where our singular-value eigenanalysis is essential is the isotropic strain-limiting energies, which are discussed in the supplementary.
Another method [2] can handle such cases but has singularities when two stretches along the principal axes are equal.
This means that even a rest pose 🛟 is considered a singular case.
Technically, our method is mathematically a variant of Zhu [3]; however, we eliminate many (though not all) of the singularities they have and offer a more explicit solution.
A notable difference is that ours has the ability to re-derive the eigen system revealed by Smith et al. [1].
This is only possible with our formulation, not with Zhu [3].
[1] Breannan Smith, Fernando De Goes, and Theodore Kim. 2019. Analytic Eigensystems for Isotropic Distortion Energies. ACM Trans. Graph. https://doi.org/10.1145/3241041
[2] Alexey Stomakhin, Russell Howes, Craig Schroeder, and Joseph M. Teran. 2012. Energetically consistent invertible elasticity. In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA '12).
⛓️ From Strain to Vertices
Once we have $\frac{\partial \Psi}{\partial f_i}$ and $\frac{\partial^2 \Psi}{\partial f_i \partial f_j}$, we use the chain rules
$$\frac{\partial \Psi}{\partial x} = \sum_i \frac{\partial \Psi}{\partial f_i} \frac{\partial f_i}{\partial x}, \qquad \frac{\partial^2 \Psi}{\partial x^2} = \sum_{i,j} \frac{\partial^2 \Psi}{\partial f_i \partial f_j} \frac{\partial f_i}{\partial x} \left( \frac{\partial f_j}{\partial x} \right)^T$$
to convert them to $\frac{\partial \Psi}{\partial x}$ and $\frac{\partial^2 \Psi}{\partial x^2}$, where $x$ is an element vertex of interest.
Once converted, both can be directly plugged into Newton's method.
Just to be sure: the second chain rule only holds when $\frac{\partial^2 f_i}{\partial x^2} = 0$.
For linear elements, $F$ is linear in $x$, so this holds and $\frac{\partial f_i}{\partial x}$ is simple to compute, if you know how to get $F$ from $x$ (a sketch follows below).
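For illustration, here is a common construction for linear tetrahedra (our sketch; the exact convention used in the supplementary may differ):

import numpy as np

def deformation_gradient(X, x):
    # Edge matrices of a linear tetrahedron: rest (X) and deformed (x) vertices.
    Dm = np.column_stack((X[1] - X[0], X[2] - X[0], X[3] - X[0]))
    Ds = np.column_stack((x[1] - x[0], x[2] - x[0], x[3] - x[0]))
    # F maps rest-pose edges to deformed edges; inv(Dm) can be precomputed.
    return Ds @ np.linalg.inv(Dm)

Since $D_m$ is constant, $\frac{\partial f_i}{\partial x}$ is a constant matrix assembled from the entries of $D_m^{-1}$.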
🔍 Eigen-decomposed Force Jacobians
If you are only interested in isotropic squishables 🎈 (i.e., their stretch response isn't biased toward any direction), we have a good solution 🎁 for it.
🪵 Volumetric Elasticity Energies
For 3D elasticity, the strain $F_{\mathrm{3D}}$ is a $3 \times 3$ matrix. First, we note that isotropic distortion energies can be written as functions of the singular values of $F$, i.e., $\Psi_{\mathrm{3D}}(\sigma_1,\sigma_2,\sigma_3)$, where $\sigma_{1 \cdots 3}$ are the singular values of $F$.
Let us list some popular ones (with Lamé and model-specific parameters substituted):
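The original table lives in the supplementary; as a taste, two familiar members of this family (our examples, not necessarily the listed ones) are ARAP and symmetric Dirichlet:

$$\Psi_{\mathrm{ARAP}} = \sum_{k} \left( \sigma_k - 1 \right)^2, \qquad \Psi_{\mathrm{SD}} = \sum_{k} \left( \sigma_k^2 + \sigma_k^{-2} \right).$$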
$\sigma_{1 \cdots 3}$ are obtained via a numerical SVD of $F$ in reasonable time.
It's just a $3 \times 3$ matrix.
You may find our closed-form SVD routine (Apache v2.0) below useful.
/* Apache v2.0 License */
template <unsigned R, unsigned C>
Svd<R, C> svd(const Mat<R, C> &F) {
    Mat<C, C> V;
    Vec<C> lambda;
    /* Eigen decomposition of F^T F gives V and the squared singular values */
    solve_symm_eigen<C>(F.transpose() * F, lambda, V);
    for (unsigned i = 0; i < C; ++i) {
        /* Clamp before the square root to guard against round-off negatives */
        lambda[i] = sqrtf(fmax(0.0f, lambda[i]));
    }
    /* Columns of F V are orthogonal; normalize them to recover U */
    Mat<R, C> U = F * V;
    for (int i = 0; i < U.cols(); i++) {
        U.col(i).normalize();
    }
    /* Do an inversion-aware flip if needed */
    return {U, lambda, V.transpose()};
}
This code can also handle non-square $F$, which is helpful for shell elements that we will cover later.
For the solve_symm_eigen() call, we can use a closed-form solution.
Note
You may argue that the eigen decomposition of symmetric $3 \times 3$ matrices is expensive.
Yes, an iterative method like the Lanczos algorithm is expensive.
But for such a small matrix size, a direct solver is possible.
It may sound overwhelming, but aside from finding the three roots of a cubic equation, everything else is trivial.
We've written our own closed-form $3 \times 3$ eigen decomposition [Code Link], so go ahead and use it!
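If you'd rather see the idea than click through, here is a minimal NumPy sketch of the standard trigonometric closed-form solve (ours, not the linked implementation; production code needs more care around repeated eigenvalues):

import numpy as np

def eig_sym_3x3(A):
    # Closed-form eigenvalues of a symmetric 3x3 matrix: shift by the mean
    # eigenvalue, scale, then solve the characteristic cubic trigonometrically.
    q = np.trace(A) / 3.0
    B = A - q * np.eye(3)
    p = np.sqrt(max(np.sum(B * B) / 6.0, 1e-32))
    r = np.clip(np.linalg.det(B / p) / 2.0, -1.0, 1.0)
    phi = np.arccos(r) / 3.0
    lam = q + 2.0 * p * np.cos(phi + np.array([0.0, -2.0, 2.0]) * np.pi / 3.0)
    # Eigenvectors: a null vector of (A - lam I), via the largest row cross product.
    vecs = []
    for l in lam:
        M = A - l * np.eye(3)
        c = [np.cross(M[i], M[j]) for i, j in ((0, 1), (0, 2), (1, 2))]
        v = max(c, key=np.linalg.norm)
        vecs.append(v / max(np.linalg.norm(v), 1e-32))
    return lam, np.column_stack(vecs)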
Let's get our hands dirty 👐. We’ll first write down a set of matrices and scalars.
$U_{3 \times 3}$ and $V_{3 \times 3}^T$ are rotation matrices obtained from the above SVD call on $F$.
Next, write down the following $3 \times 3$ symmetric matrix:
$$H_{3 \times 3} = \frac{\partial^2 \Psi_{\mathrm{3D}}}{\partial \sigma^2}, \quad \text{i.e.,} \quad \left( H_{3 \times 3} \right)_{ij} = \frac{\partial^2 \Psi_{\mathrm{3D}}}{\partial \sigma_i \partial \sigma_j}.$$
Let $w_1$, $w_2$, and $w_3$ be a set of eigenvectors of $H_{3 \times 3}$, and $\beta_1$, $\beta_2$, and $\beta_3$ be the corresponding eigenvalues.
We can use a $3 \times 3$ numerical eigen solver to get them.
For a simple case where $H_{3 \times 3}$ is diagonal, such as in ARAP, this decomposition is not necessary.
Let us write down a few more things:
All right, we have everything we need.
Make sure that we have 9 matrices $Q_{1 \cdots 9}$ and their corresponding scalars $\lambda_{1 \cdots 9}$ ready.
Note
Some readers may notice that our results are consistent with [1] in that the first six eigenmatrices are static, while the remaining three depend on the deformation. Indeed, our technique is closely related to [1].
$Q_{1 \cdots 3}$ are called twist modes, $Q_{4 \cdots 6}$ are called flip modes, and $Q_{7 \cdots 9}$ are called scaling modes.
Now we present the results.
The first derivative (the force) is
$$\frac{\partial \Psi_{\mathrm{3D}}}{\partial f_i} = \sum_{k=1}^{3} \frac{\partial \Psi_{\mathrm{3D}}}{\partial \sigma_k} \left( u_k^T S_i v_k \right),$$
where $S_i$ is a $3 \times 3$ stencil matrix with a single entry of one at the same row and column as $f_i$, and zeros elsewhere.
Just to be clear, $F$ is a $3 \times 3$ matrix and thus has 9 entries. Therefore, $f_i$ loops from 1 to 9.
The loop order in the matrix view does not matter; even a random order will work.
$u_k$ and $v_k$ are vectors representing the $k$-th columns of the rotation matrices $U_{3 \times 3}$ and $V_{3 \times 3}$, respectively.
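In matrix form, the sum collapses to $U \, \mathrm{diag}\!\left( \frac{\partial \Psi_{\mathrm{3D}}}{\partial \sigma} \right) V^T$. As a NumPy sketch (ours; dpsi_dsigma stands for whatever routine you have for $\partial \Psi / \partial \sigma$):

import numpy as np

def force_matrix(U, Vt, dpsi_dsigma, sigma):
    # dPsi/dF = sum_k (dPsi/dsigma_k) u_k v_k^T = U diag(g) V^T.
    g = dpsi_dsigma(sigma)
    return U @ np.diag(g) @ Vt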
Finally, the eigen-filtered force jacobian is
$$\frac{\partial^2 \Psi_{\mathrm{3D}}}{\partial f_i \partial f_j} \approx \sum_{k=1}^{9} \max(\lambda_k, 0) \, q_k^i \, q_k^j,$$
where $q_k^i$ is the single entry of $Q_k$ located at the same row and column as $f_i$.
Yes, it’s really that simple.
You don't believe it?
We've written a Python code to numerically check its correctness. Check this out [Code].
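If you want the assembly spelled out, it boils down to an outer-product accumulation (our NumPy sketch, not the linked script; it assumes each $Q_k$ has unit Frobenius norm and is flattened in the same order as $f_i$):

import numpy as np

def filtered_force_jacobian(Qs, lams):
    # Accumulate max(lambda_k, 0) vec(Q_k) vec(Q_k)^T over the nine modes.
    H = np.zeros((9, 9))
    for Q, lam in zip(Qs, lams):
        q = np.ravel(Q)  # vec(Q_k), entry order matching f_i
        H += max(lam, 0.0) * np.outer(q, q)
    return H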
Okay, there's one more step.
When $\sigma_i = \sigma_j$, $\lambda_{4,5,6}$ result in division by zero.
Note
Of course, if you use a symbolic package to simplify the symbolic expressions, singularities like $\sigma_i = \sigma_j$ may be removed. However, wouldn’t it be better if we could eliminate these singularities while keeping $\Psi_{\mathrm{3D}}$ abstracted?
This way, you won’t have to manually translate the symbolic expressions into code.
But in this case, we can replace the offending quotient with its limit (obtained via L'Hôpital's rule):
$$\frac{\partial \Psi_{\mathrm{3D}} / \partial \sigma_i - \partial \Psi_{\mathrm{3D}} / \partial \sigma_j}{\sigma_i - \sigma_j} \; \approx \; \frac{\partial^2 \Psi_{\mathrm{3D}}}{\partial \sigma_i^2} - \frac{\partial^2 \Psi_{\mathrm{3D}}}{\partial \sigma_i \partial \sigma_j}.$$
For this approximation to work, the energy definition must satisfy the Valanis-Landel hypothesis: $\Psi_{\mathrm{3D}}(\sigma_1, \sigma_2, \sigma_3)$ should be invariant to the order of $\sigma_1$, $\sigma_2$, and $\sigma_3$. For example, $\Psi_{\mathrm{3D}}(\sigma_1, \sigma_2, \sigma_3) = \Psi_{\mathrm{3D}}(\sigma_2, \sigma_1, \sigma_3)$.
All isotropic energies satisfy this condition.
Our Python code 🐍 above also checks this, and we assure you that it works even when $\sigma_i = \sigma_j$ holds exactly! 🥳
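To see this in action, here is a tiny SymPy check of ours, with symmetric Dirichlet standing in for $\Psi_{\mathrm{3D}}$:

from sympy import symbols, diff, limit, simplify

s1, s2, s3 = symbols('sigma_1:4', positive=True)
psi = sum(s**2 + 1/s**2 for s in (s1, s2, s3))      # symmetric Dirichlet
quot = (diff(psi, s1) - diff(psi, s2)) / (s1 - s2)  # the 0/0-prone quotient
exact = limit(quot, s2, s1)
approx = (diff(psi, s1, 2) - diff(psi, s1, s2)).subs(s2, s1)
print(simplify(exact - approx))                     # -> 0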
If you’re feeling too lazy 🥱 to write symbolic expressions for $\frac{\partial \Psi_{\mathrm{3D}}}{\partial \sigma_i}$ and $\frac{\partial^2 \Psi_{\mathrm{3D}}}{\partial \sigma_i \partial \sigma_j}$ yourself, you can let a smart automatic differentiator do it for you.
The overhead should be negligible since the variables are only $\sigma_1$, $\sigma_2$ and $\sigma_3$, and major energies are concisely written in terms of singular values.
This way, we can truly automate 🤖 everything, meaning we won’t need to work on any math 😙 whenever we change the target elastic energy.
Plug and play. 🔌
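For instance, the whole per-energy derivation shrinks to a few lines of SymPy (our sketch; swap in your own $\Psi$ for the stand-in below):

from sympy import symbols, Matrix, lambdify

s = symbols('sigma_1:4', positive=True)
psi = sum(si**2 + 1/si**2 for si in s)  # your energy goes here
grad = Matrix([psi.diff(si) for si in s])
hess = Matrix(3, 3, lambda i, j: psi.diff(s[i], s[j]))
dpsi_dsigma = lambdify(s, grad, 'numpy')     # feed these into the analysis
d2psi_dsigma2 = lambdify(s, hess, 'numpy')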
🧭 Using Cauchy-Green Invariants
Picky readers 🤏 may point out that singularities still exist when $\sigma_i + \sigma_j = 0$. This is a bit extreme, since an element must be inverted to reach this condition, assuming you opt for an inversion-aware SVD.
That said, inversion does occur frequently in graphics applications, so it should be taken seriously.
The good news 📰 is that we can remove all the singularities in cases where the energies are expressed as a function of Cauchy-Green invariants, as shown by Stomakhin et al. [2].
More specifically, this means that the potential is defined as
$$\Psi_{\mathrm{3D}} = \Psi_{\mathrm{3D}}(I_1, I_2, I_3), \qquad I_1 = \mathrm{tr}(C), \quad I_2 = \tfrac{1}{2} \left( \mathrm{tr}(C)^2 - \mathrm{tr}(C^2) \right), \quad I_3 = \det(C), \quad C = F^T F.$$
Of course, if $\frac{\partial \Psi_{\mathrm{3D}}}{\partial I_i}$ includes singularities, we can't do anything about them, but that's not our fault.
And there's more: if you can describe your target energy using invariants, you can also express $H_{3 \times 3}$ without directly differentiating $\Psi_{\mathrm{3D}}$ with respect to the singular values, via the chain rule
$$H_{3 \times 3} = \sum_{k} \frac{\partial \Psi_{\mathrm{3D}}}{\partial I_k} \frac{\partial^2 I_k}{\partial \sigma^2} + \sum_{k,l} \frac{\partial^2 \Psi_{\mathrm{3D}}}{\partial I_k \partial I_l} \frac{\partial I_k}{\partial \sigma} \left( \frac{\partial I_l}{\partial \sigma} \right)^{T}.$$
The derivatives of the invariants with respect to $F$, needed for the force, are
$$\begin{align}
\frac{\partial I_1}{\partial F} = 2 F, \hspace{3mm}
\frac{\partial I_2}{\partial F} = 2 ( \mathrm{tr}\left(C\right) F - F C ), \hspace{3mm}
\frac{\partial I_3}{\partial F} = 2 F \mathrm{adj} \left(C\right)^T.
\nonumber
\end{align}$$
Just to be clear, $\frac{\partial I_k}{\partial F}$ yields a $3 \times 3$ matrix, and $\frac{\partial I_k}{\partial f_i}$ represents the single entry of $\frac{\partial I_k}{\partial F}$ corresponding to the same row and column as $f_i$.
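As a quick sanity check of these formulas, here is a small finite-difference test of ours (independent of the linked verification script; it assumes the $I_2 = \frac{1}{2}(\mathrm{tr}(C)^2 - \mathrm{tr}(C^2))$ convention above):

import numpy as np

def dI_dF(F):
    C = F.T @ F
    adjC = np.linalg.det(C) * np.linalg.inv(C)  # adj(C), valid for invertible C
    return 2*F, 2*(np.trace(C)*F - F @ C), 2*F @ adjC.T

# Spot-check dI2/dF against central finite differences.
F = np.random.default_rng(0).standard_normal((3, 3))
I2 = lambda F: 0.5*(np.trace(F.T @ F)**2 - np.trace(F.T @ F @ F.T @ F))
dF = np.zeros((3, 3)); dF[0, 1] = 1e-6
print((I2(F + dF) - I2(F - dF)) / 2e-6, dI_dF(F)[1][0, 1])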
This way, you will only need to prepare $\frac{\partial^2 \Psi_{\mathrm{3D}}}{\partial I_i \partial I_j}$ and $\frac{\partial \Psi_{\mathrm{3D}}}{\partial I_k}$ whenever you change 🔁 the target energy density.
This approach is often more convenient than working directly with the singular values because many (not all) major energy ⚡ functions are described in terms of Cauchy-Green invariants, and it's also more succinct.
Everything else works without changing your code 🥳.
We wrote a Python script to verify the correctness of the materials in this section [Code].
🥼 For Shell Elasticity
We can extend the idea above to triangular meshes.
Recalling that $F$ is $3 \times 2$ for shells, the dimensions of $U$ and $V^T$ change to $U_{3 \times 2}$ and $V^T_{2 \times 2}$, respectively.
If you use our SVD code above, such changes are automatically handled.
How to obtain $F$ from $x$ is described in the paper's supplementary.
With this setting, the list of matrices $Q$ and scalars $\lambda$ are given by
Note that our SVD call yields $U_{3 \times 2}$, but the ones used above are $U_{3 \times 3}$. You can convert from $U_{3 \times 2}$ to $U_{3 \times 3}$ by extending it with another column. This additional column can be obtained by taking the cross product of the first and second columns of $U_{3 \times 2}$, such that
/* Apache v2.0 License, though this is too simple to claim... */
Matrix3x3 extend_U(const Matrix3x2 &U) {
    /* Third column: cross product of the two (orthonormal) columns */
    Vector3 cross = U.col(0).cross(U.col(1));
    Matrix3x3 result;
    result << U.col(0), U.col(1), cross;
    return result;
}
All looks good, except we’re still missing $a_1$, $a_2$, $b_1$, $b_2$, $\lambda_5$, and $\lambda_6$.
Similar to our 3D case, the two vectors $[a_1, a_2]$, $[b_1, b_2]$ and their corresponding scalars $\lambda_5$ and $\lambda_6$ are given as the eigenvector and eigenvalue pairs of the following symmetric $2 \times 2$ matrix:
$$H_{2 \times 2} = \frac{\partial^2 \Psi_{\mathrm{2D}}}{\partial \sigma^2}.$$
When $\Psi_{\mathrm{2D}}$ is written in terms of the two Cauchy-Green invariants $I_1 = \mathrm{tr}(F^TF) = \sigma_1^2 + \sigma_2^2$ and $I_2 = \mathrm{det}(F^TF) = \sigma_1^2 \sigma_2^2$, such that $\Psi_{\mathrm{2D}} = \Psi_{\mathrm{2D}}(I_1, I_2)$, we can, similarly to the 3D case, leverage the chain rule to obtain $H_{2 \times 2}$ using only the derivatives of $\Psi_{\mathrm{2D}}$ with respect to the invariants.
Derivatives of $I_k$ with respect to the singular values are:
$$\frac{\partial I_1}{\partial \sigma} = \begin{bmatrix} 2\sigma_1 \\ 2\sigma_2 \end{bmatrix}, \quad \frac{\partial I_2}{\partial \sigma} = \begin{bmatrix} 2\sigma_1 \sigma_2^2 \\ 2\sigma_1^2 \sigma_2 \end{bmatrix}, \quad \frac{\partial^2 I_1}{\partial \sigma^2} = 2I, \quad \frac{\partial^2 I_2}{\partial \sigma^2} = \begin{bmatrix} 2\sigma_2^2 & 4\sigma_1\sigma_2 \\ 4\sigma_1\sigma_2 & 2\sigma_1^2 \end{bmatrix}.$$
The first derivative $\frac{\partial \Psi_{\mathrm{2D}}}{\partial \sigma}$ can be computed similarly to the 3D case. The formula for $\frac{\partial I_k}{\partial F}$ also remains the same.
Interested readers are referred to our verification Python code [Code].
Everything else is exactly the same as the 3D case presented above.
We provide another Python code [Code] to numerically verify this analysis.
Our technique can be used to arrive at the same eigen system revealed by Smith et al. [1].
We can confirm this by simply swapping the Cauchy-Green invariants with those of Smith et al. [1] and substituting them into our eigenvalue expressions.
This is simple enough to do manually, but we've written a SymPy code to help facilitate the task:
from sympy import *
a, b, c = symbols(r'\sigma_1 \sigma_2 \sigma_3')
I1, I2, I3 = a + b + c, a**2 + b**2 + c**2, a*b*c
E = Function(r'\Psi')(I1, I2, I3)
# The six twist/flip eigenvalues, written directly in singular values
display(ratsimp((E.diff(a) + E.diff(b)) / (a + b)))
display(ratsimp((E.diff(c) + E.diff(a)) / (c + a)))
display(ratsimp((E.diff(b) + E.diff(c)) / (b + c)))
display(ratsimp((E.diff(a) - E.diff(b)) / (a - b)))
display(ratsimp((E.diff(c) - E.diff(a)) / (c - a)))
display(ratsimp((E.diff(b) - E.diff(c)) / (b - c)))
Now look at Equations (7.16 to 7.21) from [B].
The output is identical.
This can't be a coincidence.
[B] Theodore Kim and David Eberle. 2022. Dynamic deformables: implementation and production practicalities (now with code!). In ACM SIGGRAPH 2022 Courses (SIGGRAPH '22). https://doi.org/10.1145/3532720.3535628
We have now proven the equivalence of $Q_{1 \cdots 6}$ and $\lambda_{1 \cdots 6}$.
Looks good, but what about scaling mode eigenmatrices $Q_{7 \cdots 9}$ and $\lambda_{7 \cdots 9}$?
Smith et al. [1] define $Q_{7,8,9}$ using the eigenvectors and eigenvalues of an encoded matrix $A$ (see their paper for the full representation).
The eigenvectors and eigenvalues are arranged in exactly the same way as ours.
Therefore, to prove that our $Q_{7,8,9}$ and $\lambda_{7,8,9}$ are exactly the same, it is sufficient to show that $H_{3 \times 3} = A$.
For this purpose, let's again use the following chain rule, but this time with $I_k$ being the invariants from Smith et al. [1].
Doing everything by hand is too much work, so let's use SymPy again.
The following code constructs $H_{3 \times 3}$ and $A$ as defined by Smith et al. [1] and compares the differences.
from sympy import *
a, b, c = symbols(r'\sigma_1 \sigma_2 \sigma_3')
sym_I1, sym_I2, sym_I3 = symbols('I_1 I_2 I_3')
E = Function(r'\Psi')(sym_I1, sym_I2, sym_I3)
I1, I2, I3 = a+b+c, a**2+b**2+c**2, a*b*c
I, sym_I, s = [I1, I2, I3], [sym_I1, sym_I2, sym_I3], [a, b, c]
dEdI = Matrix([E.diff(sym_I[i]) for i in range(3)])
d2EdI2 = Matrix([[E.diff(sym_I[i], sym_I[j])
                  for i in range(3)]
                 for j in range(3)])
dIds = [Matrix([I[k].diff(s[i]) for i in range(3)])
        for k in range(3)]
d2Ids2 = [Matrix([[I[k].diff(s[i], s[j])
                   for i in range(3)] for j in range(3)])
          for k in range(3)]
# Our H_3x3 matrix
H3x3 = zeros(3)
for i in range(3):
    H3x3 += dEdI[i] * d2Ids2[i]
    for j in range(3):
        H3x3 += d2EdI2[i, j] * dIds[i] * dIds[j].T
# Smith et al. (2019)
A = zeros(3)
for i in range(3):
    A[i, i] = 2*dEdI[1] + d2EdI2[0, 0] + 4*s[i]**2*d2EdI2[1, 1] \
        + I3**2/s[i]**2*d2EdI2[2, 2] + 4*s[i]*d2EdI2[0, 1] \
        + 4*I3*d2EdI2[1, 2] + 2*I3/s[i]*d2EdI2[0, 2]
for i in range(3):
    for j in range(3):
        if i < j:
            k = 3 - i - j
            A[i, j] = s[k]*dEdI[2] + d2EdI2[0, 0] \
                + 4*I3/s[k]*d2EdI2[1, 1] \
                + s[k]*I3*d2EdI2[2, 2] \
                + 2*s[k]*(I2 - s[k]**2)*d2EdI2[1, 2] \
                + (I1 - s[k])*(s[k]*d2EdI2[0, 2] + 2*d2EdI2[0, 1])
            A[j, i] = A[i, j]
# Symbolically compute the difference
display(simplify(H3x3 - A))
This prints zero! 😲 But if you tweak the way $A$ is computed a little bit, it gives non-zero expressions 🔢, so this must be correct ✅.
This also confirms that their encoded matrix $A$ corresponds to $\frac{\partial^2 \Psi}{\partial \sigma^2}$.
Now we have proven that our eigen analysis can re-derive the same system as Smith et al. [1].
🍱 Takeaway C/C++ ⚙️ and Rust 🦀 Codes
If you’re feeling excited, we’d like to share full C/C++ and Rust implementations of the two analyses above under the Apache v2.0 license. 📜
Here they are [📂 Mini Project Directory].
This also runs on CUDA with minor adaptations.