1.6 Vector Products
From this point on, our focus will primarily be on the three-dimensional Euclidean space $E^3$ for its relevance and simplicity. The last vector operation we have yet to discuss, which becomes especially tractable in $E^3$, is the vector product, or cross product; it corresponds, in $E^3$, to the exterior product or wedge product, which is defined in the Euclidean space $E^n$ for arbitrary $n$. The vector product $\mathbf{u} \times \mathbf{v} \in E^3$ of two vectors is defined such that the following properties hold $\forall$ $\mathbf{u}, \mathbf{v}, \mathbf{w} \in E^3$ and $\alpha, \beta \in \mathbb{R}$ (properties (a) and (b) are checked numerically in the sketch after this list):
- (a) $\mathbf{u} \times \mathbf{v} = -\mathbf{v} \times \mathbf{u}$ (anti-commutativity),
- (b) $\left| \mathbf{u} \times \mathbf{v} \right| = \left| \mathbf{u} \right| \left| \mathbf{v} \right| \sin \theta$, where $\theta$ is defined by $\cos\theta = \frac{\mathbf{u} \cdot \mathbf{v}}{\left| \mathbf{u} \right| \left| \mathbf{v} \right|}$,
- (c) $(\alpha \mathbf{u} + \beta \mathbf{v}) \times \mathbf{w} = \alpha \mathbf{u} \times \mathbf{w} + \beta \mathbf{v} \times \mathbf{w}$ (linearity in the first argument),
- (d) $\mathbf{u} \cdot \mathbf{v} \times \mathbf{w} = \mathbf{v} \cdot \mathbf{w} \times \mathbf{u} = \mathbf{w} \cdot \mathbf{u} \times \mathbf{v} = [\mathbf{u}, \mathbf{v}, \mathbf{w}] = [\mathbf{v}, \mathbf{w}, \mathbf{u}] = [\mathbf{w}, \mathbf{u}, \mathbf{v}]$ (the scalar triple product or box product).
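As a quick sanity check, here is a minimal NumPy sketch verifying properties (a) and (b), plus the orthogonality discussed in the next paragraph, on a pair of arbitrary vectors. The random vectors, the seed, and the use of `numpy.cross` are my own illustration, not part of the notes.

```python
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.standard_normal(3), rng.standard_normal(3)

# (a) anti-commutativity: u x v = -(v x u)
assert np.allclose(np.cross(u, v), -np.cross(v, u))

# (b) |u x v| = |u||v| sin(theta), with theta taken from the dot product
cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
sin_t = np.sqrt(1.0 - cos_t**2)  # theta lies in [0, pi], so sin(theta) >= 0
assert np.isclose(np.linalg.norm(np.cross(u, v)),
                  np.linalg.norm(u) * np.linalg.norm(v) * sin_t)

# u x v is orthogonal to both u and v (cf. the following paragraph)
w = np.cross(u, v)
assert np.isclose(np.dot(w, u), 0.0) and np.isclose(np.dot(w, v), 0.0)
```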
We can easily deduce from property (a) that $\mathbf{u} \times \mathbf{u} = \mathbf{0}$, and likewise from properties (a) and (d) we can see that $[\mathbf{u}, \mathbf{u}, \mathbf{v}] = [\mathbf{v}, \mathbf{u}, \mathbf{v}] = 0$, i.e. $\mathbf{u} \cdot (\mathbf{u} \times \mathbf{v}) = \mathbf{v} \cdot (\mathbf{u} \times \mathbf{v}) = 0$, meaning that the vector $\mathbf{u} \times \mathbf{v}$ is orthogonal to both $\mathbf{u}$ and $\mathbf{v}$. Hence, $\mathbf{u} \times \mathbf{v}$ is normal to the plane formed by $\mathbf{u}$ and $\mathbf{v}$.
We can show that the box product is linear in each argument by forming an inner product $(\alpha \mathbf{a} + \beta \mathbf{b}) \cdot \mathbf{e}$, where $\mathbf{e} = \mathbf{c} \times \mathbf{d}$. Then, \begin{equation} [\alpha \mathbf{a} + \beta \mathbf{b}, \mathbf{c}, \mathbf{d}] = (\alpha \mathbf{a} + \beta \mathbf{b}) \cdot \mathbf{e} = \alpha [\mathbf{a}, \mathbf{c}, \mathbf{d}] + \beta[\mathbf{b}, \mathbf{c}, \mathbf{d}], \end{equation} and this argument can be repeated to show linearity in the second and third arguments. Although one can freely cycle through the arguments of the box product (as in property (d)), interchanging two of the arguments reverses the sign of the box product. That is, $[\mathbf{u}, \mathbf{v}, \mathbf{w}] = \mathbf{u} \cdot \mathbf{v} \times \mathbf{w} = -\mathbf{u} \cdot \mathbf{w} \times \mathbf{v} = -[\mathbf{u}, \mathbf{w}, \mathbf{v}]$, due to the anti-commutativity of the vector product. Both facts are verified numerically in the sketch below.
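The following short sketch confirms the linearity and the sign flip on arbitrary vectors; the vectors, coefficients, and the helper name `box` are my own choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c, d = (rng.standard_normal(3) for _ in range(4))
alpha, beta = 2.0, -0.5

def box(u, v, w):
    """Scalar triple product [u, v, w] = u . (v x w)."""
    return np.dot(u, np.cross(v, w))

# linearity in the first argument
assert np.isclose(box(alpha * a + beta * b, c, d),
                  alpha * box(a, c, d) + beta * box(b, c, d))

# cyclic invariance, and sign reversal under interchanging two arguments
assert np.isclose(box(a, b, c), box(b, c, a))
assert np.isclose(box(a, b, c), -box(a, c, b))
```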
In reference to property (d), we say that an orthonormal basis $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ for $E^3$ is right-handed if $[\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3] = 1$. This necessarily implies that $\mathbf{e}_1 \times \mathbf{e}_2 = \mathbf{e}_3$, $\mathbf{e}_2 \times \mathbf{e}_3 = \mathbf{e}_1$, and $\mathbf{e}_3 \times \mathbf{e}_1 = \mathbf{e}_2$. Now, let us define the three-index object, \begin{equation} e_{ijk} = \mathbf{e}_i \cdot (\mathbf{e}_j \times \mathbf{e}_k) = [\mathbf{e}_i, \mathbf{e}_j, \mathbf{e}_k], \end{equation} where $e_{ijk}$ is called the unit alternator, the permutation symbol, or more commonly the Levi-Civita symbol. From our definition, it follows that \begin{equation} e_{ijk} = \begin{cases} +1, \enspace \text{if} \enspace (i,j,k) = (1,2,3), (2,3,1), (3,1,2)\\ -1, \enspace \text{if} \enspace (i,j,k) = (2,1,3), (1,3,2), (3,2,1) \\ 0, \enspace \text{otherwise} . \end{cases} \end{equation} Notice that the permutation symbol satisfies the expected cyclic property $e_{ijk} = e_{jki} = e_{kij}$, as well as the property $e_{ijk} = -e_{jik} = -e_{ikj} = -e_{kji}$. Additionally, we can invoke the formula from Section 1.3, $\mathbf{v} = (\mathbf{v} \cdot \mathbf{e}_i) \mathbf{e}_i$, along with the definition of $e_{ijk}$ to obtain the following relation, \begin{equation} \mathbf{e}_j \times \mathbf{e}_k = (\mathbf{e}_i \cdot \mathbf{e}_j \times \mathbf{e}_k) \mathbf{e}_i = e_{ijk} \mathbf{e}_i = e_{jki} \mathbf{e}_i = e_{kij} \mathbf{e}_i. \end{equation} There are two free indices $j$ and $k$ and one dummy index $i$, all ranging from 1 to 3. This nets nine equations, one for each possible free-index pair $(j,k)$, each with three terms on its right-hand side.
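A direct way to internalize $e_{ijk}$ is to tabulate it and check both its symmetry properties and the relation $\mathbf{e}_j \times \mathbf{e}_k = e_{jki} \mathbf{e}_i$ against a built-in cross product. The sketch below does this for the standard right-handed basis; the helper name `levi_civita` and the 0-based index storage are my own conventions.

```python
import numpy as np

def levi_civita():
    """The permutation symbol e_{ijk}, stored with 0-based indices."""
    e = np.zeros((3, 3, 3))
    e[0, 1, 2] = e[1, 2, 0] = e[2, 0, 1] = 1.0   # even permutations
    e[0, 2, 1] = e[2, 1, 0] = e[1, 0, 2] = -1.0  # odd permutations
    return e

e = levi_civita()

# cyclic property e_{ijk} = e_{kij}, and antisymmetry e_{ijk} = -e_{jik}
assert np.allclose(e, np.transpose(e, (1, 2, 0)))
assert np.allclose(e, -np.transpose(e, (1, 0, 2)))

basis = np.eye(3)  # rows are the right-handed orthonormal basis e_1, e_2, e_3
for j in range(3):
    for k in range(3):
        # e_j x e_k should equal sum_i e_{jki} e_i, i.e. the vector e[j, k, :]
        assert np.allclose(np.cross(basis[j], basis[k]), e[j, k, :])
```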
Suppose we want to compute the components of a vector $\mathbf{c} = \mathbf{a} \times \mathbf{b}$ in terms of the components of $\mathbf{a}$ and $\mathbf{b}$. Then, appealing to property (c) and carefully assigning indices to the component form of the equation, \begin{equation} \mathbf{c} = a_i \mathbf{e}_i \times b_j \mathbf{e}_j = a_i b_j \mathbf{e}_i \times \mathbf{e}_j = a_i b_j e_{ijk} \mathbf{e}_k. \end{equation} Since all the indices are repeated, we sum over $i$, $j$, $k$ from 1 to 3, netting 27 total terms on the right-hand side. Then, we can write $\mathbf{c} = c_k \mathbf{e}_k$ to get, \begin{equation} (c_k - e_{ijk}a_ib_j) \mathbf{e}_k = \mathbf{0}, \end{equation} and due to the linear independence of the basis elements, we obtain \begin{equation} c_k = e_{ijk} a_i b_j. \end{equation} Here, the repeated indices imply a double sum on $i$ and $j$ for each choice of $k \in \{1,2,3\}$, so $c_k$ represents three equations with nine terms on the right-hand side of each. With this relation, we can also compute the box product $[\mathbf{a}, \mathbf{b}, \mathbf{d}]$ purely in terms of components, through the inner product $\mathbf{c} \cdot \mathbf{d}$, \begin{equation} [\mathbf{a}, \mathbf{b}, \mathbf{d}] = \mathbf{a} \times \mathbf{b} \cdot \mathbf{d} = \mathbf{c} \cdot \mathbf{d} = c_k d_k = e_{ijk} a_i b_j d_k, \end{equation} where again, the right-hand side has 27 terms when written out in full.
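These index expressions translate almost verbatim into `numpy.einsum` contractions, as in the sketch below; the example vectors are arbitrary choices of mine, and the permutation symbol is the tabulation from the previous sketch repeated for self-containment.

```python
import numpy as np

# permutation symbol e_{ijk}, tabulated as in the earlier sketch
e = np.zeros((3, 3, 3))
e[0, 1, 2] = e[1, 2, 0] = e[2, 0, 1] = 1.0
e[0, 2, 1] = e[2, 1, 0] = e[1, 0, 2] = -1.0

# arbitrary example vectors
a = np.array([1.0, -2.0, 3.0])
b = np.array([4.0, 0.0, -1.0])
d = np.array([2.0, 5.0, 1.0])

# c_k = e_{ijk} a_i b_j: a double sum over i and j for each k
c = np.einsum('ijk,i,j->k', e, a, b)
assert np.allclose(c, np.cross(a, b))

# [a, b, d] = e_{ijk} a_i b_j d_k: a triple sum over all 27 terms
box = np.einsum('ijk,i,j,k->', e, a, b, d)
assert np.isclose(box, np.dot(np.cross(a, b), d))
```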
Although we say that $\mathbf{c}$ or $[\mathbf{a}, \mathbf{b}, \mathbf{d}]$ has 27 terms, in practice you will not have to compute all of them, since only six of the entries of $e_{ijk}$ are nonzero. The permutation symbol also allows for compact expressions for the determinant of a $3 \times 3$ matrix. Consider a $3 \times 3$ matrix $b_{ij}$ and let $B = \det(b_{ij})$; then, \begin{equation} B = \begin{vmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{vmatrix} = e_{ijk} b_{i1} b_{j2} b_{k3}. \end{equation} While this expression may look unfamiliar at first, it is identical in form to the component expression for a box product. This is because the determinant of a $3 \times 3$ matrix is exactly the scalar triple product of the three vectors whose components form its columns. Also, it is fairly easy to verify that $b_{ij}$ and its transpose $b_{ji}$ have the same determinant, so that $B = e_{ijk} b_{1i} b_{2j} b_{3k}$ as well.
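Both the column and the row contraction are easy to check against a library determinant; the sketch below does so on a random matrix of my own choosing.

```python
import numpy as np

# permutation symbol e_{ijk}, as before
e = np.zeros((3, 3, 3))
e[0, 1, 2] = e[1, 2, 0] = e[2, 0, 1] = 1.0
e[0, 2, 1] = e[2, 1, 0] = e[1, 0, 2] = -1.0

b = np.random.default_rng(2).standard_normal((3, 3))

# B = e_{ijk} b_{i1} b_{j2} b_{k3}: contract each index against a column of b
B_cols = np.einsum('ijk,i,j,k->', e, b[:, 0], b[:, 1], b[:, 2])
assert np.isclose(B_cols, np.linalg.det(b))

# the row version B = e_{ijk} b_{1i} b_{2j} b_{3k} agrees, since det(b) = det(b^T)
B_rows = np.einsum('ijk,i,j,k->', e, b[0], b[1], b[2])
assert np.isclose(B_rows, B_cols)
```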
Observe that the equation for $B = \det(b_{ij})$ is equivalent to, \begin{equation} e_{123}B = e_{ijk} b_{i1} b_{j2} b_{k3}, \end{equation} which suggests that it is a special case of the relation, \begin{equation} e_{mnp}B = e_{ijk} b_{im} b_{jn} b_{kp} = \begin{vmatrix} b_{1m} & b_{1n} & b_{1p} \\ b_{2m} & b_{2n} & b_{2p} \\ b_{3m} & b_{3n} & b_{3p} \end{vmatrix}. \end{equation} You can painstakingly confirm this for yourself by substituting various index triples $(m,n,p)$ and finding that the right-hand side equates to $B$, $-B$, or $0$. The upshot is that this relation is itself a special case of the general relation, \begin{equation} e_{ijk}e_{mnp}B = \begin{vmatrix} b_{im} & b_{in} & b_{ip} \\ b_{jm} & b_{jn} & b_{jp} \\ b_{km} & b_{kn} & b_{kp} \end{vmatrix}. \end{equation}

The following algebraic trickery (as with many aspects of these notes) comes straight to you from the mastermind Steigmann himself. Let's suppose that $b_{ij} = \delta_{ij}$, the Kronecker delta, so that $b_{ij}$ is the identity matrix with determinant $B = 1$. Then our expression becomes, \begin{equation} e_{ijk}e_{mnp} = \begin{vmatrix} \delta_{im} & \delta_{in} & \delta_{ip} \\ \delta_{jm} & \delta_{jn} & \delta_{jp} \\ \delta_{km} & \delta_{kn} & \delta_{kp} \end{vmatrix}, \end{equation} and expanding the determinant by the third row yields \begin{equation} e_{ijk}e_{mnp} = \delta_{km}(\delta_{in}\delta_{jp} - \delta_{jn}\delta_{ip}) - \delta_{kn}(\delta_{im}\delta_{jp} - \delta_{jm}\delta_{ip}) + \delta_{kp}(\delta_{im}\delta_{jn} - \delta_{jm}\delta_{in}). \end{equation} Now set $p = k$ to sum on $k$ and use properties of the Kronecker delta to obtain the important $e$-$\delta$ identity, \begin{equation} e_{ijk}e_{mnk} = \delta_{im}\delta_{jn} - \delta_{in}\delta_{jm}. \end{equation} Further, we can set $n = j$ and sum again to obtain the identity, \begin{equation} e_{ijk}e_{mjk} = 3\delta_{im} - \delta_{ij}\delta_{jm} = 2\delta_{im}, \end{equation} and then, setting $m = i$, \begin{equation} e_{ijk} e_{ijk} = 2\delta_{ii} = 6. \end{equation} Finally, contracting the relation $e_{mnp}B = e_{ijk} b_{im} b_{jn} b_{kp}$ with $e_{mnp}$ and using $e_{mnp}e_{mnp} = 6$, we get, \begin{equation} B = \frac{1}{6} e_{ijk} e_{mnp} b_{im} b_{jn} b_{kp}. \end{equation}
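Every contraction in this chain can be checked mechanically. The sketch below (again a NumPy construction of my own, not anything from the notes) verifies the $e$-$\delta$ identity, the successive contractions, and the final determinant formula on a random matrix.

```python
import numpy as np

# permutation symbol and Kronecker delta
e = np.zeros((3, 3, 3))
e[0, 1, 2] = e[1, 2, 0] = e[2, 0, 1] = 1.0
e[0, 2, 1] = e[2, 1, 0] = e[1, 0, 2] = -1.0
delta = np.eye(3)

# e-delta identity: e_{ijk} e_{mnk} = d_{im} d_{jn} - d_{in} d_{jm}
lhs = np.einsum('ijk,mnk->ijmn', e, e)
rhs = (np.einsum('im,jn->ijmn', delta, delta)
       - np.einsum('in,jm->ijmn', delta, delta))
assert np.allclose(lhs, rhs)

# successive contractions: e_{ijk} e_{mjk} = 2 d_{im}, and e_{ijk} e_{ijk} = 6
assert np.allclose(np.einsum('ijk,mjk->im', e, e), 2.0 * delta)
assert np.isclose(np.einsum('ijk,ijk->', e, e), 6.0)

# B = (1/6) e_{ijk} e_{mnp} b_{im} b_{jn} b_{kp} for an arbitrary matrix b
b = np.random.default_rng(3).standard_normal((3, 3))
B = np.einsum('ijk,mnp,im,jn,kp->', e, e, b, b, b) / 6.0
assert np.isclose(B, np.linalg.det(b))
```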