
2.6 Multilinear Maps

Previously, we discussed linear maps and linear functionals. A multilinear map is a natural generalization of these concepts, where the map takes multiple vector inputs and is linear in each argument separately. We shall use this extensively in our study of tensor analysis in later chapters.

Introduction

Definition 2.6.1 ($k$-Linear Map)

A map $f : V^k \to W$, where $V^k$ denotes the $k$-fold Cartesian product $V \times V \times \cdots \times V$, is called a $k$-linear map if it is linear in each of its arguments separately.

If $W = \mathbb{F}$, the field over which the vector spaces are defined, then $f$ is called a $k$-linear function.

Definition 2.6.2 (Skew-Symmetric $k$-Linear Map)

A $k$-linear map $f : V^k \to W$ is called skew-symmetric if for any permutation $\sigma$ of the set $\{1, 2, \ldots, k\}$, we have

$$f(v_{\sigma(1)}, v_{\sigma(2)}, \ldots, v_{\sigma(k)}) = \operatorname{sgn}(\sigma)\, f(v_1, v_2, \ldots, v_k),$$

where $\operatorname{sgn}(\sigma)$ is the sign of the permutation, equal to $+1$ for even permutations and $-1$ for odd permutations. It is often written as $(-1)^{\sigma}$. Thus, more succinctly, we have

$$\sigma f = (-1)^{\sigma} f.$$

The set of skew-symmetric $k$-linear maps from $V^k$ to $W$ is denoted by $\Lambda^k(V; W)$.

The set of skew-symmetric $k$-linear functions from $V^k$ to $\mathbb{F}$ is denoted by $\Lambda^k(V)$.

To break down the definition, we need to establish some notational conventions. If we have a list of vectors $(v_1, v_2, \ldots, v_k)$, a permutation $\sigma$ of the set $\{1, 2, \ldots, k\}$ acts on the list of vectors by rearranging them according to the permutation. For example, if $\sigma$ is the permutation that swaps $1$ and $2$, then

$$\sigma(v_1, v_2, v_3, \ldots, v_k) = (v_2, v_1, v_3, \ldots, v_k).$$

The identity permutation $\iota$ (iota) leaves the list unchanged:

$$\iota(v_1, v_2, \ldots, v_k) = (v_1, v_2, \ldots, v_k).$$

Acting on a single vector in the list means applying $\sigma$ to the entire list and then extracting the corresponding vector. For example,

$$\sigma v_i = v_{\sigma(i)}, \qquad \text{so for the transposition above, } \sigma v_1 = v_2.$$

Acting on a $k$-linear map $f$ means rearranging the arguments of $f$ according to the permutation. For example,

$$(\sigma f)(v_1, v_2, \ldots, v_k) = f(v_{\sigma(1)}, v_{\sigma(2)}, \ldots, v_{\sigma(k)}).$$

Finally, the sign of a permutation $\sigma$, denoted by $\operatorname{sgn}(\sigma)$, is defined as $+1$ if $\sigma$ is an even permutation (can be expressed as an even number of transpositions) and $-1$ if $\sigma$ is an odd permutation (can be expressed as an odd number of transpositions).

Summing over permutations means taking the sum of the results of applying each permutation to the object in question. For example, if we sum over all permutations of the set $\{1, 2\}$ and act on a $2$-linear (bilinear) map $f$, we get

$$\sum_{\sigma} (\sigma f)(v_1, v_2) = f(v_1, v_2) + f(v_2, v_1).$$

The Kronecker delta can be defined for permutations as

$$\delta_{\sigma\tau} = \begin{cases} 1 & \text{if } \sigma = \tau, \\ 0 & \text{if } \sigma \neq \tau. \end{cases}$$

In other words, it acts like a filter that only "selects" the term where $\sigma$ does the same thing as $\tau$.
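
To make these conventions concrete, here is a small Python sketch (an illustration, not part of the formal development): it lists every permutation of a three-element index set together with its sign and shows how each one rearranges a list of three vectors. The helper `sgn`, which computes the sign by counting inversions, is an ad hoc name used only in these snippets.

```python
from itertools import permutations

def sgn(p):
    """Sign of a permutation p of (0, 1, ..., k-1), computed by counting inversions."""
    inversions = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inversions % 2 else 1

v = ["v1", "v2", "v3"]
for p in permutations(range(3)):
    # sigma acts on the list by placing v[p[i]] in slot i, i.e. (v_{sigma(1)}, ..., v_{sigma(k)})
    print(p, sgn(p), [v[p[i]] for i in range(3)])
```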


A map is skew-symmetric if swapping any two of its arguments changes the sign of the output. We can always construct a skew-symmetric $k$-linear map $\tilde{f}$ from any $k$-linear map $f$ by defining

$$\tilde{f}(v_1, \ldots, v_k) = \sum_{\sigma} \operatorname{sgn}(\sigma)\, (\sigma f)(v_1, \ldots, v_k) = \sum_{\sigma} \operatorname{sgn}(\sigma)\, f(v_{\sigma(1)}, \ldots, v_{\sigma(k)}).$$

Here, the sum is over all permutations $\sigma$ of the set $\{1, 2, \ldots, k\}$. To take a simple example, consider a bilinear map $f : V \times V \to W$. The corresponding skew-symmetric bilinear map is given by

$$\tilde{f}(v_1, v_2) = f(v_1, v_2) - f(v_2, v_1).$$

For a $3$-linear map $f$, the corresponding skew-symmetric trilinear map is given by

$$\tilde{f}(v_1, v_2, v_3) = f(v_1, v_2, v_3) - f(v_2, v_1, v_3) - f(v_1, v_3, v_2) - f(v_3, v_2, v_1) + f(v_2, v_3, v_1) + f(v_3, v_1, v_2).$$
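
The construction above is easy to check numerically. The sketch below (using numpy; the names `f` and `f_skew` are illustrative choices) represents a bilinear map on $\mathbb{R}^3$ by a matrix $A$ via $f(u, v) = u^{\mathsf T} A v$, antisymmetrizes it by summing over the two permutations with signs, and confirms that the result agrees with $f(v_1, v_2) - f(v_2, v_1)$ and flips sign when its arguments are swapped.

```python
import numpy as np
from itertools import permutations

def sgn(p):
    inversions = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inversions % 2 else 1

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))            # a generic (not skew) bilinear map f(u, v) = u.T @ A @ v
f = lambda u, v: u @ A @ v

def f_skew(u, v):
    """Sum of sgn(sigma) * f(arguments permuted by sigma) over sigma in S_2."""
    args = (u, v)
    return sum(sgn(p) * f(args[p[0]], args[p[1]]) for p in permutations(range(2)))

u, v = rng.standard_normal(3), rng.standard_normal(3)
print(np.isclose(f_skew(u, v), f(u, v) - f(v, u)))   # matches the two-term formula
print(np.isclose(f_skew(u, v), -f_skew(v, u)))       # skew-symmetry
```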

For any $k$-linear map $f : V^k \to W$, the following statements are equivalent:

  1. $f(v_1, \ldots, v_k) = 0$ whenever two of the vectors $v_1, \ldots, v_k$ are equal.
  2. $f(v_{\sigma(1)}, \ldots, v_{\sigma(k)}) = \operatorname{sgn}(\sigma)\, f(v_1, \ldots, v_k)$ for any permutation $\sigma$ of the set $\{1, 2, \ldots, k\}$.
  3. $f(v_1, \ldots, v_k) = 0$ whenever the set $\{v_1, \ldots, v_k\}$ is linearly dependent.

Proposition 2.6.4

Let $f : V^n \to W$ be a skew-symmetric $n$-linear map, where $n = \dim V$. $f$ is completely determined by its action on any basis of $V$. If $f$ annihilates all basis elements, then $f = 0$.


Proof. Denote the basis of $V$ by $\{e_1, e_2, \ldots, e_n\}$. Let $v_1, v_2, \ldots, v_n$ be arbitrary vectors, with coordinates $v_j = \sum_i a^i_j\, e_i$. Expanding by multilinearity and discarding the terms with repeated basis vectors (which vanish by skew-symmetry), the action of $f$ on these vectors can be expressed as

$$f(v_1, \ldots, v_n) = \sum_{\sigma} a^{\sigma(1)}_1 a^{\sigma(2)}_2 \cdots a^{\sigma(n)}_n\, f(e_{\sigma(1)}, \ldots, e_{\sigma(n)}) = \left( \sum_{\sigma} \operatorname{sgn}(\sigma)\, a^{\sigma(1)}_1 a^{\sigma(2)}_2 \cdots a^{\sigma(n)}_n \right) f(e_1, \ldots, e_n).$$

The summation in parentheses is a scalar, so $f$ is determined by its value on the basis; in particular, if $f$ annihilates all basis elements, then $f = 0$.
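
As a small numerical illustration of this expansion, the snippet below takes $V = W = \mathbb{R}^2$ and the skew-symmetric bilinear map $f(u, v) = (u_1 v_2 - u_2 v_1)\, w_0$ for a fixed vector $w_0$ (an ad hoc example), and checks that its value on arbitrary vectors is the coefficient $\sum_{\sigma} \operatorname{sgn}(\sigma)\, a^{\sigma(1)}_1 a^{\sigma(2)}_2$ times its value on the basis.

```python
import numpy as np

w0 = np.array([2.0, -1.0])                          # fixed target vector in W = R^2
f = lambda u, v: (u[0] * v[1] - u[1] * v[0]) * w0   # a skew-symmetric bilinear map V^2 -> W

e1, e2 = np.eye(2)
v1, v2 = np.array([3.0, 1.0]), np.array([0.5, 4.0])

# coefficient sum_sigma sgn(sigma) a^{sigma(1)}_1 a^{sigma(2)}_2, with v_j = sum_i a^i_j e_i
coeff = v1[0] * v2[1] - v1[1] * v2[0]
print(np.allclose(f(v1, v2), coeff * f(e1, e2)))    # True: f is fixed by its value on the basis
```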


A determinant map in an $n$-dimensional vector space $V$ is a skew-symmetric $n$-linear function $D : V^n \to \mathbb{F}$.

There are special determinant maps we shall consider. Let $\{e_1, e_2, \ldots, e_n\}$ be a basis of $V$, and $\{\varepsilon^1, \varepsilon^2, \ldots, \varepsilon^n\}$ be the corresponding dual basis. For a set of vectors $v_1, v_2, \ldots, v_n \in V$, we define the map

$$\Delta(v_1, v_2, \ldots, v_n) = \varepsilon^1(v_1)\, \varepsilon^2(v_2) \cdots \varepsilon^n(v_n).$$

Applying a permutation $\sigma$ to the arguments of $\Delta$ and evaluating on the basis vectors gives

$$(\sigma\Delta)(e_1, \ldots, e_n) = \Delta(e_{\sigma(1)}, \ldots, e_{\sigma(n)}) = \varepsilon^1(e_{\sigma(1)})\, \varepsilon^2(e_{\sigma(2)}) \cdots \varepsilon^n(e_{\sigma(n)}).$$

If $\sigma$ changes the order of the vectors, we will get $\varepsilon^i$ acting on $e_{\sigma(i)}$ where $\sigma(i) \neq i$ for at least one $i$. Since the dual basis satisfies $\varepsilon^i(e_j) = \delta^i_j$, this means that at least one of the terms in the product will be zero. Only if $\sigma$ is the identity permutation $\iota$ do we get a non-zero result;

$$(\sigma\Delta)(e_1, \ldots, e_n) = \delta_{\sigma\iota}.$$

We define a determinant map

$$D(v_1, \ldots, v_n) = \sum_{\sigma} \operatorname{sgn}(\sigma)\, (\sigma\Delta)(v_1, \ldots, v_n) = \sum_{\sigma} \operatorname{sgn}(\sigma)\, \varepsilon^1(v_{\sigma(1)}) \cdots \varepsilon^n(v_{\sigma(n)}),$$

which is a skew-symmetric $n$-linear function. Applying it on the basis vectors, we get

$$D(e_1, \ldots, e_n) = \sum_{\sigma} \operatorname{sgn}(\sigma)\, \delta_{\sigma\iota} = \operatorname{sgn}(\iota) = 1.$$

In an $n$-dimensional vector space (where $n$ is finite), there exists a determinant map which is not the zero map.
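
In coordinates, this construction is easy to test. The sketch below builds $D$ on $\mathbb{R}^3$ from the standard basis, whose dual basis functional $\varepsilon^i$ simply reads off the $i$-th coordinate, and checks that $D(e_1, e_2, e_3) = 1$ and that $D$ agrees with the familiar determinant from `numpy.linalg.det` (the helpers `sgn` and `D` are local names for this illustration).

```python
import numpy as np
from itertools import permutations

def sgn(p):
    inversions = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inversions % 2 else 1

def D(*vectors):
    """D(v_1, ..., v_n) = sum_sigma sgn(sigma) eps^1(v_{sigma(1)}) ... eps^n(v_{sigma(n)}),
    with eps^i the dual of the standard basis, i.e. eps^i(v) = v[i]."""
    n = len(vectors)
    return sum(
        sgn(p) * np.prod([vectors[p[i]][i] for i in range(n)])
        for p in permutations(range(n))
    )

e = np.eye(3)
print(D(*e))                                     # 1.0 on the basis itself
vs = np.random.default_rng(1).standard_normal((3, 3))
print(np.isclose(D(*vs), np.linalg.det(vs.T)))   # agrees with the usual determinant (columns = v_i)
```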

Proposition 2.6.7

Suppose $f : V^n \to W$ is a skew-symmetric $n$-linear map, where $n = \dim V$ and $W$ is another vector space.

Then there exists a unique vector $w \in W$ such that

$$f(v_1, v_2, \ldots, v_n) = D(v_1, v_2, \ldots, v_n)\, w$$

for all $v_1, v_2, \ldots, v_n \in V$, where $D$ is a non-zero determinant map.


Proof. Denote the basis of $V$ by $\{e_1, e_2, \ldots, e_n\}$, where $n = \dim V$. We can always scale $D$ or one of the basis vectors to ensure $D(e_1, \ldots, e_n) = 1$.

If we denote the output of $f$ on the basis vectors by

$$w = f(e_1, e_2, \ldots, e_n),$$

then the skew-symmetric $n$-linear map $g = f - D\, w$ satisfies

$$g(e_1, \ldots, e_n) = f(e_1, \ldots, e_n) - D(e_1, \ldots, e_n)\, w = w - w = 0.$$

Recall from Proposition 2.6.4 that if a skew-symmetric $n$-linear map annihilates all basis elements, then it is the zero map. Therefore, we have

$$g = f - D\, w = 0,$$

and thus

$$f(v_1, \ldots, v_n) = D(v_1, \ldots, v_n)\, w \quad \text{for all } v_1, \ldots, v_n \in V.$$

Uniqueness follows by evaluating both sides on the basis: any vector $w'$ with $f = D\, w'$ must satisfy $w' = f(e_1, \ldots, e_n) = w$.


To see an application of Proposition 2.6.7, consider the following corollary, where we just replace the codomain vector space $W$ with the field $\mathbb{F}$.

:::corollary Corollary 2.6.8

Every skew-symmetric $n$-linear functional $f : V^n \to \mathbb{F}$ is a scalar multiple of the determinant map $D$.

:::

This is significant because it tells us that in an $n$-dimensional vector space, there is essentially only one way (up to a scalar multiple) to define a skew-symmetric $n$-linear functional, which is through the determinant map. We will see later how this relates to volume forms and orientations in differential geometry.
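
A coordinate illustration of the corollary for $n = 2$: any skew-symmetric bilinear functional on $\mathbb{R}^2$ can be written as $f(u, v) = u^{\mathsf T} A v$ with $A^{\mathsf T} = -A$, and the sketch below checks that such a form is the scalar $A_{12}$ times the $2 \times 2$ determinant (the names here are illustrative, not notation from the text).

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((2, 2))
A = B - B.T                                   # antisymmetric matrix, so f(u, v) = u.T @ A @ v is skew-symmetric
f = lambda u, v: u @ A @ v

u, v = rng.standard_normal(2), rng.standard_normal(2)
det_uv = np.linalg.det(np.column_stack([u, v]))
print(np.isclose(f(u, v), A[0, 1] * det_uv))  # f = A[0, 1] * det: a scalar multiple of the determinant map
```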

Proposition 2.6.9

Let $D$ be a determinant map in an $n$-dimensional vector space $V$. Let $v_1, v_2, \ldots, v_n$ and $w$ be vectors in $V$. Then,

$$D(v_1, \ldots, v_n)\, w = \sum_{i=1}^{n} (-1)^{i-1}\, D(w, v_1, \ldots, \widehat{v_i}, \ldots, v_n)\, v_i,$$

where the notation $\widehat{v_i}$ indicates that the vector $v_i$ is omitted from the list of arguments.
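
A quick numerical check of this identity on $\mathbb{R}^3$, taking $D$ to be the standard determinant of the matrix whose columns are the given arguments (the lambda `det` is a local shorthand for this illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
det = lambda *cols: np.linalg.det(np.column_stack(cols))   # D on R^3, arguments as columns

v = [rng.standard_normal(3) for _ in range(3)]
w = rng.standard_normal(3)

lhs = det(*v) * w
rhs = sum(
    (-1) ** i * det(w, *(v[:i] + v[i + 1:])) * v[i]        # omit v_i, prepend w; (-1)^i here is (-1)^{i-1} 1-indexed
    for i in range(3)
)
print(np.allclose(lhs, rhs))
```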

Determinants

In the previous section, we introduced the concept of determinant maps as skew-symmetric $n$-linear functionals in an $n$-dimensional vector space. We will now connect this abstract definition to the more familiar notion of determinants of linear operators.

Let $T$ be a linear operator on an $n$-dimensional vector space $V$, and let $D$ be a nonzero determinant map. If the basis of $V$ is given by $\{e_1, \ldots, e_n\}$, we define the map $D_T$ as

$$D_T(v_1, \ldots, v_n) = D(Tv_1, Tv_2, \ldots, Tv_n).$$

$D_T$ is also a determinant map, since it is the composition of a linear operator with a skew-symmetric $n$-linear functional. It is thus (by Corollary 2.6.8) a scalar multiple of $D$;

$$D_T = c\, D$$

for some scalar $c$.

Note that our choice of $D$ was arbitrary; had we chosen another determinant map $D'$, we would have obtained another scalar multiple:

$$D'_T = c'\, D'.$$

However, $D' = \lambda D$ for some non-zero scalar $\lambda$ (again by Corollary 2.6.8), so that

$$D'_T = \lambda\, D_T = \lambda\, c\, D = c\, D',$$

and hence $c' = c$. Thus, the scalar $c$ is independent of the choice of determinant map, and we define the determinant of the linear operator $T$ as this scalar.

For a linear operator $T$ on an $n$-dimensional vector space $V$ and a nonzero determinant map $D$, define a determinant map $D_T$ as

$$D_T(v_1, \ldots, v_n) = D(Tv_1, \ldots, Tv_n),$$

which (by Proposition 2.6.4) is determined by its value $D_T(e_1, \ldots, e_n) = D(Te_1, \ldots, Te_n)$, where $\{e_1, \ldots, e_n\}$ is any basis of $V$. Then, the determinant of the linear operator $T$, denoted by $\det T$, is the unique scalar such that

$$D_T = (\det T)\, D.$$

Geometrically, the determinant of a linear operator can be interpreted as the scaling factor by which the operator changes volumes in the vector space.
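
As a concrete sketch, the snippet below picks a matrix $T$ on $\mathbb{R}^3$, evaluates $D(Te_1, Te_2, Te_3) / D(e_1, e_2, e_3)$ with $D$ the standard determinant of a column matrix, and checks that it agrees with `numpy.linalg.det(T)`; the absolute value of this number is the volume of the image of the unit cube under $T$.

```python
import numpy as np

D = lambda *cols: np.linalg.det(np.column_stack(cols))   # a nonzero determinant map on R^3

T = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
e = np.eye(3)

det_T = D(*(T @ ei for ei in e)) / D(*e)                 # det T = D(Te_1, Te_2, Te_3) / D(e_1, e_2, e_3)
print(np.isclose(det_T, np.linalg.det(T)))               # matches the usual matrix determinant
# |det T| is the volume of the parallelepiped that T maps the unit cube onto
```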

Let $S$ and $T$ be linear operators on an $n$-dimensional vector space $V$. The determinant satisfies the following properties:

  1. $\det(\lambda I) = \lambda^n$ for any scalar $\lambda$, where $I$ is the identity operator on $V$.
  2. $\det(ST) = \det(S)\, \det(T)$.
  3. $\det(T) \neq 0$ if and only if $T$ is invertible.

Proof. These should be intuitively clear from the geometric interpretation of determinants as volume scaling factors. A rigorous proof can be constructed using the properties of multilinear maps and the definition of the determinant, as follows.

First, for any scalar $\lambda$ and the identity operator $I$, we have

$$(\lambda I)\, v = \lambda\, v \quad \text{for all } v \in V.$$

To proceed, recall the definition of $\det(\lambda I)$:

$$D_{\lambda I}(e_1, \ldots, e_n) = D(\lambda I e_1, \ldots, \lambda I e_n) = \det(\lambda I)\, D(e_1, \ldots, e_n).$$

As $\lambda I e_i = \lambda e_i$ for each basis vector $e_i$, we can factor out $\lambda$ from each argument of $D$ (as it is multilinear):

$$D(\lambda e_1, \ldots, \lambda e_n) = \lambda^n\, D(e_1, \ldots, e_n).$$

Plugging this back into the definition of $\det(\lambda I)$, we find

$$\det(\lambda I) = \lambda^n.$$

For the second property, we use the same definition of the determinant map;

$$D(STe_1, \ldots, STe_n) = \det(ST)\, D(e_1, \ldots, e_n).$$

The left-hand side can be expanded as

$$D(S(Te_1), \ldots, S(Te_n)) = \det(S)\, D(Te_1, \ldots, Te_n) = \det(S)\, \det(T)\, D(e_1, \ldots, e_n).$$

Thus we have

$$\det(ST) = \det(S)\, \det(T).$$

For the last property, recall that for an invertible operator $T$, acting on a set of linearly independent vectors produces another set of linearly independent vectors.

($\Rightarrow$) If $T$ is invertible, then $Te_1, \ldots, Te_n$ are linearly independent. Since $D$ is a nonzero determinant map, it does not annihilate linearly independent sets of vectors. Therefore, $D(Te_1, \ldots, Te_n) \neq 0$, which implies $\det(T) \neq 0$.

($\Leftarrow$) Conversely, if $\det(T) \neq 0$, then $D(Te_1, \ldots, Te_n) = \det(T)\, D(e_1, \ldots, e_n) \neq 0$. This means that the set $\{Te_1, \ldots, Te_n\}$ is linearly independent. Hence, $T$ must be invertible.

Thus all three properties are proven.
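
These three properties are also easy to confirm numerically; the sketch below checks them for $3 \times 3$ operators (the singular example is chosen by hand so that its rows are linearly dependent).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
S, T = rng.standard_normal((n, n)), rng.standard_normal((n, n))
lam = 2.5

print(np.isclose(np.linalg.det(lam * np.eye(n)), lam ** n))                    # det(lambda I) = lambda^n
print(np.isclose(np.linalg.det(S @ T), np.linalg.det(S) * np.linalg.det(T)))   # det(ST) = det(S) det(T)

singular = np.array([[1.0, 2.0, 3.0],
                     [2.0, 4.0, 6.0],
                     [0.0, 1.0, 1.0]])                                         # second row = 2 * first row
print(np.isclose(np.linalg.det(singular), 0.0))                                # not invertible <=> det = 0
```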


Classical Adjoints

In the previous theorem, we said that an invertible linear operator has a non-zero determinant. We will now explore a way to recover the inverse of a linear operator using determinants, through the concept of the classical adjoint.

Suppose we have a linear operator $T$ on an $n$-dimensional vector space $V$.

Define a new map $\Phi$ by setting, for any $v \in V$ and $v_1, \ldots, v_n \in V$,

$$\Phi_v(v_1, \ldots, v_n) = \sum_{i=1}^{n} (-1)^{i-1}\, D(v, Tv_1, \ldots, \widehat{Tv_i}, \ldots, Tv_n)\, v_i,$$

where $D$ is a determinant map. To see a concrete example, consider five vectors $v, v_1, v_2, v_3, v_4$, where $n = \dim V = 4$. Then,

$$\Phi_v(v_1, v_2, v_3, v_4) = D(v, Tv_2, Tv_3, Tv_4)\, v_1 - D(v, Tv_1, Tv_3, Tv_4)\, v_2 + D(v, Tv_1, Tv_2, Tv_4)\, v_3 - D(v, Tv_1, Tv_2, Tv_3)\, v_4.$$

As $\Phi_v$ is skew-symmetric and $n$-linear in $v_1, \ldots, v_n$, it is a multiple of the determinant map by Proposition 2.6.7, and since $\Phi_v$ depends linearly on $v$, so does the vector that Proposition 2.6.7 produces. Thus, there exists a unique linear operator $\operatorname{adj} T$ such that

$$\Phi_v(v_1, \ldots, v_n) = D(v_1, \ldots, v_n)\, (\operatorname{adj} T)\, v.$$

$\operatorname{adj} T$ is called the classical adjoint of the linear operator $T$. To reword the definition, the adjoint is the unique linear operator satisfying

$$\sum_{i=1}^{n} (-1)^{i-1}\, D(v, Tv_1, \ldots, \widehat{Tv_i}, \ldots, Tv_n)\, v_i = D(v_1, \ldots, v_n)\, (\operatorname{adj} T)\, v$$

for all $v, v_1, \ldots, v_n \in V$.

The classical adjoint of a linear operator $T$ satisfies

$$(\operatorname{adj} T)\, T = T\, (\operatorname{adj} T) = (\det T)\, I,$$

where $I$ is the identity operator on $V$.
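
To make the classical adjoint concrete, the sketch below works on $\mathbb{R}^3$: the helper `adjugate` (an ad hoc name) computes the classical adjoint from cofactors, and the script checks both the identity above and the defining relation for the adjoint.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3
T = rng.standard_normal((n, n))
D = lambda *cols: np.linalg.det(np.column_stack(cols))      # a determinant map on R^3

def adjugate(M):
    """Classical adjoint from cofactors: adj(M) is the transpose of the cofactor matrix."""
    m = M.shape[0]
    C = np.zeros_like(M)
    for i in range(m):
        for j in range(m):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

adjT = adjugate(T)
print(np.allclose(adjT @ T, np.linalg.det(T) * np.eye(n)))   # (adj T) T = (det T) I

# defining relation: sum_i (-1)^{i-1} D(v, Tv_1, ..., omit Tv_i, ..., Tv_n) v_i = D(v_1, ..., v_n) (adj T) v
v = rng.standard_normal(n)
vs = [rng.standard_normal(n) for _ in range(n)]
Tvs = [T @ u for u in vs]
lhs = sum((-1) ** i * D(v, *(Tvs[:i] + Tvs[i + 1:])) * vs[i] for i in range(n))
print(np.allclose(lhs, D(*vs) * (adjT @ v)))
```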