# Section 1.8

1. Let ${\displaystyle L,K:V\to V}$ be linear maps between finite-dimensional vector spaces that satisfy ${\displaystyle L\circ K=0}$. Is it true that ${\displaystyle K\circ L=0}$?

Solution:
No. In general, composition of linear maps is not commutative. Since any linear map between finite-dimensional vector spaces can be represented by a matrix, finding a counterexample amounts to finding two matrices ${\displaystyle A,B}$ such that ${\displaystyle AB=0}$ but ${\displaystyle BA\neq 0}$. Here is one example: let ${\displaystyle V=\mathbb {R} ^{2}}$ and define ${\displaystyle L(x,y)=(x,0),K(x,y)=(0,x+y)}$. Then ${\displaystyle L\circ K(x,y)=L(0,x+y)=(0,0)}$ but ${\displaystyle K\circ L(x,y)=K(x,0)=(0,x)}$, so ${\displaystyle L\circ K=0}$ while ${\displaystyle K\circ L\neq 0}$.
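As a quick numerical sanity check, the counterexample can be verified with the standard-basis matrices of ${\displaystyle L}$ and ${\displaystyle K}$ (derived here from the formulas above; numpy is used only for convenience):

```python
import numpy as np

# Standard-basis matrices read off from L(x, y) = (x, 0) and K(x, y) = (0, x + y).
A = np.array([[1, 0],
              [0, 0]])   # matrix of L
B = np.array([[0, 0],
              [1, 1]])   # matrix of K

# L ∘ K corresponds to the product A @ B, and K ∘ L to B @ A.
assert (A @ B == 0).all()        # L ∘ K = 0
assert not (B @ A == 0).all()    # K ∘ L ≠ 0
```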

4. Show that a linear map ${\displaystyle L:V\to W}$ is one-to-one if and only if ${\displaystyle L(x)=0}$ implies ${\displaystyle x=0}$.

Proof:
First note that ${\displaystyle L(0)=0}$ for any linear map ${\displaystyle L}$, since ${\displaystyle L(0)=L(0\cdot 0)=0\cdot L(0)=0}$ (here the scalar ${\displaystyle 0}$ multiplies the zero vector).

${\displaystyle (\Rightarrow )}$ Suppose that ${\displaystyle L}$ is one-to-one. If ${\displaystyle L(x)=0}$, then ${\displaystyle L(x)=L(0)}$ by the note above, so we must have ${\displaystyle x=0}$. Therefore ${\displaystyle L(x)=0}$ implies ${\displaystyle x=0}$.

${\displaystyle (\Leftarrow )}$ Now suppose that ${\displaystyle L(x)=0}$ implies ${\displaystyle x=0}$. If ${\displaystyle L(x)=L(y)}$, then by linearity of ${\displaystyle L}$ we have ${\displaystyle L(x-y)=L(x)-L(y)=0}$. By hypothesis this means ${\displaystyle x-y=0}$, so ${\displaystyle x=y}$. Therefore ${\displaystyle L}$ is one-to-one.
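The equivalence can be illustrated numerically: a matrix with a nontrivial kernel sends two distinct inputs to the same output, while an invertible matrix (trivial kernel) cannot. The specific matrices below are illustrative choices, not taken from the problem:

```python
import numpy as np

# A linear map with a nontrivial kernel fails to be one-to-one:
M = np.array([[1, 1],
              [1, 1]])
x = np.array([1, -1])                 # nonzero vector with M x = 0
assert (M @ x == 0).all()
y = np.array([2, 0])
assert (M @ y == M @ (y + x)).all()   # distinct inputs y and y + x, same output

# A map whose only solution of N v = 0 is v = 0 (here, an invertible
# matrix) is one-to-one:
N = np.array([[2, 1],
              [1, 1]])
assert abs(np.linalg.det(N)) > 1e-9   # nonzero determinant: trivial kernel
```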

6. Let ${\displaystyle V\neq \{0\}}$ be finite-dimensional and assume that

${\displaystyle L_{1},L_{2},...,L_{n}:V\to V}$

are linear operators. Show that if ${\displaystyle L_{1}\circ L_{2}\circ \cdots \circ L_{n}=0}$ then at least one of the ${\displaystyle L_{i}}$ is not one-to-one.

Proof:
I will argue by contrapositive. The equivalent statement is: "If all of the ${\displaystyle L_{i}}$ are one-to-one, then ${\displaystyle L_{1}\circ \cdots \circ L_{n}\neq 0}$." This becomes easy once you know the fact from set theory that a composition of one-to-one functions is one-to-one. Suppose that ${\displaystyle L_{1},...,L_{n}}$ are all one-to-one. Then ${\displaystyle L_{1}\circ \cdots \circ L_{n}}$ is also one-to-one, so by problem 4 the only input it sends to ${\displaystyle 0}$ is ${\displaystyle 0}$ itself. Since ${\displaystyle V\neq \{0\}}$, there is some ${\displaystyle x\neq 0}$, and then ${\displaystyle (L_{1}\circ \cdots \circ L_{n})(x)\neq 0}$. Therefore ${\displaystyle L_{1}\circ \cdots \circ L_{n}\neq 0}$ and we are done.

If you don’t know the fact from set theory, you can prove it as follows. Suppose ${\displaystyle f,g}$ are one-to-one functions and consider ${\displaystyle f\circ g}$. To show this function is one-to-one, assume that ${\displaystyle f\circ g(x)=f\circ g(y)}$, i.e. ${\displaystyle f(g(x))=f(g(y))}$. Since ${\displaystyle f}$ is one-to-one, the inputs to ${\displaystyle f}$ must be equal, that is, ${\displaystyle g(x)=g(y)}$. But ${\displaystyle g}$ is also one-to-one, so ${\displaystyle x=y}$, and therefore ${\displaystyle f\circ g}$ is one-to-one.
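In matrix form, one-to-one operators on a finite-dimensional space correspond to invertible matrices, and ${\displaystyle \det(FG)=\det(F)\det(G)}$ makes the composition fact concrete. A small sketch with two illustrative invertible matrices (my own choices, not from the problem):

```python
import numpy as np

# Two invertible (hence one-to-one) maps on R^2:
F = np.array([[2, 1],
              [1, 1]])
G = np.array([[1, 3],
              [0, 1]])
assert abs(np.linalg.det(F)) > 1e-9
assert abs(np.linalg.det(G)) > 1e-9

# Their composition is again invertible, i.e. one-to-one, matching the
# set-theoretic fact proved above: det(FG) = det(F) det(G) ≠ 0.
assert abs(np.linalg.det(F @ G)) > 1e-9
```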

13. Consider the map ${\displaystyle \Psi :\mathbb {C} \to {\text{Mat}}_{2\times 2}(\mathbb {R} )}$ defined by ${\displaystyle \Psi (\alpha +i\beta )={\begin{bmatrix}\alpha &-\beta \\\beta &\alpha \end{bmatrix}}}$. (a) Show that this is ${\displaystyle \mathbb {R} }$-linear and one-to-one, but not onto. Find an example of a matrix in ${\displaystyle {\text{Mat}}_{2\times 2}(\mathbb {R} )}$ that does not come from ${\displaystyle \mathbb {C} }$.

Proof:
To show this is ${\displaystyle \mathbb {R} }$-linear let ${\displaystyle z_{1}=\alpha _{1}+i\beta _{1},z_{2}=\alpha _{2}+i\beta _{2}\in \mathbb {C} }$. Then:

${\displaystyle \Psi (z_{1}+z_{2})=\Psi (\alpha _{1}+i\beta _{1}+\alpha _{2}+i\beta _{2})=\Psi (\alpha _{1}+\alpha _{2}+i(\beta _{1}+\beta _{2}))={\begin{bmatrix}\alpha _{1}+\alpha _{2}&-\beta _{1}-\beta _{2}\\\beta _{1}+\beta _{2}&\alpha _{1}+\alpha _{2}\end{bmatrix}}={\begin{bmatrix}\alpha _{1}&-\beta _{1}\\\beta _{1}&\alpha _{1}\end{bmatrix}}+{\begin{bmatrix}\alpha _{2}&-\beta _{2}\\\beta _{2}&\alpha _{2}\end{bmatrix}}=\Psi (z_{1})+\Psi (z_{2})}$
Similarly if ${\displaystyle z=\alpha +i\beta \in \mathbb {C} }$ and ${\displaystyle a\in \mathbb {R} }$ then:
${\displaystyle \Psi (az)=\Psi (a\alpha +ia\beta )={\begin{bmatrix}a\alpha &-a\beta \\a\beta &a\alpha \end{bmatrix}}=a{\begin{bmatrix}\alpha &-\beta \\\beta &\alpha \end{bmatrix}}=a\Psi (z)}$
Therefore ${\displaystyle \Psi }$ is ${\displaystyle \mathbb {R} }$-linear.
To see that ${\displaystyle \Psi }$ is one-to-one, note that ${\displaystyle \Psi (\alpha +i\beta )=0}$ forces ${\displaystyle \alpha =\beta =0}$, i.e. ${\displaystyle \alpha +i\beta =0}$; by problem 4 this means ${\displaystyle \Psi }$ is one-to-one.
Finally, to show ${\displaystyle \Psi }$ is not onto, notice that any matrix in the image of ${\displaystyle \Psi }$ has the same top-left and bottom-right entry. So the simple matrix ${\displaystyle {\begin{bmatrix}1&2\\3&4\end{bmatrix}}}$ cannot possibly be in the image of ${\displaystyle \Psi }$. Therefore ${\displaystyle \Psi }$ is not onto.
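The three claimed properties of ${\displaystyle \Psi }$ can be spot-checked numerically. The helper `Psi` below and the sample values ${\displaystyle z_{1},z_{2}}$ are illustrative choices for the check:

```python
import numpy as np

def Psi(z):
    """Matrix of the R-linear map Psi applied to a complex number z = a + ib."""
    a, b = z.real, z.imag
    return np.array([[a, -b],
                     [b,  a]])

z1, z2 = 1 + 2j, 3 - 1j

# Additivity and R-homogeneity, as computed above:
assert np.allclose(Psi(z1 + z2), Psi(z1) + Psi(z2))
assert np.allclose(Psi(2.5 * z1), 2.5 * Psi(z1))

# One-to-one: Psi(z) = 0 forces both entries a and b, hence z, to be zero.
assert np.allclose(Psi(0), np.zeros((2, 2)))

# Not onto: this matrix has unequal diagonal entries, so no z maps to it.
target = np.array([[1, 2],
                   [3, 4]])
assert target[0, 0] != target[1, 1]
```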