# MATH2601 Higher Linear Algebra

#### leehuan

##### Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

$\text{Prove that if }\alpha\textbf{v}=\textbf{0}\text{ then either }\alpha=0\text{ or }\textbf{v}=\textbf{0}$

I have most of the proof covered up but I'm getting confused at the last bit.

$\text{It is trivially true when }\alpha=0\text{ and }\textbf{v}=\textbf{0}.\ \text{Otherwise, consider}$

\begin{align*}\alpha\textbf{v}&=\textbf{0} \\ \implies\alpha\textbf{v}-\alpha\textbf{v}&=-\alpha\textbf{v}\\ \implies (\alpha-\alpha)\textbf{v}=0\textbf{v}=\textbf{0}&=-\alpha\textbf{v}\\ \therefore \alpha\textbf{v}&=-\alpha\textbf{v}\end{align*}

If $\alpha \neq 0$,
\begin{align*}\alpha^{-1} (\alpha\textbf{v})&= \alpha^{-1}(-\alpha\textbf{v})\\ \textbf{v}&=-\textbf{v}\\ 2\textbf{v}&=\textbf{0}\\ \textbf{v}&=\textbf{0}\end{align*}

All I'm really stuck on is how to prove that if $\textbf{v} \neq \textbf{0}$, then $\alpha$ must equal $0$.

#### InteGrand

##### Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

All you need to do to prove the claim is assume $\alpha \mathbf{v} = \mathbf{0}$ and show that if $\alpha \neq 0$, then $\mathbf{v} = \mathbf{0}$. To do this, simply multiply both sides of the assumption ($\alpha \mathbf{v} = \mathbf{0}$) by $\alpha^{-1}$ and use some vector space axioms to conclude $\mathbf{v} = \mathbf{0}$, also recalling that $\alpha^{-1}\mathbf{0} = \mathbf{0}$ due to a result you asked to be proved last year (iirc).

#### InteGrand

##### Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

When you went from $2\mathbf{v} = \mathbf{0}$ to concluding that $\mathbf{v} = \mathbf{0}$, you were essentially assuming what had to be proved (i.e. that if $\alpha \mathbf{v} = \mathbf{0}$ and $\alpha$ is a non-zero scalar, then $\mathbf{v}$ is the zero vector). Instead, just do what I said above (multiply by $\alpha^{-1}$, noting of course that $\alpha^{-1}$ exists if $\alpha \neq 0$, assuming we're in a field).
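For completeness, the multiply-by-$\alpha^{-1}$ argument can be written out line by line (each step is a vector space axiom, plus the fact that $\alpha^{-1}\mathbf{0} = \mathbf{0}$):

```latex
\begin{align*}
\alpha\mathbf{v} &= \mathbf{0}\\
\implies \alpha^{-1}(\alpha\mathbf{v}) &= \alpha^{-1}\mathbf{0}\\
\implies (\alpha^{-1}\alpha)\mathbf{v} &= \mathbf{0} && \text{(associativity; } \alpha^{-1}\mathbf{0}=\mathbf{0}\text{)}\\
\implies 1\,\mathbf{v} &= \mathbf{0}\\
\implies \mathbf{v} &= \mathbf{0}
\end{align*}
```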

#### leehuan

##### Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

$G\text{ is a group and }a\in G.\text{ Proven in a): }H=\{a^k \mid k \in \mathbb{Z}\}\text{ is a subgroup of }G$

$\text{Proven in b): if }H\text{ is finite, then }\exists m \in \mathbb{Z}^+:\ a^m=e\text{ and }H=\{e,a,a^2,\dots,a^{m-1}\}$

$\text{c) Show that if }G\text{ is a (finite) group and }|G|\text{ is prime, then }G\text{ is cyclic.}$

So I commenced by stating, from Lagrange's theorem, that |H| is a factor of |G|. However, since |G| is prime, the only possibilities are |H| = |G| or |H| = 1.

I'm looking at the |H| = |G| part. I want to deduce from |H| = |G| that H = G, so that since H is clearly cyclic, G must be too. But how do I properly justify that H = G?

#### InteGrand

##### Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

Well H has the same number of elements as G and is a subset of G, so H = G. (If S is a set that has only a finite number of elements, then the only subset of S with the same number of elements as S is S itself.)


#### leehuan

##### Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

So that part makes sense now.

How do we complete the proof if |H| = 1?

#### InteGrand

##### Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

That can only happen if a = e (the identity). Take a to be any other element in G (there must be at least one other element since G has prime order, which implies |G| is at least 2), and the result will follow (since |H| won't be able to be 1).
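As a quick sanity check (not part of the proof), one can verify numerically that in the additive group $\mathbb{Z}_p$ with $p$ prime, every non-identity element generates the whole group, which is exactly the statement that a group of prime order is cyclic. A sketch in Python, using $p=7$ as an example:

```python
# In (Z_p, +) with p prime, every non-zero element a generates the whole
# group: the subgroup <a> = {a, 2a, 3a, ...} (mod p) is all of Z_p.
p = 7

for a in range(1, p):  # every non-identity element a
    generated = {(a * k) % p for k in range(p)}  # the cyclic subgroup <a>
    assert generated == set(range(p)), f"{a} does not generate Z_{p}"

print("every non-identity element of Z_7 generates the group")
```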

#### leehuan

##### Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

$\text{Let }V\text{ be a finite-dimensional vector space and }T:V\to V\text{ be a linear transformation.}\\ \text{Are the following true or false?}$

The questions were

$\\\text{a) If }\langle\textbf{u}\mid T(\textbf{v})\rangle =0\text{ for all }\textbf{u},\textbf{v}\in V,\text{ then }T(\textbf{v})=\textbf{0}\ \forall \textbf{v}\in V\\ \text{b) If }\langle\textbf{v}\mid T(\textbf{v})\rangle =0\text{ for all }\textbf{v}\in V,\text{ then }T(\textbf{v})=\textbf{0}\ \forall \textbf{v}\in V$

For the first one I claimed it was true by using a uniqueness result
$\langle\textbf{x}\mid \textbf{y}\rangle = \langle\textbf{x}\mid \textbf{z}\rangle \implies \textbf{y}=\textbf{z}$

and by pairing $T(\textbf{v})$ with $\textbf{0}$. But I can't figure out why this argument doesn't work for the second one.

#### seanieg89

##### Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

Why do you think that "uniqueness result" is true? Add anything orthogonal to x to y and you won't change the inner product of x with it.

(Also the truth of b) is (perhaps surprisingly) dependent on the field your vector space is over.)
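A concrete instance over $\mathbb{R}$ (the standard counterexample, assuming the usual dot product on $\mathbb{R}^2$): take $T$ to be rotation by $90°$. Then $\langle \mathbf{v} \mid T(\mathbf{v})\rangle = 0$ for every $\mathbf{v}$, yet $T$ is nowhere near the zero map:

```python
# Counterexample to b) over R: rotation by 90 degrees in R^2.
# T(x, y) = (-y, x), so v . T(v) = x*(-y) + y*x = 0 for every v,
# but T is certainly not the zero map.

def T(v):
    x, y = v
    return (-y, x)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

samples = [(1.0, 0.0), (0.0, 1.0), (3.0, -2.0), (-1.5, 4.0)]
for v in samples:
    assert dot(v, T(v)) == 0.0        # <v | T(v)> = 0 for all tested v
assert T((1.0, 0.0)) != (0.0, 0.0)    # yet T is not the zero map
print("rotation by 90 degrees: <v|Tv> = 0 for all v, but T != 0")
```

(Over $\mathbb{C}$, b) is actually true, which is the field-dependence alluded to above.)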


#### InteGrand

##### Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

For a), we can do it like this: since $T$ maps $V \to V$ and $\langle \mathbf{u} \mid T(\mathbf{v}) \rangle = 0$ for all $\mathbf{u}, \mathbf{v} \in V$, for each $\mathbf{v}$ just take $\mathbf{u} = T(\mathbf{v})$, and the result will follow.
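Spelled out, with $\mathbf{u} = T(\mathbf{v})$ the inner-product axioms finish it off:

```latex
\[
\langle T(\mathbf{v}) \mid T(\mathbf{v})\rangle = \|T(\mathbf{v})\|^2 = 0
\implies T(\mathbf{v}) = \mathbf{0},
\]
```

by positive-definiteness of the inner product.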

#### leehuan

##### Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

Is there an intuitive explanation for this?

Let T be a linear map on a finite-dimensional inner product space V

Then T is an isometry iff T is unitary

#### leehuan

##### Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

$V=\{p \in \mathbb{P}_3 \mid p(2) = 0\}$ (assumed to be a vector space)

$\text{RTP: }B=\{(t-2),(t-2)^2,(t-2)^3\}\text{ is a basis for }V$

So the question is obviously easy first-year stuff. I'd prove linear independence and then use dim(V) = |B| to deduce that it's a basis.

However, for the linear independence step

$c_1(t-2)+c_2(t-2)^2+c_3(t-2)^3 = 0\ \forall t$

I just want a validity check because I'm having second thoughts, mostly because no solutions made a remark on this.
I differentiated w.r.t. t and then subbed in t = 2 to prove c1 = 0. (And then repeated this to show c2 = c3 = 0.) Is this ok?

#### InteGrand

##### Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

Yeah, that's OK for showing linear independence.
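Written out, the repeated-differentiation argument looks like this (evaluating at $t=2$ kills the higher-order terms at each stage):

```latex
\begin{align*}
c_1(t-2)+c_2(t-2)^2+c_3(t-2)^3 &= 0 \quad\forall t\\
\frac{d}{dt}:\quad c_1 + 2c_2(t-2) + 3c_3(t-2)^2 &= 0 \quad\forall t,
  \quad\text{so } t=2 \text{ gives } c_1 = 0\\
\frac{d^2}{dt^2}:\quad 2c_2 + 6c_3(t-2) &= 0 \quad\forall t,
  \quad\text{so } t=2 \text{ gives } c_2 = 0\\
\frac{d^3}{dt^3}:\quad 6c_3 &= 0, \quad\text{so } c_3 = 0
\end{align*}
```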

#### seanieg89

##### Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

Is there an intuitive explanation for this?

Let T be a linear map on a finite-dimensional inner product space V

Then T is an isometry iff T is unitary
In an inner product space, the norm is defined in terms of the inner product. This means that any operator that preserves the inner product will preserve the norm.

Less obvious is the fact that in an inner product space, the inner product can be written in terms of the induced norm (*). Consequently anything that preserves the norm will preserve the inner product.

Of course, you don't need to prove (*) in order to answer this particular question, but it kind of hits at the heart of the relationship between inner products and induced norms on a real/complex inner product space and is a good exercise.

Note also that finite dimensionality is not required in any of these arguments.
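For reference, in the real case the identity (*) is the polarization identity (the complex case has a four-term analogue with factors of $i$):

```latex
\[
\langle \mathbf{x}, \mathbf{y} \rangle
  = \tfrac{1}{4}\left(\|\mathbf{x}+\mathbf{y}\|^2 - \|\mathbf{x}-\mathbf{y}\|^2\right)
\]
```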

#### leehuan

##### Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

$T:\mathbb{P}_2 \to \mathbb{R}^3, \ T(p) = (p(a),p^\prime(b),p(c))$

$\text{Found in b): w.r.t. the standard bases, the matrix of }T\text{ is }A=\begin{pmatrix}1&a&a^2\\ 0&1&2b\\ 1&c&c^2\end{pmatrix}$

$\text{Deduced in c): using the fact that }T\text{ is invertible iff }A\text{ is invertible, }T\text{ is invertible provided }a\neq c\text{ and }a+c\neq 2b$

The question is an extension of c). How can I explain this answer using first-year calculus?

#### InteGrand

##### Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

It is clear that a necessary condition for $T$ to be invertible is that $a\neq c$ (otherwise the first and last entries of $T(p)$ are always the same, so $T$ cannot be onto). Note also that with $a\neq c$, we also can't have $b = \frac{a+c}{2}$: if this were the case, then since there is more than one polynomial with roots at $a$ and $c$, and these polynomials also have $p'(b) = 0$ (essentially since the vertex of the parabola is at $b$), $T$ would map more than one polynomial to $(0,0,0)^{\top}$, and hence wouldn't be one-to-one.

To see that these conditions are also sufficient for $T$ to be invertible, note that if $a\neq c$ and $a + c \neq 2b$, then $T$ is one-to-one. For if $p(a) = q(a)$, $p'(b) = q'(b)$ and $p(c) = q(c)$, the polynomial $r(t) := p(t) - q(t)$ has roots at $a$ and $c$ and has slope $0$ at a point ($b$) \emph{that is not the midpoint of these roots}, since $b \neq \frac{a+c}{2}$. Hence the quadratic $r$ must be the zero polynomial, i.e. $p=q$, so $T$ is one-to-one.

See if you can see why $T$ will also be onto (or, if you're willing to use some linear algebra at this point, you could say it's a one-to-one linear map between two equi-dimensional spaces, so is automatically onto as well).
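The determinant calculation backs this up: expanding along the first row gives $\det A = (c-a)(a+c-2b)$, so $A$ is singular exactly when $a = c$ or $a + c = 2b$. A quick numerical sketch (sample values chosen arbitrarily):

```python
# det of A = [[1, a, a^2], [0, 1, 2b], [1, c, c^2]] factorises as
# (c - a)*(a + c - 2b), so A (hence T) is invertible iff a != c and a + c != 2b.

def det_A(a, b, c):
    # cofactor expansion along the first row
    return 1 * (c * c - 2 * b * c) - a * (0 - 2 * b) + a * a * (0 - 1)

for a, b, c in [(1, 2, 3), (0, 1, 5), (-2, 0.5, 4)]:
    assert det_A(a, b, c) == (c - a) * (a + c - 2 * b)  # matches factorised form

assert det_A(2, 0, 2) == 0   # a = c        =>  singular
assert det_A(1, 2, 3) == 0   # a + c = 2b   =>  singular
assert det_A(0, 0, 1) != 0   # neither      =>  invertible
print("det A = (c - a)(a + c - 2b) verified on samples")
```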


#### leehuan

##### Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

$\text{Let }T:\mathbb{R}^2\to \mathbb{R}^3\text{ be linear and suppose }T(\textbf{v})=(1,2,3)\text{ and }T(\textbf{w})=(2,4,6)$

$\text{Must }\textbf{v}\text{ and }\textbf{w}\text{ be linearly dependent?}$

#### InteGrand

##### Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

No.

#### leehuan

##### Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

Okay good I agree. But what would be an easy method to generate a counterexample?

#### InteGrand

##### Well-Known Member
Re: MATH2601 Linear Algebra/Group Theory Questions

Take a map whose image is just $\mathrm{span}\left\{(1,2,3)^{\top}\right\}$. Then $T$ maps all non-zero vectors to multiples of $(1,2,3)^{\top}$, so these values wouldn't imply that $\mathbf{v}$ and $\mathbf{w}$ are dependent. For example, take $T$ to be defined by $T(\mathbf{x}) = A\mathbf{x}$, where $A = \begin{bmatrix}1 & 2\\ 2 & 4 \\ 3 & 6\end{bmatrix}$. Then take $\mathbf{v} = \mathbf{e}_{1}$ and $\mathbf{w} = \mathbf{e}_{2}$.
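Checking the counterexample numerically ($\mathbf{e}_1$ and $\mathbf{e}_2$ are the standard basis vectors, which are certainly independent):

```python
# Counterexample check: A has rank 1, so Ax is always a multiple of (1,2,3),
# even for linearly independent inputs such as e1 and e2.
A = [[1, 2],
     [2, 4],
     [3, 6]]

def T(x):
    # matrix-vector product A @ x for a 3x2 matrix and a 2-vector
    return tuple(row[0] * x[0] + row[1] * x[1] for row in A)

e1, e2 = (1, 0), (0, 1)
assert T(e1) == (1, 2, 3)
assert T(e2) == (2, 4, 6)
# e1, e2 are independent: their 2x2 determinant is 1*1 - 0*0 = 1 != 0.
assert e1[0] * e2[1] - e1[1] * e2[0] == 1
print("T(e1) = (1,2,3), T(e2) = (2,4,6), yet e1 and e2 are independent")
```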