Linear independence

Librah

Not_the_pad
Joined
Oct 28, 2013
Messages
916
Location
Sydney Australia
Gender
Male
HSC
2014

Need help with (c), and maybe a better explanation for (b). Is there a shorter way of doing (c) than the way I'm thinking, which is to compute u_2 · u_3? Since you already proved the other cases in (a), you only need to do the last possible combination, which is that. Part (d) is pretty simple.
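(As a quick numerical sanity check, something like the NumPy sketch below would confirm the pairwise dot products are zero; the helper and the vectors are placeholders, not the actual u_1, u_2, u_3 from the question.)

```python
import numpy as np
from itertools import combinations

def pairwise_dot_products(vectors):
    """Dot product of every unordered pair of vectors, keyed by (i, j)."""
    return {(i, j): float(np.dot(u, v))
            for (i, u), (j, v) in combinations(enumerate(vectors, start=1), 2)}

# Placeholder vectors -- substitute the actual u_1, u_2, u_3 from the question.
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 0.0])
u3 = np.array([0.0, 0.0, 1.0])

print(pairwise_dot_products([u1, u2, u3]))  # all zero => pairwise orthogonal
```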
 

InteGrand

Well-Known Member
Joined
Dec 11, 2014
Messages
6,109
Gender
Male
HSC
N/A

Need help with (c), and maybe a better explanation for (b). Is there a shorter way of doing (c) than the way I'm thinking, which is to compute u_2 · u_3? Since you already proved the other cases in (a), you only need to do the last possible combination, which is that. Part (d) is pretty simple.
Are you sure we can use the previous parts to do (c)? Because I think (c) is asking for the case where the vectors v_i are general vectors, whereas the previous parts were for specific cases?
 

Librah

Not_the_pad
Joined
Oct 28, 2013
Messages
916
Location
Sydney Australia
Gender
Male
HSC
2014
Are you sure we can use the previous parts to do (c)? Because I think (c) is asking for the case where the vectors v_i are general vectors, whereas the previous parts were for specific cases?
Well, it does ask about any two of the vectors u_1, u_2, u_3 in general. But if you can prove it in general, even better.
 

InteGrand

Well-Known Member
Joined
Dec 11, 2014
Messages
6,109
Gender
Male
HSC
N/A
And for (b), can't we just say that since the determinant is nonzero, and its columns are the components of the vectors v_i, the vectors are linearly independent? This is a well-known fact, and since it just says "explain", it doesn't look like they want a proof of it?
 

Librah

Not_the_pad
Joined
Oct 28, 2013
Messages
916
Location
Sydney Australia
Gender
Male
HSC
2014
Also, if possible, I'd like to know why, if you have an invertible 2x2 matrix P whose columns are eigenvectors, then P D_1 D_2 P^{-1} = P D_2 D_1 P^{-1}, where D_1, D_2 are diagonal matrices.
 

Librah

Not_the_pad
Joined
Oct 28, 2013
Messages
916
Location
Sydney Australia
Gender
Male
HSC
2014
And for (b), can't we just say that since the determinant is nonzero, and its columns are the components of the vectors v_i, the vectors are linearly independent? This is a well-known fact, and since it just says "explain", it doesn't look like they want a proof of it?
That's what I wrote, but I didn't think that would be worth 4 marks.
 

InteGrand

Well-Known Member
Joined
Dec 11, 2014
Messages
6,109
Gender
Male
HSC
N/A
That's what I wrote, but I didn't think that would be worth 4 marks.
OK, then maybe we could say that since det(A) ≠ 0 (calling that matrix A), the equation Ax = 0 must have a unique solution (namely the zero vector), and since Ax is a linear combination of the columns of A, it follows that the only way to write 0 as a linear combination of the columns of A is to take 0 of each (i.e. the columns of A are linearly independent, i.e. the vectors v_i are linearly independent).

But surely we'd be allowed to use some known result about det(A) ≠ 0 here; I doubt we'd have to derive everything from scratch.
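As a rough numerical illustration of that chain of reasoning (the matrix A below is a made-up example, not the one from the question):

```python
import numpy as np

# Made-up example: the columns of A play the role of the vectors v_1, v_2, v_3.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])

print(np.linalg.det(A))  # nonzero, so Ax = 0 has only the trivial solution

# The unique solution of Ax = 0 is x = 0, i.e. the only linear combination
# of the columns of A giving the zero vector uses zero coefficients,
# which is exactly linear independence of the columns.
x = np.linalg.solve(A, np.zeros(3))
print(x)  # [0. 0. 0.]
```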
 

awesome-0_4000

New Member
Joined
Jun 5, 2013
Messages
18
Gender
Male
HSC
N/A
Also, if possible, I'd like to know why, if you have an invertible 2x2 matrix P whose columns are eigenvectors, then P D_1 D_2 P^{-1} = P D_2 D_1 P^{-1}, where D_1, D_2 are diagonal matrices.
The answer to this question has nothing to do with the properties of P; multiplication of diagonal matrices is commutative.
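To spell it out for the 2x2 case (with a_i, b_i as generic diagonal entries, my own notation):

$$D_1 D_2 = \begin{pmatrix} a_1 & 0 \\ 0 & a_2 \end{pmatrix}\begin{pmatrix} b_1 & 0 \\ 0 & b_2 \end{pmatrix} = \begin{pmatrix} a_1 b_1 & 0 \\ 0 & a_2 b_2 \end{pmatrix} = \begin{pmatrix} b_1 a_1 & 0 \\ 0 & b_2 a_2 \end{pmatrix} = D_2 D_1,$$

so $P D_1 D_2 P^{-1} = P D_2 D_1 P^{-1}$ for any invertible $P$; nothing about eigenvectors is actually used.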
 

RenegadeMx

Kosovo is Serbian
Joined
May 6, 2014
Messages
1,310
Gender
Male
HSC
2011
Uni Grad
2016
Isn't this pretty much Gram-Schmidt? And for (c), the projection forces the vectors to be perpendicular.
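Roughly, the process looks like the Gram-Schmidt sketch below (a generic implementation with made-up input vectors, not the question's data):

```python
import numpy as np

def gram_schmidt(vectors):
    """Gram-Schmidt orthogonalisation of a list of linearly independent vectors."""
    ortho = []
    for v in vectors:
        u = np.array(v, dtype=float)
        for q in ortho:
            u -= (np.dot(u, q) / np.dot(q, q)) * q  # remove the component along q
        ortho.append(u)
    return ortho

# Made-up example: the outputs are pairwise orthogonal (dot products ~ 0).
u1, u2, u3 = gram_schmidt([[1, 1, 0], [1, 0, 1], [0, 1, 1]])
print(np.dot(u1, u2), np.dot(u1, u3), np.dot(u2, u3))
```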
 
