# First Year Mathematics B (Integration, Series, Discrete Maths & Modelling)

#### He-Mann

##### Vexed?

= Int(cosh x + cosh x sinh^2 x) dx

Since cosh x * sinh^2 x is of the form y' * y^2 (with y = sinh x),
we can treat it like x^2: Int(x^2) dx = x^3/3, so this part integrates to (sinh^3 x)/3.

Thus Int(cosh x + cosh x sinh^2 x) dx = sinh x + (sinh^3 x)/3 + C
Just saying, that's one way to confuse people. The explanation is not clear.
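Whatever one thinks of the working, the result itself checks out numerically. A quick sketch in Python (my own check, not part of the original post), differentiating the claimed antiderivative with a central difference and comparing against the integrand:

```python
import math

# Claimed antiderivative: F(x) = sinh(x) + sinh(x)^3 / 3
def F(x):
    return math.sinh(x) + math.sinh(x) ** 3 / 3

# Integrand: f(x) = cosh(x) + cosh(x) * sinh(x)^2
def f(x):
    return math.cosh(x) + math.cosh(x) * math.sinh(x) ** 2

# Check F'(x) ~ f(x) at a few sample points via a central difference.
h = 1e-6
for x in (-1.0, 0.0, 0.5, 2.0):
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - f(x)) < 1e-4
```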

#### leehuan

##### Well-Known Member

$\bg_white \text{For what values of }a\text{ does this series converge?}\\ \sum_{k=2}^\infty \frac{1}{k^a \ln k}$

I can tell by inspection that a > 1 but how do you prove it? Is there a way to get out of using the comparison test twice?


#### leehuan

##### Well-Known Member

Also why is it that if I have a 3x3 matrix with three distinct eigenvalues, the corresponding eigenvectors form a basis of R3?

##### -insert title here-

$\bg_white \text{For what values of }a\text{ does this series converge?}\\ \sum_{k=2}^\infty \frac{1}{k^a \ln k}$

I can tell by inspection that a > 1 but how do you prove it? Is there a way to get out of using the comparison test twice?
Reverse AM-GM should work.

#### leehuan

##### Well-Known Member

Reverse AM-GM should work.
What is this reverse AM-GM?

##### -insert title here-

$\bg_white \frac{2}{x^a \log{x}} < \frac{1}{x^{2a}} + \frac{1}{\log^2{x}}$
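For the record, this is just AM-GM, $uv \le \frac{u^2+v^2}{2}$, applied to $u = 1/x^a$ and $v = 1/\log x$ (the "reverse" use being that the product is bounded above by the squares):

```latex
uv \le \frac{u^2 + v^2}{2}
\quad\text{with}\quad u = \frac{1}{x^a},\; v = \frac{1}{\log x}
\quad\Longrightarrow\quad
\frac{2}{x^a \log x} \le \frac{1}{x^{2a}} + \frac{1}{\log^2 x}.
```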

#### leehuan

##### Well-Known Member

$\bg_white \frac{2}{x^a \log{x}} < \frac{1}{x^{2a}} + \frac{1}{\log^2{x}}$
Never seen that before although it does make intuitive sense. Except, doesn't log^2 still diverge?

##### -insert title here-

Never seen that before although it does make intuitive sense. Except, doesn't log^2 still diverge?
In that case just rearrange the powers so there's an x² with the log.

#### leehuan

##### Well-Known Member

In that case just rearrange the powers so there's an x² with the log.
Rearrange the powers? I can't change the expression of my series.

#### InteGrand

##### Well-Known Member

Also why is it that if I have a 3x3 matrix with three distinct eigenvalues, the corresponding eigenvectors form a basis of R3?
$\bg_white \noindent In general, any n\times n real matrix with n distinct eigenvalues is diagonalisable (which is equivalent to saying there is a basis for \mathbb{R}^n consisting entirely of eigenvectors of A (such a basis is called an \textsl{eigenbasis})). This is because recall eigenvectors from distinct eigenvalues are linearly independent, so if there's n distinct eigenvalues, we clearly have n independent eigenvectors, which means we have a basis for \mathbb{R}^{n} consisting of eigenvectors. (This is also all true for matrices over general fields \mathbb{F}, replacing \mathbb{R}^{n}'s with \mathbb{F}^{n}'s. So for example also true for complex matrices: any n\times n matrix over \mathbb{C} that has n distinct eigenvalues has its eigenvectors form a basis for \mathbb{C}^{n}.)$
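A quick numerical illustration of the theorem (my own example matrix, using NumPy; the eigenvalues 2, 3, 5 are distinct, so the eigenvectors must be independent):

```python
import numpy as np

# An illustrative 3x3 matrix with three distinct eigenvalues (2, 3 and 5).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

eigvals, P = np.linalg.eig(A)  # column i of P is an eigenvector for eigvals[i]

# Three distinct eigenvalues...
assert len(set(np.round(eigvals, 8))) == 3
# ...so the eigenvectors are linearly independent, i.e. an eigenbasis of R^3:
assert np.linalg.matrix_rank(P) == 3
# Equivalently, A is diagonalisable: P^{-1} A P is the diagonal matrix of eigenvalues.
assert np.allclose(np.linalg.inv(P) @ A @ P, np.diag(eigvals))
```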


#### InteGrand

##### Well-Known Member

$\bg_white \text{For what values of }a\text{ does this series converge?}\\ \sum_{k=2}^\infty \frac{1}{k^a \ln k}$

I can tell by inspection that a > 1 but how do you prove it? Is there a way to get out of using the comparison test twice?
$\bg_white \noindent If a>1, the series converges using the comparison test and `p-test', since 0\leq \frac{1}{k^a \ln k} \leq \frac{1}{k^a} for all k sufficiently large (say k\geq 3) and \sum_{k=2}^{\infty} \frac{1}{k^a} converges if a>1.$

$\bg_white \noindent If a\leq 1, the given sum diverges. If a < 0, this is obvious since then \frac{1}{k^a \ln k}\to \infty as k\to \infty (rather than going to 0 as k\to \infty). If 0\leq a \leq 1, the sum still diverges. E.g. use the integral test or whatever other method you like to show that \sum \frac{1}{k \ln k} (so when a=1) diverges. Then if 0\leq a \leq 1, we have k^a \ln k \leq k\ln k, so \frac{1}{k^a \ln k} \geq \frac{1}{k \ln k} > 0 for all k\geq 2, and the given series diverges by comparison with \sum \frac{1}{k\ln k}.$
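The a = 1 boundary is also visible numerically (my own quick check in Python): for a = 2 the partial sums settle down, while for a = 1 they keep creeping up like ln(ln n):

```python
import math

def partial_sum(a, n):
    # sum_{k=2}^{n} 1 / (k^a * ln k)
    return sum(1.0 / (k ** a * math.log(k)) for k in range(2, n + 1))

# a = 2: the series converges, so extending the range adds almost nothing.
s_small = partial_sum(2, 1_000)
s_big = partial_sum(2, 100_000)
assert s_big - s_small < 1e-3

# a = 1: the series diverges (like ln(ln n)), so doubling the range
# still adds a noticeable amount.
t_small = partial_sum(1, 50_000)
t_big = partial_sum(1, 100_000)
assert t_big - t_small > 0.05
```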


#### leehuan

##### Well-Known Member

$\bg_white \noindent In general, any n\times n real matrix with n distinct eigenvalues is diagonalisable (which is equivalent to saying there is a basis for \mathbb{R}^n consisting entirely of eigenvectors of A (such a basis is called an \textsl{eigenbasis})). This is because recall eigenvectors from distinct eigenvalues are linearly independent, so if there's n distinct eigenvalues, we clearly have n independent eigenvectors, which means we have a basis for \mathbb{R}^{n} consisting of eigenvectors. (This is also all true for matrices over general fields \mathbb{F}, replacing \mathbb{R}^{n}'s with \mathbb{F}^{n}'s. So for example also true for complex matrices: any n\times n matrix over \mathbb{C} that has n distinct eigenvalues has its eigenvectors form a basis for \mathbb{C}^{n}.)$
Oh right tbh I forgot about the theorem at the start. But now I got another problem

$\bg_white \text{Say }\textbf{v}\text{ is an eigenvector of }A\text{ with eigenvalue }\lambda,\text{ so }A\textbf{v}=\lambda \textbf{v}\\ \text{Then why is this true?}$

$\bg_white A^k\textbf{v}=\lambda^k\textbf{v}$

#### seanieg89

##### Well-Known Member

If applying the linear operator A to the vector v scales it by lambda each time, and we apply A to v k times...

#### InteGrand

##### Well-Known Member

Oh right tbh I forgot about the theorem at the start. But now I got another problem

$\bg_white \text{Say }\textbf{v}\text{ is an eigenvector of }A\text{ with eigenvalue }\lambda,\text{ so }A\textbf{v}=\lambda \textbf{v}\\ \text{Then why is this true?}$

$\bg_white A^k\textbf{v}=\lambda^k\textbf{v}$
$\bg_white \noindent An easy induction! If k=1, it's true by definition. Then assuming it's true for some particular k\in \mathbb{Z}^{+}, i.e. that A^k \mathbf{v} = \lambda^{k}\mathbf{v}, we have$

\bg_white \begin{align*}A^{k+1}\mathbf{v} &= A\left(A^k \mathbf{v}\right)\\ &= A\left(\lambda^k \mathbf{v}\right) \quad (\text{inductive hypothesis}) \\ &= \lambda^k A\mathbf{v} \\ &= \lambda^k \lambda \mathbf{v} \\ &= \lambda^{k+1}\mathbf{v},\end{align*}

$\bg_white \noindent and so the result is true by induction.$
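The same fact can be seen numerically, for a small example matrix of my own (v = (1, 1) is an eigenvector of A with eigenvalue 3):

```python
import numpy as np

# Illustrative 2x2 matrix with a known eigenpair.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam = 3.0
v = np.array([1.0, 1.0])          # A v = 3 v
assert np.allclose(A @ v, lam * v)

# Check A^k v = lam^k v for several k.
for k in range(1, 8):
    assert np.allclose(np.linalg.matrix_power(A, k) @ v, lam ** k * v)
```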

#### leehuan

##### Well-Known Member

$\bg_white \noindent An easy induction! If k=1, it's true by definition. Then assuming it's true for some particular k\in \mathbb{Z}^{+}, i.e. that A^k \mathbf{v} = \lambda^{k}\mathbf{v}, we have$

\bg_white \begin{align*}A^{k+1}\mathbf{v} &= A\left(A^k \mathbf{v}\right)\\ &= A\left(\lambda^k \mathbf{v}\right) \quad (\text{inductive hypothesis}) \\ &= \lambda^k A\mathbf{v} \\ &= \lambda^k \lambda \mathbf{v} \\ &= \lambda^{k+1}\mathbf{v},\end{align*}

$\bg_white \noindent and so the result is true by induction.$
Woah. And right when I was about to say "intuitively makes sense but couldn't convince myself of it mathematically". That's neat!

One last one for the time being please

$\bg_white \noindent\text{As far as I understand it, the Cauchy-Schwarz inequality is the statement }\\|\textbf{u}\cdot\textbf{v}| \le |\textbf{u}||\textbf{v}|\\ \text{So I'm not sure how to apply it to this series question.}$

$\bg_white \text{Suppose }\left\{a_k\right\}_{k=1}^\infty\text{ is a sequence of +'ve numbers for which }\sum_{k=1}^\infty a_k^2\text{ converges.}$

$\bg_white \text{Let }s_n=\sum_{k=1}^n\frac{a_k}{k}\text{ for }n=1,2,\dots$

$\bg_white \text{Use C-S to prove that the sequence }\{s_n\}_{n=1}^\infty\text{ is bounded.}$

#### InteGrand

##### Well-Known Member

Woah. And right when I was about to say "intuitively makes sense but couldn't convince myself of it mathematically". That's neat!

One last one for the time being please

$\bg_white \noindent\text{As far as I understand it, the Cauchy-Schwarz inequality is the statement }\\|\textbf{u}\cdot\textbf{v}| \le |\textbf{u}||\textbf{v}|\\ \text{So I'm not sure how to apply it to this series question.}$

$\bg_white \text{Suppose }\left\{a_k\right\}_{k=1}^\infty\text{ is a sequence of +'ve numbers for which }\sum_{k=1}^\infty a_k^2\text{ converges.}$

$\bg_white \text{Let }s_n=\sum_{k=1}^n\frac{a_k}{k}\text{ for }n=1,2,\dots$

$\bg_white \text{Use C-S to prove that the sequence }\{s_n\}_{n=1}^\infty\text{ is bounded.}$

$\bg_white \noindent By Cauchy-Schwarz, we have \frac{a_1}{1} + \frac{a_2}{2} + \cdots + \frac{a_n}{n} \leq \sqrt{a_1 ^2 + a_2 ^2 + \cdots + a_n ^2}\sqrt{\frac{1}{1^2} + \frac{1}{2^2} + \cdots + \frac{1}{n^2}}. By the Basel problem series and assumption of convergence of \sum a_n ^2, the RHS is bounded above by a finite number, and the LHS is thus too, as required. (Also the LHS is an increasing sequence. So the series in question has its partial sums bounded above and increasing, so it in fact converges.)$

#### seanieg89

##### Well-Known Member

Woah. And right when I was about to say "intuitively makes sense but couldn't convince myself of it mathematically". That's neat!

One last one for the time being please

$\bg_white \noindent\text{As far as I understand it, the Cauchy-Schwarz inequality is the statement }\\|\textbf{u}\cdot\textbf{v}| \le |\textbf{u}||\textbf{v}|\\ \text{So I'm not sure how to apply it to this series question.}$

$\bg_white \text{Suppose }\left\{a_k\right\}_{k=1}^\infty\text{ is a sequence of +'ve numbers for which }\sum_{k=1}^\infty a_k^2\text{ converges.}$

$\bg_white \text{Let }s_n=\sum_{k=1}^n\frac{a_k}{k}\text{ for }n=1,2,\dots$

$\bg_white \text{Use C-S to prove that the sequence }\{s_n\}_{n=1}^\infty\text{ is bounded.}$
$\bg_white s_n^2 \leq \left(\sum_{k=1}^n a_k^2\right)\left(\sum_{k=1}^n \frac{1}{k^2}\right).$

Each of these factors is a partial sum of a convergent positive series and so is bounded above by its limit. Hence s_n is bounded.
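As a sanity check (with my own choice of sequence, a_k = 1/k, which is square-summable with sum a_k^2 = pi^2/6), every partial sum s_n indeed stays below the Cauchy-Schwarz bound:

```python
import math

# Example square-summable positive sequence: a_k = 1/k, so sum a_k^2 = pi^2/6.
N = 10_000
bound_sq = (math.pi ** 2 / 6) ** 2  # (sum a_k^2)(sum 1/k^2) <= (pi^2/6)^2 here

s = 0.0
for k in range(1, N + 1):
    a_k = 1.0 / k
    s += a_k / k                    # s_n = sum_{k<=n} a_k / k
    assert s * s <= bound_sq        # Cauchy-Schwarz bound on every partial sum
```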

#### leehuan

##### Well-Known Member

$\bg_white \text{So with those eigenvalue questions, they arose out of this Markov system:}\\ \textbf{x}(k+1)=A\textbf{x}(k)\\ A=\begin{pmatrix}0.50&0.15&0.30\\0.10&0.60&0.30\\0.40&0.25&0.40\end{pmatrix}\\ \text{And let }B=\lim_{k\to \infty}A^k$

In a MATLAB output they computed the eigenvectors and eigenvalues of A for me. In part a) I had to write down a basis for ker(B) and im(B)

Part c) gave an initial condition
$\bg_white \textbf{x}(0)=\begin{pmatrix}-1\\2\\-1\end{pmatrix}\\ \text{and asked me to find }\lim_{k\to \infty}\textbf{x}(k)$

$\bg_white \text{The answers secondly say that }\lim_{k\to \infty}\textbf{x}(k)\text{ is proportional to }\begin{pmatrix}0.5\\0.1\\0.4\end{pmatrix}\\ \text{Which I get from part a) calculations.}\\ \text{But then part c) says the sum of the components of }\textbf{x}(k)\text{ is preserved.}\\ \text{Is this something about state vectors that I should know about?}$

#### InteGrand

##### Well-Known Member

$\bg_white \text{So with those eigenvalue questions, they arose out of this Markov system:}\\ \textbf{x}(k+1)=A\textbf{x}(k)\\ A=\begin{pmatrix}0.50&0.15&0.30\\0.10&0.60&0.30\\0.40&0.25&0.40\end{pmatrix}\\ \text{And let }B=\lim_{k\to \infty}A^k$

In a MATLAB output they computed the eigenvectors and eigenvalues of A for me. In part a) I had to write down a basis for ker(B) and im(B)

Part c) gave an initial condition
$\bg_white \textbf{x}(0)=\begin{pmatrix}-1\\2\\-1\end{pmatrix}\\ \text{and asked me to find }\lim_{k\to \infty}\textbf{x}(k)$

$\bg_white \text{The answers secondly say that }\lim_{k\to \infty}\textbf{x}(k)\text{ is proportional to }\begin{pmatrix}0.5\\0.1\\0.4\end{pmatrix}\\ \text{Which I get from part a) calculations.}\\ \text{But then part c) says the sum of the components of }\textbf{x}(k)\text{ is preserved.}\\ \text{Is this something about state vectors that I should know about?}$
$\bg_white \noindent Note that A has columns that sum to 1. You can show as an exercise that this implies that the sequence \mathbf{x}(0), \mathbf{x}(1), \mathbf{x}(2),\ldots has \emph{constant entry sum}.$

$\bg_white \noindent So basically the key result is \emph{if a square matrix A has all columns summing to 1, then that sequence \left(\mathbf{x}(k)\right) has constant entry sum}. To show this, all you need to do is show that for such a matrix A (one whose columns sum to 1), for any vector \mathbf{v}, we have that the entry sum of \mathbf{v} equals that of A\mathbf{v}.$
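Both facts are quick to verify numerically with the matrix from the question (a sketch in NumPy; the random vectors are just my own test inputs for the key result):

```python
import numpy as np

# The matrix from the question; each column sums to 1.
A = np.array([[0.50, 0.15, 0.30],
              [0.10, 0.60, 0.30],
              [0.40, 0.25, 0.40]])
assert np.allclose(A.sum(axis=0), 1.0)

# Key result: for ANY vector v, A v has the same entry sum as v.
rng = np.random.default_rng(0)
for _ in range(5):
    v = rng.standard_normal(3)
    assert np.isclose((A @ v).sum(), v.sum())

# So iterating x(k+1) = A x(k) preserves the entry sum at every step.
x = np.array([-1.0, 2.0, -1.0])   # x(0) from part c), entry sum 0
for _ in range(50):
    x = A @ x
    assert np.isclose(x.sum(), 0.0)
```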
