Isn't the question asking for
Apart from that it seems fine, although the notation seems a bit odd and a bit inconsistent: the first one is evaluated at the function value f(u,v), I think, and the second is evaluated at f(u,v) as well.
And the last one is just not the same as the first one, evaluated at (u,v)?
Last edited by dan964; 22 May 2016 at 7:07 PM.
I used subscript numerals for the notation because otherwise it'd be more confusing due to so many variables being present.
Is this correct
Last edited by leehuan; 22 May 2016 at 8:50 PM. Reason: Lol yep. Whoops - typo
Progress so far
___________
Fix x and y, differentiate with respect to t (using the chain rule to deal with the LHS differentiation), and then set t=1 in the resulting identity.
Explicitly:
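(The worked images are missing here; assuming, as the steps above suggest, that the question is Euler's identity for a function homogeneous of degree n, the computation would run as follows, where f_1 and f_2 denote the derivatives of f with respect to its first and second slot.)

```latex
\begin{aligned}
f(tx, ty) &= t^{n} f(x, y) && \text{(homogeneity, for all } t > 0\text{)} \\
x\, f_{1}(tx, ty) + y\, f_{2}(tx, ty) &= n t^{n-1} f(x, y) && \text{(differentiate both sides w.r.t.\ } t\text{)} \\
x\, \frac{\partial f}{\partial x} + y\, \frac{\partial f}{\partial y} &= n f(x, y) && \text{(set } t = 1\text{)}
\end{aligned}
```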
Hang on, just one final bit that I don't compute (I think it's just a dumb moment)
If I use the chain rule on this without the t=1 bit, treating u=tx and v=ty
Obviously when t=1, f(u,v)=f(x,y), but I can't justify in my head why partial f/partial u becomes partial f/partial x. I just need confirmation that this was an allowed step.
Think about what partial f/partial u means: it just means the derivative of the function with respect to the first variable (and you can replace u with x, or whatever your favorite Greek letter is). You are differentiating f with respect to its first variable and then evaluating it at the coordinates (x,y).
Try to convince yourself that
and
are the same functions, just with different "dummy variables".
Introducing things like u and v can be more hassle than help.
This is also an example of why some people prefer using numerical subscripts for a function to denote differentiation with respect to the j-th variable.
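For instance, in that subscript notation the chain-rule step above reads, with no u's or v's at all:

```latex
\frac{d}{dt}\, f(tx, ty) = x\, f_{1}(tx, ty) + y\, f_{2}(tx, ty),
```

where f_j means the derivative of f with respect to its j-th slot; setting t = 1 then gives x f_1(x,y) + y f_2(x,y) directly, with no dummy variables to untangle.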
Last edited by seanieg89; 29 May 2016 at 3:20 PM.
Hm ok yep.
__________________
A lot of the question is omitted for me to have a go at myself
No idea at all how to apply what here.
You can't use the standard definition of the inverse tangent function, because θ can be anything from -π to π.
For that, you need to use a slightly trickier modification, known as atan2(y,x)
Let α be the principal arctangent of y/x
then atan2(y,x)
= α if x is positive
= α+π if x is negative and y is non-negative
= α-π if x and y are negative
= π/2 if x=0 and y is positive
= -π/2 if x=0 and y is negative
= undefined if x=y=0
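The case list above translates directly into code. This is a minimal sketch (the function name is mine), checked against the library's own atan2 in all four quadrants and on the y-axis:

```python
import math

def atan2_from_cases(y, x):
    """Piecewise atan2 following the case list above."""
    if x == 0 and y == 0:
        raise ValueError("atan2 is undefined at the origin")
    if x == 0:
        return math.pi / 2 if y > 0 else -math.pi / 2
    alpha = math.atan(y / x)  # principal arctangent of y/x
    if x > 0:
        return alpha
    # x < 0: shift alpha into the correct half-plane
    return alpha + math.pi if y >= 0 else alpha - math.pi

# Spot-check against the library atan2 in all four quadrants and on the axis:
for (x, y) in [(1, 1), (-1, 1), (-1, -1), (1, -1), (0, 2), (0, -2)]:
    assert math.isclose(atan2_from_cases(y, x), math.atan2(y, x))
```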
Since r is positive, there is no trouble there, as it is simply your standard absolute distance using Pythagoras's Theorem.
Last edited by Paradoxica; 29 May 2016 at 3:31 PM.
If I am a conic section, then my e = ∞
Just so we don't have this discussion in the future, my definition of the natural numbers includes 0.
My notes were really ambiguous. They just recited the chain rule except replaced all the u's and v's with r's and theta's and I have no idea how to manipulate it.
I attempted the matrix inverse as an exercise for myself in the meantime and I got this:
The last step was using what the answers were trying to tell me to prove. I don't actually know that the last step is true.
But I don't get the logic behind it.
These are my notes. Explanations would be greatly appreciated but because idk if the notes are copyrighted I probably can't keep them up for too long.
(Apparently, according to my lecturer, this has never been examined in first year before either...)
Last edited by leehuan; 29 May 2016 at 4:18 PM. Reason: Lecture notes removed
Basically, partials of r and theta wrt x mean we are thinking of r and theta as functions of x and y, and we are differentiating wrt x holding y constant.
If you don't know about Jacobians, note r as a function of x and y is r = sqrt(x^2 + y^2), so (partial)r/(partial)x = x/(sqrt(x^2 + y^2)) = x/r = cos(theta) (sorry for lack of LaTeX, on phone).
For the theta one, tan(theta) = y/x => sec^2(theta) * (partial)theta/(partial)x = -y/x^2, via the chain rule and differentiation wrt x.
Hence, since cos^2(theta) = (x/r)^2, we have by rearrangement
(partial)theta/(partial)x = (-y/x^2)*cos^2(theta) = -y/r^2 = -sin(theta)/r, as y = r sin(theta).
Edit: I see you do know Jacobians it seems. You basically use the 'Inverse Function Theorem' to get derivatives of the inverse map by inverting the Jacobian of the 'forward' map, as seanieg89 was saying.
Last edited by InteGrand; 29 May 2016 at 4:35 PM.
The logic behind it is that if f and g are inverse to each other, then the chain rule applied to g∘f = id says that Dg(f(x)) · Df(x) = I, so the differentials of functions inverse to each other are matrices inverse to each other.
This is exactly how you would prove the "easy" part of the inverse function theorem. Once you have established the differentiability of the inverse map, the differential of your inverse map is forced to be the inverse of the differential of your original map.
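For the polar map this can be checked concretely: invert the 2x2 Jacobian of the "forward" map (r, theta) -> (x, y) by hand and compare it with the partials of the inverse map quoted earlier in the thread. A sketch (the sample point is an arbitrary assumption):

```python
import math

r0, t0 = 2.0, 0.7  # arbitrary sample point (r, theta)

# Jacobian of the forward map (r, theta) -> (x, y) = (r cos t, r sin t):
J = [[math.cos(t0), -r0 * math.sin(t0)],
     [math.sin(t0),  r0 * math.cos(t0)]]

# Invert the 2x2 matrix directly; the determinant works out to r0:
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
J_inv = [[ J[1][1] / det, -J[0][1] / det],
         [-J[1][0] / det,  J[0][0] / det]]

# The inverse should match the partials of the inverse map (x, y) -> (r, theta):
# dr/dx = cos t, dr/dy = sin t, dtheta/dx = -sin t / r, dtheta/dy = cos t / r.
expected = [[math.cos(t0),       math.sin(t0)],
            [-math.sin(t0) / r0, math.cos(t0) / r0]]
for i in range(2):
    for j in range(2):
        assert math.isclose(J_inv[i][j], expected[i][j])
```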
Ohh right. Yep thanks.
But I'll admit to another thing: I embarrassingly forgot entirely what r and theta actually equalled. So IG's method went right over my head... which was also why I didn't comprehend what Para said
So like, this question felt never ending and I aborted.
Part b) is just the chain rule:
Is there any shortcut to save me from computing several product and quotient rules in this one
This is the hardest question of the semester's homework.. lol
1. Approaching along the line x=0 gives you a limit of zero, and approaching along the line y=x gives you a limit of 1/2, so you cannot extend f continuously to the full plane.
2. Just literally partially differentiate by first principles, you get zero for both of the limits (the limits defining the partial derivatives at the origin, which is the only potentially problematic point). The difference quotients are identically zero.
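Assuming from the two quoted limits that the function in question is f(x,y) = xy/(x^2+y^2) with f(0,0) = 0 (the original image is missing, so this is an inference), both observations can be checked numerically:

```python
def f(x, y):
    """Presumably f(x,y) = xy/(x^2 + y^2) away from the origin, 0 at the origin."""
    if x == 0 and y == 0:
        return 0.0
    return x * y / (x * x + y * y)

# 1. Different limits along different lines through the origin:
for t in [1e-2, 1e-4, 1e-6]:
    assert f(0.0, t) == 0.0              # along x = 0 the limit is 0
    assert abs(f(t, t) - 0.5) < 1e-12    # along y = x the limit is 1/2

# 2. Both partial derivatives at the origin exist and equal 0,
#    because the difference quotients vanish identically:
h = 1e-8
assert (f(h, 0.0) - f(0.0, 0.0)) / h == 0.0   # f(h, 0) = 0 for every h
assert (f(0.0, h) - f(0.0, 0.0)) / h == 0.0   # f(0, h) = 0 for every h
```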
Moral of the story:
Saying that all partials of a multivariable function exist at a point is much weaker than saying the function is differentiable at that point. In fact, as this example shows, it is even weaker than continuity! It makes sense when you think about it: being "nice" in the coordinate directions says nothing about how badly behaved you might be in the infinitude of other directions.
Here is a followup question for you:
Suppose f:R^2 -> R has directional derivatives in every direction at the origin, i.e. f(tx,ty) is a differentiable function of the single variable t, for any fixed point (x,y) in the plane.
1. Is f necessarily continuous?
2. Is f necessarily differentiable?
Where differentiability at (0,0) is the statement that there exists a linear map f'(0,0) from R^2 to R with
f(x,y)=f(0,0)+f'(0,0)(x,y)+E(x,y)
where E(x,y)/sqrt(x^2+y^2) -> 0 as (x,y)-> 0.
Last edited by seanieg89; 30 May 2016 at 4:46 PM.
I need guidance (except for part a) please
I need it seriously broken down because the fact that a is a vector scares me
I always forget when to apply the total differential approximation and how to apply it. Any tips with respect to this question?
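The rule of thumb: use the total differential whenever you are asked to estimate a small change, i.e. f(a+dx, b+dy) ≈ f(a,b) + f_x(a,b)·dx + f_y(a,b)·dy. Since the actual question isn't shown, here is a generic illustration; the function f(x,y) = sqrt(x^2+y^2) and the base point (3, 4) are my own assumptions:

```python
import math

# Total differential approximation: for small (dx, dy),
#   f(a + dx, b + dy)  ~  f(a, b) + f_x(a, b)*dx + f_y(a, b)*dy.

def f(x, y):
    return math.hypot(x, y)

a, b = 3.0, 4.0        # assumed base point, f(3, 4) = 5
dx, dy = 0.05, -0.02   # assumed small changes

fx = a / f(a, b)   # partial f / partial x = x / r
fy = b / f(a, b)   # partial f / partial y = y / r

approx = f(a, b) + fx * dx + fy * dy
exact = f(a + dx, b + dy)
assert abs(approx - exact) < 1e-3   # error is second order in (dx, dy)
```

The point of the check at the end is the "tip": the linear approximation is only off by terms of second order in the increments, which is why it is safe for small changes and useless for large ones.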