Integration of x^x. NB: not in syllabus

KeypadSDM

B4nn3d
Joined
Apr 9, 2003
Messages
2,631
Location
Sydney, Inner West
Gender
Male
HSC
2003
Trust me, there is no primitive function for x^x, well, none that we can write in terms of elementary functions.

Go integrate 1/ln(x). I'm sure people will appreciate finding a simple way of representing Li(x).
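
A quick sanity check with SymPy (if you have it handy) says the same thing: it hands int x^x dx back unevaluated, and writes int dx/ln(x) in terms of the special function li(x). Just an illustration, not a proof of anything.

Code:
from sympy import symbols, integrate, log

x = symbols('x', positive=True)

# SymPy finds no elementary antiderivative and returns Integral(x**x, x) unevaluated
print(integrate(x**x, x))

# This one should come back expressed via the logarithmic integral li(x)
print(integrate(1/log(x), x))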
 

Euler

Member
Joined
Sep 7, 2003
Messages
81
Li(x) is the logarithmic integral. It is basically the integral of dt/ln(t) from 0 to x.

http://mathworld.wolfram.com/LogarithmicIntegral.html

Strongly related to the Prime Number Theorem.

This makes things interesting, because primes are discrete objects, the sort of thing algebra and number theory deal with: 2, 3, 5, 7, 11, 13, ...

Integration is analysis, dealing with infinitesimals, epsilons and the like.

Yet you use analysis to answer questions in number theory.
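
A rough numerical illustration of that connection, sketched with nothing but the Python standard library (my own quick hack, so treat the Li approximation as crude): count primes with a sieve, and estimate Li(x) by a midpoint rule offset by li(2) ~ 1.045. For x = 10^6 it should print roughly 78498 primes against Li(10^6) ~ 78628.

Code:
import math

def prime_count(n):
    """pi(n): count the primes <= n with a basic sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(sieve)

def li(x, steps=200000):
    """Crude midpoint-rule estimate of Li(x), the integral of dt/ln(t) from 0 to x."""
    a, b = 2.0, float(x)
    h = (b - a) / steps
    area = h * sum(1.0 / math.log(a + (i + 0.5) * h) for i in range(steps))
    return area + 1.045  # li(2) is roughly 1.045

for n in (10**4, 10**5, 10**6):
    print(n, prime_count(n), round(li(n)))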
 

KeypadSDM

B4nn3d
Joined
Apr 9, 2003
Messages
2,631
Location
Sydney, Inner West
Gender
Male
HSC
2003
"What on earth does the distribution of prime numbers have to do with the behavior of subatomic particles?"

And, if the Riemann hypothesis is true, they do.

How cool is that?
 

Grey Council

Legend
Joined
Oct 14, 2003
Messages
1,426
Gender
Male
HSC
2004
Will that be of actual help? I mean, if we discover the Riemann hypothesis to be true, will that trigger a whole new era of discoveries? Or is the whole thing actually just a mathematical curiosity?

And another thing I don't understand is this: if the Riemann hypothesis is such a VERY powerful statement, then why not assume it to be true, and then go on doing whatever?
hrm
 

KeypadSDM

B4nn3d
Joined
Apr 9, 2003
Messages
2,631
Location
Sydney, Inner West
Gender
Male
HSC
2003
Originally posted by Grey Council
And another thing I don't understand is this: if the Riemann hypothesis is such a VERY powerful statement, then why not assume it to be true, and then go on doing whatever?
hrm
I never got that bit either. I just thought maybe the method of proving it would provide insights into whatever...
 

KeypadSDM

B4nn3d
Joined
Apr 9, 2003
Messages
2,631
Location
Sydney, Inner West
Gender
Male
HSC
2003
Originally posted by buchanan
That's precisely what number theorists do.

Many theorems in number theory rely on it being true (or its generalisation to other L-functions).
But what implications does it have for the distribution of prime numbers?

Is it just that it shows Pi(x) = Li(x) + O(x^(1/2) ln x)? Or are there deeper symmetries than that?
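
For what it's worth, that error term is not just implied by RH, it is equivalent to it (von Koch, 1901), so in that precise sense RH is a statement about how closely Li(x) tracks the primes:

\text{RH} \iff \pi(x) = \operatorname{Li}(x) + O\!\left(\sqrt{x}\,\log x\right).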
 

KeypadSDM

B4nn3d
Joined
Apr 9, 2003
Messages
2,631
Location
Sydney, Inner West
Gender
Male
HSC
2003
Originally posted by buchanan
Well, the Riemann zeta function has an Euler product

zeta(s) = prod_p 1/(1 - p^(-s))

for primes p.
But what does that do for the distribution of primes? Or something else? We already knew "The Golden Key" long before Riemann even made his passing comment that 'they' might all lie on the critical line.

(Note: 'they' is the 'non-trivial-zeroes-of-the-extended-zeta-function')
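
For anyone following along, the "Golden Key" is just that Euler product, and it drops straight out of unique factorisation (standard identity, sketched from memory):

\zeta(s) = \sum_{n\ge 1}\frac{1}{n^{s}}
         = \prod_{p\ \mathrm{prime}}\left(1+\frac{1}{p^{s}}+\frac{1}{p^{2s}}+\cdots\right)
         = \prod_{p\ \mathrm{prime}}\frac{1}{1-p^{-s}}, \qquad \mathrm{Re}(s)>1.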
 

martin

Mathemagician
Joined
Oct 15, 2002
Messages
75
Location
Brisbane
Gender
Male
HSC
2002
In response to the original question about integrating x^x, I found this somewhere when I was looking for some questions for my Analysis midsemester. It looks a bit complicated, to say the least, but I haven't actually read it all.

It claims to be a proof that int x^x dx cannot be expressed in terms of elementary functions.

tordm@vana (Tord G Malmgren) writes:
>>I could really use a solution to the problem of integrating x^x dx
>>(x to the xth). It has been keeping me up late at the math library
>>for days, & I can not find a solution or a good reference/article on
>>this.

> hmm.. isn't this extremely basic? you might be killing a mosquito
>with a sledgehammer. Couldn't you write x^x=exp(x ln x)?

I really really really ought to polish this up for FAQ inclusion. I
am combining two old articles of mine, the first giving and sketching
the general Liouville theory, the second applying this theory to x^x:
========================================================================
We give a fairly complete sketch of the proof that certain functions,
including the asked for one, are not integrable in elementary terms. The
central theorem is due to Liouville in 1835. His proof was analytic. The
sketch below is mostly algebraic and is due to Maxwell Rosenlicht. See his
papers in the _Pacific Journal of Mathematics_, 24, (1968) 153-161 and 65,
(1976), 485-492.

WARNING: Prerequisites for understanding the proof are a first-year graduate
course in algebra, and a little complex analysis. No deep results are used,
but I cannot take the time to explain standard notions or all the deductions.

Notation: a^n is "a power n", a_n is "a sub n". C is the complex numbers,
for fields F, F[x] is the ring of polynomials in x OR an algebraic extension
of F, F(x) is the field of rational functions in a transcendental x, M is
the field of meromorphic functions in one variable. If f is a complex
function, I(f) will denote an antiderivative of f.

A differential field is a field F of characteristic 0 with a derivation.
Thus, in addition to the field operations + and *, there is a derivative
mapping ':F->F such that (a+b)'=a'+b' and (ab)'=a'b+ab'. Two standard
examples are C(z) and M with the usual derivative map. Notice a basic
identity (logarithmic differentiation) holds:

[(a_1 ^ k_1) * ... * (a_n ^ k_n)]' / [(a_1 ^ k_1) * ... * (a_n ^ k_n)] = k_1 (a_1'/a_1) + ... + k_n (a_n'/a_n)

The usual rules like the quotient rule also hold. If a in F satisfies
a'=0, we call a a constant of F. The set of constants of F is called
Con(F), and forms a subfield of F.
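
[A concrete instance of that identity, worked out here for illustration and not part of the quoted article: in C(z) take a_1 = z, a_2 = z+1, k_1 = 2, k_2 = 3, so that]

\frac{\bigl[z^{2}(z+1)^{3}\bigr]'}{z^{2}(z+1)^{3}} = 2\cdot\frac{z'}{z} + 3\cdot\frac{(z+1)'}{z+1} = \frac{2}{z}+\frac{3}{z+1}.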

The basic idea in showing something has no elementary integral is to
reduce the problem to a sequence of differential fields F_0, F_1, etc.,
where F_0 = C(z), and F_(i+1) is obtained from F_i by adjoining one
new element t. t is obtained either algebraically, because t satisfies
some polynomial equation p(t)=0, or exponentially, because t'/t=s' for
some s in F_i, or logarithmically, because t'=s'/s is in F_i. Notice
that we don't actually take exponentials or logarithms, but only attach
abstract elements that have the appropriate derivatives. Thus a function
f is integrable in elementary terms iff such a sequence exists starting
with C(z).

Just so there is no confusion, there is no notion of "composition" involved
here. If you want to take log s, you adjoin a transcendental t with the
relation t'=s'/s. There is no log function running around, for example,
except as motivation, until we reach actual examples.
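
[Illustration, not from the article: the tower used later for the error-function integral would be the following, where t plays the role of exp(-z^2) but only through its derivative relation.]

F_0 = \mathbb{C}(z), \qquad F_1 = F_0(t) \ \text{with}\ \frac{t'}{t} = (-z^{2})' = -2z.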

We need some easy lemmas. Throughout the lemmas F is a differential field,
and t is transcendental over F.

Lemma 1: If K is an algebraic extension field of F, then there exists a
unique way to extend the derivation map from F to K so as to make K into
a differential field.

Lemma 2: If K=F(t) is a differential field with derivation extending F's,
and t' is in F, then for any polynomial f(t) in F[t], f(t)' is a polynomial
in F[t] of the same degree (if the leading coefficient is not in Con(F))
or of degree one less (if the leading coefficient is in Con(F)).

Lemma 3: If K=F(t) is a differential field with derivation extending F's,
and t'/t is in F, then for any a in F, n a positive integer, there exists
h in F such that (a*t^n)'=h*t^n. More generally, if f(t) is any polynomial
in F[t], then f(t)' is of the same degree as f(t), and is a multiple of
f(t) iff f(t) is a monomial.

These are all fairly elementary. For example, (a*t^n)'=(a'+n*a*t'/t)*t^n
in lemma 3. The final 'iff' in lemma 3 is where transcendence of t comes
in. Lemma 1 in the usual case of subfields of M can be proven analytically
using the implicit function theorem.
--------------------------------------------------------------------------
MAIN THEOREM. Let F,G be differential fields, let a be in F, let y be in G,
and suppose y'=a and G is an elementary differential extension field of F,
and Con(F)=Con(G). Then there exist c_1,...,c_n in Con(F), u_1,...,u_n, v
in F such that
a = c_1 (u_1'/u_1) + ... + c_n (u_n'/u_n) + v'.   (*)

In other words, the only functions that have elementary anti-derivatives
are the ones that have this very specific form.
--------------------------------------------------------------------------
This is a very useful theorem for proving non-integrability. In the usual
case, F,G are subfields of M, so Con(F)=Con(G)=C always holds.

Proof:
By assumption there exists a finite chain of fields connecting F to G
such that the extension from one field to the next is given by performing
an algebraic, logarithmic, or exponential extension. We show that if the
form (*) can be satisfied with values in F2, and F2 is one of the three
kinds of allowable extensions of F1, then the form (*) can be satisfied
in F1. The form (*) is obviously satisfied in G: let all the c's be 0, the
u's be 1, and let v be the original y for which y'=a. Thus, if the form
(*) can be pulled down one field, we will be able to pull it down to F,
and the theorem holds.

So we may assume without loss of generality that G=F(t).

Case 1: t is algebraic over F. Say t is of degree k. Then there are
polynomials U_i and V such that U_i(t)=u_i and V(t)=v. So we have

a = c_1 (U_1(t)'/U_1(t)) + ... + c_n (U_n(t)'/U_n(t)) + V(t)'.

Now, by the uniqueness of extensions of derivatives in the algebraic case,
we may replace t by any of its conjugates t_1,..., t_k, and the same equation
holds. In other words, because a is in F, it is fixed under the Galois
automorphisms. Summing up over the conjugates, and converting the U'/U
terms into products using logarithmic differentiation, we have

k*a = c_1 ([U_1(t_1)*...*U_1(t_k)]' / [U_1(t_1)*...*U_1(t_k)]) + ... + [V(t_1)+...+V(t_k)]'.

But the expressions in [...] are symmetric polynomials in t_i, and as
they are polynomials with coefficients in F, the resulting expressions
are in F. So dividing by k gives us (*) holding in F.

Case 2: t is logarithmic over F. Because of logarithmic differentiation
we may assume that the u's are monic and irreducible in t and distinct.
Furthermore, we may assume v has been decomposed into partial fractions.
The fractions can only be of the form f/g^j, where deg(f)<deg(g) and g
is monic irreducible. The fact that no terms outside of F appear on the
left hand side of (*), namely just a appears, means a lot of cancellation
must be occurring.

Let t'=s'/s, for some s in F. If f(t) is monic in F[t], then f(t)' is also
in F[t], of one less degree. Thus f(t) does not divide f(t)'. In particular,
all the u'/u terms are in lowest terms already. In the f/g^j terms in v,
we have a g^(j+1) denominator contribution in v' of the form -jfg'/g^(j+1).
But g doesn't divide fg', so no cancellation occurs. But no u'/u term can
cancel, as the u's are irreducible, and no (xx)/g^(j+1) term appears in
a, because a is a member of F. Thus no f/g^j term occurs at all in v. But
then none of the u's can be outside of F, since nothing can cancel them.
(Remember the u's are distinct, monic, and irreducible.) Thus each of the
u's is in F already, and v is a polynomial. But v' = a - expression in u's,
so v' is in F also. Thus v = b*t + c for some b in Con(F), c in F, by lemma
2. Then

a = c_1 (u_1'/u_1) + ... + c_n (u_n'/u_n) + b (s'/s) + c'

is the desired form. So case 2 holds.

Case 3: t is exponential over F. So let t'/t=s' for some s in F. As in
case 2 above, we may assume all the u's are monic, irreducible, and distinct
and put v in partial fraction decomposition form. Indeed the argument is
identical as in case 2 until we try to conclude what form v is. Here lemma
3 tells us that v is a finite sum of terms b*t^j where each coefficient is
in F. Each of the u's is also in F, with the possible exception that one
of them may be t. Thus every u'/u term is in F, so again we conclude v'
is in F. By lemma 3, v is in F. So if every u is in F, a is in the desired
form. Otherwise, if one of the u's, say u_n, is actually t, then

a = c_1 (u_1'/u_1) + ... + c_(n-1) (u_(n-1)'/u_(n-1)) + (c_n s + v)'

is the desired form. So case 3 holds.
------------------------------------------------------------------QED------
This proof, by the way, is a LOT easier than it looks. Just work out some
examples, and you'll see what's going on. (If this were a real expository
paper, such examples would be provided. Maybe it's better this way. Indeed,
if anybody out there takes the time to work some out and post them, I would
be much obliged.)
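
[In that spirit, one tiny worked example of the form (*), added for illustration and not from the article: partial fractions already put 1/(z(z+1)) into exactly that shape, with c_1 = 1, c_2 = -1, u_1 = z, u_2 = z+1 and v = 0, matching the elementary antiderivative log(z) - log(z+1).]

\frac{1}{z(z+1)} = \frac{1}{z}-\frac{1}{z+1} = 1\cdot\frac{(z)'}{z} + (-1)\cdot\frac{(z+1)'}{z+1} + 0'.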

So how do you actually go about using this theorem? Suppose you want to
integrate f*exp(g) for f,g in C(z), g non zero. [This isn't yet the asked
for problem.] Let t=exp(g), so t'/t=g'. Let F=C(z)(t), G=any differential
extension field containing an antiderivative of f*t. [Note that t is in
fact transcendental over C(z): g is rational and non-zero, so it has a
pole (possibly at infinity) and so t has an essential singularity and can't
be algebraic over C(z).] Is G an elementary extension? If so, then

f*t = c_1 (u_1'/u_1) + ... + c_n (u_n'/u_n) + v'

where the c_i, u_i, and v are in F. Now the left hand side can be viewed
as a polynomial in C(z)[t] with exactly one term. We must identify the
coefficient of t in the right hand side and get an equation for f. But
the first n terms can be factored until the u_i's are linear (using the
logarithmic differentiation identity to preserve the abstract form). As
for the v' term, long divide and use partial fractions to conclude v is a
sum of monomials: if v had a linear denominator other than t, raised to
some power in its partial fraction decomposition, its derivative would be
one higher power, and so cannot be cancelled with anything from the u_i
terms. (As in the proof.) If w is the coefficient of t in v, we have
f=w'+wg' with w in C(z). Solving this first order ODE, we find that
w=exp(-g)*I(f*exp(g)). In other words, if an elementary antiderivative
can be found for f*exp(g), where f,g are rational functions, then it is of
the form w*exp(g) for some rational function w. [Notice that the conclusion
would fail for g equal to 0!]
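
[A quick check of that conclusion on an example that does integrate, again for illustration only: take f = 2z and g = z^2, whose antiderivative is indeed a rational multiple of exp(g).]

\int 2z\,e^{z^{2}}\,dz = e^{z^{2}} = w\,e^{z^{2}}\ \text{with}\ w = 1, \qquad w' + w g' = 0 + 2z = f.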

For example, consider f=1 and g=-z^2. Now exp(z^2)*I(exp(-z^2)) has no
poles (except perhaps at infinity), so if it is a rational function, it
must be a polynomial. So let (p(z)*exp(-z^2))'=exp(-z^2). One quickly
verifies that p'-2zp=1. But the only solution to that ODE is the error
function I(exp(-z^2)) itself (within an additive constant somewhere)!
And the error function is NOT a polynomial! (Proof? OK, for one thing,
its Taylor series obtained by termwise integration is infinite. For
another, its derivative is an exponential.)
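
[If you want a CAS to agree, a quick check with SymPy (assuming it is installed) reports the same obstruction: the antiderivative comes out in terms of the special function erf, not elementary functions.]

Code:
from sympy import symbols, exp, integrate

z = symbols('z')
# SymPy returns sqrt(pi)*erf(z)/2 -- the error function, not an elementary expression
print(integrate(exp(-z**2), z))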

As an exercise, prove that I(exp(z)/z) is not elementary. Conclude that
neither is I(exp(exp(z))) nor I(1/log(z)).

For a slightly harder exercise, prove that I(sin(z)/z) is not elementary.
Conclude that neither is I(sin(exp(z))).

Finally, we consider the case I(z^z).

So this time, let F=C(z,l)(t), the field of rational functions in z,l,t,
where l=log z and t=exp(zl)=z^z. Note that z,l,t are algebraically
independent. (Choose some appropriate domain of definition.) Then
t'=(1+l)t, so for a=t in the above situation, the partial fraction
analysis (of the sort done in the previous posts) shows that the only
possibility is for v=wt+... to be the source of the t term on the left,
with w in C(z,l).
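
[Checking the derivative claim directly, for illustration: with l = log(z) and t = e^{z l} = z^z,]

t' = \frac{d}{dz}\,e^{z\log z} = (\log z + 1)\,e^{z\log z} = (1+l)\,t.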

So this means, equating t coefficients, 1=w'+(l+1)w. This is a first
order ODE, whose solution is w=I(z^z)/z^z. So we must prove that no
such w exists in C(z,l). So suppose (as in one of Ray Steiner's posts)
w=P/Q, with P,Q in C[z,l] and having no common factors. Then z^z=
(z^z*P/Q)'=z^z*[(1+l)PQ+P'Q-PQ']/Q^2, or Q^2=(1+l)PQ+P'Q-PQ'. So Q|Q',
meaning Q is a constant, which we may assume to be one. So we have
it down to P'+P+lP=1.

Let P=Sum[P_i l^i], with P_i, i=0...n in C[z]. But then in our equation,
there's a dangling P_n l^(n+1) term, a contradiction.
--
-Matthew P Wiener (weemba@sagi.wistar.upenn.edu)
 

Euler

Member
Joined
Sep 7, 2003
Messages
81
Thank you for that, Martin! I had that paper in my hands a few years ago, but I didn't even get past the first page. Only when I reread the first bit (of your post) did I remember that Liouville had something to do with it.

Your post will make a lot more sense if I read it with pen and paper.

Well, it seems that the original question is now settled!
 

+Po1ntDeXt3r+

Active Member
Joined
Oct 10, 2003
Messages
3,527
Gender
Undisclosed
HSC
2003
Originally posted by KeypadSDM
It was a good birthday present from my girlfriend.
:| I wasn't even allowed to mention maths (or The Simpsons) to any of my girlfriends... and some did 4U..
 
