Maths Discussion Thread

seanieg89
Well, I was thinking the rest of the forum could be used for posting maths problems, discussing maths events, specific proofs and things like that. I intended this thread to be for more general mathematical theory and random tidbits from every area of discussion.

But I guess if you wish then it can't continue (please consider having a thread like this, though).
I agree. I think there are definite benefits to a thread like this (and it serves a purpose distinct from that of the more specific threads in this subforum).
Not the least of which is that it is much less intimidating to ask a potentially 'stupid' question in discussions like this than it is to create your own topic for public scrutiny.

Anyway, I find that some of my most fruitful/enjoyable mathematical discussions have been of an undirected nature, the sort of coffee-room "what if?" type questions rather than really specific and technical discussions.

Of course it's up to the mods, but I just thought I would throw in my two cents.
 

Sy123
I've been reading up on Gödel's incompleteness theorems, and I find them a very interesting topic. How do we axiomatise basic arithmetic operations, if it's even possible?

Moreover, what do mathematical realists have to say against the mathematical idealists, who claim that we invent maths rather than discover it as realists propose? Aren't mathematical idealists technically saying that even the axioms we propose are non-rigorous (and hence that mathematics in general is non-rigorous)?

I've also been reading up on the Peano axioms in relation to rigour. The first two statements are:

There is a natural number 0.
Every natural number a has a natural number successor, denoted by S(a). Intuitively, S(a) is a+1.


Paying attention to the part 'Intuitively, S(a) is a+1': is it saying that we define a+1 to be S(a), or S(a) to be a+1?

What I'm asking is: does this statement give a definition of basic arithmetic, or does basic arithmetic give a definition of the succession of the natural numbers?
And doesn't Gödel's incompleteness theorem render it all non-rigorous?

Thanks
 

seanieg89
I haven't studied this in much depth before. I think an axiomatisation of arithmetic is just any formal system roughly equivalent to the Peano axioms.

I haven't read much about mathematical idealism; what are its main features? It's often not as simple as pigeonholing mathematicians into one of several philosophical views.

In the Peano characterisation of the naturals we assert the existence of successors. The concept of addition is secondary to this, and is defined recursively from the successor operation. We have no prior notion of addition; that would be circular logic.

And I am unsure exactly what you mean by rigorous/non-rigorous. It is true that Gödel's incompleteness theorem makes it impossible for us to prove the consistency of Peano's axioms from within the system itself, which is rather unfortunate. Most mathematicians now set this aside and treat it as an unresolvable issue. No foundational system of mathematics is completely satisfactory to everyone.
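
To make the recursion concrete, here's a toy Python sketch (illustrative only -- a genuinely foundational treatment wouldn't have '+' available, so S here cheats by using it):

Code:
def S(a):
    # the successor operation, taken as primitive in the Peano axioms
    return a + 1  # cheating: '+' is only used to simulate the primitive S

def add(a, b):
    # addition defined recursively from S alone:
    #   a + 0    = a
    #   a + S(b) = S(a + b)
    if b == 0:
        return a
    return S(add(a, b - 1))  # b - 1 recovers the b' with S(b') = b

print(add(3, 4))  # 7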
 

Sy123
Mathematical idealism is pretty much intuitionism, that is:

'There are no non-experienced mathematical truths.' So it is pretty much saying that mathematics is simply a creation of the human mind: although we build it from set axioms and derive properties from there, none of it would exist if we were not there to invent it. For example, you have said before that we cannot say the natural numbers exist in the physical world; could we hence conclude that the natural numbers were invented by humans to count things?

The properties that arise in the natural numbers are only there because we derive them from the axioms (we don't quite make them up), and no such thing existed beforehand.

And since Gödel's incompleteness theorem makes it impossible to prove the consistency of the Peano axioms, can we not say that all of number theory is based on an unjustified axiom, and hence non-rigorous?

Looking at other numbers, however: how can we establish that even the real numbers exist, or infinitesimals, on which calculus was originally built? I think what intuitionism argues is that they don't exist; we just construct them.

Or rather, from the mathematical realist's point of view, we discover maths instead of making it up. Are we too simple-minded to be able to justify the existence of the natural numbers? Are we just a species that stumbled on a natural phenomenon and is unable to justify its existence, or a species that created mathematics for its own benefit?

I personally find it a very interesting discussion, but as you have clarified, most mathematicians these days do not worry about such foundational issues.

If we can conclude that number theory is based on inconsistent axioms, for what other areas of mathematics does this also hold true? Topology/geometry maybe?
(So many question marks in this post)
 

seanieg89
"An unjustified axiom"? In what way can any axiom really be justified? It is simply a list of formal rules defining an object, from which we deduce these objects properties. The word non-rigourous is a bit vague for my liking, but in essense yeah we cannot set maths on as firm a foundation as we might like. Or more precisely, we cannot "prove" that the foundations satisfy all properties that we would desire in a formal way, all such arguments must go into the realm of philosophy rather than formalism. To say that this implies that mathematics is non-rigorous is a bit cynical in my opinion, it just restricts the nature of this rigour.

Well, many of the questions you raise are philosophy rather than mathematics; things like the 'existence' of numbers/objects and the 'truth' of statements are difficult to define. What does it mean for an object to exist? My personal philosophy: I don't make any claims about the existence of the objects I study in any real sense, or about the truth of any statements. In writing down the axioms of a formal system, I am exploring an imaginary, perfect and purely formal world in which these objects exist and behave exactly as I stipulate. I don't talk about truth, I talk about theoremhood: whether we can, by a finite sequence of formal logical laws, prove any given statement. Proofs are merely manipulations of strings of meaningless symbols by a predetermined set of rules.

And we cannot conclude that number theory is based on inconsistent axioms; we just cannot prove the consistency of number theory. The same is true of any sufficiently complex logical system. I'd recommend contacting a metamathematician (someone who specialises in mathematical logic) and reading a lot more if you want to learn more, as I don't really know much beyond this.
 

iBibah
What do these symbols mean: O( ) and o( )?

And also: ~
 

Sy123
"An unjustified axiom"? In what way can any axiom really be justified? It is simply a list of formal rules defining an object, from which we deduce these objects properties. The word non-rigourous is a bit vague for my liking, but in essense yeah we cannot set maths on as firm a foundation as we might like. Or more precisely, we cannot "prove" that the foundations satisfy all properties that we would desire in a formal way, all such arguments must go into the realm of philosophy rather than formalism. To say that this implies that mathematics is non-rigorous is a bit cynical in my opinion, it just restricts the nature of this rigour.

Well many of the questions you raise are philosophy rather than mathematics, things like the 'existence' of numbers/objects and the 'truth' of statements are difficult to define. What does it mean for an object to exist? My personal philosophy: I don't make any claims about the existence of the objects I study in any real sense or the truth of any statements. In writing down axioms of a formal system I am exploring an imaginary perfect and purely formal world in which these objects exist and behave exactly as I stipulate. I don't talk about truth, I talk about theoremhood...whether we can by a finite sequence of formal logical laws prove any given statement. Proofs are merely manipulations of strings of meaningless symbols by a predetermined set of rules.

And we cannot conclude NT is based off inconsistent axioms, we just cannot prove the consistency of NT. And the same is true of any sufficient complex logical system. I'd recommend contacting a metamathematician / someone that specialises in mathematical logic and reading a lot more if you want to learn more as I don't really know much more than this.
I see, so yeah, the questions are more philosophical than anything.
 

GoldyOrNugget
Big-O and little-o are notations generally used in asymptotic algorithm analysis for representing the 'cost' of a particular sequence of operations in terms of the amount of time or space it takes up. They characterise and group functions based on their growth rates. Formally (I might mess up this definition, it's been a few years since I went through it properly): f(x) = O(g(x)) if there exists some x_0 such that |f(x)| <= C*g(x) for all x > x_0, for some (arbitrarily large) constant C. I've never required little-o notation, but I believe it applies the above definition with 'some constant C' replaced by 'any constant C'.

These definitions receive extensive use in computer science, because they allow one to abstract out the details of algorithms to obtain a general overview of their growth rate.

Example! Consider the following procedure to determine all the prime numbers from 2..n:

Code:
for each potential prime p from 2..n:
  could_be_prime = true
  for each potential divisor d from 2..p-1:
    if p == 0 (mod d):
      // we have found a divisor, so this value of p cannot be prime
      could_be_prime = false
  // we have gone through all potential divisors now.
  // if could_be_prime is still set, p has no divisors between 2 and p-1
  if could_be_prime:
    this value of p is prime!
For each potential prime (roughly n of these), we have to iterate through up to n numbers. We say this algorithm has time complexity O(n^2). There are minor speedups that can be made: for example, one can observe that we only need to test values of d up to half of p. However, because this doesn't change the growth rate of the function, we don't care -- it's still O(n^2). If we were to observe that we only need to test values of d up to the square root of p, this WOULD change the growth rate of the function. It is now O(n*sqrt(n)) = O(n^1.5).
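
A quick sketch of how that square-root version might look in Python 3 (just an illustration; math.isqrt needs Python 3.8+):

Code:
import math

n = 100
for p in range(2, n):
    could_be_prime = True
    # only test divisors up to sqrt(p): any factor above sqrt(p) pairs with one below it
    for d in range(2, math.isqrt(p) + 1):
        if p % d == 0:
            could_be_prime = False
            break  # one divisor is enough to rule p out
    if could_be_prime:
        print(p, 'is prime!')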

Theoretical computer science (the area I'm interested in, which is actually a branch of mathematics and not a science at all) is all about discovering, comparing and classifying algorithms like these.

This is actually an area that I know a lot about :) unlike most of what's discussed in this thread.
 

iBibah
Thanks for the in-depth response!

I actually came across these symbols in a number theory book. Still a little confused, so I will take the time later to go over what you have said and see how they apply in number theory (like the example you gave). Also, is there a way to run the code given, or is there more to it than that?
 

GoldyOrNugget
If you found them in a number theory book, be aware that they might be using alternative and/or stricter definitions than I am.

Here's some equivalent code that runs in Python:

Code:
n = 100
for p in xrange(2, n):
  could_be_prime = True
  for d in xrange(2, p):
    if p % d == 0:
      could_be_prime = False
  if could_be_prime:
    print p, 'is prime!'
To run it, go to http://codepad.org/, paste it in, select 'Python' on the left, tick 'Run code' next to the submit button, then press submit.
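
Note that the above is Python 2 (the xrange and print-statement syntax), which is what codepad runs. A Python 3 equivalent, if you're running it locally, would be:

Code:
n = 100
for p in range(2, n):            # xrange became range in Python 3
    could_be_prime = True
    for d in range(2, p):
        if p % d == 0:
            could_be_prime = False
    if could_be_prime:
        print(p, 'is prime!')    # print is a function in Python 3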
 

iBibah
I think what you were saying is most likely correct; I just need to take some time to go through it.

Btw that code, though simple, is pretty cool for small values of n.
 

seanieg89
Things like big-O notation vary slightly depending on context... generally the source will give an explicit definition of the notation when introducing it.

Most commonly, if f:R->R is a function, we write O(f(x)) for an arbitrary function that is bounded above by a positive constant multiple of f(x) for all sufficiently large x.
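
Written out in LaTeX-style symbols, that definition reads:

Code:
f(x) = O(g(x)) \iff \exists\, C > 0,\ \exists\, x_0 \text{ such that } |f(x)| \le C\, g(x) \text{ for all } x \ge x_0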

E.g.:
Any real quadratic polynomial is O(x^2).
The harmonic series up to n is O(log n).
If the Riemann hypothesis is true, then the number of primes less than x is equal to Li(x) + O(x^{1/2} log(x)), where Li is the logarithmic integral.
 

GoldyOrNugget
Most commonly, if f:R->R is a function, we write O(f(x)) for an arbitrary function that is bounded above by a positive constant multiple of f(x) for all sufficiently large x.
Hehe, you mathematicians. For us, O(n^2) pretty much means 'about n^2'.
 

seanieg89
Hehe, you mathematicians. For us, O(n^2) pretty much means 'about n^2'.
Haha but what if we bound the growth of something above by something of the order of n^2 but we don't know that this is the best possible bound? Some notation is needed here surely :).

A mathematical version of "about" is the relation ~. We say f ~ g if lim f(x)/g(x) = 1 as x tends to infinity.
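
In symbols, with two standard examples (the second is the prime number theorem):

Code:
f \sim g \iff \lim_{x \to \infty} \frac{f(x)}{g(x)} = 1
% e.g. x^2 + 3x \sim x^2, and \pi(x) \sim x / \log x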
 

GoldyOrNugget
Does the computer define what is 'about' or would the programmer define how much is about?
I think the manner in which programmers/computer scientists judge big-O complexity is actually equivalent to the mathematical definition -- we just think about it in 'about' terms, because they serve us a practical purpose. If I'm solving a problem and I have an algorithm that runs in O(n) time and one that runs in O(n^2) time, I don't care about mappings between reals and upper bounds for some constant etc. etc., I care that the former will run in about a second while the latter will run in about 3 years. Is this what you're asking?

Haha but what if we bound the growth of something above by something of the order of n^2 but we don't know that this is the best possible bound? Some notation is needed here surely :).
Do you mean we've bounded it by, say, O(n^2) but we think we can reduce that to O(n)? Or that we've bounded it by O(n^2) but we think it might actually only take n^2/3 operations on average?

In the former case, it's usually pretty easy to judge the complexity of an algorithm correctly (the hard part is coming up with the algorithm in the first place). You just sort of think about the number of operations you're performing relative to input size.

In the latter case, as the example above demonstrates, for the kind of values of n you're dealing with (in informatics, bounds will be given in the problem statement; in real life, Google could be dealing with n="number of pages on the internet"), there's little need to get as tight a bound as possible. The loose indications given by big-O are good enough.
 

seanieg89
I meant lowering the exponent of n rather than the constant; mathematicians seldom care too much about the constant either. We also think about it as being 'about' blah, by the way; we just need some kind of formal definition so this notion can be used in proofs etc.
 

GoldyOrNugget
In most cases, the structure of the code will reflect the time complexity. One loop that goes up to n will be O(n), two sets of these nested loops (like in the prime number example) will generally be O(n^2), etc.

Recursive functions are a bit harder -- you usually have to visualise what's happening in your head. If you're evaluating something like the Fibonacci function f(n) = f(n-1) + f(n-2) (with appropriate base cases), it's O(2^n). If you're splitting a range in half each time (e.g. the efficient algorithm to find the kth largest item in a list), it might be n + n/2 + n/4 + n/8 + ... = O(n).
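
To see that exponential blow-up concretely, here's a small Python 3 sketch (my own illustration) that counts the calls the naive Fibonacci recursion makes:

Code:
calls = 0

def fib(n):
    # naive recursion: the call tree roughly doubles at each level, i.e. O(2^n) calls
    global calls
    calls += 1
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(20), calls)  # prints: 6765 21891 -- over twenty thousand calls just for n = 20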

Here's a clever prime generating algorithm where the time complexity isn't obvious:

1. visualise a large table of consecutive integers starting at 2 and ending at N. let the 'confirmed prime' be 2
2. cross out all multiples of the 'confirmed prime' (except itself)
3. choose the new 'confirmed prime' by finding the first uncrossed number after the current 'confirmed prime'
4. if there exists such a number, go back to (2)

On the first time through, the confirmed prime is 2, so you'll cross off 4,6,8,10,...
On the second time through, the confirmed prime is 3, you'll cross off 6,9,12,...
On the third time through, the confirmed prime is 5, so you'll cross off 10,15,20...

Eventually, only primes will be uncrossed. Furthermore, our first set of crossing-out took N/2 (roughly), our second set took N/3, our third set took N/5, ...

So overall, we took under N(1/2 + 1/3 + 1/4 + ... + 1/N), and as you pointed out, the harmonic series up to n is O(log(n)), so this whole algorithm is O(N log N). This is AWESOME because although conceptually this seems about as easy as the O(n^2) algorithm, if we're finding the first million primes, this will do it in well under a second, whereas the O(n^2) algorithm will take several hours. So cool!
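
For the curious, one way to write that crossing-out procedure in Python 3 (a sketch of the steps above, not code from the post):

Code:
def sieve(n):
    # Sieve of Eratosthenes: repeatedly cross out multiples of each confirmed prime
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, n + 1):
        if is_prime[p]:                       # p survived all crossings-out: confirmed prime
            for m in range(2 * p, n + 1, p):  # cross out 2p, 3p, 4p, ...
                is_prime[m] = False
    return [p for p in range(2, n + 1) if is_prime[p]]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]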
 

seanieg89
An algorithmic version of the Sieve of Eratosthenes :).

PS: The sum of the reciprocals of the primes up to n is asymptotic to log(log n) :), even better than log n.
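
(This is Mertens' second theorem; written out:)

Code:
\sum_{p \le n} \frac{1}{p} = \log \log n + M + o(1)
% where M is approximately 0.2615, the Meissel-Mertens constant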
 
