
What are CPU cores?

wagig

Member
Joined
Jan 29, 2013
Messages
157
Gender
Male
HSC
2014
For all you awesome people out there:
What exactly are CPU cores? For example when a CPU is dual core (meaning it has two cores), what are they?
Does it mean it has 2 Arithmetic Logic Units (ALUs)? Or two physical Central Processing Units (CPUs)?
Or what else?
 

Macqncheese

Banned
Joined
Feb 27, 2013
Messages
102
Gender
Undisclosed
HSC
N/A
Dual Core = 2 cores
Quad core = 4 cores.

Pretty self-explanatory
 

ebbygoo

Active Member
Joined
May 12, 2012
Messages
607
Location
Look Up.
Gender
Male
HSC
2014
I don't do software but I'm pretty sure that's not what he was asking
 

noahwilson

New Member
Joined
Jun 25, 2013
Messages
2
Gender
Female
HSC
2010
This is a good question. A multi-core processor is a single computing component with two or more independent actual central processing units (cores).
 

techlog

New Member
Joined
Oct 13, 2013
Messages
5
Location
Perth, Western Australia
Gender
Male
HSC
2016
When a CPU (Central Processing Unit) is dual core, it has two cores, each with a clock speed displayed in MHz or GHz, so multiply the clock speed by the number of cores and you get how fast the computer can process.

~Techlog
 

hjed

Member
Joined
Nov 10, 2011
Messages
211
Gender
Undisclosed
HSC
2013
It's basically two or more of everything on the same chip, as opposed to hyper-threading, where there are two ALUs but only one CU.
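You can see this distinction from software: the operating system counts logical CPUs, so a hyper-threaded dual-core usually shows up as four. A quick check in Python (standard library only; the physical-core count itself isn't exposed by the stdlib):

```python
import os

# os.cpu_count() reports *logical* CPUs: on a dual-core chip with
# hyper-threading it typically returns 4, because each physical core
# presents itself to the OS as two hardware threads.
logical = os.cpu_count()
print(f"Logical CPUs visible to the OS: {logical}")
```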
 

Phaze

Pleb.
Joined
Nov 7, 2013
Messages
408
Location
Sydney
Gender
Male
HSC
2014
A multi-core processor has multiple CPUs, not ALUs: it is just multiple CPUs made within one chip. The difference between having two single-core processors in two separate sockets and having a multi-core processor in one socket is that cores on the same chip can communicate with each other much faster. This is a simple explanation, but I hope it helps.
 

anomalousdecay

Premium Member
Joined
Jan 26, 2013
Messages
5,797
Gender
Male
HSC
2013
Phaze said: "A multi core processor has multiple CPU's not ALU's. A multicore processor is just multiple CPU's made within one chip..."
This is correct. It is important to note that single-core clock speeds have not increased much in about ten years.

This is why multiple cores are used: the physical limits of processors are being reached.

It is also why most processors are said to have a clock speed of around 3 GHz.

techlog said: "...so times the Amount of MHz or GHz by the amount of cores and you get how fast the computer can process."
Not necessarily. Communication between the physical cores causes some speed loss. If you had 50 cores, the maximum processing throughput could be similar to that of 25 cores because of these losses.
 

GoldyOrNugget

Señor Member
Joined
Jul 14, 2012
Messages
583
Gender
Male
HSC
2012
Yay, theory time! :D

anomalousdecay said: "This is the reason why multiple CPU's are used, as the physical limits of computer's are arising... If you had 50 cores, it would have a similar maximum processing as 25 cores."
anomalousdecay is right. The physical limits in question are mostly related to transistor size and density. You can only pack so many transistors onto a chip without causing massive overheating (the kind that can't be mitigated by fans / water / other cooling devices).

There is also another important factor when it comes to multicore processing. The reason we have multiple cores at all is that a core can only do one thing at a time. But if you're running a modern OS on a single-core computer, you can still be watching a video in your browser and messaging friends and editing a word document at the same time. To give the impression of multitasking, what that single core is actually doing is switching between all the tasks really quickly. So it spends a few microseconds rendering the next video frame, a few microseconds getting input from the keyboard into the word document, etc. This all happens so fast that to the user it seems like everything is happening simultaneously.

This process is known as time multiplexing, and it's controlled by the operating system's scheduler. Each process has at least one thread, and the operating system alternates rapidly between all the different threads. When your computer freezes, that's probably because a thread misbehaved and stopped the scheduler from interrupting it and switching to another thread.
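The scheduler's time multiplexing can be sketched in a few lines of Python, using generators as stand-in "threads" and a queue as the run queue (the names are illustrative, not a real OS API — a real scheduler preempts with timer interrupts rather than waiting for a `yield`):

```python
from collections import deque

log = []  # records which task ran in each time slice

def task(name, steps):
    """A 'thread' that yields control back to the scheduler after each step."""
    for i in range(steps):
        log.append(f"{name}:{i}")
        yield  # hand the CPU back, as a timer interrupt would

def round_robin(tasks):
    """Give each runnable task one time slice in turn until all finish."""
    queue = deque(tasks)
    while queue:
        t = queue.popleft()
        try:
            next(t)          # run one time slice
            queue.append(t)  # not finished yet: back of the run queue
        except StopIteration:
            pass             # task is done; drop it

round_robin([task("video", 3), task("word", 2), task("chat", 2)])
print(log)  # steps interleave: video:0, word:0, chat:0, video:1, ...
```

Even though each "task" is plain sequential code, interleaving their time slices gives the appearance that all three run at once, which is exactly the trick a single core plays.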

When you add another core, you can now do things simultaneously for real! You just multiplex across both cores. So your computer's speed should double, right?

But what happens if, say, MS Word is in the process of reading input from your keyboard at the moment when the scheduler switches to another thread? MS Word's thread will get deactivated and your keystroke might never appear in the document. Reading input from the keyboard is a simple example of a critical section of code: code that accesses a resource shared between multiple threads (e.g. a keyboard or a file) and can't be interrupted while doing so. Importantly, critical sections can't be parallelised - at all. Even if you have two cores, if a thread on one core is accessing input from your keyboard, the thread on the other core has to wait until it is allowed to access your keyboard.
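Here's a minimal Python sketch of a critical section (all names illustrative): two threads share a counter, and a lock forces them to take turns in the read-modify-write step, so one thread waits while the other holds the shared resource.

```python
import threading

counter = 0
lock = threading.Lock()  # guards the critical section below

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:        # only one thread may enter at a time;
            counter += 1  # critical section: read-modify-write

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 -- without the lock, some increments could be lost
```

While one thread is inside the `with lock:` block, the other is blocked, no matter how many cores are available: that serialised stretch is precisely what can't be parallelised.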

Let's say all the threads running on your computer consist of 30% critical sections (that must run in serial) and 70% parallelisable sections. You can keep adding cores to your computer and the 70% of parallelisable code can be shared among those cores -- but every time a critical section runs, everything else still has to stop and wait for it to finish. So as the number of cores -> infinity, the speed of your computer is still bounded by the 30% of critical code. This is an application of what's known as Amdahl's law.
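Amdahl's law is easy to play with numerically: with serial fraction s and n cores, the speedup is 1 / (s + (1 - s)/n). A small Python sketch, plugging in the 30% figure above:

```python
def amdahl_speedup(serial_fraction, cores):
    """Best-case speedup when serial_fraction of the work can't be parallelised."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for n in (1, 2, 4, 50, 1_000_000):
    print(f"{n:>7} cores -> {amdahl_speedup(0.3, n):.2f}x")
# With 30% serial code the speedup approaches but never exceeds
# 1 / 0.3 = 3.33x, no matter how many cores you add.
```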

So you can't just keep adding cores to keep speeding up your computer. A better use of your time is to work out how to reduce the amount of critical code in your programs. This is becoming a hugely important part of computing nowadays.
 

anomalousdecay

Premium Member
Joined
Jan 26, 2013
Messages
5,797
Gender
Male
HSC
2013
GoldyOrNugget said: "anomalousdecay is right. The physical limits in question are mostly related to transistor size and density... So you can't just keep adding cores to keep speeding up your computer."
Luckily I did Age of Silicon. OP, if you find this stuff interesting, I recommend doing Age of Silicon as an elective module for HSC Physics.
This is UNSW comp sci right?
Do you do this type of stuff in electrical as well?
 

GoldyOrNugget

Señor Member
Joined
Jul 14, 2012
Messages
583
Gender
Male
HSC
2012
They probably teach basics of operating systems in many engineering and computing subjects. I'm not sure about electrical eng specifically.
 
