# Question

#### Morgues

##### Member
Came across this in a trial paper.......anyone know the answer?

#### Morgues

##### Member
It's attached.

The question is

The following table describes the numerical data types in a particular programming language
For each of the data types shown, explain
1. How the data is stored in memory
2. Why the maximum value is the value given

#### Attachments

• 45.7 KB

#### del

##### Member
1. Assuming it's a simple data type... it's stored in a memory location the size of the data type... so not a reference variable.

2. Wild guess... but isn't it to do with two's complement? Hence the negative and positive boundary/range.

That's just IMO.

#### Morgues

##### Member
It does ask how EACH one is stored in memory... I've looked in the textbook but nowhere does it say how simple data types are stored.

The size of the maximum, I was thinking, has got to do with the number of bits... although really I have no idea.

#### del

##### Member
For:

1. I was basing the simple data types on the Java programming language... simple data types basically = primitives in Java, and I've got a Sun book which mentions stuff like that and how it's stored. For some reason Sun had a whole chapter on the differences between the storage of primitives and objects... Hmm... then again, the reals would be stored with the exponent and the mantissa... if I recall correctly... bit foggy.

2. Based on two's complement, in that a data type can have both positive and negative values... I'll quickly explain. On the basis of an integer being 2 bytes, that equates to 16 bits:

2^16 = 65536
65536 / 2 = 32768

so the range is -32768 to +32767
it's 32767 (not 32768) because 0 takes up one of the non-negative bit patterns...

if you work it out for the rest it makes sense... sorta. It made more sense when my teacher explained it.
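The calculation above can be sketched in a few lines of Java. This is a sketch under the assumption of two's complement representation, which Java's `short` type (a 16-bit integer) happens to use, so the built-in constants give the same answer:

```java
// Range of a 2-byte (16-bit) two's complement integer, as in the post above.
public class IntRange {
    public static void main(String[] args) {
        int bits = 16;
        long patterns = 1L << bits;     // 2^16 = 65536 distinct bit patterns
        long min = -(patterns / 2);     // -32768
        long max = (patterns / 2) - 1;  // +32767: zero uses one non-negative pattern
        System.out.println(patterns + " values, range " + min + " to " + max);
        // Java's short is exactly this type, so the constants agree:
        System.out.println(Short.MIN_VALUE + " to " + Short.MAX_VALUE);
    }
}
```

Running it prints `65536 values, range -32768 to 32767` followed by `-32768 to 32767`.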

#### Morgues

##### Member
Yep, I get it...
but how would you explain the range of real numbers... single and double precision?

#### del

##### Member
Hmm, this is from what we got in class (a sheet)

'For single precision floating point numbers it is common practice to use one byte for the exponent and three bytes for the mantissa. Double precision floating point numbers consist of one byte for the exponent and seven bytes for the mantissa'
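As an aside, the sheet's byte split is one convention; in practice, languages like Java follow the IEEE 754 standard, where a single-precision float has 1 sign bit, 8 exponent bits and 23 mantissa bits. A small sketch that pulls those fields out of a float's raw bits:

```java
// Extract the IEEE 754 fields of a single-precision float.
// Note: this is the IEEE 754 layout (1/8/23 bits), not the sheet's byte split.
public class FloatBits {
    public static void main(String[] args) {
        int bits = Float.floatToIntBits(1.234f);
        int sign     = (bits >>> 31) & 0x1;  // 0 = positive, 1 = negative
        int exponent = (bits >>> 23) & 0xFF; // stored with a bias of 127
        int mantissa = bits & 0x7FFFFF;      // 23-bit fraction field
        System.out.printf("sign=%d exponent=%d mantissa=%d%n",
                          sign, exponent, mantissa);
    }
}
```

The huge range of a float comes from the exponent field: the mantissa gives the precision, and the exponent scales it up or down, just like scientific notation.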

#### Morgues

##### Member
Gee
Is this stuff in Excel or anything?

#### del

##### Member
nup, some photocopy we got.....

#### Morgues

##### Member
But what topic is this stuff under?
Can't believe it wouldn't get mentioned in Excel

#### sunny

##### meh.
It's briefly mentioned in Excel (page 33) under Defining and Understanding the Problem. It's covered in more detail in the option topic Software Developer's View of Hardware

#### Lazarus

##### Retired
I'm not sure how much detail they want you to give - the number of marks that the question is worth would be some indication. You'll probably know most of the following, but I'll give the background info just in case.

start long, probably useless background info

A computer's memory is made of specialised silicon chips called memory chips, consisting of millions of copies of a particular electrical circuit. This memory circuit is like a light switch, in that it can be in one of two states, 'off' and 'on'. The computer can set the state to off or on and it can find out what the current state is (without changing it). It takes about one hundred millionth of a second to do each of these things.

One of these circuits is enough to remember whether something is true or not, with 'on' representing 'true' and 'off' representing 'false'. We say that such a circuit holds one bit of information. The information can have any interpretation we choose. What can't be changed is the fact that there are only two possible values, one represented by 'off' and the other by 'on'.

A possible interpretation is to let 'off' mean 0, and 'on' mean 1, and in this way the memory circuit can stand for a number (either 0 or 1). By taking several circuits together we can represent larger numbers. With two bits of information we can represent four different possible values: 00, 01, 10 and 11. Each of these could represent a number (i.e. 0, 1, 2 and 3) but they could equally well be red, yellow, blue and green. The computer program that uses these two bits decides what they mean. With three bits we get eight values and can count up to seven: 000, 001, 010, 011, 100, 101, 110, 111 (from 0 to 7).

This particular representation is called the binary encoding of the integers, since the bit patterns are just binary (base 2) numbers. In general, using k bits allows 2^k different values, which are enough to count from 0 to 2^k - 1. In a typical modern computer, 32 bits are grouped together in this way to form a word:

00000010010001001110001011011011

By our formula, 2^32 = 4294967296 different values can be stored in one 32-bit word (of course, only one value can be stored at any one time), and this is enough to count from 0 to 2^32 - 1. However, a different representation which allows negative numbers is more commonly used, and in that representation it is possible to store any integer in the range -2^31 to 2^31 - 1 inclusive. It is not hard to check that, although a different set of numbers is being represented, there are still exactly 2^32 different possible values. Although in principle a computer could store arbitrarily large integers, by allocating as many bits as required, for practical efficiency integers are limited to this range so that they always fit into one word. There's also a smaller grouping, of eight bits into one byte:

00110101

Four bytes make one word on most computers. Since it contains eight bits, one byte can represent 2^8 = 256 values.
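The counting argument above can be checked directly in Java (a sketch, using Java's built-in integer constants for the 32-bit case):

```java
// k bits give 2^k distinct values; unsigned, that counts from 0 to 2^k - 1.
public class BitCounts {
    public static void main(String[] args) {
        for (int k : new int[] {1, 2, 3, 8, 16, 32}) {
            long values = 1L << k; // 2^k
            System.out.println(k + " bits -> " + values
                               + " values, unsigned 0.." + (values - 1));
        }
        // With two's complement, the same 2^32 patterns cover -2^31..2^31 - 1,
        // which is exactly Java's int:
        System.out.println(Integer.MIN_VALUE + " .. " + Integer.MAX_VALUE);
    }
}
```

Note the `1L`: shifting a plain `int` by 32 would overflow, so the count is done in a 64-bit `long`.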

A computer's memory is measured in bytes; a typical memory chip holds 64MB (64 megabytes, or millions of bytes) of memory. Each byte has a number, from 0 upwards, called its address, and the computer is able to store and retrieve the value of the byte at any given address in this range, in about one hundred millionth of a second.

In summary - a computer's memory holds information; the number of bits determines the number of possible values; one byte typically holds one character, and one word typically holds one integer; the computer can assign values to these bytes and words, and retrieve the values stored in them very quickly. Also, each byte has a number: 'byte number 5076' or whatever.

These basic capabilities of the computer appear in high-level languages such as Java, slightly dressed up to make them easier to use. For example, suppose we want to set aside some memory to hold an integer, which represents the number of concert tickets we've sold. In Java we would write the following declaration:

int totalSold;

This is an instruction to the computer to set aside enough memory to hold one integer (one word of memory, as we know). Instead of nominating which word ('the word beginning at byte number 5076', perhaps), for our convenience we let the computer find a word for us that is not currently being used to hold anything else; and we give that word the name totalSold, which is more readable for us than a number. We rely on the computer to understand that whenever we say totalSold we mean the word that it chose for us when we gave the declaration. Names like totalSold, that stand for chunks of memory, are called variables in computer programming.

end long, probably useless background info

1. To store the data in memory, the computer allocates a set number of bytes (2, 4 or 8 in this case), each consisting of a series of bits. In the case of an integer or long int, the bit sequence represents the binary encoded version of the relevant number (don't use abbrev. in an exam). In the case of a decimal, it's slightly more complicated, but the general idea has to do with scientific notation. Take for example the number 1.234 * 10^23. It is split into two parts - the mantissa (1.234) and the exponent part (23) for the power-of-ten multiplier (which means the number multiplied out would have 20 zeroes in it, 23 minus the three decimal places). ((I'm afraid I'm not too sure on the specifics here, but those two parts are somehow encoded into bits, heh.)) Most high-level programming languages also assign a given variable name to such memory blocks.

2. Integer arithmetic is close to, but not actually, mathematical base two. The low-order bit is 1, the next 2, then 4 and so forth, as in pure binary. But signed numbers are represented in two's complement notation. The highest-order bit is a sign bit: when it is set, the quantity is negative, and every negative number can be obtained from the corresponding positive value by inverting all the bits and adding one. This is why integers using 32 bits (4 bytes) have the range -2^31 to 2^31 - 1: that 32nd bit is being used for the sign. Similarly, integers using 16 bits (2 bytes) have the range -2^15 to 2^15 - 1 (i.e. -32768 to 32767).
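A minimal Java sketch of the invert-and-add-one rule, and of what the asymmetric range means in practice (the maximum value silently wraps around to the minimum):

```java
// Two's complement in action: negation is "invert all bits, add one",
// and the asymmetric range makes MAX_VALUE + 1 wrap to MIN_VALUE.
public class TwosComplement {
    public static void main(String[] args) {
        int x = 1234;
        System.out.println((~x + 1) == -x);  // true: invert + 1 negates
        System.out.println(Integer.MAX_VALUE + 1 == Integer.MIN_VALUE); // true
    }
}
```

Both lines print `true`, which is exactly why the positive end of the range stops at 2^31 - 1 rather than 2^31.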

I'm afraid I can't formally prove the range given for the decimal numbers - sorry!!

And I apologise for the overly verbose post.

#### Lazarus

##### Retired
Oh my god, 8 people replied whilst I was writing that!! Oh well.

#### Morgues

##### Member
It was worth 4 marks, so probably 0.5 for identifying how each one is stored in memory and another 0.5 for explaining the maximum value

#### Morgues

##### Member
Thanks for the help guys... still don't quite understand how you can get the range of a real number, but the rest is now clear

#### Morgues

##### Member
Originally posted by sunny
It's briefly mentioned in Excel (page 33) under Defining and Understanding the Problem. It's covered in more detail in the option topic Software Developer's View of Hardware
So is it more of an option topic question?
Haven't covered that yet, which could explain why I had absolutely no idea

#### gandalf

##### New Member
Option Topic Question

Yep,
It's the option called "Programmer's View of Hardware".

#### pulse

##### New Member
meh

it depends what OS you're running. personally i can only tell you about windows. see in windows the way data is "stored in memory" is that the fetch-execute cycle is started, then when it comes to storing the result in the accumulator windows finds the accumulator already has some past calculation in it and doesn't know what to do. for fear of mucking things up it leaves the accumulator alone and moves on to the next accessible memory, RAM. so it decides with all that RAM the little two, four, eight byte number is going to be lonely, so it tries to make things better by filling RAM right to the top with countless runs of 0101 0101. when it fills RAM completely up to the top it has nowhere to put the calculation, so then it decides it will just call on virtual memory, i mean it's almost as good as RAM. so it fills virtual memory a little bit, then finds that more virtual memory is simply added... because some dumb user has allowed windows to "allocate" its own virtual memory. well it's just like a child in a toy store. windows continues to fill virtual memory with runs of 1010 1010 until it has consumed 12.8 gigs of your 13 gig hard drive. it then gets really scared and hides in the corner covering itself with the blue screen of death so you can't see it.

the reason the "maximum" values are given is that if you try to pass these values in the windows OS .... all hell breaks loose.

nah really, i got nothing. are these questions from an old 3 unit computing test? because it's not something i will ever cover in SDD

#### sunny

##### meh.
Although that's true, we're looking at why data types have size restrictions.

But nah, it shouldn't be from the old course, 'cause it's in the new one.

#### Morgues

##### Member
It's from last year's trial paper I got, which is VERY hard
I am trying to scan it in
It's definitely in the syllabus somewhere, hopefully the option topic