Thursday, October 08, 2009

ICA-CP: SEXI for Java datatypes

Java Datatypes

State:
There are two broad categories of datatypes in Java: primitive datatypes, which are "built-in" to the language, and reference (object) datatypes, like String and KeyboardReader, which are defined by classes (String comes with the standard java.lang package, while KeyboardReader comes from a separate library that has to be imported). There are eight primitive datatypes: byte, short, int, long, float, double, char, and boolean.

Size matters with primitive datatypes. It matters because hardware memory is reserved for a variable when it is declared, and how much is reserved depends on the datatype. The units we measure with are the bit, a single 1 or 0 (zero); the nybble, which is 4 bits; and the byte, which is 8 bits. The following are the sizes of the primitive datatypes:
1) byte - 8 bits; whole numbers from -128 to 127
2) short - 2 bytes or 16 bits; whole numbers
3) char - 2 bytes or 16 bits; a single character value
4) int - 4 bytes or 32 bits; whole numbers
5) long - 8 bytes or 64 bits; whole numbers
6) float - 4 bytes or 32 bits; decimal (floating-point) values
7) double - twice the size of an int at 8 bytes or 64 bits; decimal (floating-point) values
8) boolean - holds true or false; its exact size is not pinned down by the language

Of the other datatypes that we have used, String is made up of multiple char values, so its size depends on how many characters it holds.
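Here is a quick sketch of declaring a few of these (the variable names are just made up for illustration):

byte smallCount = 100;              // 8 bits
int population = 2000000;           // 32 bits
double temperature = 98.6;          // 64 bits
char grade = 'A';                   // 16 bits, exactly one character
boolean isDone = false;             // true or false
String greeting = "Hello World";    // an object made up of many chars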

The size of a variable's datatype determines what values the variable can represent. A variable of datatype byte has only 8 bits, so it can hold whole numbers from -128 to 127. If I declared this variable:

byte myByte;

I could never assign it the value of 999 or 2,000,000. The largest number that myByte can represent is 127. It is similar for an int declaration of:

int numberOfStarsInTheGalaxy;

Could I represent a googol with numberOfStarsInTheGalaxy? Well . . . since a googol is a 1 followed by 100 zeroes or this number:

10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000

and since the largest number an int can represent is 2147483647, I think the answer is definitely "negatory!" Could a googol fit in a double datatype? (Hint: a double is 64 bits, but it stores numbers as a sign, an exponent, and a fraction, so it can reach much larger values than an int, though only approximately.)
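Here is a minimal sketch of those limits (the variable names are made up for illustration, and these lines are assumed to sit inside a main method):

int biggestInt = Integer.MAX_VALUE;      // 2147483647, the ceiling for an int
double googol = 1e100;                   // a 1 followed by 100 zeroes, stored approximately
System.out.println(biggestInt);          // prints 2147483647
System.out.println(googol);              // prints 1.0E100
System.out.println(Double.MAX_VALUE);    // about 1.8E308, the ceiling for a double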

Elaborate:
The size of the datatype determines how high one may count or how many characters one may hold. For example, since a single bit is 1 or 0, the highest one may count with one bit is 1. With an int datatype, 1 bit is used for the positive or negative sign, leaving 31 bits for representing the magnitude of the number. If every one of those 31 bits is a one, we would have 31 ones: 1111111111111111111111111111111 (that's 31 of them). If we put those 31 ones into a scientific calculator in the binary counting system, then switch the calculator to decimal, we find that 31 ones in binary equal 2,147,483,647 (that is, 2^31 - 1). So that's the largest positive number we can represent with an int datatype. The most negative number for an int is -2,147,483,648. The double datatype also uses one bit for the sign, but it spends the remaining 63 bits differently: 11 bits hold an exponent and 52 bits hold a fraction, which is why a double can represent enormous values like a googol, though only approximately. For the char datatype, exactly one character can be represented. For String, multiple characters may be represented.
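Java can do that calculator work for us. A small sketch (assuming these lines sit inside a main method):

int largest = Integer.MAX_VALUE;                        // 2147483647
int mostNegative = Integer.MIN_VALUE;                   // -2147483648
System.out.println(Integer.toBinaryString(largest));    // prints 31 ones
System.out.println(largest);                            // prints 2147483647
System.out.println(mostNegative);                       // prints -2147483648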

Casting is explicitly converting a value from one datatype to another. Java will convert from a smaller datatype to a larger one automatically (a widening conversion), but going from a larger datatype to a smaller one (a narrowing conversion) requires a cast, because information can be lost. So, if I declare a variable of datatype int, for example:

int myVar = 2147483647;

I can cast myVar into a smaller datatype like byte:

byte mySmallVar = (byte) myVar;

Please notice that when casting, I place parentheses around the datatype I am casting to.

So, the 32 bits of data in myVar get squeezed into the 8 bits of mySmallVar. When that happens, the difference in bits is lost or cut off: only the lowest 8 bits are kept, and the other 24 bits are truncated. Since the binary form of 2,147,483,647 is a zero followed by 31 ones, the 8 bits that survive are all ones, which a byte interprets as -1, so mySmallVar ends up as -1 (myVar itself is unchanged). To truncate means to cut off and throw away the bits that are larger than the new datatype can handle.
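Here is a minimal sketch of that truncation (the variable names are just for illustration, and these lines are assumed to sit inside a main method):

int myVar = 2147483647;            // binary: a zero followed by 31 ones
byte mySmallVar = (byte) myVar;    // only the lowest 8 bits survive: 11111111
System.out.println(mySmallVar);    // prints -1, because 11111111 read as a signed byte is -1

double price = 3.99;
int wholeDollars = (int) price;    // narrowing a double to an int cuts off the fraction
System.out.println(wholeDollars);  // prints 3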

Exemplify:
Here's a related example of a big datatype and a small one. I have this variable declaration:

String myVar = "Hello World";

A String is an object rather than a primitive, so Java will not let me cast it straight to char. What I can do is pull a single character out of it with the charAt method:

char firstLetter = myVar.charAt(0);

Since a char holds only one character, the rest ("ello World") gets left behind. After I pull the first character out of the String, firstLetter is just the letter 'H', while myVar still holds the whole String.
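A small sketch of that idea (assuming these lines sit inside a main method):

String myVar = "Hello World";
char firstLetter = myVar.charAt(0);    // just the first character, 'H'
int code = firstLetter;                // widening: a 16-bit char fits in a 32-bit int automatically
System.out.println(firstLetter);       // prints H
System.out.println(code);              // prints 72, the character code for 'H'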

Why is it that Java only makes us write a cast when we go from a larger datatype to a smaller one? Remember that when a variable is declared in Java, memory space is set aside for it based on its datatype. For example, if I declare:

int myAge;

the computer sets aside 32 bits in memory for the use of the variable myAge. If I copy myAge into a datatype larger than 32 bits, such as a double, every bit fits and nothing can be lost, so Java does the widening conversion automatically. If I copy myAge into a datatype smaller than 32 bits, such as a byte, there is simply not enough room for all of its bits, so some of them may be thrown away; Java makes me write the cast to show that I accept that risk.
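A small sketch of the two directions (again assuming these lines sit inside a main method):

int myAge = 52;
double preciseAge = myAge;         // widening: no cast needed, nothing can be lost
byte smallAge = (byte) myAge;      // narrowing: the cast is required, bits could be thrown away
System.out.println(preciseAge);    // prints 52.0
System.out.println(smallAge);      // prints 52 -- this value happens to fit in 8 bits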

Illustrate:
So, using datatypes and casting in Java is much like buying milk in different sizes of containers. I can have a cup, pint, quart, half-gallon, or gallon container of milk. The container size is like the size of the datatype. Pouring a cup of milk into a quart-sized container is like a widening conversion: everything fits with room to spare. Pouring a gallon of milk into a pint-sized container is like a narrowing cast: most of the milk spills on the floor, just like the truncated bits.
