10 Feb 2012

Truth of Primitive Conversion


Size does not matter
A data type with a smaller bit size can be assigned to a data type with a larger bit size. That is the intuitive way most of us understand primitive conversion. For example, a char, whose bit size is 16, can be assigned to an int, whose bit size is 32.
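In code, that intuitive case looks like this (a minimal sketch; the variable names are illustrative):

    char letter = 'A';    // 16-bit char
    int number = letter;  // widening to 32-bit int, compiles without complaint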
If that is true, then what happens when you convert a byte (8 bits) to a char (16 bits)?


Figure: the compiler rejects the byte to char assignment with "error: possible loss of precision"

As shown in the figure above, it is perfectly legal and very obvious to assign a char (16-bit) value to an int (32-bit), but the compiler complains when a byte (8-bit) is assigned to a char (16-bit). Why should there be a problem in the second case?
The truth is that primitive conversion does not happen on the basis of the size of the respective data types at all. Primitive conversion always happens on the basis of the range of the respective data types.


Suppose a value of data type s is to be assigned to a variable of type t, i.e. t = s; then t must be able to represent every single value from the range of s. The range of int is -2147483648 to 2147483647 and the range of char is 0 to 65535. As in the figure above, char to int is allowed, since int can represent every possible value of the char data type. But in the second case a byte cannot be assigned to a char, because a char cannot represent a negative byte value such as -22: the range of byte is -128 to 127, while the range of char is 0 to 65535.
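Here are both cases as a small sketch (the byte value -22 is the one used above; variable names are illustrative):

    char c = 'A';
    int i = c;            // legal: int can represent every char value (0 to 65535)

    byte b = -22;
    // char ch = b;       // compile error: possible loss of precision
    char ch = (char) b;   // allowed only with an explicit cast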


Let's take another example to make this clear. The long data type has a 64-bit size, yet it can still be assigned to the float data type, which is only 32 bits.

Figure: long to float conversion


The reason the above code compiles and runs fine is that the range provided by float is wide enough to cover every possible value of a long variable (even though some precision in the low-order bits may be lost). So the assignment from long to float is not a problem.
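A minimal sketch of such a long-to-float assignment (the value and variable names are illustrative):

    long big = 10000000000L;  // too large for int, fine for long
    float f = big;            // implicit long to float conversion, compiles and runs
    System.out.println(f);    // float's range covers all of long, though low-order bits may be rounded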


Well, that is all about the truth behind primitive conversion. Just remember: size does not matter, only range matters.