Numbers are represented as the coefficients of powers of a base.
(in plain text, we use "^" to mean raise-to-power, i.e. exponentiation)
With no extra base indication, expect decimal numbers:
12.34 is a representation of

    1*10^1 + 2*10^0 + 3*10^-1 + 4*10^-2    or

      10
       2
        .3
    +   .04
    -------
      12.34
Binary numbers, in NASM assembly language, have a trailing B or b.
101.11B is a representation of

    1*2^2 + 0*2^1 + 1*2^0 + 1*2^-1 + 1*2^-2    or

      4
      0
      1
       .5     (you may compute 2^-n or look it up in the table below)
    +  .25
    ------
      5.75
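The positional sums above can be sketched in Python. This is an illustration only; `place_value` is our own name, not something from these notes.

```python
# Evaluate a digit string with a radix point in an arbitrary base,
# exactly as the positional sums above: digit * base^position.
def place_value(digits: str, base: int) -> float:
    whole, _, frac = digits.partition(".")
    value = 0.0
    for power, d in enumerate(reversed(whole)):   # base^0, base^1, ... right to left
        value += int(d, base) * base ** power
    for power, d in enumerate(frac, start=1):     # base^-1, base^-2, ... left to right
        value += int(d, base) * base ** -power
    return value

print(place_value("12.34", 10))   # ~12.34 (subject to floating-point rounding)
print(place_value("101.11", 2))   # 5.75
```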
Converting a decimal number to binary may be accomplished as follows:

Convert 12.34 from decimal to binary

        Integer part                 Fraction part
     quotient   remainder         integer . fraction
    12/2 =  6       0             .34*2 = 0.68
     6/2 =  3       0             .68*2 = 1.36
     3/2 =  1       1             .36*2 = 0.72
     1/2 =  0       1             .72*2 = 1.44
    done                          .44*2 = 0.88
    read up: 1100                 .88*2 = 1.76
                                  .76*2 = 1.52
                                  .52*2 = 1.04
                                  quit
                                  read down: .01010111

    answer is 1100.01010111
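The procedure above can be sketched in Python: repeated division by 2 for the integer part (read the remainders up), repeated multiplication by 2 for the fraction part (read the carried bits down). `dec_to_bin` is our own helper name, a sketch rather than anything from these notes.

```python
def dec_to_bin(value: float, frac_bits: int = 8) -> str:
    whole = int(value)
    frac = value - whole
    int_bits = ""
    while whole > 0:
        whole, r = divmod(whole, 2)     # quotient and remainder
        int_bits = str(r) + int_bits    # "read up": last remainder is leftmost
    frac_digits = ""
    for _ in range(frac_bits):
        frac *= 2
        bit = int(frac)                 # the integer part that spills over
        frac -= bit
        frac_digits += str(bit)         # "read down": first bit is leftmost
    return (int_bits or "0") + "." + frac_digits

print(dec_to_bin(12.34))   # 1100.01010111
```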
Powers of 2
Decimal
      2^n     n     2^-n
        1     0     1.0
        2     1     0.5
        4     2     0.25
        8     3     0.125
       16     4     0.0625
       32     5     0.03125
       64     6     0.015625
      128     7     0.0078125
      256     8     0.00390625
      512     9     0.001953125
     1024    10     0.0009765625
     2048    11     0.00048828125
     4096    12     0.000244140625
     8192    13     0.0001220703125
    16384    14     0.00006103515625
    32768    15     0.000030517578125
    65536    16     0.0000152587890625
Binary
                  2^n     n     2^-n
                    1     0     1.0
                   10     1     0.1
                  100     2     0.01
                 1000     3     0.001
                10000     4     0.0001
               100000     5     0.00001
              1000000     6     0.000001
             10000000     7     0.0000001
            100000000     8     0.00000001
           1000000000     9     0.000000001
          10000000000    10     0.0000000001
         100000000000    11     0.00000000001
        1000000000000    12     0.000000000001
       10000000000000    13     0.0000000000001
      100000000000000    14     0.00000000000001
     1000000000000000    15     0.000000000000001
    10000000000000000    16     0.0000000000000001
Hexadecimal
      2^n     n     2^-n
        1     0     1.0
        2     1     0.8
        4     2     0.4
        8     3     0.2
       10     4     0.1
       20     5     0.08
       40     6     0.04
       80     7     0.02
      100     8     0.01
      200     9     0.008
      400    10     0.004
      800    11     0.002
     1000    12     0.001
     2000    13     0.0008
     4000    14     0.0004
     8000    15     0.0002
    10000    16     0.0001
     n    2^n hexadecimal        2^n decimal    approx    notation
    10              400                1,024      10^3    K  kilo
    20           100000            1,048,576      10^6    M  mega
    30         40000000        1,073,741,824      10^9    G  giga
    40      10000000000    1,099,511,627,776     10^12    T  tera
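The table above can be checked with a quick Python sketch; the loop and format strings are our own, not from the notes.

```python
# Print 2^n in hex and decimal, and the power of ten it approximates,
# for the K/M/G/T rows of the table.
for n, name in [(10, "K kilo"), (20, "M mega"), (30, "G giga"), (40, "T tera")]:
    print(f"2^{n} = {2**n:x} hex = {2**n:,} decimal ~ 10^{3 * n // 10}  {name}")
```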
The three representations of negative numbers that have been
used in computers are twos complement, ones complement and
sign magnitude. In order to represent negative numbers, it must
be known where the "sign" bit is placed. All modern binary
computers use the leftmost bit of the computer word as a sign bit.
The examples below use a 4-bit register to show all possible
values for the three representations.
    decimal   twos complement   ones complement   sign magnitude
       0           0000              0000              0000
       1           0001              0001              0001
       2           0010              0010              0010
       3           0011              0011              0011
       4           0100              0100              0100
       5           0101              0101              0101
       6           0110              0110              0110
       7           0111              0111              0111
      -7           1001              1000              1111
      -6           1010              1001              1110
      -5           1011              1010              1101
      -4           1100              1011              1100
      -3           1101              1100              1011
      -2           1110              1101              1010
      -1           1111              1110              1001
      -8           1000          -0  1111          -0  1000

(in the last row: adding 1 to the ones complement 1111 gives the twos
complement 1000; in sign magnitude the leftmost bit is the sign and
the remaining three bits are the magnitude)
To get the sign magnitude, convert the decimal to binary and place
a zero in the sign bit for a positive number, or a one in the sign
bit for a negative number.
To get the ones complement, convert the decimal to binary, including
leading zeros, then invert every bit: 1->0, 0->1.
To get the twos complement, get the ones complement and add 1.
(Throw away any bits that are outside of the register)
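The three rules above can be sketched in Python for a 4-bit register. The function names are ours, not the notes'; bits outside the register are thrown away by masking, as the parenthetical note says.

```python
BITS = 4
MASK = (1 << BITS) - 1                       # 1111B for a 4-bit register

def sign_magnitude(n: int) -> str:
    sign = 1 << (BITS - 1) if n < 0 else 0   # leftmost bit is the sign
    return format(sign | abs(n), f"0{BITS}b")

def ones_complement(n: int) -> str:
    v = n if n >= 0 else ~(-n) & MASK        # invert every bit of |n|
    return format(v, f"0{BITS}b")

def twos_complement(n: int) -> str:
    return format(n & MASK, f"0{BITS}b")     # ones complement plus 1

print(twos_complement(-7), ones_complement(-7), sign_magnitude(-7))
# 1001 1000 1111  -- matches the -7 row in the table above
```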
It may seem silly to have a negative zero, but the alternative has its
own flaw: twos complement has no -0, and instead -(-8) = -8, which is
mathematically incorrect.