Week 6

A few students asked me to upload the slides from each day’s lesson, so here you go!

Monday: slides1-17 Imprecision
Tuesday: slides1-18 Using C References
Wednesday: slides1-19 Casting
For those who were out on Monday, Oct 19, we talked about integer overflow and floating-point imprecision (something you will deal with in Greedy!).

Integer Overflow

  • Integer overflow is a problem caused by the limited size of our variables. If we had only one byte to store an integer, and it looked like this:
       128    64    32    16     8     4     2     1
    
         1     1     1     1     1     1     1     1
  • This number represents 255, but if we added 1 to it, the carry would ripple past the leftmost bit and the stored value would wrap around to 0:
       128    64    32    16     8     4     2     1
    
         0     0     0     0     0     0     0     0
  • The example above demonstrates this concept with 8 bits, but in reality the int datatype uses 4 bytes, or 32 bits. One of those bits is reserved to keep track of whether the number is negative or positive, which leaves 31 bits to store the magnitude. This means the largest positive number we can represent with an int is 2 to the 31st power minus 1. Since ints store negative numbers, zero, and positive numbers, the total count of values we can represent works out to 2 to the 32nd power. The short program after this list shows the same wraparound in code.
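
To see this wraparound in code, here is a minimal sketch (not from lecture). Overflowing a signed int is undefined behavior in C, so the example uses an unsigned one-byte value, which is guaranteed to wrap around to 0:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    // One byte with all eight bits set: 255
    unsigned char b = 255;

    // Adding 1 carries past the highest bit, so the stored value wraps to 0
    b = b + 1;
    printf("%i\n", b);

    // The largest value a 4-byte signed int can hold: 2^31 - 1
    printf("%i\n", INT_MAX);
}

On most systems this prints 0 followed by 2147483647.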

Integer Division

1 #include <stdio.h>
2
3 int main(void)
4 {
5    float f = 1 / 10;
6    printf("%.1f\n", f);
7 }

We might expect the program to print 0.1, but instead it outputs 0.0.

This is because the values 1 and 10 on line 5 are both integers, so the division is performed as integer division and the fractional part is truncated, or thrown away. The result is the integer 0; we only see it displayed as 0.0 because %.1f on line 6 tells printf to show one decimal place.
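
One way to fix this, which previews Wednesday’s slides on casting, is to cast one of the operands to a float so that the division itself is performed in floating point. A minimal sketch:

#include <stdio.h>

int main(void)
{
    // Casting 1 to a float makes the division floating-point,
    // so the fractional part is no longer thrown away
    float f = (float) 1 / 10;
    printf("%.1f\n", f);
}

This prints 0.1, as does rewriting the literals themselves, which is what the next example does.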

Floating Point Imprecision

Floating-point imprecision happens because the computer can’t represent the infinitely many real numbers with only a finite number of bits. Instead, it gives us the closest value it can represent. Let’s fix the program above so that the division is done in floating point:

#include <stdio.h>

int main(void)
{
    float f = 1.0 / 10.0;
    printf("%.1f\n", f);
}

We get 0.1.

Let’s see what happens when we print 28 decimal places:

#include <stdio.h>

int main(void)
{
    float f = 1.0 / 10.0;
    printf("%.28f\n", f);
}

We expect something like 0.100000000000…, but we get 0.1000000014901161193847656250.
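
These tiny errors matter once they accumulate over many operations. As a rough sketch (not from the slides), consider adding 0.1 to a running total ten times:

#include <stdio.h>

int main(void)
{
    float total = 0.0;

    // Mathematically, ten additions of 0.1 should total exactly 1.0
    for (int i = 0; i < 10; i++)
    {
        total += 0.1;
    }

    printf("%.28f\n", total);

    // Each addition rounds a little, so this comparison is likely to fail
    if (total == 1.0)
    {
        printf("exactly 1.0\n");
    }
    else
    {
        printf("not exactly 1.0\n");
    }
}

On most machines the total printed is slightly off from 1.0 and the program reports "not exactly 1.0", which is why comparing floats for exact equality is risky.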

The video below shows what happens when errors that seem very small add up.
