So I was computing the Taylor series for sin(x), which is here:
This is the code:
#define _CRT_SECURE_NO_WARNINGS
#include <conio.h>
#include <stdio.h>
#include <math.h>

long double Fact(long double h) {
    if (h <= 0) return 1;
    else return h * Fact(h - 1);
}

int main(void) {
    long double s = 0, k = 0, c = 0, l = 0, d = 0;
    int n = 0, x = 0;
    printf(" n ");
    scanf("%d", &n);
    printf(" x ");
    scanf("%d", &x);
    while (n >= 0) {
        k = pow(-1, n);          /* sign: (-1)^n */
        c = 2 * n + 1;
        l = Fact(c);             /* (2n+1)! */
        d = pow(x, 2 * n + 1);   /* x^(2n+1) */
        s = s + (k / l) * d;     /* sum of (-1)^n * x^(2n+1) / (2n+1)! */
        n = n - 1;
    }
    printf("Result : %.16LG\n", s);
    _getch();
    return 0;
}
The question is: how can a long double hold a value greater than 2^80 if I enter n = 16
and x = 2,147,483,646,
yet it still prints the correct result (I compared the program's output with Wolfram Alpha)?
Let's imagine that I decided to invent my own data type, which I'd call bloat (like float, geddit?). This type would be just one byte wide (8 bits) and use the following representation: bit #0 (the least significant one) has weight 4^0 = 1, bit #1 has weight 4^1 = 4, bit #2 has weight 4^2 = 16, bit #3 has weight 4^3 = 64, and so on and so forth.
The combination of bits 00010001 in bloat would stand for 1 + 256 = 257. The maximum value representable in bloat would be 11111111, which is 21845. So, here you are: using my freshly invented bloat type I managed to represent the value 21845 in just 8 bits of memory. 21845 is greater than 2^14, yet I somehow managed to squeeze it into just 8 bits! How did I achieve that?
Easy: in order to "stretch" the apparent range of my type I sacrificed some intermediate values. My bloat type cannot represent the number 2, for one example. It can't represent the number 66. And so on. There are lots of values under 21845 that my bloat cannot represent. If you count all possible different values my bloat can represent, you will discover that there are exactly 256 of them, i.e. exactly 2^8 different values are representable.
Floating-point types, like your long double, employ pretty much the same principle to "stretch" their range. Their internal format and properties are more complicated than those of my bloat, but the underlying idea is the same: the absolute range of an 80-bit floating-point type is much, much greater than 2^80 because it "skips" (cannot represent) lots and lots of values inside that range.
The exact details of their internal representation are widely available on the Net.