Frac is a free C library for computing with continued fractions.

Continued fractions

The binary representation of 1/3 is infinite. Floating-point numbers, our standard tool for working with reals, cannot exactly represent numbers a young child could understand.
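
A few lines of C make the failure concrete (a standalone illustration, nothing to do with Frac's API):

    #include <stdio.h>

    int main(void) {
      /* 1/3 has no finite binary expansion, so the nearest representable
         double is stored instead. Printing extra digits exposes the gap. */
      double third = 1.0 / 3.0;
      printf("%.20f\n", third);          /* 0.33333333333333331483 */
      printf("%.20f\n", 0.1 + 0.2);      /* 0.30000000000000004441 */
      printf("%d\n", 0.1 + 0.2 == 0.3);  /* 0: the "obvious" identity fails */
      return 0;
    }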

In contrast, every rational can be uniquely and exactly represented by a finite continued fraction (provided we forbid those with last term 1). Continued fractions are immune to oddities such as 1/3 = 0.333… and 1 = 0.999…, living in a realm free from the tyranny of binary, or any other base.
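
The finite expansion of a rational drops straight out of the Euclidean algorithm: each quotient is one term. A minimal sketch of this (my illustration, not Frac's API):

    #include <stdio.h>

    /* Print the continued fraction terms of p/q, with p, q > 0, via the
       Euclidean algorithm. Each quotient is one term, and for a non-integer
       rational the algorithm naturally ends with a final term of at least 2,
       which is exactly the canonical form that makes the expansion unique. */
    void cf_terms(int p, int q) {
      while (q) {
        printf("%d ", p / q);
        int r = p % q;
        p = q;
        q = r;
      }
      printf("\n");
    }

    int main(void) {
      cf_terms(355, 113);  /* prints "3 7 16": 355/113 = 3 + 1/(7 + 1/16) */
      return 0;
    }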

Every irrational number can also be uniquely expressed as an infinite continued fraction. Many frequently encountered algebraic and transcendental numbers have easily computable continued fraction expansions, which can be truncated to yield approximations with built-in error bounds, unlike floating-point approximations, whose error bounds must be computed and carried along separately.
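
For example, e = [2; 1, 2, 1, 1, 4, 1, 1, 6, ...], and a standard theorem guarantees that a convergent h/k lies within 1/k^2 of the true value. A sketch relying on these well-known facts (again my illustration, not Frac's API):

    #include <stdio.h>

    /* Convergents h/k of e = [2; 1, 2, 1, 1, 4, 1, 1, 6, ...] via the
       recurrence h(n) = a(n)h(n-1) + h(n-2), k(n) = a(n)k(n-1) + k(n-2).
       Truncating after any term guarantees |e - h/k| < 1/k^2. */
    int main(void) {
      long h0 = 0, h1 = 1, k0 = 1, k1 = 0;  /* seeds h(-2)/k(-2), h(-1)/k(-1) */
      for (int n = 0; n < 9; n++) {
        /* term pattern: 2, then 1, 2, 1, 1, 4, 1, 1, 6, ... */
        long a = n == 0 ? 2 : (n % 3 == 2 ? 2 * (n / 3 + 1) : 1);
        long h = a * h1 + h0, k = a * k1 + k0;
        printf("%ld/%ld  (error < 1/%ld)\n", h, k, k * k);
        h0 = h1; h1 = h; k0 = k1; k1 = k;
      }
      return 0;  /* last line printed: 1264/465  (error < 1/216225) */
    }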

Furthermore, suppose we use floating-point to compute some answer to 100 decimal places. What if we now want 200 decimal places? Even if we possess the first 100 digits and carefully preserve the program state, we must start from scratch and redo all our arithmetic with 200-digit precision. If we had used continued fractions, we could arbitrarily increase the precision of our answer without redoing any work, and without any tedious error analysis.
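
The reason resuming is cheap: a convergent depends only on the two convergents before it. A hypothetical sketch of the idea (cf_state and cf_step are my names, not Frac's API):

    #include <stdio.h>

    /* All the state needed to extend a continued fraction is the last two
       numerators and denominators, so "more precision" simply means feeding
       in more terms later; nothing already computed is ever repeated. */
    typedef struct { long h0, h1, k0, k1; } cf_state;

    void cf_step(cf_state *s, long a) {
      long h = a * s->h1 + s->h0, k = a * s->k1 + s->k0;
      s->h0 = s->h1; s->h1 = h;
      s->k0 = s->k1; s->k1 = k;
    }

    int main(void) {
      cf_state s = {0, 1, 1, 0};                   /* standard seed values */
      cf_step(&s, 1);                              /* sqrt(2) = [1; 2, 2, ...] */
      for (int i = 0; i < 3; i++) cf_step(&s, 2);
      printf("coarse: %ld/%ld\n", s.h1, s.k1);     /* 17/12 */
      for (int i = 0; i < 3; i++) cf_step(&s, 2);  /* resume; nothing redone */
      printf("finer:  %ld/%ld\n", s.h1, s.k1);     /* 239/169 */
      return 0;
    }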

But we must pay a price for these advantages. We have a representation from which we can instantly infer error bounds. A representation for which we may arbitrarily increase precision without repeating any calculations. A representation which in some sense is the closest we can hope to get to an exact number using rationals. Is it any wonder, therefore, that binary operations on continued fractions, whose output must also uphold these high standards, are clumsy and convoluted?

These drawbacks are almost certainly why these fascinating creatures remain obscure. Yet surely there are some problems that scream for continued fractions. How about a screen saver that computes more and more digits of pi, each time picking up where it left off? Or imagine a real-time application where all continued fractions simply output the last computed convergent on a timer interrupt. Under heavy system load, approximations are coarser but the show goes on.

Disclaimer

Lest my salesmanship backfire, let me temper my anti-floating-point rhetoric. Increasing the precision of floating-point operations without redoing work is in fact possible for many common cases. For example, Newton's method is self-correcting, meaning we may use low precision at first and increase it in later iterations. Even pi enjoys such methods: the Bailey-Borwein-Plouffe formula reveals any hexadecimal digit of pi without computing any of the previous digits, though it requires about as much work as computing that digit by conventional means.
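
To watch the self-correction in action (ordinary doubles, purely for illustration), note how the error roughly squares, that is, the count of correct digits roughly doubles, with each iteration:

    #include <stdio.h>
    #include <math.h>

    /* Newton's method for sqrt(2): x <- (x + 2/x) / 2. The crude starting
       guess is quickly forgiven; the number of correct digits roughly
       doubles per step (quadratic convergence). Compile with -lm. */
    int main(void) {
      double x = 1.0;  /* deliberately poor initial estimate */
      for (int i = 1; i <= 6; i++) {
        x = (x + 2.0 / x) / 2.0;
        printf("step %d: %.16f  (error %.1e)\n", i, x, fabs(x - sqrt(2.0)));
      }
      return 0;
    }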

Moreover, several floating-point algorithms converge quadratically or faster, so good implementations will asymptotically outperform continued fractions, which often converge only linearly.

Nonetheless, for lower accuracies, the smaller overhead may give continued fractions the edge in certain problems, such as finding the square root of 2. Additionally, precision decisions are automatic: one simply needs enough bits to hold the last computed convergents. Built-in error analysis simplifies and hence accelerates development.