
Why is Python's Decimal precision not related to floating point precision?

Asked: 2021-01-16T10:04:43    Author: manoelpqueiroz


Background

In Python's decimal module, getcontext(), through its prec attribute, sets the precision for the Decimal objects one constructs. I find the use of the term "precision" rather strange here, since it invites the assumption that decimal constrains numbers on their decimal part (i.e., the fraction/non-integer part) and rounds them once they exceed that precision.

However, in the decimal module, "precision" seems to refer to the actual count of digits in a Decimal object:

In [1]: from decimal import Decimal, getcontext
In [2]: l = [Decimal('0.1234567890123456789'),
             Decimal('1.1234567890123456789'),
             Decimal('20.1234567890123456789'),
             Decimal('300.1234567890123456789'),
             Decimal('-4000.1234567890123456789')]

In [3]: [+e for e in l]
Out[3]:
[Decimal('0.1234567890123456789'),
 Decimal('1.1234567890123456789'),
 Decimal('20.1234567890123456789'),
 Decimal('300.1234567890123456789'),
 Decimal('-4000.1234567890123456789')]

In [4]: c = getcontext()
   ...: c.prec = 5 # Spicing things up
   ...: [+e for e in l] # Unary + operator reconstructs objects with current precision
Out[4]:
[Decimal('0.12346'),
 Decimal('1.1235'),
 Decimal('20.123'),
 Decimal('300.12'),
 Decimal('-4000.1')]

In [5]: c.prec = 15
   ...: [+e for e in l]
Out[5]:
[Decimal('0.123456789012346'),
 Decimal('1.12345678901235'),
 Decimal('20.1234567890123'),
 Decimal('300.123456789012'),
 Decimal('-4000.12345678901')]

In [6]: c.prec = 60
   ...: [+e for e in l] # A precision larger than the stored digit count leaves the objects unchanged
Out[6]:
[Decimal('0.1234567890123456789'),
 Decimal('1.1234567890123456789'),
 Decimal('20.1234567890123456789'),
 Decimal('300.1234567890123456789'),
 Decimal('-4000.1234567890123456789')]

With the exception of Decimals between -1 and 1, whose leading zero does not count as a digit, we see that the precision available for the fractional part of a number varies with the magnitude of its integer part.
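
A quick way to see what is actually being counted (my own illustration, using Decimal.as_tuple to expose the stored representation): a Decimal holds a sign, a tuple of significant digits, and an exponent, so the leading zero of values between -1 and 1 is purely presentational and never stored:

from decimal import Decimal

print(Decimal('0.12346').as_tuple())
# DecimalTuple(sign=0, digits=(1, 2, 3, 4, 6), exponent=-5)
print(Decimal('-4000.1').as_tuple())
# DecimalTuple(sign=1, digits=(4, 0, 0, 0, 1), exponent=-1)

# Both values carry exactly five significant digits, which is what prec counts
print(len(Decimal('0.12346').as_tuple().digits))  # 5
print(len(Decimal('-4000.1').as_tuple().digits))  # 5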

The Standard Library documentation actually hints that the precision is not the number of fractional digits but the overall count of digits (although it never spells this caveat out explicitly):

Unlike hardware based binary floating point, the decimal module has a user alterable precision (defaulting to 28 places) which can be as large as needed for a given problem

But even in this passage, one could easily read "28 places" as "28 [decimal] places".
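
To make the distinction concrete, here is a small sketch of my own (not taken from the documentation): with the context precision set to 10, arithmetic always yields ten significant digits overall, and how many of them land after the decimal point depends on the magnitude of the result:

from decimal import Decimal, getcontext

getcontext().prec = 10  # ten significant digits, not ten decimal places

print(Decimal(1) / Decimal(7))        # 0.1428571429
print(Decimal(1000) / Decimal(7))     # 142.8571429
print(Decimal(1000000) / Decimal(7))  # 142857.1429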


My point

I have a few issues with the implementation of getcontext().prec in the decimal module as a count of digits.

  1. On a technical level: why does the leading zero of Decimals between -1 and 1 not count as a digit, while every digit of other Decimals does? (I'd suppose it has something to do with scientific notation, but I'd like clarification on the matter.)

  2. On a semantic level: why would the developers of the Standard Library use "precision" to mean a digit count rather than fractional precision, as we often do both inside computing (e.g., the ndigits argument of the round function) and outside it (e.g., "I want answers with 5-digit precision")? To me the term "precision" is misleading in the decimal module, especially since the documentation states early on:

    Decimal “is based on a floating-point model which was designed with people in mind, and necessarily has a paramount guiding principle – computers must provide an arithmetic that works in the same way as the arithmetic that people learn at school.”

  3. On a practical level: how should I proceed if my implementation requires Decimals but I need to set precision with regard to the fractional part of a number whose integer part could range from 1 to 1,000,000,000 and beyond? Obviously the most hands-on approach is to simply use round, but that seems cumbersome, especially when dealing with multiple numbers that are not necessarily contained in sequences, which could lead to unnecessary code repetition (one quantize-based workaround is sketched after this list).
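
For the practical question, the closest thing I have found to an idiomatic approach is Decimal.quantize, which rounds to a fixed exponent (i.e., a fixed number of fractional digits) rather than a fixed digit count. The helper below is a sketch of my own (fix_places is a hypothetical name, not part of the standard library); it widens the local precision so that numbers with large integer parts do not raise InvalidOperation:

from decimal import Decimal, ROUND_HALF_EVEN, localcontext

def fix_places(value, places=5, rounding=ROUND_HALF_EVEN):
    """Round a Decimal to a fixed number of fractional digits,
    regardless of the size of its integer part."""
    exponent = Decimal(1).scaleb(-places)  # e.g. Decimal('0.00001') for places=5
    with localcontext() as ctx:
        # Make sure the context allows enough total digits for the result
        ctx.prec = max(ctx.prec, value.adjusted() + places + 2)
        return value.quantize(exponent, rounding=rounding)

print(fix_places(Decimal('0.1234567890123456789')))
# Decimal('0.12346')
print(fix_places(Decimal('1234567890.1234567890123456789')))
# Decimal('1234567890.12346')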

Author: manoelpqueiroz. Reproduced under the CC BY-SA 4.0 license with a link to the original source and this disclaimer.
Link to original article: https://stackoverflow.com/questions/65745811/why-is-pythons-decimal-precision-not-related-to-floating-point-precision