Methodology guide

Why Floating-Point Math Is Wrong for Investment Returns

IEEE 754 floats silently lose precision. For TWR over thousands of daily returns, the error compounds. Here is the bug, the math, and the fix.

By NakedPnL Research · May 7, 2026 · 12 min read
TL;DR
  • IEEE 754 binary floating-point cannot represent decimals like 0.1 exactly. The error is small but compounds.
  • 0.1 + 0.2 evaluates to 0.30000000000000004 in JavaScript, Python, Java, C, and every other language using doubles.
  • Across thousands of chain-linked TWR sub-periods, the rounding drift can move a published return by tens of basis points.
  • Arbitrary-precision decimal libraries — Decimal.js, Python decimal, Java BigDecimal — eliminate the error.
  • NakedPnL's TWR engine runs Decimal.js end-to-end, with 28 significant digits of precision throughout.
On this page
  1. The classic gotcha
  2. Why a tiny error becomes a big number
  3. A concrete demonstration
  4. Why not 'just round'
  5. The same bug in other languages
  6. The cost: speed and ergonomics
  7. How NakedPnL's engine handles it
  8. Money-weighted return is no different
  9. Frequently asked questions

Almost every programming language uses IEEE 754 double-precision floating-point as its default numeric type. It is fast, hardware-accelerated, and accurate to roughly 15 significant decimal digits. For most computing tasks that is fine. For financial calculations it is not, because it cannot represent simple decimal numbers like 0.1 exactly.

The error is small per operation. The problem is that financial calculations chain thousands of operations together — daily returns over a year, sub-period chain-links over a multi-year track record, fee accruals across millions of trades. Small errors compound. By the time the published number reaches a user-facing page, the binary float version can disagree with the mathematically correct answer by basis points or more.

The classic gotcha

Open any JavaScript console and try this:

> 0.1 + 0.2
0.30000000000000004

> 0.1 + 0.2 === 0.3
false
Reproducible in Chrome DevTools, Node.js, and every other JS runtime.

The same is true in Python with floats, in C with doubles, in Java with primitive double, and in Rust with f64. The reason is structural, not a language bug. IEEE 754 binary doubles can exactly represent any number of the form m × 2^e where m and e are bounded integers. They cannot exactly represent 0.1, because 0.1 in binary is the infinite repeating fraction 0.000110011001100…

The hardware stores the closest representable value, which for 0.1 is approximately 0.1000000000000000055511151231257827021181583404541015625. The stored 0.1 and the stored 0.2 are both slightly above their decimal values, and their sum rounds to a double slightly above the closest double to 0.3 — so the comparison with the literal 0.3 fails. The discrepancy is real, deterministic, and unavoidable as long as the storage format is binary floating-point.
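You can see the stored value directly in any JavaScript console by asking for more digits than the default formatter prints:

```javascript
// The double nearest to 0.1, printed to 25 significant digits.
// toPrecision reveals the representation error that toString's
// shortest-round-trip formatting hides.
console.log((0.1).toPrecision(25));
// -> 0.1000000000000000055511151

// toString prints the shortest string that round-trips back to
// the same double, which is why 0.1 "looks" exact in a console.
console.log((0.1).toString());       // -> 0.1
console.log((0.1 + 0.2).toString()); // -> 0.30000000000000004
```

The shortest-round-trip rule is also why the error only surfaces after an operation: each literal alone displays cleanly, but the sum lands on a double whose shortest representation is the long ugly string.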

Why a tiny error becomes a big number

A single multiplication of two doubles incurs at most one rounding event, with relative error bounded by the unit roundoff 2^-53 ≈ 1.11 × 10^-16. Trivial on its own. The problem is chaining hundreds or thousands of those multiplications together — which is exactly what TWR does.

Daily TWR over a 5-year track record takes roughly 1,825 chain-link multiplications. Even if each contributes only about 10^-16 of relative error, the accumulated drift is bounded above by roughly n × ε in the worst case. And the drift can be far larger than that bound suggests when intermediate values pass through subtraction-heavy steps such as the sub-period return r = (V_end − V_start) / V_start.
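The worst-case arithmetic is easy to put a number on. `Number.EPSILON` is JavaScript's machine epsilon (2^-52):

```javascript
// Worst-case relative drift after n chained roundings: ~n * eps.
const n = 1825;                     // daily sub-periods over 5 years
const bound = n * Number.EPSILON;   // Number.EPSILON = 2^-52 ≈ 2.22e-16
console.log(bound);                 // ≈ 4.05e-13 relative error
```

Four parts in ten trillion sounds harmless, and for a displayed number it is — the reproducibility argument later in this article is what makes even that drift unacceptable.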

That last formula is particularly nasty. When V_end and V_start are close (a small sub-period return), the subtraction in the numerator suffers catastrophic cancellation — the two operands have nearly equal magnitudes, so most of their leading significant digits cancel, leaving the answer dominated by the floating-point representation error of the original inputs. The relative error of r in that regime can be many orders of magnitude larger than the per-operation eps.
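Cancellation is easy to provoke in one line. In the sketch below the true sub-period move is exactly 10^-15, yet the float computation of r overshoots it by about 11%, because the subtraction strips away the matching leading digits and leaves only representation error:

```javascript
// A tiny sub-period move: the true return is exactly 1e-15.
const vStart = 1;
const vEnd = 1 + 1e-15; // stored as the nearest double, 5 ulps above 1

const r = (vEnd - vStart) / vStart;
console.log(r); // -> 1.1102230246251565e-15, not 1e-15

// Relative error of r: about 11% — versus ~1e-16 for a single
// well-conditioned multiplication.
console.log(Math.abs(r / 1e-15 - 1)); // ≈ 0.11
```

The subtraction itself is exact here; the damage was done when 1 + 1e-15 was rounded to the nearest double on the way in, and the cancellation simply exposes it.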

Where the bug actually bites
Catastrophic cancellation in (V_end - V_start) / V_start, multiplied across thousands of small daily moves, is the dominant source of float error in chain-linked TWR. It is not the addition of (1 + r) factors — it is the subtraction inside r itself.

A concrete demonstration

Below is a JavaScript snippet that chain-links 1,825 daily returns of exactly 0.1% (a synthetic, deterministic input). The mathematically correct answer is (1.001)^1825 − 1 ≈ 5.1971. With native doubles the result is close, but the trailing digits are an artifact of accumulated rounding, and any re-implementation that changes the operation order produces different trailing digits.

// IEEE 754 doubles
function twrFloat(dailyReturns) {
  let growth = 1;
  for (const r of dailyReturns) {
    growth = growth * (1 + r);
  }
  return growth - 1;
}

// Synthetic 5-year deterministic input
const returns = Array(1825).fill(0.001);

console.log(twrFloat(returns));
// Logs approximately 5.1971 — close to exact, with accumulated
// rounding drift in the trailing digits
// Mathematically exact: (1.001)^1825 - 1 ≈ 5.1971...
The float answer is close but not exact. Within a single JavaScript engine the digits are fully determined — the language spec mandates IEEE 754 double semantics — but a re-implementation that changes the operation order, or a compiler that contracts a multiply and add into a fused multiply-add, produces different trailing digits.

Now repeat with arbitrary-precision decimals using Decimal.js, the same library NakedPnL's production TWR engine uses:

import Decimal from 'decimal.js';

Decimal.set({ precision: 28 });

function twrDecimal(dailyReturns) {
  let growth = new Decimal(1);
  for (const r of dailyReturns) {
    growth = growth.times(new Decimal(1).plus(r));
  }
  return growth.minus(1);
}

const returns = Array(1825).fill('0.001');

console.log(twrDecimal(returns).toFixed(20));
// Prints ≈ 5.1971..., with every digit reproducible exactly
// across every machine, every runtime, every commit.
Decimal arithmetic is slower (about 30x) but the answer is bit-for-bit reproducible.

Reproducibility is the key word. NakedPnL's value proposition is that any third party can re-derive a published TWR from the raw exchange responses. If two re-verifiers running on different hardware get different answers in the 14th decimal place, the entire chain-of-trust falls apart. Decimal.js is the cheapest way to make the engine deterministic.

Why not 'just round'

A common deflection is 'these errors are below display precision, just round to 2 or 4 decimal places at the end'. That argument is wrong for three independent reasons.

  1. Rounding only the final number does not fix intermediate calculations. If sub-period returns are stored in a database and later re-aggregated for a different reporting window, the float error has already been baked in and cannot be retroactively corrected.
  2. Hash-chain integrity requires byte-exact reproducibility. NakedPnL hashes the canonicalized result of every TWR computation and chain-links the hashes. A 14th-decimal-place difference between the original computation and a re-verifier's reproduction breaks the SHA-256 match.
  3. Equality checks fail unpredictably. Code paths like `if (twrA === twrB)` or zero-checks like `if (subPeriodReturn === 0)` behave inconsistently when the operands are float results produced by different operation orders. Decimal arithmetic eliminates that entire class of bug.
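Point 3 is easy to demonstrate: float addition is not associative, so the same three numbers summed in two orders compare unequal.

```javascript
// The same three values, two operation orders.
const a = 0.1 + 0.2 + 0.3; // (0.1 + 0.2) first -> 0.6000000000000001
const b = 0.3 + 0.2 + 0.1; // (0.3 + 0.2) first -> 0.6

console.log(a === b); // -> false
console.log(a, b);    // -> 0.6000000000000001 0.6

// Any equality check between results that took different code
// paths through the engine is therefore unreliable with floats.
```

Two code paths that are algebraically identical can still disagree bit-for-bit, which is exactly the failure mode a hash-chained verification pipeline cannot tolerate.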

The same bug in other languages

This is a property of IEEE 754, not of any one language. Here is the same gotcha in five common stacks:

Language        | Naive expression    | Result
JavaScript      | 0.1 + 0.2           | 0.30000000000000004
Python (float)  | 0.1 + 0.2           | 0.30000000000000004
Java (double)   | 0.1 + 0.2           | 0.30000000000000004
C (double)      | 0.1 + 0.2           | 0.30000000000000004
Rust (f64)      | 0.1_f64 + 0.2_f64   | 0.30000000000000004
Same hardware standard, same answer, in every mainstream language.

Each language ships a fix in its standard library or a battle-tested third-party package:

Language                | Decimal solution
JavaScript / TypeScript | decimal.js or big.js (npm)
Python                  | decimal module (built-in)
Java                    | java.math.BigDecimal (built-in)
C# / .NET               | System.Decimal (128-bit, built-in)
Rust                    | rust_decimal crate (crates.io)
Go                      | shopspring/decimal (most adopted)
Postgres                | NUMERIC type with explicit precision/scale
Use these for any code path that touches money, returns, or fees.

The cost: speed and ergonomics

Decimal arithmetic is not free. Decimal.js operations are roughly 20–50x slower than native JavaScript number arithmetic, depending on the operation and the precision setting. For a daily TWR over a single account that is irrelevant — the entire computation runs in milliseconds even with thousands of sub-periods. For high-frequency intra-bar pricing it would matter; that is not a NakedPnL concern.

The ergonomics are mildly worse. You write `a.plus(b)` instead of `a + b`, and you must explicitly construct Decimal instances from string inputs to avoid round-tripping through a float on the way in. The TypeScript ecosystem has matured to the point where this overhead is small in practice — every NakedPnL adapter accepts exchange numeric fields as strings, never as parsed JSON numbers, exactly to avoid the lossy conversion.

Strings in, decimals through, strings out
The discipline that prevents 99% of float bugs is to never let a financial number touch a native float type. Parse the exchange response as a string, construct a Decimal directly from the string, do all math with Decimals, and serialize back to a string for storage and display.
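The lossy step is easy to miss because it happens inside JSON.parse, before any of your code runs. A sketch of the difference — the `qty` field name is illustrative, not any particular exchange's schema:

```javascript
// A venue that serializes quantities as JSON numbers loses
// precision at parse time — the float is created before your
// code ever sees the value.
const lossy = JSON.parse('{"qty": 0.100000000000000000001}');
console.log(lossy.qty); // -> 0.1 (the extra digits silently collapsed)

// The same field as a JSON string survives intact, ready to be
// handed directly to a decimal constructor.
const safe = JSON.parse('{"qty": "0.100000000000000000001"}');
console.log(safe.qty); // -> "0.100000000000000000001"
```

This is why string-typed numeric fields in an exchange API are a feature, not an inconvenience: they are the only way the full-precision value can cross the JSON boundary.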

How NakedPnL's engine handles it

The TWR engine at lib/calculation/twr-engine.ts is Decimal.js end-to-end. Inputs are pulled from each venue adapter (Binance, Bybit, OKX, IBKR) as raw API response strings. The adapter constructs Decimal instances directly from those strings, never via Number conversion. Every sub-period return, every chain-link multiplication, and every fee accrual stays in Decimal until the canonical form is hashed.

The hash itself is computed over the Decimal toFixed(28) string of the result, not over a float. That means a re-verifier in Python, Java, or any other language can independently parse the published value as a high-precision decimal and SHA-256 the canonicalized string to reproduce the chain hash exactly. The full reference implementation is at /docs/verification.

Money-weighted return is no different

Everything in this article applies equally to any other monetary calculation. Money-weighted return (IRR) involves polynomial root-finding, which iterates Newton's method or bisection over the same float operations. Black-Scholes pricing, P&L attribution, fee schedules, drawdown calculations — all of them deteriorate under naive float arithmetic. The fix is the same: use a decimal type at the boundary and stay in decimals through every intermediate calculation.

Frequently asked questions

How much TWR error does float arithmetic actually introduce in practice?
For a typical 1-year daily track record (about 365 sub-periods), the cumulative error from float arithmetic alone is usually below 1 basis point — the rounding bound is roughly n × machine epsilon, which works out to about 10^-13 relative error. The reason to still use decimals is reproducibility: a 10^-13 difference between the original engine and a third-party re-verifier breaks the SHA-256 hash match even though the absolute number is fine. NakedPnL's whole value proposition is byte-exact re-verifiability, so even invisible drift is unacceptable.
Why not use cents (integers) instead of decimal?
Integer cents work for nominal balances (no representation loss in dollars and cents) but break down the moment you compute a return — the result is a fractional number that does not fit cleanly in any integer denomination. Decimal libraries handle both balances and returns in a single type. They are the right abstraction for end-to-end financial math.
Does PostgreSQL NUMERIC have the same problem as floats?
No. The NUMERIC type in PostgreSQL is an arbitrary-precision decimal, semantically equivalent to BigDecimal or Decimal.js. Storing returns and balances in NUMERIC columns rather than DOUBLE PRECISION columns is essential. The Prisma schema for NakedPnL uses NUMERIC for every monetary field for exactly this reason.
What about JavaScript's BigInt?
BigInt is arbitrary-precision integer, not decimal. It solves the integer overflow problem (which is real — JavaScript's Number type cannot exactly represent integers above 2^53) but it does not help with fractional values. You either build a fixed-point system on top of BigInt or use a dedicated decimal library. Decimal.js is the more idiomatic answer for financial returns.
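A minimal sketch of what "a fixed-point system on top of BigInt" means — 18 decimal places of scale, with the rounding policy you would have to choose yourself. The helper names and the plain "int.frac" parser are illustrative only:

```javascript
// Fixed-point on BigInt: every value is an integer count of
// 10^-18 units. Addition is exact; multiplication must rescale,
// and the division below TRUNCATES — picking a rounding policy
// is exactly the work a decimal library does for you.
const SCALE = 10n ** 18n;

const toFixed18 = (s) => {
  // Parse a plain "int.frac" decimal string into a scaled BigInt.
  const [int, frac = ''] = s.split('.');
  return BigInt(int + frac.padEnd(18, '0').slice(0, 18));
};

const mulFixed = (a, b) => (a * b) / SCALE; // truncating multiply

const half = toFixed18('0.5');
console.log(mulFixed(half, half)); // -> 250000000000000000n, i.e. 0.25
```

Everything beyond this toy — negative values, division, rounding modes, precision tracking — is why reaching for Decimal.js is usually the better engineering call.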
Are there any places where floats are still acceptable?
Yes. Anywhere the result is bounded, never multiplied by money, and used only for display or non-financial computation, floats are fine. Things like UI animation timings, statistical metrics computed on already-decimal inputs (a Sharpe ratio is a ratio of decimals — the float result of the division is acceptable for display), or pixel coordinates. Anything that touches a balance, a return, or a fee belongs in a decimal type.
Why does NakedPnL hash the decimal string instead of the bytes?
Because the canonical decimal string is the only representation guaranteed identical across implementations. A Decimal.js binary representation is internal to the library and not portable. A toFixed(28) string is a fully specified text format that any compliant decimal library in any language will produce identically given the same input, which is what makes the SHA-256 chain re-verifiable from a Python or Rust implementation.

References

  • IEEE Std 754-2019 — Standard for Floating-Point Arithmetic
  • Goldberg, D. (1991). What Every Computer Scientist Should Know About Floating-Point Arithmetic. ACM Computing Surveys.
  • Decimal.js — Arbitrary-precision decimal library
  • Python decimal module (PEP 327)
  • PostgreSQL — Numeric Types Documentation
NakedPnL is a publisher of verified investment performance data. We are not an investment adviser, broker, dealer, or asset manager, and nothing on this page constitutes investment advice or a recommendation. See the compliance page for our full regulatory posture.