Why a Screenshot of P&L Isn't Evidence — And What Actually Counts as Proof
A screenshot is a moment-in-time projection. It cannot prove completeness, methodology, or that the underlying account exists. What proof of trading performance actually requires.
- A screenshot is a static image. It cannot be re-derived from primary records, cannot prove the account exists, and cannot establish completeness over a claimed period.
- The five structural problems with screenshot-as-proof: editability, window cherry-picking, methodology opacity, single-account framing, and absence of timeline anchor.
- Real proof of trading performance requires four properties: independent primary data, deterministic computation, append-only history, and a published methodology.
- Screenshots have valid uses (illustration, communication) but not as substitutes for verifiable performance data.
A trader posts a screenshot on Twitter. It shows a brokerage P&L of $1.2 million for the year. The replies treat it as proof. They should not. A screenshot is an image. It is not a record of trading; it is a picture of a record of trading at one moment, taken by someone with full control over what is in the frame and what is not. There are five distinct, structural reasons a screenshot fails as evidence, and once you see them, the question 'do you trust this trader's P&L screenshot?' becomes the wrong question. The right question is: what would actual proof look like, and does this trader have it?
This guide walks through the five structural problems and lays out what proof actually requires. It is not an attack on traders who post screenshots — most are sharing in good faith. It is an attack on the convention that a screenshot suffices, because that convention shifts the burden of doubt onto the viewer in exactly the wrong direction. The verified-track-record glossary entry covers the four properties of a verified record; this guide explains why a screenshot does not satisfy any of them.
Problem 1 — Editability
Modern image editors can produce a screenshot indistinguishable from a real one. Browser developer tools can rewrite a page's DOM and the resulting screenshot looks pixel-identical to a screenshot of the unedited page. The visual signal that 'this is a real exchange dashboard' is unreliable; any element on the page can be replaced with a different value, and the resulting image carries no marker of authenticity. This is not paranoia — it is the everyday capability of any user with a browser. A screenshot's apparent authenticity comes from social context (a poster's reputation, a venue's branding) rather than from anything intrinsic to the image.
Some venues sign their statements cryptographically (FxPro's MT4/MT5 PDFs are an example). Those signed PDFs are a real defence at one specific layer — the file content is bound to the venue's signing key. Screenshots have no equivalent. There is no signature on a screenshot, no canonical form, no checksum. A reviewer looking at one has nothing to compare against and no way to detect editing.
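The difference is concrete: primary data has canonical bytes that can be hashed and compared, while a screenshot has no canonical form to hash at all. A minimal Python sketch of the checksum idea (the JSON payloads are invented for illustration, not any venue's export format):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of raw bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Two reviewers who pull the same primary export get identical bytes,
# so their digests match; any edit changes the digest completely.
export_v1 = b'{"account":"main","realised_pnl":"50000.00"}'
export_edited = b'{"account":"main","realised_pnl":"1200000.00"}'

print(sha256_hex(export_v1) == sha256_hex(export_v1))      # digests match
print(sha256_hex(export_v1) == sha256_hex(export_edited))  # edit detected
```

An edited screenshot passes every visual check; an edited export fails this one-line comparison. That asymmetry is why primary records, not images of them, are the verification surface.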
Problem 2 — Window cherry-picking
A screenshot shows whatever window the dashboard had open at the moment of capture. Most exchange dashboards let the user select date ranges, account subsets, instrument filters, and metric definitions. A screenshot of '2024 P&L' tells the viewer nothing about what 2023 looked like, what other accounts the trader operates, what specific period within 2024 the dashboard was filtered to, or what the trader's full venue inventory is. Even if the visible numbers are perfectly accurate, the framing is the publisher's choice and the viewer has no way to confirm that the chosen window is representative.
This is the same structural problem the methodology guide on why most leaderboards are gameable describes for operator-rendered rankings. The publisher controls the window; the viewer sees only the window; without an external chain that establishes the full history, completeness is unverifiable. The screenshot does not lie; it just answers a question the viewer probably did not mean to ask.
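The window effect is easy to demonstrate. In this hypothetical (the monthly return series is invented), the same trader's record yields two very different headline figures depending on which window the dashboard is filtered to:

```python
from math import prod

def cumulative_return(monthly_returns):
    """Compound a list of simple monthly returns into one cumulative figure."""
    return prod(1 + r for r in monthly_returns) - 1

# Hypothetical 24-month history: a losing 2023 followed by a strong 2024.
returns_2023 = [-0.06, -0.04, 0.01, -0.05, -0.03, 0.02,
                -0.04, -0.02, 0.01, -0.03, -0.05, -0.02]
returns_2024 = [0.05, 0.07, 0.03, 0.06, 0.04, 0.05,
                0.06, 0.03, 0.05, 0.04, 0.06, 0.05]

print(f"screenshot window (2024): {cumulative_return(returns_2024):+.1%}")
print(f"full history (2023-2024): {cumulative_return(returns_2023 + returns_2024):+.1%}")
```

Both numbers are arithmetically correct; only the second answers the question a viewer actually cares about.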
Problem 3 — Methodology opacity
A dashboard's headline number is the result of a calculation. Different calculations on the same data give different numbers. 'Total P&L' on a Bybit derivatives dashboard is gross of funding-rate accruals; 'Realised P&L' on a Binance USDT-M dashboard depends on whether the trader has closed positions or only marked them; an account-level 'Return' figure on a generic broker can be total return, time-weighted return, or money-weighted return depending on the broker's choice. The screenshot does not state which methodology produced the number. The viewer assumes; the publisher chose.
The methodology guide on auditing a crypto trader's PnL from raw API data covers why fee handling alone can move a 'gross' P&L figure by 5-30% relative to the net realisation. None of that is visible in a screenshot. A trader claiming 'I made 80%' and showing a dashboard that displays gross-of-funding figures may have realised 50% net, which is a different result; the screenshot does not betray the difference because the dashboard does not surface it.
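A sketch of how the same fills diverge under different methodologies (the trade records and fee figures below are invented, not any venue's schema):

```python
# Hypothetical closed trades: signed trade P&L, plus the fees and
# funding-rate accruals a gross dashboard figure silently omits.
trades = [
    {"pnl": 1200.0, "fee": 18.0, "funding": 4.5},
    {"pnl": -450.0, "fee": 12.0, "funding": -2.0},
    {"pnl": 900.0,  "fee": 15.0, "funding": 6.0},
]

gross = sum(t["pnl"] for t in trades)
net = sum(t["pnl"] - t["fee"] - t["funding"] for t in trades)

print(f"gross P&L: {gross:,.2f}")   # what a gross-of-costs dashboard shows
print(f"net P&L:   {net:,.2f}")     # what the trader actually realised
```

Same data, two defensible numbers. A screenshot shows one of them without saying which; a published methodology says which, and why.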
Problem 4 — Single-account framing
A screenshot shows one account. A trader who operates multiple accounts can take a screenshot of the winning one and never mention the others. This is not a hypothetical; it is the standard pattern in retail-trader marketing. A 'trader' with five accounts and one good year on one of them produces a screenshot of that account's winning year and leaves the other four accounts undisclosed. The screenshot is honest about what it shows. The narrative around it is dishonest because the framing implies an aggregate result that the underlying portfolio does not support.
An honest aggregate requires either a single connected portfolio across all the trader's accounts (which a screenshot cannot demonstrate) or an external attestation that the connected accounts are the trader's complete set. The methodology guide on survivorship bias in trader rankings covers the same problem applied to ranking surfaces. A registry that aggregates daily NAV across multiple connected venues produces a single combined chain; a screenshot of one of those venues does not.
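A hypothetical illustration of the gap (account names and returns are invented): the best account's screenshot and the NAV-weighted aggregate can tell opposite stories.

```python
# Five hypothetical accounts run by the same trader, equal starting NAV.
accounts = {"acct_1": -0.35, "acct_2": -0.20, "acct_3": 0.80,
            "acct_4": -0.15, "acct_5": -0.40}
start_nav = {name: 10_000 for name in accounts}

# The screenshot shows the best account...
best = max(accounts, key=accounts.get)

# ...while the honest figure is the NAV-weighted aggregate across all five.
total_start = sum(start_nav.values())
total_end = sum(start_nav[a] * (1 + accounts[a]) for a in accounts)
aggregate = total_end / total_start - 1

print(f"screenshot account: {accounts[best]:+.0%}")
print(f"full portfolio:     {aggregate:+.0%}")
```

In this sketch the screenshot shows +80% while the portfolio lost money. Nothing in the image is false; the framing does all the lying.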
Problem 5 — Absence of timeline anchor
A screenshot has no anchor in time. A timestamp is only as trustworthy as the system that produced it; a metadata timestamp on the file is a property of the publisher's machine, not the venue's. There is no way to confirm that the figures in the screenshot reflect the venue's state on the date the screenshot claims. A backdated screenshot — produced today, edited to display last year's date — is indistinguishable from a real screenshot taken last year. Without an external anchor, the date claim is the publisher's word.
Real timeline anchors exist. A blog post about a trade that includes the screenshot, posted on a public platform with a timestamp the platform assigns, partially anchors the date — at least to the post date. A venue API response with a server-side timestamp is a stronger anchor. A daily NAV snapshot pulled at a fixed time and committed to a public ledger is the strongest. The OpenTimestamps Bitcoin anchoring used by NakedPnL is one form. The bitcoin-timestamp glossary entry covers the cryptographic anchoring that takes timeline integrity from 'the publisher's word' to 'the publisher cannot retroactively change this'.
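The chaining idea can be sketched in a few lines (this is illustrative Python, not NakedPnL's actual chain format): each daily snapshot's hash commits to the previous hash, so a retroactive edit changes every subsequent hash and is detectable by anyone holding a later hash.

```python
import hashlib
import json

def chain_snapshots(snapshots, genesis="0" * 64):
    """Link daily NAV snapshots so each entry commits to all prior entries."""
    prev = genesis
    chain = []
    for snap in snapshots:
        payload = json.dumps(snap, sort_keys=True) + prev
        prev = hashlib.sha256(payload.encode()).hexdigest()
        chain.append({"snapshot": snap, "hash": prev})
    return chain

days = [
    {"date": "2024-01-01", "nav": "100000.00"},
    {"date": "2024-01-02", "nav": "101250.00"},
    {"date": "2024-01-03", "nav": "100900.00"},
]

honest = chain_snapshots(days)
# Backdating day 2 breaks every hash from that point forward.
tampered = chain_snapshots([days[0],
                            {"date": "2024-01-02", "nav": "150000.00"},
                            days[2]])
print(honest[-1]["hash"] == tampered[-1]["hash"])  # the edit is detectable
```

Committing the latest hash to an external ledger (as OpenTimestamps does with Bitcoin) is what pins the chain to a date the publisher cannot move.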
What real proof requires
Real proof of trading performance has four properties, none of which a screenshot provides. First, independent primary data — the figures come from the venue's own records, accessible to a reviewer with read-only credentials, not from the trader's redrawing of those records. Second, deterministic computation — given the primary data, any reviewer can re-run the algorithm and arrive at the same figure to documented precision. Third, append-only history — historical entries are not editable, so a corrective entry is added rather than the original being silently restated. Fourth, published methodology — the calculation is openly documented and survives manager-to-manager comparison.
On NakedPnL, all four properties are constraints of the design. Read-only API keys provide independent primary data. The TWR engine in lib/calculation/twr-engine.ts is open-source, deterministic, and re-runnable. The chain is append-only — every NavSnapshot is hashed and chained, and the daily Merkle root is committed to Bitcoin via OpenTimestamps. The methodology is documented at /docs/verification with reference Python and JavaScript snippets. A reviewer with no trust in NakedPnL can re-derive every published figure on the /verify/chain/[handle] page using the Web Crypto API. The how-to-verify-a-trader-track-record-yourself guide walks the procedure end to end.
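The deterministic-computation property is worth seeing concretely. Below is a simplified Python sketch of a time-weighted return calculation — the general formula, not the code of the `lib/calculation/twr-engine.ts` engine, and the NAVs and flows are invented:

```python
def time_weighted_return(navs, flows):
    """Compound sub-period returns, stripping out the effect of deposits
    and withdrawals. navs[i] is NAV at the end of day i; flows[i] is the
    external flow (+deposit / -withdrawal) at the start of day i.
    navs[0] is the opening NAV; flows[0] is unused."""
    twr = 1.0
    for i in range(1, len(navs)):
        start = navs[i - 1] + flows[i]   # capital at work during day i
        twr *= navs[i] / start
    return twr - 1

navs = [100_000, 102_000, 151_500, 148_000]
flows = [0, 0, 50_000, 0]   # a 50k deposit before day 2

print(f"TWR: {time_weighted_return(navs, flows):+.2%}")
```

Given the same NAVs and flows, every reviewer who runs this arrives at the same figure; that is what 'deterministic' buys, and why a deposit cannot masquerade as a gain.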
What screenshots are good for
Screenshots are useful for illustration and communication. Showing a follower the layout of a trade setup, demonstrating that a feature works, walking through a problem visually — all of these are reasonable uses where the screenshot's role is to depict, not to prove. The problem is the conventional misuse: a screenshot deployed as the primary evidence of a performance claim. The fix is not to ban screenshots; it is to stop treating them as proof. A screenshot can illustrate a claim that is independently verified elsewhere; it cannot be the verification surface.
The same logic applies to broker statements (signed PDFs from a regulated firm), TradingView chart screenshots with watermarks, and venue-rendered ranking surfaces. Each one is informative; none is a complete verification surface. They are layers in a diligence file, valuable when combined with primary-record access and a chained NAV history, weak when used alone.
How to challenge a screenshot claim politely
Asking 'is your P&L verified?' is not an accusation; it is a reasonable due-diligence question. The polite framing is to invite the trader to add a verification surface rather than to demand they defend the screenshot. 'Have you considered publishing this on a chained registry like NakedPnL?' is a different conversation from 'I do not believe you'. The trader who has nothing to hide will engage with the question; the trader who responds with hostility is signalling something.
- Ask about completeness: 'Is this one of multiple accounts, or your full portfolio?'
- Ask about methodology: 'Is this gross or net of fees and funding?'
- Ask about period: 'What is the start date of this performance, and is the chart cumulative from that date?'
- Ask about source: 'Did this come from a venue API export or a dashboard screenshot?'
- Ask about anchoring: 'Is there a way to confirm this state existed on the date you posted?'
All five questions have honest answers if the underlying claim is honest. None of them is hostile. The trader who can answer them is operating with verifiable evidence; the trader who deflects is not. The questions are also useful self-discipline: a trader who cannot answer them about their own claim has identified what their proof surface is missing.
What changes when proof is verifiable
When proof is verifiable, the conversation moves from 'do you trust this trader' to 'does the math reproduce'. The trader has a stronger position because their figure is no longer a claim; it is a re-derivable computation. The viewer has a stronger position because they no longer have to extend trust on the basis of social signals. The marketplace, in aggregate, separates serious traders from noise more efficiently, because a verifiable record is cheap for an honest trader to produce, expensive to fake, and holds up to scrutiny in a way a screenshot does not.
This is what verification is structurally for. It is not about catching frauds — most traders are honest. It is about removing the burden of proof from the social context (where it does not belong) and placing it on the data (where it does). A screenshot is a social artefact; a chained NAV history is a data artefact. The latter does the work the former pretends to.