Assay

The feeling of understanding is indistinguishable from the real thing until you've experienced the real thing.

More data. More opinions. More signals. The information accumulates. The conviction doesn't.

Fifty-seven percent of all stocks underperform Treasury bills over their lifetime. Only two percent generate ninety percent of total market wealth creation. The base rates are brutal, and no amount of screening changes them.

What changes them is depth. Understanding a business well enough to know, when the stock drops forty percent and everything feels wrong, whether the thesis is breaking or the opportunity is widening. That kind of conviction isn't generated by dashboards. It's built through structured, adversarial analysis that tests every assumption and keeps only what survives.

The stock market is a device for transferring money from the impatient to the patient.

Warren Buffett

The Analytical Framework

Every analysis runs through over a hundred named analytical frameworks, drawn from decades of investment research and refined across hundreds of iterations. ROIIC — Return on Incremental Invested Capital — measures whether competitive advantages are widening or narrowing. Flywheel verification distinguishes genuine network effects from narrative. Value trap red flag patterns screen for the six diagnostic signals that predicted most historical failures.
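To make the ROIIC definition concrete: it compares the change in after-tax operating profit (NOPAT) to the change in invested capital over the same period. A minimal sketch in Python, with every figure hypothetical:

```python
def roiic(nopat_now, nopat_then, invested_now, invested_then):
    """Return on Incremental Invested Capital: the additional after-tax
    operating profit earned per incremental dollar of capital deployed."""
    delta_capital = invested_now - invested_then
    if delta_capital <= 0:
        raise ValueError("no incremental capital deployed over the period")
    return (nopat_now - nopat_then) / delta_capital

# Hypothetical figures (in $M): NOPAT grew 300 on 1,000 of new capital.
print(f"{roiic(1_300, 1_000, 6_000, 5_000):.0%}")  # 30%
```

A business compounding at a 30 percent incremental return is widening its advantage; one whose ROIIC is sliding below its cost of capital is narrowing it, whatever the headline margins say.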

The system tracks trajectory, not snapshots. A company's current state is a single frame. Its trajectory is the film. Segment-level data often reveals inflection points years before aggregate metrics catch up — NVIDIA's data center segment accelerating from eleven to forty-three percent growth while consensus still saw a cyclical gaming chipmaker. AWS growing triple digits, hidden in Amazon's “other” revenue before separate disclosure.
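The segment-versus-aggregate point is arithmetic. With illustrative numbers that loosely echo the pattern described (not actual filings), a small segment can inflect while total revenue still looks sleepy:

```python
def yoy_growth(revenue):
    """Year-over-year growth rates from a list of annual revenues."""
    return [(b - a) / a for a, b in zip(revenue, revenue[1:])]

# Hypothetical annual revenues: one segment inflecting, the rest mature.
segment = [100, 111, 159]          # accelerating: 11%, then 43%
rest    = [900, 920, 930]          # mature: low single digits
total   = [s + r for s, r in zip(segment, rest)]

print([f"{g:.0%}" for g in yoy_growth(segment)])  # ['11%', '43%']
print([f"{g:.0%}" for g in yoy_growth(total)])    # ['3%', '6%']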

Each company undergoes forensic examination — earnings transcripts read against financial filings, competitive claims verified through independent sources, the strongest bear case constructed and then stress-tested with specific counter-evidence. Competitive positioning maps the business against its actual market structure, its customers' real switching costs, and the conditions that would erode its advantages. The question isn't whether a moat exists — it's whether the moat is widening, stable, or narrowing, and at what rate.

The stage that gathers evidence is prohibited from forming opinions. The stage that renders judgment never sees the bull case first.

Gathers: what the filings actually say; the strongest case against; what independent sources verify.

Survives: only what it couldn’t destroy; whether the thesis survives; the finding, not the opinion.

Two layers that check each other. What survives is the analysis.

What the system sees in a single company

Every company is analyzed through mental models selected specifically for that business. Network effects for a platform — does each new user make the product more valuable for everyone else? Switching costs for enterprise software — what has the customer invested that they’d lose by leaving? Capital cycle dynamics for an industrial — is the industry over- or under-investing in capacity? Commoditization risk for a technology company — is the advantage durable, or is it something a competitor can replicate in eighteen months? The same data means different things through different lenses.

Before any competitive advantage claim is accepted, the system maps customer reality at the ground level. Not “high switching costs” — what specifically have customers invested that they cannot transfer? Data accumulated. Customizations built. Training invested. If the answer is nothing specific, the moat claim is narrative, not evidence.

What the system reads in the environment

Most macro analysis is noise. Twenty-nine commonly cited market predictors were tested across decades of data — most failed in the data they were built on and performed even worse outside it. The system uses only the signals that survive that scrutiny, and the most important one is not in the stock market at all.

Seven companies sit at the most connected nodes in the global economy. Individually, each is a business. Read simultaneously, their collective results reveal what’s happening in sectors they don’t even operate in — consumer spending across income levels, enterprise investment confidence, and infrastructure cycles that no single company’s earnings call can illuminate.

Twenty-seven U.S. bear markets since 1928. Every one followed by a full recovery. That is true, important, and incomplete. The index recovers because it is designed to recover — it drops failures and adds successes. Fifty-four percent of individual stocks never recovered their previous high. The system presents the full historical record, including the parts that most analysis smooths away.

What the system does at the portfolio level

The most consequential gap in most portfolios isn’t information — it’s the distance between where conviction is strongest and where capital is concentrated. Positions drift. A holding that appreciated into the largest position may no longer be the highest-conviction one. A high-conviction holding that was never sized up remains too small to matter. The system maps these mismatches with precision, because the portfolio an investor holds is rarely the portfolio their own analysis would build today.
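One simple way to surface that drift, sketched in Python with hypothetical tickers, weights, and conviction scores: rank each holding by portfolio weight and again by conviction, then flag the holdings where the two ranks diverge sharply.

```python
# Illustrative only: every ticker, weight, and conviction score is made up.
holdings = {
    "AAA": {"weight": 0.30, "conviction": 6},  # drifted large, mid conviction
    "BBB": {"weight": 0.05, "conviction": 9},  # high conviction, never sized up
    "CCC": {"weight": 0.20, "conviction": 8},  # weight and conviction aligned
}

by_weight = sorted(holdings, key=lambda t: -holdings[t]["weight"])
by_conviction = sorted(holdings, key=lambda t: -holdings[t]["conviction"])

# Flag holdings whose weight rank and conviction rank differ by 2+ places.
mismatches = [t for t in holdings
              if abs(by_weight.index(t) - by_conviction.index(t)) >= 2]
print(mismatches)  # ['AAA', 'BBB']
```

The mechanical version catches only the crude mismatches; the point is that the comparison is worth running at all, because positions drift silently.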

Every analytical framework has a failure mode that is not analytical — it is behavioral. A position at a loss looks different from a position at a gain, even when the forward return profile is identical. These distortions cost individual investors roughly three to four percent annually — not from bad analysis, but from decisions that felt right. Before any portfolio-level judgment, the system audits for the specific bias most likely to be distorting the analysis.
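The compounding cost of that drag is easy to understate. A back-of-the-envelope sketch, assuming a purely hypothetical 8 percent market return and a 3.5 percent behavioral drag, compounded over 30 years:

```python
def terminal_wealth(annual_return, years, start=1.0):
    """Compound a starting sum at a constant annual return."""
    return start * (1 + annual_return) ** years

# Hypothetical inputs: 8% market return, 3.5% annual behavioral drag.
market = terminal_wealth(0.08, 30)
behavioral = terminal_wealth(0.08 - 0.035, 30)
print(f"{market:.1f}x vs {behavioral:.1f}x")  # 10.1x vs 3.7x
```

Under those assumptions, a few points of annual drag consumes well over half of terminal wealth — which is why the audit comes before the judgment.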

What This Isn't

This isn't a stock screener. Screeners filter on quantitative criteria and return lists. This system reads earnings transcripts, verifies competitive claims through independent sources, stress-tests bear cases with specific evidence, and renders judgment on thesis fragility.

The analysis is performed by one of the most advanced AI models available — but the model is not the methodology. Strip away the architecture and what remains is a chatbot: capable, fluent, and structurally inclined to tell you what you want to hear. The model's intelligence is not what's scarce. What's scarce is the architecture that prevents it from taking shortcuts — that forces adversarial rigor where the model would default to agreement, enforces silence where it would default to certainty, and demands counter-evidence where it would default to confirmation.

It can still be wrong. AI oversimplifies, occasionally misreads data a human analyst would catch on instinct, and sometimes sounds more certain than the evidence warrants. What addresses these limitations is not the model — it's the architecture itself. Mandatory uncertainty acknowledgment, explicit anti-hallucination constraints, a five-level evidence verification hierarchy.

The model is the instrument. The architecture is the methodology. Neither is infallible — and any claim of infallibility would be the clearest sign that rigor had been replaced by reassurance.

It doesn't manufacture confidence. If the evidence is ambiguous, it says so. If the evidence indicates a thesis is broken, it presents the finding with the same specificity applied to favorable conclusions. If the analytical bar filters out every candidate examined, that's a finding, not a failure.

The most expensive analytical mistake isn't a wrong conclusion — it's a premature one. Selling a compounder during a temporary drawdown. Holding a deteriorating thesis because admitting the loss feels worse than accepting the risk. The disposition effect costs investors three to five percent annually.

When the evidence is insufficient, the analysis says so — rather than manufacturing an opinion to fill the silence. The gaps between data points are not emptiness. They are structure that hasn't been examined yet.

All I want to know is where I'm going to die, so I'll never go there.

Charlie Munger

The Discipline

Every analysis includes pre-mortem reasoning: assume the investment failed catastrophically three to five years from now, then write the narrative. This surfaces risks that optimism obscures. The honeypot screen — modeled on failures like GE, where blue-chip reputation and dividend aristocrat status masked years of cash flow divergence from reported earnings — tests whether attractive surface signals are supported by underlying fundamentals.

Three conditions break a thesis — and only three. The growth trajectory must be structurally impaired, not temporarily slow. Valuation must be egregiously disconnected from realistic growth while growth is simultaneously decelerating. Or business quality must have materially deteriorated. Everything else is noise that the system is designed to filter out.

I'd rather be wrong for documented reasons than right by accident.

The Bar

Eighty-six stocks out of twenty-six thousand generated half of all market wealth between 1926 and 2015. The average hundred-bagger took twenty years at twenty-six percent annually. Generational compounders at their inflection points endured drawdowns exceeding sixty percent — Apple saw three above seventy, Amazon above eighty.
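Both sets of numbers are easy to verify. Compounding 26 percent annually for 20 years yields roughly a hundredfold return, and the asymmetry of drawdowns explains why holding through them is so hard: the gain needed to recover a loss grows much faster than the loss itself. A quick check in Python:

```python
# The hundred-bagger arithmetic: 26% annually, compounded for 20 years.
multiple = 1.26 ** 20
print(f"{multiple:.0f}x")  # 102x

def recovery_gain(drawdown):
    """Gain required to regain the prior high after a fractional drawdown."""
    return drawdown / (1 - drawdown)

print(f"{recovery_gain(0.70):.0%}")  # a 70% drawdown needs a 233% gain
print(f"{recovery_gain(0.80):.0%}")  # an 80% drawdown needs a 400% gain
```

A 70 percent decline does not need a 70 percent rebound to break even; it needs more than a triple. That asymmetry is what conviction has to carry.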

The system applies these base rates. It doesn't pretend that good analysis can reliably identify the 0.4%. What it can do is build understanding deep enough to recognize the evidence pattern when it appears, and hold through the drawdown that tests whether your conviction was evidence-based or hope-based.

Price and thesis are different claims about the same company. A declining price does not invalidate a thesis built on structural evidence any more than a rising price validates one built on narrative. Whether an investment was misevaluated may not be determinable for years — the hundred-plus frameworks this system applies are designed to extrapolate trajectory, and trajectory unfolds over years, not quarters. A forty-percent drawdown three months after analysis may be a thesis failure or a buying opportunity, and the analysis builds the evidence to distinguish between them.

It will not always succeed. Even Buffett devotes a section of every annual letter to his mistakes. What these frameworks do is apply decades of investment research and structured adversarial reasoning to distinguish conviction built on evidence from confidence built on narrative. What they cannot do is eliminate uncertainty. No methodology can.

Real analysis takes weeks per company. Reading 10-Ks, verifying competitive claims through independent sources, building valuation frameworks, stress-testing bear cases with specific evidence. Even if you had the time, knowing which frameworks to apply and how to apply them correctly is itself years of study.

I work full-time as a software developer. I have nine holdings. That's over a year of research just to stay current on what I already own — before looking at anything new.

So I built a system that could do it. I took the analytical frameworks I'd studied as a foundation and spent months building on them — iterating on analytical architecture, testing against real portfolios, refining until the architecture encoded everything I could find about how to actually evaluate a business — and then how to challenge every conclusion it reached. The architecture is the product of that obsession, not a copy of any syllabus.

In 1995, astronomers pointed the Hubble Space Telescope at the darkest patch of sky they could find — a region so black it appeared completely empty. The ten-day exposure came back with roughly three thousand galaxies. What looked like nothing was the deepest structure in the universe.

That depth of analysis opens the shutter on the space between your data points — the parts you skipped because they looked empty.

You can't go back to the naked eye.

What this costs →