A daily practice for scanning the horizon — surfacing unusual events, papers, projects, and announcements that hint at possible futures before they become trends.
Signals is a personal futures intelligence project by Roger Meike. Every day, four AI systems generate candidate leads — potential weak signals hiding in the noise. Those leads are then independently verified: every link confirmed, every claim traced to its primary source, every statistic checked. Only verified signals make it into a report.
The result is a curated daily feed filtered for the weird, the surprising, and the potentially consequential — outliers and early indicators that suggest where things might be heading, typically 6–18 months before they become obvious trends.
Beyond daily scanning, the practice has grown to include deep research dives, opinion divergence mapping, structured second-order consequence analysis, a probabilistic scenario engine, and quarterly retrospectives that close the loop between what we predicted and what actually happened.
What we look for:

- Unexpected capabilities, new frameworks, tools fighting AI side effects
- Error correction breakthroughs, new applications, accessibility milestones
- Garage projects, open source tools, one-person projects punching above their weight
- Legislative shifts, digital sovereignty moves, infrastructure vulnerabilities
- Health-tech crossovers, robotics, bio-computing interfaces
- Retro-computing revivals, bizarre experiments, things that make you go "wait, what?"
What we skip: routine product updates, funding announcements (unless the structure is novel), obvious hype cycles, and mainstream news that everyone already knows. Unverifiable leads are out by definition: if we can't confirm the link works and the content matches the claim, it doesn't make the report. Quality over quantity: three verified signals beat ten shaky ones.
The practice operates across three conceptual layers, each building on the one below. Daily observations feed measurable tracking signals, which in turn inform a landscape of possible futures.
Six scenarios that bound the space of plausible futures. Eight axes map where evidence currently points. Not predictions — a bounding box that sharpens as evidence accumulates.
Specific, measurable thresholds we monitor over time. When a threshold is crossed, the signal "fires." Each signal logs both confirming and disconfirming evidence.
Raw observations from scanning. High volume, mostly noise, occasionally revelatory. Events that happened in the world, verified against primary sources.
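As a rough sketch, the three layers could be modeled like this; every name and field below is hypothetical, not the project's actual schema. Observations accumulate as evidence under tracking signals, and tracking signals link upward into scenarios:

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    """Layer 1: a verified raw observation from daily scanning."""
    date: str
    summary: str
    source_url: str            # confirmed against the primary source

@dataclass
class TrackingSignal:
    """Layer 2: a measurable threshold monitored over time."""
    name: str
    threshold_description: str
    confirming: list[Observation] = field(default_factory=list)
    disconfirming: list[Observation] = field(default_factory=list)

@dataclass
class Scenario:
    """Layer 3: one of six futures bounding the space of possibilities."""
    name: str
    linked_signals: list[TrackingSignal] = field(default_factory=list)
```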
Each signal passes through a five-stage intelligence pipeline, from raw collection to actionable forecasting.
The entry point. Ranked views that direct your attention to what matters most right now: what moved this week, what's closest to crossing a threshold, and whether past predictions came true. Every item links through to the underlying evidence.
The daily feed. Four AI systems generate candidate leads, which are verified against primary sources and unified into a single report. Reports use adaptive categories that emerge from the signals themselves rather than forcing content into fixed buckets. Each report also includes a meta-signal — a higher-order pattern connecting the day's individual signals into a coherent theme.
When a pattern of weak signals points toward a potential future, we ask: what specific, measurable thing would need to change for this future to arrive? That becomes a tracking signal: a concrete threshold we monitor over time. When a threshold is crossed, the signal "fires," triggering a structured Futures Wheel analysis of second- and third-order consequences. Each signal logs both confirming and disconfirming evidence, and we run weekly bias audits to catch blind spots.
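A minimal sketch of the firing mechanics, under assumed names (nothing here is the project's real code): a signal carries a pre-registered numeric threshold, logs evidence on both sides, and fires once when the threshold is crossed.

```python
from dataclasses import dataclass, field

@dataclass
class SignalState:
    """Evidence log and firing state for one tracking signal (hypothetical schema)."""
    name: str
    target: float                     # pre-registered threshold value
    fire_above: bool = True           # fire when the metric rises above the target
    confirming: list[str] = field(default_factory=list)
    disconfirming: list[str] = field(default_factory=list)
    fired: bool = False

def update_signal(sig: SignalState, value: float) -> bool:
    """Record the latest measurement and fire once when the threshold is crossed.

    A fire is the trigger for the structured Futures Wheel analysis described above.
    """
    crossed = value > sig.target if sig.fire_above else value < sig.target
    if crossed and not sig.fired:
        sig.fired = True
        return True
    return False
```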
The synthesis layer. Six scenarios bound the space of possible futures, and eight axes map where evidence currently points. Axis positions are re-evaluated weekly against accumulated evidence. A probability engine computes evidence-weighted scenario probabilities based on the full evidence stream, structural causal mappings, and source quality assessments. These aren't predictions — they're a visualization of which directions the evidence currently points, with honest uncertainty preserved.
Beyond daily scanning, two kinds of in-depth analysis dig into specific topics.
Comprehensive research reports on a specific tracking signal or emerging topic. A deep dive asks: are we tracking the right signal? What's the detailed state of the art? Who are the key players? Based on current velocity, when might the threshold be crossed? What second-order effects should we anticipate? Deep dives are queued, prioritized, and scheduled on a regular cadence.
Where daily reports find events (things that happened), opinion divergence reports map disagreement — who thinks what, why, and whether positions are converging or splitting apart. For each topic, we systematically gather positions from multiple stakeholder lenses: public opinion, academic research, industry strategy, policy and regulation, practitioner experience, and foresight analysis. The pattern of disagreement often reveals timing better than events do: convergence suggests an inflection is near; divergence suggests uncertainty is structural.
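One illustrative way to quantify that disagreement; the -1 to +1 stance scale and the spread metric below are assumptions, not the project's actual method:

```python
from statistics import pstdev

# Illustrative stance scores per stakeholder lens, on a -1 (skeptical) to +1 (bullish) scale.
stances = {
    "public_opinion": -0.2,
    "academic_research": 0.4,
    "industry_strategy": 0.8,
    "policy_regulation": -0.5,
    "practitioner_experience": 0.1,
    "foresight_analysis": 0.6,
}

def divergence(stances: dict[str, float]) -> float:
    """Spread of stances: low means convergence, high means structural disagreement."""
    return pstdev(stances.values())

print(f"divergence score: {divergence(stances):.2f}")
```

Tracking this score over time is what turns a snapshot of opinion into a timing indicator: a falling score suggests an approaching inflection, a stubbornly high one suggests the uncertainty is structural.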
The Futures tab is backed by a probability engine that computes evidence-weighted scenario probabilities. Rather than relying on subjective axis positioning alone, the engine ingests every evidence record from every daily report and tracking signal update, then computes how much each piece of evidence shifts each scenario's probability.
Each evidence record is assessed on three dimensions: source quality (Tier 1 primary measurement carries more weight than Tier 3 secondary reports), implication strength (how directly the evidence bears on the signal's threshold), and independence (whether the evidence is genuinely new information or redundant coverage of the same event).
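A sketch of how the three dimensions might combine into a single evidence weight. The tier weights and the multiplicative rule are illustrative assumptions, not the engine's calibrated values:

```python
from dataclasses import dataclass

TIER_WEIGHT = {1: 1.0, 2: 0.6, 3: 0.3}   # illustrative: primary > official claim > secondary

@dataclass
class EvidenceRecord:
    claim: str
    tier: int                 # 1 = primary measurement, 2 = official claim, 3 = secondary report
    implication: float        # 0..1: how directly it bears on the signal's threshold
    independence: float       # 0..1: 1.0 = genuinely new, near 0 = redundant coverage

    def weight(self) -> float:
        """One plausible combination: the three dimensions attenuate multiplicatively."""
        return TIER_WEIGHT[self.tier] * self.implication * self.independence
```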
Evidence connects to scenarios through a structural causal map that declares, for each tracking signal and scenario pair, not just whether the signal is relevant but how it's relevant: is the signal necessary for the scenario? Sufficient on its own? One of several possible paths? Or incompatible? These structural roles determine how strongly evidence flows through to scenario probabilities.
The six scenario probabilities are computed independently and do not sum to 1. Scenarios are overlapping and the space is not exhaustive — elements of multiple futures can coexist simultaneously.
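Putting the structural roles and the independent probabilities together, a hedged sketch might look like the following. The role multipliers and the log-odds update rule are assumptions; the key property it preserves is that each scenario updates on its own, with no normalization across the six:

```python
import math
from enum import Enum

class Role(Enum):
    NECESSARY = "necessary"        # scenario cannot happen without this signal
    SUFFICIENT = "sufficient"      # this signal alone can bring the scenario about
    ONE_PATH = "one_path"          # one of several possible routes
    INCOMPATIBLE = "incompatible"  # evidence for the signal counts against the scenario

# Illustrative multipliers for how strongly weighted evidence flows through each role.
ROLE_FLOW = {Role.NECESSARY: 1.0, Role.SUFFICIENT: 1.0,
             Role.ONE_PATH: 0.4, Role.INCOMPATIBLE: -1.0}

def update_probability(prior: float, evidence_weight: float,
                       role: Role, scale: float = 0.5) -> float:
    """Shift one scenario's probability in log-odds space.

    Scenarios update independently, so the six results deliberately
    do not sum to 1. `prior` must be strictly between 0 and 1.
    """
    log_odds = math.log(prior / (1 - prior))
    log_odds += scale * ROLE_FLOW[role] * evidence_weight
    return 1 / (1 + math.exp(-log_odds))
```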
A 9×9 cross-impact matrix models how advancement in one domain (meta-signal) changes the probability trajectory of others. For example, hardware efficiency gains enable the sovereignty stack, which in turn strengthens trust infrastructure — a three-stage positive cascade. Conversely, agentic capability combined with agent economy growth creates a pipeline toward labor displacement. The matrix is updated after quarterly retrospectives and after any cluster of related threshold firings.
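A compact sketch of how such a cascade could be computed. Only the six domains named above come from the project; the remaining names, the coefficients, and the three-step propagation are placeholders:

```python
import numpy as np

DOMAINS = ["hw_efficiency", "sovereignty_stack", "trust_infra", "agentic_capability",
           "agent_economy", "labor_displacement", "d7", "d8", "d9"]  # last three: placeholders

# cross_impact[i, j]: how much a probability-trajectory shift in domain j pushes domain i.
cross_impact = np.zeros((9, 9))
cross_impact[1, 0] = 0.3   # hardware efficiency gains enable the sovereignty stack
cross_impact[2, 1] = 0.2   # ...which in turn strengthens trust infrastructure
cross_impact[5, 3] = 0.25  # agentic capability feeds labor displacement...
cross_impact[5, 4] = 0.25  # ...as does agent-economy growth

def propagate(delta: np.ndarray, steps: int = 3) -> np.ndarray:
    """Cascade a shock through the matrix; each step applies one round of cross-impacts."""
    total = delta.copy()
    for _ in range(steps):
        delta = cross_impact @ delta
        total += delta
    return total

shock = np.zeros(9)
shock[0] = 1.0                     # a jump in hardware efficiency
print(propagate(shock).round(3))   # shows the three-stage cascade into trust infrastructure
```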
When a tracking signal crosses its threshold, we don't just check whether the predicted consequence materialized. A structured Futures Wheel traces consequences across four rings: the event itself, first-order effects already visible, second-order consequences expected in 6–24 months, and third-order implications at the 2–5 year horizon. This generates new tracking signal candidates, deep dive queue items, and calibration-weighted adjustments to the Futures visualization.
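As a data structure, the wheel can be as simple as an event plus three rings of consequences (a hypothetical schema, not the project's):

```python
from dataclasses import dataclass, field

@dataclass
class FuturesWheel:
    """Four rings traced when a tracking signal fires."""
    event: str                                              # ring 0: the threshold crossing
    first_order: list[str] = field(default_factory=list)    # effects already visible
    second_order: list[str] = field(default_factory=list)   # expected in 6-24 months
    third_order: list[str] = field(default_factory=list)    # the 2-5 year horizon

    def new_signal_candidates(self) -> list[str]:
        """Outer-ring consequences are natural candidates for new tracking signals."""
        return self.second_order + self.third_order
```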
Each of the four AI sources uses a differentiated prompt tuned to its platform's particular strengths. The sources generate candidate leads, not verified signals; every lead must be independently confirmed before inclusion.
Not all evidence is created equal. We use a three-tier system to assess how much weight to give each data point — and actively watch for hype.
| Tier | Type | Examples |
|---|---|---|
| Tier 1 | Primary measurement | Peer-reviewed paper, benchmark result, SEC filing, shipped product, enacted law |
| Tier 2 | Official claim | Company blog, press release, pre-print, announced partnership |
| Tier 3 | Secondary report | News article, analyst note, social media post, community discussion |
When a topic has lots of Tier 3 coverage but little Tier 1 evidence, that's a hype signal — many people talking, few verifiable results. Conversely, steady Tier 1 evidence accumulating quietly is a substance signal. We flag the difference.
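A toy version of that flag; the 3:1 ratio is illustrative, not the project's actual cutoff:

```python
from collections import Counter

def hype_or_substance(tiers: list[int], ratio: float = 3.0) -> str:
    """Flag hype when Tier 3 chatter outweighs Tier 1 evidence."""
    counts = Counter(tiers)
    t1, t3 = counts.get(1, 0), counts.get(3, 0)
    if t3 > ratio * max(t1, 1):
        return "hype: many people talking, few verifiable results"
    if t1 > 0 and t3 <= t1:
        return "substance: primary evidence accumulating quietly"
    return "mixed"

print(hype_or_substance([3, 3, 3, 3, 3, 3, 3, 1]))   # -> hype
```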
The probability engine also accounts for source stance: evidence from sources acting against their own interest (a skeptic conceding progress, or an advocate acknowledging a setback) carries more weight than evidence aligned with the source's incentives.
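Sketched as a weight multiplier; the stance labels and the 1.5x against-interest bonus are illustrative assumptions:

```python
def stance_multiplier(source_stance: str, evidence_direction: str,
                      bonus: float = 1.5) -> float:
    """Weight evidence more when it runs against the source's incentives.

    A skeptic conceding progress, or an advocate acknowledging a setback,
    earns the bonus; evidence aligned with incentives keeps weight 1.0.
    """
    against_interest = (
        (source_stance == "skeptic" and evidence_direction == "confirming") or
        (source_stance == "advocate" and evidence_direction == "disconfirming")
    )
    return bonus if against_interest else 1.0
```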
Each morning, a Claude agent visits all four AI sources, extracts their candidate leads, and runs them through a verification gate before synthesis. The process prioritizes quality over volume: a lead only becomes a signal once its link is confirmed and its claims trace to primary sources.
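A minimal sketch of the mechanical half of that gate, assuming a plain link-and-content check with the requests library. Real verification also traces claims and statistics to primary sources, which takes judgment rather than code, but a lead that fails even this check never reaches a report:

```python
import requests

def link_verifies(url: str, expected_terms: list[str], timeout: float = 10.0) -> bool:
    """Mechanical gate: the link resolves and the page mentions the claimed subject."""
    try:
        resp = requests.get(url, timeout=timeout, allow_redirects=True)
    except requests.RequestException:
        return False
    if resp.status_code >= 400:
        return False
    page = resp.text.lower()
    return all(term.lower() in page for term in expected_terms)
```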
Tracking signals make pre-registered predictions: "when X crosses threshold Y, consequence Z follows." We hold ourselves accountable through multiple mechanisms.
Quarterly retrospectives assess which fired signals' consequences actually materialized, score calibration accuracy, audit for blind spots using the Three Horizons framework, and run Causal Layered Analysis self-critiques on selected meta-signals to surface hidden framing assumptions. The current track record is visible in the Pulse tab.
Weekly bias audits count confirming versus disconfirming evidence across all tracking signals. Any signal with less than 15% disconfirming evidence gets flagged and counter-searched; a signal flagged two weeks running gets an immediate counter-search, no deferring. If we're only finding evidence that confirms our thesis, something is wrong with how we're looking.
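The audit rule, sketched with illustrative field names:

```python
def audit_signal(confirming: int, disconfirming: int,
                 flagged_last_week: bool, floor: float = 0.15) -> str:
    """Weekly bias audit: flag any signal whose disconfirming share falls below 15%."""
    total = confirming + disconfirming
    share = disconfirming / total if total else 0.0
    if share >= floor:
        return "ok"
    # Two consecutive flagged weeks escalate to an immediate counter-search.
    return "counter-search now" if flagged_last_week else "flagged: schedule counter-search"
```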
Calibration scoring applies to all fired signals. Scores weight how axis adjustments flow into the Futures visualization: well-calibrated fires get full adjustment, partial calibration gets half, and poorly calibrated fires trigger threshold revision before any adjustment is made.
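Sketched as a gating function over the three calibration outcomes (labels illustrative):

```python
def adjustment_factor(calibration: str) -> float | None:
    """How a fired signal's calibration gates its effect on the Futures axes.

    Returns the fraction of the proposed axis adjustment to apply,
    or None when the threshold itself must be revised before any adjustment.
    """
    return {"well_calibrated": 1.0,    # full adjustment
            "partial": 0.5,            # half adjustment
            "poor": None}[calibration] # revise the threshold first
```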