We accept that high‑quality training demands curated, labeled datasets. Inference should follow the same rule. CxUs bring that discipline to runtime: the same clarity, labeling, and provenance we require for training data—applied to the context we feed the model when it answers.
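To make that concrete, here is a minimal sketch of what a labeled, provenance-carrying context unit could look like. The class and field names are hypothetical illustrations, not the actual CxU schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch only: field names are illustrative, not the real CxU schema.
@dataclass(frozen=True)
class ContextUnit:
    content: str           # the text actually placed in the model's context
    source_uri: str        # provenance: where the content was pulled from
    label: str             # curation label, e.g. "verified" or "unreviewed"
    captured_at: datetime  # when the unit entered the pipeline

unit = ContextUnit(
    content="Refund policy: 30 days from date of purchase.",
    source_uri="https://example.com/policies/refunds",
    label="verified",
    captured_at=datetime.now(timezone.utc),
)
print(f"[{unit.label}] {unit.source_uri}")
```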
Even though we've seen this train coming, seeing it in action still gives me pause: the moment something becomes "easy," it also becomes dangerous. Over the weekend I started playing around with Sora 2 from OpenAI, and let's just say this thing is equal parts miracle and nightmare fuel.
The “cameo” feature alone is so good it’s unsettling. What used to take teams of visual-effects artists and GPU clusters now happens on a laptop, or worse, on your teenager’s iPhone. You have to admit the tech is amazing, but put a malicious bent on things and you have to consider that when the technical barrier drops, the attack surface explodes. And Sora 2 just dropped that barrier to the floor.
What seems like eons ago, back in March 2025, I wrote a probably-too-long paper about the nature of establishing relationships between humans over digital channels. Digital Identity Verification was an attempt at a framework to categorize how trust is formed between people, and then to use that framework to systematically explain how bad actors manipulate that trust to perpetrate identity fraud.
I still think the paper is accurate, though I missed a critical point. I had positioned my argument around people establishing trust with other people via technology. What I overlooked is that the majority of technical approaches today focus on establishing trust between devices and/or software and, by extension, the people using them. These approaches often assume that the identity of the person becomes inherent to the device.
It is this critical gap — between human and technology — that we are looking to bridge with Vero. Building tools to facilitate real-time peer-to-peer authentication inherently creates friction in the process. But our friction is intentional: it’s a demonstration of trust, a clear signal of intent. And that signal is the smoke. In our case, if you see the smoke, you can be confident there is no fire.
The 2025 Context Engineering Survey, which reviewed more than 200 research papers and enterprise pilots, cautions: “Simply enlarging an LLM’s context window does not guarantee reliable attribution or auditability; we still need explanation systems, audit mechanisms, and governance structures.” (Context Engineering Survey 2025 §4). Put differently, the problem isn’t raw memory capacity — it’s the provenance of the information we cram inside. This is exactly the rationale we followed when designing Context Units for our Pyrana platform.
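One way to picture that provenance requirement: fingerprint each context snippet at ingestion, then re-verify it at answer time and emit an audit line the model's output can be traced back to. The sketch below is a toy illustration under that assumption, not Pyrana's actual implementation:

```python
import hashlib
from datetime import datetime, timezone

def fingerprint(text: str) -> str:
    """Content hash recorded when a context snippet is ingested."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# Toy manifest: provenance recorded at ingestion time (illustrative paths only).
manifest = {
    "s3://finance/reports/q3.txt": fingerprint("Q3 revenue grew 12% year over year."),
}

def audit(source: str, text: str) -> str:
    """Verify a snippet against the manifest before it reaches the model."""
    ok = manifest.get(source) == fingerprint(text)
    stamp = datetime.now(timezone.utc).isoformat()
    return f"{stamp} {'PASS' if ok else 'FAIL'} {source}"

print(audit("s3://finance/reports/q3.txt", "Q3 revenue grew 12% year over year."))
```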
It's funny: I've spent years talking about tokens in all of my blockchain work. Whether it was NFTs or stablecoins, tokens seemed to be everywhere. Now that we're diving in deep and trying to fix some problems with context in AI, the topic of tokens (albeit a very different kind) is still center stage. The phrase "tokenization of the world" has taken on a new meaning for me.
Enterprises plough well over $200 billion every year into data warehouses, BI dashboards, and analytics suites, and analysts now forecast the big-data analytics market to top $1.1 trillion by 2033¹. Financial-services firms alone spend $44 billion a year just on market-data feeds². Yet studies keep showing that roughly half of the information companies collect never gets used³.
When we think, our knowledge flows like water: it's hard to grasp why we know something or where we learned it (we rarely stop to ask whether it's even true), yet we draw conclusions that we "think" are right. If we later ask "why did I think that?" we can rationalize our thought process to explain it to or convince others, but that usually happens in hindsight. We like to believe that logic and reason guided us to a conclusion, but most of our decisions are made by our gut and justified later. The more often we turn out to be right, the less we are challenged and the more confident we become in our thinking.
The proliferation of AI-generated deepfakes has escalated threats of identity fraud in digital communications. This paper examines existing identity verification methods, introduces a strategic framework employing layered defenses to significantly increase attacker complexity, and proposes integrating cryptographic visual signatures alongside traditional verification methods. By analyzing attacker-defender dynamics using game theory, and referencing contemporary adversarial economics literature, we demonstrate the practical effectiveness of combining multiple verification modalities to deter identity fraud.
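As a back-of-the-envelope illustration of why layering pays off (assuming, for simplicity, independent layers, which real attackers and defenses only approximate): if each verification layer catches an impersonation attempt with probability p, the attacker's chance of evading all n layers shrinks geometrically.

```python
# Toy model: independent verification layers, each catching an attack
# with probability p. Evasion probability after n layers is (1 - p)**n.
def evasion_probability(p: float, layers: int) -> float:
    return (1.0 - p) ** layers

for n in (1, 2, 3):
    print(f"{n} layer(s): attacker evades with p = {evasion_probability(0.7, n):.3f}")
# 1 layer: 0.300, 2 layers: 0.090, 3 layers: 0.027
```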
There’s nothing new under the sun… If 20 years of technology consulting has taught me anything, it’s that a good story can be retold forever. Sometimes we just need to update it with the current buzzwords or fit it to the current narrative for it to seem relevant. Today’s narrative of choice, of course, is AI. We hear a lot about how enterprise AI agents are going to revolutionize corporate organizations, but when I ask people how, it sounds a lot like the same old story.