By this point in the conversation, it should be clear that agentic commerce isn’t really about agents.
It’s about what happens when software is authorised to act in the real world — across platforms, organisations, and jurisdictions — and what that implies for trust, accountability, and scale.
We’re often asked whether MATTR is an AI company. We’re not.
That distinction isn’t philosophical. It’s practical.
We build infrastructure for a world in which software — increasingly AI-driven software — is trusted to do things that matter: to initiate transactions, share sensitive information, and act on behalf of people and organisations under defined constraints.
Agentic systems accelerate this shift, but they don’t invent it. They remove the shortcuts we relied on when humans were always in the loop, and in doing so they expose the underlying trust problem more clearly.
As agentic ecosystems evolve, execution will continue to diversify. Different platforms, agents, and interaction models will coexist. That’s healthy. What can’t remain fragmented is how authority is expressed, how consent is enforced, and how outcomes are verified when something goes wrong.
Our work sits beneath applications and above infrastructure — at the point where trust becomes something systems can reason about, verify, and rely on independently. We contribute to the standards that define digital credentials, selective disclosure, and delegated authority, and we turn those standards into production-grade services that operate in regulated, real-world environments.
This isn’t theoretical work. It’s informed by deployments where failure has consequences — across government, financial services, and large-scale ecosystems where trust has to function between parties who don’t know each other in advance.
It always makes me smile when people say investors are “only interested in AI startups”. Experienced investors understand that when a new execution model emerges, durable value often accrues to the infrastructure that makes it usable at scale. The current AI cycle is a reminder of that: some of the most consequential outcomes haven’t been driven by models alone, but by the systems that allow those models to operate safely in the real world.
Agentic commerce follows the same pattern: intelligence creates possibility, and infrastructure turns that possibility into something organisations and consumers can depend on. If you’re building, deploying, or governing agentic systems — in commerce or beyond — now is the right time to engage seriously with how trust, authorisation, and accountability are represented in practice.
That’s the layer we focus on.