Agentic systems are moving from advising humans to acting on their behalf. MATTR Labs is exploring what this shift means for trust, authorisation, and accountability.
MATTR Labs will release a series of articles, followed by a webinar at the series' conclusion, examining what changes when software is authorised to act and what that means for real-world systems.

This work is not about predicting winners or promoting specific technologies. It’s about understanding the trust infrastructure that must exist for agentic systems to operate responsibly at scale.

Questions of trust, authorisation, and accountability surface across every agentic ecosystem, regardless of platform, protocol, or industry.
Over five weeks, MATTR Labs will publish a short series of perspectives exploring how trust needs to evolve as agentic systems scale.
Published:

1. For years, AI in commerce has largely been advisory: search, recommendations, comparisons, and customer support. These systems influenced decisions, but humans still executed them.

2. Understanding where fragmentation is acceptable helps narrow the real problem. Once software is authorised to act, the challenge isn’t coordination between systems; it’s how responsibility is delegated, constrained, and evidenced when things go wrong.

3. Delegation changes the trust model. When a human clicks a button, intent is implicit. When software acts on someone’s behalf, intent has to be made explicit.

Coming soon: two further perspectives in the series.
At the conclusion of this series, MATTR Labs will host a live session to explore these questions in more depth.
MATTR Labs is MATTR’s hub of innovation, focused on emerging technologies core and adjacent to digital credentials.
Our current experiment: tools.mattrlabs.com