Cybersecurity

AI, Automation, and the Data Problem Hiding in Authorisation

AI automation

Niall McLoughlin

Authorisation steps into the spotlight

Identity conversations have centred on authentication. Single sign-on, MFA, passkeys, phishing resistance. We hardened the front door and declared progress, and I’ve written about how that is changing. That work really mattered, but once a user is authenticated, the question that determines real security is not who they are. It is what they can do.

That question is authorisation.

Modern security ambitions depend on dynamic decisions

The industry is now moving toward zero standing privilege, continuous identity evaluation, real-time access decisions, and automated enforcement. AI agents are beginning to act on behalf of users. Workloads are becoming autonomous. Security teams are being told, correctly, that access should be dynamic, contextual, and revocable at any moment.

None of that is achievable without serious investment in authorisation. And yet, as organisations begin trying to implement these ambitions, they keep running into the same obstacle.

Data.

Every authorisation decision is a data decision

When a system determines whether someone can access a dataset, approve a payment, deploy infrastructure, or retrieve sensitive records, it is evaluating attributes. Job family. Location. Risk posture. Entitlements. Context. System state. Sensitivity classification.

Those inputs rarely live in one place. They come from HR systems, ITSM tools, directories, governance platforms, application databases and, increasingly, real-time security telemetry. If those data sources are inconsistent, stale, or only partially trusted, the authorisation decision built on top of them can’t be relied on, and in practice it isn’t. It may look automated and it may be technically elegant, but it will not be trustworthy, and without that trust you can’t automate decisions.
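To make that concrete, here is a minimal sketch, in Python with entirely illustrative names, of an attribute-based decision that treats freshness as part of the input. An attribute that hasn’t been synchronised recently enough is treated the same as a wrong one, and the decision fails closed:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical attribute record: every value carries its source system and
# when it was last synchronised, because freshness is part of the decision.
@dataclass
class Attribute:
    value: str
    source: str            # e.g. "hr", "itsm", "directory"
    synced_at: datetime

MAX_STALENESS = timedelta(hours=24)   # assumption: tune per attribute

def is_fresh(attr: Attribute, now: datetime) -> bool:
    """An attribute older than the staleness budget cannot feed a decision."""
    return now - attr.synced_at <= MAX_STALENESS

def authorise(user_attrs: dict[str, Attribute], required: dict[str, str]) -> bool:
    """Fail closed: a missing, stale, or mismatched attribute means deny."""
    now = datetime.now(timezone.utc)
    for name, expected in required.items():
        attr = user_attrs.get(name)
        if attr is None or not is_fresh(attr, now) or attr.value != expected:
            return False
    return True
```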

Why great technology still produces bad outcomes

There are excellent technologies on the market designed to solve the enforcement side of this problem. Fine-grained authorisation engines such as OpenFGA, OPA, and Cedar-based policy models allow expressive, high-scale, real-time decisioning. Governance platforms such as Okta OIG, SailPoint, and One Identity Manager ingest enormous volumes of identity and entitlement data to model access, drive lifecycle controls, and support compliance. These are strong, mature technologies, but they all depend on the same thing: the quality of the data they consume and the ability to build decisions from that data.
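As a sketch of how thin the engine’s view really is, here is a query against OPA’s documented REST data API. The policy path (authz/allow) and the attribute names in the input are assumptions for illustration; the point is that the engine evaluates exactly what it is fed, nothing more:

```python
import requests

# Assumes a local OPA instance with a policy at package authz (illustrative).
OPA_URL = "http://localhost:8181/v1/data/authz/allow"

def check_access(user: str, action: str, resource: str, attributes: dict) -> bool:
    payload = {"input": {
        "user": user,
        "action": action,
        "resource": resource,
        # If these attributes are stale or wrong, the engine will still
        # return a confident answer built on them.
        "attributes": attributes,
    }}
    resp = requests.post(OPA_URL, json=payload, timeout=2)
    resp.raise_for_status()
    # OPA omits "result" when the policy path is undefined; default to deny.
    return resp.json().get("result", False) is True

# Example call (requires a running OPA with the assumed policy loaded):
allowed = check_access(
    "niall", "deploy", "prod-cluster",
    {"job_family": "platform-engineering", "location": "IE", "risk": "low"},
)
```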

An authorisation engine fed with incorrect attributes will produce incorrect decisions. A governance platform built on stale entitlements will give you a beautifully governed version of the wrong picture of a person, machine, or building. No amount of policy sophistication compensates for unreliable inputs.

Why the pressure is increasing now

The industry is shouting about no longer relying on static roles and annual certification cycles. Why is that? Breaches are identity-focused, and removing those long-standing entitlements reduces the blast radius. And yes, AI. Read more on that below.

The direction of travel is clear. No standing access. Just-in-time privilege. Continuous evaluation. Real-time response. AI systems acting within tightly scoped boundaries. Identity Mesh and Continuous Identity are the go-to buzzwords for making these decisions real.

All of these models assume that authorisation is dynamic and data-driven.

What good looks like when it works

Where does it work well? My favourite example is a digital streaming service. Access to content is not manually assigned show by show. A baseline experience as a free user is established, then content is layered dynamically based on subscription tier, region, age rating, individually purchased shows, and real-time context. The user sees a seamless interface, and underneath the system is making continuous authorisation decisions driven entirely by data. That is not happening with manual requests and approvals. Well, maybe it is. Mechanical Turks exist! A digital streaming service that served up resources based on authentication alone would struggle.
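A toy version of that layering might look like the sketch below. Every field name is hypothetical, not a real streaming API; what matters is that the visible catalogue is recomputed from data on every request:

```python
def entitled_content(user: dict, catalogue: list[dict]) -> list[dict]:
    """Recompute what a user can see on every request, purely from data."""
    visible = []
    for title in catalogue:
        # Layer 1: baseline free tier, subscription tier, or one-off purchase.
        tier_ok = (
            title["tier"] == "free"
            or title["tier"] == user.get("subscription")
            or title["id"] in user.get("purchases", ())
        )
        # Layer 2: regional licensing. Layer 3: age rating.
        if (tier_ok
                and user["region"] in title["regions"]
                and title["age_rating"] <= user["age_rating_limit"]):
            visible.append(title)
    return visible
```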

Enterprises want some or all of that behaviour for all the right reasons. It reduces admin overhead, it secures access to services, and it supports audit and all those compliance frameworks that need to be adhered to. They want roles and attributes to drive entitlement without constant human friction, and access to change and adapt as context changes. They want standing privilege removed wherever possible.

Where organisations get stuck

The focus, however, often lands too heavily on the identity platform itself. Can the IdP enforce this? Can it evaluate that? Can it model attributes in real time? Technically, the answer has been 'yes' for a long time. What no platform can do is invent trustworthy data. We keep building smarter engines and running them on contaminated fuel.

If the core identity and entitlement data feeding authorisation is inconsistent or unreliable, every downstream decision inherits that risk. Faster automation simply accelerates poor decision making.

Now, to the AI bit, because that’s where the eyes are.

This becomes even more critical as organisations introduce AI agents and expand Non-Human Identities across their environments.

AI raises the stakes

A common first step is to assign ownership. Every agent has a responsible person. Every service account maps back to a human. Every automated workflow has an accountable owner. That’s necessary, as it creates visibility and accountability, but it is only the first half-step.

Knowing who owns an AI agent or Non-Human Identity doesn’t define what that agent should be allowed to do. It simply tells you who to call when something goes wrong. The real control emerges when ownership itself becomes an authorisation input. If an organisation has reliable data about a person’s role, job family, location, entitlements, and risk profile, that same data can start to shape the access boundaries of the AI agents and service accounts they own. In effect, Non-Human Identities could inherit authorisation context from trusted human identity data.
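Here is a sketch of that inheritance idea, with a hypothetical scope mapping. The agent’s boundary is derived from trusted attributes of its human owner, and a change in the owner’s risk posture immediately narrows the non-human identity:

```python
# Hypothetical mapping from a human owner's job family to the maximum
# scopes any agent or service account they own may hold.
OWNER_SCOPE_MAP = {
    "finance": {"read:ledger", "submit:payment"},
    "platform-engineering": {"read:metrics", "deploy:staging"},
}

def agent_scopes(owner: dict) -> set[str]:
    """Derive an agent's access boundary from trusted owner attributes."""
    scopes = OWNER_SCOPE_MAP.get(owner["job_family"], set())
    # A risk signal on the human immediately narrows the non-human identity.
    if owner.get("risk") == "high":
        scopes = {s for s in scopes if s.startswith("read:")}
    return scopes
```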

This is not the only model for securing automation and AI, but it is a workable one. It could form part of your controls toward the new ISO 42001 AI management standard, and it brings us back to the point of this article.

Start with the attributes you should already trust

Before organisations can confidently unleash AI across their enterprise, they must be able to trust the authorisation data that constrains it.

The encouraging part is that building this foundation does not require modelling every entitlement on day one. The most successful programmes start with a small set of high-value attributes that should already be correct in any well-run business. Location. Office. Job family. Employee type. Human/Non-Human. Pick yours.
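A starting point can be as blunt as a consistency report across the systems that hold those attributes. In this sketch the source and attribute names are illustrative; any starter attribute that is missing somewhere, or that the sources disagree on, shouldn’t be driving authorisation yet:

```python
# Illustrative names throughout: the sources and attributes will differ
# per organisation, but the shape of the check does not.
STARTER_ATTRIBUTES = ["location", "office", "job_family",
                      "employee_type", "identity_class"]

def attribute_report(sources: dict[str, dict]) -> dict[str, bool]:
    """Per attribute: do all sources (e.g. HR, directory, ITSM) agree,
    with no gaps? Anything that fails shouldn't feed authorisation yet."""
    report = {}
    for attr in STARTER_ATTRIBUTES:
        values = {src.get(attr) for src in sources.values()}
        report[attr] = len(values) == 1 and None not in values
    return report

# Example: the directory disagrees with HR on job_family.
report = attribute_report({
    "hr":        {"location": "IE", "job_family": "finance"},
    "directory": {"location": "IE", "job_family": "accounting"},
})
# report["location"] is True; report["job_family"] is False (disagreement);
# report["office"] is False (missing from every source).
```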

Build maturity progressively

When those data points are owned, governed, and synchronised reliably into identity systems, they can immediately drive a significant portion of automated access and authorisation decisions. Policy enforcement where the policy actually means something. From there, build out, layering in more context and more attributes as data maturity improves. Even if you are only able to apply this to your humans, it frees up time to focus on wrangling the NHIs into shape.

The leadership reality

Technology has never been the blocker. What most organisations still lack is clean, connected, trusted authorisation data. So when leaders ask why modern authorisation feels hard, the answer is rarely tooling.

Authorisation is finally getting the attention it deserves, but it will only scale in environments where data is treated as a security asset. There are whole industries out there stealing identity data because of its value, so treat your identity data with the same regard for that value.

So pick your xBAC and challenge it based on your data. See if it will stand up before looking at why your IAM platform isn’t taking all the pain away.

If you fix the inputs, everything else becomes possible

Fix data, and automation becomes achievable.
Fix data, and zero standing privilege becomes a target in the distance rather than something in the 'too hard' basket. It can’t stay there any more.
Fix data, and AI can start to be reined in and guardrails applied.

It always comes back to data. It really always has.
