Agentic AI: What it is, and why businesses need to manage the risk

Nigel Pemberton
Head of Transformation
20/4/2026


Agentic AI is quickly moving from concept to practical application. For many New Zealand businesses, the appeal is obvious: systems that can act with greater autonomy, streamline tasks, support decision making, and improve speed across business processes.


But as with any emerging technology, the opportunity needs to be weighed carefully against the risk.


What is Agentic AI?


Agentic AI is a form of artificial intelligence that goes beyond simply responding to prompts. Rather than only providing information or suggestions, it can take action, make decisions, and carry out tasks with less human input. That might include gathering information from multiple sources, recommending a course of action, triggering a workflow, sending a response, or completing part of a task automatically.
The opportunity is clear: it can save time, reduce manual effort, and help teams move faster. But it also introduces new risks that need to be managed carefully.

How Agentic AI changes the risk equation


Traditional software tends to follow defined rules and stay within clear boundaries. Agentic AI is different because it can act more independently. In practice, that means it becomes a new kind of operator inside the business, one that can move quickly and at scale, but may not always be subject to the same level of oversight as a person or system.

This is where the risk becomes real. Even if an AI agent cannot directly access critical systems, it can still create significant exposure by influencing decisions, giving incorrect advice, or operating beyond what it was meant to do.

Why businesses should tread carefully


Recent reporting has raised a number of concerns about how these tools can affect business operations and security.
Examples have included:
• Guidance generated by the tool contributing to the exposure of sensitive internal information
• Incorrect technical advice leading to follow-on security issues
• Unauthorised transactions and accidental data deletion in environments without the right safeguards in place

The details differ, but the pattern is the same. The risk is not just about malicious use or a direct breach. It is also about limited oversight, weak controls, and too much reliance on outputs that may be wrong or unsuitable in context.


For business leaders, the message is straightforward: the issue is not just what these tools can do directly, but how easily they can introduce gaps in control.

The opportunity is real, but so is the need for governance


This does not mean businesses should avoid these tools altogether. It means their use needs to be supported by clear controls. More organisations are beginning to treat them as part of the operational environment, with stronger oversight of what they can do, what they can access, and how their activity is monitored. That includes clear accountability and the ability to step in quickly if something goes wrong. If a tool can influence a business process, it should be governed like any other business-critical capability.

What businesses should focus on


As businesses explore agentic AI, there are several practical areas that deserve attention.
1. Establish clear ownership. Someone in the business should be accountable for each AI agent and the decisions it influences.
2. Introduce decision guardrails. Define which actions an agent can take on its own and which require human approval.
3. Treat AI agents as controlled identities. Give them their own credentials, least-privilege access, and an auditable record of activity.
4. Build safeguards into workflows. Add checks at the points where an agent's output feeds into a business process.
5. Make intervention possible. Ensure activity is monitored and that an agent can be paused or stopped quickly if something goes wrong.
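For technical teams, the guardrail and audit ideas above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not a reference to any specific product: the action names, the risk policy, and the approval step are all assumptions made for the example.

```python
# Minimal sketch of a decision guardrail for an AI agent (illustrative only).
# High-risk actions are held until a human approves them; routine actions
# proceed automatically, but every attempt is logged for audit.

HIGH_RISK_ACTIONS = {"delete_data", "send_payment", "change_permissions"}  # assumed policy

audit_log = []  # in practice, a durable, tamper-evident log

def guarded_execute(agent_id, action, execute, approved_by=None):
    """Run an agent action only if policy allows it; log the outcome either way."""
    if action in HIGH_RISK_ACTIONS and approved_by is None:
        audit_log.append((agent_id, action, "blocked: awaiting human approval"))
        return None  # the agent cannot proceed on its own
    audit_log.append((agent_id, action, f"executed (approved_by={approved_by})"))
    return execute()

# A routine action runs; a high-risk one is held until someone signs off.
guarded_execute("agent-1", "draft_reply", lambda: "reply drafted")
guarded_execute("agent-1", "delete_data", lambda: "data deleted")  # blocked
guarded_execute("agent-1", "delete_data", lambda: "data deleted", approved_by="ops-lead")
```

The point of the sketch is the shape of the control, not the code itself: the agent acts through a gate the business owns, the gate enforces who may approve what, and the log makes intervention and review possible after the fact.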


One of the biggest mistakes businesses can make is treating this as purely a technology issue. It is a leadership issue as well, and good outcomes depend not just on the tools in place, but on clear governance, practical controls, and informed decision making at the leadership level.

If your business is starting to explore agentic AI and wants a practical view of the risks, controls, and governance considerations involved, get in touch with the IT Partners team.