Agentic AI in Legaltech: Proceed with Supervision!

By Ken Crutchfield | Published in Analyses & Trends | July 25th, 2025

Semi-autonomous agents can transform work if leaders maintain oversight

OpenAI has just announced the release of ChatGPT Agent, which is worth some reflection. Agentic AI is now legitimized, and there is a standard to measure against. ChatGPT can now access your calendar or buy the ingredients for the meal you want to cook tonight. It can communicate with the outside world and proactively perform tasks on your behalf.

What is Agentic AI?

A marketer would be remiss if they didn’t consider adding “Agentic” to the description of any new AI feature. Given all of the marketing hype, there is some confusion around Agentic AI.

Agentic AI focuses on creating autonomous agents that can reason and orchestrate tasks. These agents can make decisions and instruct other systems. Imagine telling Siri on your iPhone to Venmo money to a co-worker for a company happy hour. Now imagine a system that notifies you that it will be submitting a court filing on your firm's behalf. This proactive example highlights both the tremendous potential and the possible dangers of autonomous agents.

Semi-Autonomous Agents

The term "autonomous agents" should raise some concern; I believe "semi-autonomous agents" is the better term. Do we really want fully autonomous agents that learn, interact independently, and find their own ways to accomplish tasks?

We live in a world full of cybersecurity risks. Bad actors will find ways to exploit agents, and even well-intentioned systems can mishandle a task without proper guardrails.

New technologies are often solutions looking for problems, and Agentic AI is no different. If Generative AI is a hammer looking for nails, then Agentic AI is the foreman overseeing the crew. If the foreman only has workers with hammers, the crew will not be able to install a sink. Construction management must ensure the crew and the foreman are appropriately equipped and trained.

Legal professionals will want to thoughtfully equip their agent technology with controlled access to the right services. Agents must be supervised, and training must be required for those using or benefiting from agents. Legal professionals will also want to expand the scope of AI Governance to include the oversight of agents.

OpenAI has figured this out. As the user, you must grant permission before ChatGPT Agent can access systems and perform work on your behalf. ChatGPT users will assume the risks, rewards, and responsibilities associated with using the service. Connectors in ChatGPT are currently in beta testing and will connect to third-party applications. (I wonder when ChatGPT Agent will connect to legal research services?)

In the meantime, here are a few practical considerations for legal innovators.

Provide Business Oversight to Technologists and Agents

Legal innovators at tech companies, law firms, and corporations must define the desired outcomes and guide the process. Agentic AI will require supervision, and human review of Generative AI output is essential. Stating the obvious may be necessary, especially with agents: controls, human review, and human monitoring must be part of the design and the requirements for any project. Leadership should not leave this to the IT department alone.
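
To make that requirement concrete, here is a minimal sketch in Python, assuming a hypothetical review policy and a hypothetical reviewer callback (none of these names come from any particular product): the agent proposes an action, and nothing executes until a human signs off.

    from dataclasses import dataclass

    @dataclass
    class ProposedAction:
        """An action an agent wants to take, captured before anything executes."""
        description: str   # e.g., "Submit court filing for case 24-CV-1234"
        risk_level: str    # "low", "medium", or "high"

    def requires_human_approval(action: ProposedAction) -> bool:
        # Hypothetical policy: anything above "low" risk must be reviewed by a person.
        return action.risk_level != "low"

    def execute_with_oversight(action: ProposedAction, reviewer_approves) -> str:
        """Run an agent action only after the required human sign-off.

        `reviewer_approves` is a callback that asks a human reviewer and returns
        True or False; a real system would also log who approved what and when.
        """
        if requires_human_approval(action) and not reviewer_approves(action):
            return f"BLOCKED: {action.description} (awaiting human review)"
        return f"EXECUTED: {action.description}"

    if __name__ == "__main__":
        filing = ProposedAction("Submit court filing for case 24-CV-1234", "high")
        # A stand-in reviewer that withholds approval by default.
        print(execute_with_oversight(filing, reviewer_approves=lambda a: False))

In a production system the approval decision and the reviewer's identity would also be logged, so that governance teams can audit agent behavior after the fact.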

Leverage What Already Works, Focus on Improvement

As a guideline, legal innovators should ensure that technologists consider the data and processes that are already in place and working effectively. Perhaps the system does need to be replaced, but the focus should be on improvement rather than on finding new ways to use cutting-edge AI technology. I have heard of machine learning engineers getting excited about classifying data with 90% accuracy when the metadata in existing systems was 99% accurate and human-reviewed. Too often, engineers are unaware of existing processes and systems.

Think Global, Act Local

There was debate over limiting state regulation of AI in the One Big Beautiful Bill Act, but those limits did not make it into the final public law. Agentic AI will eventually attract government regulation.

Organizations should not wait for government regulation. They will want to design and plan their solutions thoughtfully. Legal innovators should:

  • Require human oversight for all agent-driven workflows.
  • Expand AI governance to include agent behavior and permissions.
  • Define clear processes, roles, and escalation paths.
  • Ensure staff training and accountability around agent usage.
  • Focus on outcomes and consider existing processes and data that may already work.

In Summary

Large Language Models (LLMs) are transformative, but they also have inherent limitations. They hallucinate. They struggle with math because they were not designed to solve math problems. Generative AI can fail at simple tasks such as following spelling instructions or reciting the alphabet, and it can produce images of humans with six fingers.

Agentic AI will also be transformative. Supervision and thoughtful design will ensure that agentic solutions turn to a calculator for math rather than asking an LLM to solve the problem. Legal innovators must consider and leverage existing processes that already work well, such as those built on Robotic Process Automation (RPA) tools or well-documented rules.
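
As an illustration of that calculator-over-LLM principle, here is a minimal sketch, again in Python with hypothetical names: the orchestrator routes arithmetic to a deterministic calculator tool and leaves language work to the model (the model call itself is stubbed out).

    import ast
    import operator

    # Deterministic calculator tool: evaluates simple arithmetic exactly,
    # instead of asking a language model to approximate the answer.
    _OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
            ast.Mult: operator.mul, ast.Div: operator.truediv}

    def calculator(expression: str) -> float:
        def _eval(node):
            if isinstance(node, ast.Constant):
                return node.value
            if isinstance(node, ast.BinOp):
                return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
            raise ValueError("unsupported expression")
        return _eval(ast.parse(expression, mode="eval").body)

    def route_task(task: str, expression: str = "") -> str:
        # Hypothetical orchestrator: send math to the calculator tool and leave
        # language work to the LLM (the model call is omitted here).
        if task == "math":
            return f"calculator says: {calculator(expression)}"
        return "LLM handles the language task"

    if __name__ == "__main__":
        print(route_task("math", "1250 * 12 + 480"))  # exact arithmetic via the tool
        print(route_task("summarize"))                # language work stays with the LLM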

OpenAI’s latest release of ChatGPT illustrates the rapid pace of technological change. In a fast-changing environment, we must not lose sight of the fact that technology must solve problems and that solutions must be designed with consideration for people, processes, and supervision to ensure success.

The final editing of this article included the use of AI.


Ken Crutchfield, Founder & CEO, Spring Forward Consulting