Blog
8 Things I Heard About AI at the ACC Annual Meeting
The Association of Corporate Counsel (ACC) held its Annual Meeting in Nashville from October 6-9, bringing together thousands of CLOs and general counsel from across the country. Unsurprisingly, many sessions focused on AI and related themes, with panelists and attendees sharing their perceptions — good, bad, indifferent, and ponderous — of the rapidly evolving landscape.
As a former GC and AI enthusiast, I appreciate the full spectrum of comments, from eagerness to genuine concern. Here are some of the more intriguing statements I heard and my thoughts about them:
1. “If we get more efficient with AI, we’ll just be pressured to reduce headcount.”
This concern is legitimate, but the pressure to reduce overhead looms over legal teams regardless. Avoiding the technology that can make you more valuable to the business is not a winning strategy.
Instead, try to help the executive team understand that intelligent tech, as one panelist eloquently put it, “gives us the space to practice at the top of our license.” It empowers legal teams to do more creative, strategic work and focus on what really matters to the business.
2. “AI certainly can produce surprising results, so be mindful about the use case.”
At this stage, it’s safer to use AI to analyze trusted data for well-defined purposes, such as extracting critical terms and clauses, summarizing key data points, calculating dates and amounts, suggesting revisions and first drafts, etc. While AI can be a highly valuable curator of information, it is far less reliable for nuanced decision making based on unknown data sources, like evaluating whether a contract term is enforceable, assessing the merits of a claim, or ensuring compliance with applicable regulations.
3. “Part of the ROI for AI is reduced risk, but how can we quantify that?”
Efficiency gains and cost reductions are loud, but risk reduction is silent. AI can, for example, spot the existence or absence of critical language in documents to help drive compliance with regulations and strategic initiatives.
To demonstrate that value, point to a negative experience the company has had before, or bring up a competitor that suffered unnecessary litigation, a data breach, or other exposure from an avoidable vulnerability. While you’ll never be able to prove a negative, you may be able to persuade with cautionary tales.
4. “Lawyers need more training in prompt engineering.”
I saw fair attendance and engagement at a session offering basic guidance on how to write effective prompts for chatbots. With all due respect (and while prompt engineering is generally a worthwhile skill), we’re now at a point where lawyers shouldn’t need to refine their prompting much to get the results they seek.
From fine-tuned LLMs to legal-specific guardrails to automatic prompt optimization, there’s far more AI sophistication and safety available for lawyers than generic AI chatbots can provide. In-house counsel should keep demanding better.
5. “Legal teams want to be a ‘Department of Yes.’”
A session about how legal teams can better engage with the rest of their businesses brought in a large audience (and yielded lots of laughs with anecdotes about the lengths the panelists had gone to make friends with other business teams).
This is the right focus at the right time, and while the session wasn’t expressly focused on AI, it hinted at key AI benefits. When in-house counsel has ready access to all the data they need about all their business relationships and can seamlessly share that data with other teams – even directly in the systems they use every day – that’s a big value-add.
6. “The comparative standard is not AI versus perfection.”
This was well said by a knowledgeable panelist, and it’s a good rule of thumb. When deciding whether to use AI for a task, compare its output not to a fictional ideal but to the average work product that’s readily available to you. You can still apply a human layer of review and refinement for the work product that matters most (in fact, you may be legally required to). In short, use AI to accelerate and enhance human output, not obviate it.
7. “My CEO threatened to replace me with GPT!”
A GC at my table during a session told me the tale of her CEO asking her to give GPT three questions (yes, I know this sounds like a Monty Python sketch), eager to prove to her that it would deliver as well as she could.
Her CEO was frustrated when two of the answers GPT returned were only “mostly” as expected, and the third was well off the mark. (Again, note the difference between analyzing trusted data and making nuanced general judgments based on unknown data – see #2 above.) Perhaps more to the point, though, he needed his GC to tell him that. AI is best applied to empower and expedite human decision-making, not replace it.
8. “I work on legal AI because I’m a recovering attorney.”
I heard this old chestnut at a presentation, and I’m including it here because, to my surprise, it still drew a laugh. No matter how clichéd the “recovering attorney” label gets, it’s apparently eternally relatable: lawyers find major parts of what they do tedious and frustrating, and many (like me) are drawn to tools that can make their daily lives easier and more productive. Respect to all those in-house counsel doing the work every day, and I hope to hear your thoughts again next year.
Want to learn more about how AI revolutionizes contract management for in-house counsel and procurement teams? Download our whitepaper today.