
Generative AI & Your Contracts: A Conversation with IntelAgree’s General Counsel, Lee Rone

October 12, 2023 | 7 min read

The legal sphere is buzzing with the promise of Generative AI, and for good reason. It's not just about cost savings, enhanced work quality, and scalability — it's about a whole new way of working. In fact, according to Salesforce, more than two out of three companies (68%) believe Generative AI will help them better serve their customers.

Here at IntelAgree, we're equally excited. With AI at the core of our platform, we see integrating Generative AI as a natural next step to help legal teams work smarter, faster, and with more precision. That's why we're thrilled to share that Generative AI will be introduced to the IntelAgree platform in late 2023 — and in the meantime, we're launching this new blog series to answer your burning questions and demonstrate how it can augment and improve your current CLM processes.

In our first post, we'll be featuring insights from Lee Rone, IntelAgree's General Counsel. We'll delve into the impact of Generative AI on CLM and legal roles, discuss strategies for adoption, and explain how Generative AI will enable IntelAgree users to extract even more value from our platform:

 

IntelAgree: What is Generative AI, and how does it work in the context of contract management?

Lee Rone: At IntelAgree, we view Generative AI as a tool that enables individuals authoring contracts to do so more rapidly and consistently. It follows their contract playbook, adhering to established guidelines on how they want things worded under certain circumstances. We're leveraging Generative AI to craft language that aligns with that playbook and addresses the specific situation at hand. This technology has the ability to read contextually, understand what it's analyzing, and follow instructions from a playbook. It then generates text as if it had been written by an experienced attorney.

Right now, I think lawyers are eagerly seeking information about Generative AI because they want to know how it can assist them, what risks it might pose, and how they should inform their teams about it. Some concerns include the potential for Generative AI to generate inaccurate results, infringe intellectual property, or jeopardize attorney-client privilege. And these risks aren't just relevant to legal tech; all lawyers care about issues like infringement, breach of confidentiality, data security, and loss of privilege.

 

IntelAgree: How did the idea of adding Generative AI to IntelAgree's platform start, and what inspired it?

Lee Rone: The concept of adding a playbook feature to IntelAgree has been on our minds for a while. And since AI is already built into the foundation of our platform, we saw Generative AI as a natural progression for IntelAgree. 

We understand that every legal team authoring or negotiating common types of contracts has a unique approach to negotiations: they have certain standards and are always on the lookout for specific elements and risks. With the advent of Generative AI, we saw an opportunity to bring this playbook concept to life.

Authoring contracts is an arduous process, so we thought, why not proactively make edits for them? Or at least suggest what we think they should do? Our aim isn't to replace lawyers or introduce any risk; we're not telling them to step back and let the machine take over. Instead, we're suggesting what we believe should happen based on their playbook. Our AI will give you options for your top few choices in a particular situation — as well as the rationale — but we always defer to the lawyer's expertise on whether they want to accept the edits or tweak the playbook to make future edits more accurate.

 

IntelAgree: Can you explain what Generative AI means for the legal profession in simple terms? 

Lee Rone: The easiest way to describe Generative AI's impact on the legal profession is through two words: opportunity and risk. The opportunity lies in automation, especially for straightforward agreements like NDAs. This doesn't mean replacing lawyers; rather, it offers a chance to save time and reduce costs. Legal departments, like other teams in a company, are often under pressure to cut costs, increase productivity, and scale for growth. Generative AI presents a viable solution.

Another thing to consider is that Generative AI isn't only about producing more output; it's about improving quality, too. Generative AI might help paralegals and attorneys maintain their current pace while increasing the quality of their work and reducing risk. This is particularly important in areas like compliance, where identifying key issues currently requires running machine learning models or reviewing documents manually and then vetting the results. Generative AI could simplify this process by automatically identifying these issues, enhancing the quality of results and reducing the time spent identifying compliance risks.

Plus, many GCs and CLOs at larger companies are thinking about scalability. Legal teams are often understaffed, and Generative AI can help these small but effective teams keep pace with increasing workloads. As businesses accelerate their marketing, sales, and other operations, the legal workload inevitably increases. If you're not prepared to use Generative AI to support your role/team, the business may find someone who is. 

 

IntelAgree: How would you describe IntelAgree’s Generative AI differentiators?

Lee Rone: IntelAgree's Generative AI stands out in two significant ways. Firstly, we will use the same data for our Generative AI as we do for our platform’s proprietary machine learning models. This means that we are able to input higher-quality data into Generative AI models and, therefore, get higher-quality outputs from them, leading to improved outcomes for our customers.

Secondly, our Generative AI results are tailored to individual client needs. For instance, when it comes to auto-redlining, it's up to the client to specify exactly how they want the model to function. Although our Generative AI can be used as a general-purpose tool, the results will be substantially improved if you assist us in customizing it by providing data, instructions, context, additional configuration, and so on.

 

IntelAgree: Can you elaborate on how IntelAgree's Generative AI explains its rationale for outputs?

Lee Rone: We understand that Generative AI is new and exciting, but also a bit daunting, especially for risk-averse professionals like lawyers. They're willing to test the waters, but they want to see how our AI arrives at its conclusions before fully diving in. 

That's why "showing the AI's work" is crucial to our playbook concept. This concept allows a senior lawyer to lay out their strategy in plain English, without the need for coding or complex dropdown menus. They can specify how certain situations should be handled, and the Generative AI can contextually understand and implement these instructions.

When the AI shows its work, it provides an opportunity for the lawyer or drafter to evaluate the process that Generative AI used to arrive at its results, and if desired, refine their playbook further, adding nuances based on the AI's suggestions. So, it's all about creating a feedback loop. The Generative AI learns from the playbook, makes suggestions, shows its work, and the user refines the instructions based on the AI's output. 
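
To make that feedback loop a bit more concrete, here is a minimal, hypothetical sketch of the pattern Lee describes: a plain-English playbook rule and a clause are combined into a prompt, and the model is asked for a suggested edit plus the rationale behind it. This is not IntelAgree's implementation; the playbook rule, clause, model name, and helper function are illustrative assumptions, and the example uses the public OpenAI Python client only because it is widely recognized.

```python
# Illustrative sketch only -- not IntelAgree's implementation.
# Assumes the public OpenAI Python client (openai>=1.0); the playbook rule,
# clause, and model name below are hypothetical examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PLAYBOOK_RULE = (
    "Limitation of liability: cap our total liability at 12 months of fees "
    "and always exclude indirect and consequential damages."
)

clause = (
    "Vendor's aggregate liability under this Agreement shall not exceed "
    "the total fees paid in the preceding 24 months."
)

def suggest_redline(rule: str, clause_text: str) -> str:
    """Ask the model for a suggested edit plus the rationale behind it."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a contract-drafting assistant. Follow the playbook "
                    "rule, propose revised language, and explain your reasoning "
                    "so a lawyer can review it."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Playbook rule:\n{rule}\n\n"
                    f"Clause under review:\n{clause_text}\n\n"
                    "Return: (1) suggested redline, (2) rationale."
                ),
            },
        ],
    )
    return response.choices[0].message.content

print(suggest_redline(PLAYBOOK_RULE, clause))
```

Returning the rationale alongside the edit is the point of the feedback loop: the lawyer can accept the suggestion, reject it, or tighten the playbook rule so future suggestions are more accurate.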

 

IntelAgree: What strategies should general counsel use to ensure AI-generated contracts meet their organization's legal guidelines and policies?

Lee Rone: The first strategy is a bit old-school — ensure that you have all your guides and policies in place. This includes having your playbooks ready, whether they're in Word format or otherwise. If you don't have a playbook for a particular agreement or contract type, or if you're not evaluating your playbook in the context of whether the AI is interpreting it correctly, then you're missing crucial steps. So, step one is to round up your playbooks, make sure they're updated and comprehensive, and understand what they entail.

The second strategy involves using Generative AI to evaluate your playbooks. Going back to IntelAgree's ability to 'show its work,' this allows you to prompt the AI to ask the questions that a lawyer or paralegal would typically ask. By doing this, you can assess whether the AI is asking the right questions based on your playbook.

However, it's important to remember that human involvement remains essential. A human needs to design the initial prompt and review the results. If the AI's responses don't align with your expectations, you might need to adjust the prompt. This iterative process helps ensure that your Generative AI is correctly interpreting and applying your playbooks.
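
As a rough illustration of that iterative process, the sketch below (again hypothetical, reusing the same public OpenAI client as the earlier example) asks the model to list the questions a reviewer should raise for a given playbook, shows them to a human, and lets the operator revise the prompt and try again until the output matches expectations. The playbook text, prompt wording, and model name are assumptions made for illustration.

```python
# Illustrative sketch of the "design a prompt, review, adjust" loop.
# Uses the public OpenAI Python client; all prompts and names are hypothetical.
from openai import OpenAI

client = OpenAI()

playbook = (
    "NDA playbook: mutual confidentiality obligations, 3-year term, "
    "no residual-knowledge clause, governing law must be Florida."
)

prompt = (
    "Given this playbook, list the questions a paralegal should ask "
    f"when reviewing an inbound NDA:\n{playbook}"
)

while True:
    reply = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

    print(reply)
    answer = input("Do these match the questions your team would ask? (y/n) ")
    if answer.strip().lower() == "y":
        break  # the prompt interprets the playbook the way you expect

    # Otherwise a human refines the instructions and the loop runs again.
    prompt = input("Revise the prompt, then press Enter to retry:\n")
```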

 

IntelAgree: What legal and compliance issues should general counsel consider when adding Generative AI to their contract management processes?

Lee Rone: When it comes to adding Generative AI to contract management processes, there are a couple of key areas that legal teams need to focus on.

First and foremost is information security. This is a broad area, but the primary concerns are about data: where is it stored and who has access to it? We've put a lot of thought into these questions at IntelAgree, and we're confident in the security features of the Generative AI our platform uses. However, it's crucial to have these discussions because they're an essential part of the due diligence process.

Secondly, privacy issues come into play. While they're related to information security, they're not exactly the same. Information security generally pertains to hardware and technology protection at an organizational level; on the other hand, privacy typically involves individual rights or protections and tends to be more document-oriented. It focuses on whether data is being protected in accordance with the regulatory standards applicable to a particular business and/or person.

In a nutshell, when integrating Generative AI into their contract management processes, general counsel should prioritize understanding the tool's information security features and ensuring that its use aligns with privacy regulations.

 

IntelAgree: How do you envision Generative AI changing the skills and roles of legal professionals in the future, and how can general counsel prepare for this shift?

Lee Rone: You know, it's fascinating how quickly jobs are evolving due to AI. Take prompt engineers, for example. Just a couple of years ago, this role was virtually unheard of outside of big tech companies. Now, it's not uncommon to see prompt engineer positions popping up on job boards, even offering six-figure salaries right off the bat.

And this change isn't just confined to the tech industry. In the legal field, lawyers will need to get comfortable working with prompt engineers to effectively leverage AI. One of the most common questions we get asked is, "Is AI going to take my job?" The answer is no, AI isn't going to replace lawyers — but those who know how to work with AI will gain an edge over their peers. This is something we emphasize at IntelAgree; AI is a tool to support and enhance your work, not replace it.

Think back to when the California privacy regulation, CPRA, was passed to amend CCPA, perhaps the most well-known US state privacy law. When it took effect in 2023, among a host of new requirements, every company subject to CCPA/CPRA had to consider notifying their vendors about new compliance obligations. If we'd had Generative AI at that time, it could have identified all the contracts that needed updating and sent out compliance notices. Generative AI could have drafted these notices for each vendor, speeding up the process and boosting efficiency.
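
For readers who want to picture what that might look like in practice, here is a small hypothetical sketch of the batch workflow Lee describes: filter contract records for vendors affected by the change, then have a model draft a compliance notice for each. The contract records, filter field, and model name are invented for illustration; this is not a description of an IntelAgree feature.

```python
# Illustrative sketch of the CPRA-notice scenario described above.
# The contract records, filter field, and model name are hypothetical.
from openai import OpenAI

client = OpenAI()

# Hypothetical contract metadata, e.g. exported from a CLM system.
contracts = [
    {"vendor": "Acme Analytics", "processes_personal_info": True},
    {"vendor": "Basic Paper Co.", "processes_personal_info": False},
]

for contract in contracts:
    if not contract["processes_personal_info"]:
        continue  # only vendors handling personal information need a notice

    notice = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[{
            "role": "user",
            "content": (
                f"Draft a short notice to {contract['vendor']} explaining their "
                "new compliance obligations under the CPRA amendments to the "
                "CCPA, and asking them to confirm compliance in writing."
            ),
        }],
    ).choices[0].message.content

    print(f"--- Notice for {contract['vendor']} ---\n{notice}\n")
```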

 

IntelAgree: How can general counsel prepare their teams and organizations for the integration of Generative AI into their legal workflows?

Lee Rone: The first step is to get comfortable with prompt engineering. It might sound like a new, complex concept, but it really isn't. It's about clear communication and writing, something that lawyers and other legal professionals are already good at. They're used to giving precise instructions and articulating complex ideas in simple words. These skills are exactly what you need in prompt engineering. Plus, working with prompt engineers can be a great opportunity for both parties. Engineers can learn how to express ideas clearly and concisely from lawyers, while lawyers can gain a better understanding of the technology they're dealing with.

The second step is education. General counsel should strive to understand as much as possible about Generative AI and prompt engineering. They can attend seminars, read up on the subject, or even take in-depth courses. This will help them understand the risks and benefits, and how to make the most of this technology.

 

Stay Tuned for More

In an era where Generative AI is transforming the legal industry and redefining efficiency, our conversation with Lee Rone, IntelAgree's General Counsel, offers an exciting preview of what's to come. And this is just the beginning.

Subscribe to the IntelAgree blog and stay tuned for the rest of the series, where you'll gain invaluable insights into security, learn how to craft effective prompts, and receive actionable advice to fully harness the power of Generative AI.