How to Use AI and LLMs in GRC Effectively & Securely
Use cases & strategies for harnessing AI and LLMs in GRC workflows
Continuing our conversation with Andrew Gunter and Jason Rohlf from Cential, we dug into specific applications of artificial intelligence (AI) and large language models (LLMs) in governance, risk, and compliance (GRC). We discussed effective and secure strategies for leveraging AI and LLMs in GRC workflows, including real-world use cases and actionable ways to harness the power of these advanced technologies.
From enhancing efficiency to prioritizing data security, here are current best practices your organization can use to move forward in the evolving GRC landscape.
Use cases of AI in GRC automation
Generating Policies
With a carefully crafted prompt, ChatGPT can generate a full-blown policy statement, say, around user access, through Onspring’s AI hub. Caveat: Should you blindly use this output as your official policy? Probably not, but this auto-complete feature gets you going in the right direction much faster, typically 60-70% of the way there, and it eliminates the blank-page problem.
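To make that concrete, here is a minimal sketch of what such a prompt might look like in code. It uses the public OpenAI Python client as a stand-in for whatever model sits behind your AI hub; the model name, prompt wording, and settings are illustrative assumptions, not Onspring’s actual configuration.

```python
# Minimal sketch: drafting a user-access policy statement with an LLM.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment;
# the model name and prompt wording are illustrative, not Onspring's configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Act as a GRC policy writer. Draft a user access management policy "
    "statement covering provisioning, least privilege, periodic access "
    "reviews, and timely revocation at termination. Use formal policy "
    "language and number each requirement."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; substitute whatever your program approves
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,  # keep the draft conservative and repeatable
)

draft_policy = response.choices[0].message.content
print(draft_policy)  # a first draft, not your official policy
```

Whatever tool you use, treat the result the same way: a fast first draft that still needs an owner to review and approve it.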
Legal Translation
Legal documents and detailed regulations are often verbose and challenging to interpret. AI can translate the language of laws and regulations into plain, understandable terms. By submitting the legal wording to the AI hub in Onspring, we get a business translation that conveys the regulation in layman’s terms. Remember: It’s important to validate and review the output to ensure accuracy and alignment with legal department requirements.
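The exact prompt matters here, because you want the translation to stay faithful to the underlying obligations. As a purely hypothetical starting point (not the prompt used in Onspring’s AI hub), a plain-language translation template might look like this:

```python
# Hypothetical prompt template for translating regulatory text into plain language.
# The wording below is an assumption, not the prompt used in Onspring's AI hub.
TRANSLATION_PROMPT = """You are assisting a compliance team.
Rewrite the following regulatory text in plain business language.
Keep every obligation, deadline, and penalty intact; do not add or remove requirements.
End with a short bullet list titled "What this means for us".

Regulatory text:
{regulation_text}
"""


def build_translation_prompt(regulation_text: str) -> str:
    """Fill the template with the legal wording to be translated."""
    return TRANSLATION_PROMPT.format(regulation_text=regulation_text)


# Example usage with a made-up snippet of regulatory language
print(build_translation_prompt(
    "A covered entity must implement technical policies and procedures that "
    "allow access to electronic protected health information only to those "
    "persons or software programs that have been granted access rights."
))
```

However you phrase it, keep the instruction to preserve obligations; plain language that drops a requirement is worse than the original legalese.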
Risk Statements
By leveraging a language model, we can request not only the risk statement but also additional insights, such as impact and likelihood ratings, risk descriptions, and relevant laws and regulations. With the right prompt, we can even ask for NIST controls to aid in risk management. Again, it’s important to regard this content as a starting point, reviewing and refining it to meet specific expectations. Whether for enterprise-level risk management or internal audits, the AI hub in Onspring provides a powerful resource to accelerate analysis and decision-making processes.
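If you want those extra fields to land cleanly in a GRC record, one approach is to ask the model for structured JSON and parse it before a human reviews it. The sketch below again uses the public OpenAI Python client as a stand-in; the field names, model, and prompt are assumptions for illustration.

```python
# Sketch: requesting a structured risk statement so the fields can be mapped into
# a GRC record before human review. The field names, model, and prompt are
# illustrative assumptions, not Onspring's implementation.
import json

from openai import OpenAI

client = OpenAI()

prompt = (
    "Write a risk statement for the risk of unauthorized third-party access "
    "to customer data. Respond only with JSON using these keys: "
    '"risk_statement", "risk_description", "impact_rating" (1 to 5), '
    '"likelihood_rating" (1 to 5), "relevant_regulations" (list), '
    '"suggested_nist_controls" (list of NIST SP 800-53 control IDs).'
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},  # ask for a JSON-only reply
    temperature=0.2,
)

risk_record = json.loads(response.choices[0].message.content)

# Everything below is a draft for a risk owner to review before it is saved.
for field, value in risk_record.items():
    print(f"{field}: {value}")
```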
Additional Considerations as You Get Started with ChatGPT in GRC
In our conversation with Cential, they were quick to point out several potential risks associated with the use of AI and LLMs in GRC.
Risk 1: AI hallucination
This is the phenomenon of AI generating text that is factually incorrect or misleading. For example, an LLM might generate a risk statement that is based on incorrect information or that makes unrealistic assumptions. It is important to be critical of the output of LLMs and to verify the accuracy of the information they provide.
Risk 2: Bias
LLMs are trained on large datasets of text, and these datasets can reflect the biases that exist in society. For example, an LLM might generate a risk statement that is biased against a particular group of people. It is important to be aware of the potential for bias in the output of LLMs and to take steps to mitigate it.
Risk 3: Data privacy and security concerns
LLMs can access and process large amounts of data, and it is important to ensure that this data is protected from unauthorized access. It is also important to have a clear policy in place for managing the use of LLMs in GRC. This policy should address issues such as data privacy, security, and bias.
Despite these risks, AI can be a powerful tool for GRC professionals if it is used safely and effectively. Here are some tips for using AI and LLMs in GRC:
Define the use case for an LLM before implementing it. This will help ensure that the model is used in a way that is appropriate and effective.
Have a policy and procedure in place for managing the use of LLMs. This policy should address issues such as data privacy, security, and bias.
Use a central hub where the interaction between the GRC solution and the AI model takes place. This hub can help control the flow of information between the two systems and mitigate the risks associated with using large language models (see the sketch after this list).
Be critical of the output of LLMs and verify the accuracy of the information they provide.
Be aware of the potential for bias in the output of LLMs and take steps to mitigate it.
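To illustrate the central-hub idea from the list above, here is a minimal, hypothetical wrapper that every request from the GRC solution to the model could pass through, so redaction and logging happen in one place. The redaction patterns, logging, and stand-in model function are assumptions for illustration, not how Onspring’s AI hub is built.

```python
# Hypothetical "central hub" wrapper: every request from the GRC solution to the
# LLM passes through one function that redacts obvious identifiers and logs the
# exchange. The patterns and logging here are illustrative, not Onspring's design.
import logging
import re
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_hub")

# Very rough redaction patterns; a real deployment would use a vetted DLP tool.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]


def redact(text: str) -> str:
    """Strip obvious personal identifiers before the prompt leaves the hub."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text


def ai_hub_request(prompt: str, call_model: Callable[[str], str]) -> str:
    """Single choke point between the GRC solution and the model."""
    safe_prompt = redact(prompt)
    log.info("Outbound prompt: %s", safe_prompt)    # audit trail of what was sent
    answer = call_model(safe_prompt)                # e.g. a function that calls your LLM API
    log.info("Inbound response: %s", answer[:200])  # keep a record for later review
    return answer


# Example usage with a stand-in model function
if __name__ == "__main__":
    def fake_model(prompt_text: str) -> str:
        return f"(model output for: {prompt_text})"

    print(ai_hub_request("Draft a risk statement; contact jane.doe@example.com", fake_model))
```

A single choke point like this also gives you an audit trail, which makes the policy and procedure requirements above far easier to enforce.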
With a few cautions, GRC professionals can use AI safely and effectively to improve their GRC processes and reduce risk.
Got questions? So did our webinar viewers. Read Cential’s answers to the hottest questions in “Q&A: Application of AI to GRC.”