Use of AI in GRC: Onspring Featured on CyberWire Daily
Curious about how to use AI in your GRC processes? Learn from Ryan Lougheed, Director of Product Management at Onspring, in his interview on this episode of CyberWire Daily. CyberWire Daily provides daily cybersecurity news and industry analysis, hosted by Dave Bittner. In this episode, Onspring’s Lougheed offers an in-depth look at how AI technologies are transforming GRC processes.
We know GRC practices are designed to help businesses operate smoothly, ethically, and legally. Governance, of course, allows leadership to direct and control the company effectively; risk management refers to an organization’s ability to anticipate and manage potential issues; and compliance dictates how a business follows laws and ever-changing regulations. If your business deals with regulatory concerns, compliance is obviously a top priority. Mature GRC practices should now include managing the risk associated with using AI in your business.
There are fundamental principles for using AI safely and efficiently. Onspring breaks GenAI risk management down in this article about the “Four (4) Best Practices for Managing Risk.” Once you have worked through key aspects such as transparency, risk assessment and management, and data privacy and security, you can take the next steps toward implementing AI for your business.
Today, many companies are looking for ways to implement AI within a GRC platform, framework, or tool for business process automation. Currently, AI in GRC is largely about asking LLMs (large language models) such as ChatGPT to craft policies or summarize key regulatory changes as a first step. AI-driven systems promise to continuously monitor transactions, communications, and activities to detect compliance breaches in real time.
The process of managing documentation and generating compliance reports can be labor-intensive and prone to errors, so many are also looking to AI to simplify and automate report creation so that human effort can be focused on evaluating these reports and making decisions instead.
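To make this concrete, here is a minimal sketch of that first step: asking an LLM to summarize a regulatory change into a draft compliance note for a human to review. It assumes the OpenAI Python SDK and an API key in your environment; the model name, prompt wording, and sample regulatory text are illustrative, not Onspring’s implementation.

```python
# Minimal sketch: ask an LLM to summarize a regulatory change into a draft
# compliance note. Assumes the OpenAI Python SDK (v1.x) is installed and
# OPENAI_API_KEY is set; model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

regulatory_update = """
The regulator now requires annual penetration testing for all systems
that store customer payment data, effective January 1.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works here
    messages=[
        {"role": "system",
         "content": "You are a compliance analyst. Summarize regulatory "
                    "changes and list the actions our organization must take."},
        {"role": "user", "content": regulatory_update},
    ],
)

draft_note = response.choices[0].message.content
print(draft_note)  # a human reviewer evaluates the draft before it is used
```

The point of the sketch is the workflow, not the model: the LLM produces a draft, and human effort shifts to reviewing and deciding.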
In this podcast, Onspring’s Director of Product Management, Ryan Lougheed, breaks down the use of AI in GRC this way, “At Onspring, we use a crawl, walk, run analogy to assess the steps of maturity in using AI.” Here’s how that breaks down.
Crawling with AI for GRC
Step one, or “crawling,” establishes AI in a general-purpose manner. That could look like using an LLM, such as ChatGPT, Google’s Gemini, or Microsoft’s Copilot, throughout or as part of an app to capitalize on the LLM’s benefits. For example, an organization may lean on AI to create internal compliance policies. AI-driven compliance tools can track regulatory changes and then compare records against these regulatory guidelines, flagging any discrepancies instantly. These are low-risk, low-reward elements of your GRC program and a great place to start. You will gain some efficiency, but that’s about it. You won’t put any proprietary data or customer information at risk.
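Even before an LLM enters the picture, the “compare records against guidelines and flag discrepancies” idea can be as simple as a gap check. The sketch below is illustrative only; the control names and dates are hypothetical placeholders for your own records.

```python
# Illustrative gap check: flag controls a regulation expects but for which
# no internal evidence has been recorded. Control names are hypothetical.
required_controls = {
    "annual_penetration_test",
    "mfa_for_admin_accounts",
    "quarterly_access_review",
}

internal_records = {
    "mfa_for_admin_accounts": "2024-03-01",   # date evidence was last recorded
    "quarterly_access_review": "2024-02-15",
}

missing = required_controls.difference(internal_records)
for control in sorted(missing):
    print(f"Discrepancy: no evidence recorded for '{control}'")
```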
Walking with AI for GRC
The next phase involves combining data retrieval with an LLM to provide more context and predictive analytics. For instance, this might mean combining live external data from websites, such as stock prices or weather, with data from your internal tools, such as HR or CRM systems. This allows you to analyze data and give it useful context that can answer business questions and create business-specific reports, evaluations, and solutions. It can provide trending predictive analytics to help assess future risk, among other useful insights for your business.
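A simple way to picture the “walking” phase is retrieval first, model second: gather internal and external data, then hand it to the LLM as grounding context. In this sketch the HR/CRM figures and the external feed are placeholders; swap in your real sources and your LLM of choice.

```python
# Sketch of the "walking" phase: retrieve internal and external data, then
# build a grounded prompt for an LLM. All data sources here are placeholders.
import json
import urllib.request

def fetch_external_signal(url: str) -> dict:
    """Pull a live external data point (e.g., a market or weather feed)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Internal context pulled from your own tools (HR, CRM, GRC records).
internal_context = {
    "open_vendor_risks": 4,
    "overdue_policy_reviews": 2,
    "headcount_change_last_quarter": "-5%",
}

# In practice this would come from fetch_external_signal("<your feed URL>").
external_signal = {"sector_volatility_index": 31.2}

# The retrieved data becomes grounding context for the model's answer.
prompt = (
    "Using the data below, summarize our current risk posture and note any "
    "trends a risk manager should review.\n\n"
    f"Internal data: {json.dumps(internal_context)}\n"
    f"External data: {json.dumps(external_signal)}"
)
print(prompt)  # send this to your LLM; a human reviews the resulting output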
As with all phases of implementing AI-driven solutions, it’s critical to apply human insight and oversight to both the input and the output. These solutions won’t be reliable in an AI vacuum.
Running with AI for GRC
Advanced AI users are running with it. When you are running, you are able to constantly fine-tune your LLM by embedding organizational knowledge within it. This is going to give you the most powerful results, but is also where the greatest risk lies.
At this level, you are asking your models to use proprietary materials to yield incredibly detailed information based on your organization’s data, in your specific context rather than a more general “everybody” context. This allows you to be more predictive, rather than reactive, in your use of AI.
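Much of the “running” work is preparing that organizational knowledge for the model. The sketch below shows one common shape of a fine-tuning dataset: the JSONL chat format used by OpenAI’s fine-tuning API (other providers differ), with illustrative Q&A pairs standing in for proprietary policy content.

```python
# Sketch: turning organizational knowledge into a fine-tuning dataset.
# The JSONL chat format matches OpenAI's fine-tuning API; the Q&A pairs are
# illustrative stand-ins for proprietary policy content.
import json

examples = [
    {
        "question": "What is our data retention period for customer records?",
        "answer": "Customer records are retained for seven years, per policy DR-12.",
    },
    {
        "question": "Who approves exceptions to the vendor risk policy?",
        "answer": "The CISO approves exceptions, with quarterly review by the risk committee.",
    },
]

with open("grc_finetune.jsonl", "w") as f:
    for ex in examples:
        record = {
            "messages": [
                {"role": "system", "content": "You are our internal GRC assistant."},
                {"role": "user", "content": ex["question"]},
                {"role": "assistant", "content": ex["answer"]},
            ]
        }
        f.write(json.dumps(record) + "\n")

# Because this embeds proprietary data, access controls and review of the
# training set are themselves part of the risk management work.
```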
Advanced AI Predicts vs. Reacts
At an advanced level, AI can analyze historical data, identify patterns, and forecast emerging risks with remarkable accuracy. For example, AI-driven tools can assess credit risks by evaluating a range of variables such as market conditions, financial behaviors, and political factors. This proactive approach allows businesses to implement mitigation strategies early, safeguarding against financial instability and operational disruptions.
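As a toy illustration of that predictive pattern, the sketch below trains a simple classifier on historical cases and scores a new one. The features and data are synthetic; a production model would draw on far richer variables (market conditions, financial behaviors, political factors) and far more history.

```python
# Illustrative sketch: scoring credit/vendor risk from historical data with a
# simple model. Data and features are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical records: [debt_to_income, late_payments_last_year, market_volatility]
X_history = np.array([
    [0.2, 0, 10.0],
    [0.5, 2, 25.0],
    [0.8, 5, 40.0],
    [0.3, 1, 15.0],
    [0.9, 4, 35.0],
    [0.1, 0, 12.0],
])
y_history = np.array([0, 0, 1, 0, 1, 0])  # 1 = the risk later materialized

model = LogisticRegression().fit(X_history, y_history)

new_case = np.array([[0.7, 3, 30.0]])
risk_probability = model.predict_proba(new_case)[0, 1]
print(f"Estimated risk probability: {risk_probability:.2f}")
# A high score triggers early mitigation: extra review, tighter terms, etc.
```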
But AI isn’t magic. At this point, the inputs are especially critical. Bad data in means bad data out. You’ll want to examine that initial, raw data to make sure it is, indeed, accurate and reliable, and you’ll continue to test and monitor the validity of the input data and the output to ensure its integrity. There will also likely be limitations in your datasets and your modeling to take into consideration.
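“Bad data in means bad data out” can be partly automated: simple quality checks can run before any data reaches a model, with anything suspicious routed to a person. The column names and thresholds below are placeholders for your own schema.

```python
# Simple input-quality checks before training or scoring a model.
# Column names and thresholds are placeholders for your own schema.
import pandas as pd

raw = pd.DataFrame({
    "debt_to_income": [0.2, None, 1.8, 0.4],
    "late_payments_last_year": [0, 2, -1, 1],
})

issues = []
if raw.isna().any().any():
    issues.append("missing values present")
if (raw["debt_to_income"] > 1.5).any():
    issues.append("debt_to_income outside expected range")
if (raw["late_payments_last_year"] < 0).any():
    issues.append("negative payment counts")

# Surface problems for human review rather than silently feeding them onward.
print("Data quality issues:", issues or "none")
```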
Ultimately, you’ll want to be able to train models for the various verticals your business needs, such as a model for compliance, a model for risk, a model for vendor management, and so on, to get the greatest impact.
Human Perspective
According to Lougheed, “It’s critical to maintain a human perspective and oversight. We need to employ a human-centric approach to ensure the validity and accuracy of the data and the output.” These tools cannot be used safely and successfully in a vacuum, without human perspective and engagement.
AI in GRC: Practical insights and real-world examples
Learn where the rubber meets the road in this summary of a recent webinar Onspring hosted with Cential. Andrew Gunter, partner, and Jason Rohlf, consulting director, shared practical insights and real-world examples regarding the application of artificial intelligence to governance, risk, and compliance (GRC) processes.
Dive Deeper into AI for GRC
For more detail and a deeper dive into this fascinating topic, listen to the full podcast on CyberWire Daily, episode 2047. Onspring provides no-code, cloud-based governance, risk, and compliance software and is a leader in GRC best practices. To learn more about AI use in Onspring’s automation, feel free to request a demo.
Want to learn more on this topic?
- The GRC World Forum offers useful learning and insights on the future of AI in GRC and how to harness the power of AI to transform GRC safely and successfully.
- Read the McKinsey report on how AI can help banks manage risk and compliance.