3 Rules for Effective Risk Evaluation
Make the numbers work for informed decision-making
Mental math has never been my strong suit. Give me paper and pencil, and I can probably still crank out theorem proofs or algebraic equations. But ask me to add up the total of 10 grocery items in my head? Total train wreck. My husband really enjoys exploiting this weakness, and we have several stories of family lore where I’ve botched some simple math problem. (I get the last laugh when it comes to grammar.)
Recently, I encountered a real numbers challenge when supporting a risk evaluation session for an Onspring client. Here’s the breakdown:
8 department executives in
1 meeting room for
4 hours to evaluate approximately
20 risks and plot on a heat map by combining
6 different ratings
You do the math…is this even possible? With no breaks, no interruptions and no rabbit holes, we had just 12 minutes per risk to introduce it, provide background, discuss and answer questions, and have each executive submit their six ratings: a pretty daunting list of activities for a very short span of time.
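For the curious, the back-of-envelope math looks like this (a quick sketch; the numbers come straight from the breakdown above):

```python
# Back-of-envelope time budget for the session described above.
meeting_minutes = 4 * 60       # 4-hour meeting
risks = 20                     # risks to evaluate
executives = 8                 # participants
ratings_per_exec = 6           # ratings each executive submits per risk

minutes_per_risk = meeting_minutes / risks             # 12.0 minutes per risk
total_ratings = risks * executives * ratings_per_exec  # 960 individual ratings

print(f"{minutes_per_risk} minutes per risk")   # 12.0 minutes per risk
print(f"{total_ratings} total ratings")         # 960 total ratings
```

Twelve minutes per risk, and nearly a thousand individual ratings to collect: that is the hill we had to climb.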
We knew this was an ambitious feat and would require careful preparation to pull it off. So we established the following ground rules to help us execute a productive meeting:
Trim the Fluff
We designed a risk evaluation tool in Onspring that presented meeting participants with only the information they needed at that moment to make an informed decision. On tablets and mobile devices, users saw a field to rate the criteria, a rating scale guide and a space for comments. Critical background information was presented for each risk, and the group briefly discussed the scope of the risk and the rating criteria. Participants could submit their risk evaluations in Onspring and add comments in seconds.
Minimize User Error
We had no need to tally results manually. Using live data analytics in Onspring, the meeting facilitator shared results for discussion as soon as all participants submitted their votes. Resolution on scoring was swift, and live calculations provided average scores, minimums and maximums. Transposition errors were non-existent since Onspring calculations ran directly from participant submissions.
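Onspring handled these calculations for us live, but the aggregation itself is simple. As a rough illustration (the data and scale here are hypothetical, not Onspring's actual data model), computing average, minimum and maximum directly from submissions looks like this:

```python
from statistics import mean

# Hypothetical example: one rating (on a 1-5 scale) from each of
# the eight executives for a single risk criterion.
submissions = [3, 4, 2, 5, 4, 3, 4, 5]

summary = {
    "average": mean(submissions),  # mean of all eight ratings
    "minimum": min(submissions),   # lowest rating submitted
    "maximum": max(submissions),   # highest rating submitted
}
print(summary)
```

Because the summary is computed directly from the submitted values, there is no manual tally step where a transposition error could creep in.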
Adapt in Real Time
With the new meeting format and tool, we had to be willing to adjust anything that didn't work, in real time. One of our rating calculations in Onspring weighted values differently than the others, and the live calculation was throwing participants off. Although this calculation was still essential to producing the final heat map, the confusion had to go. So we quickly modified the participant-facing view of that rating without changing our overall scoring strategy, eliminating the confusion while still yielding the correct heat map results.
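The fix boils down to separating what participants see from what the final score uses. A minimal sketch of that idea, with an entirely hypothetical weight (the article does not disclose Onspring's actual weighting):

```python
# Hypothetical: participants see the plain 1-5 rating they entered,
# while the heat-map score applies a weight behind the scenes.
IMPACT_WEIGHT = 2.0  # illustrative: this criterion counts double

def display_rating(raw: int) -> int:
    """What participants see on screen: the raw, unweighted value."""
    return raw

def heat_map_score(raw: int) -> float:
    """What the final heat map uses: the weighted value."""
    return raw * IMPACT_WEIGHT

print(display_rating(4))  # 4
print(heat_map_score(4))  # 8.0
```

Keeping the weighting out of the participant view removed the confusion; keeping it in the score preserved the heat map's integrity.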
You may be wondering if our ground rules paid off and if the numbers played in our favor. We assembled a clearly defined risk heat map on which further discussion could be based, and we adjourned an hour ahead of schedule (which contributed to happy execs and an overall positive feeling about the meeting). I'd call that pretty solid math.