Beyond the Hype: The Real Implications of GenAI on Banking
In this episode of The Purposeful Banker, Dallas Wells welcomes Corey Gross, VP of Product Management at Q2, to talk about the impact of GenAI on banking, including current use cases and those expected to develop.
Related Links
[Podcast] How Can AI Deliver Real Value for Banking?
[Blog] How Small Financial Institutions Can Leverage AI to Play on the Same Field as the Big Banks
[Blog] Expanding Andi With the Andi Copilot Early Adopter Program
[Blog] The Andi Copilot Early Adopter Program: The Technology Behind a Trustworthy Copilot
Transcript
Dallas Wells
Hello and welcome to The Purposeful Banker, the leading commercial banking podcast brought to you by Q2, where we discuss the big topics on the minds of today's best bankers. I'm Dallas Wells. Welcome to the show. Today I'm welcoming back Corey Gross, who's been featured on the show a couple of times, most recently back in November when we did a Purposeful Banker livestream on how AI can deliver value for banking. Corey leads Q2's AI Center of Excellence, which helps us figure out how we use AI internally and also how we embed AI into our products for our customers. Corey, welcome back.
Corey Gross
Thanks so much for having me back.
Dallas Wells
All right. So Corey, we've got lots of interesting use cases to talk about today and things that we're excited about, but given that a lot of bank and credit union executives listen to this, let's start with the hype cycle. I think we should ground ourselves in some reality first. I talk to a lot of these executives day to day, and these folks are experienced buyers of technology. They've been down these roads before, where there's big promise about the next new thing, and a lot of them do work out. There was, heck, going back to internet banking, and cloud, and mobile, and these things eventually become real, but it feels like in the early days we can sometimes get out over our skis a little. So from your particular seat, where do you feel like we are in terms of AI, especially for these banking use cases?
Corey Gross
It's a good question. I think hype probably peaked in the months following the release of ChatGPT, built on GPT-3.5, when it appeared as just, quite frankly, magic to a lot of observers and folks who hadn't really familiarized themselves with language models and what the potential outputs could look like. I think that's where everyone's mind got out of control: this is the next financial assistant, this is going to automate customer support and service, there could be absolutely no bad consequence from having a human being provide personally identifiable information to these models, and so on. I think we're over that hump.
And I think that especially in highly regulated industries like ours, in financial services, we've reached this stage of, “Can I trust this thing? This was a great demo, this was really cool. I can see the potential for consumers to ask ChatGPT for recipes for their dinner party, but can I embed this within an experience that I put out to tens of thousands, hundreds of thousands, millions of users and not suffer any negative repercussions?” And I think we're now seeing the response to that: more practical use cases where you don't have to risk it all, whether that's your organization's intellectual property or confidential information being leaked, and heading into the practical ways a bank can implement this so that it benefits not just our account holders, but our people and how they work.
Dallas Wells
I think that makes sense. And bankers are also really good at this risk management aspect. So I think a lot of the conversations we're having are really about better understanding, how do we step into this in a thoughtful way and make sure we put the right risk controls around it. One of the first things we did here at Q2 was really think about the guardrails. What are some of those early things that you and your team worked on?
Corey Gross
I think early on what we wanted to do was basically set up a governance framework: how can we use these new tools within our walls ethically and responsibly, and then we could turn to the practical. Ethically and responsibly meant setting up a standard of acceptable use, identifying which use cases or ways of interacting with LLMs and transformer models were not going to be acceptable given the industry and environment we operate in, and working diligently with the leadership team to set up training for how to use these tools so that we could start to experiment, start to see the art of the possible, and then get practical with our use cases. So if you think of that hype cycle … I talked about GPT-3.5. You get to this peak of expectations, I think it's called the peak of inflated expectations, where it promises the moon and back.
What we tried to do, because we've gone through this exactly as you said with internet, mobile, and cloud, is get straight to what the ramifications are if we get this wrong, and then we can worry about what the slope of enlightenment and the plateau of productivity actually look like. So once we set up that framework for how not to hurt the business, and how to protect people from unknowingly doing something that increases our risk posture, then let's train, let's teach, and then let's start to experiment.
You mentioned, what did our team do? In the early days, we ran POCs. We said, "OK, well, ChatGPT is allegedly a great way to offer support to customers. Let's put together a support bot that we can specifically train on Q2 product data and put it in front of our customers just as a POC," which we did at CONNECT in 2023 to see how they would respond.
The response was, "This is really exciting, and this is really something that I can see over time helping us scale the way we answer simple product-related questions about new releases, or how to best configure a solution when a new offering or feature comes out." Or, "Hey, how do banks like mine properly configure the solution to get the best outcome?" We could use LLMs for that.
And so then our team started to turn to more ambitious pursuits, which I'm sure we'll talk about. But I think the name of the game is, once we set up a safe way of experimentation: experiment, test, and learn. We ran through that POC and found out what the limitations are, what would look great in a demo but would require X level of expense to put into production and suffer X percent error rate, which isn't something we want to put out. So that was the opening salvo post-GPT for us.
Dallas Wells
And I think that GPT moment you've mentioned was a really important one. It created all this excitement and awareness, but, like those moments tend to do, it also seemed to define what people think of as AI in maybe a limiting sort of way. I think there are still a lot of folks who, when they think of AI, and especially these language models, picture a chat window and a conversational interaction with a bot. So before we get into the specific banking use cases, help me walk through what lies outside of just that text chat interface. What sorts of things can generative AI do from a technical standpoint?
Corey Gross
When ChatGPT came out, and now of course we're in a GPT-4 world where you can have a more emotion-laden, expression-filled chat with a large language model, obviously that was the way to make it real for most people. But what generative models do that is exciting enterprises, now that we're getting through that slope of enlightenment … and I never thought it was purely a trough, because I don't think we were ever in a trough of disillusionment with large language models and generative AI. I think there was just a, huh, how are we going to fit this into our box of executing things in a compliant and risk management-oriented world?
But I think what you're starting to see emerge as the primary value proposition of generative AI is workflow automation. And this is what Microsoft has been touting from day one: "ChatGPT has provided the face of generative AI, but we're going to create something that acts as more of a copilot for our employees to increase productivity."
And, of course, copilots aren't new to Microsoft. They acquired GitHub, which had its own software development coding assistant, GitHub Copilot. And, of course, there's a competitive solution to that from GitLab as well. Those are automating tasks, and doing so in very complex workflows.
What machine learning executed really well was single-track: let's train a neural network with a set of training data to execute this one function repeatedly and predictably well. And then, of course, as you move through the machine learning spectrum, you can take on more and more complex tasks. With large language models and generative AI, it's just "summarize this document for me in a different language," asking it to do multiple things that might themselves be time-consuming and complex for a human being, and doing it with a fair degree of accuracy, certainly for the use case. And what I think Microsoft did in its tools for its Office 365 copilot was show you how you could summarize a document, or highlight a paragraph and say, "Turn the data here into a table."
These are real, practical applications for generative models, and they have nothing to do with speaking to a chatbot and having a human-like conversation. It's just: take this workflow that would take me 15, 20 minutes, maybe even longer, and execute it in seconds, and then I can edit the things that didn't come through properly. This way you take all of the major issues that enterprises and financial institutions have with GenAI right now, the propensity for bias, hallucination, "oh, you're using my data to train some other large language model," you take those out of the equation, and now you've distilled generative AI into: this is just going to help me get my work done faster.
And when I create my work on my own, it's not like I just type up a bunch of stuff and let it fly. I edit my work, I have other people review my work. So you're taking all of the risk factors, and you have a mitigating element, which is that you are in control of the ultimate output, you are in control of what you submit. But this thing is sitting alongside you, using your commands, your configuration choices, your preferences, to get things done faster and make you more efficient.
Dallas Wells
So I think what you're touching on there … I started my career in finance, and that's baked deep into my bones … is the model risk management that financial institutions have been dealing with for decades at this point. They've been through many exam cycles, and it's been beaten into their brains that these models need to be predictable. The way I test it is I put in a series of inputs, and then I need to know what to expect to come out the other side. So within that context, within that framework, can we do those same sorts of exercises with a large language model and come up with something that our examiners are going to be OK with on the other end of it?
Corey Gross
Right now, what you can do, of course, is tune the outputs you get from LLMs to apply a little more control, reducing the amount of hallucination, or creativity, as they'll call it, and really put guardrails around what you ask an LLM to do. And that's even with some of the more off-the-shelf models we've played with: Claude from Anthropic and GPT, of course. So there are some guardrails that are kind of built in. I think what we've historically done in the machine learning world is just expose the outputs. In a model risk management world, you have to show how the model reached its conclusion. With LLMs, that's a little more difficult if you just unleash GPT, because you don't know. You can ask what's ostensibly the same question five different ways and get five different results.
And so I think what we have to do is take applications where that is a feature, not a bug, and say, "Well, we're not going to implement that," because it's a misapplication of the tool to the problem. But in the case of what we're doing now in some of our experiments and, of course, some of the product development we're doing, we're taking what a language model does really well, which is summarization and translation, and we have an objective function of what the expected output is, like you would for machine learning. Then you can write evals for all the different ways you can execute something to see what the accuracy rate is on all those different inputs. And so that's a helpful framework.
And by the way, these eval writers are the next data scientists, I think, for generative AI-based software development, because evals create confidence that you can get an expected output from a large language model, versus just crossing your fingers, hoping and praying, and treating creativity as a feature rather than a bug, which, of course, I don't think is translatable into the financial services context. So it's really about, A) finding the right application for the LLM, B) writing evals, testing and learning as you go along, and having a high bar for what that output needs to look like before you create a production solution. And that's quite frankly how we've disqualified some features in our product development.
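(Editor's note: for readers wondering what "writing evals" looks like in concrete terms, here is a minimal sketch in Python. The call_llm function, the prompts, and the expected outputs are hypothetical stand-ins, not Q2's tooling; the point is simply that many phrasings of the same question get scored against one objective expected output.)

```python
# Minimal eval-harness sketch. call_llm is a hypothetical stand-in for
# whatever model API you actually use.
from dataclasses import dataclass

@dataclass
class EvalCase:
    prompt: str    # one of several phrasings of the same underlying question
    expected: str  # the objective output the task must produce

def call_llm(prompt: str) -> str:
    """Hypothetical model call; swap in your provider's client here."""
    raise NotImplementedError

def run_evals(cases: list[EvalCase]) -> float:
    """Return the accuracy rate across all paraphrased inputs."""
    passed = 0
    for case in cases:
        output = call_llm(case.prompt)
        # Normalize before comparing so formatting noise doesn't count as error.
        if output.strip().lower() == case.expected.strip().lower():
            passed += 1
    return passed / len(cases)

# Several phrasings of "the same question" should all yield the same answer.
cases = [
    EvalCase("Who is the borrower named in this agreement? ...", "Acme Corp"),
    EvalCase("What is the borrower's name on the attached document? ...", "Acme Corp"),
    # ... more paraphrases, each with the same expected output
]
# accuracy = run_evals(cases)  # gate production rollout on a high accuracy bar
```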
Dallas Wells
Yeah, it just doesn't work, it doesn't fit for that particular kind of job.
Corey Gross
Not reliable.
Dallas Wells
Yeah. You were talking about the applications there, the jobs we hand these models in our workflows and in the applications we're building, and a lot of what I've heard you and your team talk about is that idea of speeding up a process a human is already doing. So, again, in general terms, taking a block of text and putting it into a table. And I think it's worth noting, you said accuracy rates there: we're talking about something the humans in that loop were doing anyway, and their accuracy rates aren't all that fantastic either, right?
Corey Gross
We are not good, it turns out, at proofreading, at being detail-oriented.
Dallas Wells
Yeah.
Corey Gross
Yeah, exactly right.
Dallas Wells
So I think that's something that all of us as users and consumers of these models will have to learn in the years to come; there's some expectation management there. People see this as a model, a piece of software, so with any error it's like, "Well, this thing's broken, it doesn't work right." And it's like, "Well, if you had humans doing this, and they spent the last month on it instead of the five minutes it takes the model to spit it out, how many errors would we have had to go back through and correct and adjust and edit?" We just have to think of these workflows a little differently.
Corey Gross
Well, I think that's a good point. We were talking about this lately, when we were doing a demo of one of the features we're piloting with some of the FIs we're working with right now. In the demo, we were looking for errors, discrepancies in the data between a loan agreement, which had actually been executed by the customer, and the draft of the memo. And because this was based on live data, we actually found inaccuracies in the course of the demo, and that shouldn't happen, because that was an executed loan agreement.
So the folks on the call had to stop the call to see whether what was put out to the customer was still within that acceptability band. They're like, "Oh my God. It could have been real bad if that human-proofread loan agreement had gone out and wasn't within that margin of acceptability." And that's the value of having a tool, an assistant, that knows to look for these things and ask the question, "Are you sure?" If you think about the history of word processing, document processing, just the products that Microsoft and others have put out, we've gotten used to all these shortcuts, like find and replace. And a find and replace tool isn't perfect, because if you didn't spell the word the right way, it may not find it; it's not elastic enough for that. But it's doing so much of the work by finding the 37 words you actually want to replace with that word. That is better than having to go through it manually.
And it's probably more pinpoint accurate in finding those 37 matches than you would've been. Now you can comb through all the variations, or, heck, you could type the variations, even misspelled variations; we have that now. It doesn't mean you just say, "We're done." No. It's just an accelerant to your work.
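(Editor's note: the "elastic" matching Corey describes, catching near-miss spellings that a literal find and replace would skip, can be sketched with nothing more than Python's standard library. The names and the 0.8 threshold below are illustrative assumptions, not any product's actual logic.)

```python
# Flag near-miss spellings that an exact find-and-replace would skip.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means the strings are identical."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

target = "Corey"
words = ["Corey", "Cory", "Corry", "Court", "cover"]

for word in words:
    score = similarity(word, target)
    # 0.8 is an illustrative threshold; tune it against real data.
    if word.lower() != target.lower() and score >= 0.8:
        print(f"possible variant of {target!r}: {word!r} (score {score:.2f})")
```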
So in some of the experiments we're doing, what we're trying to figure out is: can we save someone 50% of their time on something like document review? And if it starts to edge even higher, what does that translate to in terms of value for a financial institution? Does it mean we can get money on the books faster because we're accelerating a loan agreement toward being signed? And what does it mean if we could collectively get a quarter's worth of loan agreements executed and on the books 50% faster? That's real money.
We've seen it even with GitHub Copilot, it's like, "Oh, this is the 37% or 12 …" These are meaningful numbers, because if you add up all that time, you can deflect developers and engineers to more meaningful, creative work; we're talking about immense value creation collectively. So think about the value that's created when you take, in the commercial banking context, folks who are ostensibly relationship managers or salespeople. They want to get out there and find more deals. They want to work arm-in-arm with their clients to understand their problem statements and find solutions to them with the products they offer. If they're sitting doing manual document review, it's almost a misallocation of resources. This isn't about 100% replacement of that job; it's about deflecting those people to higher value.
Dallas Wells
Highest and best use of that rare skillset, yeah. So you've talked a little bit about this concept of a copilot for a relationship manager. What are some of the other early use cases that you're seeing financial institutions be serious about? I think there's tons of possibilities, but what are some of the early ones that seem like they're practical in these early days?
Corey Gross
I think what's undefeated, in terms of use cases that a lot of bankers I've seen just gravitate to, is anything fraud, compliance, risk mitigation. Those are going to be the earliest adopters in terms of departmental spend on generative AI, because that's where the bad guys spend money, finding ways to dodge your framework so they can defraud the bank. So as a financial institution, you have to be investing in counter-solutions. And so we've seen copilots emerge for AML solutions, we've seen copilots emerge for compliance tech, for KYC.
So all of the areas you would expect to be the early adopters, because of where the bad guys spend their time and money, are picking these things up. And the next category, outside of fraud, AML, compliance, risk mitigation, that whole part of the bank, is productivity and efficiency, because that's where a lot of other software and enterprise spend comes in. A new version of Microsoft comes out, we've got to get that; a new version of Salesforce needs to be up; all the foundational tools and technology that allow our people to get work done. And that's been the area we dabble in.
Dallas Wells
And I think, to simplify it a little, it's anywhere we see these massive flows of data, where what we're trying to do is flag the things that need to be evaluated so the humans can make decisions. Humans are not good at combing through giant piles of data and finding the anomalies and the patterns. What they are good at is the decision-making, being able to extrapolate from what they know and the patterns they've seen and make a call, with the AI as a tool to aid them. And that general framework, that general pattern, there are so many instances of it in the banking universe that it's just about, as you said earlier, finding the right fits, the ones where it's easiest and most pragmatic to plug these tools into the workflow.
Corey Gross
It's like GPS. When GPS came out, you knew directionally where to go to get from point A to point B, and you knew there were different routes you could take. With Waze and Google Maps, it's now completely democratized. I remember the early AT&T commercials from the early ‘90s pitching that every car was going to have GPS built in within a year or so. And now we're in a world where the data we collect on all the different things that could happen, traffic conditions, an accident, helps get you from point A to point B faster. It uses data so that in real time you're equipped to make the best possible route decision to get to your destination. And that's what copilots are. Just as GPS is essentially our copilot, they help us get from point A to point B faster.
Dallas Wells
I like that analogy, because we had to go through a similar learning and adoption cycle. A lot of times that GPS would tell you something and you'd think, "Well, that's not the right way to go. This thing doesn't know what it's talking about." But it was making suggestions, and all of a sudden we started figuring out that when it told you to go a goofy way, it was, "Oh, it knew there was an accident up ahead that I didn't know about yet." And so we all started to slowly trust those things a little more than we did in the early days.
Corey Gross
And exactly like every single technology cycle, it might come down to user experience improvements. Waze was the thing people trusted more than Google Maps because it told you, "Upcoming accident reported 23 minutes ago; people are on the scene to clean it up. That's why we're rerouting you." It was all crowdsourced, and that's why Google bought it. Google bought it because they saw the opportunity to bring more fidelity to Maps, which was still the most-used GPS navigation application on mobile, but with Waze they took it to a new level, because it was able to translate the information it was getting to the end user so they could trust the guidance better.
That goes back to everything we put into this early framework for how we use AI: responsible, ethical, practical. Responsible to me means everything we put in front of a customer needs to improve their compliance posture, not increase risk. How do we do that? By telling you along the way how we got to a suggested answer, showing you where we don't have high confidence so you can investigate, and even providing citations when we reference documentation.
Some folks may not be aware of the dangers of using ChatGPT as a resource for researching jurisprudence, but a lawyer, in a case that I guess has now become infamous, used ChatGPT to pull case law in prosecuting a lawsuit. It turns out that all the case law ChatGPT returned was completely hallucinated, made up, and the citations it returned obviously couldn't be found; there were no such cases … I went to law school. You'd use LexisNexis or Westlaw for all of your case law and jurisprudence. Surprise, surprise, you couldn't find any of this on Westlaw or LexisNexis. So it's incumbent on us that when we put information in front of an end user that references documentation, whether it's policy information or a customer document, they can click on it and it takes them to the source. You're like, "I got it."
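(Editor's note: here is a minimal sketch of the grounding pattern Corey describes, where an answer is never surfaced without a citation back to its source and a confidence value that can route it to a human. The data structures and the 0.8 threshold are hypothetical, not Q2's implementation.)

```python
# An answer object that carries its own evidence: a confidence value for
# routing and citations pointing back to source documents. (Hypothetical
# structures for illustration.)
from dataclasses import dataclass

@dataclass
class Citation:
    document: str  # e.g., a policy document or customer agreement
    excerpt: str   # the passage the answer is grounded on

@dataclass
class GroundedAnswer:
    text: str
    confidence: float          # low confidence should invite investigation
    citations: list[Citation]  # empty list means the claim is ungrounded

answer = GroundedAnswer(
    text="The covenant requires quarterly financial statements.",
    confidence=0.62,
    citations=[Citation("loan_agreement.pdf", "Section 5.1: Borrower shall deliver ...")],
)

# Never surface an ungrounded or low-confidence claim as settled fact.
if answer.confidence < 0.8 or not answer.citations:
    print("Low confidence or no source: route to a human for verification.")
```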
Dallas Wells
Like a trust but verify, right?
Corey Gross
Trust but verify is, among other phrases, a key tenet of how we practice AI.
Dallas Wells
I'll give an example of something I saw that you and your team had built that I loved. We've seen some of these early fraud detection approaches where the tool just gives something a risk score. It's like, "Hey, we took in all these inputs, we put them into this mystery black box, and out came a risk score of 72." That's what they present to the user, and the user's like, "I have no idea what that means. Is that good? Is that bad? Where's the threshold? And why is it a 72?" versus showing the work.
What you all built, this one in particular was a document scanning tool, showed its work: "Here's where we found this information on that check, here's what we thought it said, here's what we compared it to, and that's why we flagged it to you." And that transparency takes some effort to build in, but I think it gets us through this adoption cycle. It's going to be really important in the coming years that we get that part of it right.
Corey Gross
Yeah, and it's like, how do you get out of a trough of disillusionment? You provide more transparency and explainability. Every step of the way, in this case with check data and document extraction, you show what the model is doing, if the user chooses to uncover that layer just beneath the surface, so you can see how it found the handwriting on the check, how it found the addressee name. And instead of providing an abstract score with no legend or way of interpreting it, it's: what is the use case, fundamentally? What are we trying to do here? We're trying to match an addressee to a payee name, and if they match, it should be binary. And if it doesn't, then the ask of the operator is just to investigate. If it doesn't match 100%, it could be an honest mistake.
By the way, you have to correct those mistakes in banking anyway. If I spell someone's name wrong, even in a pre-AI world, they would say, "Sir, we can't accept this check. Your name is spelled C-O-R-E-Y, not C-O-R-Y." And so similarly, we're just saying, "Check this," and if it's allowable because it's some exception, then we give the tool the override to say, "Now learn that this is OK, so that next time we don't have to burden you with double-checking something you've already accepted."
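(Editor's note: a toy sketch of the match-then-override flow described above: an exact match passes, a near match is flagged for investigation, and an operator-approved exception is remembered so the same pair isn't flagged again. The mechanism and threshold are illustrative assumptions, not the actual product behavior.)

```python
# Match an addressee to a payee name: exact match passes, a near match is
# flagged, and operator-approved exceptions are remembered. (Toy mechanism.)
from difflib import SequenceMatcher

approved_exceptions: set[tuple[str, str]] = set()  # operator overrides

def check_payee(addressee: str, payee: str) -> str:
    pair = (addressee.lower(), payee.lower())
    if pair[0] == pair[1] or pair in approved_exceptions:
        return "accept"
    score = SequenceMatcher(None, *pair).ratio()
    # A close-but-not-exact match could be an honest mistake: investigate.
    return "investigate" if score >= 0.8 else "reject"

def approve(addressee: str, payee: str) -> None:
    """Operator says this mismatch is allowable; don't flag it again."""
    approved_exceptions.add((addressee.lower(), payee.lower()))

print(check_payee("Corey Gross", "Cory Gross"))  # investigate
approve("Corey Gross", "Cory Gross")
print(check_payee("Corey Gross", "Cory Gross"))  # accept
```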
And that goes back to the conversation about how we translate data into information by using more optimal user experiences to actually make the process better. Because the underlying technology could be doing its job, but it's how we translate the outputs to the end users, in a way that's trustworthy and usable, that really creates value.
Dallas Wells
So let's shift a little and talk about maybe the more practical side: how financial institutions are going to have to buy, license, and start to use these tools. They're so ubiquitous now. If you're a banker and you're using Microsoft and Salesforce, there are already two pretty sophisticated, well-funded AI copilots in the mix. But how should financial institutions be thinking about these broad, general, big tech copilots versus the narrow, vertical, industry-specific ones? And how do we see those two things interacting, maybe even with each other, over time? I think that's really going to be fascinating.
Corey Gross
Yeah. I'd say across the world there are going to be many kinds of horizontal software that gain mass adoption across industries because they provide general-purpose use. Salesforce, it's called Salesforce for a reason. Every organization, enterprise or SMB, is either on Salesforce or on something that looks like Salesforce that Salesforce allows to exist in that particular segment. But fundamentally, it's a general-purpose tool that does a bunch of things well, even though I'd say it's difficult to set up and can be a bit of a bear of a solution because it has so much functionality. It's going to handle general-purpose use across all of industry. And the same is true for Microsoft. They are masters of their products, and, to ground it again with respect to their copilots, those copilots are going to be masters at helping you execute general-purpose workflows very well.
Tables are ubiquitous, spreadsheets are ubiquitous, document processing and PowerPoint presentations are ubiquitous use cases, so there should be a copilot to help you navigate them. Think about all the shortcuts and commands; as you learn to work with macros and all the various capabilities within document processing and the suite of Office products, you become more skilled and adept at using them, and a copilot can help you get there faster. For Microsoft, it's almost a bit of an onboarding tool that allows junior or beginner users to become experts; it helps them get to expert faster. Vertical copilots and vertical software, starting with vertical software, can hone their focus on problems and workflows that are unique to and specifically affect financial services, or retail. Point-of-sale software is something that's very specific to an industry.
Now, there are various segments within an industry. Perhaps Square could take a little bit of this and Epicor can take ... But they're very specific in what they're meant to accomplish: ERP solutions, et cetera. And in financial services, Microsoft isn't going to build digital banking. They're not going to build a lot of these highly specialized software tools that in many cases have to work together in some way. That's why a lot of these large financial services businesses exist: to create integration or adaptability between solutions that are either mission-critical for the bank to run or critically important for bankers who work inside and outside of branches to do their work properly.
And so when you think about a vertical copilot versus a horizontal copilot: horizontal copilots will take those general-purpose applications and help you master them. Vertical copilots will help you navigate the specific workflows and needs of the jobs at a financial institution, and help you string together those workflows that sometimes have to occur across multiple applications. You might start in application A and end in application F, but it's all part of the same workflow that is very specific and unique to your job. And that's where I think you need that industry expertise and knowledge to be effective.
Dallas Wells
And I would say also maybe sitting right there in the workflow, having access to the sensitive data that's very applicable to the work being done, but within the safe, walled garden of that application.
Corey Gross
That's the cost of entry. All of it has to come with the proviso that it's compliant, and you've got to have model risk management, and info security has to be a priority, and, and, and. Those are things a lot of horizontal providers might only build for larger institutions or larger enterprises that are able to pay more. The big horizontal software companies want to stay SaaS, they want to gain from economies of scale, and they can't carve out a lot of exceptions and stay that way.
Dallas Wells
Let's wrap with one more thing here around adoption. This has been really interesting. And I'm guessing you've had a lot of these conversations, too, with execs at banks or credit unions. Everyone agrees that AI, and especially generative AI, is going to be everywhere in, whatever, three years, five years, somewhere right over the horizon; this stuff will be ubiquitous and embedded throughout their organizations. But it's not there today. And I think what everybody's having trouble navigating is what those early steps look like, especially in the financial services world: "Where am I on that curve? Am I early? Am I late? Am I getting left behind?"
And there's a little bit of FOMO there, but there's also a little bit of "I don't want to be too far out on the leading edge and do something stupid here, some sort of career-limiting decision." So what's that decision process? What's the right way for folks to think about getting from here to what, in the banking world, is right around the corner, where this is going to be everywhere? How do we get from A to B?
Corey Gross
I think, like anything, it's about not optimizing or planning for perfect. Maybe instead of waxing theoretical, I can just throw out a few anecdotes about what we've seen financial institutions do. I've seen some organizations create AI task forces. We have a Center of Excellence, and that didn't mean it was the beginning of AI at Q2. It just meant we were bought into the idea that this is going to become an increasingly important technology for our team, in ways it hadn't been before, as well as for our products, and that requires some level of coordination so we don't make the same mistakes two, three, four, five times. Let's make them once and propagate the learning across the company. That's why we have an AI Center of Excellence. If we think about this from the perspective of a financial institution, you're probably already using AI. You're probably using some form of machine learning; you may even have more advanced machine learning in fraud tech and other solutions you've procured. So you have familiarity with AI.
I think where generative AI will play an increased role at your organization is internally. You might develop products that use AI, but probably the seismic shift here is that your people are using ChatGPT and DALL-E and other content generation tools like it's Microsoft Word. How do we put our arms around this in a way that protects them and protects us? I think that's where an AI task force can start: what is our acceptable use policy, what are our ethics around AI relative to this new leap forward, and how do we use that as a guide to teach people what is right and what is wrong? If you just ignore it and say, "Hey, we'll get there when we get there," I personally think that's a foolhardy approach, because your people can already use it and get access to it. If Q2 decided to just ban AI, we can't ban what people do with their cell phones and their home computers. They could expose the company to risk on their own machines as easily as they could on ours.
So teaching people the benefits, the how-to's, and the risks of AI is a good first step. The second is grounding use cases in existing problems, existing workflows. Your people are experts at what pains them every day. The thing humans are uniquely amazing at is complaining. If you listen instead of deflecting the complaints, you might actually hear valid problem statements where AI could play a meaningful role in making the bank or credit union more profitable and more efficient, because now there might be a solution that's optimally tuned to solve that problem. Going back to workflow automation: so much of what causes people heartache at a bank or an enterprise is how long it takes to do something menial, or how long it takes to do something that distracts them from the real work they want to do.
Well, if we listen to the things people bubble up as their largest complaints about the work they do, that might form a better basis for evaluating tools that specifically address those jobs to be done, versus getting consumed with "What should my grand AI strategy be?" That to me is a paralyzing effort. Instead, set up a good, responsible way of approaching AI, teach people the benefits and the pitfalls, and then listen to your people to pinpoint which workflows might be the best candidates for AI experimentation.
Dallas Wells
And Corey, what I heard woven through that answer is that while you do have to start by putting some policy and guardrails in place, this can't remain a philosophical or academic abstraction. At some point you've got to roll up your sleeves and get your hands dirty. It's about picking a problem, picking a process that has some rough edges but that you can get your arms around, and trying some of these tools. You'll figure out the good and the bad and the ugly with these things. And that's what we've done here: just jump into it. Some good things pop out; there's also a bunch of stuff that you just throw away, because it either won't work, or at least it won't work right now.
Corey Gross
And that's great. In any adoption curve, disqualifying what isn't the best use case is as valuable as stumbling on gold, because it's, again, orienting your time toward the highest-value returns.
Dallas Wells
Yeah, absolutely. Well, Corey, good stuff, appreciate you making the time, and I'm sure we'll be back to talk through this again as fast as it's all changing. So thanks for coming on.
Corey Gross
Anytime.
Dallas Wells
All right. Thanks for listening to this week's episode of The Purposeful Banker. If you want to catch more episodes, please subscribe to the show wherever you like to listen to podcasts, including Apple Podcasts, Spotify, Stitcher, and iHeartRadio. As always, we'd love to hear what you think in the comments, and you can learn more about the company behind the content by visiting q2.com. Until next time, this is Dallas Wells and you've been listening to The Purposeful Banker.