Balancing AI With Security and Compliance
In this episode of The Purposeful Banker, Dallas Wells welcomes Beth-Anne Bygum, Q2’s chief information security officer, to discuss the implications of GenAI on security and compliance and what financial institutions should consider for successful adoption.
Related Links
[Blog] The Andi Copilot Early Adopter Program: The Technology Behind a Trustworthy Copilot
Transcript
Dallas Wells
Hello and welcome to The Purposeful Banker, the leading commercial banking podcast brought to you by Q2, where we discuss the big topics on the minds of today's best bankers. I'm Dallas Wells. Today I'm welcoming Beth-Anne Bygum, who is Q2's chief information security officer.
For obvious reasons, this is an area where Q2 is making substantial investments. The threat landscape is bigger and more dangerous than ever. And as stewards of very precious personal data, our customers expect us to stay ahead of those threats. So, Beth-Anne is responsible for all things security, like global information protection, product and application security, enterprise IT security, cyber defense, and our general data protection strategy.
So, we'll touch on a few of those areas today, but we'll pull on one thread in particular that has been a really hot topic in all corners of fintech, which is some of the security concerns that financial institutions have when it comes to adopting AI. So, Beth-Anne, that was all a mouthful, but welcome to the podcast and look forward to talking through this with you.
Beth-Anne Bygum
Hey, Dallas, thanks for having me today. I am looking forward to the conversation, as well.
Dallas Wells
Yeah, you bet. As a first-time guest, why don't you give us a little bit of your background? Tell us how you landed in this field of security and then eventually how you landed at Q2.
Beth-Anne Bygum
It's been a journey, as any career is a journey for each of us as individuals. I originally, many moons ago, started in sales. I was very much interested at the time in understanding the growth and the trajectory of pharmaceutical and biotech sales, and started the journey there. Spent years in consumer packaged goods, moved eventually into the biotech and healthcare sector. And I really appreciated their regulated landscape. I know it's probably weird, but I love a good SOP. Over time, I began to really appreciate the fact that we are securing and defending at the code level. Everything that we do now is about defense in code. And that becomes more complex, or maybe easier, depending upon your lens, as we look at the acceleration of some of these digital technologies and the AI capabilities. But we're up for that battle.
As I mentioned, the journey has been across a couple of different sectors. And I landed back in the regulated space in fintech, working with the amazing crew here at Q2. I myself have 30 years in IT, with the last 16 in some form of security. My husband is honorably retired from the military after 33 years, and we are empty nesters, so we are ready to ride into the sunset.
Dallas Wells
Yeah. Yeah, very cool. So, interestingly enough, we've got a few folks who've come to Q2, in our IT and security and in the data world, from a similar medical background. I think it's because it's highly regulated, and the data needs to be protected in the same way. But I'm just curious, as you landed here with our customers being banks and credit unions, where did you find the state of the union for those customers? Are they ready for today's threat landscape, or do you feel like we've got some ground to make up here?
Beth-Anne Bygum
When we look at security and defense practices across all the different sectors, when I was in healthcare and biotech, we would constantly look to the FI sector. These are typically early adopters. When regulations change or we receive updated expectations from different regulatory agencies, you see certain segments adopt those practices early, and the financial segment was one that would typically have the early adoption. The conversation starts to become a little bit more interesting when you double click on that segment and look at the SMB space. I had an opportunity to sit with Jen Easterly. Q2 is a member of the National Technology Security Coalition, an organization of CISOs focused on contributing and providing feedback to the creation of policy. Director Easterly was very clear on the fact that we all play a role in securing our nation, but specifically companies like Q2 are in a prime position to help the customers that are part of their ecosystem.
I think we have a role to continue to play. I think FI has always been ... When you look at the big rocks and then you look at the SMB space, you still see companies in the FI space as early adopters, setting the direction and helping to demonstrate what good compliance and good adherence to practice look like. But it's a daily battle. It's a daily battle. And partnerships within the company, like working with you, Dallas, and the team members here, mean constantly taking that step back. What does that mean? How do we apply it? How do we increase rigor? That's a constant conversation.
Dallas Wells
Yeah, it's interesting. It feels like a lot of banks and credit unions, and maybe this is harsh self-criticism for most of them, feel like they're technical laggards. And there's this reputation of, "Oh, bankers are cautious and slow moving and there's no innovation." And I think, like you saw coming from a different industry, there's actually a lot of innovation there. And there's a need to be at the cutting edge. As the saying goes, "Why do you rob banks? Because that's where the money is." So, they've always had to be at the forefront of this. But I think that brings some interesting challenges, particularly for the executives that are ultimately responsible for this and have to make a lot of the budget decisions: where do we allocate our time and attention and funds to combat this stuff?
But it's also in the modern era become deeply technical. So, this is one of those areas where you've got executives that have to make decisions and they feel a little bit in the dark. So, do you have any thoughts on how some of these leaders and C-Suite folks can keep up with ...? How much do they need to know? How far down that rabbit hole do they need to go to make the right calls or at least find the right kind of trusted folks inside and outside of their institution to be able to make those decisions?
Beth-Anne Bygum
Dallas, I think that's an interesting point. We are at the cusp of a pretty significant disruption in technology. It's interesting, because when you take a look back over time, the last time we were at a point of disruption like this, there were similar questions, similar needs for understanding, similar kinds of, well, how much do I invest? Am I an early adopter? When you look at the adoption curve, should I be an early adopter, or should I wait until things vet out a little bit? Q2 is very much part of a growing body of individuals that are contributing to a school of thought. But one of the things that we have to be able to do is differentiate between what is fact, what do I need to be clearly aware of, start to learn and adopt, and train my organization around, and what is potentially the marketing noise.
So, maybe the first step is to start tuning your ear to the difference, to be able to delineate: this is part of a body of knowledge that I need to absorb and be really fluent in, versus this is just part of an overall campaign or marketing noise. That's the first step in beginning to train the ear and learn concepts within both the changes in the digital landscape and what's coming with the AI and LLM, large language model, discussions.
Dallas Wells
Yeah. So, it has all these echoes of like the move to the cloud. So, back in my PrecisionLender days, that company was founded in 2009 and we were cloud native from day one. A lot of our customers, as we sold them, that was the very first time they were ever putting real data into the cloud.
Beth-Anne Bygum
Yeah. And remember, in those early adopter conversations, we weren't necessarily talking about how significant a financial shift that adoption created. So, I think that's a great example of where you have to balance the facts with what the adoption of this technology entails. What is the comprehensive suite of decisions that I need to make, financial, legal, compliance, security, and then make the decision based on that suite of facts versus a few salient points?
Dallas Wells
Yeah. And as you're taking some of these first baby steps into it, you mentioned the economics of it. It's hard to justify that first baby step, because you have all of these really far-ranging policy decisions to make. At that time it was, well, what sort of data privacy policies and rules and procedures should we have for moving something to the cloud? And at times there was real friction, and we were just adamant enough believers that it was the right ultimate answer that we stuck with it. It was an interesting shift. It went from being hard to sell to, all of a sudden, it would've been hard to go the other way. Where it's like, well, if you're not in the cloud, we don't want to mess with your homegrown data center and have to do a whole separate set of diligence around that versus Microsoft Azure or AWS.
So, it feels like as we start to tiptoe into this AI stuff, it's a similar thing. Everybody I think feels like whatever the adoption curve is, it's been really fast so far. But not too far into the future, it's going to be one of those things where it's everywhere, and so we have to be prepared for it. But gosh, these first few baby steps are a little frightening and maybe a little expensive for the tiny little use case that we may be talking about, right?
Beth-Anne Bygum
For sure. I mean, I think we can take the conversation of AI adoption and break it into three buckets, again, going back to distilling the conversation so we're able to absorb it. So, if I break it into three buckets, the first bucket is the adoption of AI in technologies that my strategic partners use. If you look at it just from that lens alone, we're already on the fast track. We're already in the pool, we're already in the lake. We're swimming, because our strategic partners and the companies on which we rely for enterprise corporate capabilities, like your big tech solutions, have AI quite embedded into their ability to deliver solutions.
So, if an organization is just starting with, "How do I leverage the commercialized adoption of AI from a commercial perspective?" my suggestion is to first study the companies that you've partnered with. And have someone, either on your compliance team or your security team, constantly reading reports about the vendors to see if and when they've changed their use and their model of AI, specifically how the data that's hosted is being processed or used in those environments.
The second would be, to your point, if we're developing within an environment where a bank or a credit union has decided, "You know what? We're going to step into this and begin to develop some capabilities," then being able to partner with organizations that are maybe ahead of you to understand the differences in the models and what problems we want to use the technology to solve. Technology is an enabler for business decisions. So, if you've got really clearly defined business decisions, then you can wrap the technology around that. If the problem or the business opportunity isn't clearly defined, then potentially we're heading down a path where that investment just doesn't pay off.
And then the third, which really bumps up against the second, is: are you going to buy or build? Because at the end of the day, you can play in either or both of those spaces.
Dallas Wells
I think the pragmatic reality, when we think about a financial institution starting to use AI, is, like you said in that first bucket, your strategic partners. It's hard to even say we don't want any of this. It's so deeply entangled in there, even from Microsoft, which is in essentially every enterprise business in the world. From their developer tools all the way into PowerPoint, there are AI options in there. So, it's here, and you've got to be ready. But I think maybe the pragmatic reality that comes along with that is, as you take some of these first steps, there's some comfort, and maybe even some examiner credibility, that comes with trusted partners, and there's some safety in a track record.
So, it's a vendor that you've done business with for a long time. Maybe they helped you step into the early days of mobile. They helped you into the early days of the cloud. Now they're into the early days of AI. You've been down this road before with someone that you sit across the table from. They're also regulated. It feels like no matter how impressive some of those demos are, no matter how cute the parlor tricks that may show up in the demo are, grounding some of that with a trusted partner really matters.
Beth-Anne Bygum
Yeah, for sure. I mean, from a security perspective, with that first bucket of opportunities, we're looking at third-party risk, and then the securing of the data with that third party. It definitely increases the level of rigor in the way we vet our third parties. I collaborate quite a bit with the third-party risk team here at Q2. And as we study changes in how the vendors and the partners are adopting AI and processing data, you are constantly updating that playbook. And then from a testing perspective, if the solution is on network or a part of your ecosystem, or it's a solution where you have the ability to do some kind of testing, whether penetration testing or code testing, there are certain aspects of those solutions, whether homegrown or purchased, that now require a revised approach to testing.
At the end of the day, a lot of this goes back to some of the early concepts around the hygiene of secure software development. I remember years ago, I used to have this conversation with the lead architect at a company where I was working. And I would always say, "If we could just put the architects back in their rightful spot, then a lot of the downstream security conversations that we have would be addressed." And it's the same kind of concept with the adoption of new technology. We've got to get back to some of the basics, some of those core good engineering practice basics, because that's the light. That's the light in the darkness. And that's going to help us find our way as we're vetting the adoption of this technology and figuring out where we best apply it.
Dallas Wells
Yeah. So, you mentioned developing and buy or build. And I think for a lot of institutions they've always felt like we don't really develop our own stuff. But I think that's part of the promise and the allure of these large language models, is like you can have some laypeople, for lack of a better term, that can actually build and develop a little bit. These things can code. So, if we put ourselves in the position again as a leader at a bank or a credit union, and you've got somebody who comes to you and says like, "Hey, look, we can use some of these off-the-shelf tools, and we can create this thing. We can create this internal tool." Before you say yes to that, what are some of the first starting point basic questions that we need to ask about that, or maybe flags to raise to certain parts of the organization before we're just off and running with a skunkworks project like that?
Beth-Anne Bygum
Going back to just having good engineering practices, like a checklist or a decision-making RACI matrix: obviously, if it's a buy solution, you're vetting that vendor. You're understanding where and how the data is going to be processed. You're understanding what their secure software development practices are. You're having them attest to concepts such as there being evidence that a copy of the data is not stored within the solution and that the solution is not learning from your data. I'm sure you've probably seen some of the recent reports around some of the tools that we all use that have AI embedded in them, and they're learning off of the data that is housed in those platforms. So, those are some questions you really want to vet if you're on the buy side.
If you're on the build side, it's a very similar concept, but then you also want to partner with those development and engineering teams on the architecture diagram and the data flow diagram, understanding where within the solution some of the more critical business logic rules sit and how the analytics are being processed. Is that internal to your environment, or is it somewhere close, externally facing, or in an environment with shared hosting?
So, all of those design questions—which takes me back to the power and the role of the architect—you want to plan that out early if you're on the build side and just making really informed decisions around risk. If we're going to go down this path, what's the risk to the business and what's the risk to the brand? So, it's just really focusing in on that due diligence.
But the interesting thing, like I said earlier, is that we're at the cusp. This is a huge disruption. And there's a lot of conversation around whether AI is going to replace jobs or roles. I was on a panel and I heard someone say, "It's not that the AI is going to replace jobs or roles, it's going to replace the jobs and roles of people that didn't prepare." So, the question is, how can we start preparing ourselves and preparing our teams to be early adopters in a smart way, one that allows us to maintain security and demonstrate compliance, but gets all of our employees trained and ready to use the technology?
Dallas Wells
Like always, as sophisticated as the technology can get, it always comes back to the human element. And for a financial institution, that's always the big exposure: who's using it how, and who maybe gets duped in what way. So, let's maybe talk about some of those. And again, like you said, we're in the very early days, but there are use cases on both sides of that security wall. So, what are some ways that we're seeing financial institutions use AI as a helpful protective layer? And then what are some of the ways that we're seeing the bad guys use these tools, everything from deepfakes to whatever else, to find new creative ways inside those walls? What are some early examples on both sides where we're seeing AI?
Beth-Anne Bygum
Yeah, for sure. This technology is so fast. First of all, we're at a point in time where we can't rely on manual controls for the ability to detect, assess, and determine if something is an anomaly or a potential compromise. We are way past the ability to leverage manual controls. And so, strategic partners that have AI technology embedded into their detection and protection capabilities offer faster identification of unauthorized behavior, and the ability to run a more advanced review of the potential issue against threat intelligence and the state of the environment. Where is that potential threat within the environment? And then determining that faster, with more intelligence and more action-oriented intelligence.
The issue is that the same tools we use from a security perspective to protect ourselves are, in some cases, also the same tools that the threat actors leverage. There's this concept within the security industry that the actors will always take the fastest, most direct path in. So, even with some of our advanced AI-based capabilities, it still comes down to making sure that we're looking at the vulnerabilities and the configuration weaknesses, and tightening those up as quickly as possible. And again, even the tools in those spaces are helping to inform with faster intelligence. So, it's going to be an interesting evolution as the security practitioners continue to adopt technology that's designed to identify these threats.
You mentioned deepfakes. I think there are areas where we're still waiting for security vendors and technologies to help us with determining the authenticity of some of these images. I don't know if you've had a chance to play with Sora or been out there to look at that. But it is going to be even more challenging to authenticate as the technology becomes more vividly aligned to real-world identities.
Dallas Wells
Yeah. I think, ultimately, we're all going to have to learn to be more and more skeptical, to not trust our lying eyes, or our ears for that matter, because this stuff is really good and it's hard to stay ahead of it. So, maybe a little bit in that realm, maybe we can wrap up with ... well, it's probably a hard question, but this technology is moving so fast. And I think a crude proxy, but an interesting one, for measuring how fast it's growing is to look at a company like Nvidia, an infrastructure provider for this AI revolution. This will be a moving target, so it may not be true by the time you hear this. But as we record it, the market cap of Nvidia is bigger than the market caps of Amazon, Walmart, and Netflix combined, which is just sort of mind-boggling to me. That's where we stand.
So, given that pace and that sort of expectation of where this is headed, how do you see AI adoption evolving within these banks and credit unions over the next few years, call it two to five years? What do you think that looks like?
Beth-Anne Bygum
Yeah. You know what? It's interesting. This question is timely. We all need to go back to school, some kind of school, to learn and get ready for this technology. I was watching some videos over the weekend of the CEO of Nvidia, Jensen Huang, talking about his vision and strategy for that company. And he's driving a very clear disruption strategy that has not only achieved great traction, but the vision that he's laid out for the next five years is even more comprehensive. The question becomes, where are we in that vision? I had a professor who used to say to me, "You get once in life to prepare, and then you're on."
We're in the preparation window, and so the question becomes, how can I start leveraging this technology now so that I can learn from it in a safe space, before it's placed upon me and I don't have the time or the runway? This is the time when you sit down with your teams and you have those conversations. "These kinds of tasks are transactional, basic process tasks. I've set a target: 26% of our process-based tasks, I want moved to some kind of AI-enabled process." And why is that? So that my team and I can start to learn. We can start to practice. We have to be intentional about safely and quickly learning before it's here. Because within five years, it's going to be here in a way that we've never envisioned.
Dallas Wells
I think that's the consensus, is that it's coming fast, it's going to be everywhere. I like the way you phrased that of we're in this preparation window and make good use of that time. I think that's why this is top of mind for so many folks.
So, well, good stuff, Beth-Anne. Really appreciate you diving into this with us. Thanks for joining us. And as hot of a topic as this is, I'm sure we'll be back to cover some more soon, but thank you.
Beth-Anne Bygum
Sounds like a plan. Thanks for your leadership, Dallas.
Dallas Wells
Yeah. All right. So, that's it for this week's episode of The Purposeful Banker. If you want to catch more episodes, please subscribe to the show wherever you like to listen to podcasts, including Apple Podcasts, Spotify, Stitcher, and iHeartRadio. As always, we'd love to hear what you think in the comments. And you can learn more about the company behind the content by visiting Q2.com. Until next time, this is Dallas Wells and you've been listening to The Purposeful Banker.