The Buzz with ACT-IAC
Welcome to The Buzz with ACT-IAC – your source for the hot topics and top issues affecting the federal technology market. Join us each week to hear insights from government and industry leaders, stay informed on the topics facing government, gain access to thought leadership and valuable reports. Subscribe to follow us on your favorite podcast platform.
ICYMI: AI Unleashed
This episode features a panel discussion from ACT-IAC's Imagine Nation ELC event focused on artificial intelligence (AI). The session emphasizes aligning AI initiatives with mission objectives, rigorous testing, and governance to ensure public trust and operational success.
Subscribe on your favorite podcast platform to never miss an episode! For more from ACT-IAC, follow us on LinkedIn or visit http://www.actiac.org.
Learn more about membership at https://www.actiac.org/join.
Donate to ACT-IAC at https://actiac.org/donate.
Intro/Outro Music: See a Brighter Day/Gloria Tells
Courtesy of Epidemic Sound
(Episodes 1-159: Intro/Outro Music: Focal Point/Young Community
Courtesy of Epidemic Sound)
MYRA DUDLEY: Good morning. Good morning. You are at the right session. Welcome to AI Unleashed. My name is Myra Dudley. I am the IBM Technology managing director supporting GSA, and we're just so pleased to have you here at this panel. IBM is very proud to be a platinum sponsor of this event, and as a longtime leader in AI innovation, applying and advancing AI within our own enterprise and across partnerships in the public sector, I am absolutely thrilled and honored to introduce our moderator, Quan Myles Boatman.
MYRA DUDLEY: Quan is the president and CEO of Myles Edge Consulting and one of the most dynamic transformation leaders in government. Quan held Senior Executive Service roles [00:01:00] within the Department of the Interior's Interior Business Center. She also held leadership roles at GSA and FEMA, the Virginia IT Agency, and the Federal Reserve Bank of Richmond. In addition to that executive leadership, she has such passion and advocacy for public service, having led the Executive Women's Council, the Virginia Small Business Commission, and the African American Federal Executive Association.
MYRA DUDLEY: So please join me in welcoming Quan as our moderator today.
QUAN MYLES BOATMAN: Myra, thank you so much for that warm welcome, and I am so excited. Good morning, everyone, and welcome to AI Unleashed. As we look forward to the future, the intersection of [00:02:00] technology, workforce, and AI is becoming one of the most important conversations across government, business, and society. And as you heard a little bit from Larry Allen this morning, everybody is using it, and we are looking at how we integrate that into government.
QUAN MYLES BOATMAN: So in today's panel, we will explore how AI is not just a tool for efficiency, as Larry talked about with his tools, but also a force that can create meaningful value while ensuring our people remain central to every mission. So before we begin today, let's introduce our panelists. First, sitting next to me, is Commander Jonathan White.
QUAN MYLES BOATMAN: He brings extensive experience in maritime safety, national security, and operational leadership within the US Coast Guard, where emerging technologies such as AI-enabled analytics and [00:03:00] automated decision support tools are beginning to reshape mission execution. Commander White has been engaged in efforts exploring how AI can strengthen situational awareness, improve response times, optimize readiness, and support crews in complex, high-pressure operations.
QUAN MYLES BOATMAN: His perspective reflects the reality of integrating new technologies into environments where human judgment, safety, and mission urgency remain paramount. Next we have my former colleague, Kirsten Delash. She's the program manager for IT cloud services at FEMA. Kirsten is a senior leader who leads efforts to modernize systems, strengthen cloud strategy, and build the agency's cloud center of excellence.
QUAN MYLES BOATMAN: She specializes in turning complex IT challenges into scalable, data-driven [00:04:00] solutions and is known for bridging the gap between IT stakeholders and vendors to deliver clear, actionable outcomes. Prior to FEMA, she spent over a decade at GSA leading digital and data transformation initiatives that improved business intelligence across the agency.
QUAN MYLES BOATMAN: And next we have Lance Jenkinson. I want to note that Lance is serving in his personal capacity for today's breakout session, so Chatham House rules, everyone.
QUAN MYLES BOATMAN: Lance is a seasoned technology and operations leader with more than 15 years of experience driving digital modernization in complex, mission-critical environments.
QUAN MYLES BOATMAN: Lance is known for his practical, people-first approach to AI adoption. He focuses on helping teams understand how to use emerging tools to elevate human judgment, strengthen decision [00:05:00] making, and improve workforce efficiency. His "teach people how to fish" philosophy has been a powerful method for reducing fear around AI, empowering employees, and ensuring technology adoption is both meaningful and sustainable.
QUAN MYLES BOATMAN: And lastly, we have Bijan Mansoury. He's a senior leader at the Federal Deposit Insurance Corporation with deep expertise in financial systems, risk management, and data-driven regulatory oversight. As financial services evolve and threats grow more complex, Bijan has been involved in initiatives examining how AI and advanced analytics can enhance fraud detection, enable earlier risk identification, strengthen regulatory processes, and support secure, resilient financial [00:06:00] ecosystems.
QUAN MYLES BOATMAN: His work sits at the intersection of innovation and trust, two pillars that are essential to public confidence in modern financial oversight. So please give a warm welcome to our panelists.
QUAN MYLES BOATMAN: So before we get into all of the great knowledge and expertise of our panelists, we are going to start and set the stage today with a short video. This video is gonna highlight the Department of Veterans Affairs' use of AI in transforming mission outcomes. I want you to know that this video was completely developed through AI; no human was involved in its development.
QUAN MYLES BOATMAN: So as we watch the video, I want you to think about these two things: how AI [00:07:00] delivers tangible, measurable value, and how it improves service delivery without replacing the human element.
AI VIDEO: Welcome to our demo. We used an agentic AI orchestrator to identify a strong AI use case in the federal government and to create a full video about it. We asked the AI to generate a virtual panel on the topic and have it discuss questions similar to those we will later pose to our panelists. The results you'll see were produced with minimal human intervention other than keeping a human in the loop.
AI VIDEO: Creation and adjustments were driven by agentic AI and refined iteratively through cycles of its own reviews and modifications. We will start with an example of an AI use case in the federal government: validating AI for eye disease.
AI VIDEO: We're looking at a real-world test for medical AI run by the VA, [00:08:00] all about screening for eye disease, and the stakes are huge.
AI VIDEO: We're talking about diabetic retinopathy, a leading cause of preventable blindness. So the VA decided to put multiple AI systems to the test in a major head-to-head study. Get this: they ran seven different AIs on over 300,000 images from more than 23,000 veterans. So the big question, right? How did these AI systems stack up against the human expert graders?
AI VIDEO: The results? Kind of a mixed bag. One algorithm was great, but another missed almost 26% of advanced cases. This really drives home a key point: not all AI is created equal, not by a long shot. The bottom line is clear: for patient safety, rigorous real-world testing before use is an absolute must.
AI VIDEO: Now for a virtual panel created by AI on the topic.
AI VIDEO: You know the feeling, right? Staring at a huge document backlog, maybe dealing with long waits for [00:09:00] citizen services.
AI VIDEO: We keep hearing about the promise of ai. But how do we actually turn that promise, those cool pilot projects into real measurable value across federal agencies? That's the deep dive today.
AI VIDEO: Well, things are definitely moving fast. The vision is turning into actual deployments. Just look at GAO's 2025 review across the agencies.
AI VIDEO: They checked overall AI use cases as well; those nearly doubled, up to about 1,100.
AI VIDEO: 1,100. That's quite a jump. What about generative AI specifically? That seems to be everywhere.
AI VIDEO: That's even more striking. Gen AI use cases shot up ninefold, from about 32 to nearly 282 in just the year reviewed. But you know, just having more AI isn't the goal.
AI VIDEO: Getting value means structure and governance.
AI VIDEO: Exactly. That path, you know, from a pilot to something really operational, it absolutely has to start with solid oversight. We need to be using the NIST AI Risk Management Framework, the RMF. It's becoming our common language for risk and for defining roles, and that's a core piece of what OMB M-24-10 [00:10:00] requires too.
AI VIDEO: Okay. But governance can feel heavy, right? How do we keep moving quickly? From my perspective, running these projects, you absolutely need high-quality data you can trace, and you need a real product owner, someone focused on outcomes and on adoption by actual users, not just rolling out tech.
AI VIDEO: Right, which brings up the trust issue.
AI VIDEO: Yeah. This huge spike in AI, especially gen AI: are we helping our people or replacing them? That's the worry you hear.
AI VIDEO: It really has to be about augmentation. That's the only way this works for the federal mission. I think AI can do the heavy lifting, crunching data, drafting summaries, but the human tasks, judgment, ethics, that final call affecting someone that stays with staff, period.
AI VIDEO: Absolutely. That human in the loop idea is critical, especially for systems that impact safety or rights.
AI VIDEO: Thank you for your time. Now, back to the panel with real humans.[00:11:00]
QUAN MYLES BOATMAN: All right, well, the AI hasn't taken over today's panel just yet, so back to our humans. You know, this video is really just an example of the power of AI for true mission-driven outcomes, like preventing eye disease. This is really important. So I'd like to start by asking our panelists: how does the video resonate with you? I'll start with Commander White first.
COMMANDER JOHNATHAN WHITE: Yeah. So I've been on the ground floor of a lot of use case development and ideation in the Coast Guard, providing my tech expertise, right? So there's a balance between what the end user would say and, honestly, anybody can create that video, right?
COMMANDER JOHNATHAN WHITE: That's a cool pitch, right? So you're gonna be hit with a ton of these pitches that look really convincing, right? And it's "wait, we gotta do this, we gotta do this now." And you gotta balance that with the reality of the situation, right? Like, okay, how [00:12:00] much data did we actually have? Was it 300,000 macular degenerative eyes, or was it just a random sampling of something?
COMMANDER JOHNATHAN WHITE: To me, it's the confluence of the work that we've been doing the last three years: making data available, raising the quality of that data, really understanding it. We have 11 missions in the Coast Guard, and that translates to about 100 different data sets that align with different quality standards, different security standards, right?
COMMANDER JOHNATHAN WHITE: And so you really gotta work that layer first, entertain the use cases coming in, but then strategically align those with your mission value, right? Don't go after use cases that are not gonna advance your mission value. Don't go after use cases that you don't have the data for, right?
COMMANDER JOHNATHAN WHITE: You're going to fail, right? And there's that recent article, the one that gets posted all the time, right? 99% of things fail with AI, per MIT, right? It's because people don't consider the full stack when they look at it; they're just looking at the end goal. So that's what we're looking at right now: how do we connect really good [00:13:00] use cases with really good data so we can get actual results?
COMMANDER JOHNATHAN WHITE: Thank you.
QUAN MYLES BOATMAN: All right, Kirsten.
KIRSTEN DELASH: So, coming from the private sector to GSA to FEMA, what I've noticed is, and this isn't a surprise to anyone, government is 10 years behind the private sector. So we really lean on our partners and our vendors to help facilitate and bring us new technology and innovation, and that's what is happening at FEMA and DHS as a whole.
KIRSTEN DELASH: At a high level, there are many different pockets of AI initiatives happening, but I'm gonna echo what Larry Allen spoke to at the speaker panel: AI is unequal, and we're still not at a level where we're really doing AI; we're just scratching the surface. And in order for AI to really take hold, you've [00:14:00] gotta look at the back end; things have to be in the cloud.
KIRSTEN DELASH: I mean, we need to do due diligence to work with our smaller agencies to help them, do a lot more interagency outreach, so that we're all working from the same plane and we're all striving toward the same goal regardless of funding or inequities. That's just responsibility rule number one. As we still migrate to the cloud, there are still agencies that are not in the cloud, and they're falling behind.
KIRSTEN DELASH: We're already behind. So how do you catch up and still work the back-end infrastructure, the cloud modernization, getting the data sets, doing the data modeling, so that you have quality data sources, so that you have intelligent data to make intelligent choices? That's the space we're at.
KIRSTEN DELASH: We're finally looking at the vendors to start coming in and upskilling the workforce to get us to use Copilot [00:15:00] and, like John was saying, these many use cases, so that we take away the trepidation of this new skill, this new tool, and really get people to be curious, invite them to different challenges. Because as much as it seems scary and overwhelming, if you just bring your curiosity and an open mindset, you'll be amazed at what you can uncover and the direction forward.
KIRSTEN DELASH: So we're really getting to that planning and assessment stage as a whole, and we've got to really set the baseline and groundwork so that we can excel and really utilize the features. Thank you.
QUAN MYLES BOATMAN: Thank you, Kirsten. And Lance, you know, they're jumping right in, because I know they're gonna have such an engaging conversation.
QUAN MYLES BOATMAN: So Lance, in the areas of the video, what really stood out for you in thinking about measured [00:16:00] outcomes when it comes to how we frame up AI?
LANCE JENKINSON: Great. Well, what I really applauded the VA for doing is really taking a good look at good use cases that made a lot of sense for what they were trying to do.
LANCE JENKINSON: The way I've been looking at this, from my experience, is really trying to get our folks to crawl, walk, and run. And what I've also been finding is that a lot of this boils down to workforce enablement. Because in the command-and-control structures I've been working within, it was very much "oh, new tools, scary, don't use that" across the entire workforce. And now the pendulum has swung completely to the other side: "oh, you're not using AI? You don't know how to use this?" But I feel like we're focusing on tech. We have a lot of acquisition professionals reviewing proposals, but folks still haven't had the baseline understanding of how do you prompt, how do you use these tools, how can you discern what is marketing versus what's real? And so I think it's really [00:17:00] important that we take a step back and make sure we bring our staff along with us.
QUAN MYLES BOATMAN: Alright. Bijan, what about you?
BIJAN MANSOURY: Okay, well, thank you. I couldn't agree more with the panel members. You know, just to add to the points that were made, the video really resonated with me.
BIJAN MANSOURY: AI is more of an augmentation of what we want to do. The simplest way I think of it is: I want to get from point A to point B. I can walk, or I can take a bicycle, but that bicycle is worthless if I'm not using it. So there has to be a cohesive interaction between the two in order for it to be valuable in the end.
BIJAN MANSOURY: So, I know we have limited time, but we recently had a situation where we had an audit issue on a contract, and we were working on a use case to [00:18:00] use AI to process invoices, and the invoices were the issue. And the automation that we presented looked really good.
BIJAN MANSOURY: The senior leadership said, you know what, we're gonna put down that we're gonna resolve this issue by implementing this AI. But we had to push back and say, no, wait a minute. We have to make sure that the people involved are reviewing, approving, and processing things properly. Without that human interaction, without the cognitive and ethical judgment they bring, it's not really possible to use AI.
QUAN MYLES BOATMAN: Thank you, Bijan. So, Bijan, you were talking about how you're using AI for invoice automation. Now I want to transition a little bit and ask each of you: what types of AI tools are you using in your organization today, or what ways are you thinking about using AI tools?
QUAN MYLES BOATMAN: We heard a lot of conversation earlier this morning when [00:19:00] Larry was talking about how the tools are really the enabler for the work and the mission, to help support the work that people are doing. So I'll go in reverse order here and start with you, Bijan.
BIJAN MANSOURY: So we essentially break up our use cases. For example, when it comes to contract invoice reviews, we started with the simplest ones. We look at invoices that are simply commodities; for example, we buy a forklift, and the invoice for that is very simple.
BIJAN MANSOURY: So we use MPP to process that automation. Then we go to phase two, where we use learning; we bring in some [00:20:00] artificial intelligence capability for AI to look at the contract, look at the invoice, and make sure that they reconcile and make sense. And then we go to the next phase:
BIJAN MANSOURY: generative AI. We're not at the generative AI phase yet, but that's how our mindset works: go for the low-hanging fruit, the easiest way, then go to the next phase and take it from there, essentially.
QUAN MYLES BOATMAN: Like Lance said, you have to walk before you can run. So it takes that iterative process.
LANCE JENKINSON: Great. Well, thank you. From my experience, what we're seeing, also from the workforce enablement perspective, is showing staff how to leverage these various RAG tools, right? That retrieval-augmented generation. What we've seen be effective is with a lot of the policy teams: we get a lot of different queries, congressionals coming in, and being able to at least double-check your homework before you respond. Or it can at least point you to the general area of the particular [00:21:00] policy you're looking at, so you can then provide a correct response, get that second opinion before finalizing and shipping it off.
LANCE JENKINSON: It doesn't necessarily always get it a hundred percent. We all know hallucinations are real, even if the system really believes it's right, which is why it's so important that we still have our subject matter experts in that loop. But we've found that at least the policy teams I've been working with have seen a 30 to 40% increase in the workload they can handle, the queries they can handle, because they're able to skip some of the more tedious steps.
LANCE JENKINSON: What we're also looking at in the future is leveraging our various chat tools, though not for the end users. From our perspective, the risk of a hallucination giving a member of the general public something wrong, that's always what ends up in the Washington Post.
LANCE JENKINSON: But I think a nice, happy medium is providing extra tools to our tier-one [00:22:00] help desk agents, or to our other agents, so that when folks are calling in, our lower-level staff can solve more problems at a lower level versus having to escalate, and make sure they're not giving out wrong information.
LANCE JENKINSON: So we're looking forward to seeing how we can adapt those into our various assistance centers.
QUAN MYLES BOATMAN: Yeah, Lance, that's a really great strategy and approach you're taking, especially that 30 to 40% if they can do the heavy legwork ahead of time. It's just like having an intern: you're not gonna move forward or release something your intern gave you; you're really gonna validate and fact-check it before you send it out. So, Kirsten.
KIRSTEN DELASH: So, me being the IT tech nerd, I like to get my hands dirty and look at this stuff. At FEMA (DHS is our parent [00:23:00] agency), we've been allowed to set up a couple of labs and sandboxes.
KIRSTEN DELASH: We currently use Copilot. And so I was like, okay, how can I make this work for something simple that I do, so that I can make it relatable to other people, do a demo and show, and teach by doing? So I created an app that is basically cutting 20 to 40 hours of my time a month.
KIRSTEN DELASH: How I'm doing that is I've basically created a data set and model. I have a lot of spreadsheets, a lot of number crunching. I'm able to put agents onto other data sources and pull that in, then look at what the outcomes and parameters are inside my data model, so that I can test that the outcomes are what I need them to be, or within range.
KIRSTEN DELASH: And then I can do predictive modeling. So far I haven't launched it, but I've been testing it, and it's been [00:24:00] saving me 20 hours that I can reallocate to other tasks that aren't so redundant or manual-labor intensive. And I just wanna take you on a little journey, and I know we're time limited, but there's so much trepidation. We don't know what AI's gonna bring.
KIRSTEN DELASH: AI is not just an IT problem; it's an everybody problem. And the more we can gravitate toward it and just scratch the surface of it to understand, the better. We have the potential at FEMA to do generative AI, where we can do predictive analysis on historical storms.
KIRSTEN DELASH: We can see where the storm's coming in advance; we do have tools to do that. But imagine if all the tools talked together. So when that storm hits, we're already evacuated; there are alert systems, state, local, everything. We're boots on the ground, working with people. We have a point-of-sale [00:25:00] device in front of us that's hooked to the insurance agencies,
KIRSTEN DELASH: state and local, your bank account. I know, nope, that's kind of dangerous territory. But you can go to someone and say, all right, give me your ID, let's validate you, here's a voucher, your money's in your bank account. And then guess what? Development and refurbishing of your home starts in two weeks.
KIRSTEN DELASH: We're cutting time, cost, and resources, and we're becoming so efficient and helpful to our constituents. It's just a world of possibilities.
COMMANDER JOHNATHAN WHITE: So, ugh, I know. We're gonna be in a nerd battle this entire thing.
QUAN MYLES BOATMAN: I may have to summarize.
COMMANDER JOHNATHAN WHITE: I'm thinking, how can I one-up this? So here we go.
COMMANDER JOHNATHAN WHITE: So the Coast Guard went in really early on authorizing AI, right? There's a governance piece to this, which is saying: it's okay, use it, right? So we wrote some [00:26:00] policy very early on. I was on the ground floor pushing it, because I was like, we gotta grab onto this, right?
COMMANDER JOHNATHAN WHITE: And it's free, right? NIPRGPT and CamoGPT were right there, and everybody was asking me, oh my God, can I use this thing? And I'm like, I'm using it every day. Like, nobody told me I could use it.
KIRSTEN DELASH: Cheater.
COMMANDER JOHNATHAN WHITE: Yeah, right. A hundred percent cheating on the test, right? And I told all my team, all my junior officers, I said, if you're not using this to do your work, I'm gonna be disappointed in you.
COMMANDER JOHNATHAN WHITE: Totally disappointed. So I want you to come to me and say, I made this with a GPT, I vetted it, I smoothed it, and I'm delivering it to you as a final product. That should be the sequence of events. Go ahead, Lance, you wanted to say?
LANCE JENKINSON: Well, I think you raised a really great point, 'cause our teams have also made GPTs. "Great, I'm sure AI did that." But where it comes down to is how we want to handle disclaimers, for example. I mean, I [00:27:00] don't have to put an asterisk on my assignments when I use a spell checker or grammar checker. But there are two camps: one says if you touched AI to do anything on a work product, oh, now you need to disclaim that, 'cause somehow that makes it special, versus did you really use it to create something original? So I'm curious to get the team's perspective on where you all fall on that.
COMMANDER JOHNATHAN WHITE: Yeah. Oh no, come on. So I'm a big fan of: if it's a quality product, you don't need to disclose it, right?
COMMANDER JOHNATHAN WHITE: Because that means you vetted it, right? You as a human have vetted it. You could have created it yourself if you really sat down and got at it, right? Words are words, right? And I think that's important. Unless you're submitting a homework assignment, in which case, fine.
COMMANDER JOHNATHAN WHITE: But at the end of the day, if you're just creating an IATT test plan, I don't care how that got created, as long as it's valid and executable, right? Now, there's another story: like you mentioned with the bank accounts, if you're actually gonna put AI in the [00:28:00] middle of a dedicated process, you should probably disclose that AI's in the middle of that process, because errors can be made, and finger-pointing will happen.
COMMANDER JOHNATHAN WHITE: And who is responsible at that point, right? It's the self-driving car problem. What happens when my self-driving car crashes into your self-driving car? Who's at fault? Who developed it? Elon, I guess, right?
QUAN MYLES BOATMAN: Whoever developed theirs last, that's right.
COMMANDER JOHNATHAN WHITE: there is a bit of regulatory review and, and a certification associated with this that I don't think is present yet. And that's, that's a little bit of a worrying factor too. Go ahead, crystal.
KIRSTEN DELASH: And I wanted to chime in, because, Lance, you brought something up that's very critical, and it also has to do with the AI maturity model; governance is a huge aspect of that. Because what is proprietary?
KIRSTEN DELASH: What is self-generated? What is superintelligence? A lot of it is looking at, again, your data [00:29:00] sources, the content, and it's just the Wild West right now. But that's a whole other area that's gonna explode with AI: legality, confirmation, credibility, data sources. And so we're just scratching the surface, right?
LANCE JENKINSON: I mean, what we're also finding, though, is that when folks use AI, or maybe they're still earlier in that process, that requirements document or that test plan is gonna have a lot of AI slop in it, right? It's gonna have a lot of verbose language that could've been a lot better, and for some of our review teams, it can sometimes take more time to really thoroughly read the AI-slop documents than if you had just done it from scratch.
LANCE JENKINSON: But I see that as a good thing, 'cause at least it's showing that our teams are embracing the new tech and trying to use it, and it'll get better over time. It's just like practicing piano or an instrument; we all have to start somewhere. So I see that as worthwhile.
LANCE JENKINSON: And I think "AI slop" [00:30:00] might be a quotable.
QUAN MYLES BOATMAN: So, as we transition, we've talked about people getting used to the thought process that it's not cheating, that we can use it, but also what parameters there are around how we use it.
QUAN MYLES BOATMAN: So as we move on from the start, from how do we use it, what areas do we use it in, how do we disclose it, next we're thinking about: what's the value, and what are some of those success criteria or lessons learned?
QUAN MYLES BOATMAN: So, if you can, share some of the initiatives that did not achieve the intended outcomes and what lessons were learned from those experiences. Commander White, you talked about the quote that [00:31:00] 99% of AI initiatives fail. In real life, things fail; projects fail. But it's what you do with that information for the next attempt.
QUAN MYLES BOATMAN: So we have to remember that. If you can, talk about some of those AI initiatives that didn't quite go as planned: how did you go forward, and how did you use those lessons learned to get to that next stage?
COMMANDER JOHNATHAN WHITE: Yeah. So internally, we wanted to create our own kind of NIPRGPT experience, but at the very beginning I said, I don't want to recreate a chatbot.
COMMANDER JOHNATHAN WHITE: I'm not interested in a chat bot. I'm, I'm not here to be your psychologist or whatever. Right? What I want you to do is interact with that thing because you're doing Coast Guard business, right? You're going in, you're asking a question, you're getting the answer, and you're conducting your business, right?
COMMANDER JOHNATHAN WHITE: And you're out. Right. And that's, that was the key criteria. And I put on Ask Hamilton, which is our branded version of that. So we created [00:32:00] a prototype version of that very early. Um, this was probably before a lot of the, the true COTS products started showing up, right? And, uh, that was a, that was a very big success.
COMMANDER JOHNATHAN WHITE: I sold it all up in the headquarters, right? It was great. And then, unfortunately, two things didn't materialize: we didn't get people, and we didn't get funding for it. And I was like, I guess you don't want it then, right? So we fell back. We went back a little bit.
COMMANDER JOHNATHAN WHITE: We said, focus on the existing products, focus on championing DHS Chat as a solution, and then we'll regroup. So, try number two didn't manifest, and we're on try number three now. So we're back, right? And hopefully we're delivering that early in the calendar year, next year, right?
COMMANDER JOHNATHAN WHITE: Q2, and so that'll be a big deal. I have four use cases lined up. One is the general purpose: I'm a Coastie, I need to know how to do my business. Our marine inspector community is really into this, right? They carry a briefcase of information with them when they do an inspection of, like, a cruise ship, for example, [00:33:00] and they're pulling the tomes out, trying to figure out what's what.
COMMANDER JOHNATHAN WHITE: It'd be great if they could just snap a picture or ask a question and get an idea of how they should address a particular finding. So that's a huge use case. Our naval engineering community: as you may have seen, we got $25 billion sitting out there in the Big Beautiful Bill, and a good portion of that is on its way.
COMMANDER JOHNATHAN WHITE: It's on its way; it's on its way. Just pull the IOU sticky out. But it is huge for us, because we are going to build new ships, we are gonna build new aircraft, we are going to make it so that the Coastie of the future is able to do their mission. Those things don't materialize out of nowhere.
COMMANDER JOHNATHAN WHITE: There is a tremendous amount of work that goes into building a ship, right. Tremendous. And that's drawings, those are contract reviews, those are availability periods. Those are missionization packages. It's huge. So being able to not only develop those, but also validate them or vet them or check them, look for anomalies, look for weird stuff, right.
COMMANDER JOHNATHAN WHITE: [00:34:00] And make the process go faster. That's the goal.
QUAN MYLES BOATMAN: So Kirsten, um, tell us a little bit about, um, you know, what are some of those projects or initiatives that, you know, you've come across that you had some learning experiences from
KIRSTEN DELASH: Every day. I learn, Quan, every single day. But again, I come at it from a little bit of a different perspective, from the IT back end.
KIRSTEN DELASH: And I'm gonna echo this once more: it's really about getting your systems to the cloud. We're still at level one of the cloud maturity model. Not just we at FEMA, but a lot of us still are not utilizing cloud as it should be, or as efficiently, and it ties into the data sets and modeling.
KIRSTEN DELASH: And so it has to do with the quality, the consistency, the availability, so that he can do his job when he is out in the field. [00:35:00] And it's really setting up the environment so that people can put together their use cases, be creative, and start working towards creativity instead of just processes, looking at the world of possibilities and how we can offer that environment to our constituents and create something that's actually very viable.
KIRSTEN DELASH: But we're still in the embryonic stages. We're still, you know, doing the ChatGPT thing, and if you use ChatGPT and ask, what are your data sources, there's only like three. So imagine if you get the data model and you get all your data sets, really looking at 15 different ones: how much richer is that information gonna be?
KIRSTEN DELASH: Then you can start getting to the super intelligence where it can start mapping and looking at patterns and behaviors so that your outcomes are something that really is tangible and reliable information so that at the executive level, at the mission level, you can make really [00:36:00] good decisions and choices.
LANCE JENKINSON: So I wanted to add to that. For the last 25 years, we in government have been pushing agile and iteration, right? I don't see this AI push any differently. Again, we have to iterate; we have to start and continue to build that muscle, build that competency.
LANCE JENKINSON: And I feel like sometimes we're regressing, where we think the AI tool is just gonna give us that hundred-percent silver-bullet solution immediately, and it's like, no, we really need to stick to those foundational agile practices, which we've been applying to many different disciplines. But one other example I really wanted to toss out to answer your question about lessons learned, and this is also for our industry partners out here, because I think it's really important as they propose solutions to the government: a couple of years ago we wanted to implement a machine learning AI tool to help us with pre-flagging improper payments and dealing with compliance.
LANCE JENKINSON: And it [00:37:00] sounded great. It sounded like it was gonna provide everything we needed out of the box; we just had to configure it, we didn't have to customize it. We did a successful deployment, everything was on time. Then we started to use it. And this is where we ran into an interesting dilemma: in the private sector, a lot of the various business rules that they built into the product don't necessarily apply to, for example, government travel regulations.
LANCE JENKINSON: We don't care if there's alcohol on your receipt, but 50% of all the business rules in that system kept flagging alcohol every time it appeared on a receipt. So our teams were spending more time adjudicating false positives for something we don't care about, where perhaps we should have customized it from the beginning. I think folks need to keep in mind that the rule sets and the use case application need to match the unique government context.
QUAN MYLES BOATMAN: So you heard that, everyone: government [00:38:00] employees, no alcohol on your receipts.
BIJAN MANSOURY: Okay, so from my perspective, the biggest lesson and takeaway was change management. I think we could have done a better job on change management. We had the money, we had the resources, we had the developers, we tested the tool automation.
BIJAN MANSOURY: Everything was working fine; we just didn't put meaningful change management in place, and it was very difficult to launch it, essentially. But we learned through that, through the iteration process we went through, and it's working; it's very effective in our organization.
QUAN MYLES BOATMAN: Bijan, can you talk a little bit more about the change management? Because it is part of the success of any project, especially when we're doing something very new and complex, and it's a little scary for a lot of people and a cultural shift for everyone.
QUAN MYLES BOATMAN: What does that change management [00:39:00] look like, and how are you infusing it throughout the process?
BIJAN MANSOURY: Absolutely. The interesting part was, when we started, we said, okay, we will implement change management once we're done with the project; then we'll start training and everything.
BIJAN MANSOURY: But change management really starts when you are coming up with the idea. You have to bring people into the fold. You have to solicit ideas, suggestions, feedback. As we developed, we provided updates to the staff, transparency on where things are, and some of the feedback we received later was super helpful.
BIJAN MANSOURY: We were able to update, we were able to make changes that were helpful that we didn't see; there were some blind spots that we caught. So change management doesn't just start at the beginning, when the idea comes to fruition; all the way throughout the process, you have to work through it.
BIJAN MANSOURY: And then you work through training the staff, you work through updating the team [00:40:00] as you go through the iterations of the change management. So, an extremely helpful process. It shouldn't be taken lightly.
QUAN MYLES BOATMAN: So I'm getting a sign that we have like five minutes, so I'm gonna lean into one last question, about the human element.
QUAN MYLES BOATMAN: So, you know, when we think about the human element, with the workforce and leadership adoption, and again, we only have five minutes: as AI tools become more integrated, how is your organization keeping humans meaningfully in the loop?
QUAN MYLES BOATMAN: And again, we have five minutes, so that we can get to some Q&A from the audience.
COMMANDER JOHNATHAN WHITE: So, when we did the pre-brief, I learned that my word for the button is not shared with anybody, but I call it the sparkle button, right? It's the AI button. I think Google [00:41:00] patented it or whatever, right?
COMMANDER JOHNATHAN WHITE: It looks like a sparkle, right? So you click the sparkle, it generates text. What I want to do is seek out products that are building sparkle buttons into them, but vetted ones, approved through the proper information assurance channels, right?
COMMANDER JOHNATHAN WHITE: That is a key piece; those two things have to come together. But what I want to do is see your roadmap for when you're gonna give me my sparkle button. Okay? That's step one. And step two is to incorporate that into the general workflow of your business process, so that as I'm using the tool, I'm not going somewhere else to generate text and bring it in.
COMMANDER JOHNATHAN WHITE: I'm not gonna do that. I'm not gonna go on the internet and do that. I'm gonna hit that sparkle button and it's gonna do its thing, and then eventually that button's gonna disappear, and what you're gonna be presented with is an intermediary product that is automatically generated based on who you are, what your general purpose is, and maybe some initial selectors. Then you're just gonna vet that product and move it along its way.
COMMANDER JOHNATHAN WHITE: [00:42:00] Right? So eventually, I think the human experience is pushing AI actually away from you. It's not you interacting with the AI; it's the AI presenting you the answer, based on your context. And I really want to see the journey map of that from a lot of the large vendors who are looking to build it, 'cause AI is still very much a transactional experience.
COMMANDER JOHNATHAN WHITE: It requires you to initiate, to bring the right prompt, and then to do the right stuff. I would love to see that start getting rolled back. Not invisible, but rolled back, so it's less of "oh, I'm using AI" and more of "AI is absolutely enabling my success."
KIRSTEN DELASH: And I'm gonna look at it from a different approach: boots on the ground.
KIRSTEN DELASH: We've got people in the workforce who have to do more with less; time is of the essence. They haven't been able to carve out time to really look at what this is and [00:43:00] how it can improve things. So how do you work at it from that approach? You can have the tools in front of 'em, but they're intimidated. Again, it's behavior modification.
KIRSTEN DELASH: And then it's also showing them the world of possibilities. So how do you balance that with the tools, the sparkle pony that you mentioned, and keep moving forward? So, that's a different tool.
QUAN MYLES BOATMAN: I'm gonna step in. You know, when you have the sister-brother thing, these DHS people sit next to each other, and this is what happens.
KIRSTEN DELASH: So I'm gonna do a little plug: I reposted a Microsoft link on my LinkedIn account, and it has free AI training and certifications. And I encourage you to bring your curiosity, your open-mindedness, and try it.
KIRSTEN DELASH: [00:44:00] And just see how it goes, because that's the only way we're gonna be able to really adapt and be agile. You can bring a horse to water, but you can't make 'em drink; you gotta kind of shove 'em in and then fish 'em out.
KIRSTEN DELASH: It's gotta be the will and the curiosity, because if you don't have that, it's Who Moved My Cheese all over again: they're so far behind the curve that there might not be possibilities to keep moving forward and keep the continuum.
LANCE JENKINSON: And I'd like to echo Commander White's comment about how we're embracing this. The way I like to think of it: we're starting to really focus more on objective-based contracts; gone are the days of butts in chairs and number of hours. And this is why I've built trust with my stakeholders [00:45:00] across my career: I'll worry about the how, which I'll then roll down to my vendor to worry about.
LANCE JENKINSON: I just want my stakeholders to focus on the what. What are we trying to solve here? What's the problem we want to solve? So the way I think about AI is that I want to sideload it onto our existing systems. I don't necessarily want to get a big pot of money and go do some fancy AI project.
LANCE JENKINSON: No, I want to be able to iterate on whatever legacy application we have, and then ask how it's going to meet that mission better, with maybe that AI tool built into it. It should just be baked in as part of the solution, versus focusing on the tech itself. I mean, when I was buying my home, I didn't really care what brand of tool or drill they used to build it.
LANCE JENKINSON: I just wanted to make sure the rain didn't leak in, you know, when I'm sleeping in my bed. So I think we need to take that same thought process and apply it here. AI is no more special.
NOTHING: Yep.
BIJAN MANSOURY: So, just to add to the panelists' comments: [00:46:00] you know, as Spencer Johnson says, change is inevitable.
BIJAN MANSOURY: In our organization, AI is the reality of our future. So we have to embrace it, we have to work through it, and we have to find ways to augment our work in order for us to be more effective and able to meet the mission. Thank you all very much.
QUAN MYLES BOATMAN: So I want to thank the panelists for their insight and contributions.
QUAN MYLES BOATMAN: What we've heard today is really so much information about the change that AI is making, not just in our personal lives but in how it's affecting our work life as well. And it's not an easy button. There are lessons learned, there are best practices. You know, we need a little sparkle.
QUAN MYLES BOATMAN: It seems like we need a little sparkle in it. But it is an iterative process, and it's not about AI solving the [00:47:00] problem. What we're hearing is: how are we using AI to enable the success of what we're doing, to affect the mission, toward the end result of being mission critical?
QUAN MYLES BOATMAN: So those are the things we really should focus on. So thank you all. Please give our panelists a round of applause. Thank you all. And I think we have some mics in the room for questions.
AUDIENCE: Good morning, Jeff Erman with Next. Forgive me, 'cause my question might be a little self-serving, and there are no equine or sparkle pony analogies.
AUDIENCE: So as we heard this morning, AI is a tool, and it relies on people. And we also heard that the government is 10 years behind [00:48:00] the private sector. And we've all seen the MIT studies that say most AI projects fail to meet their objectives and deliver value. So my question is: are you today measuring the value of who is using AI in the organization, how they're using it, specifically those individuals, and what they're getting out of it?
AUDIENCE: And even more so, how are you harnessing that information to enable your other employees? Training is wonderful, but as we all know, when you walk out of the class you lose a lot of that knowledge as well. So, in an ongoing and scalable way, how are you dealing with this?
KIRSTEN DELASH: Yeah, I'll go ahead and start. So, working and partnering with DHS and the other components, we actually are working with DHS on pilots, and we've already gotten all these use cases. And so we're actually bringing those folks to the table to experiment, and creating a sandbox in which they can experiment.
KIRSTEN DELASH: And again, like Larry Allen [00:49:00] mentioned, AI is unequal; there are different pockets based on funding, based on talent of the workforce. But it's bringing it back to level setting, so that we invite people to use their creativity and curiosity, and that's unmeasurable. We still don't have KPIs or metrics to enhance that, 'cause we're not there yet.
KIRSTEN DELASH: We don't wanna intimidate them; we wanna encourage them. So we're still at the embryonic stage of something that's gonna be beautiful.
COMMANDER JOHNATHAN WHITE: So, two things. As part of the Ask Hamilton effort, we also combined it with an effort to build an AI hub: basically an inventory, plus a place to go and talk about the use cases, expose people to them, talk about the models, talk about, if we're building agents, what agents, what they do, where I can get 'em, what I can do.
COMMANDER JOHNATHAN WHITE: Right? It's almost like a data catalog, but very focused on AI. So that's a second deliverable of our work there. But underpinning all of that, we're doing journey maps with the users. We're saying, what does your experience look like today? Right: I open up a PDF document from my email.
COMMANDER JOHNATHAN WHITE: Well, no, excuse me, because we have a lot of STIGs that we have to comply with. So I download the PDF document to a download folder. I open it, I click the enable, enable, enable button, then I click sign. Okay, now it's signed. Then I save it again; that takes another five minutes. Then I go and attach it to the email, and I email it to somebody else.
COMMANDER JOHNATHAN WHITE: And then the person comes back to me and says, you used the wrong form. Literally happened last week. No sparkle there. There is no sparkle there. But the idea is, if that's your journey map, how can I solve that? That's a huge problem.
COMMANDER JOHNATHAN WHITE: And we haven't solved it yet. He knows. But we need to be talking about that too. Right.
LANCE JENKINSON: Well, one thing, though, about that same MIT study that we all love to beat AI over the head with: the success criteria, from my [00:51:00] understanding, was that the project needed to have paid for itself and been demonstrating value within six months; that was their target. In the government, we're not here to make a profit; we're here to meet our mission. And I think one way of looking at effectiveness is: how is this helping our staff?
LANCE JENKINSON: So, I took a huge hit in the DRP with a lot of my DBAs; I was down to like one. And this particular employee was not the most forward-leaning on AI, not through any fault of her own; I think just through the culture of that hesitation. But she was able to, at her own pace, without being forced, come to me, 'cause I have an open-door policy when it comes to wanting to use AI.
LANCE JENKINSON: I light up anytime someone wants to know how to use it. So: hey, we can actually write SQL and get recommended scripts from this. Let's load the schema, let's experiment. And okay, it didn't [00:52:00] stop all the bleeding; we're still in a lot of pain. But we're at least able to get a lot more work done, and it's been able to help her be a lot more effective in her role.
LANCE JENKINSON: So I say that's a success; that's a win. I wanna look at the small wins, 'cause I think those are gonna help build that trust and help it snowball, so we can then tackle larger projects without the workforce resistance.
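[Editor's note] The "load the schema, let's experiment" workflow Lance describes can be sketched as a schema-grounded prompt, with the DBA reviewing the generated SQL before anything runs. The table definitions and the question below are invented for illustration, and the prompt is meant for whatever vetted model endpoint an agency has approved; nothing here executes SQL.

```python
# Hypothetical schema for illustration; a real one would be dumped from the DB.
SCHEMA = """
CREATE TABLE payments (id INT, vendor TEXT, amount DECIMAL, paid_on DATE);
CREATE TABLE vendors  (name TEXT, debarred BOOLEAN);
"""

def build_sql_prompt(question: str) -> str:
    """Ground the model in the actual tables and keep a human in the loop."""
    return (
        "You are assisting a database administrator.\n"
        "Use ONLY these tables:\n" + SCHEMA +
        "\nWrite one SQL query to answer: " + question + "\n"
        "Return SQL only; a human will review it before execution."
    )

prompt = build_sql_prompt("Total payments to debarred vendors this year")
```

The design point is the last line of the prompt: the model proposes, the DBA who knows the data disposes, which is exactly the human-in-the-loop division of labor discussed later in the session.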
AUDIENCE: Hello, my name's Virginia Hug, and I'm a fellow of the National Academy of Public Administration. I'm working on an article based on the following question, so I'd love to hear your thoughts.
AUDIENCE: My thesis is that human in the loop is necessary but not sufficient. I'm a human being, [00:53:00] and, you know, at times we just fail. And at the same time, we are no longer in the information age; we are in the disinformation age, in an era when disinformation is a tool of our adversaries, and Wikipedia is not necessarily a source of truth.
AUDIENCE: Tomorrow it could say Ukraine started the war with Russia. So how do we deal with this, when we have large language models that are reliant on a broad base of information that is not necessarily accurate, and a human in the loop is necessary but not sufficient? When we're looking at critical missions, national security missions, health missions, the welfare of our cities, what I'd like to explore is how technology can be added, not to replace humans, but to augment the role of humans in ensuring that
AUDIENCE: we [00:54:00] don't have garbage in and garbage out, to make sure that the information and the data that we're reliant on in making our decisions is the best quality, period. Thank you again for your time and your contributions to our government.
BIJAN MANSOURY: Yeah, thank you for the question. Um, I, you know, I want to be optimistic.
BIJAN MANSOURY: I think we're 10 years behind for these particular reasons: we're very risk averse, and with the data that comes in, we want to make sure the data's clean, it's not manipulated, hallucination doesn't exist. So in some ways, being 10 years behind may be a blessing in disguise: it makes sure that things are tested, they run their course, and then they get implemented.
BIJAN MANSOURY: So that concern exists. I don't think there is an exact answer to it, but there is the process of iteration, the process of ensuring that the data we feed [00:55:00] into it is accurate. There's a lot of work that needs to happen.
BIJAN MANSOURY: And I genuinely think we need to be very careful with that. It's a genuine concern. Thank you for raising it.
LANCE JENKINSON: Well, I think step one, maybe x-nay on the Chinese models; DeepSeek, you know, no bueno. But my example with the SQL developer highlights the human-in-the-loop piece: it didn't replace her.
LANCE JENKINSON: She still knows the schema like the back of her hand. She still understands the data much better than any sort of model ever really could. So it's teaching her how to leverage the tool, so that she knows when she's getting an output that doesn't look quite right.
LANCE JENKINSON: And prompting, I guess, is more of an art than a science. That's what we have to keep developing, again building that muscle within our organizations, to make sure they can properly leverage these tools and [00:56:00] hone that particular feel.
LANCE JENKINSON: But this is also another reason why I really encourage others, because I know as leaders, as executives, we're not necessarily as hands-on with the tech. So one thing that I did, and would encourage others to do, is I built my own AI lab at home. I have several servers I set up.
LANCE JENKINSON: I have Ollama loading some models on them, and I built an image-generation setup, just to see how all the pieces work together, so I can at least understand the building blocks. So that as we're looking at various tools and solutions that are proposed, I can have some tangible understanding of how they work at some basic level, versus it just being all PowerPoints.
LANCE JENKINSON: So I, I, that's one thing I would really encourage.
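[Editor's note] The home-lab setup Lance describes is very approachable: Ollama serves a local model over an HTTP API on its default port, so you can poke at the building blocks with nothing but the standard library. A minimal sketch, assuming Ollama is already running with a model pulled (e.g. `ollama pull llama3`; the model name here is just an example):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(prompt: str, model: str = "llama3") -> dict:
    # stream=False asks for one JSON object instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the response text."""
    payload = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:  # requires a running Ollama server
        return json.loads(resp.read())["response"]
```

Nothing leaves the machine, which is exactly the appeal for experimenting before any procurement or governance decision.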
QUAN MYLES BOATMAN: So not only does Lance have an open-door policy at his office, but it sounds like we need to have an AI lab party at his house. An AI party at Lance's house, anytime. That's what I'm [00:57:00] hearing, Lance. Yeah, I heard the word "nerd."
COMMANDER JOHNATHAN WHITE: I was gonna say he won the NERD award.
NOTHING: Yes.
COMMANDER JOHNATHAN WHITE: So I just wanted to make one point: I really want to see some more research coming out of academia regarding small language models and medium language models. Mm-hmm. I'm gonna coin those terms right now, right? We gotta get away from these foundational models that are unbelievably corrupted with bad information.
COMMANDER JOHNATHAN WHITE: Right. And also, most of them are sparse. They're large because we suck at training things; we don't know how to make them small. And so I think we need to really narrow in on the research: how do I make these much more efficient? Plus, that also helps with the nuclear-power-plant problem that we have, right?
COMMANDER JOHNATHAN WHITE: So I think those go hand in hand. And honestly, whoever cracks that code is the next Nvidia, in my opinion. That's where the investment needs to go; that's where we need to be. 'Cause everyone should have their own [00:58:00] SLM, right? Tailored to their need, without the corruption that we don't even know is there.
LANCE JENKINSON: Well, one other thing to add to that, and I think this goes back to the video that we started the session with. Look, as government, this is something that we do: we love our governance, we love our various review processes, and we like to limit the technology solutions. Okay, this is my one AI tool that every use case needs to run through, because that's what I've vetted.
LANCE JENKINSON: But in the case of that video with the VA, you know, they tested multiple different products, and some of the results were quite different between those tools. And I think we need to make sure that as we set up our various governance processes, we don't go back to that same pattern of over-control.
LANCE JENKINSON: Especially in this early stage of the industry, we're gonna have to have multiple models: multiple small models, multiple medium models, multiple large models. We don't want to go crazy, but I think we need to [00:59:00] keep that open mind and have processes where we can at least add other technologies for the use cases.
QUAN MYLES BOATMAN: All right, well, it looks like I've received the sign: the end. No more time. Our humans will be around all day today, I believe. I'll be here, or otherwise we can just meet at Lance's house. But thank you so much for your insight today on where we are with AI.
QUAN MYLES BOATMAN: We've started the process of getting our use cases and our governance processes. We're in the stages of getting people familiar with how to use it, getting it entrenched that there's not just one way to do things, and getting them more comfortable with it.
QUAN MYLES BOATMAN: And the other most important thing: government isn't just looking at itself. We're looking at [01:00:00] industry to help us figure out our process, to make it easier for us to find ways to make it valuable and to use those metrics. So thank you again for your insight and the knowledge you were able to share. Thank you all for your questions; I'm sure there were lots of questions we didn't have time to get to today, but
QUAN MYLES BOATMAN: the panelists will definitely be around. So please make sure you take time to enjoy the rest of ELC. Thank you all. Thank you.
MYRA DUDLEY: And the next set of sessions, please show.