December 2023 – Vol. 12; Issue 12

Generative AI: The Easiest New Year’s Resolution You Can Keep

The Hottest Resolution for 2024

Every GovCon business runs on basic software for productivity, accounting, security, and more. Some run specialized software. Some sell software-related products and services. Computer software is as central to business today as ledgers and typewriters once were. We also know software will advance. We know we’ll have to keep up, and we try not to be hasty. Deliberation is good and caution can be warranted. Companies and entire industries have adoption patterns. And that leads us to generative AI (GenAI). Owners and execs are cautious about generative AI, if not skeptical of it. It’s very new, and it works differently than most, if not all, other business software. Its programmers don’t know exactly how it does what it does. It’s known to fabricate information. It might make your proprietary information public. Agencies’ views are mixed. It’s hyped to change the world, so caution seems warranted.

Follow Through on Your Resolution

Generative AI is unambiguously beneficial. Need a draft to get you started? Technical content simplified? Resumes found and scored? Done, done, and done. Generative AI can help in dozens of ways on client projects, in the back office, and in business development, capture, and proposals. It holds the promise not only of increased effectiveness and efficiency, but also of improved competitiveness. So, if you don’t use GenAI but your competitors do, that might cost you. For an innovative technology, generative AI is inexpensive, easy to use, easy to scale, and improves with use. Its “black box” is fascinating or worrisome, depending on your point of view. So how do you make this risk-benefit decision?


Planning Out Your Resolution

Choose a safe, easy starting point for your first try. Especially when using a public, Web-based tool, ask about subjects of general interest, technical questions you usually Google, news stories, or published reports. See what you get, pick part of the answer, and ask more about it. Challenge part of an answer. Ask it to answer at a fifth-grade level. Or summarize. Or elaborate. See what you get and ask conversational follow-on questions, just like when you talk to a subject matter expert. Your goal is to gain some knowledge of generative AI’s capability by using it, so you can see the potential for more organized and targeted use. Unlike some New Year’s Resolutions which are difficult to keep, this one might inspire you to do more.

Exercise Safe Practices when Starting Your Resolution

You have choices about where to start. Ask a vendor for a trial license or try a publicly available, Web-based tool like ChatGPT, Bard, Claude, or Bing. If Microsoft’s Copilot is available in your productivity applications, you can use that. Choose a tool to which you have easy access and have a go. You can compare and contrast later. For any tool you choose, read the vendor’s data, use, or privacy policy so you know what happens to your content. You especially want to know if your content is included in future tool training. By reading the company’s policy you’ll also know if you can opt out of certain uses of your content, such as for training. If using a public tool, don’t enter content which is privileged in any way. There’s more to know about safe use, but if you avoid privileged information for your first use, you limit your risk. If you use a licensed private tool, you can be more confident using proprietary information. Still, read and ask about the vendor’s data and privacy policies.

Six Pieces of Advice When Starting Out with Generative AI

  1. If you’ve been avoiding generative AI until now, do something with it. The more you try it, the more you’ll see business uses.
  2. Start small. Start safe.
  3. Don’t think of generative AI like a search engine. Think of it like a smart colleague or consultant and interact with it the way you’d interact with them.
  4. Be curious. If you wonder if generative AI can do something, ask it. Ask it something general, specific, or even fanciful.
  5. If your prompt is lengthy or complicated, chunk it down to smaller pieces.
  6. Know your tool’s data, use, or privacy policy, and exercise caution adding privileged information to tools which train on user inputs.

Lou Kerestesy
703-835-3267
lou.kerestesy@dwpassociates.com

TJ Sharkey
202-591-5958
thomas.sharkey@dwpassociates.com

Doug Black
703-402-4511
doug.black@dwpassociates.com

AI, Export Controls, And You – December 18

On October 30, 2023, the Biden Administration issued its Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. You can read DWPA’s summary of the Order’s purpose and intent here. Below we explain what the Order’s language about “dual use technologies” could mean to your business.

Artificial Intelligence has been part of government contracting and consulting for decades. As of September 1, 2023, AI.gov lists more than 700 use cases across 19 departments. The U.S. Government Accountability Office’s December 12, 2023 report, Artificial Intelligence: Agencies Have Begun Implementation But Need to Complete Key Requirements, identifies more than 1200 current and planned uses in 23 departments. And the General Services Administration identifies over 1200 members from 60 agencies in its AI Community of Practice.

Generative AI (GenAI) promises to increase use cases as readily available tools make GenAI accessible, affordable, and powerful for government agencies and contractors.

The Biden Administration’s Executive Order renewed focus on how AI policy will impact competitiveness, intellectual property, privacy, and national security. A key impact for U.S. companies will be compliance with export controls as firms navigate export constraints while developing, implementing, and offering AI systems and tools.

Key Things to Know

Robust export controls already exist in the US, in two forms. One covers “defense articles and services,” governed by the State Department’s International Traffic in Arms Regulations (ITAR). The other controls “dual use” technologies with both commercial and potential national security uses, governed by the Commerce Department’s Export Administration Regulations (EAR).

It’s relatively straightforward to identify and apply controls to “defense articles and services” subject to ITAR. It’s in the area of dual use technologies that regulations are less well-known and more ambiguity exists. These technologies require vigilance on the part of companies to ensure compliance as they consider how to employ AI in their offerings.

A critical business question is: what will be controlled? Generally, dual use technologies are controlled by “item-based” controls on systems and hardware (e.g., CHIPS Act export direction affecting the release of advanced semiconductors), or by “end-user” controls on countries, organizations, or individuals (e.g., the “Entity List”). But there is also a category of less well-understood controls that focus on the “end use” itself, and place obligations on exporters to have “knowledge” of what end users might do with the technology. These are end uses that could involve support of nuclear, missile, or unmanned aerial vehicle programs, or chemical/biological capabilities.

The responsibility to abide by these controls and requirements for compliance falls entirely on “US persons,” defined as both individuals and companies. There are substantial penalties, both criminal and civil, that apply to both.

Call To Action For Companies

In addition to the existing export controls, the EO will almost certainly drive new rulemaking at both the Departments of State and Commerce. Future regulations combined with the fast-evolving AI landscape mean companies should carefully evaluate and address export controls as they bring new capabilities to market.

The best practice is normally to obtain expert export control and/or legal advice. The best risk management for companies at this point is to seek that advice early, to avoid risky and potentially expensive impacts from export considerations.

To learn more contact Lou.Kerestesy@DWPAssociates.com.

Summary of AI EO Purpose and Intent – December 18

On October 30, 2023, the Biden Administration issued its “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”

Deep Water Point & Associates (DWPA) has substantial experience with laws, regulations, guidance, programs, and requirements central to the Order. We’re analyzing the EO to understand whether and how it might affect our AI and generative AI use, and our clients. Federal contractors and SaaS or PaaS cloud service providers not well-versed in Department of Commerce Export Administration Regulations (EAR) might start investigating. This article summarizes the Order’s purpose and content.

This EO builds on AI strategic documents and frameworks recently published by Federal agencies and institutes. It points out that much of what already applies to software development and data law also applies to AI. With 186 “shall” statements and 98 deadlines, the EO establishes clear direction and cadence for the Federal government’s next steps. It addresses AI and generative AI, and clearly describes when export control law and regulation apply.

The Order’s 13 sections are summarized below. Sections 4 – 11 constitute the Order’s “eight guiding principles and priorities.”

Sec. 1. Purpose emphasizes the significance of responsible AI use, highlighting its potential to address critical challenges and improve various aspects of society, while also acknowledging the risks associated with irresponsible use. It underscores the need for collaboration between government, the private sector, academia, and society to harness AI for good while mitigating its risks.

Sec. 2. Policy and Principles states that it is the policy of the Biden Administration to advance and govern the development and use of AI in accordance with eight guiding principles and priorities. This Section separately describes the eight guiding principles and priorities, which are Sections 4 – 11.

Sec. 3. Definitions defines 32 terms. Noteworthy among them is the term dual-use foundation model, which is used 16 times and is central to developer, user, and agency requirements and prohibitions.

Sec. 4. Ensuring the Safety and Security of AI Technology is the Order’s largest section, containing more than a quarter of its text, one-quarter of its deadlines, and almost one-third of its shall statements. This section details guidance and direction pertaining to safe and reliable use, almost two dozen infrastructure-as-a-service requirements, cybersecurity, biosecurity, and other types of uses and risks. Section 4 contains one of two uses of the term red-teaming pertaining to generative AI. Section 10 contains the other.

Sec. 5. Promoting Innovation and Competition outlines measures to attract and retain AI talent in the US, promotes innovation through public-private partnerships, provides guidance to patent examiners, and identifies measures to support AI in healthcare, Veterans’ services, climate change, scientific research, and other domains.

Sec. 6. Supporting Workers emphasizes the government’s commitment to understanding and addressing AI impacts on the workforce. It directs the development of reports analyzing labor market effects, principles and best practices for mitigating workforce disruption, and education and workforce development.

Sec. 7. Advancing Equity and Civil Rights outlines the government’s efforts to address discrimination, promote equity, and protect civil rights in various aspects of AI deployment, including the criminal justice system, government benefits and programs, and the broader economy.

Sec. 8. Protecting Consumers, Patients, Passengers, and Students highlights the government’s efforts to ensure the responsible and ethical use of AI in healthcare, education, transportation, and communications, while protecting consumers and addressing potential fraud, discrimination, and privacy risks.

Sec. 9. Protecting Privacy emphasizes the government’s efforts to address and mitigate privacy risks associated with AI, promote the use of privacy-enhancing technologies (PET), and support PET guidelines, research, and development.

Sec. 10. Advancing Federal Government Use of AI is the second largest section of the Order. It highlights steps and guidelines to advance the Federal government’s use of AI and enhance its AI talent and management. It forms an interagency council to coordinate the development and use of AI in agency programs and operations, other than the use of AI in national security systems. Section 10 contains the Order’s only references to the Technology Modernization Fund. It also contains one of two uses of the term red-teaming pertaining to generative AI. Section 4 contains the other.

Sec. 11. Strengthening American Leadership Abroad underscores the importance of the United States in global AI leadership, setting standards, promoting responsible AI development and deployment abroad, and addressing cross-border AI risks, particularly in critical infrastructure.

Sec. 12. Implementation establishes the White House AI Council, which will coordinate AI-related activities and policies across the Federal government. It identifies the Assistant to the President and Deputy Chief of Staff for Policy to serve as the Council’s Chair. It identifies 28 agencies’ secretaries, directors, and chairs as members, plus the heads of such other agencies, independent regulatory agencies, and executive offices as the Chair may designate or invite to participate.

Sec. 13. General Provisions ensures that this EO is not read as impairing or otherwise affecting authorities granted by law, or the functions of existing agencies and offices.

To learn more contact Lou.Kerestesy@DWPAssociates.com.

Using Generative AI Safely – December 13

A conference presenter recently told an audience, “Whatever you put on ChatGPT is out there. Gone for good. Out of your control.”

We hear that dire warning a lot, and it raises serious concerns about business use of public tools like ChatGPT or Bard. But the warning could also be more cautious than it needs to be, and cost you more than it buys in protection. Let’s see.

What Is Generative AI, And How Does It Work?

Most software we use is deterministic. It produces the same output given the same inputs and conditions. We rely on that predictability when it comes to writing emails and reports, and analyzing sales or budget scenarios.

By contrast, GenAI is generative. It’s designed to produce diverse and even creative outcomes using the same or similar inputs. We want it to brainstorm with us. To summarize a report in its own words. Or to change the tone of an email for us.

GenAI does this by using language patterns. It recognizes the relationship of words, phrases, and sentences and then uses statistical probability to select the best sequence of words to return to you, based on your prompts.

When you hear talk of GenAI training, this is what’s meant – training it to recognize and use language patterns. As an example, ChatGPT was trained on 300B words, including scoring and weighting them based on how they were used in sentences. This “deep learning” is what makes generative AI useful.
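To make the pattern-and-probability idea concrete, here is a minimal sketch of the selection step. The vocabulary and probabilities are invented for illustration; a real model derives them from billions of learned weights:

import random

# Toy next-token distribution for the prompt "The tomato sauce needs a dash of".
# These numbers are invented; a real model computes them from the language
# patterns it learned in training.
next_token_probs = {
    "salt": 0.40,
    "sugar": 0.25,
    "basil": 0.20,
    "soy": 0.10,       # rare but present, learned from unusual recipes
    "cement": 0.0001,  # nonsense continuations get near-zero probability
}

def sample_next_token(probs):
    """Sample one token, weighted by probability - why the same prompt can yield different answers."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

for _ in range(5):
    print(sample_next_token(next_token_probs))  # usually "salt", occasionally "soy"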

What Has GenAI Training to Do with Safe Use?

The way GenAI works tends to limit what others can know about your use. While it’s true that GenAI tools read your prompts and might store them for future training, GenAI’s focus on language patterns rather than whole entries helps control risk but doesn’t eliminate it. Consider an example.

Say you cook and want to make a tomato sauce you’ve never made before. You search online for something you haven’t heard of, and search engines return entire recipes to you. All the ingredients, quantities, steps, and times for you to read – as you would expect.

But what if you used GenAI?

Let’s say I had previously put my grandmother’s secret tomato sauce recipe – which includes a dash of soy sauce at the end – in a prompt asking a generative AI tool (a GPT) to make a shopping list for me. Let’s also say the GPT stored my prompt for future training. Would it return my grandmother’s recipe to you like search engines would?

Because GPTs analyze language patterns to return language patterns to you, a GPT isn’t likely to return her entire recipe the way a search engine would. But, had you told it you wanted to try something unusual, it could very well inform you that “Some tomato sauce recipes use a dash of soy sauce at the end” because that’s novel. It could offer that tip along with others, all based on novel ingredients from thousands (tens of thousands?) of tomato sauce recipes.

It matters little whether a GPT returns my grandmother’s entire recipe to you if it identifies her secret ingredient. Her secret is out. But had you asked a GPT for Indian tomato sauce recipes, or different recipes with paprika, it might not have considered a dash of soy sauce at the end relevant. Remember, it’s all about what you ask and the relevance a GPT determines using language patterns and statistical probability.

So, is your proprietary or privileged business information at risk of being made public, through your use of GPTs trained on your prompts?

The answer isn’t a flat no, but is it ever? The answer is “yes, depending,” and now you understand why. What, then, are safe uses of public GPTs?

A Word About Types of GPTs

AI terminology can be confusing. Glossaries contain dozens of terms, many of which sound like they say the same thing. Even the boundaries between simple terms like open, public, and proprietary aren’t so clean that certain terms always and only apply to ChatGPT or Bard, for example, while other terms always and only apply to, say, ACME Inc’s AI-assisted proposal tool. For the sake of easy reference, let’s divide products this way:

  • Public refers to ChatGPT, Bard, and others you can try for free by registering at the tool’s website
  • Private refers to dedicated, domain-specific tools you pay to use by user, per month, or by some other unit

We realize this might confuse architectures, fail to account for products with free and paid versions, ignore distinctions between publicly and privately held companies, and more. That’s okay because making those distinctions won’t change what we’re saying about safe use.

One safe-use advantage of a private tool is that you can build a separate document repository and use only that repository to train the tool. Your vendor’s tool might also have a data relationship to foundational models, however, which might expose your data to others through training. Vendors know how to firewall your data and let you opt out of model training. Read the vendor’s data use and privacy policies, understand the tool’s settings, and talk to the vendor if you have questions.

Can you also use a public tool safely? You can.

First, public tools might also permit you to prevent sessions from being used to train the GPT. Read their data use and privacy policies to understand how your data will be used, and to see if you can opt out of training.

Second, many valuable uses will have nothing to do with proprietary or privileged data. A proposal manager might use a GPT to improve their understanding of technical issues, to improve their conversations with technical SMEs. A team lead might role play with a GPT to understand the perspective of others on the team without ever using proprietary information. If you want to keep the risk-reward scales tipped in your favor, clarify what you want to accomplish with a particular use, know what success looks like, and ask yourself what might go wrong. You’ll find many ways to prompt a GPT which don’t require business data or information.

So, What’s the Bottom Line?

Recall the presenter’s dire warning at the conference: “Whatever you put on ChatGPT is out there. Gone for good. Out of your control.”

It’s true that the content of your prompts can be out there, depending on policies and settings. But it’s also true you can prevent the leaking of proprietary and privileged information.

And it’s true that the way GenAI uses what’s out there reduces some risk for you. How safe that feels is a subjective judgment we’ll talk about in the next article. But understanding how GenAI trains helps you understand how information you provide in prompts can show up for future users.

In the GenAI Discovery Project, DWPA is experimenting with public and private tools. Using public tools, we know there’s zero chance we’ll give competition any advantage – because there’s no advantage at stake. There’s no soy sauce in the prompts. For uses where there’s a chance we could give something away, we know it’s a small chance and we weigh the gain we want against the harm we don’t want, and act accordingly.

DWPA has not used private tools, yet, beyond Discovery Project trials, so we can’t speak to practices with them. We know private tools have additional safeguards built in. If you use or are considering a private tool, talk to your vendor about how it’s trained and how your data might be included.

Whether using a public or private tool, read your tool’s privacy policy or statement. They’re not generally written for easy reading, but gut it out so you know what’s happening to your data. You’ll probably see a choice for opting your content out of tool training. DWPA has exercised that option.

Beyond understanding how GenAI tools train and work, safe use comes down to use cases and risk tolerance. We’ll look at that in the next article but, for now, we’ll leave you with the thought that you probably already engage in a practice which is like determining GenAI safe use: Asking questions at an industry day, or in written Q&A during a solicitation process.

You can ask in ways which show your hand, or in ways which don’t. You weigh the odds of gaining information to your advantage versus benefiting your competition and neutralizing your gain. You might have done this for years, and it’s a risk-reward decision similar to deciding how to use GenAI, especially public tools.

To learn more, contact Lou.Kerestesy@DWPAssociates.com.

Prompts Are Easy. Adoption Is Hard. Here’s How to Be Ready. (Part 2 of 2) – December 8

Part 1 of this two-part article defined adoption and talked about what makes it hard in any organization. Part 2 describes the ways you can manage adoption challenges.

Exec Summary: Over the past year, countless blogs, articles, books, videos, courses – even job descriptions – have focused on prompts and prompt engineering. While prompting is essential to effective GenAI use, it’s only one thing to consider. Generative AI outputs are another, and they need more attention.

Recall the IPO Model – inputs, processes, and outputs. For a simple generative AI use, prompts are the inputs, algorithms are the process, and a GPT’s response is the output. For more complex uses, inputs and processes combine as a user and the GPT interact through a set of prompts. Outputs can also take on a new importance, depending on where they lead, as the sketch below illustrates.
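Here is a minimal, hypothetical sketch of that IPO framing. The generate() function is a stand-in, not any vendor’s actual API; swap in your tool’s real call:

def generate(prompt):
    """Process: a stand-in for a real GenAI call, so the example runs without any service."""
    return f"[model response to: {prompt!r}]"

# Input: the prompt.
prompt = "Summarize this article at a fifth-grade level."

# Output: the GPT's response.
draft = generate(prompt)

# That output may in turn become the *input* to someone else's task or
# process - the point at which the adoption questions below begin.
print(draft)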

If the outputs of my GPT use become inputs to a business task or process you own, we face added requirements for communication, collaboration, and probably change management. And that calls for an approach to addressing the hard adoption questions.

How Do We Answer The Hard Adoption Question?

Meeting the challenge of generative AI adoption will require a comprehensive and methodical approach. Here are three principles we’re applying at DWPA that we recommend you consider:

  1. Use an adoption framework
  2. Clarify goals and objectives
  3. Think like an entrepreneur

Use An Adoption Framework

The grand-daddy of innovation adoption frameworks might be Everett Rogers’ Diffusion of Innovations theory. In his 1962 classic (updated through a 5th edition in 2003), Rogers explains how an innovation diffuses through, or is adopted by, a social system. There’s a lot to Rogers’ research and it would be worth your time to read select portions of the book. But we can highlight the pieces you can use immediately.

Best known might be the adopter types Rogers identified and arrayed temporally along an S-curve. Rogers called it the innovation adoption curve because he studied many types of innovation. Today it’s popularly known as the technology adoption curve.

The curve shows that adopters fall into five types within any social system – your company, the Federal government, the govcon market, etc. – and that they adopt at different rates. This happens because of the time they take moving through the five stages Rogers identified:

  1. Knowledge is gained when someone learns of the existence of an innovation, and gains some understanding of how it works. This leads to Persuasion.
  2. Persuasion occurs when someone forms a favorable or unfavorable impression of an innovation, generally before using. This leads to a Decision.
  3. Decision occurs when someone engages in activities which lead to adoption or rejection. When adoption occurs, this leads to Implementation.
  4. Implementation occurs when someone puts an innovation to work. This leads to Confirmation.
  5. Confirmation occurs when someone is reinforced for additional use, or reverses their decision and rejects the innovation.

Finally, adopters do all this because of the different ways they evaluate the following Innovation Adoption Factors:

  • Relative advantage is the degree to which an innovation is perceived as better than the idea it supersedes.
  • Compatibility is the degree to which an innovation is perceived as being consistent with the existing values, past experiences, and needs of potential adopters.
  • Complexity is the degree to which an innovation is perceived as difficult to understand and use.
  • Trialability is the degree to which an innovation may be experimented with on a limited basis.
  • Observability is the degree to which the results of an innovation are visible.

You might be familiar with another, simpler framework, the Technology Acceptance Model, which looks at just two factors – perceived usefulness and perceived ease of use. You might have a preferred framework, model, or theory. The important thing is to use one (or more) so everyone works with the same concepts and terms. Without that, people who need to be on the same page won’t be.

Clarify Goals And Objectives

The second principle is to clarify both uses or use cases, and broader adoption goals and objectives. It helps to clarify uses with a statement like the following, which captures use case elements:

As a [role], I want to [perform some action] on [some artifact] to produce [some output] 
in order to [accomplish something] or for [some reason].

This will not only help everyone think through any single use case, but it’ll promote uniformity and consistency across uses by all individuals, teams, and other organizational units. There are other ways to do this but the important thing, again, is that you get everyone on the same page by framing uses with concepts whose meanings are shared.
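One lightweight way to keep those elements uniform across individuals and teams is to capture them in a shared structure. The sketch below is a hypothetical example of ours, not a DWPA tool; the field names mirror the bracketed template and the values are invented:

from dataclasses import dataclass

@dataclass
class UseCase:
    """One record per use, so every team frames uses with the same elements."""
    role: str       # As a [role]...
    action: str     # ...I want to [perform some action]...
    artifact: str   # ...on [some artifact]...
    output: str     # ...to produce [some output]...
    purpose: str    # ...in order to [accomplish something].

    def statement(self) -> str:
        return (f"As a {self.role}, I want to {self.action} on {self.artifact} "
                f"to produce {self.output} in order to {self.purpose}.")

example = UseCase(
    role="proposal manager",
    action="summarize technical sections",
    artifact="a draft technical volume",
    output="a one-page plain-language summary",
    purpose="brief non-technical reviewers before pink team",
)
print(example.statement())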

Clarifying adoption goals and objectives is trickier because adoption occurs by individuals, teams, business verticals, business functions, and the entire enterprise. Each properly has its own business-related goals and objectives which can exist in nested, prioritized, instrumental, or a number of other relationships.

Because adoption is about making full use of generative AI, and because making full use should do something better than what you’re currently doing, it’s important to use frameworks for figuring out what better means at any level. You might already use frameworks for individual performance, collaboration, productivity, innovation, or other things related to one or more levels. DWPA uses the Business Model Canvas. 

Think Like An Entrepreneur

“Think like an entrepreneur” is a way to summarize DWPA’s entire GenAI Discovery Project, which we’ve written about extensively. 

We’ve described our process for stating assumptions about generative AI, our clients, and the govcon market, and how we turned them into hypotheses to test. Test results are evidence we’re using to fashion GenAI-assisted capture and proposal services to validate with customers before going to market with them.

Generative AI is innovative and your use of it is also innovative. It helps to think like an entrepreneur because by adopting an innovation you are literally doing something different to create new value for yourself, internal recipients, and perhaps your customers.

At the outset there will be nothing but assumptions because you can’t have evidence for generative AI use you don’t have. State all the assumptions you can think of, turn important ones into hypotheses, and test them. Tests can be quick and easy – generative AI trials, simulations, if-then scenarios, voice of the customer, and more.

You need only hours or days to try something and see what you get, and that’s your evidence. You’ll get strong evidence. You’ll get weak evidence. Collect it. Appraise it against goals and objectives, and apply it to see what happens in what amounts to another round of hypothesis testing and evidence gathering.

Conclusion

Generative AI is a powerful technology which is changing the human-machine relationship. And that has the potential to change the human-human relationship. Whether that change is beneficial or not depends entirely on us.

Use generative AI the way you use all other software and you’ll get some ROI, but not what you could get. Shift your thinking from use to adoption and you’ll not only execute tasks faster, you’ll improve communication, collaboration, and problem solving.

Prompts Are Easy. Adoption Is Hard. Here’s How to Be Ready. (Part 1 of 2) – December 7

Prompts and prompt engineering became all the rage just a year ago once the world had free access to a powerful, personal new AI tool called generative AI (GenAI). “How to” prompting blogs, articles, books, videos, and entire courses quickly appeared. And for good reason.

The way generative AI works is entirely different from most software we use, and learning to prompt it is essential to benefiting from it. But the benefit is in what we do with what generative AI gives us. In the outputs, not just the inputs. And that means thinking harder about adoption.

This two-part series defines and describes the adoption challenge, explains why it matters for business, and offers tips for managing it.

Follow ThinkSpace for weekly insights and contact Lou.Kerestesy@DWPAssociates.com for more information.

Prompts Are Easy

To prompt a generative AI system or tool – let’s call it a GPT – is to instruct it to do something for you. There are different ways to prompt GPTs, each of which has a purpose.

Prompt terminology sounds esoteric and much more intimidating than necessary.

  • N-shot prompting gives a GPT several examples to learn from before you ask it to do something for you. ‘N’ stands for the number of examples you give.
  • Generated knowledge prompting involves using information that the GPT has previously generated as a basis for new responses.
  • Maieutic prompting is an approach based on the Socratic method, in which questions are used to encourage deeper thinking and self-discovery.

All logical and reasonable, right? (Want a chuckle? Maieutic is from the Greek and means “acting as a midwife,” which is truly fitting.) But you or your teams might have done these, and a dozen more, without the labels. Knowing they exist is a good starting place, and having a list in front of you can help if you hit a roadblock. Here’s what one of them looks like in practice.
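As an illustration, here is a minimal sketch of N-shot prompting. The task and examples are invented; the point is simply that the prompt carries N worked examples before the new request:

# N-shot prompting: show the model N examples of the task before asking it
# to handle a new input. Everything below is invented for illustration.
examples = [
    ("Proposal kickoff moved to Monday.", "Schedule change"),
    ("Client asked for two more resumes.", "Staffing request"),
]
new_note = "Draft budget narrative needs a second reviewer."

lines = ["Classify each note with a short label.", ""]
for note, label in examples:  # the 'N' in N-shot (here N = 2)
    lines.extend([f"Note: {note}", f"Label: {label}", ""])
lines.extend([f"Note: {new_note}", "Label:"])

prompt = "\n".join(lines)
print(prompt)  # paste into any GPT, or send it via your tool's API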

What makes prompting easy?

It’s conversational in nature, something we humans excel at. We prompt with natural language, not software language. We are at the center of the interaction, not a spreadsheet formula or word processing workflow. We see results quickly, which is generally reinforcing. We can end a session and start another if things aren’t working. You can try your first prompts in seconds, improve in minutes, and become reasonably good in an hour. You might have to learn to prompt for different results in different ways, but none of this is hard. We can get GPTs to prompt themselves.

There is an art to some prompting. “Summarize this article?” No art required. Just intent and knowledge of three words. Asking a GPT to help you and a team think through a knotty problem with no clear answer? That’ll require some artfulness – a little cleverness, thoughtfulness, experimentation, iterations, and patience. But it’s still easier than learning the art of cooking, golf, or piano playing.

What Is Adoption? And What Makes It Hard?

Has this happened to you?

You use generative AI successfully on one small task and immediately wonder if it’ll help you with a second task. You successfully use it on a few tasks and think to yourself, “I could make a process better!” Or, a team experiments, beneficially, with generative AI. Members compare notes and see the possibility of improving whole workflows and processes.

Adoption refers to making full use of an innovation. Organizations first try generative AI in piecemeal ways, which is entirely logical. But use will diffuse across the organization, and it will happen in different ways.

Some uses will remain “local,” where the output of a GPT stays with the person who provided the input. “Summarize this article for me,” or “Give me a first draft of a position description,” are examples. But the output of some uses will become inputs to others – or imply them – and use will spread. Using a GPT to evaluate project plans, technical approaches, or budget narratives might lead to better written content. But it can also lead to revised processes for producing content, revised workflows to better use the improved artifacts, and increased integration with related processes.

What constitutes full use will depend on the output, not the input or prompt. Full use can have big implications beyond prompts and even GPT responses. Many of these might be unforeseen when users start playing with a GPT. But they’ll emerge and this is one of the things that makes adoption hard.

In this way, organizations will see generative AI use lead to change. Generative AI could become a significant change agent, helping people do things differently to produce new value for themselves, internal beneficiaries, and paying customers. Many users will absolutely use generative AI to work more effectively and efficiently, and those uses will be voluminous. But generative AI’s true promise and threat could very well be change. And change is hard.

Unknowns make adoption hard, too, and there are quite a few with generative AI:

  • How it works
  • How to use it effectively
  • How to use it safely
  • What makes it hallucinate and what to do

And you’ve no doubt heard or read the speculation that GenAI, or AI, might take over the world. There’s some uncertainty.

Generative AI adoption will vary by user and that will make adoption hard on teams, business units, business functions, and entire organizations. Different people will see different opportunities and boundaries in generative AI use, different benefits and risks, and even different value and ethics questions.

Finally, full use will require investment by organizations, and that can be hard. Who will be trained, for what? How many? At what cost? On what schedule? To do what and change what in which parts of the organization? What business objectives, outcomes, and measures should be applied? And what about our products and services? Are any candidates for adding generative AI capability customers would like to have? What will that entail?

Part 2 of this two-part series will answer the question, how do we answer the hard adoption question?

When AI Adoption Means Different Things to Different People, How Do You Get Them on the Same Page? – November 27

Work teams usually contain members of different types. Some are risk taking, others risk averse. Some think big picture, others work in the weeds. You have introverts and extroverts, different DiSC profiles, and more.

When people think, feel, and communicate accordingly, getting everyone on the same page can be a challenge. This is especially true of change, and that presents a special challenge for adopting generative AI (GenAI).

How do you help different types try and use generative AI so individuals, teams, and your organization benefit in ways you intend?

If you don’t have time to sort all this out, we’ll get you started. This article highlights two adoption frameworks and shows you how to put them to work, together.

Follow this page for weekly insights, and contact Lou.Kerestesy@DWPAssociates.com for more information.

A GenAI Challenge And Opportunity

Individuals and teams face known challenges adopting innovations. One 2015 National Library of Medicine study examined 20 adoption theories and frameworks, some of which have become well-known over time. One is the innovation adoption curve, popularly known as the technology adoption curve. Another is the technology acceptance model. More on both below.

What’s less well-known is how to support adoption by type. We have less research on this, and much of what’s available is written by typology advocates. With different types on our teams, this presents a generative AI adoption challenge.

When we talk about GenAI at work, we might expect colleagues or team members to hear one conversation. The same conversation. We might not consider that different types hear the conversation in different ways. That they think and feel their way through adoption in ways that make sense to them – and that there isn’t one way for all types.

So, if frameworks tell us the types of questions adopters ask, how do we help types of adopters answer them, together? How do we help individuals and teams have a collective conversation about generative AI without talking past one another?

Let’s look at the two frameworks we mentioned to see if they’ll help.

Framework 1: The Technology Acceptance Model

Developed in the 1980s, the Technology Acceptance Model (TAM) explains how users come to accept and use technology. It identifies two primary acceptance considerations – Perceived Usefulness and Perceived Ease of Use.

Perceived Usefulness refers to how much one believes a technology will enhance performance. Perceived Ease of Use refers to how much effort one believes will be required. TAM proposes that these two perceptions affect one’s intent to use a technology and that, in turn, affects one’s actual use.

One general finding is that Perceived Ease of Use affects Perceived Usefulness. The easier a technology is perceived to be, the more it’s seen as enabling better performance. A four-cell matrix, crossing high and low perceived usefulness with high and low perceived ease of use, is a common way to represent these basic relationships.

Framework 2: The Diffusion of Innovation Theory

Developed in the 1960s, the Diffusion of Innovation Theory explains how innovation spreads through systems to be adopted or rejected. Like TAM, the Diffusion of Innovation Theory has been widely researched and popularly used.

The theory’s original author, Everett Rogers, investigated how innovations diffuse through social systems ranging from businesses to agrarian tribes. There are few adopters of any innovation at first, then more, then many, and over time adoption levels off. Adoption follows an S-shaped curve and Rogers identified five adopter categories along it:

  • Innovators
  • Early Adopters
  • Early Majority
  • Late Majority
  • Laggards

Rogers called this the innovation adoption curve, but today it’s well-known as the technology adoption curve. Rogers also hypothesized that adopters proceeded through five adoption stages:

  1. Knowledge – Knowledge is gained when an individual learns of an innovation’s existence, and gains some understanding of how it functions
  2. Persuasion – Persuasion takes place when an individual forms a favorable or unfavorable attitude toward an innovation
  3. Decision – Decision occurs when an individual engages in activities that lead to a choice to adopt or reject an innovation
  4. Implementation – Implementation takes place when an individual puts an innovation to use
  5. Confirmation – Confirmation occurs when an individual seeks reinforcement of their decision to use an innovation, or reverses that decision if exposed to conflicting information

Rogers, Everett M. Diffusion of Innovations, 5th ed. (p. 23). Free Press. Kindle edition.

Combining Frameworks

If we create a matrix using both frameworks’ key elements, we provide each adopter a way to think about and document their thoughts, feelings, hopes, and dreams with regard to GenAI adoption:

[Adopter framework matrix: Rogers’ five adoption stages (Knowledge, Persuasion, Decision, Implementation, Confirmation) as rows; TAM’s Perceived Usefulness and Perceived Ease of Use as columns]

In it, any user can document the information they look for to judge usefulness or ease of use (knowledge), what would make them form a favorable or unfavorable view of GenAI’s perceived usefulness or ease of use (persuasion), etc. Because each user would fill in the table, their comments would represent their voice as whatever type they happen to be.

Individuals, teams, or entire organizations could identify types if that were considered useful, but doing so isn’t necessary. If each person records their rationale in each cell, it gives colleagues, teams, business units, and organizations a basis for understanding perceived usefulness and perceived ease of use from multiple, diverse perspectives.

That should be enough for differences related to types to benefit conversation, without making a study of types.

Conclusion

We want conversations to “get at” the different ways colleagues and team members think about generative AI, and excavating views by type is especially valuable for two reasons.

First, generative AI is available to your organization as general-purpose apps, domain-specific apps, and GPTs you create. Not only do you need to evaluate GenAI as a capability, you need to evaluate the form or forms in which individuals and teams will adopt and use it. It could be very instructive to see if different types prefer different forms.

Second, as organizational change goes, generative AI might be especially weighty because of its expected impacts to jobs and performance. Some will wonder, “Will my job change? Will I be able to change with it? Will my job go away?” Others might think, “We could use GenAI to improve so much – but we’re dragging our feet!”

To whatever degree views of GenAI’s change impacts depend on types, it would help you make adoption and investment decisions to know not just that certain individuals or teams view GenAI the way they do, but why they do. Especially if other individuals or teams view GenAI in the opposite way. Conversations about GenAI adoption with voices by type will help get you there.

Follow DWPA’s company page for weekly discovery insights. To learn more or launch your own discovery project, contact Lou.Kerestesy@DWPAssociates.com.

FedRAMP Modernization – November 20

The Office of Management and Budget (OMB) released a draft memorandum on October 27, 2023, outlining its recommendations for modernizing the Federal Risk and Authorization Management Program (FedRAMP). The recommendations are significant: they reflect a recognition that the program needs to be updated to keep up with the evolving cloud computing landscape. If implemented, they could make it easier for agencies to adopt cloud services, drive innovation, and improve the overall security of the federal government’s cloud computing environment. In addition, they could reduce the time, energy, and perhaps cost of entry into the Federal government for cloud-based technology companies. A potential win-win for Government and Industry.

The need for speed and innovation 

OMB and the FedRAMP program office recognize that our government must move faster to remain competitive and to stay ahead of our adversaries. Software as a Service (SaaS) remains the fastest-growing segment of government cloud acquisitions, and the US government is adopting SaaS applications at a rapid pace. In 2022, US federal agencies spent a record $6.1 billion on cloud-based and SaaS applications, and this number is expected to continue to grow in the coming years. Factors driving this growth include the need to improve efficiency and reduce costs, the desire to increase agility and innovation, and the need to improve security. At the same time, new technologies such as security tooling, artificial intelligence, machine learning, and back-office automation have exploded. The artificial intelligence (AI) market is a great example: as of November 2023, approximately 18,000 AI companies were based in the United States, a number that has grown rapidly as AI technology has become increasingly powerful and accessible. The same factors, improved efficiency and reduced costs, the desire for agility and innovation, and the need to improve security, are driving AI growth in the commercial and Public Sector markets. SaaS providers also typically have more resources and expertise in security than government agencies, which can help to protect government data from cyberattacks.

The OMB recommendations are intended to accelerate the adoption of new technologies by the government.  

OMB’s key recommendations in the draft memo:

  • Become more responsive to the risk profiles of individual services, as well as evolving risks throughout the cyber environment. This would involve developing a more risk-based approach to FedRAMP authorizations and considering the unique needs of each cloud service. 
  • Increase the quantity of products and services receiving FedRAMP authorizations by bringing agencies together to evaluate the security of cloud offerings and strongly incentivizing reuse of one FedRAMP authorization by multiple agencies. There is also language around “no sponsor” accreditations and the ability for companies to run proofs of concept for up to one year with non-FedRAMP-compliant offerings. This would involve streamlining the authorization process for businesses and making it easier for agencies to adopt cloud services. The PMO would need to determine the minimum number of security controls to be implemented and the criteria under which the two approaches can be used.
  • Streamline the authorization process by automating appropriate portions of security evaluations, consistent with industry best practices. This would involve using technology to reduce the manual burden of FedRAMP assessments and make them more efficient. The adoption of technologies and the refinement of approaches (OSCAL, continuous monitoring) should make agencies more receptive to sponsoring new technologies.
  • Improve sharing of information with the private sector, including emerging threats and best practices. This would help to ensure that both the government and the private sector are working together to protect cloud-based systems from cyber threats. 
  • In addition to these general recommendations, the draft memo also includes specific recommendations for improving FedRAMP’s approach to continuous monitoring, security controls, and risk assessments. 


FedRAMP Accreditation = Success?  

Congratulations: your company has invested the required energy, time, and capital to achieve FedRAMP accreditation. This is not an easy feat, and you now have an enterprise-class offering recognized not only by your potential customers in the Federal Government but also by the commercial markets you serve (regulated industries, retail, etc.). However, this does not guarantee your success in the Federal market. Understanding the market’s nuances is the difference between success and failure.

Failure to develop a business case  

Many companies that attempt to enter the Federal market fail because they didn’t develop a business case. They don’t understand the dynamics of the Federal Government market or the unique missions of its customers well enough to secure sales, let alone gain market share; they fail to understand their competitors and those competitors’ incumbency positions and/or existing contract vehicles; and they fail to adapt their business model and/or to understand and comply with regulatory hurdles. Understanding your total addressable market within the Federal space is critical, and it should be the first thing a business does before entering the market.

By developing a business case, companies can identify and mitigate the risks associated with entering a new market. They can also ensure that they have the resources and capabilities necessary to be successful.  

Deep Water Point and Associates (DWPA) provides a third-party, unbiased market/business justification for companies wanting to enter the Federal marketplace. DWPA provides end-to-end services to accelerate client growth: market research and intelligence, strategy and management consulting, and business development services across the entire growth lifecycle. This is why so many businesses rely on the expertise of Deep Water Point and Associates to accelerate their understanding, entry, and growth within the Federal marketplace.

For more information, go to https://dwpassociates.com/ or contact Tom Ruff at tom.ruff@dwpassociates.com.


What Is a GenAI Discovery Project? (And why do I need to know?) – November 14

Whether you use GenAI within your organization or you want to add it to services, where to start is a challenging question. Internally, you could run a small, first use at minimal cost and risk. You might even absorb the cost and risk of a somewhat larger trial. But costs and risks are different when you take a product or service to market.

To learn that clients don’t want the GenAI-assisted solution you took to market by taking it to market is costly. It wastes time and money, incurs opportunity costs, and can damage relationships and brand. There’s a way to prevent this and, ironically, it’s in the very unknowns that worry us about adopting GenAI.

The smart start is precisely with the things you don’t know. State all your guesses and assumptions. Turn some into hypotheses. Then test those to gather evidence for decision making. That’s exactly what DWPA’s GenAI Discovery Project does, and that’s what we’ll describe below.

What Will We Discover?

The term discovery in GenAI Discovery Project isn’t just descriptive – it’s prescriptive. It refers to a particular method for turning hypotheses about markets and customers into facts. It’s part of a larger method called customer development, created by Steve Blank and Bob Dorf to answer the question, “Why do startups with great ideas fail?”

In The Startup Owner’s Manual, Blank and Dorf argue startups risk great ideas by conducting product or service development without also conducting customer development. By conducting them in tandem, startups greatly increase their odds of going to market with a product or service customers want, and are ready to buy.

You don’t have to be a startup to benefit from customer development.

Today’s govcon market for GenAI-assisted products and services is so new, we’re all startups within it. Filled with more questions and hunches than facts, adding GenAI to existing services is sufficiently startup-like to benefit from customer development. That’s why DWPA is using the method to turn assumptions into facts for investment decision making. That’s what our GenAI Discovery Project is.

DWPA’s Discovery Process

We started in August by brainstorming every assumption we could think of about customers and the market. We generated dozens and grouped them in Osterwalder and Pigneur’s Business Model Canvas. 

Using that layout, we could see assumptions held about value propositions, customer relationships, customer segments, and channels – all outward facing from DWPA to the market. We could also see assumptions we held about inward-facing parts of the business model: Key activities, resources, and partnerships, revenue streams, and cost models.

Next, we turned assumptions into hypotheses. “GenAI will save time” became, “GenAI will get clients to a pink team draft faster by finding and aggregating content.” We literally reworded select assumptions as testable propositions using measures we could discuss or directly observe. To test, we used several capture and proposal tools on a trial basis, and we interviewed clients about their generative AI experiences.

Here’s where it got interesting. Testing didn’t just confirm or disconfirm hypotheses in a thumbs-up or thumbs-down way. Testing revealed new information which suggested new opportunities for support.

Testing the hypothesis that “GenAI will get clients to a pink team draft faster by finding and aggregating content,” for example, became evidence of several things:

  1. Clients will, in fact, save time
  2. We can help them plan time savings in different ways
  3. We can help them use time savings for different purposes
  4. We can help vendors to serve them, their clients, or both

With such evidence we could fashion provisional services and validate them with customers – which is the next step of Blank and Dorf’s customer development process.

We Discover Something Unexpected

Our discovery process led to an ahh-ha! moment we didn’t see coming.

Our assumptions were worded like results or outcomes, as they should be: time saved, money saved, the summary of a section, etc. Tests would demonstrate the possibility, and perhaps the probability, of realizing them. But they demonstrated more.

Tests highlighted requirements for realizing a benefit, and also highlighted steps which would logically follow from a benefit. The view into workflows, benefits and risks, option analysis and decision making – all related to use cases GenAI could support – expanded opportunities for support. Not every opportunity would be a value proposition, but some could be. One unexpected value of hypothesis testing was the broadening of our conversation about value propositions.

Earlier this year, there might not have been a single service in your line of work which included generative AI. There might not have been a single customer wondering how generative AI could benefit them. Today, every customer is probably wondering how GenAI might help, and first offers might be under development by competitors. Customer development is a methodical way you can manage risks in a new and emerging market, and capitalize on its opportunities.

To learn more or launch your own discovery project, contact Lou.Kerestesy@DWPAssociates.com.


November 2023 – Vol. 12; Issue 11

Competing for Admissions: Gaining Access to Top Contract Vehicles

Instability in Vehicle Admissions/Acceptance Processes

When selecting the most suitable schools or vehicles to apply to, understanding the structure, scope, and other risks involved in the procurement can help avoid unnecessary setbacks. The cancellation of Alliant 2’s small business track several years ago serves as a formidable example of why establishing a wide portfolio of potential schools to apply to is important. Many applicants who had centered their focus on this vehicle for IT services found themselves disappointed and exposed to unnecessary uncertainty and a long gap until Polaris began to take shape. Even today, during the pre-award stage of the Polaris solicitation, protests have been filed and uncertainty is increasing. While unclear, the concern for many is that the new Polaris solicitation could be disrupted just like Alliant 2 was. In situations where government agencies cancel admission to a particular program, two key outcomes typically occur: students and resources are redirected to different vehicles with the same offerings and, to address the demand for specific services, the obligations are distributed elsewhere. These continued issues with the government’s ability to consistently provide successful options underscore the importance of prioritizing additional pathways that offer stability and align with an individual or organization’s needs.

Select Your Top Schools and Have Strategic Backups

Many businesses are aware that recent paths from RFP release to award have become increasingly convoluted, with changing timetables, multiple amendments, and significant protests. Even getting to submission isn’t a guarantee that there will be an award – see the recent cancellation of the $5.3B Air Force EC2 vehicle. Students on the EC2 degree path who initially hoped to secure their future engagement (and subsequent revenue stream) selling high-value cyber solutions are left to explore alternative options and try to recoup their significant B&P investment. Vehicles and schedules with rolling admissions, such as Seaport NXG and the GSA Multiple Award Schedule (MAS), can be excellent safety schools in these scenarios. Although the admissions process allows the government the right to cancel the application process at any time, the disappointment of the contractors who put in their valuable resources and time remains evident. Just getting on a contract vehicle, as with applying to different schools, is an expensive proposition. Whether you’re a 2-person 8(a) or a 10,000-person large business, the cost can easily run $100k, if not significantly more. Despite the expense, contractors continue to apply and invest knowing that these contracts can serve as a primary revenue source, making it increasingly important to consider multiple contract types for successful procurement. The cost associated with applying to various contracts or schools presents firms with the opportunity to allocate their resources strategically, focusing on institutions where they have a higher likelihood of acceptance.

Studying your Options

University Case Study: NASA SEWP VI

Even with Ivy League, best-in-class contracts like NASA SEWP, the procurement process can be unpredictable. While SEWP shows promise for continuing to be labeled a best-in-class contract vehicle, in its current draft form it still presents several challenges. Unlike some other contract vehicles, SEWP VI doesn’t impose a limit on the number of students awarded, which may change how some agencies and other schools view it. Additionally, if an individual doesn’t keep their grades up and secure a task order within 12 months of receiving an award, they effectively lose the ability to compete further. Moreover, the introduction of IT service categories has left large IT service providers uncertain about whether their related work will fall within that 12-month period or be pushed to the next semester. With the extensive cost involved for any organization to compete for contracts such as SEWP, the pathway to acquiring work remains uncertain. So even when you ask your college advisor whether applying to a specific school will be a worthwhile investment, they simply cannot say for certain.

Avenues to Admissions: Teaming Arrangements

Contractors considering their options in today’s competitive college landscape for high-value scholarships and contracts can find a variety of pathways to mitigate costs and risks, including CTAs, Joint Ventures, and Mentor-Protege Joint Ventures. For instance, Mentor-Protege JVs give small companies access to the past performance of larger companies, unlocking opportunities for both the large and small firms that neither could access independently. CTAs, on the other hand, allow peer companies to combine their unique capabilities to better serve customers. However, despite the multitude of risk-reduction strategies, these arrangements can also introduce additional complexities and management challenges. In the highly competitive and uncertain market of procurement, understanding the appropriate amount of attention each application needs can make a significant difference in one’s initial and continued success.

10 Ways to Prepare for Upcoming GWACs

  1. Stay informed on all major vehicle schedules, familiarize yourself with RFP drafts, and begin identifying back-up schools/strategies in the event you aren’t accepted into your first choice
  2. Choose a degree path that is in alignment with your current/future capabilities to avoid having to change majors, or worse, getting kicked out of the program
  3. Use the application evaluation criteria provided in the RFP to determine which completed and/or ongoing projects position you the strongest
  4. Begin compiling your supplementary materials NOW to identify documentation gaps and reach out to COs before the busy application season starts
  5. Contact your previous institution/customer for a new, updated copy of your FY23 CPARS transcript
  6. Ask questions and continue asking until the government answers – it may take multiple rounds of Q&A before you actually receive a clear answer
  7. Get creative with your strategy and consider finding a teaming partner if you’re unsure whether your grades alone are high enough to meet the threshold
  8. Attend industry days and watch webinars from agency/sponsored admissions advisors as crucial information is oftentimes provided, as well as additional opportunities for Q&A
  9. Use advisors, peers, and other “upperclassmen.” Consultants with extensive knowledge of the application process, in areas such as benchmarking and teaming arrangements, can help an organization better position itself for success
  10. Ensure your immunization records, including financial systems, facility clearance levels, and organization certifications, are up to date, and note that administrative/agency priorities may require new certifications in the future (e.g., sustainability)

Stephen Abernathy
571-409-4189
stephen.abernathy@dwpassociates.com

Andrew Stringer
401-261-0267
andrew.stringer@dwpassociates.com

Fiona Cronin
850-559-6395
fiona.cronin@dwpassociates.com