When AI Adoption Means Different Things to Different People, How Do You Get Them on the Same Page? – November 27

Work teams usually contain members of different types. Some are risk-taking, others risk-averse. Some think big picture, others work in the weeds. You have introverts and extroverts, different DiSC profiles, and more.

When people think, feel, and communicate according to their type, getting everyone on the same page can be a challenge. That is especially true during change, and it makes adopting generative AI (GenAI) a particular challenge.

How do you help different types try and use generative AI so individuals, teams, and your organization benefit in ways you intend?

If you don’t have time to sort all this out, we’ll get you started. This article highlights two adoption frameworks and shows you how to put them to work, together.

Follow this page for weekly insights, and contact Lou.Kerestesy@DWPAssociates.com for more information.

A GenAI Challenge And Opportunity

Individuals and teams face known challenges adopting innovations. One 2015 study indexed by the National Library of Medicine examined 20 adoption theories and frameworks, some of which have become well known over time. One is the innovation adoption curve, popularly known as the technology adoption curve. Another is the technology acceptance model. More on both below.

What’s less well-known is how to support adoption by type. We have less research on this, and much of what’s available is written by typology advocates. With different types on our teams, this presents a generative AI adoption challenge.

When we talk about GenAI at work, we might expect colleagues or team members to hear one conversation. The same conversation. We might not consider that different types hear the conversation in different ways. That they think and feel their way through adoption in ways that make sense to them – and that there isn’t one way for all types.

So, if frameworks tell us the types of questions adopters ask, how do we help types of adopters answer them, together? How do we help individuals and teams have a collective conversation about generative AI without talking past one another?

Let’s look at the two frameworks we mentioned to see if they’ll help.

Framework 1: The Technology Acceptance Model

Developed in the 1980s, the Technology Acceptance Model (TAM) explains how users come to accept and use technology. It identifies two primary acceptance considerations – Perceived Usefulness and Perceived Ease of Use.

Perceived Usefulness refers to how much one believes a technology will enhance performance. Perceived Ease of Use refers to how much effort one believes will be required. TAM proposes that these two perceptions affect one’s intent to use a technology and that, in turn, affects one’s actual use.

One general finding is that Perceived Ease of Use affects Perceived Usefulness: the easier a technology is perceived to be, the more it’s seen as enabling better performance. A four-cell matrix is a common way to represent these basic relationships.
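
To make the matrix concrete, here is a minimal sketch in Python that places a user’s two ratings into one of the four cells. The 1–7 rating scale, the midpoint, and the cell labels are our illustrative assumptions, not part of the original model.

```python
# Minimal sketch of TAM's four-cell matrix.
# The 1-7 rating scale, midpoint, and cell labels are illustrative
# assumptions, not part of the original model.

def tam_cell(perceived_usefulness: int, perceived_ease_of_use: int,
             midpoint: int = 4) -> str:
    """Place a user's two TAM ratings (1-7 scale) into one of four cells."""
    useful = perceived_usefulness >= midpoint
    easy = perceived_ease_of_use >= midpoint
    if useful and easy:
        return "High usefulness / High ease: likely adoption"
    if useful and not easy:
        return "High usefulness / Low ease: adoption with support"
    if not useful and easy:
        return "Low usefulness / High ease: casual or no use"
    return "Low usefulness / Low ease: likely rejection"

print(tam_cell(perceived_usefulness=6, perceived_ease_of_use=3))
# -> High usefulness / Low ease: adoption with support
```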

Framework 2: The Diffusion of Innovation Theory

Developed in the 1960s, the Diffusion of Innovation Theory explains how innovation spreads through systems to be adopted or rejected. Like TAM, the Diffusion of Innovation Theory has been widely researched and popularly used.

The theory’s original author, Everett Rogers, investigated how innovations diffuse through social systems ranging from businesses to agrarian tribes. There are few adopters of any innovation at first, then more, then many, and over time adoption levels off. Adoption follows an S-shaped curve and Rogers identified five adopter categories along it:

  • Innovator
  • Early Adopter
  • Early Majority
  • Late Majority
  • Laggards

Rogers called this the innovation adoption curve, but today it’s better known as the technology adoption curve. Rogers also hypothesized that adopters proceed through five adoption stages:

  1. Knowledge – Knowledge is gained when an individual learns of an innovation’s existence, and gains some understanding of how it functions
  2. Persuasion – Persuasion takes place when an individual forms a favorable or unfavorable attitude toward an innovation
  3. Decision – Decision occurs when an individual engages in activities that lead to a choice to adopt or reject an innovation
  4. Implementation – Implementation takes place when an individual puts an innovation to use
  5. Confirmation – Confirmation occurs when an individual seeks reinforcement of their decision to use an innovation, or reverses that decision if exposed to conflicting information

Rogers, Everett M. Diffusion of Innovations, 5th Edition, Free Press, p. 23. Kindle Edition.
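
As a quick illustration of the adopter categories along the S-curve, here is a minimal sketch in Python. The cumulative cut-offs (2.5%, 16%, 50%, 84%) follow Rogers’ standard segmentation; the function name and sample values are ours.

```python
# Minimal sketch of Rogers' adopter categories, assuming his standard
# cumulative cut-offs (2.5%, 16%, 50%, 84%, 100%); names are illustrative.

def adopter_category(cumulative_adoption: float) -> str:
    """Classify an adopter by the share of the population that adopted before them."""
    if cumulative_adoption < 0.025:
        return "Innovator"
    if cumulative_adoption < 0.16:   # 2.5% + 13.5%
        return "Early Adopter"
    if cumulative_adoption < 0.50:   # + 34%
        return "Early Majority"
    if cumulative_adoption < 0.84:   # + 34%
        return "Late Majority"
    return "Laggard"                 # final 16%

for share in (0.01, 0.10, 0.40, 0.70, 0.95):
    print(f"{share:.0%} adopted so far -> {adopter_category(share)}")
```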

Combining Frameworks

If we create a matrix using both frameworks’ key elements, we give each adopter a way to think about and document their thoughts, feelings, hopes, and dreams with regard to GenAI adoption:

[Figure: Adopter Framework matrix pairing Rogers’ five adoption stages with TAM’s Perceived Usefulness and Perceived Ease of Use]

In it, any user can document the information they look for to judge usefulness or ease of use (knowledge), what would make them form a favorable or unfavorable view of GenAI’s perceived usefulness or ease of use (persuasion), etc. Because each user would fill in the table, their comments would represent their voice as whatever type they happen to be.
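
One lightweight way to capture those entries is to treat the matrix as a simple data structure each person fills in. The sketch below, in Python, uses illustrative field names and a hypothetical sample entry; it is not a prescribed tool.

```python
# Minimal sketch of the combined adopter framework: Rogers' five stages
# crossed with TAM's two perceptions. The sample entries are hypothetical.

STAGES = ["Knowledge", "Persuasion", "Decision", "Implementation", "Confirmation"]
PERCEPTIONS = ["Perceived Usefulness", "Perceived Ease of Use"]

def blank_framework() -> dict:
    """One empty matrix per person: a cell for every stage/perception pair."""
    return {stage: {perception: "" for perception in PERCEPTIONS} for stage in STAGES}

# Each team member fills in their own copy in their own words.
alex = blank_framework()
alex["Knowledge"]["Perceived Usefulness"] = (
    "I want to see a draft GenAI produced from our past performance library."
)
alex["Persuasion"]["Perceived Ease of Use"] = (
    "If it takes more than a one-hour walkthrough to learn, I'll lose interest."
)

for stage, cells in alex.items():
    for perception, note in cells.items():
        if note:
            print(f"{stage} x {perception}: {note}")
```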

Individuals, teams, or entire organizations could identify types if that were considered useful, but doing so isn’t necessary. If each person records their rationale in each cell, colleagues, teams, business units, and organizations gain a basis for understanding perceived usefulness and perceived ease of use from multiple, diverse perspectives.

That should be enough for differences related to types to benefit conversation, without making a study of types.

Conclusion

We want conversations to “get at” the different ways colleagues and team members think about generative AI, and excavating views by type is especially valuable for two reasons.

First, generative AI is available to your organization as general-purpose apps, domain-specific apps, and GPTs you create. Not only do you need to evaluate GenAI as a capability, you need to evaluate the form or forms in which individuals and teams will adopt and use it. It could be very instructive to see if different types prefer different forms.

Second, as organizational change goes, generative AI might be especially weighty because of its expected impacts on jobs and performance. Some will wonder, “Will my job change? Will I be able to change with it? Will my job go away?” Others might think, “We could use GenAI to improve so much – but we’re dragging our feet!”

To whatever degree views of GenAI’s change impacts depend on type, it would help your adoption and investment decisions to know not just that certain individuals or teams view GenAI the way they do, but why they do, especially if other individuals or teams view it in the opposite way. Conversations about GenAI adoption that surface voices by type will help get you there.

Follow DWPA’s company page for weekly discovery insights. To learn more or launch your own discovery project, contact Lou.Kerestesy@DWPAssociates.com.

FedRAMP Modernization – November 20

The Office of Management and Budget (OMB) released a draft memorandum on October 27, 2023, outlining its recommendations for modernizing the Federal Risk and Authorization Management Program (FedRAMP). The recommendations are significant: they reflect a recognition that the program needs to be updated to keep pace with the evolving cloud computing landscape. If implemented, they could make it easier for agencies to adopt cloud services, drive innovation, and improve the overall security of the federal government’s cloud computing environment. In addition, the recommendations could reduce the time, energy, and perhaps the cost of entry into the Federal government market for cloud-based technology companies, a potential win-win for Government and Industry.

The need for speed and innovation 

OMB and the FedRAMP program office recognize that the government must move faster to remain competitive and to stay ahead of our adversaries. Software as a Service (SaaS) remains the fastest-growing segment of government cloud acquisitions, and the US government is adopting SaaS applications at a rapid pace. In 2022, US federal agencies spent a record $6.1 billion on cloud-based and SaaS applications, and this number is expected to continue to grow in the coming years. Factors driving this growth include the need to improve efficiency and reduce costs, the desire to increase agility and innovation, and the need to improve security.

At the same time, new technologies and innovations such as security tooling, artificial intelligence (AI), machine learning, and back-office automation have exploded. The AI market is a good example: as of November 2023, approximately 18,000 AI companies were based in the United States, and that number has grown rapidly in recent years as AI technology has become increasingly powerful and accessible. The factors driving AI’s growth in the commercial and public sector markets mirror those above: improved efficiency and reduced costs, the desire for greater agility and innovation, and the need to improve security. SaaS providers also typically have more resources and security expertise than government agencies, which can help protect government data from cyberattacks.

The OMB recommendations are intended to accelerate the adoption of new technologies by the government.  

OMB’s key recommendations in the draft memo:

  • Become more responsive to the risk profiles of individual services, as well as evolving risks throughout the cyber environment. This would involve developing a more risk-based approach to FedRAMP authorizations and considering the unique needs of each cloud service. 
  • Increase the quantity of products and services receiving FedRAMP authorizations by bringing agencies together to evaluate the security of cloud offerings and strongly incentivizing reuse of one FedRAMP authorization by multiple agencies. There is also language around “No Sponsor” accreditations and the ability for companies to run proofs of concept for up to one year with non-FedRAMP-compliant offerings. This would involve streamlining the authorization process for businesses and making it easier for agencies to adopt cloud services. The PMO would need to determine the minimum set of security controls to be implemented and the criteria under which the two approaches could be used.
  • Streamline the authorization process by automating appropriate portions of security evaluations, consistent with industry best practices. This would involve using technology to reduce the manual burden of FedRAMP assessments and make them more efficient. The adoption of these technologies and the refinement of approaches (OSCAL, continuous monitoring) should make agencies more receptive to sponsoring new technologies.
  • Improve sharing of information with the private sector, including emerging threats and best practices. This would help to ensure that both the government and the private sector are working together to protect cloud-based systems from cyber threats. 
  • In addition to these general recommendations, the draft memo also includes specific recommendations for improving FedRAMP’s approach to continuous monitoring, security controls, and risk assessments. 


FedRAMP Accreditation = Success?  

Congratulations: your company has invested the energy, time, and capital required to achieve FedRAMP accreditation. This is no easy feat, and you now have an enterprise-class offering that will be recognized not only by your potential customers in the Federal Government but also by the commercial markets you serve (regulated markets, retail, etc.). However, accreditation does not guarantee your success in the Federal market. Understanding the nuances of the market is the difference between success and failure in this marketplace.

Failure to develop a business case  

Many companies that attempt to enter the Federal market fail because they don’t develop a business case. They don’t understand the dynamics of the Federal Government market or the unique missions of its customers well enough to secure sales, let alone gain market share; they fail to understand their competitors, those competitors’ incumbency positions, and existing contract vehicles; and they fail to adapt their business models or to understand and comply with regulatory hurdles. Understanding your total addressable market within the Federal space is critical; it should be the first thing a business does before entering the market.

By developing a business case, companies can identify and mitigate the risks associated with entering a new market. They can also ensure that they have the resources and capabilities necessary to be successful.  

Deep Water Point and Associates (DWPA) provides a third-party, unbiased market and business justification for companies wanting to enter the Federal marketplace. DWPA provides end-to-end services to accelerate client growth in market research and intelligence, strategy and management consulting, and business development across the entire growth lifecycle. This is why so many businesses rely on DWPA’s expertise to accelerate their understanding of, entry into, and growth within the Federal marketplace.

For more information, go to https://dwpassociates.com/ or contact Tom Ruff tom.ruff@dwpassociates.com


What Is a GenAI Discovery Project? (And why do I need to know?) – November 14

Whether you use GenAI within your organization or want to add it to your services, where to start is a challenging question. Internally, you could run a small first use at minimal cost and risk. You might even absorb the cost and risk of a somewhat larger trial. But costs and risks are different when you take a product or service to market.

Learning that clients don’t want your GenAI-assisted solution only after you’ve taken it to market is costly. It wastes time and money, incurs opportunity costs, and can damage relationships and your brand. There’s a way to prevent this and, ironically, it lies in the very unknowns that worry us about adopting GenAI.

The smart start is precisely with the things you don’t know. State all your guesses and assumptions. Turn some into hypotheses. Then test those to gather evidence for decision making. That’s exactly what DWPA’s GenAI Discovery Project does, and that’s what we’ll describe below.

What Will We Discover?

The term discovery in GenAI Discovery Project isn’t just descriptive – it’s prescriptive. It refers to a particular method for turning hypotheses about markets and customers into facts. It’s part of a larger method called customer development, created by Steve Blank and Bob Dorf to answer the question, “Why do startups with great ideas fail?”

In The Startup Owner’s Manual, Blank and Dorf argue startups risk great ideas by conducting product or service development without also conducting customer development. By conducting them in tandem, startups greatly increase their odds of going to market with a product or service customers want, and are ready to buy.

You don’t have to be a startup to benefit from customer development.

Today’s govcon market for GenAI-assisted products and services is so new that we’re all startups within it. With more questions and hunches than facts, adding GenAI to existing services is sufficiently startup-like to benefit from customer development. That’s why DWPA is using the method to turn assumptions into facts for investment decision making. That’s what our GenAI Discovery Project is.

DWPA’s Discovery Process

We started in August by brainstorming every assumption we could think of about customers and the market. We generated dozens and grouped them in Osterwalder and Pigneur’s Business Model Canvas. 

Using that layout, we could see assumptions held about value propositions, customer relationships, customer segments, and channels – all outward facing from DWPA to the market. We could also see assumptions we held about inward-facing parts of the business model: Key activities, resources, and partnerships, revenue streams, and cost models.

Next, we turned assumptions into hypotheses. “GenAI will save time” became, “GenAI will get clients to a pink team draft faster by finding and aggregating content.” We literally reworded select assumptions as testable propositions using measures we could discuss or directly observe. To test, we used several capture and proposal tools on a trial basis, and we interviewed clients about their generative AI experiences.
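
To show what rewording an assumption as a testable proposition might look like in practice, here is a minimal sketch of a hypothesis record in Python. The fields, baseline, threshold, and observed result are hypothetical illustrations, not our actual data.

```python
# Minimal sketch of a testable hypothesis record. All figures and field
# names are hypothetical illustrations, not actual project data.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    assumption: str        # the original, vaguely worded belief
    proposition: str       # the reworded, testable version
    measure: str           # what we observe or discuss to judge it
    baseline: float        # current value of the measure (hypothetical)
    threshold: float       # value that would count as support (hypothetical)
    observed: float = None

    def supported(self) -> bool:
        """Lower is better here (days), so support means at or below threshold."""
        return self.observed is not None and self.observed <= self.threshold

h = Hypothesis(
    assumption="GenAI will save time",
    proposition="GenAI will get clients to a pink team draft faster "
                "by finding and aggregating content",
    measure="days from kickoff to pink team draft",
    baseline=20.0,
    threshold=15.0,
)
h.observed = 13.0  # hypothetical trial result
print("Supported" if h.supported() else "Not supported")
```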

Here’s where it got interesting. Testing didn’t just confirm or disconfirm hypotheses in a thumbs-up or thumbs-down way. Testing revealed new information which suggested new opportunities for support.

Testing the hypothesis that “GenAI will get clients to a pink team draft faster by finding and aggregating content,” for example, became evidence of several things:

  1. Clients will, in fact, save time
  2. We can help them plan time savings in different ways
  3. We can help them use time savings for different purposes
  4. We can help vendors to serve them, their clients, or both

With such evidence we could fashion provisional services and validate them with customers – which is the next step of Blank and Dorf’s customer development process.

We Discover Something Unexpected

Our discovery process led to an aha! moment we didn’t see coming.

The wording of assumptions read like results or outcomes, as they should: Time saved, money saved, the summary of a section, etc. Tests would demonstrate the possibility, and perhaps the probability, of realizing them. But they demonstrated more.

Tests highlighted requirements for realizing a benefit, and also highlighted steps which would logically follow from a benefit. The view into workflows, benefits and risks, option analysis and decision making – all related to use cases GenAI could support – expanded opportunities for support. Not every opportunity would be a value proposition, but some could be. One unexpected value of hypothesis testing was the broadening of our conversation about value propositions.

Earlier this year, there might not have been a single service in your line of work which included generative AI. There might not have been a single customer wondering how generative AI could benefit them. Today, every customer is probably wondering how GenAI might help, and first offers might be under development by competitors. Customer development is a methodical way you can manage risks in a new and emerging market, and capitalize on its opportunities.

To learn more or launch your own discovery project, contact Lou.Kerestesy@DWPAssociates.com.
