The Small Business Innovation Research (SBIR) Program: Helping Technology Firms Make a Big Impact. DWPA: Here to Amplify Your Success – December 6

In the quest to connect the innovative products and services created by small technology start-ups with federal agencies facing mission challenges, the Small Business Innovation Research (SBIR) program bridges the gap. This impactful initiative fosters technological innovation and stimulates economic growth by allowing entrepreneurs, start-ups, and small businesses to submit proposals for research and development (R&D) projects. If selected, these organizations receive funding to further develop their offerings, facilitating the leap from early-stage concepts to commercial viability while addressing the specific needs of federal agencies.

How Does the Program Work?

Each year, SBIR awards over $4 billion in grants, ranging from $50,000 to $1.5 million, in areas aligned with U.S. national priorities, including autonomous systems, artificial intelligence (AI), machine learning (ML), cloud computing, cybersecurity, biotechnology, and space technology. Selected companies undergo a three-phase process: proof of concept, technology development, and commercialization. At the commercialization stage, non-SBIR resources – such as private investors, government contracts, or sales revenue – take over funding. Various agencies across the Department of Defense (DoD) and the federal civilian sector, including the U.S. Army, Navy, Air Force, Space Force, Small Business Administration (SBA), Department of Homeland Security (DHS), and many more, distribute SBIR funding annually.

Benefits of the SBIR Program for Small Tech Businesses and Investors

Participating in the SBIR program provides small businesses with early-stage, high-risk funding that may otherwise be inaccessible. The grants and contracts do not require equity stakes for issuing agencies, allowing businesses to retain full ownership and control. SBIR funding enhances the competitiveness of participating small businesses, enabling them to develop cutting-edge technologies and solutions that keep pace with an ever-evolving market.

For venture capitalists (VCs) seeking validation for investments in promising portfolio companies, SBIR awardees often develop disruptive technologies that signal prime investment opportunities. SBIR funding effectively de-risks early-stage ventures, allowing VCs to leverage government-backed validation to invest with confidence. Additionally, the program ensures that VCs are investing in areas critical to national security and economic growth, since SBIR-funded projects align with federal R&D mission needs.

Where Does Deep Water Point & Associates (DWPA) Fit In?

DWPA combines government expertise and industry insights, providing immense value to small businesses and VCs looking to thrive and grow in the complex federal market. With a bench of over 450 government experts averaging 32 years of experience, DWPA is the ideal partner to help you mitigate risks, navigate complexities, and increase your chances of success in the U.S. federal marketplace.

As part of your SBIR journey, DWPA leverages AI to offer automated opportunity alerts tailored to your specific interests and strengths (or those of your portfolio companies). Once the right opportunity is identified, we assist in crafting compelling proposals that improve your probability of securing SBIR funding. We also provide ongoing SBIR training to educate VCs and portfolio companies on best practices, along with data-driven insights into the total addressable market (TAM) within the federal government, including government spending, competition, and routes to market. Throughout your journey, we provide transaction advisory services and build superior merger and acquisition (M&A) situational awareness to support your continued growth. The DWPA SBIR program also provides potential access to third-party Cloud Service Provider (CSP) partner funding to accelerate the small tech’s market entry.

Ready to Get Started?

If you’re eager to bring your innovative idea to market or explore promising new investment opportunities, we’re here to guide you! Click here to learn more about the SBIR program and DWPA’s SBIR service offerings today.

“To Shape or Not to Shape?” Can a Company be too Selective in its Opportunity Pursuit Decisions? – June 26

“We don’t bid opportunities if we haven’t shaped them. So, we don’t need to evaluate everything released on our vehicles.” Engaging with an agency’s potential users of your solution and its buying decision makers is far better than bidding an opportunity you know nothing about. You want to learn about their primary concerns, their culture, the context for the solution, and their thoughts about alternatives and competitors — information that won’t be in the solicitation. And ideally you want to become a trusted advisor to them by studying their problems and engaging in a consultative development process in which you and the customer together define the key criteria for success, assess the alternatives to address them, and build a consensus among the decision makers. This is the best practice. When I’ve done it, I won those opportunities more often than not.

Some companies follow this rule exclusively and succeed. On January 2 each year, they set down a plan of all the opportunities they are going to bid in the year, and they stick to it. No “pop-ups” allowed. It works because they bid multiple times the number of opportunities they need to win to achieve their growth target. So, if they lose some, and some get delayed, they can still hit their number. It is expensive, but it works.

Even for companies that can afford to be that selective, I would argue every company should be evaluating every opportunity announced on their vehicles. First, because agencies issue market surveys and RFIs that indicate possible future procurements and that represent opportunities to engage with an agency to have those consultative dialogs. 

Second, especially this year, because agencies may move quickly to issue solicitations that commit new funding to new starts. They may not have the time for consultative discussions, and your business developer, however great he or she is, probably doesn’t have perfect knowledge of the agencies’ procurement plans.

Third: if you can have an automated 24x7x365 co-worker with unlimited capacity to download and read everything and tell you about the announcement of any opportunity that fits you, or that fits your competitors, why wouldn’t you want to know? That’s what NorthStar does for subscribers. At the least, you expand or maintain your knowledge about what your customers are doing. Better than that, you can detect opportunities to exceed your plan.

Besides all that, it is getting less expensive to bid pop-ups and easier to win them.  Imagine this: you have a NorthStar subscription. It is ingesting and scoring opportunities from GSA MAS and your other vehicles hourly. At 10 AM you get an alert from NorthStar. You see a solicitation that has a high Druthers Score™ fit to your preferences. You open up NorthStar and can quickly scan a summary of the opportunity, see why it has that score, see that responses are due in two weeks, and that there are no “showstoppers” that keep you from being able to prime. You see the scope is for services at which your firm excels and that the task is for an agency where your protégé firm has a strong track record. You call them up and they are available to team. You need to close your customer intimacy gap, so you contact your representative at Deep Water Point & Associates and she says they have an agency expert who worked in that office until last year and knows the opportunity, the relevant operations, and the decisionmakers. You schedule a meeting with that expert to get briefed on the ground truth of the opportunity. You push the solicitation materials into your generative AI proposal writer. It produces a compliant outline and a first draft response. You pass parts of this out to your rapid response proposal team and over to your protégé to further develop.  Five days later, your team has completed a proposal that demonstrates knowledge of the agency’s context for the solution, differentiates on the most important factors, is written in the agency’s language, presses all the right hot buttons, and ghosts the competition’s weaknesses. You are ready to submit way before the deadline and for a small fraction of the cost of the typical capture and proposal. 

You can’t run a business depending on pop-ups. Neither should a business be certain it knows the best opportunities that will be released in the year ahead. Maybe a flexible model that both shapes opportunities and responds to pop-ups is the fastest growth path.

If you’d like to work smarter, not harder, to identify relevant opportunities, reduce costs, increase profitability, and win more contracts, schedule a demo today. We’d love to show you how GWAC NorthStar can help you crush agency deadlines and secure more business in federal government contracting.

My Kingdom for a Horse! Finding the Right GovCon Opportunities Requires the Right Tools – May 14

At the end of Shakespeare’s Richard III, the king cries out, “A horse, a horse, my kingdom for a horse!” He realizes he is in a dire situation, surrounded by his enemies, and his horse is dead. In desperation, he is willing to forfeit his entire kingdom if only someone would give him a horse. Horses were a key component of winning battles, and King Richard knew that without his horse, he would lose. So, what does this have to do with government contracting? Having the right tools improves the odds of a good outcome, whether you are battling adversaries or trying to win more government contracts.

Winning a governmentwide acquisition contract (GWAC) is a great day for any company. It means you’ve made the cut and joined a club with the exclusive right to bid on volumes of services or products for an agency or, in the case of a GWAC, for every agency in the federal government. But if the company is not equipped to swiftly process all the announcements released on the vehicle, its return on winning the vehicle is diminished right from the beginning. Furthermore, if the company is trying to process the opportunities with people power alone, it is spending far too much of its business development budget doing so.

The idea behind these contract vehicles is to consolidate agency spending among the top providers. In awarding a GWAC, the contracting agency prequalifies contractors’ capabilities, bona fides, and prices. This presumably enables buying agencies to get orders out faster because in every subsequent competition for a task order, the source selection team only needs to evaluate the technical approach, solution, and extended price.

However, the flip side of that consolidation of spending is that a smaller set of companies receives a large flow of opportunity announcements. If a company is not adequately staffed to evaluate the announcements and find opportunities that fit its best capabilities and interests, it may spend its time and resources studying or bidding on incompatible opportunities while the best-fit ones whiz by. It also means the soliciting agency may not get all the competition it expected or receive bids from the best-suited providers.

Let’s quantify the problem. The table below provides a real example of the target-rich environment of an aerospace and technical services provider. It shows several GWACs they hold and the categories/special item numbers they have on the Multiple Award Schedule (MAS). For each vehicle, we can see the average annual count of awards over the last three years and estimate the number of announcements by the government on the way to making an award. Announcements are requests for information, draft requests for proposal (RFPs), questions and answers about draft RFPs, final RFPs, more questions and answers, and amendments related to RFPs. We estimate at least four announcements per opportunity prior to an award.

Contract Vehicle | Special Item Numbers | Average Annual Count of Awards | Estimated Annual Count of Announcements
MAS | 541330ENG, 541380, 541420, 541611, 541614, 541614SVC, 541715, 561210FS, 611430, 611512, OLM | 6,703 | 26,812
HCaTS Pool 2 | N/A | 20 | 80
OASIS SB POOL 4 | N/A | 6 | 24
OASIS SB POOL 5B | N/A | 6 | 24
OASIS SB POOL 6 | N/A | 15 | 60
OASIS Unrestricted POOL 1 | N/A | 113 | 452
OASIS Unrestricted POOL 3 | N/A | 12 | 48
OASIS Unrestricted POOL 4 | N/A | 10 | 40
This company has to process 27,540 announcements annually, many of them simultaneously. Now let’s look at exactly what it takes to process them. See how it compares to the way your company handles this problem.

  1. First, assume someone must log in to eBuy three times a day to check for anything released to the firm, note the metadata about the opportunity in the eBuy portal, and download all attachments. Estimating this at 12 minutes each time they log in, 350 days of the year, this step consumes about 210 hours annually.
  2. The next step is to perform quick keyword searches, in Windows File Explorer or in Mac Finder, at the file level on materials downloaded from eBuy. This search is to determine which items should be opened by detecting missions, scopes, technologies, or use cases of interest to the firm. Performing this three times per day on downloaded materials, 15 minutes each time, 350 days per year requires another 263 hours annually. This produces a “filtered list” of opportunities that might fit. By the way, we all know that keywords about scope, NAICS, or desirable technologies are not the only factors to consider, but this is the limit of the typical company’s technology available to filter opportunities.
  3. Then, an employee must perform a preliminary screening of items on the filtered list. If we assume 30% of the annual count of announcements on each vehicle has keyword matches and that this step requires 10 minutes per item to navigate to the objectives and scope sections and quickly skim to assess the opportunity, then in the above example 1,377 hours are required for preliminary reading annually. Ten minutes is needed even if someone dumps the solicitation attachments into ChatGPT to get a summary they can quickly digest.
  4. The next step is to perform a deeper reading of the opportunity files for those that pass the preliminary screening. If we assume that 20% of the opportunities passed preliminary screening (i.e., 20% of the 30%) and that performing a deeper reading of those takes two hours per opportunity, this adds 3,305 hours annually.
  5. The last step is to route the opportunities that fit the firm’s interests to the company personnel who need to decide whether or not to commit resources. If we assume that half of the 20% that were read deeply fit well, and that 30 minutes is required to disposition each opportunity (50% of the 20% of the 30%), then we need another 413 hours annually.

Totaling this up, roughly 5,570 hours are required to perform the rudimentary process as we have described it. That’s equivalent to about 2.8 full-time personnel, which would cost around $300,000 in today’s market. The bottom line: for many companies, $300,000 is roughly the cost of preparing a proposal submission, and it is too much to spend just on finding opportunities.
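If you want to check the arithmetic or adapt it to your own vehicle mix, here is a minimal Python sketch that reproduces the estimate above. The award counts come from the table, and every rate and duration (12-minute eBuy checks, 15-minute keyword sweeps, the 30%/20%/50% pass rates, four announcements per award, roughly 1,960 productive hours per full-time employee) is the same illustrative assumption stated in the steps, not a measured benchmark.

```python
# Rough estimate of the annual labor to manually triage announcements
# across a portfolio of contract vehicles. All rates and durations are
# the illustrative assumptions from the steps above.

ANNUAL_AWARDS = {
    "MAS": 6703,
    "HCaTS Pool 2": 20,
    "OASIS SB Pool 4": 6,
    "OASIS SB Pool 5B": 6,
    "OASIS SB Pool 6": 15,
    "OASIS Unrestricted Pool 1": 113,
    "OASIS Unrestricted Pool 3": 12,
    "OASIS Unrestricted Pool 4": 10,
}

ANNOUNCEMENTS_PER_AWARD = 4   # RFIs, draft RFPs, Q&As, final RFPs, amendments
announcements = sum(ANNUAL_AWARDS.values()) * ANNOUNCEMENTS_PER_AWARD

# Step 1: portal checks -- 3 logins/day, 12 minutes each, 350 days/year
portal_hours = 3 * 12 / 60 * 350

# Step 2: file-level keyword sweeps -- 3/day, 15 minutes each, 350 days/year
keyword_hours = 3 * 15 / 60 * 350

# Step 3: preliminary skim of keyword matches (assume 30% match, 10 min each)
matches = announcements * 0.30
skim_hours = matches * 10 / 60

# Step 4: deeper read of items that pass the skim (assume 20% pass, 2 hours each)
deep_reads = matches * 0.20
deep_hours = deep_reads * 2

# Step 5: route/disposition the good fits (assume 50% fit, 30 minutes each)
fits = deep_reads * 0.50
routing_hours = fits * 0.5

total_hours = portal_hours + keyword_hours + skim_hours + deep_hours + routing_hours
print(f"{announcements:,.0f} announcements -> {total_hours:,.0f} hours "
      f"(~{total_hours / 1960:.1f} FTEs)")
```

Running this prints roughly 27,540 announcements and about 5,570 hours, or approximately 2.8 full-time equivalents, matching the figures above.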

In the past, this onerous process has been how companies have handled the drudgery of finding seemingly relevant government contracting opportunities to bid on. But now, there’s a revolutionary opportunity evaluation solution available to organizations of all sizes that can greatly reduce the amount of time and money spent on identifying the right opportunities to consider.

For a much lower cost, all five of the above steps can be done automatically, faster, and better than with people power alone. GWAC NorthStar™ applies business developers’ logic and criteria to determine how well an opportunity fits each client. We codified decision criteria that you can directly configure in the system. Then, our solution automatically ingests, reads, and discovers all the attributes necessary to your decision, and scores each opportunity accordingly.

This tool enables you to, at a glance, see the most relevant items deserving your attention, saving you from wasting precious time and resources on the others.
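As an illustration only, and not a description of GWAC NorthStar’s internal logic, the sketch below shows what configurable, criteria-based fit scoring can look like: an opportunity’s extracted attributes are compared against weighted preferences and rolled up into a single score. Every field name, weight, and threshold here is hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical criteria-based opportunity scoring. Field names, weights,
# and thresholds are invented for illustration.

@dataclass
class Criteria:
    preferred_naics: set[str]
    preferred_agencies: set[str]
    keywords: set[str]
    min_value: float = 0.0
    weights: dict = field(default_factory=lambda: {
        "naics": 0.35, "agency": 0.25, "keywords": 0.30, "value": 0.10
    })

def score_opportunity(opp: dict, c: Criteria) -> float:
    """Return a 0-100 fit score for a single opportunity record."""
    naics_fit = 1.0 if opp.get("naics") in c.preferred_naics else 0.0
    agency_fit = 1.0 if opp.get("agency") in c.preferred_agencies else 0.0
    text = opp.get("scope_text", "").lower()
    kw_hits = sum(1 for kw in c.keywords if kw.lower() in text)
    keyword_fit = min(kw_hits / max(len(c.keywords), 1), 1.0)
    value_fit = 1.0 if opp.get("estimated_value", 0) >= c.min_value else 0.5

    score = (c.weights["naics"] * naics_fit
             + c.weights["agency"] * agency_fit
             + c.weights["keywords"] * keyword_fit
             + c.weights["value"] * value_fit)
    return round(100 * score, 1)

criteria = Criteria(
    preferred_naics={"541330", "541715"},
    preferred_agencies={"USAF", "DHS"},
    keywords={"digital engineering", "model-based", "cybersecurity"},
    min_value=1_000_000,
)
opp = {"naics": "541715", "agency": "DHS", "estimated_value": 4_500_000,
       "scope_text": "Model-based systems engineering and cybersecurity support"}
print(score_opportunity(opp, criteria))   # 90.0
```

The point of the sketch is the design choice: the criteria live in configuration a business developer can edit, while the scoring itself runs automatically over every new announcement.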

Every service Deep Water Point & Associates (DWPA) provides draws on a unique and powerful combination of agency insights applied by practitioner experts, enabling clients to build compelling advantages. A subscription to GWAC NorthStar can also lead to better win probability. DWPA has over 400 former federal agency executives with deep knowledge of programs, operations, culture, competitors, and decision makers. GWAC NorthStar subscribers can engage these experts to prepare for sales calls and for knowledge transfer on opportunity context, key personnel hot buttons, and more.

Find better opportunities faster and win more. Interested in learning more? Schedule a demo today and find out how GWAC NorthStar can revolutionize the way your company grows federal government sales.

Navigating the Federal AI Landscape—with a Guide – January 29

The Federal AI landscape is enormous, and the terrain varies widely. Whether you’re entering for the first time or hiking in a new area, DWPA’s AI Innovation Cell and AI Landscape report are the map and compass you need to avoid missteps and wasted time.

If you were dropped into unfamiliar terrain with a map and compass, you could navigate to any destination. But if you had to navigate without a map and compass, you’d make a lot of guesses. 

Is that my destination I see, or something else in the landscape? How far can I follow this river? Does the ravine get too steep to walk? Is that peak the actual summit or a false summit? 

With each guess leading to new discovery, you might stack guesses on guesses as you “correct.” 

Add some weather, makeshift shelters, finding food and water, and the occasional predator, and you could spend a long time not reaching your destination. At potentially great cost.

This describes navigating the Federal AI landscape, today. The landscape is enormous and varies widely. It contains some known features and paths, but much is untamed and unmarked. Here are a few noteworthy features of that landscape: 

  • Some agencies have used artificial intelligence for decades. But Defense, Intelligence, and Civilian sectors have different histories of use, needs, budgets, and suppliers. Do you know how AI manifests itself in mission and business priorities? Do you know what customers will buy next? 
  • Generative AI use is much newer, and users are fewer in number. As programs begin their own navigation of that terrain, they have more questions than answers. Do you know their environment well enough to guide their journey?
  • The Executive Branch has issued numerous complex strategies, frameworks, guidance, policies, procedures, and blueprints. Some protect missions. Some protect civil rights. Some blend the two. Do you know what’s foremost on the minds of potential customers?
  • The government is concerned about civil rights violations in AI-supported analysis and decision making. Do you know how to meet these requirements during solution development? In business development – especially in early requirements-shaping conversations – do you know what to say to demonstrate your knowledge of and compliance with the requirements?
  • Some AI legislation has been passed, and Congress has more in the hopper. Add to these Executive Orders, OMB directives and proposed regulations; budgets and budget artifacts; agency strategies and frameworks; standards-setting documents; SBIR/STTR releases, OTA solicitations, and R&D announcements; Congressional testimony and Committee reports; GAO and CRS reports; and trade regulations. Do you know where to watch and read to stay up on developments that will impact your business? 
  • The Biden Administration’s October 30, 2023 “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” contained 186 shall statements and 98 deadlines. Do you have the resources and expertise to know which affect you?
  • Dual-use featured prominently in this EO, addressing AI and generative AI, and clearly describing when export control laws and regulations applied. Dual-use controls might apply to entities never before covered. Do you know if they’ll apply to you?

The Federal AI landscape presents daunting competitive, contracting, and project management challenges for existing and new AI solution providers. DWPA’s Federal AI Landscape market intelligence report will provide the map and compass you need to navigate the environment and meet those challenges. 

The scope of change and opportunity is enormous. A single example is the just-released Department of Defense (DoD) 2023 ‘Data, Analytics, and Artificial Intelligence Adoption Strategy,’ which focuses on longstanding goals for a “unified approach across data, analytics, and AI activities; an educated, empowered workforce skilled at incorporating commercial teams and tools; continued advanced research and rapid experimentation; and effective integration with our Allies and partners.” The DWPA Federal AI Landscape will forecast where that strategy is likely to lead, to help you shape the opportunities that are likely to follow.

It’ll also track the DoD’s Chief Digital and Artificial Intelligence Office’s effort to understand how DoD might accelerate the adoption of generative AI to support warfighters. It’ll also evaluate DoD objectives in conjunction with mandates from the new FY2024 National Defense Authorization Act, which signals an urgent need for AI proficiency backed by appropriations exceeding $34B for AI/Machine learning (ML) technologies and basic research.  

The Federal AI Landscape report is being developed by DWPA’s AI Innovation Cell. The Cell is staffed with select agency, technology, and business development experts drawn from the company’s nearly 500 Associates. Using primary sources and comprehensive research, the Cell analyzes and tracks the “features” of the landscape noted above, plus more, to provide clients actionable information about the who, what, where, how, and when of Federal AI opportunities for client capabilities. And with the depth of DWPA’s agency experts, you’ll understand the why.

DWPA will begin taking subscriptions for the Federal AI Landscape report and AI Innovation Cell in March 2024. Contact Ted.Milone@DWPAssociates.com or Michael.Dougherty@DWPAssociates.com in our Market Intelligence section for more information.

Follow us here on ThinkSpace to learn more.

Will Generative AI Help You Grow in 2024? – January 9

There’s no standard way organizations first try generative AI, but it’s common for early adopters to use it, initially, on job-related tasks. One user searches for specific experience in scattered resumes. Another analyzes and formats data for a report. A third revises content for a proposal. Early adopters typically set out “to see what they can do” using tasks they know well, and then see what they learn.

Positive experience is reinforcing, and that gives generative AI the potential to spread rapidly. What starts as unplanned point improvements can quickly become planned process improvements, as early adopters see the potential for efficiency and effectiveness gains. There’s also a logic to beginning with single-task uses and advancing to uses broader in scope and impact. As the pyramid figure suggests, this progression begins with tasks before moving to parts of a process, entire processes, related processes, and then broad business functions.

[Figure: a pyramid showing generative AI use progressing from individual tasks to parts of processes, entire processes, related processes, and broad business functions]

This trajectory isn’t inevitable. Individuals and teams need time to gain generative AI knowledge and skill, and organizations can do many things to enable or hinder that knowledge and skill acquisition. You can expect early adopters to be motivated to do more, however, and the next adopters to observe with interest. Whether individuals and teams began using generative AI in 2023 or they start in 2024, you’ll notice they want to advance use to create more benefit. Only leadership can turn use into adoption to produce strategic gains, not just tactical ones.

In December 7 and December 8 ThinkSpace articles, DWPA distinguished adoption from use and explained how adoption can lead to strategic gains. Growth is a central strategic gain, so how do you harness early adopters’ experience and energy to grow in 2024? Consider the following four principles or practices.

  1. First, know your strategic intent and write it down for everyone to know. Clarifying growth goals will channel the efforts of early adopters, who would otherwise use generative AI differently for different ends, such as to position in the market, enter an adjacent market, reduce costs, or create new value propositions.
  2. Second, decide how you’ll measure progress and success. This not only tells you how well efforts produce results, it helps early adopters further target generative AI use. Generative AI is a powerful, nuanced capability which requires a fair degree of trial and iteration. Working from broad objectives subject to interpretation can waste time, money, and effort. Knowing exactly what target to aim at will enable individuals and teams to make the best choices. It’ll also help with Practice 3.
  3. The third practice is to assess organizational capabilities against your strategic intent. This can be an extensive effort you might wish to undertake for many business reasons. For purposes of harnessing early adopters’ experience and energy to grow in 2024, you can chunk it down. Ask early adopters to identify capabilities needed to accomplish the growth objectives they support with generative AI. They’ll know the task, process, resource, partner, and other requirements they need to succeed[1].
  4. The fourth and final practice is to think like an entrepreneur. By adopting generative AI, you’re doing something different to create new value. This necessarily involves the discovery and validation of new business model elements, and your teams might not be familiar with ways to do this. Encourage them to identify assumptions, formulate hypotheses to test, and then review evidence they gather. Establish the practice of discovery, appraisal, and application of what is learned and you’ll increase your odds of using generative AI to grow.

As you start a new calendar year, one-third through the fiscal year, the question isn’t whether you’ll harness early adopters’ efforts to grow. Nor is the question when, because when is now. The question is how you’ll draw together the curiosity, talent, motivation, and ingenuity of individuals and teams to support growth and other business objectives. These four generative AI adoption practices will get you started.

Follow us here on ThinkSpace to learn more. For details, contact your Client Executive or Lou.Kerestesy@DWPAssociates.com.

[1] To organize what can be far-ranging discussions, DWPA recommends using the Business Model Canvas (or similar framework) for these conversations.

AI, Export Controls, And You – December 18

On October 30, 2023, the Biden Administration issued its Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. You can read DWPA’s summary of the Order’s purpose and intent here. Below we explain what the Order’s language about “dual use technologies” could mean to your business.

Artificial Intelligence has been part of government contracting and consulting for decades. As of September 1, 2023, AI.gov lists more than 700 use cases across 19 departments. The U.S. Government Accountability Office’s December 12, 2023 report, Artificial Intelligence: Agencies Have Begun Implementation But Need to Complete Key Requirements, identifies more than 1,200 current and planned uses in 23 departments. And the General Services Administration identifies over 1,200 members from 60 agencies in its AI Community of Practice.

Generative AI (GenAI) promises to increase use cases as readily available tools make GenAI accessible, affordable, and powerful for government agencies and contractors.

The Biden Administration’s Executive Order renewed focus on how AI policy will impact competitiveness, intellectual property, privacy, and national security. A key impact for U.S. companies will be compliance with export controls as firms weigh export constraints while developing, implementing, and offering AI systems and tools.

Key Things to Know

Robust export controls already exist in the US, in two ways. One is “defense articles and services” governed by the State Department’s International Traffic in Arms Regulations (ITAR). The other is control of “dual use” technologies with both commercial and potential national security uses, governed by the Commerce Department’s Export Administration Regulations (EAR).

It’s relatively straightforward to identify and apply controls to “defense articles and services” subject to ITAR. It’s in the area of dual use technologies that regulations are less well-known, and more ambiguity exists. These require vigilance on the part of companies to ensure compliance as they consider how to employ AI in their offerings.

A critical business question is: what will be controlled? Generally, dual use technologies are controlled by “item-based” controls on systems and hardware (e.g., CHIPS Act export direction impacting the release of advanced semiconductors), or by “end-user” controls on countries, organizations, or individuals (e.g., the “Entity List”). But there is also a category of less well-understood controls that focus on the “end use” itself and place obligations on exporters to have “knowledge” of what end users might do with the technology. These are end uses that could involve support of nuclear, missile, or unmanned aerial vehicle programs, or chemical/biological capabilities.

The responsibility to abide by these controls and requirements for compliance is entirely on “US persons,” defined as both individuals and companies. There are substantial penalties, both criminal and civil, that apply to both.     

Call To Action For Companies

In addition to the existing export controls, the EO will almost certainly drive new rulemaking at both the Departments of State and Commerce. Future regulations combined with the fast-evolving AI landscape mean companies should carefully evaluate and address export controls as they bring new capabilities to market.

The best practice is normally to obtain expert export control and/or legal advice. The best risk management for companies at this point is to do this early, to avoid risky and potentially expensive impacts from export considerations.

To learn more contact Lou.Kerestesy@DWPAssociates.com.

Summary of AI EO Purpose and Intent – December 18

On October 30, 2023, the Biden Administration issued its “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”

Deep Water Point & Associates (DWPA) has substantial experience with laws, regulations, guidance, programs, and requirements central to the Order. We’re analyzing the EO to understand whether and how it might affect our AI and generative AI use, and our clients. Federal contractors and SaaS or PaaS cloud service providers not well-versed in Department of Commerce Export Administration Regulations (EAR) might start investigating. This article summarizes the Order’s purpose and content.

This EO builds on recently published AI strategic documents and frameworks by Federal agencies and institutes. It points out that much of what’s already done for software development and data laws also applies to AI. With 186 shall statements and 98 deadlines, the EO establishes clear direction and cadence for next steps needed by the Federal government. It addresses AI and generative AI, and clearly describes when export control law and regulation apply.

The Order’s 13 sections are summarized below. Sections 4 – 11 constitute the Order’s “eight guiding principles and priorities.”

Sec. 1. Purpose emphasizes the significance of responsible AI use, highlighting its potential to address critical challenges and improve various aspects of society, while also acknowledging the risks associated with irresponsible use. It underscores the need for collaboration between government, the private sector, academia, and society to harness AI for good while mitigating its risks.

Sec. 2. Policy and Principles states that it is the policy of the Biden Administration to advance and govern the development and use of AI in accordance with eight guiding principles and priorities. This Section separately describes the eight guiding principles and priorities, which are Sections 4 – 11.

Sec. 3. Definitions defines 32 terms. Noteworthy among them is the term dual-use foundation model, which is used 16 times and is central to developer, user, and agency requirements and prohibitions.

Sec. 4. Ensuring the Safety and Security of AI Technology is the largest section of the Order, containing more than a quarter of its text, one-quarter of its deadlines, and almost one-third of its shall statements. This section details guidance and direction pertaining to safe and reliable use, almost two dozen infrastructure-as-a-service requirements, cybersecurity, biosecurity, and other types of uses and risks. Section 4 contains one of two uses of the term red-teaming pertaining to generative AI. Section 10 contains the other.

Sec. 5. Promoting Innovation and Competition outlines measures to attract and retain AI talent in the US and to promote innovation through public-private partnerships, provides guidance to patent examiners, and identifies measures to support AI in healthcare, for Veterans, and in climate change, scientific research, and other domains.

Sec. 6. Supporting Workers emphasizes the government’s commitment to understanding and addressing AI impacts on the workforce. It directs the development of reports analyzing labor market effects, principles and best practices for mitigating workforce disruption, and education and workforce development.

Sec. 7. Advancing Equity and Civil Rights outlines the government’s efforts to address discrimination, promote equity, and protect civil rights in various aspects of AI deployment, including the criminal justice system, government benefits and programs, and the broader economy.

Sec. 8. Protecting Consumers, Patients, Passengers, and Students highlights the government’s efforts to ensure the responsible and ethical use of AI in healthcare, education, transportation, and communications, while protecting consumers and addressing potential fraud, discrimination, and privacy risks.

Sec. 9. Protecting Privacy emphasizes the government’s efforts to address and mitigate privacy risks associated with AI, promote the use of privacy-enhancing technologies (PET), and support PET guidelines, research, and development.

Sec. 10. Advancing Federal Government Use of AI is the second largest section of the Order. It highlights steps and guidelines to advance the Federal government’s use of AI and enhance its AI talent and management. It forms an interagency council to coordinate the development and use of AI in agency programs and operations, other than the use of AI in national security systems. Section 10 contains the Order’s only references to the Technology Modernization Fund. It also contains one of two uses of the term red-teaming pertaining to generative AI. Section 4 contains the other.

Sec. 11. Strengthening American Leadership Abroad underscores the importance of the United States in global AI leadership, setting standards, promoting responsible AI development and deployment abroad, and addressing cross-border AI risks, particularly in critical infrastructure.

Sec. 12. Implementation establishes the White House AI Council, which will coordinate AI-related activities and policies across the Federal government. It identifies the Assistant to the President and Deputy Chief of Staff for Policy to serve as the Council’s Chair. It identifies 28 agencies’ secretaries, directors, and chairs as members, plus the heads of such other agencies, independent regulatory agencies, and executive offices as the Chair may designate or invite to participate.

Sec. 13. General Provisions ensures that this EO is not read as impairing or otherwise affecting existing authorities granted by law or the functions of executive agencies.

To learn more contact Lou.Kerestesy@DWPAssociates.com.

Using Generative AI Safely – December 13

A conference presenter recently told an audience, “Whatever you put on ChatGPT is out there. Gone for good. Out of your control.”

We hear that dire warning a lot and it raises serious concerns about business use of public tools like ChatGPT or Bard. The warning could also be more cautious than it needs to be, and cost you more than it buys in protection. Let’s see.

What Is Generative AI, And How Does It Work?

Most software we use is deterministic. It produces the same output given the same inputs and conditions. We rely on that predictability when it comes to writing emails and reports, and analyzing sales or budget scenarios.

By contrast, GenAI is generative. It’s designed to produce diverse and even creative outcomes using the same or similar inputs. We want it to brainstorm with us. To summarize a report in its words. Or to change the tone of an email for us.

GenAI does this by using language patterns. It recognizes the relationship of words, phrases, and sentences and then uses statistical probability to select the best sequence of words to return to you, based on your prompts.

When you hear talk of GenAI training, this is what’s meant – training it to recognize and use language patterns. As an example, ChatGPT was trained on 300B words, including scoring and weighting them based on how they were used in sentences. This “deep learning” is what makes generative AI useful.
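A toy example makes the idea of statistical probability over language patterns concrete. The sketch below builds a tiny bigram model from a few invented sentences and picks the most likely next word; real GPTs do this at enormous scale, over tokens and long contexts, but the underlying idea of scoring candidate continuations is the same.

```python
from collections import Counter, defaultdict

# Toy bigram model: count which word tends to follow which, then pick the
# most probable continuation. Training sentences are invented for illustration.

corpus = [
    "the proposal team drafts the response",
    "the proposal team reviews the outline",
    "the capture team drafts the win themes",
]

follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

def next_word(word: str) -> str:
    """Return the statistically most likely word to follow `word`."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else "<end>"

print(next_word("proposal"))   # "team" -- follows "proposal" in 2 of 2 cases
print(next_word("drafts"))     # "the"
```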

What Has GenAI Training to Do with Safe Use?

The way GenAI works tends to limit what others can know about your use. While it’s true that GenAI tools read your prompts and might store them for future training, GenAI’s focus on language patterns rather than whole entries helps control risk but does not eliminate it. Consider an example.

Say you cook and want to make a tomato sauce you’ve never made before. You search online for something you haven’t heard of, and search engines return entire recipes to you. All the ingredients, quantities, steps, and times for you to read – as you would expect.

But what if you used GenAI?

Let’s say I had previously put my grandmother’s secret tomato sauce recipe – which includes a dash of soy sauce at the end – in a prompt asking a generative AI tool (a GPT) to make a shopping list for me. Let’s also say the GPT stored my prompt for future training. Would it return my grandmother’s recipe to you like search engines would?

Because GPTs analyze language patterns to return language patterns to you, it’s not likely to return her entire recipe the way a search engine would. But, had you told it you wanted to try something unusual, it could very well inform you that “Some tomato sauce recipes use a dash of soy sauce at the end” because that’s novel. It could offer that tip along with others, all based on novel ingredients from thousands (tens of thousands?) of tomato sauce recipes.

It matters little whether a GPT returns my grandmother’s entire recipe to you if her secret ingredient is identified for you. Her secret is out. But had you asked a GPT for Indian tomato sauce recipes, or different recipes with paprika, it might not have considered a dash of soy sauce at the end relevant. Remember, it’s all about what you ask and the relevance a GPT determines using language patterns and statistical probability.

So, is your proprietary or privileged business information at risk of being made public, through your use of GPTs trained on your prompts?

The answer is not no, but is it ever? The answer is yes, depending, and now you understand why. What, then, are safe uses of public GPTs?

A Word About Types of GPTs

AI terminology can be confusing. Glossaries contain dozens of terms, many of which sound like they say the same thing. Even the boundaries between simple terms like open, public, and proprietary aren’t so clean that certain terms always and only apply to ChatGPT or Bard, for example, while other terms always and only apply to, say, ACME Inc’s AI-assisted proposal tool. For the sake of easy reference, let’s divide products this way:

  • Public refers to ChatGPT, Bard, and others you can try for free by registering at the tool’s website
  • Private refers to dedicated, domain-specific tools you pay to use by user, per month, or by some other unit

We realize this might confuse architectures, fail to account for products with free and paid versions, ignore distinctions between publicly and privately held companies, and more. That’s okay because making those distinctions won’t change what we’re saying about safe use.

One safe-use advantage of a private tool is you can build and separate your document repository, and use only your repository to train the tool. Your vendor’s tool might also have a data relationship to foundational models, however, which might expose your data to others through training. Vendors know how to firewall your data and let you opt out of model training. Read the vendor’s data use and privacy policies, understand the tool’s settings, and talk to the vendor if you have questions.

Can you also use a public tool safely? You can.

First, public tools might also permit you to prevent sessions from being used to train the GPT. Read their data use and privacy policies to understand how your data will be used, and to see if you can opt out of training.

Second, many valuable uses will have nothing to do with proprietary or privileged data. A proposal manager might use a GPT to improve their understanding of technical issues, to improve their conversations with technical SMEs. A team lead might role play with a GPT to understand the perspective of others on the team without ever using proprietary information. If you want to keep the risk-reward scales tipped in your favor, clarify what you want to accomplish with a particular use, know what success looks like, and ask yourself what might go wrong. You’ll find many ways to prompt a GPT which don’t require business data or information.

So, What’s the Bottom Line?

Recall my colleague’s dire warning at the conference: “Whatever you put on ChatGPT is out there. Gone for good. Out of your control.”

It’s true that the content of your prompts can be out there, depending on policies and settings. But it’s also true you can prevent the leaking of proprietary and privileged information.

It’s also true that the way GenAI uses what’s out there reduces some of the risk for you. How safe that feels is a subjective judgment we’ll talk about in the next article. But understanding how GenAI trains helps you understand how information you provide in prompts can show up for future users.

In the GenAI Discovery Project, DWPA is experimenting with public and private tools. Using public tools, we know there’s zero chance we’ll give competition any advantage – because there’s no advantage at stake. There’s no soy sauce in the prompts. For uses where there’s a chance we could give something away, we know it’s a small chance, we weigh the gain we want against the harm we don’t want, and we act accordingly.

DWPA has not used private tools, yet, beyond Discovery Project trials, so we can’t speak to practices with them. We know private tools have additional safeguards built in. If you use or are considering a private tool, talk to your vendor about how it’s trained and how your data might be included.

Whether using a public or private tool, read your tool’s privacy policy or statement. They’re not generally written for human reading, but gut it out so you know what’s happening to your data. You’ll probably see a choice for opting your content out of tool training. DWPA has exercised that option.

Beyond understanding how GenAI tools train and work, safe use comes down to use cases and risk tolerance. We’ll look at that in the next article but, for now, we’ll leave you with the thought that you probably already engage in a practice which is like determining GenAI safe use: Asking questions at an industry day, or in written Q&A during a solicitation process.

You can ask in ways which show your hand, or in ways which don’t. You weigh the odds of gaining information to your advantage versus benefiting your competition and neutralizing your gain. You might have done this for years, and it’s a risk-reward decision similar to deciding how to use GenAI, especially public tools.

To learn more, contact Lou.Kerestesy@DWPAssociates.com.

Prompts Are Easy. Adoption Is Hard. Here’s How to Be Ready. (Part 2 of 2) – December 8

Part 1 of this two-part article defined adoption and talked about what makes it hard in any organization. Part 2 describes the ways you can manage adoption challenges.

Exec Summary: Over the past year, countless blogs, articles, books, videos, courses – even job descriptions – focused on prompts and prompt engineering. While prompting is essential to effective GenAI use, it’s only one thing to consider. Generative AI outputs are another, and they need more attention.

Recall the IPO model – inputs, processes, and outputs. For a simple generative AI use, prompts are the inputs, algorithms are the process, and a GPT’s response is the output. For more complex uses, inputs and processes combine as a user and the GPT interact through a set of prompts. Outputs can also take on a new importance, depending on where they lead.

If the outputs of my GPT use become inputs to a business task or process you own, we face added requirements for communication, collaboration, and probably change management. And that calls for an approach to addressing the hard adoption questions.
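To make the input-process-output framing concrete, here is a minimal hypothetical sketch: one person’s GenAI output becomes the input to a process someone else owns, which is exactly where the communication and change-management requirements appear. The function names and the `generate` call are placeholders, not a real GPT API.

```python
# Hypothetical illustration of the IPO framing: my GenAI output becomes
# the input to a business process you own. `generate` stands in for
# whatever GenAI tool is actually used; it is not a real API call.

def generate(prompt: str) -> str:
    """Placeholder for a call to a generative AI tool."""
    return f"[draft produced from prompt: {prompt[:40]}...]"

# Input + process: my prompt and the tool's algorithms produce an output.
draft_summary = generate(
    "Summarize the attached performance work statement for a go/no-go review."
)

# That output becomes someone else's input: a review process they own.
def go_no_go_review(summary: str, reviewer: str) -> dict:
    """Downstream process owned by another team; expects a draft summary."""
    return {"reviewer": reviewer, "input": summary, "decision": "pending"}

review_packet = go_no_go_review(draft_summary, reviewer="capture lead")
print(review_packet["decision"])
```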

How Do We Answer The Hard Adoption Question?

Meeting the challenge of generative AI adoption will require a comprehensive and methodical approach. Here are three principles we’re applying at DWPA that we recommend you consider.

  1. Use an adoption framework
  2. Clarify goals and objectives
  3. Think like an entrepreneur

Use An Adoption Framework

The grand-daddy of innovation adoption frameworks might be Everett Rogers’ Diffusion of Innovations theory. In his 1962 classic (updated through a 5th edition in 2003), Rogers explains how an innovation diffuses through, or is adopted by, a social system. There’s a lot to Rogers’ research and it would be worth your time to read select portions of the book. But we can highlight the pieces you can use immediately.

Most well-known might be the adopter types Rogers identified and arrayed temporally along an adoption curve. Rogers called it the innovation adoption curve because he studied many types of innovation. Today it’s popularly known as the technology adoption curve.

This curve shows that adopters fall into five types within any social system – your company, the Federal government, the govcon market, etc. – and that they adopt at different rates. This happens because of the time they take moving through five stages Rogers identified:

  1. Knowledge is gained when someone learns of the existence of an innovation, and gains some understanding of how it works. This leads to Persuasion.
  2. Persuasion occurs when someone forms a favorable or unfavorable impression of an innovation, generally before using. This leads to a Decision.
  3. Decision occurs when someone engages in activities which lead to adoption or rejection. When adoption occurs, this leads to Implementation.
  4. Implementation occurs when someone puts an innovation to work. This leads to Confirmation.
  5. Confirmation occurs when someone is reinforced for additional use, or reverses their decision and rejects the innovation.

Finally, adopters do all this because of the different ways they evaluate the following Innovation Adoption Factors:

  • Relative advantage is the degree to which an innovation is perceived as better than the idea it supersedes.
  • Compatibility is the degree to which an innovation is perceived as being consistent with the existing values, past experiences, and needs of potential adopters.
  • Complexity is the degree to which an innovation is perceived as difficult to understand and use.
  • Trialability is the degree to which an innovation may be experimented with on a limited basis.
  • Observability is the degree to which the results of an innovation are visible.

You might be familiar with another, simpler framework called the Technology Acceptance Model, which looks at just two factors – perceived usefulness and perceived ease of use. You might have a preferred framework, model, or theory. The important thing is to use one (or more) so everyone works with the same concepts and terms. Without that, people who need to be on the same page won’t be.

Clarify Goals And Objectives

The second principle is to clarify both uses or use cases, and broader adoption goals and objectives. It helps to clarify uses with a statement like the following, which captures use case elements:

As a [role], I want to [perform some action] on [some artifact] to produce [some output] 
in order to [accomplish something] or for [some reason].
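For example (a hypothetical fill-in): As a proposal manager, I want to summarize the evaluation criteria in a draft RFP to produce a one-page compliance brief in order to speed up the go/no-go decision.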

This will not only help everyone think through any single use case, but it’ll promote uniformity and consistency across uses by all individuals, teams, and other organizational units. There are other ways to do this but the important thing, again, is that you get everyone on the same page by framing uses with concepts whose meanings are shared.

Clarifying adoption goals and objectives is trickier because adoption occurs by individuals, teams, business verticals, business functions, and the entire enterprise. Each properly has its own business-related goals and objectives which can exist in nested, prioritized, instrumental, or a number of other relationships.

Because adoption is about making full use of generative AI, and because making full use should do something better than what you’re currently doing, it’s important to use frameworks for figuring out what better means at any level. You might already use frameworks for individual performance, collaboration, productivity, innovation, or other things related to one or more levels. DWPA uses the Business Model Canvas. 

Think Like An Entrepreneur

“Think like an entrepreneur” is a way to summarize DWPA’s entire GenAI Discovery Project, which we’ve written about extensively. 

We’ve described our process for stating assumptions about generative AI, our clients, and the govcon market, and how we turned them into hypotheses to test. Test results are evidence we’re using to fashion generative AI-assisted capture and proposal services to validate with customers before going to market with them.

Generative AI is innovative and your use of it is also innovative. It helps to think like an entrepreneur because by adopting an innovation you are literally doing something different to create new value for yourself, internal recipients, and perhaps your customers.

At the outset there will be nothing but assumptions because you can’t have evidence for generative AI use you don’t have. State all the assumptions you can think of, turn important ones into hypotheses, and test them. Tests can be quick and easy – generative AI trials, simulations, if-then scenarios, voice of the customer, and more.

You need only hours and days to try something and see what you get, and that’s your evidence. You’ll get strong evidence. You’ll get weak evidence. Collect it. Appraise it against goals and objectives, and apply it to see what happens in what actually amounts to another round of hypothesis testing and evidence gathering.

Conclusion

Generative AI is a powerful technology which is changing the human-machine relationship. And that has the potential to change the human-human relationship. Whether that change is beneficial or not depends entirely on us.

Use generative AI in the way you use all other software and you’ll get some ROI, but not what you could get. Shift your thinking from use to adoption and you’ll not only execute tasks faster, you’ll improve communication, collaboration, and problem solving.

Prompts Are Easy. Adoption Is Hard. Here’s How to Be Ready. (Part 1 of 2) – December 7

Prompts and prompt engineering became all the rage just a year ago once the world had free access to a powerful, personal new AI tool called generative AI (GenAI). “How to” prompting blogs, articles, books, videos, and entire courses quickly appeared. And for good reason.

The way generative AI works is entirely different from most software we use, and learning to prompt it is essential to benefiting from it. But the benefit is in what we do with what generative AI gives us. In the outputs, not just the inputs. And that means thinking harder about adoption.

This two-part series defines and describes the adoption challenge, explains why it matters for business, and offers tips for managing it.

Follow ThinkSpace for weekly insights and contact Lou.Kerestesy@DWPAssociates.com for more information.

Prompts Are Easy

To prompt a generative AI system or tool – let’s call them GPTs – is to instruct it to do something for you. There are different ways to prompt GPTs, each of which has a purpose.

Prompt terminology sounds esoteric and much more intimidating than necessary.

  • N-shot prompting gives a GPT several examples to learn from before you ask it to do something for you. ‘N’ stands for the number of examples you give.
  • Generated knowledge prompting involves using information that the GPT has previously generated as a basis for new responses.
  • Maieutic prompting is a method based on the Socratic method where questions are used to encourage deeper thinking and self-discovery.

All logical and reasonable, right? (Want a chuckle? Maieutic is from the Greek and means “acting as a midwife,” which is truly fitting.) But you or your teams might have done these, and a dozen more, without the labels. Knowing they exist is a good starting place, and having a list in front of you can help if you hit a roadblock.
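As a small, hypothetical illustration of N-shot prompting (here, 2-shot), the snippet below embeds two worked examples in the prompt before asking for a third. The examples and phrasing are invented, and the prompt is shown as a Python string only for formatting; it could be pasted into any GenAI tool.

```python
# Hypothetical 2-shot prompt: two examples teach the pattern before the
# real request. The content is invented for illustration.

two_shot_prompt = """
Rewrite each requirement as a one-sentence customer benefit.

Requirement: Provide 24x7 help desk coverage.
Benefit: Your users get answers at any hour, so mission work never stalls.

Requirement: Maintain FedRAMP Moderate authorization.
Benefit: Your data stays protected to a standard your security team already trusts.

Requirement: Deliver monthly performance dashboards.
Benefit:"""

# The string would then be sent to whatever GenAI tool you use; the model
# completes the final "Benefit:" line by following the demonstrated pattern.
print(two_shot_prompt)
```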

What makes prompting easy?

It’s conversational in nature, something we humans excel at. We prompt with natural language, not software language. We are at the center of the interaction, not a spreadsheet formula or word processing workflow. We see results quickly, which is generally reinforcing. We can end a session and start another if things aren’t working. You can try your first prompts in seconds, improve in minutes, and become reasonably good in an hour. You might have to learn to prompt for different results in different ways, but none of this is hard. We can get GPTs to prompt themselves.

There is an art to some prompting. “Summarize this article?” No art required. Just intent and knowledge of three words. Asking a GPT to help you and a team think through a knotty problem with no clear answer? That’ll require some artfulness – a little cleverness, thoughtfulness, experimentation, iterations, and patience. But it’s still easier than learning the art of cooking, golf, or piano playing.

What Is Adoption? And What Makes It Hard?

Has this happened to you?

You use generative AI successfully on one small task and immediately wonder if it’ll help you with a second task. You successfully use it on a few tasks and think to yourself, “I could make a process better!” Or, a team experiments, beneficially, with generative AI. Members compare notes and see the possibility of improving whole workflows and processes.

Adoption refers to making full use of an innovation. Organizations first try generative AI in piecemeal ways, which is entirely logical. But use will diffuse across the organization, and it will happen in different ways.

Some uses will remain “local,” where the output of a GPT stays with the person who provided the input. “Summarize this article for me,” or “Give me a first draft of a position description,” are examples. But the output of some uses will become inputs to others – or imply them – and use will spread. Using a GPT to evaluate project plans, technical approaches, or budget narratives might lead to better written content. But it can also lead to revised processes for producing content, revised workflows to better use the improved artifacts, and increased integration with related processes.

What constitutes full use will depend on the output, not the input or prompt. Full use can have big implications beyond prompts and even GPT responses. Many of these might be unforeseen when users start playing with a GPT. But they’ll emerge and this is one of the things that makes adoption hard.

In this way, organizations will see generative AI use lead to change. Generative AI could become a significant change agent, helping people do things differently to produce new value for themselves, internal beneficiaries, and paying customers. Many users will absolutely use generative AI to work more effectively and efficiently, and those uses will be voluminous. But generative AI’s true promise and threat could very well be change. And change is hard.

Unknowns make adoption hard, too, and there are quite a few with generative AI:

  • How it works
  • How to use it effectively
  • How to use it safely
  • What makes it hallucinate and what to do

And you’ve no doubt heard or read the speculation that GenAI, or AI, might take over the world. There’s some uncertainty.

Generative AI adoption will vary by user and that will make adoption hard on teams, business units, business functions, and entire organizations. Different people will see different opportunities and boundaries in generative AI use, different benefits and risks, and even different value and ethics questions.

Finally, full use will require an investment by organizations, and that can be hard. Who will be trained, for what? How many? At what cost? On what schedule? To do what and change what in which parts of the organization? What business objectives, outcomes, and measures should be applied? And what about our products and services? Are any candidates for adding generative AI capability customers would like to have? What will that entail?

Part 2 of this two-part series takes up exactly that: how do we answer the hard adoption question?