ThinkSpace

Prompts Are Easy. Adoption Is Hard. Here’s How to Be Ready. (Part 1 of 2) – December 7

Prompts and prompt engineering became all the rage just a year ago, once the world had free access to a powerful, personal new class of AI tools known as generative AI (GenAI). “How to” prompting blogs, articles, books, videos, and entire courses quickly appeared. And for good reason.

The way generative AI works is entirely different from most software we use, and learning to prompt it is essential to benefiting from it. But the benefit is in what we do with what generative AI gives us. In the outputs, not just the inputs. And that means thinking harder about adoption.

This two-part series defines and describes the adoption challenge, explains why it matters for business, and offers tips for managing it.

Follow ThinkSpace for weekly insights and contact Lou.Kerestesy@DWPAssociates.com for more information.

Prompts Are Easy

To prompt a generative AI system or tool – let’s call them GPTs – is to instruct one to do something for you. There are different ways to prompt GPTs, each with its own purpose.

Prompt terminology sounds esoteric and much more intimidating than necessary.

  • N-shot prompting gives a GPT several examples to learn from before you ask it to do something for you. ‘N’ stands for the number of examples you give.
  • Generated knowledge prompting involves using information that the GPT has previously generated as a basis for new responses.
  • Maieutic prompting draws on the Socratic method, using questions to encourage deeper thinking and self-discovery.

All logical and reasonable, right? (Want a chuckle? Maieutic is from the Greek and means “acting as a midwife,” which is truly fitting.) But you or your teams might already have used these and a dozen more without knowing the labels. Knowing they exist is a good starting place, and having a list in front of you can help if you hit a roadblock.
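
For readers who like to see the mechanics, here is a minimal sketch of N-shot prompting using the OpenAI Python client. The model name, example reviews, and classification task are illustrative assumptions, not a recommendation; the same pattern – show a few worked examples, then pose the new case – works in any chat interface or API.

    # A minimal sketch of N-shot (few-shot) prompting: give the model a few
    # worked examples before asking it to handle a new case.
    # Assumes the `openai` Python package and an API key in OPENAI_API_KEY;
    # the model name, reviews, and labels below are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()

    examples = [
        ("The shipment arrived two days late.", "Negative"),
        ("Setup took five minutes and just worked.", "Positive"),
    ]  # N = 2 examples for the model to learn the pattern from

    messages = [{"role": "system", "content": "Classify the sentiment of each review."}]
    for review, label in examples:
        messages.append({"role": "user", "content": review})
        messages.append({"role": "assistant", "content": label})

    # The new case: the model follows the pattern set by the examples above.
    messages.append({"role": "user", "content": "Support never answered my emails."})

    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(response.choices[0].message.content)

Two examples is a very small N; in practice, a few well-chosen examples often go a long way, and the same structure scales to whatever pattern you want the GPT to imitate.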

What makes prompting easy?

It’s conversational in nature, something we humans excel at. We prompt with natural language, not software language. We are at the center of the interaction, not a spreadsheet formula or word processing workflow. We see results quickly, which is generally reinforcing. We can end a session and start another if things aren’t working. We can try our first prompts in seconds, improve in minutes, and become reasonably good in an hour. We might have to learn to prompt for different results in different ways, but none of this is hard. We can even get GPTs to prompt themselves.

There is an art to some prompting. “Summarize this article”? No art required. Just intent and knowledge of three words. Asking a GPT to help you and a team think through a knotty problem with no clear answer? That’ll require some artfulness – a little cleverness, thoughtfulness, experimentation, iteration, and patience. But it’s still easier than learning the art of cooking, golf, or piano playing.

What Is Adoption? And What Makes It Hard?

Has this happened to you?

You use generative AI successfully on one small task and immediately wonder if it’ll help you with a second task. You successfully use it on a few tasks and think to yourself, “I could make a process better!” Or, a team experiments, beneficially, with generative AI. Members compare notes and see the possibility of improving whole workflows and processes.

Adoption refers to making full use of an innovation. Organizations first try generative AI in piecemeal ways, which is entirely logical. But use will diffuse across the organization, and it will happen in different ways.

Some uses will remain “local,” where the output of a GPT stays with the person who provided the input. “Summarize this article for me,” or “Give me a first draft of a position description,” are examples. But the outputs of some uses will become inputs to others – or imply them – and use will spread. Using a GPT to evaluate project plans, technical approaches, or budget narratives might lead to better-written content. But it can also lead to revised processes for producing content, revised workflows to better use the improved artifacts, and increased integration with related processes.

What constitutes full use will depend on the output, not the input or prompt. Full use can have big implications beyond prompts and even GPT responses. Many of these might be unforeseen when users start playing with a GPT. But they’ll emerge, and this is one of the things that makes adoption hard.

In this way, organizations will see generative AI use lead to change. Generative AI could become a significant change agent, helping people do things differently to produce new value for themselves, internal beneficiaries, and paying customers. Many users will absolutely use generative AI to work more effectively and efficiently, and those uses will be voluminous. But generative AI’s true promise and threat could very well be change. And change is hard.

Unknowns make adoption hard, too, and there are quite a few with generative AI:

  • How it works
  • How to use it effectively
  • How to use it safely
  • What makes it hallucinate and what to do

And you’ve no doubt heard or read the speculation that GenAI, or AI, might take over the world. There’s some uncertainty.

Generative AI adoption will vary by user and that will make adoption hard on teams, business units, business functions, and entire organizations. Different people will see different opportunities and boundaries in generative AI use, different benefits and risks, and even different value and ethics questions.

Finally, full use will require investment by organizations, and that can be hard. Who will be trained, and for what? How many people? At what cost? On what schedule? To do what, and to change what, in which parts of the organization? What business objectives, outcomes, and measures should apply? And what about our products and services? Are any of them candidates for adding generative AI capabilities customers would like to have? What will that entail?

Part 2 of this two-part series will answer the question: how do we manage the hard work of adoption?