What is Prompt Engineering?

Prompt engineering is a concept rooted in Natural Language Processing (NLP). It involves crafting well-designed inputs, or prompts, whether text or images, to get the most relevant results from AI models such as DALL·E 2 and ChatGPT. Although prompt engineering is a new field, it has already emerged as a lucrative career option, because crafting prompts that produce relevant, accurate output is not as easy as it seems.

What is a Prompt?

A prompt is the input, typically a string of text, that you give to a model such as ChatGPT to get an output. A prompt can be highly descriptive, or it can be as simple as a question or a rough description of an image; it can even be an emoji or something very vague. However, simple prompts may not always give you the best or most accurate results. This is where prompt engineering comes in.

Here are some examples of prompts:

Example 1:

‘Generate a picture of a dog’ is a simple prompt.

‘Generate a picture of a large, muscular, brown dog with white patches, riding a skateboard on the road on a cloudy day with mountains in the background’ is an example of a highly descriptive prompt.

Example 2:

‘What is magnetism?’ is a simple prompt, which may not provide satisfactory results.

‘Explain what magnetism is in a funny way as if I am a 5-year-old’ is an example of a more detailed prompt that can provide you with the exact output that you were looking for.
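If you send prompts to a model through code instead of a chat window, the same principle applies: the prompt is simply the text you pass in. Below is a minimal sketch, assuming the OpenAI Python SDK, an OPENAI_API_KEY set in the environment, and an illustrative model name; other providers and models work along the same lines.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

simple_prompt = "What is magnetism?"
detailed_prompt = "Explain what magnetism is in a funny way as if I am a 5-year-old."

# Send each prompt to the model and compare the two answers.
for prompt in (simple_prompt, detailed_prompt):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"PROMPT: {prompt}\n{response.choices[0].message.content}\n")

Running the two prompts side by side makes the difference obvious: the detailed prompt steers the answer toward the tone and audience you asked for.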

What is Prompt Engineering?

Prompt engineering is the practice of crafting prompts that help a language model generate the most relevant output possible.

The quality of the prompt determines the quality of the result. While this sounds pretty simple (just write a descriptive request, right?), engineering good-quality prompts requires practice and knowledge of how language models operate.

You may feel tempted to cram as much information as possible into a prompt, but an overload of detail can confuse the model rather than help it. A prompt should contain just the right amount of information needed for the model to accomplish the task.

Note
Keep in mind that even with excellent prompt engineering you may get completely nonsensical, made-up results. A major, ongoing problem with language models is that they do not always provide factually accurate answers: the output can sound plausible but be complete nonsense. It is always a good idea to have a human check the output whenever accurate information is critical.

Prompt Types

Almost anything can be a prompt, even if it is just an emoji. Here are some frequently used prompt types:

  • Question – How do I plan a trip to Greece in winter? Give me suggestions for hotels under $300 per night in Athens. Tell me what to do there. What places can I visit on a day trip? What food is Greece most famous for? What should I avoid doing there?
  • Instruction – Create an image of a dalmatian dog cutting a birthday cake under the moonlight with other dogs standing near him.
  • Input data – Use this information to write about my personal experience growing lettuce in my home garden: I love lettuce, and I like it crunchy and fresh.
  • Example – I liked the Netflix show Wednesday. What are some similar shows I can watch with my teenager?

A prompt can consist of one or more of these elements.
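To make the instruction type concrete, here is a minimal sketch of sending an image-generation instruction through the OpenAI Images API; the model name, size, and client setup are assumptions, so adapt them to whichever image tool you actually use.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# An instruction-type prompt for an image model.
prompt = (
    "Create an image of a dalmatian dog cutting a birthday cake "
    "under the moonlight with other dogs standing near him."
)

response = client.images.generate(
    model="dall-e-3",  # illustrative model name
    prompt=prompt,
    n=1,
    size="1024x1024",
)
print(response.data[0].url)  # URL of the generated image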

Note
Some language models, like ChatGPT, may not have access to the internet and therefore may not have current, accurate information. However, AI chatbots are popping up like mushrooms, and you can find many that can pull in all sorts of specific, up-to-date information.

Best Practices for Creating Prompts

1. Try a Few Different Inputs

Prompts can be a combination of various elements, such as:

  • Question + Instruction
  • Example + Instruction
  • Instruction + Input Data
  • Question + Input Data + Instruction

Any combination can work, provided it helps the language model figure out what kind of output is expected. Different combinations will yield different results, so experiment with several inputs to see which one works best.
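As a concrete illustration, here is a minimal sketch that assembles an ‘Instruction + Input Data’ prompt from separate pieces before sending it, again assuming the OpenAI Python SDK; the variable names and wording are purely illustrative.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Two prompt elements: an instruction and some input data.
instruction = (
    "Write a short, friendly paragraph about my personal experience "
    "growing lettuce in my home garden."
)
input_data = "I love lettuce, and I like it crunchy and fresh."

# Combine the elements into a single prompt and send it.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": f"{instruction}\n\nInput data: {input_data}"}],
)
print(response.choices[0].message.content)

Keeping the elements in separate variables makes it easy to swap in a different instruction or different input data and compare the results.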

2. Provide Enough Context

Make sure you create prompts with sufficient information for the model to process and generate relevant results.

Use succinct but high-quality inputs. If you want the output to express a particular feeling (funny, thoughtful, etc.), mention it in the prompt.

Mention the style or tone you want for the output, such as ‘written in layman’s terms’ or ‘a contemporary image’.

Avoid abstract words such as dream, adequate, soul, or courage, because the language model may struggle to interpret them. Instead, use adjectives that are concrete and definable, and limit jargon as much as possible.

Mention what you want – not what you don’t want. If needed, refine the output later.

3. Be Specific

Prompts should be precise. How long do you want the output to be? Do you want references from credible sources? Do you want it to be written as a joke, an essay, or a blog post? What should the tone of the output be? Do you want the output to look realistic or abstract? What do you want in the background of an image output?

The more specific your prompt, the better the results the model will yield.

4. Be Patient

When you are first trying out prompting, whether with ChatGPT, Midjourney, or any other AI model, it can be frustrating. Learning how to craft effective prompts takes time. Keep experimenting and practicing with different combinations of prompts to get the hang of it, or you can contact us for help with prompt engineering for your business.

Final Thoughts

AI and GPT tools have swooped in and are causing a massive shift in business as we know it. Loud voices are claiming this is bigger than the creation of the World Wide Web. In light of this, expect prompt engineering to remain critical to working with AI and GPT tools, at least for the near future; the AI tools cannot read our minds, yet anyway.

Alie Jules
AI Educator and Consultant