Zero-shot prompting is the most basic form of prompt engineering: you describe the task you want the Large Language Model (LLM) to perform without providing any specific examples. The prompt typically includes an instruction or a question, sometimes with some initial text for the LLM to continue. The term “zero-shot” signifies that the model is expected to perform the task with no prior demonstrations or examples in the prompt.

In the larger context of prompt engineering, zero-shot prompting is the starting point. As described in the sources, prompt engineering is the process of crafting effective text prompts (sometimes accompanied by other modalities, such as images) to guide an LLM toward a specific output. It is an iterative process: factors such as the model used, its training data, and its output configuration, together with the prompt’s word choice, style, tone, structure, and context, all significantly affect a prompt’s efficacy. Inadequate prompts can lead to ambiguous or inaccurate responses.

The sources highlight that LLMs are tuned to follow instructions and trained on vast amounts of data, which enables them to understand a prompt and generate an answer even without explicit examples. Zero-shot prompting leverages this inherent ability. An example in Table 1 illustrates a zero-shot prompt for classifying movie reviews as positive, neutral, or negative using the gemini-pro model. The prompt simply states the task and provides a movie review followed by “Sentiment:”. The model then attempts to classify the sentiment without having seen any prior examples of positive, neutral, or negative reviews.
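To make the Table 1 setup concrete, here is a minimal sketch of such a zero-shot classification call. It assumes the google-generativeai Python SDK and a GEMINI_API_KEY environment variable, and the review text is a hypothetical stand-in for the review in Table 1; none of these specifics come from the sources.

```python
# Minimal zero-shot sketch: the prompt states the task and ends with
# "Sentiment:", with no labeled examples -- that is what makes it zero-shot.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])  # assumed env var
model = genai.GenerativeModel("gemini-pro")

prompt = """Classify movie reviews as POSITIVE, NEUTRAL or NEGATIVE.
Review: A hypothetical review: the plot meanders, but the lead
performance is quietly devastating and stayed with me for days.
Sentiment:"""

response = model.generate_content(prompt)
print(response.text)  # e.g. "POSITIVE"
```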

However, the sources also indicate that zero-shot prompting is not always sufficient. When it fails to produce the desired results, more advanced techniques such as one-shot and few-shot prompting can be employed, where one or more examples are included in the prompt to guide the model. Providing examples is considered a “powerful teaching tool” and the “most important best practice” in prompt engineering: examples help the model understand the desired output format, style, or pattern, especially for more complex tasks.
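For contrast, the same classification task could be rewritten as a few-shot prompt along the lines of the sketch below. The two labeled reviews are hypothetical demonstrations invented for illustration, not examples from the sources’ tables.

```python
# Few-shot sketch: the same task, now with two labeled demonstrations
# that show the model the desired output format before the real input.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-pro")

few_shot_prompt = """Classify movie reviews as POSITIVE, NEUTRAL or NEGATIVE.

Review: A thoughtful script elevated by two superb lead performances.
Sentiment: POSITIVE

Review: Two hours of my life I will never get back.
Sentiment: NEGATIVE

Review: The pacing drags in the middle, but the finale almost redeems it.
Sentiment:"""

print(model.generate_content(few_shot_prompt).text)
```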

Furthermore, zero-shot prompting exists alongside other important prompt engineering techniques that aim to guide LLM output, such as:

  • System prompting: Sets the overall context and purpose of the LLM.
  • Contextual prompting: Provides specific background information relevant to the current task.
  • Role prompting: Frames the model’s output style and voice.
  • Step-back prompting: Prompts the LLM to first consider a general question related to the specific task.
  • Chain of Thought (CoT): Encourages the LLM to generate intermediate reasoning steps (a minimal sketch follows this list).
  • Self-consistency: Combines sampling and majority voting over diverse reasoning paths.
  • Tree of Thoughts (ToT): Generalizes CoT by allowing the model to explore multiple reasoning paths rather than a single chain.
  • ReAct (reason & act): Interleaves reasoning steps with actions, such as calls to external tools.
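As an illustration of the Chain of Thought entry above, the sketch below appends a “let’s think step by step” style instruction to a simple word problem. The problem is a standard illustrative example, not one taken from the sources, and the SDK usage is the same assumption as in the earlier sketches.

```python
# Chain of Thought sketch: the final instruction nudges the model to write
# out its intermediate reasoning before stating the answer.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-pro")

cot_prompt = """A cafeteria had 23 apples. It used 20 of them to make lunch
and then bought 6 more. How many apples does the cafeteria have now?
Let's think step by step."""

print(model.generate_content(cot_prompt).text)
```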

These techniques, along with zero-shot, one-shot, and few-shot prompting, form the toolkit of prompt engineers. The choice of which technique to use depends on the specific task, the capabilities of the LLM, and the desired output.

The sources also touch upon LLM output configuration, including parameters such as temperature, top-K, and top-P, which influence the randomness and creativity of the LLM’s responses in zero-shot prompting as well. For zero-shot tasks that require no creativity, such as classification, a low temperature is recommended.
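As a sketch, the zero-shot classification above can be made near-deterministic by passing a generation config with a low temperature; the exact parameter values here are illustrative assumptions, not recommendations from the sources.

```python
# Output-configuration sketch: low temperature keeps a zero-shot
# classification near-deterministic; top_k / top_p further restrict sampling.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-pro")

config = genai.GenerationConfig(
    temperature=0.1,      # little randomness: the task has one right answer
    top_k=20,             # sample only among the 20 most likely tokens
    top_p=0.9,            # ...whose cumulative probability stays within 0.9
    max_output_tokens=5,  # a single label needs only a few tokens
)

response = model.generate_content(
    "Classify this review as POSITIVE, NEUTRAL or NEGATIVE.\n"
    "Review: The film was fine, nothing more.\n"
    "Sentiment:",
    generation_config=config,
)
print(response.text)
```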

In summary, zero-shot prompting is the fundamental approach to instructing LLMs, relying on their pre-trained knowledge to understand and execute tasks based solely on a textual description. While simple and sometimes effective, it is often complemented or superseded by more sophisticated prompting techniques within the broader practice of prompt engineering, especially when more specific guidance or improved accuracy is required. Understanding zero-shot prompting provides a crucial foundation for exploring and applying the diverse range of prompt engineering strategies available.