Prompt Generation by LLMs, also referred to as Automatic Prompt Engineering, is recognized as a technique within the larger field of Prompt Engineering.
Here’s what the sources say about this topic:
- **Automatic Prompt Engineering as a Technique:** The “Prompt Engineering” document dedicates a specific section to “Automatic Prompt Engineering”. This clearly positions it as a recognized method within the broader domain of designing effective prompts for Large Language Models (LLMs).
- **LLMs as Prompt Engineers:** The title of one of the cited endnotes, “Automatic Prompt Engineering - Large Language Models are Human-Level Prompt Engineers”, strongly suggests the core concept: leveraging the capabilities of LLMs to generate prompts themselves.
- **Context within Prompting Techniques:** The inclusion of “Automatic Prompt Engineering” alongside other prompting techniques like zero-shot, few-shot, and Chain of Thought (CoT) indicates that it’s considered a way to develop or refine prompts, potentially in contrast to manual creation and iteration by human prompt engineers.
- **Implication for Efficiency and Effectiveness:** The very existence of this category suggests an effort to automate or improve the process of finding optimal prompts. Since manual prompt engineering can be an iterative process requiring “tinkering to find the best prompt”, automatic methods could lead to more efficient discovery of effective prompts, or even uncover prompts that humans might not naturally devise.
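The generate-and-select idea behind this can be sketched in a few lines. This is a minimal illustration, not the method described in the sources: `generate_candidates` stands in for an LLM call that proposes instruction prompts (stubbed here with fixed strings), and candidates are scored by accuracy on a small labeled evaluation set.

```python
def generate_candidates(task_description, n=3):
    # Stub standing in for an LLM call along the lines of
    # "Write {n} candidate instruction prompts for: {task_description}".
    return [
        f"Answer the question: {task_description}",
        f"You are an expert. {task_description}",
        f"Think step by step, then solve: {task_description}",
    ][:n]

def score_prompt(prompt, eval_set, model):
    # Fraction of evaluation examples the model answers correctly
    # when given this candidate prompt.
    correct = sum(model(prompt, x) == y for x, y in eval_set)
    return correct / len(eval_set)

def best_prompt(task_description, eval_set, model, n=3):
    # Generate candidates, score each one, and keep the highest scorer.
    candidates = generate_candidates(task_description, n)
    scored = sorted(
        ((score_prompt(p, eval_set, model), p) for p in candidates),
        reverse=True,
    )
    return scored[0][1]
```

In a real setting, `model` would wrap an actual LLM call and the evaluation set would come from the target task; the selection loop itself stays the same.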
- **Integration with the Prompt Engineering Workflow:** While the sources don’t provide specific details on how automatic prompt engineering is implemented or integrated into a typical prompt engineering workflow, its inclusion implies that it’s a tool or approach that can be used alongside other methods to achieve desired LLM outputs. The best practices for prompt engineering, such as tracking prompt iterations and experimenting, would likely also apply to the development and evaluation of automatically generated prompts.
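The tracking practice mentioned above could be as simple as an append-only log of every prompt attempt and its score. This is a hypothetical sketch (the `log_attempt` helper and CSV layout are not from the sources), but it applies equally to hand-written and automatically generated prompts:

```python
import csv
from datetime import datetime, timezone

def log_attempt(path, prompt, score, note=""):
    # Append one prompt iteration (timestamp, score, note, prompt text)
    # to a CSV log so attempts can be compared later.
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), score, note, prompt]
        )
```

A log like this makes it easy to see whether an automatically generated candidate actually beats the current best manual prompt before adopting it.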
In summary, the sources highlight Automatic Prompt Engineering as a distinct and potentially powerful approach within the larger context of Prompt Engineering. It leverages LLMs themselves to generate or optimize prompts, pointing towards more automated, and potentially more effective, ways of interacting with and guiding these models. While the sources don’t delve into the specifics of how this is achieved, its recognition as a prompting technique underscores its importance in the evolving field of prompt engineering.