Prompt Engineering 101 is a post series designed to unveil the principles and techniques of prompt engineering: the art of crafting clear and effective texts to prompt language models and get exactly what you are looking for. This series is not restricted to autoregressive language models like ChatGPT; rather, it addresses prompt engineering as a general technique applicable to any generative model, including text-to-image models such as Stable Diffusion or Midjourney. It will also cover the limitations of language models, such as hallucinations, techniques to prevent them, privacy and security concerns, and more.
Of course, there is already a variety of tutorials on prompt engineering: blogs, LLM boot camps, reading lists, and so on. This series does not pretend to be the best source or an ultimate reference. Far from it: it merely intends to offer a complementary, practical, and easy-to-read look at the subject.
This post series is inspired by the outstanding courses of Andrew Ng and Isabella Fulford, as well as the excellent LLM Bootcamp taught by Charles Frye, Sergey Karayev, and Josh Tobin (both courses are mentioned in the resources section). After completing these learning programs, I found myself eager to delve deeper, exploring academic papers and tutorials. This led me on a journey through the Internet, discerning high-quality resources from junk. I even ordered two books from Amazon on prompt engineering and generative AI for artwork, which turned out to be poorly written and a complete waste of money. After several weeks of intense work, headaches, and several cups of coffee, I ended up with a collection of resources of great value on prompt engineering. In the spirit of helping others on their prompt engineering journey, I decided to share my experience by writing this series of posts.
I hope these posts will be helpful and that you enjoy reading them as much as I enjoyed writing them.