Commit

Update src/pages/[platform]/ai/concepts/prompting/index.mdx
Co-authored-by: Ian Saultz <[email protected]>
dbanksdesign and atierian authored Nov 13, 2024
1 parent 3adc3c9 commit ca8086e
Showing 1 changed file with 1 addition and 1 deletion.
src/pages/[platform]/ai/concepts/prompting/index.mdx (1 addition, 1 deletion)

@@ -33,7 +33,7 @@ export function getStaticProps(context) {

 LLM prompting refers to the process of providing a language model, such as Claude or Amazon Titan, with a specific input or "prompt" in order to generate a desired output. The prompt can be a sentence, a paragraph, or even a more complex sequence of instructions that guides the model to produce content that aligns with the user's intent.

-The key idea behind prompting is that the way the prompt is structured and worded can significantly influence the model's response. By crafting the prompt carefully, users can leverage the LLM's extensive knowledge and language understanding capabilities to generate high-quality and relevant text, code, or other types of output.
+The way the prompt is structured and worded can significantly influence the model's response. By crafting the prompt carefully, users can leverage the LLM's extensive knowledge and language understanding capabilities to generate high-quality and relevant text, code, or other types of output.

 Effective prompting involves understanding the model's strengths and limitations, as well as experimenting with different prompt formats, styles, and techniques to elicit the desired responses. This can include using specific keywords, providing context, breaking down tasks into steps, and incorporating formatting elements like bullet points or code blocks.

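The prompting techniques the changed page lists (providing context, breaking a task into steps, adding formatting) can be sketched as a small prompt-builder. This is an illustrative sketch only; the function and field names are assumptions, not part of the Amplify docs or any model API:

```python
def build_prompt(context: str, task: str, steps: list[str]) -> str:
    """Assemble a structured prompt: context first, then the task,
    then the task broken into numbered steps."""
    lines = [f"Context: {context}", f"Task: {task}", "Steps:"]
    # Numbering the steps gives the model an explicit order to follow.
    lines += [f"{i}. {step}" for i, step in enumerate(steps, start=1)]
    return "\n".join(lines)

prompt = build_prompt(
    context="You are a documentation assistant.",
    task="Summarize the page in three bullet points.",
    steps=["Read the page", "Identify the key ideas", "Write the bullets"],
)
print(prompt)
```

The resulting string would then be sent as the prompt to whichever model the application uses; the structure itself is what the page's guidance is about.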
