Awesome resources for in-context learning and prompt engineering: master LLMs such as ChatGPT, GPT-3, and FlanT5, with up-to-date and cutting-edge content.

An Open-Source Engineering Guide for Prompt-in-context-learning from EgoAlpha Lab.

📝 Papers | ⚡️ Playground | 🛠 Prompt Engineering | 🌍 ChatGPT Prompt | ⛳ LLMs Usage Guide

⭐️ Shining ⭐️: This is a fresh, daily-updated collection of resources for in-context learning and prompt engineering. As Artificial General Intelligence (AGI) approaches, let's take action and become super learners, positioning ourselves at the forefront of this exciting era and striving for personal and professional greatness.

The resources include:

🎉Papers🎉: The latest papers about in-context learning or prompt engineering.

🎉Playground🎉: Large language models that enable prompt experimentation.

🎉Prompt Engineering🎉: Prompt techniques for leveraging large language models.

🎉ChatGPT Prompt🎉: Prompt examples that can be applied in our work and daily lives.

🎉LLMs Usage Guide🎉: How to get started quickly with large language models using LangChain.

In the future, there will likely be two types of people on Earth (perhaps even on Mars, but that's a question for Musk):

  • Those who enhance their abilities through the use of AI;
  • Those whose jobs are replaced by AI automation.

💎EgoAlpha: Hello, human👤, are you ready?

Table of Contents

📢 News

☄️ EgoAlpha releases TrustGPT, which focuses on reasoning. Trust the GPT with the strongest reasoning abilities for authentic and reliable answers. You can click here or visit the Playgrounds directly to experience it.

👉 Complete history news 👈


📜 Papers

Click on a paper title to jump to the corresponding PDF.

Survey

A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future (2023.09.27)

The Rise and Potential of Large Language Model Based Agents: A Survey (2023.09.14)

Textbooks Are All You Need II: phi-1.5 technical report (2023.09.11)

Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models (2023.09.03)

Point-Bind&Point-LLM: Aligning Point Cloud with Multi-modality for 3D Understanding, Generation, and Instruction Following (2023.09.01)

Large language models in medicine: the potentials and pitfalls (2023.08.31)

Large Graph Models: A Perspective (2023.08.28)

A Survey on Large Language Model based Autonomous Agents (2023.08.22)

Instruction Tuning for Large Language Models: A Survey (2023.08.21)

Scientific discovery in the age of artificial intelligence (2023.08.01)

👉Complete paper list 🔗 for "Survey"👈

Prompt Engineering

Prompt Design

CodeFusion: A Pre-trained Diffusion Model for Code Generation (2023.10.26)

Woodpecker: Hallucination Correction for Multimodal Large Language Models (2023.10.24)

GraphGPT: Graph Instruction Tuning for Large Language Models (2023.10.19)

MusicAgent: An AI Agent for Music Understanding and Generation with Large Language Models (2023.10.18)

OpenAgents: An Open Platform for Language Agents in the Wild (2023.10.16)

JMedLoRA: Medical Domain Adaptation on Japanese Large Language Models using Instruction-tuning (2023.10.16)

The Consensus Game: Language Model Generation via Equilibrium Search (2023.10.13)

Understanding the Effects of RLHF on LLM Generalisation and Diversity (2023.10.10)

SWE-bench: Can Language Models Resolve Real-World GitHub Issues? (2023.10.10)

Walking Down the Memory Maze: Beyond Context Limit through Interactive Reading (2023.10.08)

👉Complete paper list 🔗 for "Prompt Design"👈

Automatic Prompt

👉Complete paper list 🔗 for "Automatic Prompt"👈

Chain of Thought

CodeFusion: A Pre-trained Diffusion Model for Code Generation (2023.10.26)

Large Language Models Cannot Self-Correct Reasoning Yet (2023.10.03)

A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future (2023.09.27)

LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models (2023.09.21)

Graph of Thoughts: Solving Elaborate Problems with Large Language Models (2023.08.18)

Exploring the Intersection of Large Language Models and Agent-Based Modeling via Prompt Engineering (2023.08.14)

Cumulative Reasoning with Large Language Models (2023.08.08)

AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos? (2023.07.31)

Chain-Of-Thought Prompting Under Streaming Batch: A Case Study (2023.06.01)

Majority Rule: better patching via Self-Consistency (2023.05.31)

👉Complete paper list 🔗 for "Chain of Thought"👈

Knowledge Augmented Prompt

Are Pre-trained Language Models Useful for Model Ensemble in Chinese Grammatical Error Correction? (2023.05.24)

Referral Augmentation for Zero-Shot Information Retrieval (2023.05.24)

Decomposing Complex Queries for Tip-of-the-tongue Retrieval (2023.05.24)

LLMDet: A Large Language Models Detection Tool (2023.05.24)

OverPrompt: Enhancing ChatGPT Capabilities through an Efficient In-Context Learning Approach (2023.05.24)

Frugal Prompting for Dialog Models (2023.05.24)

Bi-Drop: Generalizable Fine-tuning for Pre-trained Language Models via Adaptive Subnetwork Optimization (2023.05.24)

In-Context Demonstration Selection with Cross Entropy Difference (2023.05.24)

A Causal View of Entity Bias in (Large) Language Models (2023.05.24)

SelfzCoT: a Self-Prompt Zero-shot CoT from Semantic-level to Code-level for a Better Utilization of LLMs (2023.05.19)

👉Complete paper list 🔗 for "Knowledge Augmented Prompt"👈

Evaluation & Reliability

TouchStone: Evaluating Vision-Language Models by Language Models (2023.08.31)

Shepherd: A Critic for Language Model Generation (2023.08.08)

Self-consistency for open-ended generations (2023.07.11)

Jailbroken: How Does LLM Safety Training Fail? (2023.07.05)

Towards Measuring the Representation of Subjective Global Opinions in Language Models (2023.06.28)

On the Reliability of Watermarks for Large Language Models (2023.06.07)

SETI: Systematicity Evaluation of Textual Inference (2023.05.24)

From Words to Wires: Generating Functioning Electronic Devices from Natural Language Descriptions (2023.05.24)

Testing the General Deductive Reasoning Capacity of Large Language Models Using OOD Examples (2023.05.24)

EvEval: A Comprehensive Evaluation of Event Semantics for Large Language Models (2023.05.24)

👉Complete paper list 🔗 for "Evaluation & Reliability"👈

In-context Learning

CodeFusion: A Pre-trained Diffusion Model for Code Generation (2023.10.26)

SuperHF: Supervised Iterative Learning from Human Feedback (2023.10.25)

Woodpecker: Hallucination Correction for Multimodal Large Language Models (2023.10.24)

MemGPT: Towards LLMs as Operating Systems (2023.10.12)

LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models (2023.09.21)

Adapting Large Language Models via Reading Comprehension (2023.09.18)

Giraffe: Adventures in Expanding Context Lengths in LLMs (2023.08.21)

Prompt Switch: Efficient CLIP Adaptation for Text-Video Retrieval (2023.08.15)

Exploring the Intersection of Large Language Models and Agent-Based Modeling via Prompt Engineering (2023.08.14)

PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification (2023.08.05)

👉Complete paper list 🔗 for "In-context Learning"👈

Multimodal Prompt

BiLL-VTG: Bridging Large Language Models and Lightweight Visual Tools for Video-based Texts Generation (2023.10.16)

MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens (2023.10.03)

Kosmos-2.5: A Multimodal Literate Model (2023.09.20)

Investigating the Catastrophic Forgetting in Multimodal Large Language Models (2023.09.19)

Physically Grounded Vision-Language Models for Robotic Manipulation (2023.09.05)

Point-Bind&Point-LLM: Aligning Point Cloud with Multi-modality for 3D Understanding, Generation, and Instruction Following (2023.09.01)

PE-MED: Prompt Enhancement for Interactive Medical Image Segmentation (2023.08.26)

SeamlessM4T-Massively Multilingual & Multimodal Machine Translation (2023.08.22)

VisIT-Bench: A Benchmark for Vision-Language Instruction Following Inspired by Real-World Use (2023.08.12)

👉Complete paper list 🔗 for "Multimodal Prompt"👈

Prompt Application

Narratron: Collaborative Writing and Shadow-playing of Children Stories with Large Language Models (2023.10.29)

CodeFusion: A Pre-trained Diffusion Model for Code Generation (2023.10.26)

GraphGPT: Graph Instruction Tuning for Large Language Models (2023.10.19)

Creative Robot Tool Use with Large Language Models (2023.10.19)

MusicAgent: An AI Agent for Music Understanding and Generation with Large Language Models (2023.10.18)

BiLL-VTG: Bridging Large Language Models and Lightweight Visual Tools for Video-based Texts Generation (2023.10.16)

JMedLoRA: Medical Domain Adaptation on Japanese Large Language Models using Instruction-tuning (2023.10.16)

Table-GPT: Table-tuned GPT for Diverse Table Tasks (2023.10.13)

MemGPT: Towards LLMs as Operating Systems (2023.10.12)

Ferret: Refer and Ground Anything Anywhere at Any Granularity (2023.10.11)

👉Complete paper list 🔗 for "Prompt Application"👈

Foundation Models

The Foundation Model Transparency Index (2023.10.19)

Language Models Represent Space and Time (2023.10.03)

Effective Distillation of Table-based Reasoning Ability from LLMs (2023.09.22)

Q-Transformer: Scalable Offline Reinforcement Learning via Autoregressive Q-Functions (2023.09.18)

Replacing softmax with ReLU in Vision Transformers (2023.09.15)

ZGaming: Zero-Latency 3D Cloud Gaming by Image Prediction (2023.09.01)

Explaining Vision and Language through Graphs of Events in Space and Time (2023.08.29)

PE-MED: Prompt Enhancement for Interactive Medical Image Segmentation (2023.08.26)

SkipcrossNets: Adaptive Skip-cross Fusion for Road Detection (2023.08.24)

SeqGPT: An Out-of-the-box Large Language Model for Open Domain Sequence Understanding (2023.08.21)

👉Complete paper list 🔗 for "Foundation Models"👈

👨‍💻 LLM Usage

Large language models (LLMs) are becoming a revolutionary technology that is shaping the development of our era. By building on LLMs, developers can create applications that were previously only possible in our imaginations. However, using these LLMs often comes with certain technical barriers, and even at the introductory stage, people may be intimidated by the cutting-edge technology. Do you have questions like the following?

  • How can an LLM be built into an application using code?
  • How can it be used and deployed in your own programs?

💡 If there were a tutorial accessible to all audiences, not just computer science professionals, it would provide detailed and comprehensive guidance for getting started and becoming productive in a short amount of time, ultimately making it possible to use LLMs flexibly and creatively to build the programs you envision. And now, just for you: the most detailed and comprehensive LangChain beginner's guide, sourced from the official LangChain website but further adjusted, accompanied by detailed, annotated code examples that teach the code line by line and sentence by sentence to all audiences.

Click 👉here👈 to take a quick tour of getting started with LLMs.
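
As a taste of what the guide walks through, here is a minimal sketch of calling an LLM through LangChain. It assumes the `langchain` and `openai` packages are installed and that an OpenAI API key is set in the `OPENAI_API_KEY` environment variable; the model name, prompt text, and variable names are illustrative choices, not taken from the guide itself.

```python
# Minimal LangChain sketch (assumes `pip install langchain openai` and an
# OPENAI_API_KEY environment variable; model name and prompt are illustrative).
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Wrap an OpenAI completion model behind LangChain's common LLM interface.
llm = OpenAI(model_name="gpt-3.5-turbo-instruct", temperature=0.7)

# Define a reusable prompt template with a single input variable.
prompt = PromptTemplate(
    input_variables=["topic"],
    template="Explain {topic} to a beginner in three sentences.",
)

# Chain the template and the model together, then run it on a concrete topic.
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(topic="in-context learning"))
```

The same three-step pattern of a model wrapper, a prompt template, and a chain that ties them together underlies most introductory LangChain examples; the guide linked above builds on this kind of example with line-by-line annotations.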

✉️ Contact

This repo is maintained by EgoAlpha Lab. Questions and discussions are welcome via [email protected].

We are willing to engage in discussions with friends from the academic and industrial communities, and explore the latest developments in prompt engineering and in-context learning together.

🙏 Acknowledgements

Thanks to the PhD students from EgoAlpha Lab and the other contributors who have participated in this repo. We will continue to improve the project and maintain this community. We would also like to express our sincere gratitude to the authors of the relevant resources; your efforts have broadened our horizons and enabled us to perceive a more wonderful world.
