
Question Answering with LLMs

This section collects prompts for testing the question-answering capabilities of LLMs, covering closed-domain, open-domain, and science question answering.
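As a minimal illustration of the closed-domain style of prompt used in this section, the sketch below assembles a context-plus-question prompt that instructs the model to answer only from the supplied passage. The helper name and exact template wording are illustrative assumptions, not fixed by the guide.

```python
def build_qa_prompt(context: str, question: str) -> str:
    """Assemble a closed-domain QA prompt (illustrative template):
    the model is instructed to answer only from the given context
    and to abstain when unsure."""
    return (
        "Answer the question based on the context below. "
        "Keep the answer short and concise. "
        'Respond "Unsure about answer" if not sure about the answer.\n\n'
        f"Context: {context}\n\n"
        f"Question: {question}\n\n"
        "Answer:"
    )

# Example usage with a placeholder passage and question.
prompt = build_qa_prompt(
    context="Mistral 7B is a 7-billion-parameter open-weight language model.",
    question="How many parameters does Mistral 7B have?",
)
print(prompt)
```

The resulting string can be sent as the user message to any chat or completion endpoint; the abstention instruction ("Unsure about answer") is one common way to reduce hallucinated answers when the context does not contain the information.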

Last updated on September 19, 2024

Copyright © 2024 DAIR.AI