    LLMs are few shot learners


    Sep 09, 2024 · 1 min read

    Paper PDFs: 4-lms_are_few_shot_learners.pdf, 6-Language_Models_are_Few-Shot_Learners.pdf

    • I skimmed the paper, focusing on how GPT-3 is prompted under the different x-shot settings.

    • Figure 2.1: how GPT-3 performs a task in the zero-shot, one-shot, and few-shot settings. The model is given:

      • A task description
      • Zero, one, or more examples ( … → …)
      • A prompt to complete (… → ?)
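The prompt structure above can be sketched in code. This is an illustrative helper (not from the paper); `build_prompt` and the `=>` separator are assumptions, though the translation demonstrations mirror those shown in the paper's Figure 2.1:

```python
def build_prompt(task_description, examples, query):
    """Assemble a GPT-3 style x-shot prompt.

    Zero-shot: examples == []; one-shot: one pair; few-shot: several pairs.
    """
    lines = [task_description]
    for source, target in examples:      # each demonstration: input => output
        lines.append(f"{source} => {target}")
    lines.append(f"{query} =>")          # the model completes the answer
    return "\n".join(lines)

# Few-shot example (English-to-French translation, as in Figure 2.1)
prompt = build_prompt(
    "Translate English to French:",
    [("sea otter", "loutre de mer"), ("cheese", "fromage")],
    "peppermint",
)
print(prompt)
```

No gradient updates happen in any of these settings; the "learning" is purely in-context, conditioned on the demonstrations in the prompt.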
