
LoRA

Jun 23, 2025 · 1 min read

LoRA (Low-Rank Adaptation) is a parameter-efficient fine-tuning technique that injects low-rank matrices into pre-trained language models, enabling adaptation to new tasks with minimal additional parameters and compute overhead.
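The update can be illustrated with a minimal NumPy sketch (an assumption for clarity, not the reference implementation): the pretrained weight `W` stays frozen, and only a low-rank pair `A`, `B` is trained, so the effective weight becomes `W + (alpha / r) * B @ A`. All dimensions and the scaling factor here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 32, 4, 8  # r << min(d_in, d_out)

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))                   # trainable, zero init => update starts at 0

def lora_forward(x):
    # y = W x + (alpha / r) * B A x; only A and B receive gradients during fine-tuning
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)
# Because B starts at zero, the adapted model initially matches the frozen model.
assert np.allclose(y, W @ x)

# Parameter comparison: a full update would train d_out * d_in weights,
# while LoRA trains only r * (d_in + d_out).
full_params = d_out * d_in        # 2048
lora_params = r * (d_in + d_out)  # 384
```

With these toy dimensions the adapter trains 384 parameters instead of 2048; at transformer scale the ratio is far more dramatic, which is the source of LoRA's parameter efficiency.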

Sources:

  • Hu et al.: LoRA: Low-Rank Adaptation of Large Language Models
  • Hugging Face PEFT: LoRA Documentation

