Tag: shift-short-attention
- LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models (27 Sep 2023)
This is my reading note on LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models. The paper proposes a method to fine-tune a pretrained LLM to handle long context. To this end, it divides the tokens into groups and performs attention within each group; for half of the attention heads, it shifts the groups by half the group size, so that information can flow across group boundaries.
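The group-and-shift idea can be sketched as below. This is my own minimal NumPy sketch, not the paper's implementation: `s2_attn` and its tensor layout `(heads, seq_len, head_dim)` are assumptions for illustration, and the causal mask used in actual LLM training is omitted for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def s2_attn(q, k, v, group_size):
    """Sketch of shifted short attention.

    q, k, v: arrays of shape (heads, seq_len, head_dim);
    seq_len must be divisible by group_size.
    The first half of the heads attend within unshifted groups; the
    second half within groups shifted by half the group size.
    """
    H, T, D = q.shape
    half = group_size // 2

    def shift(x, s):
        # roll only the second half of the heads along the sequence axis
        x = x.copy()
        x[H // 2:] = np.roll(x[H // 2:], s, axis=1)
        return x

    q, k, v = shift(q, -half), shift(k, -half), shift(v, -half)
    # reshape so attention is computed independently within each group
    g = T // group_size
    qg = q.reshape(H, g, group_size, D)
    kg = k.reshape(H, g, group_size, D)
    vg = v.reshape(H, g, group_size, D)
    scores = qg @ kg.transpose(0, 1, 3, 2) / np.sqrt(D)
    out = (softmax(scores) @ vg).reshape(H, T, D)
    # undo the shift for the second half of the heads
    return shift(out, half)
```

Because each group attends only to its own `group_size` tokens, the attention cost drops from quadratic in the full sequence length to quadratic in the group size, while the shifted heads keep adjacent groups connected.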