📚 LLM in a Flash: Efficient Large Language Model Inference with Limited Memory
News section: 🔧 AI News
🔗 Source: machinelearning.apple.com
This paper was accepted at ACL 2024. Large language models (LLMs) are central to modern natural language processing, delivering exceptional performance across a wide range of tasks. However, their substantial computational and memory requirements present challenges, especially for devices with limited DRAM capacity. This paper tackles the challenge of efficiently running LLMs that exceed the available DRAM capacity by storing the model parameters in flash memory and bringing them into DRAM on demand. Our method involves constructing an inference cost model that takes into account the characteristics of…
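To make the core idea concrete, here is a minimal sketch of on-demand parameter loading under the stated constraint (flash holds the full model, DRAM holds only a bounded working set). This is not the paper's implementation: the memory-mapped file standing in for flash, the LRU eviction policy, and all names (`FlashWeightStore`, `dram_budget_bytes`, etc.) are illustrative assumptions.

```python
# Illustrative sketch only: parameters live in "flash" (a memory-mapped
# file here) and are pulled into a bounded DRAM cache when a layer runs.
from collections import OrderedDict
import numpy as np

class FlashWeightStore:
    def __init__(self, path, layer_shapes, dram_budget_bytes, dtype=np.float16):
        self.dtype = np.dtype(dtype)
        self.shapes = layer_shapes                      # {layer_name: shape}
        self.offsets, offset = {}, 0
        for name, shape in layer_shapes.items():
            self.offsets[name] = offset
            offset += int(np.prod(shape)) * self.dtype.itemsize
        # Memory-mapped file stands in for flash: pages are read lazily.
        self.flash = np.memmap(path, dtype=np.uint8, mode="r", shape=(offset,))
        self.budget = dram_budget_bytes
        self.cache = OrderedDict()                      # LRU: name -> ndarray
        self.cached_bytes = 0

    def get(self, name):
        """Return a layer's weights, reading from flash on a cache miss."""
        if name in self.cache:
            self.cache.move_to_end(name)                # DRAM hit: refresh LRU order
            return self.cache[name]
        shape = self.shapes[name]
        nbytes = int(np.prod(shape)) * self.dtype.itemsize
        # Evict least-recently-used layers until the new one fits in DRAM.
        while self.cached_bytes + nbytes > self.budget and self.cache:
            _, evicted = self.cache.popitem(last=False)
            self.cached_bytes -= evicted.nbytes
        start = self.offsets[name]
        # One large contiguous read from flash, then a copy into DRAM.
        raw = self.flash[start:start + nbytes]
        weights = np.array(raw).view(self.dtype).reshape(shape)
        self.cache[name] = weights
        self.cached_bytes += nbytes
        return weights
```

The single contiguous slice per layer reflects one of the two cost levers the abstract hints at (reading data in larger, sequential chunks); the bounded cache reflects the other (limiting how much is transferred from flash per inference step).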