The TinyLlama project is an open initiative to train a compact 1.1B-parameter Llama model on 3 trillion tokens. Designed for low-resource environments, TinyLlama runs with minimal computational and memory requirements.
Key Features
- Lightweight model with only 1.1B parameters
- Optimized for efficiency in environments with restricted compute power
- Trained on 3 trillion tokens for broad language understanding
- Minimal footprint (~638 MB on disk for the default Q4_0 quantization)
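Assuming Ollama is already installed, a quick way to confirm that footprint on your own machine is to pull the model and list what is stored locally:

```bash
# Pull the default TinyLlama build and check its on-disk size
ollama pull tinyllama
ollama list
```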
Usage
- Run TinyLlama with Ollama:

```bash
ollama run tinyllama
```
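Beyond the interactive CLI, Ollama also exposes a local HTTP API (by default on port 11434), so TinyLlama can be queried from scripts or other tools. A minimal sketch, assuming the Ollama server is running and the model has already been pulled:

```bash
# Send a single non-streaming generation request to the local Ollama server
curl http://localhost:11434/api/generate -d '{
  "model": "tinyllama",
  "prompt": "Explain what TinyLlama is in one sentence.",
  "stream": false
}'
```

The reply is a JSON object whose `response` field holds the generated text.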
Technical Details
- Architecture: Llama
- Parameters: 1.1B
- Quantization: Q4_0 (638MB)
- System Role: General-purpose AI assistant (customizable via a system prompt; see the sketch after this list)
- License: Apache 2.0 (open source)
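Because Ollama models can be customized through a Modelfile, the default general-purpose system role can be swapped for one tailored to a specific task. A minimal sketch, where `tinyllama-assistant`, the system prompt, and the temperature value are placeholder choices you can adjust:

```bash
# Define a derived model with a custom system prompt and sampling temperature
cat > Modelfile <<'EOF'
FROM tinyllama
SYSTEM "You are a concise, general-purpose assistant."
PARAMETER temperature 0.7
EOF

# Build and run the customized variant
ollama create tinyllama-assistant -f Modelfile
ollama run tinyllama-assistant
```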