Daniel Han
CEO
2nd October, 2024, 9:30am (GST)
Title: How to make LLM training faster (Advanced)
Abstract: Learn how we at Unsloth make open-source fine-tuning of models like Llama 3, Phi-3, Gemma, and more faster (advanced).
Bio: Hey, I'm Daniel, building Unsloth with my brother! I previously worked at NVIDIA. We make LLM training faster: fine-tuning is 30x faster and uses 85% less VRAM through software and maths tricks, with no approximations! We have a custom optimized autograd engine, rewrite everything in CUDA and Triton, and more! Our open-source version is 2x faster and uses 70% less VRAM. We have 2.5M+ Hugging Face downloads per month.