Experiment · March 18, 2025 · Updated March 20, 2025 · 15 min read
Training a LoRA on My Writing Style
I fine-tuned Llama 2 on three years of my blog posts. The results were uncomfortably accurate.
Alex Rivera
AI Researcher
Fine-tuning Llama 2 on my own writing was both exciting and a little unsettling. Here's what happened.
The Process
I gathered three years of blog posts and used LoRA (Low-Rank Adaptation) to fine-tune Llama 2. The training process itself was straightforward; the results were the surprising part.
Training Command
python train_lora.py --model llama-2 --data my_blog_posts.txt --output my_lora_model
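The reason a run like this is cheap enough to do on personal data is LoRA's core trick: the pretrained weight matrix W stays frozen, and training only touches a low-rank update B·A added on top. Here is a minimal NumPy sketch of that idea — the dimensions, rank, and scaling value are illustrative, not Llama 2's real sizes or my actual hyperparameters:

```python
import numpy as np

# Illustrative sizes only, not Llama 2's real dimensions.
d_out, d_in, r = 512, 512, 8   # full weight is d_out x d_in; LoRA rank r << d

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))                   # starts at zero, so the adapter
                                           # is a no-op before training
alpha = 16                                 # LoRA scaling hyperparameter

def lora_forward(x):
    # Full fine-tuning would update all d_out * d_in entries of W;
    # LoRA only trains A and B: r * (d_in + d_out) parameters.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B = 0, the adapted model matches the base model exactly.
assert np.allclose(lora_forward(x), W @ x)

full = d_out * d_in
lora = r * (d_in + d_out)
print(f"trainable params: {lora} vs {full} ({lora/full:.1%} of full fine-tuning)")
```

At these toy sizes the adapter is about 3% of the full weight's parameters; at Llama 2's actual hidden dimensions the fraction is far smaller still, which is what makes fine-tuning on a few years of blog posts practical.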
Results
- The model picked up my favorite phrases
- It even mimicked my typos and quirks
- Some outputs felt eerily personal
"The AI wrote a paragraph I couldn't distinguish from my own work."
Takeaways
- LoRA is powerful for personalizing LLMs
- Be careful what you teach your model!
About Alex Rivera
AI researcher and machine learning engineer exploring the boundaries of language models and personalization.