Experiment · March 18, 2025 · Updated March 20, 2025 · 15 min read

Training a LoRA on My Writing Style

I fine-tuned Llama 2 on three years of my blog posts. The results were uncomfortably accurate.

Alex Rivera
AI Researcher
[Figure: Neural network visualization representing the fine-tuning process of language models]

Fine-tuning Llama 2 on my own writing was both exciting and a little unsettling. Here's what happened.

The Process

I gathered three years of blog posts and used LoRA (Low-Rank Adaptation) to fine-tune Llama 2. The training itself was straightforward; the results were the surprising part.
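
For anyone who wants to reproduce the data step, here is a minimal sketch of how the posts can be collected into one training file. The posts/ directory and markdown layout are assumptions for illustration, not my exact setup.

Dataset Preparation (sketch)
from pathlib import Path

# Concatenate every blog post into a single plain-text training file.
# The posts/ directory and *.md layout are illustrative assumptions.
posts = sorted(Path("posts").glob("*.md"))
with open("my_blog_posts.txt", "w", encoding="utf-8") as out:
    for post in posts:
        out.write(post.read_text(encoding="utf-8").strip() + "\n\n")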

Training Command
python train_lora.py --model llama-2 --data my_blog_posts.txt --output my_lora_model
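
That one-liner hides the interesting parts, so here is roughly what a script like train_lora.py can look like, built on the Hugging Face transformers and peft libraries. The model ID, LoRA rank, and hyperparameters below are reasonable defaults for illustration, not necessarily the exact values I used.

train_lora.py (sketch)
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base = "meta-llama/Llama-2-7b-hf"  # illustrative; any Llama 2 checkpoint works
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)

# LoRA freezes the base weights and trains small rank-decomposition
# matrices injected into the attention projections.
lora_config = LoraConfig(r=16, lora_alpha=32,
                         target_modules=["q_proj", "v_proj"],
                         lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)

dataset = load_dataset("text", data_files="my_blog_posts.txt")["train"]
dataset = dataset.map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=512),
    remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="my_lora_model", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=2e-4,
                           fp16=True),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("my_lora_model")  # writes only the small adapter weights

Because only the adapter matrices train, the saved output is megabytes rather than a full copy of the model, which is what makes this kind of personalization cheap.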

Results

  • The model picked up my favorite phrases
  • It even mimicked my typos and quirks
  • Some outputs felt eerily personal
"The AI wrote a paragraph I couldn't distinguish from my own work."

Takeaways

  • LoRA is powerful for personalizing LLMs
  • Be careful what you teach your model!
#lora #fine-tuning #llama-2 #experiment

About Alex Rivera

AI researcher and machine learning engineer exploring the boundaries of language models and personalization.

Related Experiments

  • Training Style LoRAs on Architectural Drawings (experiment, 15 min read)
  • How DALL-E Understands Space and Form (research, 20 min read)
  • Building an AI Art Gallery in Esy (build, 8 min read)
