Module 4: AI Infrastructure & Local LLMs

4.3 Training a Custom Model for Pakistani Use Cases

Training a Custom Model for Pakistani Use Cases

You've built your dataset. You've configured your LoRA adapter. Now it's time to actually run the training loop and produce a model that understands Pakistani context better than a general-purpose LLM does out of the box. This lesson walks through a complete end-to-end training run using TRL's SFTTrainer, one of the most widely used, production-ready fine-tuning frameworks available today.

The Full Training Stack

For a QLoRA fine-tuning run on Pakistani data, you need four components working together:

  1. Base model: A pre-trained open-source LLM (Llama 3 8B, Mistral 7B, or Qwen 2.5 7B)
  2. PEFT/LoRA config: The adapter configuration from the previous lessons
  3. Dataset: Your cleaned Alpaca-format Pakistani training data
  4. SFTTrainer: Hugging Face's Supervised Fine-Tuning trainer that handles the training loop
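
Before wiring these together, it's worth confirming the stack imports cleanly (this is also step 1 of the practice lab below). A minimal sanity check, assuming the packages were installed with pip (e.g., pip install transformers peft trl bitsandbytes datasets accelerate):

python
# Verify the QLoRA stack is installed and a GPU is visible.
import torch
import transformers, peft, trl, datasets, bitsandbytes

print("transformers:", transformers.__version__)
print("peft:", peft.__version__)
print("trl:", trl.__version__)
print("CUDA available:", torch.cuda.is_available())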

The Training Script

python
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig
from trl import SFTTrainer
from datasets import load_dataset

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
tokenizer.pad_token = tokenizer.eos_token  # Llama 3 ships without a pad token

# Load the 4-bit quantized base model (the QLoRA setup from lesson 4.1)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    ),
    device_map="auto",
)

# LoRA adapter configuration (illustrative values; use your settings from lesson 4.1)
lora_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

# Load your dataset
dataset = load_dataset("json", data_files={"train": "karachi_data_train.json",
                                           "test": "karachi_data_val.json"})

# Training arguments
training_args = TrainingArguments(
    output_dir="./karachi-llm-v1",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,  # effective batch = 2 * 4 = 8
    learning_rate=2e-4,
    fp16=True,                      # bf16=True is an option on Ampere+ GPUs (A100)
    logging_steps=10,
    evaluation_strategy="epoch",    # renamed to eval_strategy in newer transformers
    save_strategy="epoch",          # must match the evaluation strategy for
    load_best_model_at_end=True,    # load_best_model_at_end to work
)

# TRL 0.8-era API; newer TRL moves dataset_text_field/max_seq_length into SFTConfig
trainer = SFTTrainer(
    model=model,
    peft_config=lora_config,        # SFTTrainer applies the adapter for you
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,
    args=training_args,
    dataset_text_field="text",      # the formatted field from lesson 4.2
    max_seq_length=1024,
)

trainer.train()
trainer.model.save_pretrained("./karachi-llm-v1-adapter")

Understanding the Hyperparameters

Learning rate (2e-4): This is the most important hyperparameter. Too high (e.g., 1e-3) and the adapter diverges; too low (e.g., 1e-5) and training is painfully slow. For LoRA fine-tuning, 1e-4 to 3e-4 is the standard range. Always use a learning rate scheduler; cosine decay is a common choice, but note that transformers' TrainingArguments defaults to linear, so set it explicitly, as sketched below.
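
A sketch of the scheduler-related arguments, assuming the same TrainingArguments as the script above (the warmup_ratio value is an illustrative choice):

python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./karachi-llm-v1",
    learning_rate=2e-4,
    lr_scheduler_type="cosine",  # decay from 2e-4 toward zero over training
    warmup_ratio=0.03,           # short linear warmup first (illustrative value)
)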

Epochs: 3 epochs is often sufficient for small datasets (500-2,000 examples). Watch your validation loss: if it plateaus or starts increasing, stop training immediately (early stopping), which the trainer can automate for you, as shown below.
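
A minimal sketch of that automation using transformers' EarlyStoppingCallback, assuming the trainer object from the script above (the patience value is an illustrative choice):

python
from transformers import EarlyStoppingCallback

# Stop when validation loss fails to improve for 2 consecutive evaluations.
# Requires load_best_model_at_end=True and matching eval/save strategies,
# both already set in the training arguments above.
trainer.add_callback(EarlyStoppingCallback(early_stopping_patience=2))
trainer.train()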

Gradient accumulation: If your GPU can only handle batch size 2, gradient_accumulation_steps=4 simulates a batch size of 8. This stabilizes training without requiring more VRAM.

Pakistani Use Case: Training on Urdu-Roman Customer Queries

For a real Karachi customer service bot, your training data might include conversations like:

json
{
  "instruction": "Respond to this WhatsApp query from a customer.",
  "input": "Bhai ghar delivery available hai DHA Phase 6 mein? Aur COD hota hai?",
  "output": "Ji bilkul! DHA Phase 6 mein delivery available hai, usually 45-60 minutes. Cash on Delivery bhi accept karte hain. Minimum order PKR 500 hai. Kuch order karna hai aaj?"
}

In English: the customer asks whether home delivery is available in DHA Phase 6 and whether cash on delivery (COD) is offered; the reply confirms both, quotes a 45-60 minute window and a PKR 500 minimum order, and invites an order. After training on 500-1,000 such examples, the model will naturally respond in the appropriate mix of Roman Urdu and English, understand Pakistani addresses and payment methods, and match the tone of a professional Karachi business.
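
Note that the trainer above reads dataset_text_field="text", so records like this must first be flattened into a single string. A minimal formatting sketch in the spirit of lesson 4.2 (the exact template used there may differ; match it precisely, since prompt format affects results):

python
def to_text(example):
    # Collapse one Alpaca-format record into a single training string.
    return {
        "text": (
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            f"### Response:\n{example['output']}"
        )
    }

# Apply to every record before handing the dataset to SFTTrainer.
dataset = dataset.map(to_text)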

Monitoring Training Progress

Loss curves tell you everything. A healthy training run shows:

  • Training loss declining steadily from ~2.5 to ~0.8 over 3 epochs
  • Validation loss tracking training loss (not diverging)
  • No sudden spikes (which indicate learning rate is too high)

Use Weights & Biases (wandb) for free training visualization; it takes a single argument to enable: TrainingArguments(..., report_to="wandb"). Sign up for free at wandb.ai.
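
A minimal sketch of the wandb hookup (the run_name is an illustrative choice):

python
import wandb
from transformers import TrainingArguments

wandb.login()  # prompts once for the API key from wandb.ai

training_args = TrainingArguments(
    output_dir="./karachi-llm-v1",
    report_to="wandb",               # stream loss curves to your wandb dashboard
    run_name="karachi-llm-v1-run1",  # illustrative name for this run
)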

Saving and Loading the Adapter

The trained LoRA adapter is surprisingly small — typically 10-50 MB for a rank-8 adapter on a 7B model. Save it with:

python
model.save_pretrained("./my-pakistan-adapter")      # writes only the adapter weights and config
tokenizer.save_pretrained("./my-pakistan-adapter")  # ship the tokenizer alongside for deployment

To use it: load the base model, then load the adapter on top using PeftModel.from_pretrained(). This means you can store one base model and multiple adapters for different clients or use cases — a restaurant adapter, a legal adapter, an HR adapter — each just 20-30 MB.
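
A loading sketch, assuming the adapter directory saved above (the extra adapter paths and names are hypothetical):

python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the heavyweight base model once...
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B", device_map="auto"
)

# ...then attach the lightweight adapter on top.
model = PeftModel.from_pretrained(base, "./my-pakistan-adapter")

# Register further adapters and hot-swap between them at runtime
# (hypothetical paths/names for the multi-client setup described above):
model.load_adapter("./restaurant-adapter", adapter_name="restaurant")
model.set_adapter("restaurant")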

Estimating Training Cost in PKR

For a 500-example dataset training Llama 3 8B for 3 epochs on Google Colab Pro (A100 GPU):

  • Training time: approximately 45-90 minutes
  • Colab Pro cost: roughly PKR 4,500/month, metered in compute units (not unlimited)
  • Equivalent AWS/Paperspace cost: $0.80-1.50 per training run (PKR 220-420)
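
A back-of-envelope check on those numbers, assuming the script's effective batch size of 8 and an exchange rate of roughly PKR 280 per USD:

python
import math

examples, epochs = 500, 3
effective_batch = 2 * 4                       # per-device batch * grad accumulation
steps = math.ceil(examples / effective_batch) * epochs
print(steps)                                  # 189 optimizer steps in total

pkr_per_usd = 280                             # assumed exchange rate
print(round(0.80 * pkr_per_usd), round(1.50 * pkr_per_usd))  # ~224 to ~420 PKR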

For a client charging PKR 50,000 for a custom chatbot, the compute cost is less than 1% of revenue. The value is in the data curation and the deployment, not the GPU time.

Practice Lab

  1. Set up a Colab notebook with the full QLoRA stack: install transformers, peft, trl, bitsandbytes, datasets. Verify everything imports without error.

  2. Run a 10-example toy training run using any small model (e.g., Qwen/Qwen2-0.5B) and 10 Alpaca-format examples. Confirm training loss decreases over 3 epochs. This validates your pipeline is working before committing to a full run.

  3. Inspect the saved adapter: After training, look at the file size of the saved adapter directory. Note how much smaller it is than the full model. Consider: how would you version and distribute adapters to different clients?

Key Takeaways

  • SFTTrainer handles the full fine-tuning loop with minimal boilerplate — focus your energy on data quality
  • Learning rate 1e-4 to 3e-4 is the standard range for LoRA; watch validation loss to avoid overfitting
  • Pakistani language mix (Roman Urdu + English) requires specific training examples — general datasets won't teach this
  • A trained LoRA adapter is 10-50 MB — small enough to version-control and distribute to multiple deployments


Quiz: Training a Custom Model for Pakistani Use Cases

4 questions to test your understanding. Score 60% or higher to pass.