<small>**_Stable Diffusion XL + LoRA: Building a dynamic pipeline for real-time visual style-switching._**</small>
I built a modular pipeline using LoRA (Low-Rank Adaptation) adapters and the PEFT library to see how easy it is to tune visual styles.
I used adapters to layer a low-rank delta onto the frozen base weights: $W + \Delta W$, where $\Delta W = \alpha AB$ is the product of two small matrices scaled by a strength $\alpha$. I wanted to understand how LoRA adapters let you change a model's visual style without reloading 6GB of weights every time.
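The core algebra can be sketched in a few lines of NumPy. This is a toy illustration, not SDXL's actual layer shapes: the dimensions, rank, and $\alpha$ below are made up, and a real adapter applies this per attention layer rather than to one matrix.

```python
import numpy as np

d, k, r = 64, 64, 4            # layer dims and LoRA rank (illustrative only)
rng = np.random.default_rng(0)

W = rng.normal(size=(d, k))    # frozen base weight, never updated
A = rng.normal(size=(d, r))    # small LoRA factor (d x r)
B = rng.normal(size=(r, k))    # small LoRA factor (r x k)
alpha = 0.8                    # adapter strength

delta_W = alpha * A @ B        # low-rank style delta, rank <= r
W_styled = W + delta_W         # effective weight with the adapter applied

# The adapter stores only d*r + r*k numbers instead of d*k,
# which is why swapping styles is cheap compared to reloading W.
print(np.linalg.matrix_rank(delta_W))   # -> 4
print(A.size + B.size, "vs", W.size)    # -> 512 vs 4096
```

Because only $A$ and $B$ are stored per style, toggling a style means adding or removing a small delta, while $W$ stays in memory untouched.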
Instead of retraining from scratch, I loaded tiny style adapters and toggled between them.
I added adapters one at a time, then started combining them and interpolating between their weights.
![[../assets/Attachments/Lora/Gemini_Generated_Image_w27il1w27il1w27i.png]]
| $W = W_{\text{stable-diffusion}} + W_{\text{ikea}}$ | $W = W_{\text{stable-diffusion}} + W_{\text{toyface}}$ | $W = W_{\text{stable-diffusion}} + W_{\text{cereal}}$ |
| :-------------------------------------------------: | :-----------------------------------------------------: | :---------------------------------------------------: |
| ![[../assets/Attachments/Lora/a_man_ikea.png]] | ![[../assets/Attachments/Lora/a_man_toy.png]] | ![[../assets/Attachments/Lora/a_man_cereal.png]] |
| $W = W_{\text{stable-diffusion}} + \alpha_{\text{cereal}}A_{\text{cereal}}B_{\text{cereal}} + \alpha_{\text{ikea}}A_{\text{ikea}}B_{\text{ikea}}$ |
| :-------------------------------------------------------------------------------------------------------------------------------------------------: |
| ![[../assets/Attachments/Lora/lora_blend_grid.png]] |
***Check out my code on*** [**GitHub**](https://github.com/cocoritzy/stable_diffusion)