360-degree panoramic shot - LTX-2
Model description
When I realized LTX-2 can generate 4K video, my first thought was: holy shit, we can finally start pumping out VR videos. So I immediately rushed to make this LoRA without really thinking it through, just to see whether LTX-2 could already do this by default.
Short answer: kind of.
Much like the Hardcut LoRA for Wan 2.2, LTX-2 understands the concept of 360° video, but struggles to execute it properly. This LoRA gives it the extra push it needs to reliably generate true 360-degree content without turning into a mangled mess.
That said, for whatever reason, it doesn't close the seam cleanly, so there's a noticeable vertical line in the 360 sphere when you turn around. I'm not sure if a node exists that can fix this yet, but please let me know if you find a solution.
Note: the two edges do actually match; they just cut off at slightly different points. So technically you could crop the video horizontally so that it ends on one side exactly where it starts on the other (see the sketch below).
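If you want to script that crop, here's a minimal sketch using ffmpeg's crop filter from Python. `TRIM_PX` (how far the two edges overlap) is a guess you'll need to eyeball per video, and note the result will no longer be exactly 2:1:

```python
# Hedged sketch: trim the overlapping strip so the left edge lines up with
# the right. TRIM_PX and the filenames are assumptions -- adjust per video.
import subprocess

TRIM_PX = 32  # assumed overlap between the two edges, in pixels

subprocess.run(
    [
        "ffmpeg", "-i", "pano.mp4",          # placeholder input name
        "-vf", f"crop=iw-{TRIM_PX}:ih:0:0",  # drop TRIM_PX columns from the right side
        "-c:a", "copy",                      # keep audio untouched
        "pano_cropped.mp4",
    ],
    check=True,
)
```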
Recommended Settings
Weight: 0.6–1 works well
- I’ve even gotten away with 0.2, so feel free to experiment
Aspect Ratio: 2:1
Post-Processing (Optional)
The raw video can be played in most 360 media players or VR players as-is. However, if you want actual depth in VR, you'll need to apply stereoscopic depth to the video.
This node can do that:
https://github.com/SamSeenX/ComfyUI_SSStereoscope?tab=readme-ov-file
⚠️ Warning: It appears to have a size limit.
For example, one of my videos ended up around 500 MB, which exceeds what the node (and even ComfyUI itself) will accept for upload.
If you find a workaround, please let me know. Otherwise, you’ll need to use an external depth tool or handle the depth manually.
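For the manual route, here's a rough sketch of one way to do it: estimate per-frame depth with a monocular depth model, then shift pixels horizontally in proportion to depth to fake a second eye (naive side-by-side stereo). The model id, shift scale, and filenames are all assumptions on my part, and this is not what the SSStereoscope node does internally:

```python
# Rough sketch of "handling the depth manually": monocular depth per frame,
# then a naive horizontal pixel shift per eye. Everything here (model id,
# MAX_SHIFT, filenames) is an assumption to tune, not a drop-in solution.
import cv2
import numpy as np
from PIL import Image
from transformers import pipeline

depth_model = pipeline("depth-estimation", model="Intel/dpt-large")

cap = cv2.VideoCapture("pano.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("pano_sbs.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w * 2, h))

MAX_SHIFT = 12  # max disparity in pixels; bigger = more pop, more artifacts

xs, ys = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    d = np.array(depth_model(rgb)["depth"], dtype=np.float32)
    d = cv2.resize(d, (w, h))
    d = (d - d.min()) / (d.max() - d.min() + 1e-6)  # normalize: near objects shift most
    shift = d * MAX_SHIFT

    # Sample each eye from opposite horizontal offsets (crude depth-image-based rendering)
    left = cv2.remap(frame, xs + shift / 2, ys, cv2.INTER_LINEAR)
    right = cv2.remap(frame, xs - shift / 2, ys, cv2.INTER_LINEAR)
    out.write(np.hstack([left, right]))  # side-by-side: left eye | right eye

cap.release()
out.release()
```

If you go with left-right output like this, remember to pass the matching stereo flag when you inject metadata (next section).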
VR Metadata Injection (Highly Recommended)
It’s also a good idea to inject VR metadata so headsets and players automatically recognize the video as VR content.
You can use Google’s Spatial Media tool for this:
https://github.com/google/spatial-media/releases
It’s free and very easy to use.
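If you'd rather script it than click through the GUI, the repo also ships a CLI. A hedged sketch of calling it from Python, run from a clone of the repo (double-check the flags against `--help`; filenames are placeholders):

```python
# Sketch: inject spherical + stereo metadata via the spatialmedia CLI from
# a clone of google/spatial-media. The stereo flag only applies if your
# video is actually side-by-side; omit it for a flat (mono) 360 video.
import subprocess

subprocess.run(
    [
        "python", "spatialmedia",
        "-i",                    # inject rather than inspect
        "--stereo=left-right",   # omit for mono 360 video
        "pano_sbs.mp4",          # input (placeholder name)
        "pano_sbs_vr.mp4",       # output with metadata
    ],
    check=True,
)
```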
TL;DR
Yes, it works
Use 2:1 aspect ratio
You can make VR videos
You can make them better by adding depth and VR metadata
Extra Banter
Honestly, I’m kind of glad LTX-2 can’t do this out of the box. I was already a full day into training this LoRA before I realized I probably should’ve checked that first.
More importantly, I now fully understand why there are so few LoRAs like this floating around. Even a 5090 didn’t have enough VRAM to train it. I had to use one of the 48 GB Ada cards. On top of that, finding usable flat panoramic 360° video datasets was a nightmare, so I couldn’t build a massive dataset. Thankfully, I didn’t need one.
I’ll be real though: if this had failed after the two days of training, I would’ve just said fuck it.
Anyway, as always, if you like what I do and want to support the work, feel free to buy me a coffee ☕
