Wan2.2 I2V V2V Video Extend Quant 14B
Details

| Model type | Workflow |
| --- | --- |
| Base model | Wan Video 14B i2v 720p |
| Published | 1/19/2026 |

Download Files
About This Version
Cascade Video Workflow — User Guide
This workflow is designed to generate long videos in chained segments using a Cascade → Cascade approach.
Each segment continues seamlessly from the last frame of the previous one.
You never regenerate from scratch — you extend the motion.
1. Core Concept (Read Once)
The workflow generates video in chunks.
Each chunk:
Starts from an image.
Produces a short video segment.
The last frame of one segment becomes the start image of the next segment.
This process can be repeated indefinitely.
This is called Cascade.
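The chaining described above can be sketched in plain Python. Here `generate_segment` is a hypothetical stand-in for one full WanImageToVideo → High/Low KSampler → decode pass, not a real ComfyUI API; frames are plain strings so the hand-off logic stays visible:

```python
def generate_segment(start_image, prompt, num_frames=4):
    """Placeholder for one I2V run (WanImageToVideo -> High/Low
    KSampler -> VAE decode). Frames are labeled strings here."""
    return [f"{start_image}|{prompt}|f{i}" for i in range(num_frames)]

def cascade(start_image, prompts):
    """Chain segments: the last frame of each segment becomes the
    start image of the next, so motion is extended, never reset."""
    all_frames = []
    image = start_image
    for prompt in prompts:
        frames = generate_segment(image, prompt)
        all_frames.extend(frames)
        image = frames[-1]  # critical step: reuse the final frame
    return all_frames

video = cascade("start.png", ["walks forward", "keeps walking, turns head"])
```

Each new prompt describes a continuation of the same scene, which is why identity and motion carry over between segments.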
2. Required Models (Load Once)
Before running anything, make sure these are loaded:
High UNet model (structure / motion)
Low UNet model (detail / stability)
CLIP model compatible with WAN
VAE model compatible with WAN
Optional:
LoRA models (LightX, AccVid, or user-chosen LoRAs)
These models remain the same across all cascades unless you intentionally change them.
3. Step A — Initial Image-to-Video (First Segment)
Purpose
Create the first video segment from a starting image.
Steps
Load a starting image (this defines character, framing, style).
Enter Positive Prompt (Start).
Enter Negative Prompt (Start).
WanImageToVideo generates:
an initial video latent (seeded from the start image)
image-conditioned positive / negative conditioning
Run High KSampler (motion / structure).
(Optional) Run RAM / VRAM Cleanup.
Run Low KSampler (detail refinement).
Decode frames using VAEDecodeTiled.
Save the video segment.
Output
A short video segment.
A batch of frames.
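Wan 2.2 splits the denoising schedule between its two experts: the High-noise UNet handles the early steps (overall motion and structure), then the Low-noise UNet finishes the remaining steps (detail and stability), KSamplerAdvanced-style. A minimal sketch of that split, assuming a 50/50 boundary (your workflow's actual step counts and boundary may differ):

```python
def split_steps(total_steps, high_fraction=0.5):
    """Divide a sampling schedule between the High (motion) and Low
    (detail) models: High runs steps [0, boundary), Low runs
    [boundary, total_steps), as with start_at_step / end_at_step."""
    boundary = round(total_steps * high_fraction)
    high = (0, boundary)           # High KSampler step range
    low = (boundary, total_steps)  # Low KSampler step range
    return high, low

# e.g. 20 total steps with an assumed 50/50 split
high, low = split_steps(20)
```

The Low pass continues from the High pass's partially denoised latent, which is why running cleanup between the two samplers (next steps) does not break the result.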
4. Extract the Last Frame (Critical Step)
Purpose
Create the entry point for the next Cascade.
Steps
Connect the decoded frame batch to ImageFromBatch.
Set:
batch_index = 999 (any out-of-range index is clamped to the final frame)
length = 1
The output is the last frame of the video.
This image becomes the start image for the next cascade.
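The `batch_index = 999` trick relies on the node clamping the index into range (behavior assumed from ComfyUI's core ImageFromBatch; verify on your install). In plain Python, the selection looks like this:

```python
def image_from_batch(frames, batch_index, length=1):
    """Mimic ImageFromBatch: clamp the index to the batch size,
    then slice `length` frames starting there."""
    batch_index = min(len(frames) - 1, batch_index)
    length = min(len(frames) - batch_index, length)
    return frames[batch_index:batch_index + length]

frames = [f"frame_{i}" for i in range(81)]        # e.g. an 81-frame segment
last = image_from_batch(frames, batch_index=999)  # clamps to index 80
```

Because the index clamps, 999 reliably selects the last frame regardless of segment length, so you never have to update it when you change frame counts.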
5. Step B — Cascade Segment (Extend the Video)
Purpose
Extend motion naturally from the previous segment.
Steps
Connect the last frame into a new WanImageToVideo node as start_image.
Enter Positive Prompt (Extend)
(describe continuation, not a new scene).
Enter Negative Prompt (Extend).
Run High KSampler.
(Optional) Run RAM / VRAM Cleanup.
Run Low KSampler.
Decode frames with VAEDecodeTiled.
Output
A second video segment that continues motion seamlessly.
6. Cascade → Cascade (Repeat for Long Videos)
To create longer videos:
Take the last frame of the current cascade.
Feed it into another WanImageToVideo.
Repeat the Cascade steps.
There is no hard limit to how many cascades you can chain.
7. Combining Video Segments
To produce a final continuous video:
Collect frame batches from:
Initial segment
Each Cascade segment
Merge them using ImageBatch.
Export using VHS_VideoCombine.
Result:
One continuous video file.
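Merging is a straight concatenation of frame batches along the batch axis, as ImageBatch does. One detail worth checking in your own chain: since each cascade segment starts from the previous segment's last frame, naive concatenation can show that seam frame twice. Dropping the first frame of every segment after the first (an optional tweak, not part of the original workflow) avoids the stutter:

```python
import numpy as np

def combine_segments(segments, drop_duplicate_seam=True):
    """Concatenate frame batches (arrays shaped [frames, H, W, C])
    into one continuous batch, optionally skipping the repeated
    seam frame at the start of each cascade segment."""
    parts = [segments[0]]
    for seg in segments[1:]:
        parts.append(seg[1:] if drop_duplicate_seam else seg)
    return np.concatenate(parts, axis=0)

# two tiny 4-frame segments of 2x2 RGB frames
a = np.zeros((4, 2, 2, 3))
b = np.ones((4, 2, 2, 3))
merged = combine_segments([a, b])
```

All segments must share resolution and FPS for the concatenation (and the final VHS_VideoCombine export) to produce a smooth result.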
8. Prompt Writing Rules (Very Important)
Do
Describe continuation, not reset.
Keep:
same character
same camera
same environment
Use gradual changes.
Do NOT
Radically change scene, camera, or identity.
Increase denoise aggressively.
Change resolution or aspect ratio mid-cascade.
9. Common Problems & Fixes
| Problem | Likely causes |
| --- | --- |
| Motion resets | Denoise too high; prompt too different from the previous segment |
| Identity drift | CFG too high; Low sampler too strong; missing cleanup between High and Low |
| Visible seam between segments | Last frame not correctly extracted; different frame count or FPS between segments |
10. Best Practice Summary
One story, many cascades
Last frame always feeds next start
High = motion, Low = refinement
Prompts evolve slowly
Cleanup when extending long chains
Final Note
This workflow is built for:
long-form motion
stable identity
memory-efficient generation
modular extension
Model Description
Universal workflow for Wan 2.1 / 2.2 high- and low-noise models.