ComfyUI 2026 Update: Best Models, Video, LoRA, and VRAM Optimization (04/12/26)
Use this guide to quickly apply the most useful current ComfyUI improvements, skip low-value experiments, and run better image and video workflows on both high-end and lower-VRAM systems.
Source Video
New in ComfyUI: Best Models, Video, LoRA, Memory Optimization, and Hidden Nodes
Channel: Vladimir Chopine [GeekatPlay]
Published: 04/09/26
Length: 11m 21s
Watch on YouTube
1) What you will accomplish
- Pick stronger current model paths in ComfyUI for image and video.
- Use LTX-focused video workflows with realistic expectations.
- Keep LoRA in your pipeline where it still matters.
- Run more reliably on lower-VRAM systems by adopting a slimmer, lighter-model strategy.
- Apply hidden node combinations for pose, face/mask, and expression work.
2) Prerequisites
- ComfyUI installed and launching normally.
- Basic comfort with loading workflows and model checkpoints.
- A GPU setup appropriate for your target output. Lower VRAM is workable with lighter models and tighter settings.
3) Recommended workflow path
Step 1, Start with templates and tested flows
- Open current local templates first, instead of building from zero.
- Pick one use case per run: image refinement, video generation, or pose/expression control.
Step 2, Prioritize video path with LTX where appropriate
- Use LTX 2.3 workflows for local video experiments.
- For scene continuity, use first/last-frame guidance and controlled prompt transitions.
- Treat audio support as evolving. Validate what is currently stable in your exact workflow before scaling output.
Step 3, Keep LoRA in production workflows
- Use LoRA when style consistency or subject fidelity matters.
- Do not assume base models replace LoRA for repeatable client-grade output.
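Why LoRA strength is worth tuning incrementally: a LoRA contributes a low-rank delta that is added to the base weight, scaled linearly by the strength slider on the loader node. A minimal sketch of that arithmetic (shapes, names, and NumPy stand-in matrices are illustrative, not ComfyUI internals):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))   # base model weight matrix
A = rng.standard_normal((4, 8))   # LoRA down-projection (rank 4)
B = rng.standard_normal((8, 4))   # LoRA up-projection

def apply_lora(W, A, B, strength):
    # Effective weight = W + strength * (B @ A).
    # `strength` plays the role of the slider exposed by LoRA loader nodes.
    return W + strength * (B @ A)

W_half = apply_lora(W, A, B, 0.5)
W_full = apply_lora(W, A, B, 1.0)

# The delta scales linearly with strength, so small slider steps
# produce proportionally small changes in the merged weights:
assert np.allclose(W_full - W, 2 * (W_half - W))
```

Because the contribution is linear, stepping strength in small increments and comparing outputs side-by-side is a predictable way to dial in style consistency.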
Step 4, Optimize for lower VRAM early
- Choose slim/lighter models first for previews and iteration.
- Increase quality only after composition and motion are correct.
- Use staged rendering, shorter clips, and reduced resolution where needed.
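ComfyUI itself ships launch flags for constrained GPUs (`--lowvram` and `--novram` are real options in its CLI). One way to keep the choice explicit is to build the launch command from available VRAM; the thresholds below are an assumption of this sketch, not official guidance:

```python
def comfyui_args(vram_gb: float) -> list[str]:
    """Pick ComfyUI launch flags from available VRAM.
    Thresholds are rough, illustrative values."""
    args = ["python", "main.py"]
    if vram_gb < 4:
        args.append("--novram")   # offload as aggressively as possible
    elif vram_gb < 8:
        args.append("--lowvram")  # split model between system RAM and VRAM
    # >= 8 GB: let ComfyUI's default memory management decide
    return args

# The call below would launch ComfyUI; shown here only as the built command.
# subprocess.run(comfyui_args(6))
print(comfyui_args(6))  # ['python', 'main.py', '--lowvram']
```

Starting with the more conservative flag and relaxing it once previews run cleanly mirrors the "optimize early, raise quality later" order of this step.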
Step 5, Use the hidden-node stack for control-heavy jobs
- Apply pose and multi-person detection for composition control.
- Layer face, mask, and expression identification for cleaner character edits.
- Use pose-to-video or animation flows when motion continuity is more important than raw speed.
Step 6, Match model choice to task
- Image quality/detail: test stronger image models first, then compare with lighter variants.
- Video throughput: favor models/nodes with known lower-memory behavior.
- Reference-driven output: use reference-based generation when consistency across frames/images is critical.
4) Success checks
- You can run at least one stable image workflow and one stable video workflow end-to-end.
- Your LoRA-enabled runs produce more consistent outputs than base-model-only runs.
- You can complete iterations on your available VRAM without repeated out-of-memory failures.
- You can use pose/face/mask nodes to intentionally control output, not just prompt for it.
5) Troubleshooting
Frequent VRAM crashes
- Switch to lighter checkpoints.
- Lower resolution, frame count, or batch size.
- Run short validation clips before final render.
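The three levers above trade off roughly linearly: activation memory grows with pixel count, frame count, and batch size (attention layers can scale worse, so treat this as a lower bound). A back-of-envelope comparison, with purely illustrative numbers:

```python
def relative_cost(width: int, height: int, frames: int, batch: int = 1) -> int:
    """Relative activation cost ~ pixels * frames * batch.
    A linear rule of thumb only; real usage also includes fixed
    model weights and attention buffers that can scale worse."""
    return width * height * frames * batch

final = relative_cost(1024, 576, 48)   # intended final render
trial = relative_cost(768, 432, 24)    # short validation clip

print(trial / final)  # 0.28125 -- over 3x headroom for the test pass
```

If the validation clip still hits out-of-memory, the model itself is likely too heavy for the card, which points back to switching checkpoints rather than shrinking settings further.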
Video output looks unstable or drifts
- Add first/last-frame guidance where supported.
- Simplify scene/prompt complexity, then re-introduce detail gradually.
LoRA appears weak or inconsistent
- Verify LoRA compatibility with the active base model.
- Adjust LoRA strength incrementally and compare outputs side-by-side.