FORGE-Tree

Diffusion-Forcing Tree Search for Long-Horizon Robot Manipulation

Yanjia Huang1*, Shuo Liu2*, Sheng Liu3, Qingxiao Xu1, Mingyang Wu1, Xiangbo Gao1, Zhengzhong Tu1

1Texas A&M University     2University of Washington     3Karlsruhe Institute of Technology, Germany
*Equal Contributors

Paper Code Cite

Abstract

Long-horizon robot manipulation tasks remain challenging for Vision-Language-Action (VLA) policies due to drift and exposure bias: such policies typically denoise the entire trajectory with fixed hyperparameters, so small geometric errors compound across stages and there is no mechanism to allocate extra test-time compute where clearances are tight. To address these challenges, we introduce FORGE-Tree, a plug-in control layer that couples a stage-aligned Diffusion Forcing (DF) head with test-time Monte Carlo Tree Diffusion (MCTD). With a frozen VLA encoder, DF aligns diffusion timesteps to subtask stages; during inference we partially denoise only a target segment while keeping the other tokens frozen, turning trajectory refinement into a sequence of local edits. We then apply MCTD to select which segment to refine next. A scene graph supplies priors for expansion and geometry relation-aware scoring for rollouts, yielding tree-structured denoising whose performance scales with the search budget while preserving the executed prefix. Evaluated on LIBERO, FORGE-Tree improves success rates by 13.4–17.2 pp over the native VLA baselines with both OpenVLA and Octo-Base. Gains remain consistent under comparable compute budgets, especially on long-horizon variants.
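For intuition, here is a minimal sketch of the local-edit mechanism: a DDIM-style update is applied only to the selected segment while the complement stays frozen. The toy noise model, schedule, and shapes are our own assumptions for illustration, not the released implementation.

import torch

def partial_denoise(x, eps_model, segment_mask, timesteps, alphas_cumprod):
    """Denoise only the tokens where segment_mask is True; freeze the rest.

    x:              (T, D) noisy trajectory tokens
    segment_mask:   (T,)  bool mask of the selected segment
    timesteps:      descending diffusion steps used for this segment
    alphas_cumprod: (num_steps,) cumulative alpha-bar schedule
    """
    frozen = x.clone()                                   # complement stays fixed
    for i, t in enumerate(timesteps):
        a_t = alphas_cumprod[t]
        eps = eps_model(x, t)                            # predicted noise for all tokens
        x0_hat = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        t_prev = timesteps[i + 1] if i + 1 < len(timesteps) else None
        a_prev = alphas_cumprod[t_prev] if t_prev is not None else torch.tensor(1.0)
        x_prev = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps   # DDIM-style (eta = 0)
        x = torch.where(segment_mask.unsqueeze(-1), x_prev, frozen)   # local edit only
    return x

# Toy usage: a zero "noise model" stands in for the learned diffusion head.
T, D, steps = 16, 7, 50
x = torch.randn(T, D)
mask = torch.zeros(T, dtype=torch.bool)
mask[4:8] = True                                         # the segment being refined
alphas_cumprod = torch.linspace(0.999, 0.02, steps)
eps_model = lambda x, t: torch.zeros_like(x)
x_refined = partial_denoise(x, eps_model, mask, [40, 30, 20, 10, 0], alphas_cumprod)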

Overview

FORGE-Tree Pipeline

A pretrained VLA encodes the instruction and observations, a scene graph 𝒢 is built from the scene, our diffusion head predicts noise, the partial-denoising sampler edits a selected future segment via a meta-action a=(k,m,s,w,τ), and MCTD evaluates candidates with geometry-aware rewards before executing the first segment and replanning.
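The control flow can be summarized with the toy sketch below: candidate segment edits are scored under a search budget, the best edit is applied, and the first segment is committed before replanning. Every helper (encode, build_scene_graph, propose_meta_actions, partial_denoise, geometry_score) is a hypothetical stand-in for a component named above, and the flat bandit-style search is a deliberate simplification of the full tree.

import math
import random

# Hypothetical stand-ins; the real system uses the frozen VLA encoder, a learned
# diffusion head, and scene-graph relations. Only the control flow is meant to
# be informative.
def encode(obs, instruction):               # c = f_VLA(o, u)
    return (tuple(obs), instruction)

def build_scene_graph(obs):                 # scene graph G
    return {"objects": list(obs)}

def propose_meta_actions(G, horizon):       # candidates a = (k, m, s, w, tau)
    # (k, m) pick the segment; s, w, tau are treated as opaque sampler settings here.
    return [(k, 4, 8, 1.0, 0.5) for k in range(0, horizon, 4)]

def partial_denoise(traj, c, meta_action):  # toy local edit of segment S_{k,m}
    k, m, *_ = meta_action
    return traj[:k] + [x + random.gauss(0.0, 0.01) for x in traj[k:k + m]] + traj[k + m:]

def geometry_score(traj, G):                # toy reward in place of relation-aware scoring
    return -sum(abs(a - b) for a, b in zip(traj[:-1], traj[1:]))

def replan(obs, instruction, traj, budget=32, c_uct=1.0):
    c, G = encode(obs, instruction), build_scene_graph(obs)
    candidates = propose_meta_actions(G, len(traj))
    stats = {a: [0, 0.0] for a in candidates}            # visits, total reward
    for n in range(1, budget + 1):                       # flat, bandit-style search
        def uct(a):
            v, r = stats[a]
            return math.inf if v == 0 else r / v + c_uct * math.sqrt(math.log(n) / v)
        a = max(candidates, key=uct)                     # select a segment edit
        rollout = partial_denoise(traj, c, a)            # refine that segment
        stats[a][0] += 1
        stats[a][1] += geometry_score(rollout, G)        # geometry-aware reward
    best = max(candidates, key=lambda a: stats[a][1] / max(stats[a][0], 1))
    refined = partial_denoise(traj, c, best)
    first_len = candidates[0][1]                         # commit the first segment, then replan
    return refined[:first_len], refined

executed, plan = replan(obs=[0.0, 0.0, 0.0], instruction="put the bowl on the plate",
                        traj=[0.0] * 16)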

Training & Inference

DF
MCTD

Left: noise as masking, where subtasks 𝒮j share per-subtask timesteps t[i]=tj (darker cells indicate larger t). Middle (training): conditioned on c=fVLA(o,u), the diffusion head predicts εθ(xt[i],c,t[i]), recovers x̂0[i], and is supervised by the DF loss (noise MSE + end-of-trajectory and stage-end geometric terms + smoothness). Right (inference): we partially denoise a selected segment Sk,m using jumpy DDIM with geometry guidance w∇x_t U(x̂0;𝒢); only the segment evolves while the complement is frozen, and the first segment is committed before replanning.
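As a rough sketch of the training side (the inference-side local edit is sketched after the abstract above), the snippet below draws one stage-aligned noise level per subtask, predicts per-token noise, and combines the noise MSE with keyframe-geometry and smoothness penalties. The stand-in head, the loss weights w_geo and w_smooth, and all shapes are illustrative assumptions, not the paper's settings.

import torch
import torch.nn.functional as F

def df_loss(head, x0, cond, stage_ids, alphas_cumprod, stage_ends,
            w_geo=0.1, w_smooth=0.01):
    """One DF-style training loss with stage-aligned timesteps (toy weights)."""
    T, _ = x0.shape
    num_steps = alphas_cumprod.shape[0]
    # Noise as masking: one timestep per subtask, broadcast to its tokens (t[i] = t_j).
    t_per_stage = torch.randint(0, num_steps, (int(stage_ids.max()) + 1,))
    t = t_per_stage[stage_ids]                                   # (T,)
    a_t = alphas_cumprod[t].unsqueeze(-1)                        # (T, 1)
    eps = torch.randn_like(x0)
    x_t = a_t.sqrt() * x0 + (1 - a_t).sqrt() * eps
    eps_hat = head(x_t, cond, t)                                 # eps_theta(x_t[i], c, t[i])
    x0_hat = (x_t - (1 - a_t).sqrt() * eps_hat) / a_t.sqrt()     # recovered clean tokens
    loss_noise = F.mse_loss(eps_hat, eps)
    # Geometric terms at stage ends and at the end of the trajectory.
    keyframes = torch.cat([stage_ends, torch.tensor([T - 1])]).unique()
    loss_geo = F.mse_loss(x0_hat[keyframes], x0[keyframes])
    loss_smooth = (x0_hat[1:] - x0_hat[:-1]).pow(2).mean()
    return loss_noise + w_geo * loss_geo + w_smooth * loss_smooth

# Toy usage: a zero "head" stands in for the conditioned diffusion head.
T, D, steps = 16, 7, 100
head = lambda x, c, t: torch.zeros_like(x)
loss = df_loss(head, torch.randn(T, D), cond=None,
               stage_ids=torch.tensor([0] * 6 + [1] * 5 + [2] * 5),
               alphas_cumprod=torch.linspace(0.999, 0.02, steps),
               stage_ends=torch.tensor([5, 10]))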

Experiment Results


BibTeX

@inproceedings{huang2026forge_tree,
  title={Diffusion-Forcing Tree Search for Long-Horizon Robot Manipulation},
  author={Huang, Yanjia and Liu, Shuo and Liu, Sheng and Xu, Qingxiao and Wu, Mingyang and Gao, Xiangbo and Tu, Zhengzhong},
  booktitle={ICRA},
  year={2026}
}