Next Block Prediction: Video Generation via Semi-Autoregressive Modeling

Shuhuai Ren1, Shuming Ma2, Xu Sun1, Furu Wei2

1Peking University, 2Microsoft Research

Abstract

Next-Token Prediction (NTP) is the de facto approach for autoregressive (AR) video generation, but it suffers from suboptimal unidirectional dependencies and slow inference speed. In this work, we propose a semi-autoregressive (semi-AR) framework, called Next-Block Prediction (NBP), for video generation. By uniformly decomposing video content into equal-sized blocks (e.g., rows or frames), we shift the generation unit from individual tokens to blocks, allowing each token in the current block to simultaneously predict the corresponding token in the next block. Unlike traditional AR modeling, our framework employs bidirectional attention within each block, enabling tokens to capture more robust spatial dependencies. By predicting multiple tokens in parallel, NBP models significantly reduce the number of generation steps, leading to faster and more efficient inference. Our model achieves FVD scores of 103.3 on UCF101 and 25.5 on K600, outperforming the vanilla NTP model by an average of 4.4. Furthermore, thanks to the reduced number of inference steps, the NBP model generates 8.89 frames (128x128 resolution) per second, achieving an 11x speedup. We also explored model scales ranging from 700M to 3B parameters, observing significant improvements in generation quality, with FVD scores dropping from 103.3 to 55.3 on UCF101 and from 25.5 to 19.5 on K600, demonstrating the scalability of our approach.
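The attention pattern described above, bidirectional within a block but causal across blocks, can be sketched as a single mask over the flattened token sequence. The following is a minimal NumPy illustration (the function name and the True-means-attend convention are ours, not from the paper's implementation):

```python
import numpy as np

def nbp_attention_mask(num_blocks: int, block_size: int) -> np.ndarray:
    """Semi-AR attention mask: a token attends bidirectionally to all
    tokens in its own block, and causally to all earlier blocks.
    Returns a boolean matrix where True means "may attend"."""
    n = num_blocks * block_size
    # Block index of each flattened token position.
    blk = np.arange(n) // block_size
    # Token i may attend to token j iff j's block is not after i's block.
    return blk[None, :] <= blk[:, None]

# 3 blocks of 2 tokens each: within-block pairs see each other,
# but no token sees a later block.
mask = nbp_attention_mask(num_blocks=3, block_size=2)
```

With `block_size=1` the mask reduces to the standard lower-triangular causal mask of vanilla NTP.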

Approach

Framework

Left: 3D discrete token map produced by our video tokenizer. The input video consists of one initial frame, followed by \(n\) clips, with each clip containing \(F_T\) frames. Right: Examples of blocks include token-wise, row-wise, and frame-wise representations. When the block size is set to 1x1x1, it degenerates into a token, as used in vanilla AR modeling. Note that each token actually corresponds to a 3D cube; we omit the time dimension here for clarity.
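The block choices in the figure amount to different ways of chunking the raster-ordered token map. A small NumPy sketch, assuming a (T, H, W) token map and the three block shapes named in the caption (the helper itself is illustrative):

```python
import numpy as np

def decompose_into_blocks(token_map: np.ndarray, block: str) -> np.ndarray:
    """Flatten a (T, H, W) discrete token map into a block sequence.
    'token' -> 1x1x1 blocks (vanilla AR), 'row' -> one spatial row per
    block, 'frame' -> one full frame per block.
    Returns an array of shape (num_blocks, tokens_per_block)."""
    T, H, W = token_map.shape
    flat = token_map.reshape(-1)  # raster order: frame, then row, then column
    size = {"token": 1, "row": W, "frame": H * W}[block]
    return flat.reshape(-1, size)

# Toy 2-frame, 4x4 token map.
tokens = np.arange(2 * 4 * 4).reshape(2, 4, 4)
rows = decompose_into_blocks(tokens, "row")      # 8 blocks of 4 tokens
frames = decompose_into_blocks(tokens, "frame")  # 2 blocks of 16 tokens
```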


Comparison between a vanilla autoregressive (AR) framework based on next-token prediction (left) and our semi-AR framework based on next-block prediction (right). \(x^{(i)}_{j}\) indicates the \(j^{th}\) video token in the \(i^{th}\) block, with each block containing \(L\) tokens. The dashed line in the right panel indicates that the \(L\) tokens generated in the current step are duplicated and concatenated with the prefix tokens, forming the input for the next step's prediction during inference.
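The inference procedure in the right panel, where each step emits a whole block and feeds it back as input, can be sketched as a short loop. This is a toy illustration with a stand-in model, not the paper's implementation:

```python
def semi_ar_generate(predict_next_block, prefix, num_blocks, block_len):
    """Semi-AR decoding sketch: each step consumes the current sequence
    and receives all L tokens of the next block in parallel; the new
    block is appended (the "duplicate and concatenate" feedback)."""
    seq = list(prefix)
    for _ in range(num_blocks):
        next_block = predict_next_block(seq)   # L tokens at once
        assert len(next_block) == block_len
        seq.extend(next_block)                 # input for the next step
    return seq

# Toy stand-in model: "predicts" the last block incremented by one.
def toy_model(seq, L=4):
    return [t + 1 for t in seq[-L:]]

out = semi_ar_generate(toy_model, prefix=[0, 1, 2, 3],
                       num_blocks=2, block_len=4)
```

With `block_len=1` the same loop reduces to vanilla next-token decoding, which is why NTP is the degenerate case of this framework.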

NTP vs. NBP


Comparison of next-token prediction (NTP) and next-block prediction (NBP) models in terms of performance and speed, evaluated on the K600 dataset (5-frame condition, 12 frames (768 tokens) to predict). Inference time was measured on a single Nvidia A100 GPU. All models are implemented by us under the same settings and trained for 20 epochs. FPS denotes ``frames per second''. The measurement of inference speed includes the tokenization and de-tokenization processes. KV-cache is used for both models.
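The speed gap follows directly from the decoding-step count: NTP needs one forward pass per token, while NBP emits L tokens per pass. A back-of-the-envelope sketch for the 768-token setting above (the frame-sized block of 64 tokens is our illustrative assumption, not a reported configuration):

```python
def decoding_steps(total_tokens: int, block_len: int) -> int:
    """Number of model forward passes needed to generate the sequence,
    assuming block_len divides total_tokens evenly."""
    assert total_tokens % block_len == 0
    return total_tokens // block_len

ntp_steps = decoding_steps(768, 1)   # one token per pass
nbp_steps = decoding_steps(768, 64)  # 64-token (frame-sized) blocks
```

Note that wall-clock speedup is smaller than the raw step reduction, since tokenization, de-tokenization, and per-step overheads are shared by both models.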

Benchmarking with Previous Systems


Comparisons of class-conditional generation results on UCF-101 and frame prediction results on K600. MTM indicates mask token modeling. Our model on K600 is trained for 77 epochs; we gray out models that use significantly more training computation (e.g., those trained for over 300 epochs) to allow a fair comparison.

Ablation Study


Left: Generation quality (FVD, lower is better) and inference speed (FPS, higher is better) for various block sizes from 1 to 256. Right: Generation quality (FVD) for various block shapes.

Class-conditional Video Generation (UCF-101, 128x128)

A happy elephant wearing a birthday hat walking under the sea

fencing

golf swing

blowing candles

bowling

brushing teeth

cliff diving

cutting in kitchen

bench press

baseball pitch

apply eye makeup

apply lipstick

boxing punching bag

BibTeX

@article{ren2024nbp,
      title={Next Block Prediction: Video Generation via Semi-Autoregressive Modeling},
      author={Ren, Shuhuai and Ma, Shuming and Sun, Xu and Wei, Furu},
      year={2024}
}