

POS-ISP: Pipeline Optimization at the Sequence Level for Task-aware ISP

April 8, 2026
Authors: Jiyun Won, Heemin Yang, Woohyeok Kim, Jungseul Ok, Sunghyun Cho
cs.AI

Abstract

Recent work has explored optimizing image signal processing (ISP) pipelines for various tasks by composing predefined modules and adapting them to task-specific objectives. However, jointly optimizing module sequences and parameters remains challenging. Existing approaches rely on neural architecture search (NAS) or step-wise reinforcement learning (RL), but NAS suffers from a training-inference mismatch, while step-wise RL leads to unstable training and high computational overhead due to stage-wise decision-making. We propose POS-ISP, a sequence-level RL framework that formulates modular ISP optimization as a global sequence prediction problem. Our method predicts the entire module sequence and its parameters in a single forward pass and optimizes the pipeline using a terminal task reward, eliminating the need for intermediate supervision and redundant executions. Experiments across multiple downstream tasks show that POS-ISP improves task performance while reducing computational cost, highlighting sequence-level optimization as a stable and efficient paradigm for task-aware ISP. The project page is available at https://w1jyun.github.io/POS-ISP
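The core idea — predicting an entire module sequence in one forward pass and updating the policy with only a terminal task reward — can be illustrated with a minimal sketch. The module names, the toy reward, and the tabular softmax policy below are all illustrative assumptions, not the paper's actual architecture (POS-ISP also predicts continuous module parameters, which this sketch omits for brevity); the update rule is plain REINFORCE with a moving-average baseline.

```python
import numpy as np

# Hypothetical ISP module pool; the real POS-ISP module set is not specified here.
MODULES = ["denoise", "white_balance", "gamma", "sharpen"]

rng = np.random.default_rng(0)

class SequencePolicy:
    """Samples a full module sequence in a single pass (sequence-level,
    not step-wise): one independent categorical distribution per slot."""
    def __init__(self, seq_len=3, n_modules=len(MODULES)):
        self.logits = np.zeros((seq_len, n_modules))

    def sample(self):
        probs = np.exp(self.logits - self.logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        seq = [int(rng.choice(len(MODULES), p=p)) for p in probs]
        return seq, probs

    def reinforce_update(self, seq, probs, reward, baseline, lr=0.5):
        # REINFORCE: ascend (R - b) * grad log pi(seq), slot by slot.
        for t, a in enumerate(seq):
            grad = -probs[t]
            grad[a] += 1.0
            self.logits[t] += lr * (reward - baseline) * grad

def terminal_reward(seq):
    # Stand-in for a downstream task score (e.g. detection accuracy):
    # fraction of slots matching a hypothetical "good" pipeline ordering.
    target = [0, 1, 2]  # denoise -> white_balance -> gamma
    return sum(a == b for a, b in zip(seq, target)) / len(target)

policy = SequencePolicy()
baseline = 0.0
for _ in range(500):
    seq, probs = policy.sample()          # one forward pass, whole pipeline
    r = terminal_reward(seq)              # single terminal reward, no
    policy.reinforce_update(seq, probs, r, baseline)  # intermediate supervision
    baseline = 0.9 * baseline + 0.1 * r   # variance-reduction baseline
```

Because the whole sequence is scored only once at the end, there are no per-stage decisions or intermediate rollouts — the source of the instability and overhead the abstract attributes to step-wise RL.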