Ziteng Cui (崔子藤)

I'm a Ph.D. student at The University of Tokyo, where I am supervised by Prof. Tatsuya Harada. Before that, I received my master's degree from Shanghai Jiao Tong University.

I mainly work on vision robustness, computational photography, and 3D computer vision. My favorite things are DOTA2, hiking, JOJO (Jolyne Cujoh), and Pink Floyd.

I am graduating in April 2025 and am seeking opportunities in both academia and industry. Please feel free to connect with me via email.

Email  /  Google Scholar  /  Github  /  Twitter

profile photo
Selected Publications

I'm currently interested in physics-based modeling for low-level vision and in 3D computer vision, especially neural radiance fields. "*" denotes equal contribution. For a full publication list, please refer to here. Some papers are highlighted.

Discovering an Image-Adaptive Coordinate System for Photography Processing
Ziteng Cui, Lin Gu, Tatsuya Harada.
BMVC, 2024  
arxiv / bibtex / poster

Instead of just using image-adaptive curves or 3D LUTs, why not design an image-adaptive coordinate system tailored to different photography processing tasks?

RAW-Adapter: Adapting Pre-trained Visual Model to Camera RAW Images
Ziteng Cui, Tatsuya Harada.
ECCV, 2024   [github]
website / arxiv / bibtex / poster

We analyze the relationship between camera RAW data and sRGB pre-trained models, and propose RAW-Adapter for effective RAW-based vision tasks.

Aleth-NeRF: Illumination Adaptive NeRF with Concealing Field Assumption
Ziteng Cui, Lin Gu, Xiao Sun, Xianzheng Ma, Yu Qiao, Tatsuya Harada.
AAAI, 2024   [github]
website / arxiv / arxiv (old version) / bibtex / poster

We modify NeRF's volume rendering function with concealing fields to handle novel view synthesis under both low-light and over-exposure conditions.

MonoDETR: Depth-Guided Transformer for Monocular 3D Object Detection
Renrui Zhang, Han Qiu, Tai Wang, Ziyu Guo, Ziteng Cui, Yu Qiao, Hongsheng Li, Peng Gao
ICCV, 2023   [github]
arxiv / bibtex / code / poster

MonoDETR introduces a depth-guided transformer for monocular 3D object detection, enhancing Mono3D with non-local depth cues and achieving SOTA results.

Improving Fairness in Image Classification via Sketching
Ruichen Yao*, Ziteng Cui*, Xiaoxiao Li, Lin Gu.
NeurIPS Workshop TSRML, 2022  
arxiv / bibtex / poster

Image-to-sketch conversion may be an effective way to mitigate unfairness in image classification, in both general and medical scenes.

You Only Need 90K Parameters to Adapt Light: A Light Weight Transformer for Image Enhancement and Exposure Correction
Ziteng Cui, Kunchang Li, Lin Gu, Shenghan Su, Peng Gao, Zhengkai Jiang, Yu Qiao, Tatsuya Harada.
BMVC, 2022   [github]
website / arxiv / bibtex / demo / poster

A super lightweight (only ~90K parameters) transformer-based network, the Illumination Adaptive Transformer, for real-time image enhancement and exposure correction.

Exploring Resolution and Degradation Clues as Self-supervised Signal for Low Quality Object Detection
Ziteng Cui, Yingying Zhu, Lin Gu, Guo-Jun Qi, Xiaoxiao Li, Renrui Zhang, Zenghui Zhang, Tatsuya Harada.
ECCV, 2022   [github]
arxiv / bibtex / poster

Combining detection with self-supervised super-resolution for robust detection under various degradation conditions (noise, blur, low resolution).

Multitask AET with Orthogonal Tangent Regularity for Dark Object Detection
Ziteng Cui, Guo-Jun Qi, Lin Gu, Shaodi You, Zenghui Zhang, Tatsuya Harada.
ICCV, 2021   [github]
arxiv / bibtex / poster

Using a camera-ISP pipeline to synthesize low-light images, then using self-supervised learning to improve object detection performance under low-light conditions.

Misc
Shanghai Jiao Tong University
1. National Scholarship
2. Excellent Graduate Student
Service
Reviewer: ICCV, CVPR, ECCV, ICLR, ICML, NeurIPS, AISTATS, BMVC, ACCV, Eurographics, Pacific Graphics

This website is borrowed from Jon Barron.
Also, consider using Leonid Keselman's Jekyll fork of this page.