Environment Setup and Testing of Depth Anything V2, a Monocular Depth Estimation Model
Execution Environment
- CPU: Intel(R) Xeon(R) w5-3423 (12 cores)
- RAM: 256GB
- OS: Ubuntu 22.04 LTS
- GPU: NVIDIA RTX A6000 x 2
- CUDA Toolkit: CUDA 11.8, cuDNN 8.9.7
- Date: 2026.03.31.
Depth Anything V2 Environment Setup
Git source download
git clone https://github.com/DepthAnything/Depth-Anything-V2
cd Depth-Anything-V2
pip install -r requirements.txt
Pre-trained models download
| Model | Params | Checkpoint |
|---|---|---|
| Depth-Anything-V2-Small | 24.8M | Download |
| Depth-Anything-V2-Base | 97.5M | Download |
| Depth-Anything-V2-Large | 335.3M | Download |
| Depth-Anything-V2-Giant | 1.3B | Coming soon |
- Place the downloaded models under /Depth-Anything-V2/checkpoints/
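The checkpoint placement step above can be sketched as a small path helper. Note this is an assumption-laden sketch: the filenames below (e.g. `depth_anything_v2_vits.pth`) follow the release's usual naming convention and should be verified against the actual download links; the Giant checkpoint is omitted because it is listed as "Coming soon".

```python
from pathlib import Path

# Assumed checkpoint filenames -- verify against the actual release assets.
CHECKPOINTS = {
    "vits": "depth_anything_v2_vits.pth",  # Small, 24.8M params
    "vitb": "depth_anything_v2_vitb.pth",  # Base, 97.5M params
    "vitl": "depth_anything_v2_vitl.pth",  # Large, 335.3M params
}

def checkpoint_path(encoder: str, root: str = "Depth-Anything-V2/checkpoints") -> Path:
    """Return the expected checkpoint path for a given encoder key."""
    if encoder not in CHECKPOINTS:
        raise ValueError(f"unknown or unreleased encoder: {encoder!r}")
    return Path(root) / CHECKPOINTS[encoder]

print(checkpoint_path("vitb"))
```

A call such as `checkpoint_path("vitb")` resolves to `Depth-Anything-V2/checkpoints/depth_anything_v2_vitb.pth`, which is where run.py expects to find the weights.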
Depth Anything V2 테스트
Inference
- Inference script
# Running script on images
python run.py \
--encoder <vits | vitb | vitl | vitg> \
--img-path <path> --outdir <outdir> \
[--input-size <size>] [--pred-only] [--grayscale]
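When sweeping over several encoders or folders, the command line above can be assembled programmatically. This is a minimal sketch that only uses the flags shown in the usage string; no other run.py options are assumed.

```python
import shlex

def build_run_cmd(encoder="vitl", img_path="assets/examples", outdir="depth_vis",
                  input_size=None, pred_only=False, grayscale=False):
    """Build the argument list for run.py from the documented flags."""
    cmd = ["python", "run.py",
           "--encoder", encoder,
           "--img-path", img_path,
           "--outdir", outdir]
    if input_size is not None:          # optional [--input-size <size>]
        cmd += ["--input-size", str(input_size)]
    if pred_only:                       # optional [--pred-only]
        cmd.append("--pred-only")
    if grayscale:                       # optional [--grayscale]
        cmd.append("--grayscale")
    return cmd

# Print a shell-ready command without executing it.
print(shlex.join(build_run_cmd(encoder="vitb", pred_only=True)))
```

The returned list can be passed directly to `subprocess.run(...)`, which avoids shell-quoting issues with paths containing spaces.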
# Running script on videos
python run_video.py \
--encoder <vits | vitb | vitl | vitg> \
--video-path assets/examples_video --outdir video_depth_vis \
[--input-size <size>] [--pred-only] [--grayscale]
- vitb model test
python run.py --encoder vitb --img-path assets/examples --outdir depth_vis
Results
(Image: Depth Anything V2 test result)
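The saved visualizations are typically the model's relative depth map min-max normalized into an 8-bit image (grayscale when --grayscale is passed, otherwise color-mapped). A minimal sketch of that normalization step follows; the depth array here is synthetic, using NumPy only.

```python
import numpy as np

def depth_to_uint8(depth: np.ndarray) -> np.ndarray:
    """Min-max normalize a relative depth map to the 0..255 uint8 range."""
    d = depth.astype(np.float32)
    d = (d - d.min()) / max(float(d.max() - d.min()), 1e-8)  # guard flat maps
    return (d * 255.0).astype(np.uint8)

# Synthetic example: a 4x8 horizontal depth gradient from near (0) to far (1).
demo = np.tile(np.linspace(0.0, 1.0, 8, dtype=np.float32), (4, 1))
vis = depth_to_uint8(demo)
print(vis.dtype, int(vis.min()), int(vis.max()))
```

A color version can then be produced by feeding `vis` to any 8-bit colormap (e.g. OpenCV's `cv2.applyColorMap`).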