
SketchDesc: Learning Local Sketch Descriptors for Multi-view Correspondence (2001.05744v3)

Published 16 Jan 2020 in cs.CV

Abstract: In this paper, we study the problem of multi-view sketch correspondence, where we take as input multiple freehand sketches with different views of the same object and predict as output the semantic correspondence among the sketches. This problem is challenging since the visual features of corresponding points at different views can be very different. To this end, we take a deep learning approach and learn a novel local sketch descriptor from data. We contribute a training dataset by generating the pixel-level correspondence for the multi-view line drawings synthesized from 3D shapes. To handle the sparsity and ambiguity of sketches, we design a novel multi-branch neural network that integrates a patch-based representation and a multi-scale strategy to learn the pixel-level correspondence among multi-view sketches. We demonstrate the effectiveness of our proposed approach with extensive experiments on hand-drawn sketches and multi-view line drawings rendered from multiple 3D shape datasets.
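Below is a minimal PyTorch sketch of the idea the abstract describes: a multi-branch network that encodes multi-scale patches around a sketch pixel into a single descriptor, trained so that corresponding pixels across views map to nearby descriptors. The layer sizes, number of scales, and triplet loss are illustrative assumptions, not the authors' exact architecture.

```python
# Hedged sketch of a multi-scale, patch-based descriptor network (assumed design,
# not the paper's exact implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchBranch(nn.Module):
    """Encodes one patch scale (a grayscale crop) into a feature vector."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, feat_dim)

    def forward(self, x):              # x: (B, 1, H, W)
        return self.fc(self.conv(x).flatten(1))


class MultiScaleSketchDescriptor(nn.Module):
    """One branch per patch scale; concatenated features are fused into a descriptor."""
    def __init__(self, num_scales=3, feat_dim=128, desc_dim=128):
        super().__init__()
        self.branches = nn.ModuleList(PatchBranch(feat_dim) for _ in range(num_scales))
        self.fuse = nn.Linear(num_scales * feat_dim, desc_dim)

    def forward(self, patches):        # patches: list of (B, 1, H_s, W_s), one per scale
        feats = [branch(p) for branch, p in zip(self.branches, patches)]
        desc = self.fuse(torch.cat(feats, dim=1))
        return F.normalize(desc, dim=1)  # unit-length descriptors for nearest-neighbor matching


# Illustrative training step with a triplet loss: anchor/positive are corresponding
# pixels in two views of the same object, negative is a non-corresponding pixel.
model = MultiScaleSketchDescriptor()
loss_fn = nn.TripletMarginLoss(margin=0.2)
anchor = [torch.randn(8, 1, s, s) for s in (32, 64, 128)]
positive = [torch.randn(8, 1, s, s) for s in (32, 64, 128)]
negative = [torch.randn(8, 1, s, s) for s in (32, 64, 128)]
loss = loss_fn(model(anchor), model(positive), model(negative))
loss.backward()
```

At test time, descriptors computed at sampled pixels in one sketch would be matched to their nearest neighbors in another view to produce the predicted correspondences.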

Authors (7)
  1. Deng Yu (5 papers)
  2. Lei Li (1293 papers)
  3. Youyi Zheng (26 papers)
  4. Manfred Lau (6 papers)
  5. Yi-Zhe Song (120 papers)
  6. Chiew-Lan Tai (12 papers)
  7. Hongbo Fu (67 papers)
Citations (9)
