Kernel Proposal Network for Arbitrary Shape Text Detection. (arXiv:2203.06410v2 [cs.CV] UPDATED)

Segmentation-based methods have achieved great success in arbitrary shape
text detection. However, separating neighboring text instances remains one of
the most challenging problems due to the complexity of text in scene images.
In this paper, we propose an innovative Kernel Proposal Network (dubbed KPN)
for arbitrary shape text detection. The proposed KPN separates neighboring
text instances by assigning different texts to instance-independent feature
maps, while avoiding the complex aggregation process required by existing
segmentation-based arbitrary shape text detection methods. Concretely, our
KPN predicts a Gaussian center map for each text image, which is used to
extract a series of candidate kernel proposals (i.e., dynamic convolution
kernels) from the embedding feature maps at the corresponding keypoint
positions. To enforce independence between kernel proposals, we propose a
novel orthogonal learning loss (OLL) built on orthogonality constraints.
Specifically, each kernel proposal carries instance-specific information
learned by the network and location information from position embedding.
Finally, each kernel proposal individually convolves the embedding feature
maps to generate a separate embedded map for its text instance. In this way,
our KPN can effectively separate neighboring text instances and improve
robustness against unclear boundaries. To our knowledge, our work is the
first to
introduce the dynamic convolution kernel strategy to efficiently and
effectively tackle the adhesion problem of neighboring text instances in text
detection. Experimental results on challenging datasets verify the impressive
performance and efficiency of our method. The code and model are available at
https://github.com/GXYM/KPN.
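
The described pipeline (predict a Gaussian center map, pick keypoints, gather
dynamic convolution kernels from the embedding maps, and convolve) can be
illustrated with a minimal PyTorch sketch. The tensor shapes, the 3x3
local-maximum heuristic for keypoint picking, the threshold, and all function
names below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def pick_keypoints(center_map, max_kernels=10, threshold=0.3):
    """Keep local maxima of the predicted Gaussian center map.

    center_map: (H, W) tensor of per-pixel center confidences.
    Returns an (N, 2) tensor of (y, x) coordinates, N <= max_kernels.
    """
    # A pixel is a peak if it survives 3x3 max-pooling unchanged.
    pooled = F.max_pool2d(center_map[None, None], 3, stride=1, padding=1)[0, 0]
    peaks = (center_map == pooled) & (center_map > threshold)
    ys, xs = torch.nonzero(peaks, as_tuple=True)
    order = center_map[ys, xs].argsort(descending=True)[:max_kernels]
    return torch.stack([ys[order], xs[order]], dim=1)


def extract_kernel_proposals(embedding, keypoints):
    """Gather one C-dimensional kernel proposal per keypoint.

    embedding: (C, H, W) feature maps, assumed to already mix learned
    features with a position embedding, as described above.
    """
    return embedding[:, keypoints[:, 0], keypoints[:, 1]].t()  # (N, C)


def dynamic_convolve(embedding, kernels):
    """Convolve the shared embedding with each proposal (a 1x1 dynamic
    convolution), yielding one (H, W) instance map per kernel proposal."""
    n, c = kernels.shape
    weight = kernels.view(n, c, 1, 1)
    return F.conv2d(embedding[None], weight)[0]  # (N, H, W)


if __name__ == "__main__":
    C, H, W = 32, 128, 128
    center_map = torch.rand(H, W)       # stand-ins for network outputs
    embedding = torch.randn(C, H, W)

    kps = pick_keypoints(center_map)
    kernels = extract_kernel_proposals(embedding, kps)
    instance_maps = dynamic_convolve(embedding, kernels)
    print(instance_maps.shape)          # torch.Size([N, 128, 128])
```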
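
The OLL is described only as imposing orthogonal constraints between kernel
proposals; one plausible reading is to penalize the off-diagonal entries of
the Gram matrix of L2-normalized proposals, sketched below under that
assumption.

```python
import torch
import torch.nn.functional as F


def orthogonal_loss(kernels: torch.Tensor) -> torch.Tensor:
    """Penalize correlation between distinct kernel proposals.

    kernels: (N, C) tensor of kernel proposals. With L2-normalized rows,
    the Gram matrix has ones on the diagonal; driving the off-diagonal
    entries to zero pushes the proposals toward mutual orthogonality.
    NOTE: an assumed form of the paper's OLL, for illustration only.
    """
    k = F.normalize(kernels, dim=1)            # (N, C), unit-length rows
    gram = k @ k.t()                           # (N, N) cosine similarities
    off_diag = gram - torch.eye(k.size(0), device=k.device)
    return (off_diag ** 2).mean()


# Usage: with 10 proposals of dimension 32, the loss reaches ~0 only when
# the proposals are mutually orthogonal.
print(orthogonal_loss(torch.randn(10, 32)))
```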
