Despite the recent success of deep learning methods in automated medical image analysis, their acceptance in the medical community remains limited by the lack of explainability in their decision-making process. The highly opaque feature-learning process of deep models makes it difficult to rationalize their behavior and identify potential bottlenecks. Hence, it is crucial to verify whether these deep features correlate with clinical features, and whether the decision-making process can be backed by conventional medical knowledge. In this work, we attempt to bridge this gap by closely examining how raw-pixel-based neural architectures relate to clinical-feature-based learning algorithms at both the decision level and the feature level. We adopt skin lesion classification as the test case and present the insights obtained in this pilot study. Three broad kinds of raw-pixel-based learning algorithms, based on convolution, spatial self-attention, and attention as activation, were analyzed and compared with learning algorithms based on the ABCD clinical features of skin lesions, with qualitative and quantitative interpretations.
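One way to quantify the decision-level association described above is to measure chance-corrected agreement between the predictions of a raw-pixel-based model and a clinical-feature-based model. The sketch below is purely illustrative (it is not the paper's code, and the predictions are synthetic placeholders): it computes Cohen's kappa over two hypothetical binary prediction sequences.

```python
# Illustrative sketch (not the paper's method): decision-level agreement
# between a raw-pixel-based model and an ABCD-feature-based model,
# measured with Cohen's kappa. All predictions below are synthetic.

def cohens_kappa(a, b):
    """Chance-corrected agreement between two label sequences."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    labels = sorted(set(a) | set(b))
    # Observed agreement: fraction of cases where both models concur
    po = sum(x == y for x, y in zip(a, b)) / n
    # Expected chance agreement from the marginal label frequencies
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return 1.0 if pe == 1 else (po - pe) / (1 - pe)

# Hypothetical predictions (0 = benign, 1 = melanoma) on ten lesions
pixel_model_preds = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
clinical_model_preds = [1, 0, 1, 1, 1, 0, 1, 0, 0, 0]

kappa = cohens_kappa(pixel_model_preds, clinical_model_preds)
print(f"decision-level agreement (kappa) = {kappa:.2f}")  # → 0.60
```

A kappa near 1 would indicate that the two model families make largely the same decisions; values near 0 indicate agreement no better than chance.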