This paper presents a parallel implementation model for real-time video processing of saliency maps (SMs), a model that predicts gaze direction based on human visual attention. An SM extracts regions of high visual saliency, i.e., regions that differ from their surroundings in a scene image. In computer vision, SMs are used in various applications because of their sparse feature representation. As the implementation device, we use IMAPCAR2, a single instruction multiple data (SIMD) processor with 64 processing elements (PEs). IMAPCAR2 offers high performance, low power consumption, and easy programming in one-dimensional C (1DC), an ANSI-C-compatible language. We compared the performance of a sequential model and our parallel model at every processing step: our model was 250 times faster than the sequential model and 5.6 times faster than an existing parallel model. For real-time video processing, we implemented our model on an IMAPCAR2 evaluation board; the processing time was 47.5 ms per video frame at 640 × 240 pixel resolution.
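To make the core idea concrete, the following is a minimal, hedged sketch of a center-surround saliency computation of the kind an SM pipeline is built from. This is an illustrative example only, not the paper's IMAPCAR2/1DC implementation; the function names (`box_blur`, `saliency`) and the window sizes are assumptions introduced for illustration.

```python
# Illustrative center-surround saliency sketch (NOT the paper's method or code).
import numpy as np

def box_blur(img, k):
    """Average over a (2k+1)x(2k+1) window using an integral image."""
    padded = np.pad(img, k, mode="edge")
    s = padded.cumsum(0).cumsum(1)
    s = np.pad(s, ((1, 0), (1, 0)))  # leading zero row/column
    n = 2 * k + 1
    h, w = img.shape
    # Window sum via four integral-image lookups, divided by window area.
    return (s[n:n+h, n:n+w] - s[:h, n:n+w]
            - s[n:n+h, :w] + s[:h, :w]) / (n * n)

def saliency(gray, center=1, surround=4):
    """Center-surround difference: fine-scale blur minus coarse-scale blur.

    Regions that differ strongly from their neighborhood (high local
    contrast) get large values; uniform regions get values near zero.
    """
    c = box_blur(gray, center)
    s = box_blur(gray, surround)
    sm = np.abs(c - s)
    return sm / (sm.max() + 1e-9)  # normalize to [0, 1]
```

A single bright pixel on a dark background, for example, produces a saliency peak at that location, since it differs maximally from its surround; a full SM model would repeat this across several feature channels (intensity, color, orientation) and scales before combining them.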