
NLOOK: a computational attention model for robot vision

Abstract

Computational models of visual attention, originally proposed as cognitive models of human attention, are nowadays used as front-ends to robotic vision systems such as automatic object recognition and landmark detection. However, these applications have different requirements from those of the original models. More specifically, a robotic vision system must be relatively insensitive to 2D similarity transforms of the image, such as in-plane translations, rotations, reflections, and scalings, and it should select fixation points in scale as well as in position. In this paper a new visual attention model, called NLOOK, is proposed. The model is validated through several experiments, which show that it is less sensitive to 2D similarity transforms than two other well-known and publicly available visual attention models, NVT and SAFE. Moreover, NLOOK selects more accurate fixations than the other attention models, and it can select the scales of fixations as well. The proposed model is therefore a good tool for use in robot vision systems.
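NLOOK's own implementation is not reproduced on this page, but the family of models the abstract refers to (saliency maps built from multi-scale center-surround differences, with a winner-take-all choice of fixation) can be sketched compactly. The following is a minimal, hedged illustration, not NLOOK itself: the function names `gaussian_blur` and `saliency_and_fixation` and the choice of scales are assumptions for the example, assuming only numpy.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur of a 2D array via 1D convolutions."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)
    return out

def saliency_and_fixation(img, sigmas=(1.0, 2.0, 4.0)):
    """Toy saliency front-end: center-surround contrast |G(sigma) - G(2*sigma)|
    at several scales, summed into a saliency map. The fixation is the map's
    maximum; its scale is the sigma whose map responds most at that point."""
    img = img.astype(float)
    per_scale = [np.abs(gaussian_blur(img, s) - gaussian_blur(img, 2 * s))
                 for s in sigmas]
    saliency = sum(per_scale)
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    best = int(np.argmax([m[y, x] for m in per_scale]))
    return (y, x), sigmas[best], saliency
```

On a dark image containing one bright blob, the fixation lands near the blob's center and the reported sigma tracks the blob's size; a full model such as NLOOK adds color/orientation channels, inhibition of return, and top-down biasing on top of a pipeline of this general shape.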

References

1. Ando S. Image field categorization and edge/corner detection from gradient covariance. IEEE Transactions on Pattern Analysis and Machine Intelligence 2000; 22(2):179–190.
2. Burt PJ, Hong T and Adelson EH. The Laplacian pyramid as a compact image code. IEEE Transactions on Communications 1983; 31(4):532–540.
3. Connor CE, Egeth HE and Yantis S. Visual attention: bottom-up versus top-down. Current Biology 2004; 14(19):850–852.
4. Crowley JL, Riff O and Piater J. Fast computation of characteristic scale using a half-octave pyramid. In: Proceedings of the International Workshop on Cognitive Vision; 2002; Zurich, Switzerland. Berlin, Germany: Springer-Verlag; 2002. p. 1–8.
5. Daugman JG. Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression. IEEE Transactions on Acoustics, Speech, and Signal Processing 1988; 36(7):1169–1179.
6. Desimone R and Duncan J. Neural mechanisms of selective visual attention. Annual Review of Neuroscience 1995; 18(1):193–222.
7. Draper BA, Baek K and Boody J. Implementing the expert object recognition pathway. In: Proceedings of the International Conference on Computer Vision Systems; 2003; Graz, Austria. Berlin, Germany: Springer-Verlag; 2003. p. 1–11.
8. Draper BA and Lionelle A. Evaluation of selective attention under similarity transformations. Computer Vision and Image Understanding 2005; 100(1):152–171.
9. Engel PM. INBC: an incremental algorithm for dataflow segmentation based on a probabilistic approach. Porto Alegre: Universidade Federal do Rio Grande do Sul; 2009. (Technical Report RP-3690).
10. Engel S, Zhang X and Wandell B. Colour tuning in human visual cortex measured with functional magnetic resonance imaging. Nature 1997; 388(6637):68–71.
11. Frintrop S. VOCUS: a visual attention system for object detection and goal-directed search [PhD thesis]. Bonn: Universität Bonn; 2006.
12. Greenspan H, Belongie S, Goodman R, Perona P, Rakshit S and Anderson CH. Overcomplete steerable pyramid filters and rotation invariance. In: Proceedings of IEEE Computer Vision and Pattern Recognition; 1994; Seattle, WA. Los Alamitos, CA: IEEE Press; 1994. p. 222–228.
13. Harel J and Koch C. On the optimality of spatial attention for object detection. In: Proceedings of the 5th International Workshop on Attention in Cognitive Systems; 2009; Santorini, Greece. Berlin, Germany: Springer-Verlag; 2009. p. 1–14. (v. 5395).
14. Heinen MR and Engel PM. Visual selective attention model for robot vision. In: Proceedings of the 5th IEEE Latin American Robotics Symposium; 2008; Salvador, Brazil. Los Alamitos, CA: IEEE Press; 2008. p. 1–6.
15. Heinen MR and Engel PM. Evaluation of visual attention models under 2D similarity transformations. In: Proceedings of the 24th ACM Symposium on Applied Computing; 2009; Honolulu, Hawaii. New York, NY: ACM Press; 2009. (Special Track on Intelligent Robotic Systems).
16. Indiveri G, Mürer R and Kramer J. Active vision using an analog VLSI model of selective attention. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing 2001; 48(5):492–500.
17. Itti L. Models of bottom-up attention and saliency. San Diego: Elsevier Press; 2005. p. 576–582.
18. Itti L and Koch C. Computational modelling of visual attention. Nature Reviews Neuroscience 2001; 2(3):194–203.
19. Itti L, Koch C and Niebur E. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 1998; 20(11):1254–1259.
20. Kentridge R, Heywood C and Davidoff J. Color perception. In: Arbib MA (Ed.). The Handbook of Brain Theory and Neural Networks. 2nd ed. Cambridge: MIT Press; 2003. p. 230–233.
21. Klein RM. Inhibition of return. Trends in Cognitive Sciences 2000; 4(4):138–147.
22. Koch C and Ullman S. Shifts in selective visual attention: toward the underlying neural circuitry. Human Neurobiology 1985; 4(4):219–227.
23. Lee KW, Buxton H and Jianfeng F. Cue-guided search: a computational model of selective attention. IEEE Transactions on Neural Networks 2005; 16(4):910–924.
24. Leventhal AG. The Neural Basis of Visual Function. Boca Raton: CRC Press; 1991. (v. 4, Vision and Visual Dysfunction).
25. Lindeberg T. Feature detection with automatic scale selection. International Journal of Computer Vision 1998; 30(2):79–116.
26. Liu YH and Wang XJ. Spike-frequency adaptation of a generalized leaky integrate-and-fire model neuron. Journal of Computational Neuroscience 2001; 10(1):25–45.
27. Lowe DG. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 2004; 60(2):91–110.
28. Marfil R, Bandera A, Rodríguez JA and Sandoval F. A novel hierarchical framework for object-based visual attention. In: Proceedings of the 5th International Workshop on Attention in Cognitive Systems; 2009; Santorini, Greece. Berlin, Germany: Springer-Verlag; 2009. p. 27–40. (v. 5393).
29. Marques O, Mayron L, Borba G and Gamba H. An attention-driven model for similar images with image retrieval applications. EURASIP Journal on Advances in Signal Processing 2007; (1):1–17.
30. Mozer MC and Sitton M. Computational modeling of spatial attention. In: Pashler H (Ed.). Attention. London: Psychology Press; 1998. p. 341–395.
31. Nagai Y. From bottom-up visual attention to robot action learning. In: Proceedings of the 8th IEEE International Conference on Development and Learning; 2009; Shanghai, China. Los Alamitos, CA: IEEE Press; 2009.
32. Niebur E and Koch C. Control of selective visual attention: modeling the "where" pathway. Neural Information Processing Systems 1996; 8(1):802–808.
33. Orabona F, Metta G and Sandini G. Object-based visual attention: a model for a behaving robot. In: Proceedings of the 3rd Workshop on Attention and Performance in Computational Vision; 2005; San Diego, CA. Los Alamitos, CA: IEEE Press; 2005.
34. Ouerhani N, Bur A and Hügli H. Visual attention-based robot self-localization. In: Proceedings of the European Conference on Mobile Robots; 2005; Ancona, Italy. Los Alamitos, CA: IEEE Press; 2005. p. 8–13.
35. Pashler H. The Psychology of Attention. Cambridge: MIT Press; 1997.
36. Perko R, Wojek C, Schiele B and Leonardis A. Integrating visual context and object detection within a probabilistic framework. In: Proceedings of the 5th International Workshop on Attention in Cognitive Systems; 2009; Santorini, Greece. Berlin, Germany: Springer-Verlag; 2009. p. 54–68. (v. 5395).
37. Treisman AM. Features and objects: the fourteenth Bartlett memorial lecture. The Quarterly Journal of Experimental Psychology 1988; 40(2):201–237.
38. Treisman AM and Gelade G. A feature-integration theory of attention. Cognitive Psychology 1980; 12(1):97–136.
39. Tsotsos JK, Culhane SM, Wai WYK, Lai Y, Davis N and Nuflo F. Modeling visual attention via selective tuning. Artificial Intelligence 1995; 78(1/2):507–545.
40. Vieira-Neto H. Visual novelty detection for autonomous inspection robots [PhD thesis]. Essex: University of Essex; 2006.
41. Vieira-Neto H and Nehmzow U. Visual novelty detection with automatic scale selection. Robotics and Autonomous Systems 2007; 55(9):693–701.
42. Wang T, Zheng N and Mei K. A visual brain chip based on selective attention for robot vision application. In: Proceedings of the IEEE International Conference on Space Mission Challenges for Information Technology; 2009. Los Alamitos, CA: IEEE Press; 2009. p. 93–97.
43. Witkin AP. Scale-space filtering. In: Proceedings of the International Joint Conference on Artificial Intelligence; 1983; Karlsruhe, Germany. San Francisco, CA: Morgan Kaufmann; 1983. p. 1019–1022.



Cite this article

Heinen, M.R., Engel, P.M. NLOOK: a computational attention model for robot vision. J Braz Comp Soc 15, 3–17 (2009). https://doi.org/10.1007/BF03194502


Keywords

  • robot vision
  • visual attention
  • selective attention
  • focus of attention
  • biomimetic vision