
Infeasible paths in the context of data flow based testing criteria: Identification, classification and prediction

Abstract

Infeasible paths constitute a bottleneck for the full automation of software testing, one of the most expensive activities in software quality assurance. Research effort has been devoted to infeasible paths along three main approaches: prediction, classification and identification of infeasibility. This work reports the results of experiments on data flow based criteria and of studies addressing these three approaches. Identification, classification and prediction of infeasible paths are revisited in the context of data flow based criteria (the Potential Uses Criteria, PU). These aspects are also addressed in the scope of integration and object-oriented testing. Implementation aspects of mechanisms and facilities to deal with infeasibility are presented, taking into consideration Poketool, a tool that supports the application of the Potential Uses Criteria family. The results and ideas presented contribute to reducing the effort spent on infeasible paths during the testing activity.
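To make the notion of an infeasible path concrete, the following is an illustrative sketch (not taken from the paper): a classic source of infeasibility is a pair of correlated conditionals. A data flow criterion may require exercising a path that takes one branch but not the other, yet no input can do so, because both guards evaluate identically.

```python
# Hypothetical example of an infeasible path caused by correlated guards.
def classify(x):
    sign = 0
    if x > 0:
        sign = 1       # definition of `sign` in the first branch
    if x > 0:
        return sign    # use of `sign`: reached only when sign == 1
    return -sign       # use of `sign`: reached only when sign == 0

print(classify(5))     # takes both true branches
print(classify(-3))    # takes both false branches
```

Any path that enters the first `if` but skips the second (or the converse) is infeasible: since both predicates are `x > 0`, they agree for every input, so a def-use association requiring the mixed path can never be covered. Criteria and tools must either recognize such associations as infeasible or leave the tester to discharge them by hand, which is the effort the paper aims to reduce.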



Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Vergilio, S.R., Maldonado, J.C. & Jino, M. Infeasible paths in the context of data flow based testing criteria: Identification, classification and prediction. J Braz Comp Soc 12, 73–88 (2006). https://doi.org/10.1007/BF03192389


Keywords

  • software testing
  • data flow based criteria
  • infeasible paths