• Software failure can cause loss of time, money and company reputation, and can even cause injury and death.
• Error/mistake → defect/fault/bug → failure. Not all defects turn into failures.
• Defects occur because human beings are fallible, and because of time pressure, complex infrastructure, complex code, changing technologies and the interaction of many systems.
• Per the syllabus, efficiency is also a non-functional behavior. The standard for software product quality is ISO 9126.
• Learning lessons from previous projects and incorporating them is an aspect of quality assurance. Development standards, training and defect analysis are all quality assurance activities, alongside testing.
• Testing can have the following objectives: finding defects, preventing defects, gaining confidence about the level of quality, gaining understanding, and providing information for decision making.
• Designing tests early can help prevent defects being introduced into the code: the thought process and activities involved in designing tests early in the life cycle (verifying the test basis via test design) prevent defects from reaching the code.
• During operational testing, the main objective may be to assess system characteristics such as reliability and availability.
• Dynamic testing can show failures that are caused by defects. Debugging is the development activity to find, analyze and remove the cause of a defect.
• Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.
• Early testing should have defined objectives.
• Finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations.
• Fundamental test processes may overlap or take place concurrently.
• Test planning is the activity of defining the objectives of testing and the specification of test activities in order to meet the objectives and mission.
• Test control is the ongoing activity of comparing actual progress against the test plan, and reporting status, including deviations from the plan.
Test analysis and design (READ IT FROM SYLLABUS / NEW BOOKS)
• Book: reviewing the software integrity level (risk level) and risk analysis reports.
• Evaluate testability of the test basis and test objects.
• Identify and prioritize test conditions based on analysis of the test items and the specification.
• Design and prioritize high-level test cases.
• Identify necessary test data to support the test conditions and test cases.
• Design the test environment setup and identify required infrastructure and/or tools.
• Create bi-directional traceability between test cases and the test basis.
Test implementation and execution
• Finalize, implement and prioritize test cases (including the identification of test data).
• Verify and update bi-directional traceability between the test basis and test cases. (Book)
• Evaluating exit criteria and reporting should be done for each test level.
• Independent testing may be carried out at any level of testing. Independence means avoiding author bias, not replacement of familiarity.
• Looking for failures in a system requires curiosity, professional pessimism, a critical eye, attention to detail, good communication with development peers, and experience on which to base error guessing.
• Defect information can help developers develop their skills.
• Testers should collaborate rather than battle — remind everyone of the common goal of a better quality system.
• Code of ethics: PUBLIC, CLIENT AND EMPLOYER, PRODUCT, JUDGMENT, MANAGEMENT, PROFESSION, COLLEAGUES, SELF.
• The software development model must be adapted to the context of project and product characteristics.
• Functional and structural tests can be carried out at any level.
• Indicators for maintenance testing: modification, migration and retirement.
• COTS = commercial off-the-shelf.
• A model can have more or fewer than four levels. CMMI = Capability Maturity Model Integration. Software life cycle processes: IEEE/IEC 12207.
• Regression testing is increasingly important in an iterative-incremental development model.
• Characteristics of good testing: for every development activity there is a corresponding testing activity (V-model).
• Each test level has test objectives specific to that level.
• The analysis and design of tests for a given test level should begin during the corresponding development activity.
• Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle.
• Test levels can be combined and reorganized. (Read the test levels, and the test basis and test objectives of the different test levels, from the syllabus.)
• In component testing, stubs, drivers and simulators are used.
• In component testing, test cases are derived from work products such as a specification of the component, the software design or the data model.
• In system integration testing, the developing organization may control only one side of the interface. This might be considered a risk. Business processes implemented as workflows may also be involved.
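The stubs and drivers mentioned for component testing can be sketched in a few lines of Python. This is a minimal illustration, not from the syllabus: the component name `compute_total` and the stubbed `PriceServiceStub` are hypothetical examples.

```python
# Component testing with a driver and a stub (illustrative sketch).
# All names here are hypothetical examples.

def compute_total(quantity, price_service):
    """Component under test: depends on an external price service
    that may not be integrated yet."""
    unit_price = price_service.get_price()
    return quantity * unit_price

class PriceServiceStub:
    """Stub: replaces the missing real service with a fixed,
    predictable answer so the component can be tested in isolation."""
    def get_price(self):
        return 10.0

def test_compute_total():
    """Driver: calls the component directly, outside the full system."""
    assert compute_total(3, PriceServiceStub()) == 30.0

test_compute_total()
```

The stub controls the component's inputs from below; the driver exercises it from above, which is exactly the isolation component testing relies on.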
• Finding defects is not the main focus of acceptance testing. Acceptance testing is not necessarily a final level of testing; a large-scale system integration test may come after acceptance testing.
• A COTS product may be acceptance tested before it is installed; a component may be acceptance tested during component testing; new functionality can be acceptance tested.
• Operational acceptance testing includes: backup/restore, data load and migration tasks.
• A test type is focused on a particular test objective, which could be: functional testing, non-functional testing, structural or architectural testing, confirmation and regression testing.
• Example of the different types of testing — structural testing: control flow.
• Repeatability is a characteristic of tests used for regression/confirmation testing.
• Regression testing can be used at all test levels and includes functional, non-functional and structural testing.
• Maintenance testing is triggered by modification, migration or retirement. A distinction should be made between planned releases and hot fixes.
• Migration testing (conversion testing) is also needed for data migration.
• Modification includes corrective/emergency changes, patches to correct newly exposed/discovered vulnerabilities of the operating system, planned enhancements, planned operating system/database upgrades, and planned COTS upgrades.
• Maintenance testing of a migration includes operational tests of the new environment as well as of the changed software. It also includes migration/conversion testing.
• Maintenance testing for retirement may include testing of data migration or archiving for long data-retention policies.
• Maintenance testing can be done at all test levels and for all test types. It can be difficult if specifications are out of date or missing, or if testers with domain knowledge are not available.
• Any software work product can be reviewed. Benefits of reviews (book): early detection and correction, fewer defects.
• Reviews, static analysis and dynamic testing all have the same objective — identifying defects.
• The way a review is conducted depends on the agreed objectives of the review (e.g. finding defects, gaining understanding, educating testers and new team members, or discussion and decision by consensus).
• Planning (book): defining review criteria and checking entry criteria (…).
• An exit criterion is checked during follow-up. Do not forget it!!!
• A single software product or related work product may be the subject of more than one review.
• Main purpose of an informal review: an inexpensive way to get some benefit.
• A technical review may include peers and technical experts, with optional management participation.
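The repeatability point above can be illustrated: a regression test must be deterministic (fixed inputs, no randomness or clock dependence) so that every run at every release gives the same verdict. The function `apply_discount` is a hypothetical example, not from the syllabus.

```python
# Repeatability in a regression test: fixed inputs and fixed expected
# outputs, so the test gives the same verdict on every run.
# The function under test is a hypothetical example.

def apply_discount(total, rate):
    """Return the total after applying a fractional discount rate."""
    return round(total * (1 - rate), 2)

def test_discount_regression():
    # Deterministic checks, safe to re-run after every change.
    assert apply_discount(100.0, 0.15) == 85.0
    assert apply_discount(19.99, 0.0) == 19.99

test_discount_regression()
```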
• A technical review also produces a review report with a list of findings and, when appropriate, recommendations related to the findings. (Not in books: there is a role called optional reader in an inspection — the most formal review.)
• So the actual sequence is developed during test implementation and execution.
• The purpose of a test design technique is to identify test conditions, test cases and test data.
• Specification-based (black-box) testing includes both functional and non-functional testing.
• In specification-based testing, formal/informal models are used for the specification of the problem to be solved, the software or its components. Test cases can be derived systematically from these models.
• Apart from the knowledge of users, developers, testers and other stakeholders, knowledge about the software's usage and environment, and about likely defects and their distribution, is another source of information.
• Equivalence partitions can be found for both valid and invalid data. Partitions can also be identified for outputs, internal values, time-related values and interface parameters. Equivalence partitioning can be used to achieve input and output coverage goals.
• Behavior at the edge of a partition is more likely to be incorrect than behavior within the partition. Tests can be designed for both valid and invalid boundary values. Boundary value analysis can be applied at all test levels; BVA is relatively easy to apply and its defect-finding capability is high.
• Boundary value analysis is an extension of equivalence partitioning; like other black-box techniques it can be applied to user input on a screen, a time range or a table range.
• Decision tables are a good way to capture system requirements that contain logical conditions (business rules) and to document internal system design. Rules can be either true or false.
• A state table shows the relationship between inputs and states, and can highlight possible transitions that are invalid. Tests can be designed to cover any and every kind of transition/state.
• State transition testing is heavily used within embedded systems and technical automation in general, and for testing screen dialogues.
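Equivalence partitioning and boundary value analysis as described above can be sketched for a hypothetical validator that accepts ages 18 to 65 inclusive (the function and the range are illustrative assumptions, not from the syllabus):

```python
# EP and BVA for a hypothetical age validator (valid partition: 18..65).

def is_valid_age(age):
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per partition
# (invalid-low, valid, invalid-high).
assert is_valid_age(10) is False
assert is_valid_age(40) is True
assert is_valid_age(90) is False

# Boundary value analysis: test at and just outside each edge, where
# off-by-one defects are most likely.
for age, expected in [(17, False), (18, True), (65, True), (66, False)]:
    assert is_valid_age(age) is expected
```

Note how BVA adds four targeted tests on top of the three EP representatives, concentrating effort where the syllabus says defects cluster.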
• Use cases at the system functionality level are system use cases.
• Each use case has preconditions which need to be met for the use case to work successfully. Each use case terminates with postconditions, which are the observable results and final state of the system.
• A use case has a mainstream (most likely) scenario and alternative scenarios.
• Use cases are very useful for designing acceptance tests with customer/user participation. They also help uncover integration defects caused by the interaction and interfaces of different components.
• Structure-based testing can be applied at the component level (statements, decisions, branches), the integration level (call tree) or the system level (menu structure, business process or process structure).
• Decision coverage is stronger than statement coverage: 100% decision coverage guarantees 100% statement coverage, but not vice versa. Condition coverage is stronger than decision coverage.
• Tool support is useful for structural testing of code.
• The concept of coverage can also be applied at other test levels (e.g. integration testing).
• Error guessing is one form of experience-based technique. A systematic approach to error guessing is a fault attack: enumerate a list of possible defects and design tests to attack them.
• All testing activities are concurrent in exploratory testing, and they are time-boxed. Exploratory testing can serve as a check on the test process, to help ensure that the most serious defects are found.
• Developers may participate in lower levels of testing, but their lack of objectivity often limits their effectiveness. An independent tester can verify assumptions people made during specification and implementation.
• Drawbacks of independent testing include: isolation from the development team, developers may lose a sense of responsibility for quality, and independent testers may be seen as a bottleneck or blamed for delays.
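The claim that decision coverage is stronger than statement coverage can be demonstrated with a short, hypothetical function: one test can execute every statement while still missing one outcome of a decision.

```python
# Why 100% statement coverage does not imply 100% decision coverage.
# Hypothetical function: applies a discount to large orders.

def discounted(total):
    if total > 100:          # a decision with True and False outcomes
        total = total - 10   # the only statement inside the branch
    return total

# One test with total=150 executes every statement in the function
# (100% statement coverage)...
assert discounted(150) == 140

# ...but the False outcome of the decision was never exercised.
# A second test is required to reach 100% decision coverage:
assert discounted(50) == 50
```

This is the "not vice versa" in the note above: the decision-coverage test set always contains a statement-coverage test set, but not the other way around.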
• Testing may be done by someone in a specific testing role, or by a project manager, quality manager, developer, subject-matter expert, or others (infrastructure or IT operations).
• The role of test leader may be performed by a project manager, development manager, quality assurance manager, or the manager of a test group. In large groups, two positions may exist: test manager and test leader. The test leader plans, monitors and controls the testing activities.
• Testers at the component and integration levels would often be developers; testers at the acceptance test level would be business experts and users; testers for operational acceptance testing would be operators.
• Test planning activity (book): defining the amount, level of detail, structure and templates for the test documentation.
• TPA (book): setting the level of detail for test procedures in order to provide enough information to support reproducible test preparation and execution.
• Entry criteria: availability of the test environment, test tools, testable code and test data.
• Exit criteria (book): estimates of defect density or reliability measures; residual risk, such as defects not fixed or lack of test coverage in certain areas.
• Testing effort depends on the development process (books): e.g. the stability of the organization and of the test process.
• The test approach is the implementation of the test strategy for a specific project. It is used for selecting the test design techniques and test types to be applied, and for defining entry/exit criteria. The selected approach depends on the context and may consider risks, hazards and safety, available resources and skills, the technology, the nature of the system, test objectives and regulations.
• Actions taken during test control may cover any test activity and may affect any other software life cycle activity or task. Test control (books): making decisions based on information from test monitoring.
• The purpose of configuration management is to establish and maintain the integrity of the products (components, data and documentation) of the software or system through the project and product life cycle.
• During test planning, the configuration management procedures and infrastructure (tools) should be chosen, documented and implemented.
• The level of risk is determined by the likelihood of an adverse event happening and the impact (the harm resulting from that event).
• Project risks — organizational factors (books): improper attitude toward, or expectations of, testing (e.g. not appreciating the value of finding defects during testing).
• Project risks — technical/specialist issues (book): test environment not ready on time; late data conversion/migration planning and development; late testing of data conversion/migration tools.
• Project risks — technical/specialist issues: low quality of the design, code, configuration data, test data and tests.
• Product risks (book): poor data integrity and quality (e.g. data migration issues, data conversion problems, data transport problems, violation of data standards).
• Testing as a risk-control activity provides feedback about the residual risk by measuring the effectiveness of critical defect removal and of contingency plans.
• A risk-based approach starts in the initial stages of a project. Identified risks can be used to derive (book): the test techniques, test prioritization, non-testing activities, and the extent of testing to be carried out.
• An organization should establish an incident management process and rules for classification. Incidents may be raised during development, review, testing or use of a software product (any phase of the software life cycle). Incidents can be raised against code and any kind of documentation.
• Items in an incident report (books): scope, severity and priority of the incident; the date the incident was discovered.
• Tools can be used for four types of activity: execution, management, exploration/monitoring, and others (e.g. spreadsheets, email, word processors).
• Purpose of test tools in testing: increase efficiency by automating repetitive tasks; automate activities that require significant resources (static testing); automate activities that cannot be done manually (performance testing); increase reliability (e.g. large data comparisons or simulation).
• A test framework can mean: reusable and extensible test libraries; a type of test automation design (data-driven / keyword-driven); or the overall process of execution of testing.
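The risk-level definition above (likelihood of the adverse event combined with its impact) is often worked with simple ordinal scales. A minimal sketch, assuming hypothetical 1-to-5 scales and a multiplicative score, which the syllabus does not mandate:

```python
# Risk level as likelihood x impact on hypothetical 1..5 ordinal
# scales. The scoring scheme is an illustrative assumption; teams
# choose their own scales and combination rule.

def risk_level(likelihood, impact):
    """Higher score => test this area earlier and more thoroughly."""
    return likelihood * impact

# Rare-but-catastrophic vs. frequent-but-minor:
rare_catastrophic = risk_level(1, 5)    # 5
frequent_minor   = risk_level(4, 2)     # 8

assert frequent_minor > rare_catastrophic
```

The point of the sketch: risk-based prioritization ranks test areas by the combined score, not by likelihood or impact alone.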
• Some tools can be intrusive, affecting the actual outcome of the test due to differences in actual timing or extra instructions. The consequence of intrusive tools is called the probe effect.
• Tools that support activities over the entire software life cycle: test management tools, requirements management tools, configuration management tools, incident management tools.
• Test management tools provide interfaces for executing tests, tracking defects and managing requirements, and support quantitative analysis. They also support tracing the test objects to requirements, and might have their own version control or an interface to an external one.
• Requirements management tools store requirements and their attributes and trace requirements to individual tests; they may also help identify inconsistent or missing requirements.
• Tool support for static testing: review tools, static analysis tools, modeling tools.
• Review tools assist with the review process, checklists and guidelines, and store review comments.
• Static analysis tools help developers and testers find defects prior to dynamic testing by providing support for enforcing coding standards (including secure coding) and for analysis of structures and dependencies; they also help planning or risk analysis by providing metrics of the code.
• Tool support for test specification: test design tools and test data preparation tools. Test design tools generate test inputs, executable tests or test oracles from requirements, GUIs, design models or code. Test data preparation tools manipulate data files, databases or data transmissions to set up the test data to be used.
• Test execution tools enable tests to be run automatically or semi-automatically, and usually provide a test log for each test run. They support GUI-based configuration for parameterization of data.
• Security tools evaluate the ability of the software to protect data confidentiality, integrity, authentication, authorization, availability and non-repudiation. Security tools are mostly focused on a particular technology, platform and purpose.
• Dynamic analysis tools find defects that are only evident when the software is executing.
• Dynamic analysis tools are typically used in component and component integration testing, and for testing middleware.
• Test machines used in performance testing are known as load generators.
• Monitoring tools continuously analyze, verify and report on the usage of specific system resources, and give warnings of possible service problems.
• Data quality assessment tools are tools supporting a specific application area.
• Benefits of tool support: repetitive work is reduced; greater consistency and repeatability; objective assessment (static measures, coverage); ease of access to information about tests or testing.
• Risks of using tools: FROM SYLLABUS.
• Scripts made by capturing tests by recording may be unstable when unexpected events occur.
• Data-driven tests take their data from a source separate from the scripts (e.g. spreadsheets), or from an algorithm which generates input automatically based on runtime parameters supplied to the application.
• Keyword-driven tests use keywords stored in a spreadsheet to decide on actions and test data.
• For test execution tools, the expected results for each test need to be stored for later comparison.
• Test management tools need to interface with other test tools.
• Success factors: read them from the syllabus.
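The keyword-driven approach above can be sketched in a few lines: actions are registered under keyword names, and a table (which a real tool would read from a spreadsheet) drives which actions run with which data. All keyword and function names here are hypothetical examples, not from any real tool.

```python
# Minimal keyword-driven runner sketch. The "spreadsheet" is modeled
# as a list of (keyword, arguments) rows; names are hypothetical.

ACTIONS = {}

def keyword(name):
    """Register a function as an action the keyword table can invoke."""
    def register(fn):
        ACTIONS[name] = fn
        return fn
    return register

state = {"logged_in": False}

@keyword("login")
def login(user):
    state["logged_in"] = True
    state["user"] = user

@keyword("logout")
def logout():
    state["logged_in"] = False

# Each row pairs a keyword with its test data, as in the notes above.
test_table = [
    ("login", ["alice"]),
    ("logout", []),
]

for kw, args in test_table:
    ACTIONS[kw](*args)   # look up the action by keyword and run it

assert state["logged_in"] is False and state["user"] == "alice"
```

The design choice this illustrates: non-programmers can add rows to the table without touching the action implementations, which is the main selling point of keyword-driven testing.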