
Software Quality Assurance – General Testing Interview Questions


Interview Questions for QA, Software Testers

Q. 51: What is Parallel Testing?

Parallel testing involves testing multiple products or sub-components simultaneously. A
parallel test station typically shares a set of test equipment across multiple test sockets,
but, in some cases, it may have a separate set of hardware for each unit under test (UUT).

The majority of nonparallel test systems test only one product or sub-component at a time,
leaving expensive test hardware idle more than 50 percent of the test time. Thus, with
parallel testing, you can increase the throughput of manufacturing test systems without
spending a lot of money to duplicate and fan out additional test systems.
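
As a rough illustration of the idea, here is a minimal sketch, assuming a hypothetical testUnit routine for one test socket and using Java's standard ExecutorService; a real station would drive the shared test equipment behind each call:

import java.util.List;
import java.util.concurrent.*;

public class ParallelTestStation {

    // Hypothetical check of a single unit under test (UUT) occupying one socket.
    static boolean testUnit(String uutId) {
        // ... drive the shared test equipment for this socket ...
        return true; // pass/fail result
    }

    public static void main(String[] args) throws Exception {
        List<String> sockets = List.of("UUT-1", "UUT-2", "UUT-3", "UUT-4");

        // One worker per test socket: all four units are exercised concurrently.
        ExecutorService station = Executors.newFixedThreadPool(sockets.size());
        List<Future<Boolean>> results = station.invokeAll(
                sockets.stream()
                       .map(id -> (Callable<Boolean>) () -> testUnit(id))
                       .toList());

        for (int i = 0; i < sockets.size(); i++) {
            System.out.println(sockets.get(i) + (results.get(i).get() ? " passed" : " failed"));
        }
        station.shutdown();
    }
}

With several sockets exercised at once, the station spends most of its time testing rather than waiting, which is where the throughput gain comes from.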

<<<<<< =================== >>>>>>

Q. 52: What is Comparison Testing?

Comparison testing compares a product's strengths and weaknesses with those of competitors' products.

<<<<<< =================== >>>>>>

Q. 53: What is Probe Testing?

Probe testing is essentially the same as exploratory testing. It is a creative, intuitive process: everything testers do is optimized to find bugs fast, so plans often change as testers learn more about the product and its weaknesses.

Session-based test management is one method of organizing and directing exploratory testing. It lets testers provide meaningful reports to management while preserving the creativity that makes exploratory testing work, and session reports can be processed by tools to produce metrics from the testing.

<<<<<< =================== >>>>>>

Q. 54: What questions would you ask yourself when deciding whether to automate tests?

The best approach is to ask the following questions (a rough break-even sketch follows the list):

1) Automating this test and running it once will cost more than simply running it manually
once. How much more?

2) An automated test has a finite lifetime, during which it must recoup that additional cost.
Is this test likely to die sooner or later? What events are likely to end it?

3) During its lifetime, how likely is this test to find additional bugs (beyond whatever bugs it
found the first time it ran)? How does this uncertain benefit balance against the cost of
automation?
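
To make questions 1 and 3 concrete, a rough break-even calculation often settles the decision. The figures below are purely illustrative assumptions, not numbers from any real project:

public class AutomationBreakEven {
    public static void main(String[] args) {
        // Illustrative assumptions only.
        double manualRunCost    = 15.0;   // minutes to run the test by hand once
        double automationCost   = 240.0;  // minutes to script, debug, and document it
        double automatedRunCost = 1.0;    // minutes of (mostly unattended) machine time per run

        // Number of runs before the automated test recoups its extra cost.
        double breakEvenRuns = automationCost / (manualRunCost - automatedRunCost);
        System.out.printf("Break-even after roughly %.0f runs%n", Math.ceil(breakEvenRuns));
    }
}

If the test is likely to be retired (for example by a UI redesign or feature removal) before it reaches that many runs, automating it probably will not pay back its cost.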

<<<<<< =================== >>>>>>

Q. 55: What do we lose with Automation compared to Manual Testing?

Creating an automated test is usually more time-consuming and costly than running the same test once manually. How much more expensive depends on the product and the automation style.

1) If the product is tested through a GUI and your automation style is to write scripts that drive the GUI, an automated test may be several times as expensive as a manual test (see the sketch after this list).

2) If you use a GUI capture / replay tool that tracks your interactions with the product and
builds a script from them, automation is relatively cheaper. It is not as cheap as manual
testing, though, when you consider the cost of recapturing a test from the beginning after
you make a mistake, the time spent organizing and documenting all the files that make up
the test suite, the aggravation of finding and working around bugs in the tool, and so forth.
Those small “in the noise” costs can add up surprisingly quickly.

3) If you’re testing a compiler, automation might be only a little more expensive than
manual testing, because most of the effort will go into writing test programs for the
compiler to compile. Those programs have to be written whether or not they’re saved for
reuse.
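
To give a feel for what point 1 means by scripts that drive the GUI, here is a minimal sketch using Selenium WebDriver; the URL, element IDs, and expected page title are invented for illustration:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginGuiTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Hypothetical application and element IDs, used only for illustration.
            driver.get("https://example.com/login");
            driver.findElement(By.id("username")).sendKeys("qa_user");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("loginButton")).click();

            // Every expected outcome has to be encoded explicitly in the script.
            if (!driver.getTitle().contains("Dashboard")) {
                throw new AssertionError("Login did not reach the dashboard page");
            }
            System.out.println("Login test passed");
        } finally {
            driver.quit();
        }
    }
}

Every step and every check must be written and maintained by hand, which is the main source of the cost difference described above.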

<<<<<< =================== >>>>>>

Q. 56: What is the difference between Structural testing & functional testing?

Structural testing examines how the program works, taking into account possible pitfalls in
the structure and logic.

Functional testing examines what the program accomplishes, without regard to how it works
internally.
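
A small illustration of the difference, using a hypothetical absoluteValue method: the functional cases are derived from the specification alone, while the structural case is derived from the code in order to force the branch the functional cases happen to miss:

public class AbsoluteValueTest {

    // Code under test.
    static int absoluteValue(int x) {
        if (x < 0) {
            return -x;
        }
        return x;
    }

    public static void main(String[] args) {
        // Functional view: derived from the specification ("returns the magnitude"),
        // with no knowledge of the implementation.
        assert absoluteValue(5) == 5;
        assert absoluteValue(0) == 0;

        // Structural view: derived from the code itself, chosen to force the
        // x < 0 branch that the functional cases above do not reach.
        assert absoluteValue(-7) == 7;

        System.out.println("All checks passed (run with java -ea to enable asserts)");
    }
}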

<<<<<< =================== >>>>>>

Q. 57: What is the difference between code coverage analysis and test coverage analysis?

The two terms refer to the same thing: code coverage analysis is sometimes called test coverage analysis. The academic world generally uses the term "test coverage", whereas practitioners tend to use the term "code coverage".

<<<<<< =================== >>>>>>

Q. 58: What are the basic assumptions behind coverage analysis?

The following assumptions reveal both the strengths and the limitations of the coverage analysis technique.

1) Bugs relate to control flow, and you can expose them by varying the control flow. For example, a programmer may write "if (c)" rather than "if (!c)" (illustrated after this list).

2) You can look for failures without knowing in advance which failures might occur, and all tests are reliable in the sense that a successful test run implies correct behavior. The tester understands what a correct version of the program would do and can identify deviations from that behavior.

3) Other assumptions are achievable specifications, no errors of omission, and no
unreachable code.
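
A minimal sketch of assumption 1, with invented method names: the inverted condition is exposed by varying the control flow, running one test through each branch and comparing the results against expectations:

public class ControlFlowBugDemo {

    // Intended behaviour: grant access only when the account is active.
    // Control-flow bug: the programmer wrote "if (!active)" instead of "if (active)".
    static boolean canAccess(boolean active) {
        if (!active) {
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // Varying the control flow (one test through each branch) and checking
        // each result against the expected value exposes the inverted condition.
        System.out.println("active account:   " + canAccess(true));   // expected true,  actual false
        System.out.println("inactive account: " + canAccess(false));  // expected false, actual true
    }
}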

<<<<<< =================== >>>>>>

Q. 59: What are the main advantages of the statement coverage metric in software testing?

1) The main advantage of the statement coverage metric is that it can be applied directly to object code and does not require processing the source code. Performance profilers commonly use this metric.

2) Because bugs are assumed to be distributed evenly through the code, the percentage of executable statements covered approximates the percentage of faults that can be discovered.

<<<<<< =================== >>>>>>

Q. 60: What are the drawbacks of the statement coverage metric in software testing?

1) It is insensitive to some of the control structures.

2) It does not report whether loops reach their termination condition – only whether the
loop body was executed. With C, C++, and Java, this limitation affects loops that contain
break statements.

3) It is completely insensitive to the logical operators (|| and &&); see the sketch after this list.

4) It cannot distinguish consecutive switch labels.
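
A minimal sketch of drawback 3, with an illustrative function: two tests can execute every statement, yet the short-circuited second operand of || never decides the outcome on its own, so a fault in it goes undetected:

public class StatementCoverageGap {

    // Intended: free shipping for members OR for totals of at least 100.
    // Bug: ">" was written instead of ">=", so a total of exactly 100.0 is wrongly rejected.
    static boolean freeShipping(boolean member, double total) {
        if (member || total > 100.0) {   // faulty second operand; should be >= 100.0
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // These two tests execute every statement (both the "return true" and the
        // "return false" paths), so statement coverage reports 100 percent:
        System.out.println(freeShipping(true, 50.0));   // true,  as expected
        System.out.println(freeShipping(false, 20.0));  // false, as expected

        // But neither test lets the second operand of || decide the outcome on
        // its own, so the boundary bug is exposed only by a case such as:
        System.out.println(freeShipping(false, 100.0)); // expected true, actual false
    }
}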

<<<<<< =================== >>>>>>

