Posts

test cases are so pen-and-paper scripting

Thoughts on https://www.linkedin.com/posts/ : I'd agree that testing is still about artefacts and algorithms, but test cases are so pen-and-paper scripting. And maybe they are the earliest, albeit the crudest, manifestations of testmapping. What we once thought could be resolved by test cases is actually more thoroughly resolved with testmapping / mapping out by scenarios. Hence I would readily junk test-case logging for testmap documentation.

Transient WIP vs Permanent Logs (about title-only-tickets)

Personal take: when your development team (coders + testers) is small (and I mean really small, like fewer than 10 people), you know you've got each other's backs. So I've accepted the 'title-only ticket' (especially one with a descriptive title), given that whoever created it can and will reliably discuss it with the group, or visually demo it, with a greater-than-99% certainty of replicability. It is then up to me to docu my own understanding of the issue at hand, repeat it back to them, and once agreed, my docu becomes the standard by which the issue will be judged. The above scenario mostly happens within mature teams, I guess.

A most irresponsible move is to create a title-only ticket, then halfway through your demo realize you can't remember the STR (steps to replicate) to show everyone the issue. That is an irresponsible waste of everyone's time. A team that can abbreviate its WIP (work in progress) is a great team; after all, WIP is transient -- however we go about i...

{Q-scoped} ~ {Features/Featurettes}, {Scenarios}, {Evaluations}

Cartoon image is from Wayne Roseberry ( source link ). Totally agree with this [ source link ], though I seem to call things slightly differently:

a. Model/modeling = how I look at the {subject_under_test}: what is it trying to solve, what are its other uses;
b. Parts, topics, assessments = {Features/Featurettes}, {Scenarios}, {Evaluations};
c. {Coverage} = [ Qty{Features,Featurettes} + Qty{Scenarios} ];
d. {Q} = Evaluations[ {Features,Featurettes} x {Scenarios} ];
e. {Q scoped} = Evaluations[ Qty{Features,Featurettes} x Qty{Scenarios} ];

The last one (e.) is a corollary of (c.) and (d.), and is essentially the contents of the {test report, bug report}.
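The bookkeeping in (c.) and (e.) can be sketched in a few lines of code. This is only my own toy rendering of the notation above -- the feature and scenario names, and the pass/fail style of {Evaluations}, are placeholder assumptions, not part of the model:

```python
# Hypothetical sketch of {Coverage} and {Q scoped}; names are illustrative only.
features = ["login", "logout", "password-reset"]          # {Features/Featurettes}
scenarios = ["happy-path", "bad-input", "network-drop"]   # {Scenarios}

# (c.) {Coverage} = Qty{Features,Featurettes} + Qty{Scenarios}
coverage = len(features) + len(scenarios)

# (e.) {Q scoped} = Evaluations[ Qty{Features} x Qty{Scenarios} ]:
# one evaluation per (feature, scenario) pair.
evaluations = {(f, s): True for f in features for s in scenarios}
evaluations[("password-reset", "network-drop")] = False   # a found bug

passed = sum(evaluations.values())
total = len(evaluations)
print(coverage)                                # 6
print(f"{passed}/{total} evaluations passed")  # 8/9 evaluations passed
```

The dict of (feature, scenario) pairs is exactly the grid a {test report} would enumerate: which cells passed, which failed, and which inputs triggered which behavior.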

do QA/Testers contribute any value to the Product's Quality?

With all due respect, my take on the matter is:

a. Hypothetically, if a teamA{devs+testers+designers} work together, resulting in a product{Quality = 0.95 good};
b. if the same teamA{devs+designers}, minus the {Testers}, work together, resulting in a product{Q = 0.65 good};
c. then it means the product{Q} suffers a {0.30 good} reduction without the {Testers};
d. so, mathematically, the Testers seem to contribute an exact value of {0.30 good} to the Product{Q}, assuming the Devs work on the findings of the Testers;
e. if the Devs don't work on the Testers' findings, then the testers still would've contributed their share to the product being built, but the negative impact of the Devs' under-contribution will adversely affect the overall Product{Q} -- making {Q} fall below agreed values.
f. The relationship of contributions from actors {designers, devs, testers} is apparently non-linear -- otherwise things would've been so obvious, this wouldn't be...
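The thought experiment in (a.)-(e.) can be rendered as a toy function. The numbers are the hypothetical values from the post, not measurements, and the conditional in the function is my own way of encoding point (e.):

```python
# Toy rendering of the thought experiment; values are hypothetical, not measured.
def product_q(base_q: float, tester_delta: float, devs_act_on_findings: bool) -> float:
    """Testers' contribution only lands in {Q} if the Devs act on their findings."""
    return round(base_q + (tester_delta if devs_act_on_findings else 0.0), 2)

q_without_testers = 0.65   # (b.) product{Q} from devs + designers alone
tester_contribution = 0.30 # (c.) the reduction observed without the {Testers}

print(product_q(q_without_testers, tester_contribution, True))   # 0.95
print(product_q(q_without_testers, tester_contribution, False))  # 0.65
```

Of course, per point (f.), a simple additive model like this is exactly what the non-linearity argument says we *don't* have in reality; the sketch only captures the arithmetic of the hypothetical.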

Senior vs Junior Testers be like:

Clip intended for educational purposes only. (no copyright infringement intended)

A Tester's Life is constant proofs

QA/Tester's personal log: 2025.254. Time flies; it is Thursday, and the week has just zipped by. One may wonder what a Tester's day looks like. Well, for me, every day is like 'thesis day':

(a) you volunteer to take up a project (i.e. a subject_under_test: could be a super_system, system, or sub_system);
(b) pick your way into it ('pick' as a verb, like using a pick-axe), learning everything you can about the subject_under_test (i.e. explore it); and
(c) compile a test_report of your evaluation of the subject_under_test.

Note that (c) is not just a list of what it 'can do' and 'not do'; it also specifies under which scenarios it behaves as such, and what inputs triggered which behavior. It also requires you to provide proof of your claims: this comes in the form of vids, pics, urls, files, software builds -- virtually anything that could be an artefact (yep, archeologists don't have a monopoly on artefacts) showing proof that supp...

Pyramid, Honeycomb, Feather Testing Models

How much time should be allotted to each level of test activity (unit test, integration test, system test, end-to-end test)? It varies; but definitely just enough to get the job done to an agreed level of Acceptance Credence.

A model that is not adaptable to the task at hand constricts development, and needlessly constricts the actors themselves (coder, tester) -- such that either
(a) the team starts to get coerced into following the model, sacrificing any innate efficiency; or
(b) the team ignores the model and lets the nature of the task dictate the most efficient workflow.

The nature of the test_paths themselves would provide the impetus to place emphasis on either
(c) integration tests (e.g. the majority of the test_paths are integration subsystems);
(d) a balance of unit tests and end-to-end tests (e.g. the product is simple, or a self-contained supersystem); or
(e) a balance of unit, integration, and comprehensive systemic tests ...
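Letting the test_paths pick the shape, as in (c)-(e), could be sketched like this. The effort splits and the mapping rule are placeholder assumptions of mine, purely to illustrate the idea that the model should follow the work rather than the other way around:

```python
# Illustrative effort splits for the three shapes; percentages are placeholders,
# not prescribed ratios.
models = {
    "pyramid":   {"unit": 0.70, "integration": 0.20, "e2e": 0.10},
    "honeycomb": {"unit": 0.20, "integration": 0.60, "e2e": 0.20},
    "feather":   {"unit": 0.10, "integration": 0.30, "e2e": 0.60},
}

def pick_model(test_paths: dict) -> str:
    """Let the dominant kind of test_path dictate the shape (points c-e above)."""
    dominant = max(test_paths, key=test_paths.get)
    return {"unit": "pyramid", "integration": "honeycomb", "e2e": "feather"}[dominant]

# e.g. a product whose test_paths are mostly integration subsystems:
chosen = pick_model({"unit": 12, "integration": 40, "e2e": 8})
print(chosen)  # honeycomb
print(models[chosen])
```

The point is the direction of the arrow: count the test_paths first, then choose the shape -- not the reverse, which is what (a) warns against.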