Posts

Showing posts from 2025

The phenomenon of QA (Quality Advocacy) in Test-PhaseSpace

Quality Advocacy
  \___ 3rd tier/level --- designers, coders
  \___ 2nd tier/level --- quality advocates
  \___ 1st tier/level --- grassroots testers

=================
definitions:
=================
..Quality [ Q ] = the state of the product (high or low) at any given time (m).
..QA [ Quality Advocacy ] = the monitoring/measuring of quality to know quality; unless it is monitored/measured, quality [ Q ] cannot be known. Quality Advocacy extends to all 3 tiers/levels.
..QA [ Quality Assurance ] = working together as a Development Team (coders + testers + proj. mgr + designers + etc.) which will always deliver a specific level of quality.
..software tester = 1st level, grassroots interactor [someone...
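A minimal sketch (Python) of how these definitions could be expressed in code. The names here (Tier, QualityMeasurement, QualityAdvocacy, quality_at) are my own illustrative assumptions, not part of the post; the only point it demonstrates is that quality [ Q ] is knowable only where a measurement exists.

----- sketch (Python), illustrative names only -----
from dataclasses import dataclass, field
from enum import Enum


class Tier(Enum):
    GRASSROOTS_TESTER = 1     # 1st tier/level
    QUALITY_ADVOCATE = 2      # 2nd tier/level
    DESIGNER_CODER = 3        # 3rd tier/level


@dataclass
class QualityMeasurement:
    time_m: float             # "at any given time (m)"
    level: str                # e.g. "high" or "low"
    measured_by: Tier         # advocacy extends to all 3 tiers/levels


@dataclass
class QualityAdvocacy:
    measurements: list[QualityMeasurement] = field(default_factory=list)

    def record(self, m: QualityMeasurement) -> None:
        self.measurements.append(m)

    def quality_at(self, time_m: float):
        # Quality [ Q ] is only knowable where a measurement exists;
        # if nothing was monitored/measured up to time_m, return None.
        known = [q for q in self.measurements if q.time_m <= time_m]
        return known[-1].level if known else None
-----------------------------------------------------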

The Quality Advocate (Test supervisor / manager / lead)

Being a Quality Advocate isn't / shouldn't be about auditing the work of your fellow testers. The work of a Quality Advocate should be to check the current testing procedures vis-a-vis the product/prototype under test. When you have testers testing under you / for you, the supervisor's role is not to check whether the tester tested the product properly -- the tester has already been coached to be critical-thinking and exploratory, and assuming his/her evaluative report shows exactly that, the sup's job is rather to see if there's an expanded way to model the product under test, to give a more justified evaluation. What if we find there is nothing more to expand? Then either (a) you, as a supervisor, have failed in your task; or (b) your test team is above par -- excellent.  note: 'justified':  General Context: In a general sense, "justified" can mean that something is reasonable, logical, or well-founded. For example, "Her concerns were...

i came, i tested, i asked.

As a software tester / QA (Quality Advocate) i no longer use the word "no."  i always start every report with "i found this [issue, with details of its STR (steps to replicate), risk, and impact]" -- and then follow up with "how do we look at it? what do we do with it?" almost instantly, everyone in the meeting will take a pause and conveniently reach a common conclusion.  mature QA is no longer about happy path vs. negative testing. It is about how our product will fare out there in the wild.  link
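A hedged sketch of what such a report could look like as a simple data structure. The field names and the sample issue are assumptions for illustration, not the author's template; it just mirrors the shape described above: the finding, its STR, risk, impact, then the open questions put to the team.

----- sketch (Python), illustrative names only -----
from dataclasses import dataclass, field


@dataclass
class IssueReport:
    summary: str                      # "i found this ..."
    steps_to_replicate: list[str]     # STR
    risk: str
    impact: str
    open_questions: list[str] = field(default_factory=lambda: [
        "how do we look at it?",
        "what do we do with it?",
    ])


report = IssueReport(
    summary="checkout total ignores the discount code",   # hypothetical issue
    steps_to_replicate=[
        "add any item to the cart",
        "apply a valid discount code",
        "proceed to checkout and read the total",
    ],
    risk="customers may be overcharged",
    impact="refunds, support load, loss of trust",
)
print(report.summary, report.open_questions)
-----------------------------------------------------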

Personnel file ST-06


A specific elaboration of Test Phase-Space 1.2: Systems Thinking

Definition of terms and concepts  

Evaluative Testing -- poking the subject and analyzing the results: expected / unexpected -- that is true exploration

    "The combination of not quite sure the right thing and not knowing for certain what the code was going to do - but evaluating as it happens, to me that is exploring" -- exactly same with my own personal observation. that's why being clumsy sometimes is very helpful, because that clumsy data set will become part of the test input and the results are as fascinating / insightful as what one sees in the Large Haldron Collider 😇 💡 https://www.linkedin.com/posts/agw-59661220b_softwaredevelopment-softwaretesting-activity-7305868821058265090-JGjp?utm_source=social_share_send&utm_medium=member_desktop_web&rcm=ACoAADVOWcIBc7VFWTKgh6Qof566qxbgu2eqJDQ

What is the title 'Tester'?

  "Testers are not just gatekeepers." --- they are scientists, mathematicians, analysts, with their own colours, who walk the plank on a daily basis  https://www.linkedin.com/posts/agw-59661220b_softwaretesting-softwareengineering-quality-activity-7305870725913948161-O6Lr?utm_source=social_share_send&utm_medium=member_desktop_web&rcm=ACoAADVOWcIBc7VFWTKgh6Qof566qxbgu2eqJDQ

A Question on Certification/s?

if the ISTQB (or any other tester-certifying institution) can produce superstar 'testers' -- the way this or that 'scientist' did something brilliant out of Cambridge, Yale, or Oxford -- what does that say? well, for one, it would add 'brilliance and prestige' to the institution; but in hindsight it's not the institution -- it's the *people* (in our scenario: *the actual testers*) -- that add the brilliance, the sparkle, the shine of skill. Riding on a name (or certificate), without observable proof of effectiveness in the field (textbook certifications matter little), is more like resting on laurels taken from a tree than earning them through challenge. https://www.linkedin.com/posts/agw-59661220b_how-to-scare-a-tester-part-2-activity-7305875698978631680-CjT-?utm_source=social_share_send&utm_medium=member_desktop_web&rcm=ACoAADVOWcIBc7VFWTKgh6Qof566qxbgu2eqJDQ

client questioning your test results?

[link] my test documentation should always include:
a. test map: trace-referenced, logged, and updated to include all the scenario branches that were actually tested at the time of testing;
b. test evidence: video captures, screen captures, actual app outputs (images, JSONs, txt files, etc.), the app exe file -- any and all relevant artefacts that prove how the app behaved at the time of testing;
c. bug/issue reports: replete with STRs (steps to replicate) and evidence, paired against expected results;
d. the app build or version number tested;
e. the platform tested;
f. a pass/fail statement on the test report, indicating (a), (b), (c), (d), (e).
This is why, broadly speaking, all testing should include these elements: they provide thorough proof that testing was executed to properly evaluate the software under test. Any misses will be self-evident with this method of documentation.
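A sketch of that checklist as a record, assuming my own field names (not a prescribed schema): each attribute maps to one of the elements (a)-(f) above, and a simple completeness check makes any missing element self-evident.

----- sketch (Python), illustrative names only -----
from dataclasses import dataclass


@dataclass
class BugReport:
    title: str
    steps_to_replicate: list[str]     # STRs
    evidence: list[str]               # links/paths to captures
    expected_result: str
    actual_result: str


@dataclass
class TestReport:
    test_map: str                     # (a) trace-referenced scenario branches
    evidence: list[str]               # (b) videos, screenshots, outputs, exe
    bug_reports: list[BugReport]      # (c)
    build_version: str                # (d) app build or version tested
    platform: str                     # (e) platform tested
    verdict: str                      # (f) pass/fail statement over (a)-(e)

    def is_complete(self) -> bool:
        # any missing element is self-evident when every field is checked
        return all([self.test_map, self.evidence, self.build_version,
                    self.platform, self.verdict])
-----------------------------------------------------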