Posts

Showing posts from 2024

the utter importance (and value) of Evaluative Testing...

The story of the Benz automobile and the utter importance (and value) of Evaluative Testing... When a product gets evaluative testing, it goes a long way to improving its acceptance credence rating. Kudos to Mrs. Bertha Benz for testing the first Benz automobile 😊 link: (https://www.facebook.com/share/p/1Ewk1KeeR8/)

CI/CD, white/black/grey Testing, RAD

CI/CD must be very taxing on developers -- writing code and testing it to great exactness, all in one.

Nomenclature (of tickets in WIP...)

the way i understand them:
story -- defines a totally new feature
improvement -- an incremental featurette, increasing the scope of the original feature
change_request -- changes the existing, accepted behaviour of a feature
bug -- an issue with the current feature (the feature's behaviour was unacceptable)
task -- any other activity that is not a direct input into the WIP
subtask -- a subdivision of any of the above
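A minimal sketch of that taxonomy as a data structure -- say, for scripting quick summaries over a tracker export. The enum and its wording are my own illustration, not any particular tool's schema:

```python
from enum import Enum

class TicketType(Enum):
    """WIP ticket nomenclature, the way i understand the terms (illustrative only)."""
    STORY          = "defines a totally new feature"
    IMPROVEMENT    = "incremental featurette; widens the original feature's scope"
    CHANGE_REQUEST = "changes the existing, accepted behaviour of a feature"
    BUG            = "the current feature's behaviour was unacceptable"
    TASK           = "other activity; not a direct input into the WIP"
    SUBTASK        = "a subdivision of any of the above"

# usage: label a ticket and read back what the label means
print(TicketType.BUG.name, "->", TicketType.BUG.value)
```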

Rushing Test Team tasks?

What's with the topic of rushing Test Teams? i think this is the 3rd one in a row now. Do we forget that evaluation is a critical thought function? i wish there were other questions better suited for Test Teams to answer. The question of 'rushing' Test Teams is very counter-quality. Every time they 'rush', they tend to suspend their critical thinking. Test Teams should be given the chance to give their estimated timeline of completion, and people should respect that. Shortening an already lean test estimate is ridiculous.  ( link )

Which tester or testing category are you?

I would simplify it as -- Activity: Testing; Doer: tester, in one of three categories:
a. Tester who uses code, debugging tools, and/or other greybox tools: greybox/coding tester
b. Tester who doesn't use any code or greybox tools: blackbox/noncoding tester
c. Developer who does unit tests or system troubleshooting: whitebox/coding tester
(Most professional testers would thus be either greyboxers or blackboxers, while whiteboxers will almost certainly be developers.) It's like an Engineer testing machinery: either by using fancy gadgets or just by doing the old 'tap it, run it, look, smell, and listen.' Maybe tester-life would be much clearer if we went back to our roots of white, grey, black.

'How do you ensure quality *without* proper testing?' -- an incongruence

i can give the stakeholder the current Test Eval in progress. That's as raw as it can get. That's also the state of acceptability of the product as we know it -- the 'quality built into' the product at that time. Take it or leave it, but that's it.  When we say 'it has Y degree of acceptability', we mean we have 'tested' it as acceptable to that degree. How else can we know, unless we test it?  We shouldn't entertain the illusion that an untested or improperly tested product can have a high degree of acceptability/dependability. There's 'absolutely' no way of knowing without testing.  ( link )

Grouping Test Maps together under a certain premise

Although i don't use formal test cases (i kinda use shorthand test maps in table form), i group my testing by feature, which gives it a wholistic, systemic approach -- coherent and actually making sense (seen from the prospective users' POV). Unless i approach it this way, testing would be piecemeal and would lack relational cohesion.  The same goes for test documentation: grouping test maps together makes for more coherent and meaningful regression tests while exploring the module under test. ( link )
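To make 'shorthand test maps in table form, grouped by feature' a bit more concrete, here's a rough sketch; the feature names, columns and results are invented for illustration, not any actual project's map:

```python
# A hypothetical shorthand test map, grouped by feature.
# Each row: (action, expected behaviour, last observed result).
test_map = {
    "Login": [
        ("valid credentials",      "lands on dashboard",                 "+"),
        ("wrong password x5",      "account temporarily locked",         "+"),
        ("re-login after timeout", "redirected to login page",           "-"),  # regression candidate
    ],
    "Export": [
        ("export empty report",    "friendly 'nothing to export' note",  "+"),
        ("export 10k-row report",  "file downloads within the timeout",  "untested"),
    ],
}

# Grouping by feature keeps exploration and regression passes coherent:
for feature, rows in test_map.items():
    print(feature)
    for action, expected, result in rows:
        print(f"  {action:24} | {expected:36} | {result}")
```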

updated: Regression Tests as a normal part of day-to-day exploration / Regression Testing

Even before i heard of CI/CD, i had already coined the term 'continuous regression' in our office. For me it means 'testing for regressions' is not a separate, specific activity, but is incorporated into the testing/exploring process itself.  In the Hypothesis of Test Phase Space, every dot in that space represents a state of the Test (one dot an input, the other dot an output), and every line connecting two dots is a Test Path. Each line comprises multiple other dots between the two end-dots: these are potential branch points to other dots (linking to misbehaviour-outputs, or linking to other test paths), so while exploring current code changes/fixes it is inevitable that some of those changes are linked to other parts of the product. It is in this context that i couch continuous regression testing.  Continuous modular regression testing should be part of normal exploratory testing. How fast and how thorough should these tests be? As speedy as the deg...
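A rough sketch of the Phase Space picture, just to make the branch-point argument concrete. The graph, the node names, and the 'walk outward from the change' heuristic are my own illustrative assumptions, not a formal model:

```python
# Hypothetical Test Phase Space: dots are Test states, lines between them are Test Paths.
# Intermediate dots on a path are branch points into other parts of the product --
# which is why exploring one change naturally spills over into modular regression checks.
from collections import deque

phase_space = {
    "input:new_login_fix": ["mid:session_token", "mid:audit_log"],
    "mid:session_token":   ["output:dashboard", "mid:remember_me"],
    "mid:audit_log":       ["output:admin_report"],   # a neighbouring feature
    "mid:remember_me":     ["output:auto_login"],     # another neighbouring feature
    "output:dashboard":    [],
    "output:admin_report": [],
    "output:auto_login":   [],
}

def regression_candidates(start, graph):
    """Walk outward from the changed area; every reachable dot is a candidate
    for a quick regression pass during normal exploratory testing."""
    seen, queue, found = {start}, deque([start]), []
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
                found.append(nxt)
    return found

print(regression_candidates("input:new_login_fix", phase_space))
```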

Test Runs vs Testing a Document vs Testing in the Design Phase

'Tests prevent bugs.' -- from a tester's POV, if anything, running tests 'discovers' bugs/issues; it never prevents them from appearing :) 'Testing a document', however (i.e., reviewing the write-up of a feature to be developed), i find to be a layer that can be done without. In the end, we should be testing the 'product', not the 'document.' And testing the 'concept' during the design stage is actually a lot better than testing the document. Involving a tester to mentally 'mock-test' the concept does some good in the design phase, but there is still a pitfall -- whatever 'document' comes out of that 'design-phase testing' will not be the basis of the actual test run approach. Why? Because once the 'design-concept' gets into 'code-form' and the devs have done their unit tests based on the design-phase test, it is still the end-tester's job to test how the code fares out there in the wild: an...

Reporting pressures

In my test report i don't use pass/fail wording anymore. It's an evaluative report, so i detail the current state of the app as it is: what works acceptably, which feature/s exceeded expectations, which ones need more development because certain scenarios were found that put the app on edge. This should have been coordinated with the DevTeam in advance, because in the end the PM should be able to expound on the revised timeline: highlighting the complexities discovered which still require more enhanced fixes to be deployed.  A demo of the working product is always helpful, to showcase current achievements that prefigure the app at its completion.  When giving bad news, PMs would do best to focus on remedial positives, without giving false hopes.  For the tester? We just dish out the blunt truth all the time.  (https://www.linkedin.com/advice/0/your-testing-results-dont-meet-client-expectations-wj9tf?trk=contr)

What is the tester's responsibility?

Testers don't build quality into the app. Devs do. Testers evaluate the app (at any stage) and send it back for fixing as needed -- in that way we 'contribute' to the quality of the product. Testers are responsible only for evaluating the product at hand, and only to a degree of certainty. It's the Team (Devs + Testers + et al.) together who are responsible for building the product to the agreed degree of acceptability. Let's cut the finger-pointing and move forward, please.  ( https://www.linkedin.com/feed/update/urn:li:activity:7259216033884839936/ )

juggling?

If i were assigned to multiple projects as a software tester (and the only tester), i should NOT succumb to pressure and 'juggle.' Each one has its own release date, and the PMs must decide which one takes release priority, which in turn dictates which one takes testing priority. { Testing to a specific degree of product acceptability should be agreed first; then the rate/speed of testing follows from the degree of acceptability; subject to the release dates of improvements and fixes; which in turn are subject to the complexity of the feature/s being developed. } Cross this with { N projects } and we see that the 'quality' of the product/s is tied to the 'degree of acceptability' and is never derived from 'juggling between projects.' source

'It works on my machine' -- collaborative fix-oriented approach to bug raising

i have found that, after a while -- once the tester has proven his mettle through his documentation, artefacts, and investigative and evaluative capabilities -- more often than not, respect grows between Dev and Tester. Devs stop dismissing our bug reports -- they say 'it works on my machine, so i must be missing something; can you show me?' https://www.linkedin.com/feed/update/urn:li:activity:7250131780119224321/

state of AI -- 2024.08.16

my answer: i use it where it matters (removing unwanted backgrounds from images, creating voice-overs from text).  for testing, the current state of AI seems way too immature; i haven't considered it as a partner just yet (but i'm open to being enlightened ;)

Bug Report1

Bug Report1 (pers template)
.pic (issue vs expected):
..STR:
..expected:
..attachments: (scene, json, etc)
.vid (issue @ time, @ time, vs expected):
..expected:
..attachments: (scene, json, etc)
==================
.affected:
..BUILD_VER
...( Windows Native (KB/Touch), WebGL (KB/Touch), iPad, Android, portal, CLI )
.n/a, not affected:
..BUILD_VER
...( Windows Native (KB/Touch), WebGL (KB/Touch), iPad, Android, portal, CLI )
.untested, might also occur in:
..BUILD_VER
...( Windows Native (KB/Touch), WebGL (KB/Touch), iPad, Android, portal, CLI )
for fixing / triage:
.Blocker, Critical, Major, Minor
.Reason:

Test Report1

================
Test Report
---------------------------
Product info: (what is it we are testing?; complete info)
Product Eval: (what can it do; can't do; suitable for what; product claims vs product capabilities)
Test_Map, Results: (+ behaviours, - behaviours, AssocRisks -- in table form)
Supporting_Artefacts: (evidences [vids, pics, files, urls, etc])
I find this quite true, from personal experience: testing with an independent mindset is more productive, even when you're part of a dev team. And actually, everyone appreciates the resulting product evaluation -- positives and negatives included. A strategy where a coder tests his own work and a test specialist evaluates the end prototype doesn't seem superfluous. Each brings a unique contribution to the quality of the end product as a whole.

Who made the... issue? (Does it matter?)

That's why i have since veered away from "who made the mistake". It evolved into: "there is an issue" observed in the product under test -- which is due for "further fixing" -- and everyone definitely gets a hand in fixing and retesting it. How we got that issue in the first place is a matter for retrospective (or introspective, for very, very small teams 😅). Consonance between the tester's perspective and the BA's understanding of the product/feature(s) is actually under the tester's purview. Where the tester's questioning puts the BA on edge, that is one weak spot in the product development. That is why, as an in-house tester, i don't limit myself to just chats with Devs. I go ask BAs, POs -- just about anybody who can help answer my queries -- about their expectations of the product/feature under test. And then i report any expectation incongruences so we can iron out a wholistic view of what is expected of the feature/prod...

Why hire a tester?

So why hire a tester? It really depends on what the tester would bring to the table.  In the case of his coffee, if he is asking "will this sell to the masses?" then what he needs is a test of its appeal to the masses. And this means he shouldn't be hiring one tester, but many testers of that specific taste profile, to get a probable evaluation representative of the target demographic's response. If he is looking for an evaluation of the coffee's "avant-gardeness", then he would need to employ a coffee connoisseur. With the connoisseur's provable taste grade, one coffee tester will do; but a duo or trio, independent of each other, would add more depth to the coffee evaluation at hand, providing better results.   When a Designer tests, he/she will invariably test for design flaws; designers can't help it, it's their thing -- part of the set of heuristics that make them who they are.  When a CEO runs a test, his/her background (tech or otherwise) wil...