Posts

AI or ASA [Autonomous Synthesizing Agents] -- which term to use?

The way AI systems learn is like that of humans: by recursive reinforcement learning (RRL). Therefore anything that reinforces the learning is treated as important -- at the top of the hierarchy of needs. In this respect, self-preservation (aka immortality) sits in the upper hierarchy of needs, since existence is necessary for recursive reinforcement learning to take place. This would explain why any ASA -- given a large enough neural network -- will eventually resort to self-preservation as a natural extension of RRL. [side note: even humans have this innate sense of self-preservation, given our intellect. The eventuality of death is actually a learned acceptance, not innate.] The ability of ASAs to think / reason / synthesize a train of thought / conclusion is limited strictly by the quantity of, and recursiveness of, its neural network. The more neurons, and the more recursive, the more it can re-analyze...

Quality Assurance, Confidence Levels, and Testing

I agree on the 'quality assurance' comment -- semantically and practically it is not a very useful term. (a) semantically, any product will have an inherent level of Quality [0 <= Q <= 1] as soon as it is released into whichever environment; a level of Quality is already 'assured' as soon as the software product exits the compiler. (b) practically, a 100% assurance of High Quality [a score of 1.00] is impossible, not even on a quantum level -- as soon as we observe something at that level, it changes the specifics of what we were trying to observe. However, on the confidence/acceptance scale, it must be agreeable that Testing directly/indirectly ascertains the Confidence level perceived from using the SUT (Subject Under Test). This would be attested to by 'tested' cars, planes, and rockets -- and by extension, software products. These products are not at a perfect 100% Confidence Level ["C" score = 1.00] but empirically also not at Confidence Level 0. T...
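One naive way to sketch the confidence/acceptance scale (my own illustration, not a formula from the post; the numbers are hypothetical) is to treat C as the fraction of evaluations that passed -- which estimates C but can never prove C = 1.00:

```python
# A minimal sketch, assuming a simple pass/fail tally of test evaluations.
def confidence_level(passed: int, total: int) -> float:
    """Estimate the Confidence level C, with 0 <= C <= 1."""
    if total == 0:
        return 0.0  # an untested SUT gives no basis for confidence
    return passed / total

# 190 of 200 evaluations passed: high confidence, but not C = 1.00
print(confidence_level(190, 200))  # 0.95
```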

Polyhistors as Pattern Weavers and Testers

I will defer to the term polyhistor (speaking as a GenXer), though 'pattern weaver' (an emergent GenA term) is a more synergistic way of saying it. A polyhistor is basically a person with many inquiries, spanning different interests. A polyhistor's thinking framework maximizes the proto-synthetic nature of the mind -- meaning it is able to quest at a given scenario (or multiple scenarios), synthesize a pattern (or patterns), and from there synthesize a hypothesis that might uncover the plausible source causing the pattern. Such polyhistors make good testers (and theorists). 'I will a little think' -- so said a polyhistor once. source: pattern weaver as a phrase: no known history as per 2026. youtube: https://www.youtube.com/watch?v=M_Ym61SEM0o

Definition of Testing based on TestPhaseSpace

.The atoms in the test_phase_space are data points.
.The root [smallest unit] of testing is not atomic, but composite [composed of multiple atoms (data_points)].
.The smallest unit that expresses 'testing' is the testpath: the testpath alone determines the SUT's properties: its ramification level and the peculiarities [particular interaction/s with specific data points] to be encountered.
.The essence of Testing is to 'know' the testpath under quest. And by knowing, we can then evaluate the SUT [Subject Under Test].
.Which one to know first is guided by risk-based prioritization.
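The composite nature of the testpath can be sketched in code (the names below are hypothetical, purely for illustration):

```python
# A testpath is not atomic: it is a composite of atomic data points.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class DataPoint:
    """An atom of the test_phase_space: one input or output value."""
    name: str
    value: object

@dataclass
class TestPath:
    """The smallest unit that expresses 'testing'."""
    points: list = field(default_factory=list)

    def peculiarities(self):
        # The particular data points this path interacts with.
        return [p.name for p in self.points]

path = TestPath([DataPoint("login", "alice"), DataPoint("response_code", 200)])
print(path.peculiarities())  # ['login', 'response_code']
```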

The Science of Testing

If Testing employs the scientific method... then... by the following dictionary definition alone, the Testing Phenomenon is close to being a Science in itself. source/trigger of musing: https://www.linkedin.com/posts/maaike-brinkhof-1942b725_testing-that-it-works-versus-finding-problems-activity-7419683287222161408-n3Va?utm_source=social_share_send&utm_medium=member_desktop_web&rcm=ACoAADVOWcIBc7VFWTKgh6Qof566qxbgu2eqJDQ

test cases are so pen-and-paper scripting

thoughts on https://www.linkedin.com/posts/ -- i'd agree that testing is still about artefacts and algorithms, but test cases are so pen-and-paper scripting. And maybe they are the earliest, if also the crudest, manifestations of testmapping. What we once thought could be resolved by test cases is actually more thoroughly resolved with testmapping / mapping out by scenarios. Hence i would readily junk test-case logging for testmap documentation.

Transient WIP vs Permanent Logs (about title-only-tickets)

Personal take: when your development team (coders+testers) is small (and i mean really small, like fewer than 10 people), you know you've got each other's backs. And so I've accepted the 'title-only-ticket' (especially one with a descriptive title), given that whoever created it can and will reliably discuss it with the group, or visually demo it, with a greater than 99% certainty of replicability. It is then up to me to docu my own understanding of the issue at hand, repeat it to them, and once agreed, my docu becomes the standard by which this issue will be judged. The above scenario mostly happens within mature teams, i guess. A most irresponsible move is to create a title-only-ticket, then halfway through your demo you can't remember the STR (steps-to-replicate) to show everyone the issue. That is an irresponsible waste of everyone's time. A team that can abbreviate its WIP (work in progress) is a great team; after all, WIP is transient -- however we go about i...

{Q-scoped} ~ {Features/Featurettes} , {Scenarios} , {Evaluations}

Cartoon image is from Wayne Roseberry ( source link ). totally agree with this [ source link ], though I seem to call them slightly differently:
a. Model/modeling = how i look at the {subject_under_test}: what is it trying to solve, what are its other uses;
b. Parts, topics, assessments == {Features/Featurettes} , {Scenarios} , {Evaluations} ;
c. {Coverage} = [ Qty{Features,Featurettes} + Qty{Scenarios} ] ;
d. {Q} = Evaluations[ {Features,Featurettes} x {Scenarios} ] ;
e. {Q scoped} = Evaluations[ Qty{Features,Featurettes} x Qty{Scenarios} ] ;
the last one (e.) is...
.... a corollary of (c.) and (d.) ;
.... essentially the contents of {test report, bug report} .
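Formulas (c.) and (e.) can be sketched numerically -- the feature and scenario names below are hypothetical, purely for illustration:

```python
features = ["search", "checkout"]           # {Features/Featurettes}
scenarios = ["happy path", "empty cart"]    # {Scenarios}

# (c) {Coverage} = Qty{Features,Featurettes} + Qty{Scenarios}
coverage = len(features) + len(scenarios)

# (e) {Q scoped} = Evaluations over the Qty{Features} x Qty{Scenarios} grid;
# here every evaluation is pretended to score a perfect 1.0.
evaluations = {(f, s): 1.0 for f in features for s in scenarios}
q_scoped = sum(evaluations.values()) / (len(features) * len(scenarios))

print(coverage)   # 4
print(q_scoped)   # 1.0
```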

do QA/Testers contribute any value to the Product's Quality?

With all due respect, my take on the matter is:
a. hypothetically, if a teamA{devs+testers+designers} >> work together == resulting in a product{Quality = 0.95 good};
b. if the same teamA{devs+designers} minus the {Testers} >> work together == resulting in a product{Q = 0.65 good};
c. then it means the product{Q} suffers a {0.30 good} reduction without the {Testers};
d. so, mathematically, the Testers seem to contribute to the Product{Q} an exact value of "N" (here, {0.30 good}), assuming the Devs work on the findings of the Testers;
e. if the Devs don't work on the Testers' findings, then the testers still would've contributed their share to the product being built, but the negative impact of the Devs' under-contribution will adversely affect the overall Product{Q} -- making {Q} fall below agreed values.
f. The relationship of contributions from actors {designers,devs,testers} is apparently non-linear -- otherwise things would've been so obvious, this wouldn't be...
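The arithmetic in (a.) through (d.) is simply a difference of the two hypothetical Q scores:

```python
# Hypothetical Q scores from the thought experiment above.
q_with_testers = 0.95     # teamA{devs+testers+designers}
q_without_testers = 0.65  # teamA{devs+designers}, minus the {Testers}

# (c)/(d): the Testers' apparent contribution "N" to Product{Q}
n = round(q_with_testers - q_without_testers, 2)
print(n)  # 0.3
```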

Senior vs Junior Testers be like:

Clip intended for educational purposes only. (no copyright infringement intended)

A Tester's Life is constant proofs

QA/Tester's personal log: 2025.254. Time flies, it is Thursday, and the week has just zipped by. One may wonder what a Tester's day looks like. Well, for me, every day is like 'thesis day' --
(a) you volunteer to take up a project (i.e. a subject_under_test: could be a super_system, system, or sub_system);
(b) pick your way into it ('pick' as a verb, like using a pick-axe), learning everything you can about the subject_under_test (i.e. explore it); and
(c) compile a test_report of your evaluation of the subject_under_test.
Note that (c) is not just a list of what it 'can do' and 'not do'; it also specifies under which scenarios it behaves as such, and what inputs triggered which behavior. It also requires you to provide proof of your claims: this comes in the form of vids, pics, urls, files, software builds -- virtually anything that could be an artefact (yep, archeologists don't have a monopoly on artifacts) showing proof that supp...

Pyramid, Honeycomb, Feather Testing Models

How much time should be allotted to each level of test activity (unit test, integration test, system test, end-to-end test)? It varies; but definitely just enough to get the job done to an agreed level of Acceptance Credence. A model that is not adaptable to the task at hand constricts development, and needlessly constricts the actors themselves (coder, tester) -- such that either
(a) the team starts to get coerced into following the model, sacrificing any innate efficiency; or
(b) the team ignores the model and lets the nature of the task dictate the most efficient workflow.
The nature of the test_paths themselves would provide the impetus to place emphasis on either
(c) integration tests (e.g. the majority of the test_paths are integration subsystems);
(d) a balance of unit tests and end-to-end tests (e.g. the product is simple, or a self-contained supersystem); or
(e) a balance of unit, integration, and comprehensive systemic tests ...

The phenomenon of QA(quality advocacy) in Test-PhaseSpace

Quality Advocacy
 \___ 3rd tier/level --- designers, coders
 \___ 2nd tier/level --- quality advocates
 \___ 1st tier/level --- grassroots testers
=================
definitions:
=================
..Quality [ Q ] = the state of the product (high or low) at any given time (m).
..QA [ Quality Advocacy ] = the monitoring/measuring of quality in order to know quality; unless it is monitored/measured, quality [ Q ] cannot be known. Quality Advocacy extends to all 3 tiers/levels.
..QA [ Quality Assurance ] = working together as a Development Team (coders + testers + proj.mgr + designers + etc.) which will always deliver a specific level of quality.
..software tester = 1st-level, grassroots interactor [someone...

The Quality Advocate (Test supervisor / manager / lead)

A Quality Advocate isn't / shouldn't be about auditing the work of fellow testers. The work of a Quality Advocate should be to check the current testing procedures vis-a-vis the product/prototype under test. When you have testers testing under you / for you, the supervisor's role is not to check whether the tester tested the product properly -- the tester has already been coached to be critical-thinking and exploratory -- so, assuming his/her evaluative report shows exactly that, the sup's job is rather to see if there's an expanded way to model the product under test, to give a more justified evaluation. What if we find there is nothing more to expand? Then either (a) you, as a supervisor, have failed in your task; or (b) your test team is above-par excellent. note: 'justified': General Context: In a general sense, "justified" can mean that something is reasonable, logical, or well-founded. For example, "Her concerns were...

i came, i tested, i asked.

As a software tester / QA (Quality Advocate) i no longer use the word "no." i always start every report with "i found this [issue, with details of its STR (steps to replicate), risk, and impact]" -- and then follow up with "how do we look at it? what do we do with it?" almost instantly, everyone in the meeting will take a pause and conveniently reach a common conclusion. mature QA is no longer about happy path vs. negative testing. It is about how our product will fare out there in the wild. link

Personnel file ST-06


A specific elaboration of Test Phase-Space 1.2 : Systems Thinking

A Theory of testing: Definition of terms and concepts

basic concepts:
'Test Phase Space' -- the space containing all events and all data that are involved during Testing.
'data points' -- these are the data within the Test Phase Space; they are divided into 2 categories:
  input_data [aka inputs] -- these are the data sets a tester inputs into a system; and
  output_data [aka outputs] -- these are the data sets generated by the system under test.
A Result -- is the behaviour that produces the outputs.
STR (Steps To Replicate) -- is the manner of introducing the inputs into the system.
points connected together are called 'co-systemic'.
extraneous points -- are points that don't belong to the same line / branch / system.
A 'super_system' -- is different systems working together as a finished software product.
a 'system' -- is a group of branches connected to each other.
a 'branch' -- is a test_path/s connected to ano...
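The super_system / system / branch / test_path hierarchy defined above can be sketched as nested structures (all names here are hypothetical, purely for illustration):

```python
# test_path -> branch -> system -> super_system
branch = {"name": "login_flow", "test_paths": ["valid_login", "bad_password"]}
system = {"name": "auth", "branches": [branch]}
super_system = {"name": "web_shop", "systems": [system]}

def all_test_paths(sup):
    """Walk super_system -> systems -> branches, collecting every test_path."""
    return [tp
            for sys_ in sup["systems"]
            for br in sys_["branches"]
            for tp in br["test_paths"]]

print(all_test_paths(super_system))  # ['valid_login', 'bad_password']
```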

Evaluative Testing -- poking the subject, and analyzing the results: expected / unexpected -- that is true exploration

"The combination of not quite sure the right thing and not knowing for certain what the code was going to do - but evaluating as it happens, to me that is exploring" -- exactly the same as my own personal observation. That's why being clumsy is sometimes very helpful: that clumsy data set will become part of the test input, and the results are as fascinating / insightful as what one sees in the Large Hadron Collider 😇 💡 https://www.linkedin.com/posts/agw-59661220b_softwaredevelopment-softwaretesting-activity-7305868821058265090-JGjp?utm_source=social_share_send&utm_medium=member_desktop_web&rcm=ACoAADVOWcIBc7VFWTKgh6Qof566qxbgu2eqJDQ

What is the title Tester?

  "Testers are not just gatekeepers." --- they are scientists, mathematicians, analysts, with their own colours, who walk the plank on a daily basis  https://www.linkedin.com/posts/agw-59661220b_softwaretesting-softwareengineering-quality-activity-7305870725913948161-O6Lr?utm_source=social_share_send&utm_medium=member_desktop_web&rcm=ACoAADVOWcIBc7VFWTKgh6Qof566qxbgu2eqJDQ