Stop Testing Now! Ronald van Loon Reveals the 5 Secrets to Ditching QA Forever

Antriksh Tewari
2/2/2026 · 5-10 min read
Ditch QA forever! Ronald van Loon shares 5 secrets to stop testing now. Learn proven ways to streamline your software development process.

The Illusion of Infinite Testing: Why Current QA Models Are Broken

The software industry is drowning in its own success. As delivery cadence accelerates, traditional Quality Assurance (QA) models, built around lengthy, end-of-cycle inspection phases, are buckling under the pressure. These final gatekeeping stages, often perceived as necessary evils, have become bottlenecks. The escalating cost—not just in dedicated personnel hours but in delayed market entry—is staggering. More critically, the fundamental logic is flawed: attempting to bolt quality onto a finished product is inefficient at best and catastrophic at worst. Bugs found during late-stage testing are exponentially more expensive to fix because they often require backtracking through multiple layers of architecture and code committed weeks prior. The industry consensus is shifting rapidly: the focus must pivot from reactive defect detection to proactive defect prevention. This fundamental change requires not just process refinement, but a complete overhaul of how we perceive responsibility for quality, a transformation being championed by thought leaders like @Ronald_vanLoon, who outlined five radical secrets to effectively ditching QA as a separate function.

This breakage is symptomatic of a larger misalignment. We’ve treated testing as a necessary tax rather than an inherent component of building. When QA teams only see the compiled artifact, they are merely reviewing the symptoms of poor development practices upstream. The inherent limitations of catching issues late mean that any significant architectural flaw or missed requirement only surfaces when the entire team is under intense deadline pressure, leading to shortcuts and technical debt that perpetuate the cycle. The industry demands agility, yet clings to a waterfall-style safety net that actively slows down true speed.

The realization is dawning that if the QA process consumes 30% of the timeline, the pipeline isn't fast; it's merely delaying the inevitable bottleneck. Moving quality upstream transforms this bottleneck into a continuous stream, embedding quality checks so early that the very concept of a standalone "QA phase" begins to dissolve.

Secret 1: Shifting Left to the Developer Desk

The first, and perhaps most crucial, step in dissolving traditional QA is radical empowerment of the developer. Quality must become the developer’s primary output, not something handed off for validation. This means emphasizing unit testing and integration testing as the foundational layer of defense, owned and maintained meticulously by the engineer writing the code. If a developer cannot demonstrate robust coverage and successful integration tests, the code is simply not ready for review, let alone deployment.

Test-Driven Development (TDD) serves as the philosophical backbone for this shift. TDD forces the developer to define what "done" looks like before writing the functional code, inherently baking in testability and fulfilling requirements from the ground up. This discipline ensures that quality is present before the code even leaves the developer’s local environment.
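
To make the discipline concrete, here is a minimal, hypothetical TDD cycle in Python with pytest; the apply_discount function and the SAVE10 code are invented for the example. The tests come first and fail, and only then is just enough production code written to make them pass.

```python
# One-file sketch of a single TDD cycle, runnable with `pytest`.
# The function `apply_discount` and the code "SAVE10" are invented for the example.
import pytest


# Step 1 (red): the tests are written first and define what "done" means.
def test_ten_percent_discount_is_applied():
    assert apply_discount(total=100.0, code="SAVE10") == 90.0


def test_unknown_code_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(total=100.0, code="BOGUS")


# Step 2 (green): only now is the minimal implementation added to make them pass.
DISCOUNTS = {"SAVE10": 0.10}


def apply_discount(total: float, code: str) -> float:
    """Return the total after applying a named discount code."""
    if code not in DISCOUNTS:
        raise ValueError(f"unknown discount code: {code}")
    return round(total * (1 - DISCOUNTS[code]), 2)
```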

Furthermore, automation must integrate seamlessly into the developer workflow. Static analysis tools are no longer optional extras; they must be integrated directly into the Integrated Development Environment (IDE) and the Continuous Integration (CI) pipeline. These tools instantly catch stylistic inconsistencies, common pitfalls, and simple logic errors, preventing trivial bugs from ever polluting the shared repository.
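
As an illustration of what these tools do, the following toy check flags bare except: clauses, one of the common pitfalls real linters such as flake8 or pylint catch automatically; in practice you would wire the off-the-shelf tools into the IDE and CI rather than writing your own.

```python
"""Toy static-analysis check: flag bare `except:` clauses. Purely illustrative --
real linters (flake8, pylint, etc.) perform checks like this out of the box."""
import ast
import sys


def find_bare_excepts(source: str, filename: str = "<input>") -> list[str]:
    """Return a human-readable finding for every bare `except:` in the source."""
    findings = []
    for node in ast.walk(ast.parse(source, filename)):
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append(f"{filename}:{node.lineno}: bare 'except:' hides real errors")
    return findings


if __name__ == "__main__":
    # Lint every file passed on the command line; a non-zero exit fails the CI step.
    problems = []
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as handle:
            problems += find_bare_excepts(handle.read(), path)
    print("\n".join(problems) or "no findings")
    sys.exit(1 if problems else 0)
```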

The overarching principle here is treating testability as a non-functional requirement equivalent to performance or security. If a feature is inherently difficult to test—if it lacks clear hooks, mocks, or integration points—it signals a design flaw, not a testing challenge.
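
A sketch of what that looks like in practice, with hypothetical names: the payment gateway is injected as a parameter, giving the test a clear hook to substitute a fake for the real service.

```python
from typing import Protocol


class PaymentGateway(Protocol):
    """Seam injected into the business logic so a fake can replace the real API in tests."""
    def charge(self, amount_cents: int) -> bool: ...


def checkout(cart_total_cents: int, gateway: PaymentGateway) -> str:
    """Charge the customer and return an order status."""
    if cart_total_cents <= 0:
        return "empty-cart"
    return "paid" if gateway.charge(cart_total_cents) else "payment-failed"


# In a test, a trivial fake stands in for the real gateway -- no network needed.
class AlwaysApproves:
    def charge(self, amount_cents: int) -> bool:
        return True


def test_checkout_marks_order_paid():
    assert checkout(cart_total_cents=4200, gateway=AlwaysApproves()) == "paid"
```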

Secret 2: The Code Quality Gate: Mandatory CI/CD Checks

Once code leaves the developer desk, the automated pipeline must act as an uncompromising sentinel. This is where the Mandatory Code Quality Gate becomes non-negotiable. Before any commit can be merged or considered for deployment, it must clear a series of automated thresholds.

These thresholds must be quantifiable and objective: minimum code coverage percentages, successful vulnerability scan scores, static analysis adherence, and successful execution of the entire integration suite. If any threshold is breached, the build fails immediately. There is no option to "push through" or grant exceptions without high-level sign-off, which itself should be a rare event.
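
One possible shape for such a gate, sketched in Python with illustrative tool choices (pytest, coverage.py, pip-audit) and an arbitrary 80% threshold; any step failing halts the build.

```python
#!/usr/bin/env python3
"""Hypothetical CI quality gate. The tools and the 80% coverage floor are
illustrative defaults, not prescriptions from the original post."""
import subprocess
import sys

GATES = [
    ["coverage", "run", "-m", "pytest"],        # the full test suite must pass
    ["coverage", "report", "--fail-under=80"],  # minimum line-coverage threshold
    ["pip-audit"],                              # known-vulnerability scan
]


def main() -> int:
    for cmd in GATES:
        print(f"gate: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            print("quality gate FAILED -- build halted")
            return 1
    print("all quality gates passed")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```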

This automated gating system eradicates the pervasive "works on my machine" syndrome. By automating environment provisioning and configuration verification within the pipeline itself, the system guarantees that the artifact tested is exactly the artifact that will run in staging or production.
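
A minimal sketch of one way to enforce that guarantee, assuming a simple digest manifest: the pipeline records the hash of the artifact it tested, and deployment refuses anything whose hash differs.

```python
"""Sketch of "what was tested is what ships": record the tested artifact's
digest, then refuse to deploy anything that differs. Paths and the manifest
format are hypothetical."""
import hashlib
import json
from pathlib import Path


def digest(artifact: Path) -> str:
    """SHA-256 of the built artifact (container image tarball, wheel, jar, ...)."""
    return hashlib.sha256(artifact.read_bytes()).hexdigest()


def record_tested_artifact(artifact: Path, manifest: Path) -> None:
    """Called by the pipeline right after the artifact passes all gates."""
    manifest.write_text(json.dumps({"artifact": artifact.name, "sha256": digest(artifact)}))


def verify_before_deploy(artifact: Path, manifest: Path) -> None:
    """Called by the deploy step; raises if the artifact was swapped or rebuilt."""
    expected = json.loads(manifest.read_text())["sha256"]
    if digest(artifact) != expected:
        raise RuntimeError("artifact differs from the one that passed the pipeline")
```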

The power lies in the principle of "failing fast." By halting progress the moment a quality deviation is detected—whether it’s a missed test case or a security vulnerability—the cost of rework is minimized. The context for the fix is fresh, and the entire team sees the failure instantly, reinforcing accountability.

Secret 3: Embracing Exploratory "Guardrail" Testing

If developers are handling unit and integration assurance, and CI handles foundational verification, what is the purpose of human testing? The answer is to redefine manual testing from repetitive drudgery to high-leverage strategic input. Manual effort should no longer be spent re-running regression scripts that automation already handles flawlessly. Instead, it transforms into high-value, strategic exploratory testing.

This involves utilizing human intuition, creativity, and domain expertise to stress-test complex, high-risk user journeys that automation struggles to model holistically. Session-based testing techniques allow testers to focus intently on specific areas of high uncertainty for a defined period, maximizing intellectual input over sheer execution volume.

Crucially, the focus shifts to establishing automated safety nets in the live environment. These are not comprehensive test suites, but lightweight, continuous smoke tests or health checks running constantly in production. They verify core functionality (e.g., "Can a user log in? Can they check out?") without subjecting the system to deep functional validation.
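
A minimal sketch of such a guardrail, with placeholder endpoints: each check simply confirms that a core journey's endpoint answers promptly, and a real setup would run this from a scheduler or synthetic-monitoring service and alert on repeated failures.

```python
"""Lightweight production smoke test. Endpoints and the timeout are
placeholders; this only verifies core journeys respond, not full functionality."""
import sys
import urllib.request

CHECKS = {
    "login page": "https://example.com/login",
    "checkout API": "https://example.com/api/checkout/health",
}


def check(name: str, url: str, timeout: float = 5.0) -> bool:
    """A check passes if the endpoint answers 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            ok = response.status == 200
    except OSError:
        ok = False
    print(f"{'PASS' if ok else 'FAIL'}: {name}")
    return ok


if __name__ == "__main__":
    results = [check(name, url) for name, url in CHECKS.items()]
    sys.exit(0 if all(results) else 1)
```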

This leads to a critical philosophical distinction: verification versus validation. Verification asks, "Did we build the product right?" (Automation handles this). Validation asks, "Did we build the right product?" (Exploratory testing and product input handle this).

Secret 4: The Product Owner as the Primary Validator

For too long, the Product Owner (PO) or Business Analyst (BA) has delivered requirements, only to wait weeks for QA feedback on whether the resulting implementation actually met the intent. Secret 4 demands that the PO step up as the primary validator of the feature before it merges into the main branch.

This handover is facilitated by tools that bridge the gap between business language and executable code, namely Behavior-Driven Development (BDD) frameworks like Cucumber or SpecFlow. Business requirements are written directly as executable specifications—scenarios written in plain language that are automatically translated into test code.

When the developer implements the feature, they build the automated tests directly against these BDD scenarios. The PO then reviews and validates the feature by executing these specifications themselves, often directly in a pre-merge environment. This automates functional sign-off entirely. The PO confirms the implementation adheres to the acceptance criteria as code, removing the dedicated QA cycle for functional acceptance.
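
To show the shape of that mapping, here is a hedged sketch of how one plain-language scenario might bind to executable steps; in practice Cucumber, SpecFlow, or pytest-bdd parse the feature file and match each line to a step function automatically, and every name below is invented for illustration.

```python
"""How one Gherkin scenario could bind to executable steps (hand-wired here
for brevity; a BDD framework would do the matching). The scenario is what the
PO writes:

  Scenario: Returning customer gets free shipping
    Given a customer with 3 previous orders
    When they place an order worth 50 euros
    Then shipping costs 0 euros
"""


# Hypothetical domain code under test.
def shipping_cost(previous_orders: int, order_value: float) -> float:
    return 0.0 if previous_orders >= 3 else 4.99


# Step implementations the developer writes against the PO's scenario.
def given_a_returning_customer() -> dict:
    return {"previous_orders": 3}


def when_they_place_an_order(customer: dict, value: float) -> float:
    return shipping_cost(customer["previous_orders"], value)


def test_returning_customer_gets_free_shipping():
    customer = given_a_returning_customer()
    cost = when_they_place_an_order(customer, value=50.0)
    assert cost == 0.0  # Then shipping costs 0 euros
```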

Secret 5: Observability Over Exhaustive Pre-Release Scrutiny

The final secret abandons the pursuit of exhaustive pre-release scenario testing in favor of systemic observability after deployment. If you are still relying heavily on staging environments to mimic production load, you are fundamentally behind the curve. The focus must shift to designing systems that instantly signal anomalies when they encounter real-world load and complexity in production.

This requires rigorous implementation of the three pillars of observability: robust logging, comprehensive monitoring, and distributed tracing. These tools don't just tell you if the system is down; they tell you why it's slow, where the latency spike originated, and which specific user journey is affected under load.
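
As a stdlib-only sketch of the raw material those pillars consume, the snippet below emits a structured log line carrying a correlation id and a latency measurement; a real system would route this through OpenTelemetry or a similar stack to a tracing and metrics backend.

```python
"""Structured logging with a correlation id and latency -- the raw signal
behind logging, monitoring, and tracing. Field names are illustrative."""
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("checkout")


def handle_checkout(user_id: str) -> None:
    trace_id = uuid.uuid4().hex        # ties every log line from this request together
    started = time.perf_counter()
    outcome = "error"
    try:
        time.sleep(0.05)               # placeholder for the real work
        outcome = "ok"
    finally:
        # One machine-readable event per request: who, what, how long, and how it ended.
        log.info(json.dumps({
            "event": "checkout.completed",
            "trace_id": trace_id,
            "user_id": user_id,
            "outcome": outcome,
            "latency_ms": round((time.perf_counter() - started) * 1000, 1),
        }))


if __name__ == "__main__":
    handle_checkout("user-42")
```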

To mitigate the risk inherent in this strategy, modern deployment practices are essential. Utilizing feature flags allows functionality to be deployed "dark," enabling engineers to toggle features on or off instantly without redeployment. Canary releases ensure that new code is exposed only to a small subset of users initially. These practices limit the "blast radius" of any inevitable production issue, allowing for immediate rollback or targeted hotfixes, rendering exhaustive pre-release scenario coverage nearly redundant.
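
A hedged sketch of the mechanism behind both ideas, with invented flag names and percentages: a deterministic hash of the user id decides whether the dark-deployed path is shown, so the canary cohort stays stable and the percentage can be changed without redeploying.

```python
"""Deterministic percentage rollout -- the mechanism behind dark launches and
canary releases. Flag names and percentages are invented; real teams usually
manage them in a flag service so they can change without a deploy."""
import hashlib

# Flag configuration that would normally live in a flag service, not in code.
ROLLOUT_PERCENT = {"new_checkout_flow": 5}  # canary: 5% of users


def is_enabled(flag: str, user_id: str) -> bool:
    """Same user always gets the same answer, so the canary cohort is stable."""
    percent = ROLLOUT_PERCENT.get(flag, 0)
    bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < percent


# Call site: the new code path is deployed "dark" but shown only to the canary cohort.
def checkout(user_id: str) -> str:
    if is_enabled("new_checkout_flow", user_id):
        return "new checkout flow"
    return "existing checkout flow"
```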

The ultimate realization: if your system is designed such that you cannot see its behavior under stress, you are flying blind. Observability is the replacement for exhaustive pre-release guessing games.

The New Role of the Quality Professional

The secrets outlined by @Ronald_vanLoon fundamentally dismantle the traditional separation between "development" and "testing." This transformation does not eliminate the need for quality experts; it elevates them. Quality Assurance professionals evolve into Quality Engineers or Quality Coaches.

Their new mandate shifts from manual execution to building the infrastructure of quality itself. They become the architects of the CI/CD quality gates, the coaches who train developers on writing effective integration tests, and the designers of the organization’s risk models. Their goal is no longer to catch defects, but to design an ecosystem where defects are architecturally impossible or immediately self-evident. The ultimate objective, therefore, is to make the act of "testing" as a distinct phase obsolete by ensuring quality is built intrinsically into every line of code and every step of the pipeline.


Source: Ronald van Loon via X (@Ronald_vanLoon) - Link to Original Post


This report is based on the digital updates shared on X. We've synthesized the core insights to keep you ahead of the curve.
