Niwot Ridge Resources


Pseudo-science and the Art of Software Methods

Ascertaining success and applicability

The structure of this checklist is taken directly from Scientific American's essay on detecting scientific baloney.

How reliable is the source of the claim?

Self-pronounced experts often appear credible at first glance, but when examined more closely, the facts and figures they cite are distorted, taken out of context, or occasionally even fabricated.

In many instances the statistics used to support the claims are weak or poorly formed. Relying on surveys, small population samples, classroom experiments, or, worse, anecdotal evidence, the expert extends personal experience to a larger population.
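Why small samples make for weak claims can be made concrete. The sketch below is illustrative only: the five "productivity gain" figures and the `mean_ci95` helper are invented for this example, not drawn from any study cited here. It computes an approximate 95% confidence interval for a sample mean using the normal approximation.

```python
import math

def mean_ci95(values):
    """Approximate 95% confidence interval for the mean
    (normal approximation with the sample standard deviation)."""
    n = len(values)
    mean = sum(values) / n
    # Sample variance with Bessel's correction (n - 1)
    var = sum((x - mean) ** 2 for x in values) / (n - 1)
    half_width = 1.96 * math.sqrt(var / n)
    return mean - half_width, mean + half_width

# A hypothetical "classroom experiment": productivity gains (%)
# reported by five teams that tried a new method
small_sample = [12, 35, 8, 40, 15]
lo, hi = mean_ci95(small_sample)
# → roughly (9.3, 34.7): an interval far too wide to support
#   a precise claim such as "the method improves productivity by 22%"
```

With only five data points the interval spans some 25 percentage points, which is exactly why extrapolating from a small sample to a whole population of projects is statistically unsound.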

Does this source often make similar claims?

Self-pronounced experts have a habit of going well beyond the facts and generalizing the claim to a larger population of problems or domains. Many proponents of development methods make claims that cannot be substantiated within a scientific framework. This is the nature of early development in the engineering world. Of course, some great thinkers do frequently go beyond the data in their creative speculations.

Have the claims been verified by another source?

Typically, self-pronounced experts make statements that are unverified, or verified only by a source within their own belief circle, or whose conclusions are based primarily on anecdotal information.

We must ask: who is checking the claims, and who is checking the checkers? Outside verification is as crucial to good methodology development as it is to good science.

How does the claim fit with what we know about how the world works?

Any specific claim must be placed into a larger context to see how it fits. When people claim that a specific method results in significant benefits, dramatic changes in an outcome, and so on, they usually do not present the specific context in which their methodology was applied.

Such a claim is typically not supported by quantitative statistics either. There may be qualitative data, but this is likely to be biased by the experimental method as well as by the underlying population from which the sample statistics were drawn.

Has anyone gone out of the way to disprove the claim, or has only supportive evidence been sought?

This is the confirmation bias: the tendency to seek confirmatory evidence and to reject or ignore disconfirmatory evidence. The confirmation bias is powerful, pervasive, and almost impossible to avoid.

It is why the methods of science that emphasize checking and rechecking, verification and replication, and especially attempts to falsify a claim, are critical.

Does the preponderance of evidence point to the claimant's conclusion or to a different one?

Evidence is the basis of all scientific theory confirmation. The problem is that evidence alone is necessary but not sufficient. The evidence must somehow be "predicted" by the theory, fit the theoretical model, or otherwise participate in the theory in a supportive manner.

Is the claimant employing the accepted rules of reason and tools of research, or have these been abandoned in favor of others that lead to the desired conclusion?

Unique and innovative ways of conducting research, processing data, and "conjecturing" about the results are not scientifically sound. In almost every discipline there are accepted mechanisms for conducting research. One of the first courses taken in graduate school is quantitative methods; this course sets the ground rules for conducting research in the field.

Is the claimant providing an explanation for the observed phenomena or merely denying the existing explanation?

This is a classic debate strategy: criticize your opponent and never affirm your own position, thereby avoiding criticism.

If the claimant proffers a new explanation, does it account for as many phenomena as the old explanation did?

This concept is usually lost on "innovative" researchers. A new theory must also explain previous results; without this bridge to past results, it has no foundation for acceptance.

Do the claimant's personal beliefs and biases drive the conclusions, or vice versa?

All claimants hold social, political, and ideological beliefs that could potentially slant their interpretations of the data. But how do those biases and beliefs affect their research in practice? Usually such biases and beliefs are rooted out during peer review, or the paper or book is rejected.
