
Occam's razor/Ockham's razor

n. Occam's razor is a logical principle attributed to the medieval philosopher William of Ockham (also spelled Occam). The principle states that one should not make more assumptions than the minimum needed. Also called the principle of parsimony, it underlies scientific modeling and theory building. The principle further admonishes one to choose the simplest of a set of otherwise equivalent models of a given phenomenon. In any given model, Occam's razor helps us to “shave off” those concepts, variables, or constructs that are not essential to explain the phenomenon. Doing so makes the model easier to develop and reduces the chance of introducing inconsistencies, ambiguities, and redundancies.

The principle is considered essential for model building because of what is known as the “underdetermination of theories by data”: for any given set of observations, there is always an infinite number of possible models that explain the same data. This is because a model normally represents an infinite number of possible cases, of which the observed cases are only a finite subset. The unobserved cases are inferred by postulating general rules that cover both actual and potential observations.

For example, through two data points in a diagram one can always draw a straight line and induce that all further observations will lie on that line. However, one could also draw an infinite variety of far more complicated curves passing through those same two points, and these curves would fit the empirical data just as well. Only Occam's razor would, in this case, guide one to choose the “straight” (i.e., linear) relation as the best candidate model. Similar reasoning applies to n data points lying in any kind of distribution.
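The two-point example above can be sketched in a few lines of Python. The particular models chosen here (y = x and y = x³) are illustrative assumptions, not taken from the text; the point is only that both fit the observed data exactly yet diverge on unobserved cases, so parsimony alone favors the linear one.

```python
# Two observations: both models below pass through them exactly.
points = [(0.0, 0.0), (1.0, 1.0)]

def linear(x):
    # Simplest candidate model: y = x (a linear relation)
    return x

def cubic(x):
    # A more complicated curve fitting the same two points: y = x**3
    return x ** 3

# Both models fit the empirical data equally well...
for x, y in points:
    assert linear(x) == y and cubic(x) == y

# ...but they make different predictions for unobserved cases:
print(linear(2.0), cubic(2.0))  # 2.0 vs 8.0
```

Nothing in the two observations themselves can decide between the models; Occam's razor breaks the tie in favor of the one with the fewest extra assumptions.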

Occam's razor is especially important for universal models, such as those developed in general systems theory, mathematics, or philosophy, because there the subject domain is of an unlimited complexity. If one starts with overly complicated foundations for a theory that potentially encompasses the universe, the chance of producing any manageable model is greatly reduced. Moreover, the principle is sometimes the only remaining guideline when entering domains of such a high level of abstraction that no concrete tests or observations can decide between rival models. In mathematical modeling of systems, 
the principle can be made more concrete in the form of the principle of uncertainty maximization: from your data, induce that model which minimizes the number of additional assumptions. – TJM
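One way to illustrate the link between uncertainty maximization and minimal assumptions is Shannon entropy: among all distributions consistent with what the data tell us, the one with maximum entropy encodes no further assumptions. The distributions below are hypothetical examples chosen for this sketch, not drawn from the text.

```python
import math

def entropy(p):
    # Shannon entropy in bits: a measure of the uncertainty a model retains
    return -sum(q * math.log2(q) for q in p if q > 0)

# Suppose the only datum is that an outcome has four possible values.
uniform = [0.25, 0.25, 0.25, 0.25]   # assumes nothing beyond the support
peaked = [0.7, 0.1, 0.1, 0.1]        # smuggles in an extra assumption about which value is likely

print(entropy(uniform))  # 2.0 bits, the maximum for four outcomes
print(entropy(peaked))   # strictly less than 2.0 bits
```

The uniform distribution maximizes the remaining uncertainty, which is exactly what it means to induce the model that minimizes the number of additional assumptions.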