EXTENSIONISM

Conman (1958) coined the term extensionism and was the first to apply BNT to data trends.

Extensionism is the application of the BNT Laws to the extension of data. Data are often noisy, inconsistent, and sparse across the range of interest. Many statistical techniques have been developed to help the practitioner analyze and then extend a set of data, yet many realize that such techniques may produce results that are sub-optimum for the investigator's purposes. BNT provides a rational framework within which to optimize the expected outcome.

Law II provides for extrapolation of any subset of the data without regard to the rest of the data. This powerful concept frees the practitioner from having to reconcile all of the data before extrapolating in the manner desired.

A simple demonstration can be made using the synthetic data shown with trend lines on the charts to the right. These examples are based on simple linear extrapolations, which may seem unimpressive to more sophisticated readers; however, they were among the first applications of BNT and are the easiest cases to use as examples. In addition, although simplistic, linear extrapolation remains the tool of choice for debate.

The overall linear trend has a very slight positive slope and a poor R-squared for the linear fit. Nevertheless, it can be said that the overall trend is positive and that, extended to a sufficiently large X value, the Y value could become quite large. Unfortunately, small trends in noisy data are rarely dramatic enough to cause alarm or secure funding.

One naturally assumes that there is a rationale for dropping inconvenient data points - but under BNT, we need only calculate and track the Browness resulting from our data cull. In this simple case, Browness can be approximated using the procedures of Conman. Conman reasoned that Browness was due partly to the fit of the data to the line and partly to the portion of the data set used. The portion due to the data fit is based on R-squared (symbolized as R2):

BrR = 1 / (R2^3 + 1)

The portion due to using only a fraction D of the data set is approximated as:

BrD = 1 / (D^4 + 1)

Conman found that for his purposes the best combination of the two values is:

Br = BrR * BrD - (0.2 * R2 * D)
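As a sketch (not part of Conman's original presentation), the three equations above can be combined in a few lines of Python, where r2 is the R-squared of the linear fit and d is the fraction of the data set retained:

```python
def browness(r2, d):
    """Approximate Browness (Br) per Conman's procedure.

    r2 -- R-squared of the linear fit, between 0 and 1
    d  -- fraction of the data set used, 0 < d <= 1
    """
    br_r = 1.0 / (r2**3 + 1.0)   # portion due to the fit of the data
    br_d = 1.0 / (d**4 + 1.0)    # portion due to the data cull
    return br_r * br_d - 0.2 * r2 * d

# Full data set (d = 1) with a very poor fit (r2 near 0):
print(browness(0.0, 1.0))  # -> 0.5, the 50% quoted for the total data case
```

Note that with the whole data set and a negligible R-squared, the formula reduces to Br = 1 * 0.5 - 0 = 0.5, matching the total-data case discussed below.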

For the total data case (the first chart), Br = 0.5, or 50%. It may be counterintuitive to the novice, but this is not Bs. In fact, if one examines the equations carefully, it can be demonstrated that true Bs (Br = 100%) cannot be attained. It can also be determined that the lowest possible Br value is 5%. Conman felt that there was no true absence of Browness in the universe - he saw a small amount of Browness in all transactions. An ongoing area of research is to determine the background Browness of the universe and whether there are any absolute facts as postulated in the Three Laws.
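Those bounds can be checked numerically. Assuming, as an illustration of my own, that R2 ranges over [0, 1] and D over (0, 1], a coarse grid search confirms that the minimum Br of 5% occurs at R2 = D = 1, while Br = 100% is approached as D shrinks but never attained:

```python
def browness(r2, d):
    """Conman's Browness approximation (see equations above)."""
    return (1.0 / (r2**3 + 1.0)) * (1.0 / (d**4 + 1.0)) - 0.2 * r2 * d

# Grid search over r2 in [0, 1] and d in (0, 1] in steps of 0.01.
values = [(browness(r2 / 100, d / 100), r2 / 100, d / 100)
          for r2 in range(0, 101) for d in range(1, 101)]

lowest = min(values)
highest = max(values)
print(lowest)   # minimum Br is 0.05, reached at r2 = 1, d = 1
print(highest)  # maximum Br stays strictly below 1.0 for any d > 0
```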

Now consider the subsets of the data presented with their respective linear fits. Subset A shows how the trend can be steepened and, for these data, actually reversed by dropping the early values. Subset B deletes the final data points, likewise reversing the trend while providing a better fit.

Using the equations above, Subset A has a Br = 68% and Subset B has a Br = 62%. Both have a higher Br value and are therefore preferred over the entire data set.

As this simple case demonstrates, by increasing Browness one can achieve various outcomes from a set of data.

 

Pages in this section

OVERALL

SUBSET_A

SUBSET_B

(c) 1982, 2009 Jorge Branche, Jr.