STATISTICS: THE LIMITING BRANCH OF MATHEMATICS WE MUST HAVE TO PROSPER.

in #math · 7 years ago

DISCLAIMER: This is a theory I proposed during the course of my study of statistics, and it should not be mistaken for an established mathematical result.

Statistics is the branch of mathematics that deals with the collection, analysis, interpretation, and presentation of data, and with making predictions from it using suitable extrapolation methods and statistical tests. As great as this branch of mathematics might be, it could also be the sole reason why humans may never deduce things to the highest degree of precision. "How is that?" you might ask. Broaden your perspective and get on board to find out.

Now, let us consider a case where we are required to estimate how many people in a given city will smoke a cigarette on a particular day, based on data from the past few days. Seasoned statisticians might say that this is no big deal: we just need to know the size of the group under study, the population of the city in this case, and then apply the required statistical tests.
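For concreteness, here is a minimal sketch in Python of the kind of textbook estimate a statistician might produce. The daily counts and city population below are entirely made up, since no real data is given in this post; the method shown is a plain average proportion with a normal-approximation interval.

```python
import numpy as np

# Hypothetical daily counts of smokers over the past few days
# (invented numbers, purely for illustration).
daily_smokers = np.array([11_800, 12_050, 12_300, 11_950, 12_100])
city_population = 100_000  # assumed population of the city

# Naive point estimate: the average daily proportion of smokers.
p_hat = daily_smokers.mean() / city_population

# Normal-approximation 95% interval, treating each day's count
# as one binomial draw over the whole city.
se = (p_hat * (1 - p_hat) / city_population) ** 0.5
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"Estimated proportion of smokers: {p_hat:.4f}")
print(f"Expected smokers tomorrow: {p_hat * city_population:,.0f}")
print(f"95% interval for the proportion: ({low:.4f}, {high:.4f})")
```

Everything here hinges on the assumption that every person in the city behaves like a draw from one and the same distribution, and that is exactly where the trouble starts.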

But if you look closer and try reading between the lines, you might sense that something just isn't right. The problem lies in the fact that this kind of analysis treats the class "humans" as one single homogeneous variable, so the sample is simply the class size times that one variable, and it ignores the sub-variables, the differing behaviours within the group, while performing the test.

Let us assume a large group of people smoked a cigarette for the very first time that particular day on impulse and never repeated it, and that the number of quitters was also lower than normal due to outside influences. That group distorts the data on which the test bases its prediction, and depending on the size of each sample, the error carries forward as a multiplier at every stage thereafter. Hence, the result might land very close to the actual figure in a small sample, but not once the sample grows very large. Yet the accuracy of the results would still look high, because accuracy is measured relatively: the percentage error stays small even while the absolute headcount error keeps growing.
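A rough simulation makes this concrete. The rates below are invented, not real smoking data: a habitual group that smokes every day and a small impulsive group that shows up in the observed data but never returns. The naive forecast keeps a high relative accuracy, yet the number of people it is wrong by grows with the size of the population.

```python
import numpy as np

rng = np.random.default_rng(0)

def naive_forecast_error(population, p_habitual=0.12, p_impulsive=0.005):
    """Hypothetical setup: the observed day mixes habitual smokers with a
    one-off impulsive group; the next day only the habitual smokers return."""
    habitual = rng.binomial(population, p_habitual)
    impulsive = rng.binomial(population, p_impulsive)   # never smoke again
    observed = habitual + impulsive                     # what the past data shows
    prediction = observed                               # naive forecast for tomorrow
    actual = rng.binomial(population, p_habitual)       # tomorrow, without the one-off group
    return prediction, actual

for population in (1_000, 100_000, 10_000_000):
    pred, actual = naive_forecast_error(population)
    abs_err = abs(pred - actual)
    print(f"population={population:>10,}  off by {abs_err:>7,} people "
          f"(relative error {abs_err / actual:.1%})")
```

In relative terms the forecast still looks respectable at every scale, but the headcount it misses by scales up with the city, which is the multiplier effect described above.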

As Nobel laureate Daniel Kahneman notes in his book "Thinking, Fast and Slow", humans are not intuitive statisticians. Performing statistical computations on a class that does not behave statistically is assured to give you results, but not always the right ones.

In this day and age, when computational techniques like machine learning and artificial neural networks are fed statistical data as input, we will someday require a sub-branch of statistics, call it micro-stats, which should meet the wider branch of statistics at a common point, from where we can hope to take precision-oriented computing a step further. As we all know, an error in a few decimal places could put a space-bound rocket off course by thousands and thousands of miles, and we wouldn't want that, would we?
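As a back-of-the-envelope sketch (the figures here are assumptions, not taken from any real mission), letting a small relative error from truncated decimal places ride over an interplanetary distance already produces a miss measured in thousands of miles:

```python
# Back-of-the-envelope sketch; the figures are assumptions, not mission data.
trip_distance_km = 225e6   # rough Earth-to-Mars distance
relative_error = 1e-4      # e.g. a value truncated after a few decimal places

miss_km = trip_distance_km * relative_error
print(f"Miss distance: {miss_km:,.0f} km (about {miss_km / 1.609:,.0f} miles)")
# Miss distance: 22,500 km (about 13,984 miles)
```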

P.S. - I have tried to convey this theory in a nutshell to keep it short, but for anyone looking forward to knowing more, please feel free to comment down below. Also, I will upload my thesis on the same once it is through review.

Picture credit: Shutterstock.
