How to measure that Big Data is "big" enough

in #bigdata · 7 years ago (edited)

Before an analysis is carried out, we need to define the sources, then collect and cleanse the data. The data must be "trusted", or it will hardly represent the reality we'd like to perceive.
Even if the data is trusted, who can guarantee that the resulting insights are? An ordinary person would probably notice if Artificial Intelligence (AI) went wrong with its digital advice in banking, online shopping or cheap flights. But there is a growing number of children with cognitive disabilities (autism, Asperger syndrome) who are expected, in the future, to rely mostly on digital labor (DL) assistance. In an autistic person's adult life, this digital advice, delivered through wearable devices, will replace the parents who today persistently remind those children of their everyday use-cases and journeys.
The requirements for AI and DL must be reviewed now, so that people with cognitive disabilities are protected in the digital ecosystem. Big Data and robust models are part of the solution: if a dataset is big enough, individual errors will not influence the common trend and, as a consequence, the final insight. So, what is the measure for considering data in a given domain to be "big" enough?
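The post doesn't give a formula, but one common statistical yardstick for "big enough" is the sample size at which the margin of error of an estimate falls below a chosen threshold. The sketch below is only an illustration of that idea, not something from the original post: `required_sample_size` and `simulate_mean` are hypothetical helpers, the classical formula n = (z·σ/E)² is used for a mean, and the small simulation shows that a fixed share of erroneous records stops shifting the estimated trend once n grows large.

```python
# Minimal sketch (illustrative only): how large does a sample need to be
# before a trusted trend emerges despite a share of erroneous records?
import math
import random

def required_sample_size(sigma, margin, z=1.96):
    """Classical sample-size formula n = (z * sigma / margin)^2 for a mean."""
    return math.ceil((z * sigma / margin) ** 2)

def simulate_mean(n, error_rate=0.05, seed=0):
    """Mean of n observations where a fraction `error_rate` is corrupted."""
    rng = random.Random(seed)
    values = []
    for _ in range(n):
        x = rng.gauss(100.0, 15.0)          # underlying "true" signal
        if rng.random() < error_rate:
            x += rng.uniform(-50.0, 50.0)   # untrusted / noisy record
        values.append(x)
    return sum(values) / n

if __name__ == "__main__":
    # How many records are needed for a ±1.0 margin at 95% confidence?
    print("n needed for ±1.0 margin:", required_sample_size(sigma=15.0, margin=1.0))
    # The estimated mean stabilises as the dataset grows, errors and all.
    for n in (100, 10_000, 1_000_000):
        print(f"n={n:>9}: estimated mean = {simulate_mean(n):.2f}")
```

Under these illustrative assumptions, the answer to "how big is big enough" depends on the variance of the data and the precision the final insight requires, not on an absolute record count.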

And how do I like your post? ))

@ashumanakh and how do you like it here?

There is a field with a dollar sign, and next to it a circle with an arrow; that's what you need to click, i.e. give an Upvote.