You are viewing a single comment's thread from:

RE: Would you love to see how I trained my small, tiny, powerful neural network?

in #steemstem · 6 years ago (edited)

Well, I personally haven't had experience with such a problem, but based on what you said, it looks like you need to reduce the number of features without losing any important information. That sounds to me like a job for the Karhunen–Loève expansion; you can check out the Wikipedia article at https://en.wikipedia.org/wiki/Karhunen%E2%80%93Lo%C3%A8ve_theorem and google it further.
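For what it's worth, here is a minimal sketch (my own illustration, not code from the original exchange) of shrinking a feature matrix with PCA, the finite-dimensional cousin of the Karhunen–Loève expansion. It assumes scikit-learn is available; the data matrix `X` is just a random placeholder:

```python
# Minimal sketch: reduce feature count with PCA while keeping most variance.
# X is placeholder data; substitute your own (n_samples, n_features) matrix.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))    # hypothetical dataset: 500 samples, 50 features

pca = PCA(n_components=0.95)      # keep enough components for 95% of the variance
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                        # fewer columns than the original 50
print(pca.explained_variance_ratio_.sum())    # fraction of variance retained
```

Passing a float in (0, 1) as `n_components` tells scikit-learn to choose the smallest number of components that explains that fraction of the variance, which is often a convenient way to pick the reduced dimension.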

I hope that I was helpful :)


Karhunen–Loève theorem
In the theory of stochastic processes, the Karhunen–Loève theorem (named after Kari Karhunen and Michel Loève), also known as the Kosambi–Karhunen–Loève theorem, is a representation of a stochastic process as an infinite linear combination of orthogonal functions, analogous to a Fourier series representation of a function on a bounded interval. The transformation is also known as the Hotelling transform and eigenvector transform, and is closely related to the principal component analysis (PCA) technique widely used in image processing and in data analysis in many fields.

Stochastic processes given by infinite series of this form were first considered by Damodar Dharmananda Kosambi. There exist many such expansions of a stochastic process: if the process is indexed over [a, b], any orthonormal basis of L²([a, b]) yields an expansion thereof in that form. The importance of the Karhunen–Loève theorem is that it yields the best such basis in the sense that it minimizes the total mean squared error.
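To make the "best basis" claim concrete, here is a hedged numerical sketch (my own addition, not part of the excerpt) of the discrete Karhunen–Loève / Hotelling transform: diagonalize the sample covariance, keep the top k eigenvectors, and the total mean squared reconstruction error equals the sum of the discarded eigenvalues:

```python
# Discrete Karhunen-Loeve / Hotelling transform via eigendecomposition.
# X is a random placeholder standing in for real samples.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 10))

Xc = X - X.mean(axis=0)                    # center each feature
cov = np.cov(Xc, rowvar=False)             # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)     # ascending eigenvalues, orthonormal vectors

order = np.argsort(eigvals)[::-1]          # largest variance first
k = 3
basis = eigvecs[:, order[:k]]              # best k-dimensional basis in the MSE sense

coeffs = Xc @ basis                        # KL coefficients (projections onto basis)
X_hat = coeffs @ basis.T                   # rank-k reconstruction

err = np.sum((Xc - X_hat) ** 2) / (len(X) - 1)  # normalized like np.cov (ddof=1)
print(err, eigvals[order[k:]].sum())            # the two numbers agree
```

Any orthonormal basis would give an exact expansion if all components were kept; the KL basis is the one for which truncating to k terms loses the least, which is exactly the PCA story in finite dimensions.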

Yes, yes, there is a whole family of similar techniques, each of them suited to extracting or eliminating "noise" based on slightly different statistical criteria.

Thank you!
