
RE: Would you love to see how I trained my small, tiny, powerful neural network?

in #steemstem · 6 years ago (edited)

My father used to drive me crazy when I was a kid by telling me he was going to sell our car and buy a Subaru Rex! Puke! Bloody Subaru Rex! I hate it! :D

Anyway, Matlab...

Nice and tidy, I like that :P

Now I'm serious: do you have any experience with data that were already transformed before being used as input for a NN? I will need to solve something with hyperspectral images. I have two options: either use the raw data (matrices, 64k x 4k) and put them into the NN to train the hell out of them, or... would it be better to reduce their dimensionality to 64k x 3 (not 3k, just 3) and then use that as the input? I don't like doing the analysis of the analysis of the analysis... Yet, reduction of dimensionality is in my veins :)
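Just to make the shapes concrete, here is roughly what I mean (a Matlab sketch; `cube` is a made-up name for the hyperspectral image as it arrives, H x W x bands):

```matlab
% cube: hypothetical H x W x bands hyperspectral image, e.g. 256 x 250 x 4000
X = reshape(cube, [], size(cube, 3));  % option 1: raw 64000 x 4000 input
% option 2: the same 64000 pixels, each described by just 3 numbers,
% i.e. a 64000 x 3 matrix produced by some dimensionality reduction
```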


Well, I personally don't have experience with such a problem, but based on what you said, it looks like you need to reduce the number of features without losing any important information. That sounds to me like a problem for the Karhunen–Loève expansion; you can check out this article on Wikipedia, https://en.wikipedia.org/wiki/Karhunen%E2%80%93Lo%C3%A8ve_theorem, and google it further.
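For what it's worth, here is a minimal sketch of how that reduction could look in Matlab (X and all the sizes are assumptions taken from your description, not tested code):

```matlab
% Sketch only: X is assumed to be your pixels-by-bands matrix
% (e.g. 64000 x 4000), already loaded in memory.
mu = mean(X, 1);                 % mean spectrum over all pixels
Xc = bsxfun(@minus, X, mu);      % centre every band
[~, ~, V] = svds(Xc, 3);         % top 3 right singular vectors (4000 x 3)
Z = Xc * V;                      % 64000 x 3 scores -> input for the NN
```

The columns of Z are the projections onto the three leading eigenvectors of the band covariance, which is the discrete Karhunen–Loève / PCA reduction; looking at how much variance the first three singular values carry would tell you whether 3 components are enough.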

I hope that I was helpful :)

Karhunen–Loève theorem

In the theory of stochastic processes, the Karhunen–Loève theorem (named after Kari Karhunen and Michel Loève), also known as the Kosambi–Karhunen–Loève theorem, is a representation of a stochastic process as an infinite linear combination of orthogonal functions, analogous to a Fourier series representation of a function on a bounded interval. The transformation is also known as the Hotelling transform and eigenvector transform, and is closely related to the principal component analysis (PCA) technique widely used in image processing and in data analysis in many fields.

Stochastic processes given by infinite series of this form were first considered by Damodar Dharmananda Kosambi. There exist many such expansions of a stochastic process: if the process is indexed over [a, b], any orthonormal basis of L2([a, b]) yields an expansion thereof in that form. The importance of the Karhunen–Loève theorem is that it yields the best such basis in the sense that it minimizes the total mean squared error.

Yes, yes, there is a whole bunch of similar techniques, each of them good for the extraction/elimination of "noise" based on slightly different statistical parameters.

Thank you!
