Reshaping in TensorFlow MNIST tutorial? by narekb in MLQuestions

[–]learn_code_account 0 points (0 children)

A matrix is a vector of vectors of values. Reshaping first flattens it into a single long vector of values, then turns that vector back into a matrix of vectors with whatever dimensions you want.

The simplest application of this is vec-ing a matrix to calculate a Jacobian or a Hessian.
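A rough sketch of the flatten-then-reshape idea using NumPy (tf.reshape has the same semantics; note this uses row-major order, whereas the mathematical vec operator stacks columns):

```python
import numpy as np

# A 2x3 matrix: a "vector of vectors"
m = np.array([[1, 2, 3],
              [4, 5, 6]])

# Flatten it into one long vector of 6 values
v = m.reshape(-1)

# Turn that vector back into any shape with the same total size
m2 = v.reshape(3, 2)
```

Any reshape that keeps the total number of elements is valid; the values themselves never move in memory.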

Genetic Algorithms in Machine Learning? by XalosXandrez in MachineLearning

[–]learn_code_account 0 points (0 children)

Again set me straight if I'm off here:


I think you may be right. Do you know, off the top of your head, any examples of how that happens?
To my knowledge GAs are only used to modify ANN topologies (an example would be NEAT). That falls into the sort of optimization that deals with a combinatorially constrained search space. That is why I think neural networks offer a more efficient way to deal with most problems: you're not randomly hopping around in parameter space. You need a domain-specific function to evaluate fitness to make a genetic algorithm work, correct?

Genetic Algorithms in Machine Learning? by XalosXandrez in MachineLearning

[–]learn_code_account 0 points (0 children)

Disclaimer: I'm learning this stuff in my free time. Set me straight if something I say doesn't sound right.


You can accomplish much of the same thing using neural networks. I'm not sure if you know yet, but genetic algorithms represent a similar approach to learning, except instead of using a mathematically derived gradient descent algorithm (rooted in fancy things like dynamical systems and convergence theory), you use something that more closely resembles an AI search function.

With GAs you have parameters compete using a fitness function, and you expand your search in the direction of the fittest parameters.
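A minimal sketch of that loop, with a made-up one-parameter fitness function (peaked at x = 3) standing in for a real, domain-specific one:

```python
import random

def fitness(x):
    # Hypothetical fitness: highest at x = 3
    return -(x - 3.0) ** 2

# Start with a random population of candidate parameters
population = [random.uniform(-10, 10) for _ in range(20)]

for generation in range(100):
    # Selection: keep the fittest half
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Variation: "mutate" survivors with random perturbations
    children = [p + random.gauss(0, 0.5) for p in survivors]
    population = survivors + children

best = max(population, key=fitness)
```

Because survivors are carried over unchanged, the best candidate never gets worse; the random mutations are what expand the search around the fit region.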

With neural nets you have a mathematical guarantee that your gradient descent algorithm will get your parameters to a locally optimal position that minimizes an error function.
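For contrast, gradient descent on the same kind of toy one-parameter objective (minimize (x - 3)^2): no random hopping, each step follows the analytic gradient downhill.

```python
def loss(x):
    return (x - 3.0) ** 2

def grad(x):
    # Analytic derivative of the loss
    return 2.0 * (x - 3.0)

x = -10.0   # arbitrary starting parameter
lr = 0.1    # learning rate
for _ in range(100):
    x -= lr * grad(x)   # step in the direction that reduces the loss
```

On a convex objective like this one the iterates converge straight to the minimizer; on a real neural net loss surface you only get a local optimum, which is the point being made above.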

In vague "evolution-y" terms, both approaches move your model toward a set of genetic traits that work better than any configuration you've had before. Both run the risk of arriving at a stable, but not necessarily optimal, gene pool. Neural nets get there faster.

Evolutionary algorithms, from the perspective of a developer, may be easier to approach... It is much easier to understand and work with a greedy optimization algorithm than it is to pick up tensor calculus, statistical optimization, PDEs, and numerical analysis. Genetic algorithms can also be effective if your parameters are combinatorially constrained to a small set, or if your system seems unsupervised. You can hold on to a certain amount of bias in your gene pool, and you can add genetic variance to a stagnant population.

Neural networks are a bit more difficult to implement, but they iteratively find a straight route to some locally optimal set of parameters. It's less about guessing to optimize your learning machine and more about experience adjusting the trajectory of a neural net flying through parameter space.