Data Science Asked on May 17, 2021
I am currently writing a scientific thesis that consists of two parts. In the first part I build ML models with neural networks, support vector machines, etc., and the second part is about finding global minima of optimization objectives with optimization methods like Particle Swarm Optimization, Ant Colony Optimization, Simulated Annealing, etc.
In the introduction, I would write machine learning methods when referring to the first part and metaheuristic optimization methods when referring to the second part.
Is it possible to distinguish them like this?
In a wider context, every machine learning method can be re-cast as some type of optimisation problem. For example, for neural networks the associated optimisation problem is "find the weights which minimise some loss function of the data, given an architecture". This is solved using back-propagation (a layered gradient-descent method), and when a minimum of the loss function is found we say the system has "learned".
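To make the learning-as-optimisation view concrete, here is a minimal sketch (my own illustrative example, not from the question): "learning" the single weight of a linear model y = w*x by plain gradient descent on a mean-squared-error loss. The data, learning rate, and iteration count are all assumptions chosen for the toy problem.

```python
# Toy data generated from y = 3*x; "learning" means recovering w = 3.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]

w = 0.0    # initial weight
lr = 0.01  # learning rate

for _ in range(1000):
    # Analytic gradient of the MSE loss: dL/dw = (2/n) * sum((w*x - y) * x)
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # gradient-descent step

print(round(w, 3))  # converges to 3.0
```

The key point is that the method needs an explicit, differentiable loss: the gradient formula in the loop is what makes this an analytic method.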
On the other hand, there is nothing stopping us from re-casting optimisation problems as "learning" problems. A possible differentiation (based on what you mention in the question) is between analytic and non-analytic methods.
A. Analytic methods: gradient-based, primal-dual, etc. (e.g. NNs, SVMs)
B. Non-analytic methods: particle systems, genetic/evolutionary systems, simulation methods, stochastic methods, etc.
The basic difference is both whether the form of the objective function is known and whether analytic methods (e.g. gradient descent vs particle systems) are used to find its optima.
Again, there is nothing, in principle, stopping us from using these non-analytic optimisation methods to do machine learning.
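To underline that last point, here is a sketch (again an assumed toy example) that "learns" the same linear-model weight as before using pure random search, a non-analytic method, instead of gradient descent:

```python
import random

random.seed(1)

# Toy data from y = 3*x, as in the gradient-descent case.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]

def loss(w):
    # Mean-squared error of the model y = w*x; only its value is used.
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Pure random search over the weight: no gradients anywhere.
best_w = 0.0
best_loss = loss(best_w)
for _ in range(10000):
    w = random.uniform(-10.0, 10.0)
    if loss(w) < best_loss:
        best_w, best_loss = w, loss(w)

print(round(best_w, 1))  # lands near 3.0
```

This is far less efficient than gradient descent on such a smooth loss, but it demonstrates that the learning problem itself does not dictate the family of optimiser.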
Correct answer by Nikos M. on May 17, 2021