Avoiding premature convergence with neural networks (EAs)

Data Science Asked by Nick Stevens on July 20, 2021

I am currently writing a program that plays snake on a 25×25 grid. It optimizes the weights of 300 candidate solutions (each solution being a different neural network) with an evolutionary strategy, i.e. by random mutation and parent selection. I decided not to apply crossover to the parent pool because of the black-box nature of MLPs and other neural networks. My population size is 300 and 10 parents are selected every generation (200 generations in total). Every solution that is not a parent is deleted to make room for 290 new mutated solutions based on the parents, roughly as in the sketch below.
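For reference, the generation loop boils down to plain truncation selection plus Gaussian mutation, roughly like the following sketch (the fitness evaluation and the mutation scale SIGMA are shown only as placeholders, not my actual values):

    import numpy as np

    POP_SIZE, N_PARENTS, N_WEIGHTS, SIGMA = 300, 10, 190, 0.1

    def evaluate_fitness(weights):
        """Placeholder: play one game of snake with this weight vector, return apples eaten."""
        raise NotImplementedError

    population = [np.random.randn(N_WEIGHTS) for _ in range(POP_SIZE)]

    for generation in range(200):
        scores = np.array([evaluate_fitness(w) for w in population])
        # Truncation selection: indices of the 10 fittest solutions.
        parent_idx = np.argsort(scores)[-N_PARENTS:]
        parents = [population[i] for i in parent_idx]

        # Parents survive unchanged; 290 children are Gaussian-mutated copies of random parents.
        children = [
            parents[np.random.randint(N_PARENTS)] + SIGMA * np.random.randn(N_WEIGHTS)
            for _ in range(POP_SIZE - N_PARENTS)
        ]
        population = parents + children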

My network structure (MLP) is the following: 6 input nodes, two hidden layers of 10 nodes each, and 3 output nodes.
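Written out, the forward pass is just three small matrix products, roughly as below (tanh is shown only as an example activation; with no biases this gives exactly 6*10 + 10*10 + 10*3 = 190 weights):

    import numpy as np

    def unpack(flat):
        # Split the flat 190-element weight vector into the three layer matrices.
        w1 = flat[:60].reshape(6, 10)
        w2 = flat[60:160].reshape(10, 10)
        w3 = flat[160:190].reshape(10, 3)
        return w1, w2, w3

    def forward(flat, x):
        w1, w2, w3 = unpack(flat)
        h1 = np.tanh(x @ w1)
        h2 = np.tanh(h1 @ w2)
        return h2 @ w3   # 3 raw outputs, e.g. one per turn direction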

With the current version of the program, I can reach a steady score of 60 apples (points) eaten (around 15/200 have a score above 60), and this after only 50 generations! But now it seems I have hit a local minimum that I cannot overcome. Does anyone know whether further progress is still possible, or might I have hit the ceiling? The quickly rising performance could be a sign that my exploration/exploitation balance is not ideal, but with each solution containing 190 weights, more exploration seems too computationally expensive and would take forever on my 10th-gen i7 laptop.

Can I call this premature convergence, and is there a way to deal with it, or have I hit the limits of my idea?

One Answer

You can take a look at dropout; it can help you get out of the local minimum you are stuck in.
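In this gradient-free setting, one way to read the suggestion is to randomly zero a fraction of the hidden activations during each fitness rollout, so the selected weights cannot over-specialize to a single activation pattern. A rough sketch (the 0.2 rate and the choice to mask hidden activations are assumptions, not part of the answer itself):

    import numpy as np

    def forward_with_dropout(flat, x, rate=0.2, rng=np.random.default_rng()):
        w1, w2, w3 = unpack(flat)   # unpack() as sketched in the question
        h1 = np.tanh(x @ w1)
        h1 *= (rng.random(h1.shape) > rate) / (1 - rate)   # inverted-dropout mask
        h2 = np.tanh(h1 @ w2)
        h2 *= (rng.random(h2.shape) > rate) / (1 - rate)
        return h2 @ w3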

Answered by Abhishek Verma on July 20, 2021
