WARNING: This post is likely to be rather long, full of musings with very little concrete data to back them up. I hope it will interest you, but I don't promise anything...
Today I was playing with my little girl, Tamar (6 months old), when some questions started to form in my head. When you think of the ultimate goal of any living species, it's rather simple: ensure the existence of the species for as long as possible. This is achieved through "mechanisms" such as reproduction, survival of the fittest, randomization mechanisms (e.g. mutations, weather changes, etc.), and the numerous other techniques Mother Nature has to offer.
Scientists have been trying to simulate some of these mechanisms for various purposes for half a century, with varying degrees of success. Genetic Algorithms are the most widely known of these, and have been extensively researched over the past 20 years or so. The basic idea is that you have some objective function you want to optimize and a large set of parameters to tune. The assumption is that evaluating every possible set of parameters is computationally prohibitive (unless you're willing to wait 10,000 years for the answer). The tuning of the parameters is done by applying operations borrowed from genetics, such as reproduction or crossover (taking some parameters from one solution and some from another, thereby creating a new solution in its own right), mutation (changing some parameter(s) through a randomized mechanism in order to create a new solution), selection (each iteration keeps the best solution sets from previous iterations), and variants thereof. The main point I would like to focus on is that there is always a single objective function, and every solution (i.e. set of parameters) is judged by the score it achieves on that objective function.
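To make those mechanics concrete, here is a minimal GA sketch in Python. Everything about it - the population size, the mutation rate, the toy objective - is invented for illustration, not a reference implementation:

```python
import random

def crossover(a, b):
    """Take some parameters from one parent and the rest from the other."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(sol, rate=0.1):
    """Randomly perturb some of the parameters to create a new solution."""
    return [x + random.gauss(0, 1) if random.random() < rate else x for x in sol]

def genetic_algorithm(objective, n_params=5, pop_size=30, generations=100):
    # random initial population of parameter sets
    pop = [[random.uniform(-5, 5) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective, reverse=True)
        survivors = pop[:pop_size // 2]        # selection: keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            children.append(mutate(crossover(a, b)))
        pop = survivors + children
    return max(pop, key=objective)

random.seed(0)
# toy objective: maximize the negated sum of squares (optimum at the origin)
best = genetic_algorithm(lambda s: -sum(x * x for x in s))
```

Note that every candidate is judged by the one and only `objective` - which is exactly the point the rest of this post pokes at.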
Now let's get back to my little girl. As I said, as far as Nature is concerned, my daughter's ultimate purpose, like that of every human being, is to ensure the human race persists for eternity. However, even Mother Nature knew that this is too big a goal to be of any concrete value. Therefore, she has set an endless series of sub-goals in our lives, each of which brings us closer to the ultimate global goal of Human Kind (if only by a minuscule amount). These sub-goals are, for example, to reproduce, which requires (as a sub-sub-goal) meeting and getting acquainted with males, which requires, among other things, knowing how to walk (that's already a sub-sub-sub-goal), which has as a precondition being able to crawl (a sub-sub-sub-sub-goal, I think).
So Tamar's objective function, for our discussion, is to learn how to crawl. Not thinking about any of these things, I knew she needed to learn how to crawl (and frankly, I wanted her to get tired, but that's even more beside the point), so I set her on a mattress, with a blinking toy in front of her to attract her attention.
(I told you it would be long... don't lose patience, we're getting there...)
If we get back to Genetic Algorithms, the objective function of our problem is to learn how to crawl (say we can give a score for crawling quality). However, to reach this goal, I created a new objective function, apparently completely unrelated to the real goal - reaching that annoying blinking toy. She could end up reaching it in a million ways, or never manage to reach it on her own (which is actually what happened, since she kept crawling backwards), but at the end of the day, her mind and body would have learned some new things that will, eventually, serve her in reaching the real goal (crawling). In other words - I created a synthetic objective function, with only a vague relationship to the real objective function, with the express intent of improving the chances of optimizing the real objective function (crawling) in the long run.
Now my question is - how could this approach be applied to Genetic Algorithms in general? How can we synthesize, throughout a GA's run, new objective functions, which will help the GA to reach the ultimate objective function?
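One naive reading of this question is a staged GA: first evolve the population against a hand-made proxy objective (the "blinking toy"), then hand the resulting population to the real objective. The sketch below constructs a deliberately flat real objective that gives a GA no gradient to climb on its own; the target, thresholds, and proxy are all invented for illustration:

```python
import random

def evolve(pop, objective, generations=100, rate=0.2):
    """One GA phase: selection, crossover, and mutation against one objective."""
    for _ in range(generations):
        pop.sort(key=objective, reverse=True)
        survivors = pop[:len(pop) // 2]        # elitism: keep the best half
        children = []
        while len(survivors) + len(children) < len(pop):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(a))
            children.append([x + random.gauss(0, 0.3) if random.random() < rate else x
                             for x in a[:cut] + b[cut:]])
        pop = survivors + children
    return pop

# The "real" objective is nearly flat: it rewards only solutions already close
# to a hypothetical target, so a GA starting from scratch gets no signal.
target = [1.0, 2.0, 3.0]
def real_objective(s):
    return 1.0 if sum((x - t) ** 2 for x, t in zip(s, target)) < 1.0 else 0.0

# The hand-made proxy (the "blinking toy"): a smooth distance to the target.
def proxy_objective(s):
    return -sum((x - t) ** 2 for x, t in zip(s, target))

random.seed(1)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(40)]
pop = evolve(pop, proxy_objective)    # phase 1: chase the toy
best = max(pop, key=real_objective)   # the proxy carried us into the region
                                      # where the real objective finally fires
```

Here the proxy was written by hand, which is cheating: the open question above is precisely how the algorithm could *synthesize* such intermediate objectives on its own, mid-run.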
Another, rather different, question comes to mind - Tamar didn't reach the blinking toy, because she started crawling backwards. I'm no specialist in these matters, but I allow myself to assume that at some level she has learned something from this experience. Somewhere in her brain, something has marked that the movements she made were wrong (for reaching her goal, i.e. getting the toy), and that she should find another way. She will make the same movements once, twice, maybe thirty times - in the end she will grasp that the solution must be found elsewhere.

As far as I know, Genetic Algorithms (and all their variants) are always concerned with "good" solutions (i.e. solutions that get good scores on the objective function). I don't know of any algorithm that keeps track of "bad" solutions in order to avoid returning to solutions close to them. A possible approach could be to run a "positive" GA, which strives to improve the objective function, but always keeps track of the "bad" solutions. Whenever one of the new generation's items is too similar to one of the bad solutions (or to some representative of the bad guys), it is disposed of without even checking the objective function, and a new one is created instead. Of course, there is the risk of missing some good solutions that are accidentally similar to the bad ones, but when computing the objective function is an expensive task, it may be a worthwhile tradeoff.
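A minimal sketch of this "blacklist" idea follows (it is close in spirit to the tabu list of tabu search, for what it's worth). The GA archives the worst quarter of each generation and discards any child that lands too near an archived point - without spending an objective evaluation on it. The distance radius, retry budget, and all other parameters are arbitrary choices, not recommendations:

```python
import math
import random

def too_close(candidate, archive, radius=0.5):
    """True if the candidate lands near any recorded bad solution."""
    return any(math.dist(candidate, bad) < radius for bad in archive)

def ga_with_blacklist(objective, n_params=2, pop_size=20, generations=40):
    bad_archive = []    # representatives of the "bad guys"
    evaluations = 0     # count how often we actually pay for the objective

    def score(s):
        nonlocal evaluations
        evaluations += 1
        return objective(s)

    pop = [[random.uniform(-5, 5) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        bad_archive.extend(pop[-pop_size // 4:])   # remember the worst quarter
        survivors = pop[:pop_size // 2]            # elitism: keep the best half
        pop = list(survivors)
        while len(pop) < pop_size:
            child = None
            for _ in range(20):   # retry budget, so we can never stall forever
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, n_params) if n_params > 1 else 1
                child = [x + random.gauss(0, 0.5) for x in a[:cut] + b[cut:]]
                # discard without evaluating if too similar to a bad solution
                if not too_close(child, bad_archive):
                    break
            pop.append(child)
    return max(pop, key=objective), evaluations

random.seed(2)
best, n_evals = ga_with_blacklist(lambda s: -sum(x * x for x in s))
```

The retry budget matters: once the population converges, its "worst" members sit close to its best ones, so an unbounded rejection loop could starve - which is exactly the risk of discarding good solutions that happen to resemble bad ones.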