Genetic Algorithms: A Primer

It is Charles Darwin’s 200th birthday, so I guess there are loads of posts on genetic algorithms strewn across the Internet. Genetic Algorithms are special to me because they are the prime reason why I am doing whatever it is that I am doing.

This post is a light intro to Genetic Algorithms, without the jargon. Genetic Algorithms are delightfully simple. A GA is an approximation algorithm: it is used in situations where either the optimal solution is unnecessary or the complexity of the problem does not permit finding it. Sometimes, a Genetic Algorithm may even produce the optimal result. Since examples are the best way to learn an algorithm, here’s the example problem we are to solve.

Balancing moments


Given to you is a configuration of balances as shown in the figure. The problem is to balance the configuration by adding weights on either side. You are provided with 10 weights measuring 1, 2, 3, …, 10 kg. Each weight is to be used exactly once. Thanks to GreatBG for this puzzle; his puzzles are always most entertaining. This can roughly be equated to the travelling salesperson problem, in which the salesperson is to visit all cities (and come back) in a sequence that minimizes the total distance covered. The most naive solution is to try all possible sequences and see which one gives the minimum cost. For a map of n cities, that comes to n! combinations. A similar approach would work in our case also, but when the map or the number of weights is larger, the technique takes far too much time. A more professional approach would be to ruffle up your knowledge of elementary physics, reduce the problem to a set of linear equations and then solve them. Naaa.

Genetic Algorithms

My code, in C++, is here. Here’s the pseudo-code, sourced from Wikipedia:

  1. Choose initial population
  2. Evaluate the fitness of each individual in the population
  3. Repeat until termination: (time limit or sufficient fitness achieved)
    1. Select best-ranking individuals to reproduce
    2. Breed new generation through crossover and/or mutation (genetic operations) and give birth to offspring
    3. Evaluate the individual fitnesses of the offspring
    4. Replace worst ranked part of population with offspring

Initializing the Population

The algorithm maintains a set of solutions. Not optimal solutions, just solutions. For example, in the case of the TSP, it may be a set of random orderings of the cities. In our problem, a set of random configurations of weights. In the program, I maintain the configuration as a string, where the weight allotted to each slot is represented by a character, with ‘A’ being 1, ‘B’ being 2 and so on. The weights are added bottom to top and left to right. One point to be noted here is that all solutions in the set must stick to the constraints of the problem: no city may be re-visited, no weight can be added twice.
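A minimal sketch of that representation (function names are mine, not from the linked code): shuffling a fixed template string guarantees every weight appears exactly once, so the constraint holds by construction.

```cpp
#include <algorithm>
#include <random>
#include <string>
#include <vector>

// Each individual is a permutation of 'A'..'J' ('A' = 1 kg ... 'J' = 10 kg),
// with slots filled bottom to top and left to right. Shuffling a fixed
// template means the "each weight used exactly once" constraint is
// satisfied by construction -- no repair step needed.
std::vector<std::string> initPopulation(int size, std::mt19937& rng) {
    std::vector<std::string> pop(size, "ABCDEFGHIJ");
    for (auto& s : pop)
        std::shuffle(s.begin(), s.end(), rng);
    return pop;
}
```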

Evaluating the Population

This is the key part of the algorithm. Once a way is found to evaluate each member of the ‘population’, the problem is already half solved. In our case, the “fitness” will be based on the resultant moment in the system. Therefore, the best solution will have a fitness of 0: the resultant moment in the system would be zero.
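As a sketch (the lever arms here are hypothetical, since the real ones come from the figure): fitness is the magnitude of the net moment about the pivot, weight × arm summed over all slots.

```cpp
#include <cmath>
#include <string>

// Hypothetical lever arms for illustration: slot i sits arms[i] units
// from the pivot (negative = left side). The real values would be read
// off the figure's configuration of balances.
static const int arms[10] = {-5, -4, -3, -2, -1, 1, 2, 3, 4, 5};

// Fitness = |sum of (weight x arm)|; 0 means perfectly balanced.
int fitness(const std::string& s) {
    int moment = 0;
    for (int i = 0; i < 10; ++i)
        moment += arms[i] * (s[i] - 'A' + 1);  // 'A' = 1 kg ... 'J' = 10 kg
    return std::abs(moment);
}
```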

Until the required precision is obtained, or a maximum limit on the number of iterations is reached, we continue by performing the following operations.


Crossover

This is where we actually begin to understand why they are called Genetic Algorithms in the first place. According to Darwin’s theory of Natural Selection, the best members of the population mate to produce better offspring. Similarly, in our algorithm, the best solutions found so far mate with each other to (hopefully) produce a better solution. In code, this translates to merging substrings from different members, chosen from among the best solutions found so far. The point to be noted is that, done improperly, this could result in an offspring that is not even a valid member of the population; it might violate the problem’s constraints. In the TSP, and in this case, the same city/weight might get listed again, which cannot be allowed. So we use a special crossover method called Partially Matched Crossover (PMX in the code), whose details are beyond the scope of a primer.


Mutation

Some members of the population are randomly chosen to be mutated. Mutation is a process by which an offspring gets a trait that is inherited neither from the father nor the mother. Translated into code, this simply means swapping two characters within the string. You can choose more complex mutation techniques if desired. Again, the problem constraints must be satisfied. Mutation is required mainly to increase the variety of solutions in the population. Variety is very important.

Natural Selection and Elitism

Once these operations are done on the parents, a new generation of solutions is born. To make sure that the best solution in this generation is at least as good as the best one in the previous generation, we introduce “elitism”. Normally, all the individuals in the previous generation are replaced by members from the newer generation. But with a non-zero elitism value, a small portion of the best solutions from the previous generation survives on into the next generation.

Going by the paradigm of survival of the fittest, we now select the best solutions from both generations, and these form the actual next generation. (My code varies slightly: it does not use natural selection; the members of the younger generation simply replace all but the elite members of the elder population.)
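That replacement scheme can be sketched like this (lower fitness is better; the function names are mine):

```cpp
#include <algorithm>
#include <functional>
#include <string>
#include <vector>

// Elitist replacement: rank the old generation, keep the best `elite`
// individuals untouched, and let the offspring take the remaining slots.
// With elite > 0 the best solution can never get worse between generations.
void nextGeneration(std::vector<std::string>& pop,
                    const std::vector<std::string>& offspring,
                    std::size_t elite,
                    const std::function<int(const std::string&)>& fit) {
    std::sort(pop.begin(), pop.end(),
              [&](const std::string& a, const std::string& b) {
                  return fit(a) < fit(b);
              });
    for (std::size_t i = elite; i < pop.size(); ++i) {
        if (i - elite >= offspring.size()) break;
        pop[i] = offspring[i - elite];
    }
}
```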

Local Maxima

Genetic Algorithms suffer from the problem that they might converge to a non-optimal solution. Suppose we are looking for the solution at the top of a mountain. It is possible that the population ends up on an elevated hill, far away from the mountain. This can happen if the various parameters are not tuned properly, and the algorithm will not get out of this situation by itself. The best way to go in this kind of a scenario is to simply start again.

People, Blogs, Conferences and Applications

Game AI Development Event at Kurukshetra 09

College of Engineering, Guindy is organizing a game AI development event this time at Kurukshetra, the annual International Techno-Management festival.

Kurukshetra 09


We wanted the event to reflect the current trends in the game AI development industry. It is left to the participants to judge to what extent this goal is accomplished. The trends seem to be:

  • Player specific content
  • A move away from hard-coded, rule-based opponents.

We chose Lunar Lander as the test game for both problem statements. Why? Lunar Lander was one of the earliest games to use vector graphics. But that’s not the reason. Lunar Lander is simple, to code and to play. And it seemed to offer scope for hacking both problem statements. And now to the problem statements.

The first problem statement is a challenge to create a player that plays Lunar Lander. You could try a rule-based approach and try coding it, and you may succeed too. But I’ll bet the easier way out is to make your player learn how to handle the lander. There are many situations where a rule-based approach would fail. Consider the case of a car racing game: a rule-based approach may satisfy the initial requirements of the game, and your racer may successfully out-race the human player, but it wouldn’t do well on tracks that you, as its programmer, have never seen, because your AI doesn’t know how to ‘drive’. Well, that’s the motivation for this problem statement.

The second problem statement concerns content creation. Simple games such as Lunar Lander deserve, I think, to engage the player for longer than three levels. The content of the game is very simple and involves very little graphics. Is it possible to use modern / not-so-modern techniques to generate an infinite set of levels? Theoretically, yes. Further, would these levels ‘engage’ the player as much as hand-crafted levels would? It is definitely worth a try.

Well, if you are a game AI pro and all this seems, well, ‘childish’ to you, or in case you disagree with what we have identified as the current trends in the industry, try the third problem statement. If you have any innovative ideas on game AI, an implementation of your concept is most welcome, although we do not intend to accept textbook search algorithms as-is; even slightly modified but improved versions are accepted.

This is the link to the event’s page. Please go through the problem statements again. This is the first time we are attempting such an event, so your comments and suggestions are welcome; they would be invaluable in making this a more successful event in the next editions of Kurukshetra. Have fun, Merry Christmas and Happy New Year!

A new kind of Game

It doesn’t look visually enticing, but I am pretty sure it’s darn good. No, I have not played the game yet.


Now, I wasn’t expecting to write a post on a game any time soon. Not even on DotA. But this is just too good a game for me to resist. I have been quite intrigued by the concept of Evolutionary Computation. One lazy Sunday, I started looking for more stuff on Evolutionary Computation. I found a lot. In particular, a post-doctoral researcher’s blog: Julian Togelius. He has published a paper on solving Sudoku using Genetic Algorithms. Serious stuff, that! Another of his papers has been judged one of the top ten AI game innovations of the year. He was one of the organisers of a Simulated Car Racing Championship at WCCI 2008. It seems there are not a lot of games available that actually use the latest learning techniques.

It was very funny in the beginning when I looked at the game’s graphics (WCCI). Not a ‘Need for Speed’, then. On further reading, I found more on the rules of the game. From a programmer’s standpoint, I thought, it would not be easy to program a controller for all the various cases. How could it adapt? Perhaps learn? Neural networks, then? Neural networks are, to put it in very simple terms, like a brain. (Please note that this is a gross over-simplification; they resemble the brain only in that they can learn new stuff. They are not even close to the functioning of a brain.) Perhaps neural networks would have been the answer if the same question had been put forth a few years ago. Today, ANNs alone will not suffice!

Genetic Algorithms are now used to train Neural Networks. OK. Imagine you are given the job of teaching something to someone. If you couple a GA with an ANN, it would be something like this: you just have to tell the person how well he has learnt the subject, and he’ll keep improving all by himself. (Of course, this is another gross over-simplification. If this is too non-technical, please refer to one of the links in the sidebar.)
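That teacher-and-learner loop can be sketched very crudely (this is a plain elitist (1+1) evolution strategy on a tiny fixed network, nothing like rtNEAT): the only feedback the network ever receives is a single score, here its squared error on the XOR table.

```cpp
#include <array>
#include <cmath>
#include <random>

// A tiny fixed 2-2-1 network: 9 weights including biases, tanh units.
double forward(const std::array<double, 9>& w, double x1, double x2) {
    double h1 = std::tanh(w[0] * x1 + w[1] * x2 + w[2]);
    double h2 = std::tanh(w[3] * x1 + w[4] * x2 + w[5]);
    return std::tanh(w[6] * h1 + w[7] * h2 + w[8]);
}

// "How well has it learnt?" -- total squared error on the XOR table.
double error(const std::array<double, 9>& w) {
    const double X[4][2] = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
    const double Y[4] = {0, 1, 1, 0};
    double e = 0;
    for (int i = 0; i < 4; ++i) {
        double d = forward(w, X[i][0], X[i][1]) - Y[i];
        e += d * d;
    }
    return e;
}

// Elitist (1+1) evolution: mutate all weights with Gaussian noise, keep the
// child only if it scores better, so the error can only decrease.
double evolve(int generations, unsigned seed) {
    std::mt19937 rng(seed);
    std::normal_distribution<double> noise(0.0, 0.3);
    std::array<double, 9> best{};
    for (auto& v : best) v = noise(rng);
    double bestErr = error(best);
    for (int g = 0; g < generations; ++g) {
        auto child = best;
        for (auto& v : child) v += noise(rng);
        double e = error(child);
        if (e < bestErr) { best = child; bestErr = e; }
    }
    return bestErr;
}
```

No gradients, no backpropagation: the “teacher” only reports the score, and selection does the rest.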

It just got more interesting after that. I learnt that one of the car controllers was evolved using rtNEAT (real-time NeuroEvolution of Augmenting Topologies), developed by Kenneth O. Stanley at the University of Texas at Austin. NEAT not only evolves the best neural network of a given structure, it also evolves the best structure for the purpose. Fantastic stuff. As I kept reading, link after link, I got here. The page gives a detailed overview of how NERO is played and how it is being developed. Please do read it; it’s an interesting read. You’ll start wondering what kind of games will be the next big trend.

I have added to my links list some of the more interesting blogs I found during my research. Anyone interested in AI, please do have a look at these pages. I definitely plan on writing more posts on my own discoveries on the Web about AI, more particularly Evolutionary Computing. And next time, I hope to convey more technically what I actually wish to convey. Cheers.
