On Tigers, diffusion and chaos

Tiger

RT @sanchan89 Always good to know that my mojo is still intact after 4 years 🙂 Take a look http://bit.ly/bGv5su

So, after quite a while, I tried sketching again. And it came out pretty neat, I suppose. But then, I want this to be a tech blog, right?

We humans have somewhat dull skin[1]. It is completely bereft of fur[2], and any pattern we want on it has to be painfully tattooed. Look, on the other hand, at the pervasive presence of spatial patterns on animal coats and in flora. Most common in plants are rosette patterns and self-replicating branching systems. Mammals and fish are so much richer – their fur coats come with stripes, spots, rosettes, bands, blocks, combinations of these, and in the case of some sea shells – the Sierpinski triangle! Even in animal behavior there is so much elegance – if you have not seen the movements of large schools of fish or birds in flight, you have a serious problem and you should consult a psychotherapist. Even the inanimate desert sands are beautifully patterned with near-parallel curves.

Spatial patterns collage

It makes us think, doesn’t it? How are these patterns formed?

If every skin cell were to take on either black or white with uniform random probability, then a zebra would look like white noise. Try imagining one with a coat like that. No probability distribution is ever going to generate patterns as fantastic as those on every tiger. Does every cell, then, have a global spatial view of itself? Is it possible that a single cell knows that it is part of the stripe across the head of the tiger, and therefore it should take on a black pigmentation? That seems like something phenomenally complex for a single living cell to know.

Related problems in developmental biology are cell differentiation and morphogenesis. Every cell in an embryo starts out the same. Embryos are homogeneous; and yet, without a single central controller, cells in different areas of the embryo develop into different parts of the body. Some develop into the heart and lungs, others into the brain and legs.

The first explanation for this phenomenon was provided by the British mathematician, code breaker, logician, computer scientist ( and, during the later days of his life, chemist ) Alan Turing in his paper titled 'The Chemical Basis of Morphogenesis' in 1952. ( This paper has its own wiki page. ) The paper described mechanisms, borrowing heavily from concepts of self-organisation well known in physics, by which non-uniformity may arise from uniform homogeneous states, and outlined the reaction-diffusion theory of morphogenesis. Reaction-diffusion systems are mathematical models that describe how the concentration of one or more substances distributed in space changes under the influence of two processes: local chemical reactions, in which the substances are converted into each other, and diffusion, which causes the substances to spread out in space.
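To make the idea concrete, here's a tiny numerical sketch of one reaction-diffusion system – the Gray-Scott model, a well-studied cousin of Turing's equations, not the one from his paper – run on a one-dimensional ring of cells. The feed and kill parameters are assumptions picked for illustration; nudging them is how you move between spots, stripes and nothing at all.

```cpp
// A 1-D Gray-Scott reaction-diffusion sketch. Two substances u and v
// react locally (the u*v*v term) and diffuse to neighbouring cells.
// The parameters F and k are illustrative assumptions, not canonical.
#include <cstdio>
#include <vector>

int main() {
    const int N = 200;                          // cells on a ring
    const double Du = 0.16, Dv = 0.08;          // diffusion rates
    const double F = 0.035, k = 0.065;          // feed and kill rates (assumed)
    std::vector<double> u(N, 1.0), v(N, 0.0);
    for (int i = N / 2 - 5; i < N / 2 + 5; ++i) v[i] = 0.5;  // tiny perturbation

    for (int step = 0; step < 10000; ++step) {
        std::vector<double> nu(u), nv(v);
        for (int i = 0; i < N; ++i) {
            int l = (i + N - 1) % N, r = (i + 1) % N;        // periodic boundary
            double lapU = u[l] + u[r] - 2 * u[i];            // diffusion term
            double lapV = v[l] + v[r] - 2 * v[i];
            double uvv = u[i] * v[i] * v[i];                 // local reaction
            nu[i] = u[i] + Du * lapU - uvv + F * (1 - u[i]);
            nv[i] = v[i] + Dv * lapV + uvv - (F + k) * v[i];
        }
        u = nu; v = nv;
    }
    for (int i = 0; i < N; ++i)
        putchar(v[i] > 0.2 ? '#' : '.');        // crude one-line "pattern" dump
    putchar('\n');
}
```

A near-uniform start plus a tiny perturbation grows into bands of '#' and '.' – non-uniformity out of uniformity, which is Turing's whole point.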

Independently, around the same time, the Russian biophysicist Boris Belousov discovered a reaction-diffusion system, now called the Belousov-Zhabotinsky reaction. Where Turing had proposed mathematical models, here was an actual demonstration of the self-organizing tendencies of the nonlinear systems he had described. Here's a video of a BZ cocktail evolving.

From a uniformly homogeneous solution – spirals, spots and expanding rings are formed. And no two runs of the same experiment will give you exactly the same result. The equations that govern these reactions are so expressive that you can actually come up with results that show stripes and hexagons! Try it out yourself in this applet.

These two works are considered by some to be the founding works of chaos theory. In a nutshell, chaos theory deals with highly nonlinear, usually self-looped systems that, despite being fully deterministic, are subject to abrupt and seemingly random change. It is clear in this case – in spite of explicitly and deterministically defined rules, the reaction-diffusion yields a uniquely different result every time. The source of all this chaos is, well…, in the source itself. The non-linearity of the system greatly magnifies even immeasurably small changes in the initial state. So, the next time you round a value from 0.506127 to 0.506, be warned! As a side note, the phrase "Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas?" is not a misunderstanding of chaos theory. It is in fact the title of the talk presented by Edward Lorenz at the AAAS in 1972. The title came from a run of the weather simulation he had built, in which he rounded one of the atmospheric values exactly as above.
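If you want to see the rounding warning in action, here's a five-line demonstration. Lorenz's model is a system of differential equations; as a simpler stand-in I am using the logistic map x → rx(1−x) in its chaotic regime, with 0.506127 and its rounded cousin 0.506 as the two starting points.

```cpp
// Sensitivity to initial conditions: iterate the logistic map from the
// full value and from the rounded one, and watch the trajectories part ways.
#include <cmath>
#include <cstdio>

int main() {
    double a = 0.506127, b = 0.506;   // full vs. rounded initial condition
    const double r = 3.9;             // parameter deep in the chaotic regime
    for (int i = 0; i <= 50; ++i) {
        if (i % 10 == 0)
            printf("step %2d: %.6f vs %.6f (diff %.6f)\n",
                   i, a, b, std::fabs(a - b));
        a = r * a * (1 - a);
        b = r * b * (1 - b);
    }
}
```

By around step 20 the two trajectories have nothing to do with each other, even though they started 0.000127 apart.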

If you are interested in learning more about chaos theory, there is a fantastic one-hour program on BBC. No, it does not have equations ( well, maybe just one ).

  1. Well, that's not entirely true! We have fingerprints. And they are beautiful and unique, but not colorful or macroscopic. Guess that still qualifies as dull.
  2. This is interesting because man is the only mammal, excluding those that are aquatic, that has no fur. Check out the February 2010 edition of Scientific American.


Vision and Cognition

Quite eventful, these past 3 months. I presented my first publication at the World Congress on Nature and Biologically Inspired Computing ( NaBIC ) in December, and attended the Microsoft Research India Winter School on machine learning and computer vision, followed by Kurukshetra, in January.

Some pics first – NaBIC 09, MSRI Winter School, Kurukshetra. Our NaBIC paper is here and our code is here. In short, we show that a simple iterative algorithm can find a very good photomosaic compared to population-based methods like genetic algorithms. To whet some interest, here is one of our results.

Now that the pictures have spoken their thousand words…

Human Vision

I have been a follower of Predictably Irrational since watching Dan Ariely's talks ( here and here ) on TED.com. Both talks are delightfully entertaining and demonstrate with wonderful examples the fallibility of our own decision-making capacity. For a while after each talk, I spent a lot of time probing every decision I made, usually ending up almost undecided. For an undergrad who has to choose between a career and higher education, it couldn't have come at a worse time. I was already in doubt whether I was applying to graduate schools simply because it was what the best among us were doing. Equally, I didn't know whether it was simply cold feet/cowardice or the safety of a job at Amazon India that was making me have second thoughts about higher education.

Coming back: Dan starts off by using visual illusions as a metaphor. Here's an example:

The grey square optical illusion, and the proof that the two squares are the same color

Take a look at the image on the left. It seems absolutely impossible that the squares marked A and B are the same. They seem to be opposing colors, but in fact they are exactly the same. The proof is in the image on the right. What is going on here? How can it be that we see wrong? How can it be that even after we are shown that these two patches are identical, we still can't see them accurately when we look left again?

Dan’s ( more colorful ) example is here. Quoting from his blog:

Now, vision is our best system. We have lots of practice with it (we see many hours in the day and for many years) and more of our brain is dedicated to vision than to any other activities. So consider this — if we make mistakes in vision, what is the chance that we would not make mistakes in other domains? Particularly in domains which are more complex (dealing with insurance, money, etc.), and ones in which we have less practice? Domains such as decision making and economic reasoning?

So millions of years of evolution of the eyes and the visual cortex ( to which, as Dan notes, more of our brain is dedicated than to any other activity ) has given us a visual system that can't even see the world for what it truly is, even after it is explicitly demonstrated? Not exactly…

Gestalt Theory

One of the plenary talks @ NaBIC was on Gestalt psychology. Gestalt in German means the shape of an entity's complete form. The principle behind gestalt theory is that our brain is very holistic and understands more than what the sum of the parts indicates. The concept itself is somewhat difficult to put down in words, but a few examples expose some interesting aspects of our cognition system. For example, take a look at the following picture:

Emergence

If at first you cannot make sense of the image, take a second to look at it before continuing with the text. The picture demonstrates `emergence`. After a while, the scene with the Dalmatian dog sniffing the path emerges. One can even make out the fallen leaves, the crossroads and the trees in the background. We do not recognize the dog's body parts and then put them together to form the concept of the dog! Instead we perceive the dog first and then make sense of its parts. Gestalt theory gives only a description of this phenomenon and does not provide any explanation as to how we do it ( and it has been zinged a lot for that ).

Reification

Reification

Another phenomenon is called `reification`. It is when we perceive more than what we see. Look to the right. There is no triangle in A, but we perceive it; no rectangle in B, but we see it. We can sense the presence of a sphere in C and the ( absent ) surface of water over which the snake glides!

Multistability

Multistability

Before this post becomes a copy of the wiki page, one last example. This phenomenon is called `multistability`. As you keep looking at the images to the left, you keep shifting between two interpretations of the same image. The first image is the Necker cube.

You can read more on Gestalt in its wiki page. For reviews of technical articles, see here.

There is already some criticism of the traditional statistics-based approach to computer vision, and it seems almost impossible for any system today to truly replicate any of the phenomena demonstrated above. But there have been some attempts.

It is said that almost every branch of science follows a steep sigmoid. And there was consensus at the winter school ( which included Jitendra Malik, Yann LeCun, Yair Weiss, Martin Wainwright, Bruno Olshausen, Richard Zemel, William Freeman and Manik Varma — I am mentioning only the CV people here ) that computer vision is currently at the bottom end, on the verge of the steep rise; and that fifty years from now the best computer vision systems could be based on a completely different set of fundamentals.

Understanding Motion

Federer in action

Point light display

The next level is understanding videos, or at least sequences of images. But can't we understand the action from a single image? Sure, if it's an image like the one on the left, it says a lot about the action being performed and precludes the need to understand image sequences. But what about the image on the right? Umm, not so sure. Looks like a human, but it's not trivially clear what it is or what it is doing.

Watch the related videos on YouTube.

That makes it a lot clearer. Dr. Malik presented a video of the research work of Dr. Gunnar Johansson from the 1970s ( I have not been able to locate the original video; it's not on YouTube ). There has been a lot of work since then trying to understand our predisposition to recognizing biological motion. Apparently babies only three months old can perceive as much. So can other mammals!

Some pages on biological motion detection:


Computer Go (….continued)

The last post on Computer Go (about a year back) was a brief description of the game, and why it is very difficult to approach Go in the same way we tackled Chess.

In a 4-page article in IEEE Spectrum, Feng-hsiung Hsu – the architect and principal designer of IBM's Deep Blue – discusses in some detail how Go could be solved using brute force and variations of min-max search itself. The article does not go into great detail about the principal problem with computer Go – the evaluation of board positions. Also, one statement caught my eye:

On top of that, in 10 years Moore’s law is expected to present us with still another 100-fold speedup.

I was under the impression that Moore's law had already reached its limit. However, by adding more and better processors, supercomputers will undoubtedly be able to do more operations in parallel. In this article we take a look at the most successful technique that takes advantage of the massive parallel processing power of today's supercomputers – Monte Carlo techniques using UCT – and deal with the advancements made to it in a later post.

As a side note – for a while now, I had felt that these approaches are not the best way to tackle Go, and that systems that mimic the behavior of human pattern recognition and learning mechanisms are the way to go ( no pun intended ). There was recently a small discussion on the Go mailing list about the use of neural networks in tackling Go. This one remark by Alvaro Begue has strongly affected my sentiments on the issue:

People tried to make flying machines by imitating birds for a long time. Planes and helicopters fly, but not like birds do it. Similarly, I believe that whenever we figure out how to make machines that play go well, they will not do it like a brain does it.

The point being made here is that the computing systems we have today are very different from the wirings in our own brains. We do not have a very good brain-like pattern recognition system. What we do have are really fast parallel systems that perform simple operations at an amazing speed. And our aim should be to learn to use that effectively.

Monte Carlo Go – MoGo and CrazyStone

The introduction of Monte Carlo methods was a great boost to computer Go. Currently, the best computer Go program is MoGo, and it uses Monte Carlo evaluation with the UCT algorithm. MC-Go engines explore the game tree up to a certain depth and then use Monte Carlo simulation to evaluate the leaf positions. The biggest issue in computer Go is that there is no easy way to evaluate the resulting board position once the min-max tree has been explored to a certain depth. Board evaluation is relatively easy in Chess: positions and pieces directly give away the value of the board. There is no such simplicity in Go. Territories may change hands, and patterns formed early may have complex implications across the board.

In Monte Carlo evaluation, multiple simulations are run to evaluate the board position. A stochastic player plays a number of random games against itself, starting from this position and continuing until the end, and the average of all the final scores is taken as an estimate of the board's value. In some cases, the frequency of wins from the position is used as the position's value instead.
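Here's the idea reduced to code. A real MC-Go engine plays out Go positions; as a self-contained stand-in, the toy below evaluates a position in one-pile Nim ( take 1–3 stones, whoever takes the last stone wins ) by pure random playouts and reports the win frequency. Everything game-specific here is an assumption for illustration.

```cpp
// Monte Carlo position evaluation on a toy game: run random playouts to
// the end, score them, and use the win frequency as the position's value.
#include <cstdio>
#include <cstdlib>

struct Nim {                    // stand-in for a Go board position
    int stones;                 // the state
    int player;                 // 0 = the side whose position we evaluate
    bool over() const { return stones == 0; }
    // Whoever took the last stone wins; if it is now player 1's turn,
    // player 0 made the final move.
    double score() const { return player == 1 ? 1.0 : 0.0; }
};

double evaluate(const Nim& pos, int simulations) {
    double wins = 0;
    for (int s = 0; s < simulations; ++s) {
        Nim g = pos;
        while (!g.over()) {                      // the stochastic player:
            int take = 1 + std::rand() % 3;      // uniformly random legal move
            if (take > g.stones) take = g.stones;
            g.stones -= take;
            g.player ^= 1;
        }
        wins += g.score();
    }
    return wins / simulations;                   // estimated value of pos
}

int main() {
    Nim pos{10, 0};
    printf("estimated value: %.3f\n", evaluate(pos, 10000));
}
```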

Something that we could improve is the stochastic player. Ideally, to effectively evaluate the board position, the stochastic player should model the opponent as closely as possible. This is understandably very difficult to achieve. Instead, it is far more practical to bias the stochastic player towards picking better moves ( as both players are, in reality, going to draw from a set of good moves ) instead of picking all moves uniformly at random.

A naive implementation of a Monte Carlo simulation based evaluation function, where every leaf is evaluated by running a constant number of simulations, will naturally waste simulations on poor board positions. Instead, it is better to distribute the available simulations so that more promising moves receive more of them. This leads us to the exploration vs exploitation problem.

Exploration vs Exploitation problem

An intuitive example of this is the n-armed bandit problem. A gambler wants to maximize his winnings from a slot machine with n arms. Each arm has a different mean payout associated with it. Through repeated plays, the gambler is to maximize his winnings by concentrating his plays on the best levers – except he doesn't know which lever is the best. In the absence of this knowledge, the gambler must choose between a greedy action ( exploitation ), where he selects the lever which has given him the maximum reward so far, and exploring a different lever in the hope that it'll give a greater reward. In MC-Go, during evaluation, every move is treated as a different bandit problem.
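The simplest gambler worth simulating is the epsilon-greedy one: pull the best-looking lever most of the time, and a random one occasionally. The payout probabilities below are made up, and are of course hidden from the gambler's strategy.

```cpp
// An n-armed bandit with an epsilon-greedy gambler: exploit the best
// estimate most of the time, explore a random lever the rest of the time.
#include <cstdio>
#include <cstdlib>

int main() {
    const int n = 5;
    const double mean[n] = {0.2, 0.5, 0.3, 0.8, 0.4};  // unknown to the gambler
    double total[n] = {0}, value[n] = {0};             // running estimates
    int pulls[n] = {0};
    const double eps = 0.1;                            // explore 10% of plays
    double winnings = 0;

    for (int t = 0; t < 10000; ++t) {
        int arm;
        if ((double)std::rand() / RAND_MAX < eps) {
            arm = std::rand() % n;                     // explore
        } else {
            arm = 0;                                   // exploit best estimate
            for (int j = 1; j < n; ++j)
                if (value[j] > value[arm]) arm = j;
        }
        double reward = ((double)std::rand() / RAND_MAX < mean[arm]) ? 1 : 0;
        winnings += reward;
        total[arm] += reward;
        value[arm] = total[arm] / ++pulls[arm];
    }
    printf("winnings: %.0f, pulls of the best arm: %d\n", winnings, pulls[3]);
}
```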

Using UCT ( UCB1 applied to Trees ) in MC-Go

The UCB1 algorithm simply picks the move $a_j$ which maximizes the value $\bar{X}_j + \sqrt{2 \ln n / T_j}$, where $\bar{X}_j$ is the average of all rewards obtained by selecting action $a_j$, $T_j$ is the number of times action $a_j$ was picked, and $n$ is the total number of times any action was picked. It would be expensive, memory-wise, if we were to maintain information about each bandit problem and its values. Therefore, both MoGo and CrazyStone do the Monte Carlo simulation in two phases. In the first phase, called the tree policy, UCB1 is followed; in the other, a default policy ( using the stochastic player discussed earlier ) is used.
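In code, UCB1 selection is just a few lines. This is a generic sketch, not MoGo's or CrazyStone's actual implementation:

```cpp
// UCB1: pick the move j maximizing X_j + sqrt(2 ln n / T_j).
// Unvisited moves are tried first (their bonus is effectively infinite).
#include <cmath>
#include <vector>

struct Arm { double totalReward = 0; int visits = 0; };

int selectUCB1(const std::vector<Arm>& arms, int totalVisits) {
    int best = -1;
    double bestVal = -1e300;
    for (int j = 0; j < (int)arms.size(); ++j) {
        if (arms[j].visits == 0) return j;       // try every move once
        double mean = arms[j].totalReward / arms[j].visits;      // X_j
        double bonus = std::sqrt(2.0 * std::log(totalVisits)
                                 / arms[j].visits);              // exploration term
        if (mean + bonus > bestVal) { bestVal = mean + bonus; best = j; }
    }
    return best;
}
```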

Starting from the board position to be evaluated as the root, MoGo selects a move according to UCB1. It then descends down the tree in a similar manner, until the selected node is not yet a part of the tree. MoGo then adds this node to the tree. The tree policy ends here. From here on, the default policy using the stochastic player is used. The video shown illustrates the technique.

Resources for further reading

Most of the content here is from the slides of either Sylvain Gelly or Rémi Coulom. Following up on the side note, there are in fact two fairly strong Go programs built on neural networks – NeuroGo and Steenvreter. Let's see if I can read up on them and blog about it sometime. But soon, there'll be a post on RAVE and hints on RL-Go.

Genetic Algorithms.., A Primer

It is Charles Darwin’s 200th birthday. So I guess there must be loads of posts on GA, strewn across the Internet. Genetic Algorithms are special to me because they are the prime reason why I am doing whatever it is that I am doing.

This post is a light intro to Genetic Algorithms, sans the jargon. Genetic Algorithms are delightfully simple. A GA is an approximation algorithm; that is, it is used in situations where either the optimal solution is unnecessary or the complexity of the problem does not permit finding it. Sometimes, the Genetic Algorithm may produce the optimal result too. Since examples are the best way to learn an algorithm, here's the example problem we are to solve.

Balancing moments

Given to you is a configuration of balances as shown in the figure. The problem is to balance the configuration by adding weights on either side. You are provided with 10 weights measuring 1, 2, 3, …, 10 kg each. Each weight is to be used exactly once. Thanks to GreatBG for this puzzle; his puzzles are always most entertaining. This can roughly be equated to the traveling salesperson problem. In that problem, the salesperson is to visit all cities ( and come back ) in such a sequence as to minimize the total distance covered. The most naive solution is to try all possible sequences and see which one gives the minimum cost. For a map of n cities, that comes to n! combinations. A similar approach would work in our case too, but when the map or the number of weights is larger, the technique takes far too much time. A more professional approach would be to ruffle up your knowledge of elementary physics, reduce the problem to a set of linear equations and then solve them. Naaa.

Genetic Algorithms

My code, in C++, is here. Here's the pseudo-code, sourced from Wikipedia:

  1. Choose initial population
  2. Evaluate the fitness of each individual in the population
  3. Repeat until termination: (time limit or sufficient fitness achieved)
    1. Select best-ranking individuals to reproduce
    2. Breed new generation through crossover and/or mutation (genetic operations) and give birth to offspring
    3. Evaluate the individual fitnesses of the offspring
    4. Replace worst ranked part of population with offspring

Initializing the Population

The algorithm maintains a set of solutions. Not optimal solutions, just solutions. In the case of the TSP, it may be a set of random orderings of the cities; in our problem, a set of random configurations of weights. In the program, I maintain each configuration as a string, where the weight allotted to each slot is represented by a character, with 'A' being 1, 'B' being 2 and so on. The weights are added bottom to top and left to right. One point to be noted here is that all solutions in the set must stick to the constraints of the problem: no city may be revisited, no weight can be added twice.
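In code, seeding the population is a shuffle ( a sketch along the lines of, but simpler than, the linked program ):

```cpp
// Seed the population with random permutations of 'A'..'J'
// ('A' = 1 kg, ..., 'J' = 10 kg), one character per slot.
#include <algorithm>
#include <random>
#include <string>
#include <vector>

std::vector<std::string> initPopulation(int size, std::mt19937& rng) {
    std::vector<std::string> pop;
    std::string base = "ABCDEFGHIJ";
    for (int i = 0; i < size; ++i) {
        std::shuffle(base.begin(), base.end(), rng);  // a valid permutation:
        pop.push_back(base);                          // every weight used once
    }
    return pop;
}
```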

Evaluating the Population

This is the key part of the algorithm. Once a way is found to evaluate each member of the 'population', the problem is already half solved. In our case, the "fitness" will be based on the resultant moment in the system. Therefore, the best solution will have a fitness of 0 – the resultant moment in the system would be zero.
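The real fitness function walks the nested balances in the figure; since that configuration lives in the picture rather than the text, here is a stand-in that captures the idea – assume a single beam with ten slots at distances −5…−1 and +1…+5 from the pivot:

```cpp
// Stand-in fitness: |net moment| of the configuration string.
// The slot positions below are an assumption, not the figure's layout.
#include <cmath>
#include <string>

double fitness(const std::string& s) {
    static const int arm[10] = {-5, -4, -3, -2, -1, 1, 2, 3, 4, 5};
    double moment = 0;
    for (int i = 0; i < 10; ++i)
        moment += (s[i] - 'A' + 1) * arm[i];   // weight in kg times lever arm
    return std::fabs(moment);                  // 0 means perfectly balanced
}
```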

Until the required precision is obtained, or a maximum limit on the number of iterations is reached, we continue by performing the following operations.

Crossover

This is where we actually begin to understand why these are called Genetic Algorithms in the first place. According to Darwin's theory of natural selection, the best members of the population mate to produce better offspring. Similarly, in our algorithm, the best solutions found so far mate with each other to ( hopefully ) produce better solutions. In code, this translates to merging substrings from different members, chosen from among the best solutions found so far. Note that if done improperly, this could result in an offspring that is not even a valid member of the population – it might violate the problem's constraints. In TSP, and in this case, the same city/weight might get listed twice, which cannot be allowed. So we use a special crossover method called Partially Matched Crossover ( PMX in the code ), which is beyond the scope of a primer.

Mutation

Some members of the population are randomly chosen to be mutated. Mutation is a process by which the offspring get a trait that is inherited neither from the father nor the mother. Translated into code, this simply means swapping two characters within the string. You can choose more complex mutation techniques if desired. Again, the problem constraints must be satisfied. Mutation is required mainly to increase the variety of solutions in the population. Variety is very important.
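The swap mutation is two lines:

```cpp
// Swap-two-characters mutation: the string stays a valid permutation,
// so no weight is duplicated or dropped.
#include <random>
#include <string>
#include <utility>

void mutate(std::string& s, std::mt19937& rng) {
    std::uniform_int_distribution<int> slot(0, (int)s.size() - 1);
    std::swap(s[slot(rng)], s[slot(rng)]);
}
```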

Natural Selection and Elitism

Once these operations are done on the parents, a new generation of solutions is born. To make sure that the best solution in this generation is at least as good as the best one in the previous generation, we introduce "elitism". Normally, all the individuals in the previous generation are replaced by members of the newer generation. But with a nonzero elitism value, a small portion of the best solutions from the previous generation survive on into the next.

Going by the paradigm of survival of the fittest, we now select the best solutions from both generations, and these form the actual next generation. ( My code is slightly different: it does not use natural selection; the members of the younger generation simply replace all but the elite members of the elder population. ) A sketch of the elitist replacement follows.
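This sketch reuses the fitness() from above and assumes equal-sized generations:

```cpp
// Elitism: the best `elite` parents survive; the rest of the next
// generation is filled with the best children. Lower fitness is better.
#include <algorithm>
#include <string>
#include <vector>

void nextGeneration(std::vector<std::string>& parents,
                    std::vector<std::string>& children, int elite) {
    auto byFitness = [](const std::string& a, const std::string& b) {
        return fitness(a) < fitness(b);    // fitness() from the earlier sketch
    };
    std::sort(parents.begin(), parents.end(), byFitness);
    std::sort(children.begin(), children.end(), byFitness);
    std::vector<std::string> next(parents.begin(), parents.begin() + elite);
    next.insert(next.end(), children.begin(), children.end() - elite);
    parents = next;                        // same population size as before
}
```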

Local Maxima

Genetic Algorithms suffer from the problem that they might converge to a non-optimal solution. Suppose we are looking for the solution at the top of a mountain. It is possible that the population ends up on an elevated hill, far away from the mountain. This can happen if the various parameters are not tuned properly, and the algorithm will not get out of this situation by itself. The best way out in this kind of scenario is to simply start again.

People, Blogs, Conferences and Applications

The Game of Go

Game of Go

It is amazing how simple rules can result in breathtaking complexity. This post is about another such game:

Go

Chances are, you have already seen people play the game. I had seen it first in ‘Fist of Legend‘, Jet Li’s adaptation of the ‘Fist of Fury‘.

It is so old that it was mentioned in the 'Analects of Confucius' ( c. 479 B.C. ). In fact, the American Go Association cites this as one of the top ten reasons to play Go.

When you play go, you are doing something that billions of people have done for thousands of years. Many centuries ago, people were doing the exact same thing, in the exact same way.

What an absurd reason to give a promising player for choosing the game!

Crisply, the rules as specified here:

  • Go-board(Goban):19 x 19 intersections.
  • Black and White play alternately.
  • Black starts the game.
  • Adjacent stones ( of the same colour ) are called a string.
  • Liberties are the empty intersections next to the string.
  • Stones do not move, they are only added and removed from the board.
  • A string is removed if its number of liberties is 0.
  • Score: territories ( the number of occupied or surrounded intersections ).

Plain and simple. There is also a large set of terms associated with the game which you will need to know if you are playing / coding a player. All of them Japanese. I strongly agree with the subject of this post, so try to play a few games before you get coding. I would suggest GNU Go, with Quarry, to make a good beginning.
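The one rule with any mechanics in it is the liberty count, and even that is a simple flood fill. A sketch ( the board encoding is an assumption: 0 empty, 1 black, 2 white ):

```cpp
// Count the liberties of the string containing the stone at (r, c):
// flood-fill same-coloured stones, counting adjacent empty points once.
#include <vector>

const int N = 19;

int countLiberties(const std::vector<std::vector<int>>& board,
                   int r, int c, std::vector<std::vector<bool>>& seen) {
    int colour = board[r][c];
    int libs = 0;
    seen[r][c] = true;
    const int dr[4] = {-1, 1, 0, 0}, dc[4] = {0, 0, -1, 1};
    for (int k = 0; k < 4; ++k) {
        int nr = r + dr[k], nc = c + dc[k];
        if (nr < 0 || nr >= N || nc < 0 || nc >= N || seen[nr][nc]) continue;
        if (board[nr][nc] == 0) { seen[nr][nc] = true; ++libs; }  // a liberty
        else if (board[nr][nc] == colour)                         // same string:
            libs += countLiberties(board, nr, nc, seen);          // keep filling
    }
    return libs;   // 0 => the whole string is removed from the board
}
```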

However, computer science considers a human-level computer Go player an open problem. It belongs in the company of problems like `P = NP` and the `Church–Turing hypothesis`.

A comparison with Chess, ought to bring more light on the subject.

  • Computer Chess programs have defeated Chess Grandmasters, and have become so good that they are now competing among themselves for Chess supremacy.
  • An average Go program, on the other hand, is routinely trounced by Asian school children.
  • At any stage of the game, Chess, on average, has 37 possible moves, decreasing as the game progresses. ( Fewer pieces, fewer moves )
  • Go, on the other hand, averages 250 possible replies per move.
  • There are many terms in Go, look here, each of which needs a representation in the program.
  • Chess has very few such terms – en passant, fortress, and the like – and provides less of a challenge to a programmer who has little knowledge of the game.

If you cannot truly assess the complexity of the problem, here's one more piece of trivia: Go is associated with the number 10^700. This number is almost certainly an overestimate, considering that the estimated number of particles in the Universe, when I last checked, was 10^91! However, more reasonable estimates aren't too far from this number either.

A brief Intro on how computers play ( board ) games:

Computers play games by searching a state-space tree. Many use the Minimax Theorem. Chess, Go and most other board games are said to be complete-information deterministic games, in that both players know everything there is to be known from the beginning to the end of the game. There is no role for chance here. In 1928, von Neumann proved his minimax theorem for such two-player zero-sum games. In non-formal terms, a zero-sum game is a game in which one player's winnings equal the other player's losses. It is pretty much like cake cutting. So a good move is one that leaves the opponent with only worse moves, all variants leading to positions favourable for you and therefore unfavourable for the opponent.

An example of the minimax algorithm should be very useful here :

Minimax Example

Consider a two-player game where each player takes alternate turns ( which covers the vast majority of board games ). Now, as shown in the figure, you have three options, to each of which your opponent has three options in reply. The tree's root indicates that you'll choose the move that lands you in the best position possible when you are next to play. The numbers in the circles indicate your position after that move. Level 2 is your opponent's move. Initially, the tree is constructed with empty circles, and evaluation is done from the leaves to the root.

Some evaluation function, say f(x), evaluates the board position and returns the values at the leaf level. You take the minimum of the points your opponent's move can leave you with and put it in the corresponding higher level. Then, you take the maximum of all the points your move will leave you with and put it in the level above that. What has happened? You are left with a guarantee that you'll get at least +1 no matter what your opponent's response is. In simpler terms, you have maximised your minimum profit. And hence the term minimax.
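The figure's logic, as a program ( the leaf scores below are made-up numbers, chosen so the answer comes out to the +1 in the text ):

```cpp
// 2-ply minimax: for each of your 3 moves take the opponent's best (for
// them, i.e. minimal for you) reply, then pick the move with the largest
// such guaranteed value.
#include <algorithm>
#include <cstdio>

int main() {
    // leaf[i][j] = your score if you play move i and the opponent replies j
    int leaf[3][3] = {{ 3, -2, 1},
                      { 5,  1, 7},
                      { 0, -4, 9}};     // assumed values, not the figure's
    int best = -1000;
    for (int i = 0; i < 3; ++i) {
        int worst = 1000;
        for (int j = 0; j < 3; ++j)     // opponent minimizes your score
            worst = std::min(worst, leaf[i][j]);
        best = std::max(best, worst);   // you maximize the guaranteed minimum
    }
    printf("guaranteed value: %d\n", best);   // prints 1
}
```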

** A minimax tree is generally more than 2 ply deep ( 1 ply is a single player's turn; 'turn' by itself is an ambiguous term across different games ).

Here is more on zero-sum games. A wiki minimax example.

Here is an article that shows why the Minimax Theorem is not always practically optimal.

Many optimizations have been made to this approach, the most prominent of them being alpha-beta pruning.
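The idea of alpha-beta is to stop expanding a branch as soon as you know the opponent would never let the game get there. A generic sketch over an abstract game tree ( the Node type is a stand-in, not any engine's real interface ):

```cpp
// Minimax with alpha-beta pruning. alpha = best value the maximizer can
// force so far, beta = best the minimizer can force; prune when they cross.
#include <algorithm>
#include <vector>

struct Node {
    int value = 0;                  // meaningful only at leaves
    std::vector<Node> children;     // empty => leaf
};

int alphabeta(const Node& n, int alpha, int beta, bool maximizing) {
    if (n.children.empty()) return n.value;
    if (maximizing) {
        int best = -1000000;
        for (const Node& c : n.children) {
            best = std::max(best, alphabeta(c, alpha, beta, false));
            alpha = std::max(alpha, best);
            if (beta <= alpha) break;   // opponent will never allow this branch
        }
        return best;
    } else {
        int best = 1000000;
        for (const Node& c : n.children) {
            best = std::min(best, alphabeta(c, alpha, beta, true));
            beta = std::min(beta, best);
            if (beta <= alpha) break;   // we already have a better option above
        }
        return best;
    }
}
```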

Naturally, it matters how deep your tree is. The evaluation of the game state is just as crucial, if not more, in the grand scheme of things. A good evaluation function can reduce how deep you need to look.

All of the most famous Chess engines of our time use this technique. For a game like Chess, with an average branching factor of 35–50, specialised hardware with pre-calculated opening and endgame tables is enough to beat a Chess Grandmaster. In fact, it even defeated the highest-rated Grandmaster in history.

Such an approach becomes immediately inadequate in the game of Go. No hardware can search so large a state space, and an accurate evaluation function is also very difficult to arrive at. Well then, how do today's programs approach Go?

A look at the current Computer Go scenario:

The most powerful computer Go programs in the world today are :

  • GNUGo
  • MoGo
  • CrazyStone
  • Go++
  • NeuroGo

A rough analysis of these five should be of great interest. Updates to this post and further posts will be on this subject.



The Game of Life

This is not an article on philosophy.

This is about a zero-player game called Life.

Considering it's a zero-player game, who would want to play it?

Gosper's Glider Gun

But before that, let me just give you a brief intro on what it is and how it is played. It is what one would call, in computer science terminology, a cellular automaton. Cellular automata are studied primarily by biologists, mathematicians and computer scientists.

In the second year of our computer science course, we had a paper on Theory of Computation, and "automata" was the most uttered word in the class. The paper was pure theory and did not seem to give any scope for fun or enjoyment, unless you were a nerd, or a mathematician.

It was in Robert Kruse's Data Structures and Program Design that I first came across Life. So here's the deal.

The universe of the Game of Life is an infinite two-dimensional orthogonal grid of square cells, each of which is in one of two possible states, live or dead. Every cell interacts with its eight neighbours, which are the cells that are directly horizontally, vertically, or diagonally adjacent. At each step in time, the following transitions occur:

1. Any live cell with fewer than two live neighbours dies, as if by loneliness.
2. Any live cell with more than three live neighbours dies, as if by overcrowding.
3. Any live cell with two or three live neighbours lives, unchanged, to the next generation.
4. Any dead cell with exactly three live neighbours comes to life.

The initial pattern constitutes the ‘seed’ of the system. The first generation is created by applying the above rules simultaneously to every cell in the seed — births and deaths happen simultaneously, and the discrete moment at which this happens is sometimes called a tick. (In other words, each generation is a pure function of the one before.) The rules continue to be applied repeatedly to create further generations.
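The rules translate almost word for word into code. A sketch of one tick – on a wrapped finite grid, since the real board is infinite:

```cpp
// One generation of Life on a toroidal (wrapped) grid: count the eight
// neighbours of every cell and apply the four rules simultaneously.
#include <vector>

using Grid = std::vector<std::vector<int>>;   // 1 = live, 0 = dead

Grid tick(const Grid& g) {
    int R = g.size(), C = g[0].size();
    Grid next(R, std::vector<int>(C, 0));
    for (int r = 0; r < R; ++r)
        for (int c = 0; c < C; ++c) {
            int live = 0;
            for (int dr = -1; dr <= 1; ++dr)
                for (int dc = -1; dc <= 1; ++dc) {
                    if (dr == 0 && dc == 0) continue;
                    live += g[(r + dr + R) % R][(c + dc + C) % C];
                }
            // births and deaths happen together: the new grid is a pure
            // function of the old one
            next[r][c] = (live == 3) || (g[r][c] == 1 && live == 2);
        }
    return next;
}
```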

Simple. Very basic indeed – simulations of how life may exist; overcrowding, loneliness, all only too familiar to us humans. Simplicity is always a defining characteristic of an automaton. So there would be a starting population, and once the sequence starts, it continues without any intervention.

I see no further point in explaining in theory stuff that you can really enjoy once you see it for yourself. I'll direct you to a Java applet at Math.com – somewhere in the page is a link to it. I had also seen an online Flash implementation, as an active essay, which I am unable to retrieve now. Anyway, go ahead. Experiment. Try your own signature, and see what it results in. How about drawing a smiley – what happens? Chances are that the population will eventually dwindle into nothing, or it will stabilise, i.e., split up and form gangs, stop interacting with the other gangs, and eventually settle down.

When John Conway published his ideas in Scientific American in 1970, he had conjectured that there could not exist initial populations that would neither die off nor stabilise. He offered $50 as prize money to anyone who could prove or disprove his conjecture. A team from MIT led by Bill Gosper won the prize. Gosper disproved Conway's idea by developing an initial population, called the Glider Gun, which generated a glider every 30 generations. That's the picture at the top of the page. Eventually more and more initial populations were found that had traits similar to the Glider Gun. Many of them were, more or less, guns: every X generations the population would produce a unit that would fly away and live an endless life. I have provided a link at the bottom, in case you are interested.

Glider

Glider

Gosper's glider gun was a hit. So much so that this symbol called the 'Glider', to the right, was proposed as the emblem of the hacker community. 'The hacker community' I am referring to here is not the one that goes around busting the Internet – any enthusiastic coder who tweaks software could roughly be called a hacker. I am obliged to give a reference here.

[ A similar, but not related, concept is the Collatz sequence: a simple rule indicating which number one particular number mutates into next. The sequence ends when you reach one. The conjecture is that from any number, on applying the rules, you always end in the same pattern => 16-8-4-2-1 ]
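( The Collatz rule itself is one line: if the number is even, halve it; if odd, triple it and add one. )

```cpp
// The Collatz rule: n -> n/2 if n is even, 3n+1 if n is odd; the
// conjecture is that every start eventually reaches ... 16 8 4 2 1.
#include <cstdio>

int main() {
    long long n = 27;                           // try any starting number
    while (n != 1) {
        printf("%lld ", n);
        n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
    }
    printf("1\n");
}
```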

It only gets more interesting. The Game of Life is said to be Turing complete. A Turing machine is essentially an infinite tape with a pointer attached to it. But, amusingly, you can do EVERYTHING using a Turing machine that you can do using a modern-day computer. It may sound a little insane, perhaps, when I say that a tape can simulate a PC. I restate: this Game can simulate your PC. Look at it this way. You would believe me if I said that AND, OR and NOT gates could simulate a PC – naturally, that's what PCs are made of, right? Maybe it sounds a little more sane, or credible, to say that the game can simulate all of these gates.

Hash Life

Consider implementing the Game of Life in, say, Java. The most common approach is to keep two copies of a 2D Boolean array and juggle between them as the current population. Bill Gosper also developed an algorithm for a much faster implementation of Life, called Hash Life. There are various places where one could tweak the original approach, but Hash Life remains one of the fastest algorithms known. I hear that there are only 200 or so implementations of the algorithm in the world. Maybe one day, I could implement one of my own…

In retrospect, John Conway's original implementation used stones and pebbles.

A crazy Proposition

I have recently been getting more interested in evolution, physiology and Genetic Algorithms. And I started wondering if one could evolve an initial population that would keep multiplying. It seems utter nonsense in one sense, in that the "world" is utter chaos. How do you judge the fitness of populations? How would one implement a crossover? Anyway, just a crazy idea. I wanted to put it down in case somebody stumbles upon it and gives me a hint.

Further Reading: