Collective Intelligence - Based on the ant's system



Nature has inspired scientists since time immemorial, be it the machines that left the ground and took to the skies a century or so ago or the small robots that mimic insect behavior. This is mainly because nature has refined its designs over millions of years and has reached a level of perfection that any humanly devised machine or plan can only dream of.


Take, for example, the research done on social insects over the last fifty years, which has revealed how an ant comes to choose the shortest path between its colony and a food source. Another example of such intelligence is termites building huge mounds with hardly any direct communication. Each termite seems to follow a set of simple rules: first search at random for an elevated spot, then drop some chewed dirt there, and keep dropping dirt on any spot marked with saliva. The termite needs no global knowledge and no more memory than is necessary to complete the sub-task at hand; it just invokes a simple behavior dependent on the state of its immediate environment.
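

As a rough illustration of this kind of rule set, the sketch below simulates a single termite on a small grid in C++. The Cell fields, the grid shape and the random-walk step are illustrative assumptions introduced here, not part of the original description.


// Minimal sketch of the termite rules described above: wander at random and
// drop chewed dirt wherever an elevated spot or a saliva mark is found.
// Everything here is an assumption used only to make the rules concrete.
#include <cstdlib>
#include <vector>

struct Cell { int height = 0; bool saliva = false; };

struct Termite
{
    int x = 0, y = 0;

    // One behavioural step, driven purely by the local state of the grid.
    void Step(std::vector<std::vector<Cell>>& grid)
    {
        int n = static_cast<int>(grid.size());
        x = (x + (std::rand() % 3) - 1 + n) % n;   // random walk on a square grid
        y = (y + (std::rand() % 3) - 1 + n) % n;
        Cell& here = grid[x][y];
        if (here.saliva || here.height > 0)        // elevated or marked spot
        {
            here.height += 1;                      // drop chewed dirt
            here.saliva = true;                    // re-mark it for other termites
        }
    }
};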


This kind of multi-agent, distributed artificial intelligence is called collective intelligence, swarm intelligence or emergent behavior. Instead of applying complex algorithms to static datasets, the study of social insects shows that simple algorithms can often do just as well when allowed to make systematic changes to the data in question.


Another lesson that may help us build better algorithms as engineers is the idea of self-organization. A self-organizing (SO) system is any dynamic system in which order emerges entirely from the properties of the individual elements in the system, and not from external pressures. Consider, for example, a trail of ants exploiting a food source: as soon as the food is depleted, the number of ants on the trail decreases, the indirect communication through chemicals (pheromones) weakens, and the ants finally abandon that path.


A valid question that may arise is how AI differs from what has just been described as Collective Intelligence (COIN). AI tries to embody the natural intelligence of the human mind in a machine, whereas COIN tries to mimic the far more primitive insect brain. AI requires either complex algorithms or a lot of machine-learning time to become intelligent. Collective intelligence instead has features more like those of a distributed system:


• Scalability


• Fail-safety and graceful degradation


• Local autonomy (no centralized authority)


• Redundancy (more than one agent working on the same task)


ANT COLONIES AND MULTI-AGENTS


Individual ants are simple insects with limited memory, capable of performing only simple actions. However, an ant colony exhibits a complex collective behavior that provides intelligent solutions to problems such as carrying large items, forming bridges and finding the shortest routes from the nest to a food source.


A single ant has no global knowledge about the task it is performing. An ant's actions are based on local decisions and are usually unpredictable. The intelligent behavior naturally emerges as a consequence of self-organization and indirect communication between the ants. This is what is usually called Emergent Behavior or Emergent Intelligence.


The fascinating behavior of ants has inspired researchers to create new approaches based on some of the abilities of ant colonies. Existing applications include the Travelling Salesman Problem, graph coloring, logistics and many more.


The practical example covered in this essay involves finding a path linking two nodes in a graph. In order to solve this problem, two characteristics of ant colonies will be particularly useful:


• their ability to find the shortest route between the nest and a food source, which will be used to find and optimize a path in the graph;


• the simplicity of each individual ant, which will make it easy for us to model the ant colony as a multi-agent system.


The foraging behavior of ants, which is our main concern, is very simple to understand. Ants use a signaling system based on the deposition of pheromone along the paths they follow, marking a trail. Pheromone is a chemical substance produced by ants that establishes a form of indirect communication among them. Basically, an isolated ant moves at random, but when it finds a pheromone trail there is a high probability that it will decide to follow the trail.


An ant foraging for food lays down pheromone along its route. When the ant finds a food source, it returns to the nest, reinforcing its trail. Other ants in the vicinity are attracted by this substance and are more likely to start following the trail, thereby laying more pheromone on it. This process works as a positive feedback loop: the higher the pheromone intensity on a trail, the higher the probability that an ant will start traveling along it.


In order to understand how this process leads the colony to optimize a route, let's take a look at the following example.


Suppose some ants were randomly searching for food and found two different routes between the nest and the food source. Since route B is shorter, the ants on this path complete the round trip more often and thereby lay more pheromone on it.


As the process continues, the pheromone concentration on trail B increases at a higher rate than on trail A, and soon even the ants that were using route A will choose to follow trail B.


Since most ants no longer travel along route A, and because pheromone is volatile, trail A starts to evaporate and soon only the shortest route remains.
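

To make the positive feedback concrete, here is a small self-contained C++ sketch of the two-route scenario just described: ants choose a route with probability proportional to its pheromone, deposit an amount inversely proportional to the route length, and the pheromone evaporates each turn. The constants (deposit rule, evaporation rate, numbers of ants and turns) are illustrative assumptions.


// Two routes between nest and food source; B is shorter and eventually wins.
#include <cstdio>
#include <random>

int main()
{
    double lengthA = 2.0, lengthB = 1.0;       // B is the shorter route
    double pheroA = 1.0, pheroB = 1.0;         // start with equal trails
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    for (int turn = 0; turn < 100; ++turn)
    {
        for (int ant = 0; ant < 10; ++ant)
        {
            // Probability of choosing A is proportional to its pheromone level.
            bool choseA = uni(rng) < pheroA / (pheroA + pheroB);
            if (choseA) pheroA += 1.0 / lengthA;   // shorter route => bigger deposit
            else        pheroB += 1.0 / lengthB;
        }
        pheroA *= 0.9;                             // evaporation each turn
        pheroB *= 0.9;
    }
    std::printf("pheromone A=%.2f  B=%.2f\n", pheroA, pheroB);
}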


The Travelling Salesman Problem


The Travelling Salesman Problem (TSP) is a popular 'toy problem' in the AI community since it is at once simple to understand and very difficult (NP-hard) to solve. A salesman needs to complete a tour of a certain number of cities using the most efficient path possible. The salesman can travel from any city to any other city, but must visit each city once and only once. It doesn't sound like a difficult problem until you consider the sheer number of possible solutions, even for a small fifteen-city instance. Researchers realized that the ants' inherent ability to discover shortest paths could be put to use on the TSP and, as it turned out, with some success.


The first TSP solution they developed was called Ant System and works like this: ants first make a number of random tours, each starting from a random city and depositing pheromone as they go. At each city an ant picks its next destination using a combination of probability and the amount of pheromone present on that route. Every ant must visit every city exactly once, as the TSP requires. After a tour is completed, an ant deposits a certain amount of pheromone on each edge of the graph, depending on how far it travelled during its tour: shorter tours lead to more pheromone being deposited. A certain amount of pheromone also decays, causing older solutions to fade away and be replaced by new ones.
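

The city-selection step can be sketched as follows. This is a generic rendering of the standard Ant System rule (pheromone raised to alpha, inverse distance raised to beta, roulette-wheel draw); the function name, parameters and default exponents are illustrative, not taken from the original paper.


// Pick the next city with probability proportional to
// pheromone^alpha * (1/distance)^beta over the unvisited cities.
#include <cmath>
#include <random>
#include <vector>

int PickNextCity(int current,
                 const std::vector<std::vector<double>>& pheromone,
                 const std::vector<std::vector<double>>& distance,
                 const std::vector<bool>& visited,
                 std::mt19937& rng,
                 double alpha = 1.0, double beta = 2.0)
{
    int n = static_cast<int>(pheromone.size());
    std::vector<double> weight(n, 0.0);
    double total = 0.0;
    for (int j = 0; j < n; ++j)
    {
        if (visited[j] || j == current) continue;
        weight[j] = std::pow(pheromone[current][j], alpha) *
                    std::pow(1.0 / distance[current][j], beta);
        total += weight[j];
    }
    // Roulette-wheel selection over the accumulated weights.
    std::uniform_real_distribution<double> uni(0.0, total);
    double r = uni(rng), acc = 0.0;
    for (int j = 0; j < n; ++j)
    {
        if (weight[j] == 0.0) continue;
        acc += weight[j];
        if (r <= acc) return j;
    }
    return -1;   // no unvisited city left
}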


After this process has run for some time, the pheromone present on the edges of the graph will tend to influence any ant traversing the graph towards an optimal solution. Ant System and subsequent versions of the algorithm such as Ant Colony Optimization do not always find optimal solutions, but are effective in finding good solutions in a reasonable number of iterations.


Modeling the ant colony as a multi-agent system


The main idea of the proposed system is simply to set ants walking on a graph and observe the pheromone intensity on its edges over time.


Think of the graph's nodes as different places where ants can stop during a traversal, and let's call them cities. The edges of the graph will, of course, represent the routes connecting the cities. This virtual environment will be populated by agents representing individual ants.


In the AI domain, an agent can be considered an autonomous entity that interacts with an environment. Agents have only a dynamic, partial representation of their environment, updated through their sensing abilities. They can also perform actions based on local perceptions and on their internal representation of the environment. These actions can affect the environment, the agent itself or even other agents.


Implementation


An easy way to develop a multi-agent simulation environment is to use a turn-based system. Think of it as a game in which, at each turn, every player makes one single move. This is exactly how the simulation will work: each agent performs just one action per turn.


All the elements of the environment were modeled using C++ classes. The Ant class represents the agents, the Route represents the edges of the graph, the City represents the nodes and the Civilization represents the environment.


Since these elements are simple, the implementation is simple too. Many properties and methods of the classes are specific to the user interface or merely auxiliary, so only the essential aspects will be covered here.


The City is the simplest entity. It basically knows its (X, Y) position in the environment. This will be necessary to calculate the distance between two cities.
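

The original text does not show the City class, but based on the description it could look roughly like this; the constructor and the DistanceTo helper are assumptions added for illustration.


#include <cmath>

class City
{
private:
    float X;      // Position of the city in the environment
    float Y;

public:
    City(float x, float y) : X(x), Y(y) {}

    // Euclidean distance to another city (assumed helper).
    float DistanceTo(const City& other) const
    {
        float dx = X - other.X;
        float dy = Y - other.Y;
        return std::sqrt(dx * dx + dy * dy);
    }
};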


The Route needs two pointers to City objects in order to identify the cities it is connecting. Another property is the length of the route. The longer the route, the more turns an agent will need to cross it. The pheromone intensity over a route is also represented in this class.


Another important thing that must be simulated is the volatile characteristic of the pheromone. The route will also need a method to simulate its evaporation.


class Route
{
private:
    float Length;        // Length of the route
    int Pheromone;       // Pheromone intensity over the route
    City* FirstCity;     // Cities connected by this route
    City* SecondCity;

public:
    void EvaporatePheromone();   // Simulate the evaporation of the pheromone
};
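

A minimal sketch of how EvaporatePheromone might be implemented follows. The text only says that pheromone is volatile; the particular decay rule (about 10% per turn, at least one unit, never below zero) is an illustrative assumption.


#include <algorithm>

void Route::EvaporatePheromone()
{
    // Lose about 10% of the pheromone each turn, and at least one unit,
    // so that unused trails eventually disappear completely.
    Pheromone = std::max(0, Pheromone - std::max(1, Pheromone / 10));
}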


The most important characteristic of an ant in this context is its individual and unpredictable tendency to choose a certain route among the many available. Each instance of the class Ant must represent an individual agent with its own characteristics. This can be implemented using a mathematical function. As described above, the pheromone level over a route is measured by an integer number. The agent will use a method that evaluates its tendency to choose a route based on the pheromone intensity. A good variability in the behavior of the agents can be obtained with a sinusoidal function with at least three coefficients: alpha, beta and gamma.


The input PL is the pheromone level over a route. Alpha, beta and gamma will be properties of the Ant class, initialized as random float numbers within an interval. These properties make it possible to have different individuals in the population. They are also needed by the genetic algorithms covered in the optimization section.


class Ant
{
private:
    float alpha;         // Coefficients defining the ant's pheromone sensibility
    float beta;
    float gamma;
    bool HaveFood;       // Indicates if the ant is carrying food

public:
    float GetTendency(int PheroLevel);   // Tendency of choosing a route
    void PickFood();                     // Pick food when at the food source
    void LeaveFood();                    // Leave food when at the nest
    void PutPheromone();                 // Increase the pheromone level of the route
    void Walk();                         // Walk one more step
};
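

A possible implementation of GetTendency is sketched below. The text only states that the tendency is a sinusoidal function of the pheromone level with coefficients alpha, beta and gamma, so the exact formula and the clamping to a non-negative weight are illustrative assumptions.


#include <algorithm>
#include <cmath>

float Ant::GetTendency(int PheroLevel)
{
    // A sinusoidal term modulated by the ant's individual coefficients,
    // plus a linear attraction towards stronger trails (assumed form).
    float value = alpha * std::sin(beta * PheroLevel) + gamma * PheroLevel;
    return std::max(0.0f, value);   // treat the tendency as a non-negative weight
}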


The Civilization is the class that controls the whole environment and the simulation process. It will also be responsible for the evolution process. A graph can be created by the user through the interface of the proposed simulation program, and two nodes of the graph must be specified as the nest and the food source. When the simulation begins, a random number of agents are created in the nest. At each turn the agents perform actions depending on their current position, as explained before. As the simulation runs, the most-used routes will have their pheromone levels increased, and after some time a solution will emerge from the collective behavior of these virtual ants.


class Civilization
{
private:
    City* FoodSourceCity;    // The civilization's food source city
    City* Nest;              // The civilization's nest
    TList Routes;            // All routes in the environment
    TList Cities;            // All cities in the environment
    TList Ants;              // All ants in the environment
    int NaturalSelection;    // Turns remaining before the next natural selection

public:
    void NextTurn();         // Perform one turn of the simulation
};
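

The simulation itself then reduces to repeatedly calling NextTurn. The small driver below is an illustrative assumption showing how the turn-based loop could be run; it is not part of the original design.


// Run the turn-based simulation: each call to NextTurn lets every agent
// perform exactly one action (walk, pick food, leave food or deposit
// pheromone) and lets the environment update itself.
void RunSimulation(Civilization& world, int turns)
{
    for (int t = 0; t < turns; ++t)
        world.NextTurn();
}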


Although this system can provide good results, a random number of agents with random characteristics may not always solve a given problem. Also, a population of agents that is able to find the shortest path in one graph may not be able to find a solution in a completely different environment. For these reasons, the ant colony can be optimized using genetic algorithms.


OPTIMIZING THE ANT COLONIES: AN OVERVIEW


As already said, genetic algorithms are used to reach a satisfactorily optimized ant colony. They give answers to questions like:


• How do we find the best agents for a given problem?


• How many agents are necessary to perform a search in graphs of different complexities?


• How do we balance agents with different characteristics to compose a good population?


Genetic algorithms are based on Darwin's theory of evolution, driven by the survival of the fittest. An entire population of individuals is created; each is evaluated at a given task and assigned a fitness value based on its performance. The genetic algorithm then creates new individuals based on the current population: it generally selects the fitter individuals and breeds them together. This can sometimes provide a big improvement in the solution, but generally it is just a small adjustment. Performance increases greatly as generation after generation of individuals is created.


The foundations of genetic algorithms, such as selection, crossover and mutation, can be better understood by looking at the practical example that has been discussed.


SELECTION


COMPETITION: Successful individuals in this environment are those that can either collect a large amount of food (workers) or find alternative routes to the food source (explorers). By collecting food, an agent increases the pheromone level over a good route and thereby influences other agents. By exploring the environment, an agent can find different and maybe shorter routes to the food source. Natural selection allows these individuals to produce offspring and thereby propagate their characteristics. As in the natural process, evolution occurs because the new generation of individuals will sometimes be better than the parents.


EXTINCTION: Agents that get lost within the environment cannot help at all. For example, an agent that keeps traveling back and forth between two nodes along a single edge, or one that keeps walking in circles, is useless to the colony. These individuals will be eliminated from the population.


OFFSPRING


CROSSOVER: New offspring are created based on the characteristics of the successful individuals. Since two types of individuals are necessary to the colony, two new individuals will be created at each evolutionary cycle: one will be a descendant of the two most successful workers and the other a descendant of the two best explorers. Genes representing the characteristics of the parents are combined to compose a new chromosome, which will originate a new individual. This combination is inspired by the biological process known as crossing over. Each characteristic of the new individual comes from one of the parents at random. The following figure shows two possible examples of descendants created with crossover combinations.


MUTATION: After the crossover there is a low probability that a mutation will occur, changing one of the characteristics of the new individual at random. Mutation helps maintain diversity within the population.
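

The crossover and mutation steps can be sketched directly on the three coefficients of an ant. The Breed function below is an illustrative assumption, not the original code: each gene of the child is copied from one parent at random, and with a small probability a gene is replaced by a fresh random value.


#include <random>

struct AntGenes { float alpha, beta, gamma; };

AntGenes Breed(const AntGenes& a, const AntGenes& b, std::mt19937& rng)
{
    std::bernoulli_distribution pick(0.5);                      // choose a parent per gene
    std::bernoulli_distribution mutate(0.05);                   // low mutation probability
    std::uniform_real_distribution<float> gene(-1.0f, 1.0f);    // assumed coefficient interval

    AntGenes child{
        pick(rng) ? a.alpha : b.alpha,
        pick(rng) ? a.beta  : b.beta,
        pick(rng) ? a.gamma : b.gamma
    };

    if (mutate(rng)) child.alpha = gene(rng);
    if (mutate(rng)) child.beta  = gene(rng);
    if (mutate(rng)) child.gamma = gene(rng);
    return child;
}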


MIGRATION


Migration introduces a completely random new individual into the population. The effect is similar to that of mutation because it increases the diversity within the environment.


ANTS, PHONES AND PHEROMONES


The foraging metaphor has been put to work successfully in solving the problem of telecommunications routing. Ruud Schoonderwoerd of Hewlett-Packard Labs collaborated with a group of scientists to create the world's first ant-based routing system. As anyone who has used the Internet will know, communications networks are characteristically unpredictable. Sudden interest in a particular web site or a local crisis leads to surges of network activity which must somehow be routed efficiently, minimizing both delays and congestion. The network must therefore dynamically route calls and requests through quieter sections of the network.


Congestion in a particular section of the network can be seen as analogous to depletion of a food source near an ant colony, causing the ants to search for new routes and dynamically update the virtual pheromone trail between nodes. In the system developed by Schoonderwoerd et al., ant-like agents are sent randomly between nodes from time to time, updating each node's routing table as they go with information about how long the journey from their origin took and which nodes they used on the way. The routing table contains a list of the node's immediate neighbors and, for each target node on the network, the probability of using each neighbor as the next step on the journey. The fastest ants have a positive effect on the probability scores of the nodes they have used, while slow ants have a negative effect.
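

A much-simplified sketch of such a routing table is shown below. It is an illustrative assumption, not Schoonderwoerd et al.'s actual update rule: an arriving ant reinforces the neighbour it used, with faster journeys giving a larger reward, and the subsequent renormalization implicitly penalizes the neighbours favoured by slower ants.


#include <map>

struct RoutingTable
{
    // probs[destination][neighbour] = probability of using that neighbour
    // as the next hop towards the destination.
    std::map<int, std::map<int, double>> probs;

    void Reinforce(int destination, int neighbour, double journeyTime)
    {
        std::map<int, double>& p = probs[destination];
        double reward = 1.0 / (1.0 + journeyTime);   // faster ants reinforce more
        p[neighbour] += reward;

        double total = 0.0;
        for (auto& kv : p) total += kv.second;
        for (auto& kv : p) kv.second /= total;       // keep probabilities normalized
    }
};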


A more recent algorithm for Internet routing designed by researchers (based on antlike agents) has, in simulation, outperformed all other routing methods, including the current standard routing protocol of the Internet, Open Shortest Path First.


SWARM ROBOTICS


Although the multi-agent approach to robotics is not a new idea, social insect studies have demonstrated how to make such an approach worthwhile. The advantages to roboticists are many:


• Rather than developing big, expensive and complicated robots, many cheap and simple ones can be used to achieve the same goal.


• Many robot environments are precarious. If one robot is incapacitated in the traditional approach, you've lost your robot and can't go on. One individual in a swarm, however, is relatively expendable.


• If your robot is faced with too big a task (e.g. an object too heavy to lift), ordinarily it would be back to the drawing board, but where a swarm is involved solutions are often inherently scalable.


CONCLUSION


In an age of booming complexity, where governments, economies and societies alike have become impossible for any individual to comprehend, where software engineers wrestle with intractable computer systems and, as we've seen, computer systems wrestle with intractable problems, the study of social insects can do more than just pave the way to more powerful algorithms and robotics.


The promises and successes of collective intelligence and the swarm paradigm are a powerful demonstration of complex and useful systems arising from easily comprehended rules and habits. The units of computation are as dumb as ever, but the overall result is something we unhesitatingly term intelligence.


REFERENCES


• Collective Intelligence in Social Insects, David Gordon, www.ai-depot.com/collectiveintelligence/socialinsects.html


• Swarm-bots, a swarm-robotics project, www.swarm-bots.org


• Collective Intelligence, Dr. David Wolpert and Dr. Kagan Tumer, http://ic.arc.nasa.gov


• Emergent Intelligence, Danilo Benzatti, www.ai-depot.com/collectiveintelligence/ant.html

