
Comment Re:And this is different from OSPF how? (Score 1) 64

You should read the original papers before saying the authors are playing semantic games ;-)
http://www.idsia.ch/~gianni/antnet.html

The differences are many; here are some important ones. Full paths are explicitly sampled by control packets (the 'ants'), which is quite different from locally observing link costs and then flooding them. Sampling full paths also raises some core issues that have no counterpart in OSPF: how often should ants be generated (i.e., when and how often do I need to refresh my local routing information)? Which destinations should be sampled (some destinations are more important than others)? What kind of stochastic decision policy should drive the hop-by-hop decisions (how to balance exploration vs. exploitation)?

The routing tables adaptively constructed through ant sampling do not identify one single shortest path, but rather a bundle of paths that can be used to automatically implement stochastic multi-path routing (and therefore automatic load balancing). Last but not least, OSPF is based on knowledge of the full topology of the network (at the level the router sits at), which implies continually flooding up-to-date link information throughout the network. AntNet's routing tables, on the other hand, are based only on knowledge of the local topology, that is, of the directly connected neighbors. This also makes the algorithm quite robust against lost ants (or errors in the estimates of the quality of the sampled paths).
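To make the hop-by-hop stochastic decision and the ant-driven table update concrete, here is a minimal Python sketch. It assumes each node keeps, per destination, a probability for every directly connected neighbor; the class name, the uniform initialization, and the additive reinforce-and-renormalize update are illustrative placeholders, not AntNet's actual formulas.

```python
import random

class AntRoutingTable:
    """Toy per-destination routing table over directly connected neighbors."""

    def __init__(self, neighbors, destinations):
        # Start with a uniform distribution over neighbors for each destination.
        self.probs = {d: {n: 1.0 / len(neighbors) for n in neighbors}
                      for d in destinations}

    def next_hop(self, destination):
        # Stochastic hop-by-hop decision: sample a neighbor in proportion to
        # its probability. This keeps a bundle of paths in use (multi-path
        # routing / load balancing) instead of a single shortest path.
        table = self.probs[destination]
        neighbors, weights = zip(*table.items())
        return random.choices(neighbors, weights=weights)[0]

    def reinforce(self, destination, neighbor, reward):
        # When a backward ant reports a good sampled path through `neighbor`,
        # increase that neighbor's probability and renormalize the row.
        # (An illustrative update rule, not the one from the AntNet papers.)
        table = self.probs[destination]
        table[neighbor] += reward
        total = sum(table.values())
        for n in table:
            table[n] /= total

# Hypothetical usage: a forward ant samples a hop, a backward ant rewards it.
table = AntRoutingTable(neighbors=["B", "C"], destinations=["Z"])
hop = table.next_hop("Z")
table.reinforce("Z", hop, reward=0.5)
```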

Comment Re:Hill Climbing (Score 1) 64

It's not really a stochastic hill-climbing algorithm. ACO is based on repeated solution sampling, where each solution is constructed (by an 'ant') by stochastically selecting solution components one at a time (e.g., in a TSP, by adding cities to the partial solution until a feasible Hamiltonian tour is constructed). The outcome of each constructed solution is then used to update/learn pheromone variables, which in turn bias/direct the stochastic decision policy applied to select each individual solution component while the solution is being constructed. The main idea is to implement, through repeated solution construction, a process of collective learning of the pheromone variables, which are the local parameters of the stochastic solution construction policy.

A hill-climbing approach, by contrast, would start from a full feasible solution (e.g., a complete TSP tour) and iteratively modify it in the direction of the best local improvement (according to a defined local neighborhood) until no further improvement is possible. No learning from previous solutions happens there, and no solution construction is involved. The two approaches are in fact totally different but somewhat complementary: it's normal practice to combine ACO with some local search. After N (N >= 1) iterations of the ant solution construction phase, the pheromone values are used to identify one or more candidate solutions, which are taken as starting points for the local search. The local search is then executed, and the local minima it reaches are used to update the pheromone values that guide the ant search. The process is then iterated.

A good overview of how this works can be found in the ACO definition paper by Dorigo and Di Caro (1999); frankly, I think it's quite easy to read: Dorigo M., Di Caro G.A., "The Ant Colony Optimization Meta-Heuristic", in Corne D., Dorigo M., Glover F. (eds.), "New Ideas in Optimization", McGraw-Hill, 1999. http://www.idsia.ch/~gianni/Papers/OptBook.ps.gz A more extensive discussion of these topics can be found in my thesis: http://www.idsia.ch/~gianni/Papers/thesis.pdf http://www.idsia.ch/~gianni/my_thesis.html
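As a concrete illustration of the construct / update-pheromone / local-search loop described above, here is a toy Python sketch for the TSP. The parameter names (alpha, beta, rho, Q) follow common ACO conventions, the 2-opt routine stands in for "some local search", and every sampled tour is locally optimized before the pheromone update; this is a sketch under those assumptions, not the exact algorithms from the cited papers.

```python
import random

def tour_length(tour, dist):
    # Total length of a closed tour, assuming a symmetric distance matrix.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def construct_tour(n, pheromone, dist, alpha=1.0, beta=2.0):
    # One ant builds a tour city-by-city, choosing each next city
    # stochastically in proportion to pheromone^alpha * (1/distance)^beta.
    start = random.randrange(n)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        cur = tour[-1]
        cities = list(unvisited)
        weights = [(pheromone[cur][c] ** alpha) * ((1.0 / dist[cur][c]) ** beta)
                   for c in cities]
        nxt = random.choices(cities, weights=weights)[0]
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(tour, dist):
    # Plain 2-opt hill climbing: reverse a segment whenever it shortens the
    # tour. This is the "local search" half of the ACO + local search hybrid.
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, dist) < tour_length(tour, dist):
                    tour, improved = cand, True
    return tour

def aco(dist, n_ants=10, n_iters=50, rho=0.1, Q=1.0):
    n = len(dist)
    pheromone = [[1.0] * n for _ in range(n)]
    best, best_len = None, float("inf")
    for _ in range(n_iters):
        # Solution sampling: each ant constructs a tour, then improves it
        # with local search before the pheromone update.
        tours = [two_opt(construct_tour(n, pheromone, dist), dist)
                 for _ in range(n_ants)]
        # Evaporation, then reinforcement of the edges of each sampled tour,
        # weighted by tour quality (shorter tours deposit more pheromone).
        for i in range(n):
            for j in range(n):
                pheromone[i][j] *= (1.0 - rho)
        for tour in tours:
            length = tour_length(tour, dist)
            if length < best_len:
                best, best_len = tour, length
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                pheromone[a][b] += Q / length
                pheromone[b][a] += Q / length
    return best, best_len

# Hypothetical usage on a tiny symmetric instance:
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
best_tour, best_len = aco(dist, n_ants=5, n_iters=20)
```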
