Genetic algorithms (GA) and genetic programming (GP) are an interesting area of research.

I'd like to know about specific problems you have solved using GA/GP, and what libraries/frameworks you used if you didn't roll your own.

Questions:

What problems have you used GA/GP to solve? What libraries/frameworks did you use?

I'm looking for first-hand experiences, so please don't answer unless you have that.


Current answer

In 2007-9 I developed some software for reading datamatrix patterns. Often these patterns were difficult to read, being indented into scratched surfaces with all kinds of reflectance properties, fuzzy chemically etched markings and so on. I used a GA to fine tune various parameters of the vision algorithms to give the best results on a database of 300 images having known properties. Parameters were things like downsampling resolution, RANSAC parameters, amount of erosion and dilation, low pass filtering radius, and a few others. Running the optimisation over several days this produced results which were about 20% better than naive values on a test set of images unseen during the optimisation phase.

This system was written entirely from scratch; I didn't use any other libraries. I'm not opposed to using them, provided they deliver reliable results, but you have to watch out for licence-compatibility and code-portability issues.
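For illustration, here is a minimal Java sketch of the kind of GA loop this answer describes: the genome is a vector of vision-pipeline parameters, fitness is the score achieved on the training image database, and each generation keeps the best half of the population and mutates it. The parameter names and readScore() are hypothetical stand-ins, not the answerer's actual code.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.Random;

public class ParamTuningGA {
    static final Random RNG = new Random(42);
    static final int GENOME_LEN = 4;   // e.g. downsample, erosion, filterRadius, ransacThresh
    static final double[] MIN = {0.25, 0.0, 1.0, 0.5};
    static final double[] MAX = {1.00, 5.0, 9.0, 5.0};

    // Hypothetical fitness: fraction of the image database read correctly.
    // A real implementation would run the full vision pipeline here.
    static double readScore(double[] p) {
        double[] target = {0.5, 2.0, 3.0, 1.5};          // pretend optimum
        double d = 0;
        for (int i = 0; i < GENOME_LEN; i++) d += (p[i] - target[i]) * (p[i] - target[i]);
        return 1.0 / (1.0 + d);
    }

    static double[] randomGenome() {
        double[] g = new double[GENOME_LEN];
        for (int i = 0; i < GENOME_LEN; i++)
            g[i] = MIN[i] + RNG.nextDouble() * (MAX[i] - MIN[i]);
        return g;
    }

    static double[] mutate(double[] parent) {
        double[] child = parent.clone();
        int i = RNG.nextInt(GENOME_LEN);                 // perturb one parameter, clamped to range
        child[i] = Math.max(MIN[i], Math.min(MAX[i],
                child[i] + RNG.nextGaussian() * 0.1 * (MAX[i] - MIN[i])));
        return child;
    }

    public static void main(String[] args) {
        double[][] pop = new double[40][];
        for (int i = 0; i < pop.length; i++) pop[i] = randomGenome();
        Comparator<double[]> byScore =
                Comparator.comparingDouble(ParamTuningGA::readScore).reversed();
        for (int gen = 0; gen < 200; gen++) {
            Arrays.sort(pop, byScore);                       // best first
            for (int i = pop.length / 2; i < pop.length; i++)
                pop[i] = mutate(pop[i - pop.length / 2]);    // replace bottom half with mutants
        }
        Arrays.sort(pop, byScore);
        System.out.println("best params: " + Arrays.toString(pop[0])
                + "  score: " + readScore(pop[0]));
    }
}
```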

Other answers

Evolutionary Computation, graduate class: Developed a solution for TopCoder Marathon Match 49: MegaParty. My group was testing different domain representations and how the different representations affected the GA's ability to find the correct answer. We wrote our own code for this problem.

Neuroevolution and Generative and Developmental Systems, graduate class: Developed an Othello game-board evaluator that was used in the min-max tree of a computer player. The player was set to evaluate one move deep into the game, and was trained to play against a greedy computer player that considered corners of vital importance. The training player saw either 3 or 4 deep (I'd need to look at my config files to answer, and they're on a different computer). The goal of the experiment was to compare Novelty Search to traditional, fitness-based search in the game-board-evaluation domain. Results were relatively inconclusive, unfortunately. While both the Novelty Search and fitness-based search methods came to a solution (showing that Novelty Search can be used in the Othello domain), it was possible to solve this domain with no hidden nodes. Apparently I didn't create a sufficiently competent trainer if a linear solution was available (and a solution was possible right out of the gate). I believe my implementation of fitness-based search produced solutions more quickly than my implementation of Novelty Search this time (this isn't always the case). Either way, I used ANJI, "Another NEAT Java Implementation", for the neural network code, with various modifications. The Othello game I wrote myself.
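For readers unfamiliar with Novelty Search, the core difference from fitness-based search is the scoring step: an individual is rewarded for how far its behaviour lies from previously seen behaviours, not for how well it plays. Below is a minimal, generic Java sketch of that sparseness computation; the behaviour descriptor is a hypothetical choice, not taken from ANJI or from the experiment above.

```java
import java.util.ArrayList;
import java.util.List;

public class NoveltyScore {
    // Sparseness = mean distance to the k nearest behaviours among
    // the archive plus the current population.
    static double novelty(double[] behaviour, List<double[]> others, int k) {
        List<Double> dists = new ArrayList<>();
        for (double[] o : others) {
            double d = 0;
            for (int i = 0; i < behaviour.length; i++)
                d += (behaviour[i] - o[i]) * (behaviour[i] - o[i]);
            dists.add(Math.sqrt(d));
        }
        dists.sort(null);                       // ascending distances
        double sum = 0;
        int n = Math.min(k, dists.size());
        for (int i = 0; i < n; i++) sum += dists.get(i);
        return n == 0 ? Double.MAX_VALUE : sum / n;
    }

    public static void main(String[] args) {
        List<double[]> archive = new ArrayList<>();
        archive.add(new double[]{0.9, 0.1});    // behaviours already seen
        archive.add(new double[]{0.8, 0.2});
        double score = novelty(new double[]{0.1, 0.9}, archive, 2);
        System.out.println("novelty = " + score);  // far from archive => high score
    }
}
```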

I don't know whether homework counts…

During my studies, we rolled our own program to solve the travelling salesman problem.

The idea was to make a comparison across several criteria (difficulty of mapping the problem, performance, etc.), and we also used other techniques such as simulated annealing.

It worked pretty well, but it took us a while to understand how to do the "reproduction" phase correctly: modelling the problem at hand into something suitable for genetic programming was the hardest part for me…
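A common stumbling block in that reproduction step is that naive one-point crossover on a tour produces invalid offspring with duplicate cities. As an illustration (not the actual course code), here is one standard permutation-safe operator, ordered crossover (OX), in Java:

```java
import java.util.Arrays;
import java.util.Random;

public class OrderedCrossover {
    static int[] ox(int[] p1, int[] p2, Random rng) {
        int n = p1.length;
        int a = rng.nextInt(n), b = rng.nextInt(n);
        int lo = Math.min(a, b), hi = Math.max(a, b);
        int[] child = new int[n];
        Arrays.fill(child, -1);
        boolean[] used = new boolean[n];
        for (int i = lo; i <= hi; i++) {         // copy a slice from parent 1
            child[i] = p1[i];
            used[p1[i]] = true;
        }
        int j = (hi + 1) % n;                     // fill the rest in parent-2 order
        for (int k = 0; k < n; k++) {
            int city = p2[(hi + 1 + k) % n];
            if (!used[city]) {
                child[j] = city;
                j = (j + 1) % n;
            }
        }
        return child;                             // always a valid permutation
    }

    public static void main(String[] args) {
        int[] p1 = {0, 1, 2, 3, 4, 5, 6, 7};
        int[] p2 = {3, 7, 0, 4, 6, 2, 5, 1};
        System.out.println(Arrays.toString(ox(p1, p2, new Random(1))));
    }
}
```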

It was an interesting class, since we also dabbled in neural networks and the like.

I'd like to know whether anyone uses this kind of programming in "production" code.

At work I had the following problem: given M tasks and N DSPs, what was the best way to assign the tasks to the DSPs? "Best" was defined as "minimising the load of the most heavily loaded DSP". There were different types of tasks, and various task types had different performance ramifications depending on where they were assigned, so I encoded the set of job-to-DSP assignments as a "DNA string" and then used a genetic algorithm to "breed" the best assignment string I could.

It worked reasonably well (much better than my previous approach, which was to evaluate every possible combination… for non-trivial problem sizes, that would have taken years to complete!). The only drawback was that there was no way to tell whether the optimal solution had been reached; you could only decide that the current "best effort" was good enough, or let it run longer to see whether it could do better.
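A minimal Java sketch of this encoding, with made-up task costs standing in for the real per-type performance figures: gene i holds the DSP assigned to task i, and the fitness to minimise is the load of the busiest DSP.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.Random;

public class TaskToDspGA {
    static final int TASKS = 20, DSPS = 4;
    static final Random RNG = new Random(7);
    // cost[t][d]: hypothetical cycles task t consumes if placed on DSP d.
    static final double[][] COST = new double[TASKS][DSPS];
    static {
        for (double[] row : COST)
            for (int d = 0; d < DSPS; d++) row[d] = 1 + RNG.nextInt(10);
    }

    // Fitness to minimise: the load on the most heavily loaded DSP.
    static double maxLoad(int[] assign) {
        double[] load = new double[DSPS];
        for (int t = 0; t < TASKS; t++) load[assign[t]] += COST[t][assign[t]];
        return Arrays.stream(load).max().orElse(0);
    }

    static int[] mutate(int[] parent) {
        int[] child = parent.clone();
        child[RNG.nextInt(TASKS)] = RNG.nextInt(DSPS);   // reassign one task
        return child;
    }

    public static void main(String[] args) {
        int[][] pop = new int[60][TASKS];
        for (int[] g : pop)
            for (int t = 0; t < TASKS; t++) g[t] = RNG.nextInt(DSPS);
        for (int gen = 0; gen < 500; gen++) {
            Arrays.sort(pop, Comparator.comparingDouble(TaskToDspGA::maxLoad));
            for (int i = pop.length / 2; i < pop.length; i++)
                pop[i] = mutate(pop[i - pop.length / 2]);    // breed from the better half
        }
        Arrays.sort(pop, Comparator.comparingDouble(TaskToDspGA::maxLoad));
        System.out.println("best max load: " + maxLoad(pop[0])
                + "  assignment: " + Arrays.toString(pop[0]));
    }
}
```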

I developed a multithreaded, Swing-based simulation of a robot navigating through a set of randomised grid terrains of food sources and mines, and a genetic-algorithm-based strategy for exploring the optimisation of robot behaviour and the survival of the fittest genes in the robot chromosome. This was done using charting and mapping of each iteration cycle.

Since then, I have developed even more game behaviour. An example application I built recently for myself was a genetic algorithm for solving the travelling salesman problem in route finding in the UK, taking into account start and goal states as well as one or more connection points, delays, cancellations, construction works, rush hour, public strikes, and consideration of fastest vs. cheapest routes. It then provides a balanced recommendation for the route to take on a given day.

Generally, my strategy is to use a POJO-based representation of the genes and then apply specific interface implementations for the selection, mutation, and crossover strategies and the criteria points. My fitness function then becomes quite complex, based on the strategy and criteria I need to apply as a heuristic measure.
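A sketch of that layout, with illustrative names rather than code from an actual project: the gene stays a plain object, and selection, crossover, and mutation plug in behind small interfaces.

```java
import java.util.Comparator;
import java.util.List;

interface Fitness<G> { double score(G genome); }
interface Selection<G> { G pick(List<G> population, Fitness<G> fitness); }
interface Crossover<G> { G combine(G mother, G father); }
interface Mutation<G> { G mutate(G genome); }

// Example POJO genome for a route-planning domain: just data, no GA logic.
class RouteGenome {
    final List<String> stops;      // ordered waypoints
    final boolean avoidPeakHours;  // example of a non-permutation gene
    RouteGenome(List<String> stops, boolean avoidPeakHours) {
        this.stops = stops;
        this.avoidPeakHours = avoidPeakHours;
    }
}

// The engine only sees the interfaces, so strategies can be swapped per run.
class Engine<G> {
    final Selection<G> select; final Crossover<G> cross; final Mutation<G> mutate;
    Engine(Selection<G> s, Crossover<G> c, Mutation<G> m) {
        select = s; cross = c; mutate = m;
    }
    G breed(List<G> population, Fitness<G> fitness) {
        G mother = select.pick(population, fitness);
        G father = select.pick(population, fitness);
        return mutate.mutate(cross.combine(mother, father));
    }
}

public class GaStrategies {
    public static void main(String[] args) {
        // Trivial String-genome demo wiring lambdas into the interfaces.
        Fitness<String> fit = g -> g.chars().filter(c -> c == 'a').count();
        Selection<String> sel = (pop, f) ->
                pop.stream().max(Comparator.comparingDouble(f::score)).get();
        Crossover<String> cx = (m, fa) -> m.substring(0, m.length() / 2)
                + fa.substring(fa.length() / 2);
        Mutation<String> mut = g -> g + "a";
        Engine<String> engine = new Engine<>(sel, cx, mut);
        System.out.println(engine.breed(List.of("abc", "aaa", "bbb"), fit));
    }
}
```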

I have also looked into applying genetic algorithms to automated testing within code, using systematic mutation cycles where the algorithm understands the logic and tries to ascertain a bug report with suggestions for the code fix. Basically, it is a way to optimise my code and provide recommendations for improvement, as well as a way of automating the discovery of new programmatic code. I have also tried to apply genetic algorithms to music production, amongst other applications.

Generally, I find that evolutionary strategies, like most metaheuristic/global optimisation strategies, are slow to learn at first, but start picking up as the solutions come closer and closer to the goal state, as long as your fitness function and heuristics are well aligned to produce that convergence within your search space.