I've been playing a game on my iPhone called Scramble. Some of you may know this game as Boggle. Essentially, when the game starts you get a matrix of letters like so:

F X I E
A M L O
E W B X
A S T U

The goal of the game is to find as many words as you can that can be formed by chaining letters together. You can start with any letter, and all the letters that surround it are fair game, and then once you move on to the next letter, all the letters that surround that letter are fair game, except for any previously used letters. So in the grid above, for example, I could come up with the words LOB, TUX, SEA, FAME, etc. Words must be at least 3 characters, and no more than NxN characters, which would be 16 in this game but can vary in some implementations. While this game is fun and addictive, I am apparently not very good at it and I wanted to cheat a little bit by making a program that would give me the best possible words (the longer the word the more points you get).

(Source: boggled.org)

Unfortunately, I am not very good with algorithms or their efficiency and so forth. My first attempt used a dictionary like this one (~2.3MB) and did a linear search, trying to match combinations against dictionary entries. That takes a very long time to find the possible words, and since you only get 2 minutes per round, it is simply not adequate.

I am interested to see if any Stack Overflowers can come up with more efficient solutions. I am mostly looking for solutions using the Big 3 Ps: Python, PHP, and Perl, although anything in Java or C++ is cool too, since speed is essential.

Current solutions:

Adam Rosenfield, Python, ~20s
John Fouhy, Python, ~3s
Kent Fredric, Perl, ~1s
Darius Bacon, Python, ~1s
rvarcher, VB.NET, ~1s
Paolo Bergantino, PHP (live link), ~5s (~2s locally)


Current Answer

My suggestion is a letter tree based on the words. The tree would be made up of letter structs, like so:

letter: char
isWord: boolean

Then you build the tree up, with each depth adding a new letter. In other words, the first level is the alphabet; then from each of those nodes there would be another 26 entries, and so on, until you have spelled out all the words. Hang on to this parsed tree; it makes all the possible answers much faster to look up.

With this parsed tree you can find solutions very quickly. Here is the pseudocode:

BEGIN: 
    For each letter:
        if the struct representing it on the current depth has isWord == true, enter it as an answer.
        Cycle through all its neighbors; if there is a child of the current node corresponding to the letter, recursively call BEGIN on it.
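
For illustration, here is a minimal Python sketch of that letter tree and the recursive BEGIN search. It is only a sketch of the idea, not the answerer's code: the names (LetterNode, build_tree, begin) and the toy word list are my own.

class LetterNode:
    def __init__(self):
        self.children = {}   # letter -> LetterNode
        self.isWord = False

def build_tree(words):
    """Level 1 is the alphabet; each level below adds one more letter."""
    root = LetterNode()
    for word in words:
        node = root
        for ch in word:
            node = node.children.setdefault(ch, LetterNode())
        node.isWord = True
    return root

def neighbours(grid, x, y):
    """All in-bounds cells adjacent to (x, y)."""
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            nx, ny = x + dx, y + dy
            if (dx or dy) and 0 <= nx < len(grid) and 0 <= ny < len(grid[0]):
                yield nx, ny

def begin(grid, node, x, y, used, sofar, answers):
    """The BEGIN step: take this cell's letter, record a word if one ends here,
    then recurse into every neighbour that has not been used yet."""
    child = node.children.get(grid[x][y])
    if child is None:
        return
    used = used | {(x, y)}
    word = sofar + grid[x][y]
    if child.isWord:
        answers.add(word)
    for nx, ny in neighbours(grid, x, y):
        if (nx, ny) not in used:
            begin(grid, child, nx, ny, used, word, answers)

grid = ['fxie', 'amlo', 'ewbx', 'astu']
tree = build_tree(['fame', 'lob', 'tux'])   # toy dictionary
found = set()
for i in range(len(grid)):
    for j in range(len(grid[0])):
        begin(grid, tree, i, j, set(), '', found)
print(found)   # {'fame', 'lob', 'tux'}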

This could be sped up with a little bit of dynamic programming. For instance, in your sample, the two 'A's are each next to an 'E' and a 'W', which (from the point they hit them on) would be identical. I don't have enough time to really spell out the code for this, but I think you can gather the idea.

Also, I'm sure you'll find other solutions if you Google "Boggle solver".

Other Answers

You can split the problem into two pieces:

1. Some kind of search algorithm that enumerates possible strings in the grid.
2. A way of testing whether a string is a valid word.

Ideally, (2) should also include a way of testing whether a string is a prefix of a valid word -- this lets you prune the search and saves a whole lot of time.

Adam Rosenfield's Trie is a solution to (2). It's elegant, and probably what your algorithms specialist would prefer, but with modern languages and modern computers we can be a bit lazier. Also, as Kent suggests, we can reduce the dictionary size by discarding words that contain letters not present in the grid. Here is some Python:

def make_lookups(grid, fn='dict.txt'):
    # Make set of valid characters.
    chars = set()
    for word in grid:
        chars.update(word)

    # Keep only dictionary words whose letters all appear somewhere in the grid.
    words = set(x.strip() for x in open(fn) if set(x.strip()) <= chars)

    # Every prefix of every word, including the full word itself.
    prefixes = set()
    for w in words:
        for i in range(len(w)+1):
            prefixes.add(w[:i])

    return words, prefixes

Wow; constant-time prefix testing. It takes a couple of seconds to load the dictionary you linked, but only a couple :-) (notice that words <= prefixes).
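
For example, with the sample grid and the same dict.txt, the tests the search needs are plain set-membership lookups (a small sketch; the True/False comments assume the word list shown further below):

words, prefixes = make_lookups(['fxie', 'amlo', 'ewbx', 'astu'])

print('fam' in prefixes)   # True: 'fame' and 'famble' start with 'fam', so keep searching
print('fame' in words)     # True: 'fame' is itself a word
print('fx' in prefixes)    # presumably False: no dictionary word starts with 'fx', so prune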

Now, for part (1), I'm inclined to think in terms of graphs. So I'll build a dictionary that looks something like this:

graph = { (x, y):set([(x0,y0), (x1,y1), (x2,y2)]), }

That is, graph[(x, y)] is the set of coordinates you can reach from position (x, y). I'll also add a dummy node None which is connected to everything.

Building it is a bit clumsy, because there are 8 possible positions and you have to do bounds checking. Here is some correspondingly clumsy Python code:

def make_graph(grid):
    root = None
    graph = { root:set() }
    chardict = { root:'' }

    for i, row in enumerate(grid):
        for j, char in enumerate(row):
            chardict[(i, j)] = char
            node = (i, j)
            children = set()
            graph[node] = children
            graph[root].add(node)
            add_children(node, children, grid)

    return graph, chardict

def add_children(node, children, grid):
    x0, y0 = node
    for i in [-1,0,1]:
        x = x0 + i
        if not (0 <= x < len(grid)):
            continue
        for j in [-1,0,1]:
            y = y0 + j
            if not (0 <= y < len(grid[0])) or (i == j == 0):
                continue

            children.add((x,y))

This code also builds up a dictionary mapping (x, y) to the corresponding character. That lets me turn a list of positions into a word:

def to_word(chardict, pos_list):
    return ''.join(chardict[x] for x in pos_list)

Finally, we do a depth-first search. The basic procedure is:

1. The search arrives at a particular node.
2. Check whether the path so far could be part of a word. If not, don't explore this branch any further.
3. Check whether the path so far is a word. If so, add it to the list of results.
4. Explore all children that are not already part of the path.

Python:

def find_words(graph, chardict, position, prefix, results, words, prefixes):
    """ Arguments:
      graph :: mapping (x,y) to set of reachable positions
      chardict :: mapping (x,y) to character
      position :: current position (x,y) -- equals prefix[-1]
      prefix :: list of positions in current string
      results :: set of words found
      words :: set of valid words in the dictionary
      prefixes :: set of valid words or prefixes thereof
    """
    word = to_word(chardict, prefix)

    if word not in prefixes:
        return

    if word in words:
        results.add(word)

    for child in graph[position]:
        if child not in prefix:
            find_words(graph, chardict, child, prefix+[child], results, words, prefixes)

Run the code like this:

grid = ['fxie', 'amlo', 'ewbx', 'astu']
g, c = make_graph(grid)
w, p = make_lookups(grid)
res = set()
find_words(g, c, None, [], res, w, p)

Inspect res to see the answers. Here's the list of words found for your example, sorted by size:

 ['a', 'b', 'e', 'f', 'i', 'l', 'm', 'o', 's', 't',
 'u', 'w', 'x', 'ae', 'am', 'as', 'aw', 'ax', 'bo',
 'bu', 'ea', 'el', 'em', 'es', 'fa', 'ie', 'io', 'li',
 'lo', 'ma', 'me', 'mi', 'oe', 'ox', 'sa', 'se', 'st',
 'tu', 'ut', 'wa', 'we', 'xi', 'aes', 'ame', 'ami',
 'ase', 'ast', 'awa', 'awe', 'awl', 'blo', 'but', 'elb',
 'elm', 'fae', 'fam', 'lei', 'lie', 'lim', 'lob', 'lox',
 'mae', 'maw', 'mew', 'mil', 'mix', 'oil', 'olm', 'saw',
 'sea', 'sew', 'swa', 'tub', 'tux', 'twa', 'wae', 'was',
 'wax', 'wem', 'ambo', 'amil', 'amli', 'asem', 'axil',
 'axle', 'bleo', 'boil', 'bole', 'east', 'fame', 'limb',
 'lime', 'mesa', 'mewl', 'mile', 'milo', 'oime', 'sawt',
 'seam', 'seax', 'semi', 'stub', 'swam', 'twae', 'twas',
 'wame', 'wase', 'wast', 'weam', 'west', 'amble', 'awest',
 'axile', 'embox', 'limbo', 'limes', 'swami', 'embole',
 'famble', 'semble', 'wamble']

The code takes (literally) a couple of seconds to load the dictionary, but the rest is instant on my machine.

The fastest solution you're going to get will probably involve storing your dictionary in a trie. Then, create a queue of triples (x, y, s), where each element in the queue corresponds to a prefix s of a word that can be spelled in the grid, ending at location (x, y). Initialize the queue with N x N elements (where N is the size of the grid), one element per square in the grid. Then, the algorithm proceeds as follows:

While the queue is not empty:
  Dequeue a triple (x, y, s)
  For each square (x', y') with letter c adjacent to (x, y):
    If s+c is a word, output s+c
    If s+c is a prefix of a word, insert (x', y', s+c) into the queue

If you store your dictionary in a trie, testing whether s+c is a word or a prefix of a word can be done in constant time (provided you also keep some extra metadata in each queue datum, such as a pointer to the current node in the trie), so the running time of this algorithm is O(number of spellable words).

[Edit] Here is a Python implementation I just coded up:

#!/usr/bin/python

class TrieNode:
    def __init__(self, parent, value):
        self.parent = parent
        self.children = [None] * 26
        self.isWord = False
        if parent is not None:
            parent.children[ord(value) - 97] = self

def MakeTrie(dictfile):
    dict = open(dictfile)
    root = TrieNode(None, '')
    for word in dict:
        curNode = root
        for letter in word.lower():
            if 97 <= ord(letter) < 123:
                nextNode = curNode.children[ord(letter) - 97]
                if nextNode is None:
                    nextNode = TrieNode(curNode, letter)
                curNode = nextNode
        curNode.isWord = True
    return root

def BoggleWords(grid, dict):
    rows = len(grid)
    cols = len(grid[0])
    queue = []
    words = []
    # Seed the queue with every single-letter prefix that exists in the trie.
    for y in range(cols):
        for x in range(rows):
            c = grid[y][x]
            node = dict.children[ord(c) - 97]
            if node is not None:
                queue.append((x, y, c, node))
    # Each queue entry is (x, y, prefix string, trie node for that prefix).
    while queue:
        x, y, s, node = queue[0]
        del queue[0]
        for dx, dy in ((1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < cols and 0 <= y2 < rows:
                s2 = s + grid[y2][x2]
                node2 = node.children[ord(grid[y2][x2]) - 97]
                if node2 is not None:
                    if node2.isWord:
                        words.append(s2)
                    queue.append((x2, y2, s2, node2))

    return words

Example usage:

d = MakeTrie('/usr/share/dict/words')
print(BoggleWords(['fxie','amlo','ewbx','astu'], d))

Output:

['fa', 'xi', 'ie', 'io', 'el', 'am', 'ax', 'ae', 'aw', 'mi', 'ma', 'me', 'lo', 'li', 'oe', 'ox', 'em', 'ea', 'ea', 'es', 'wa', 'we', 'wa', 'bo', 'bu', 'as', 'aw', 'ae', 'st', 'se', 'sa', 'tu', 'ut', 'fam', 'fae', 'imi', 'eli', 'elm', 'elb', 'ami', 'ama', 'ame', 'aes', 'awl', 'awa', 'awe', 'awa', 'mix', 'mim', 'mil', 'mam', 'max', 'mae', 'maw', 'mew', 'mem', 'mes', 'lob', 'lox', 'lei', 'leo', 'lie', 'lim', 'oil', 'olm', 'ewe', 'eme', 'wax', 'waf', 'wae', 'waw', 'wem', 'wea', 'wea', 'was', 'waw', 'wae', 'bob', 'blo', 'bub', 'but', 'ast', 'ase', 'asa', 'awl', 'awa', 'awe', 'awa', 'aes', 'swa', 'swa', 'sew', 'sea', 'sea', 'saw', 'tux', 'tub', 'tut', 'twa', 'twa', 'tst', 'utu', 'fama', 'fame', 'ixil', 'imam', 'amli', 'amil', 'ambo', 'axil', 'axle', 'mimi', 'mima', 'mime', 'milo', 'mile', 'mewl', 'mese', 'mesa', 'lolo', 'lobo', 'lima', 'lime', 'limb', 'lile', 'oime', 'oleo', 'olio', 'oboe', 'obol', 'emim', 'emil', 'east', 'ease', 'wame', 'wawa', 'wawa', 'weam', 'west', 'wese', 'wast', 'wase', 'wawa', 'wawa', 'boil', 'bolo', 'bole', 'bobo', 'blob', 'bleo', 'bubo', 'asem', 'stub', 'stut', 'swam', 'semi', 'seme', 'seam', 'seax', 'sasa', 'sawt', 'tutu', 'tuts', 'twae', 'twas', 'twae', 'ilima', 'amble', 'axile', 'awest', 'mamie', 'mambo', 'maxim', 'mease', 'mesem', 'limax', 'limes', 'limbo', 'limbu', 'obole', 'emesa', 'embox', 'awest', 'swami', 'famble', 'mimble', 'maxima', 'embolo', 'embole', 'wamble', 'semese', 'semble', 'sawbwa', 'sawbwa']

Notes: This program doesn't output 1-letter words, or filter by word length at all. That's easy to add but not really relevant to the problem. It also outputs some words multiple times if they can be spelled in multiple ways. If a given word can be spelled in many different ways (worst case: every letter in the grid is the same, e.g. 'A', and a word like 'aaaaaaaaaa' is in your dictionary), then the running time will get horribly exponential. Filtering out duplicates and sorting is trivial to do after the algorithm has finished.
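
For example, reusing the d and BoggleWords names from the usage example above, a minimal post-processing step (with an assumed minimum length of 3) could look like:

raw = BoggleWords(['fxie', 'amlo', 'ewbx', 'astu'], d)

# Drop duplicates, enforce a 3-letter minimum, and list the longest words first.
best = sorted({w for w in raw if len(w) >= 3}, key=len, reverse=True)
print(best[:10])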

What about simple sorting and using binary search in the dictionary?

The whole thing returns in 0.35 seconds and can be optimized further (e.g. by removing words that contain unused letters, etc.).

from bisect import bisect_left

D = []
f = open("dict.txt")
D.extend([line.strip() for line in f.readlines()])
D = sorted(D)

def neibs(M,x,y):
    n = len(M)
    for i in xrange(-1,2):
        for j in xrange(-1,2):
            if (i == 0 and j == 0) or (x + i < 0 or x + i >= n or y + j < 0 or y + j >= n):
                continue
            yield (x + i, y + j)

def findWords(M,D,x,y,prefix):
    # Note: this simple version does not track visited cells, so a grid letter may be reused within one word.
    prefix = prefix + M[x][y]

    # find the insertion point of the prefix in the sorted dictionary
    found = bisect_left(D,prefix)
    if found == len(D):
        return

    # if the entry at that point is exactly the prefix, it is itself a word
    if D[found] == prefix:
        yield prefix

    # if what we found is not even a prefix then return
    # (there is no point in going further)
    if len(D[found]) < len(prefix) or D[found][:len(prefix)] != prefix:
        return

    # recourse
    for neib in neibs(M,x,y):
        for word in findWords(M,D,neib[0], neib[1], prefix):
            yield word

def solve(M,D):
    # check each starting point
    for x in xrange(0,len(M)):
        for y in xrange(0,len(M)):
            for word in findWords(M,D,x,y,""):
                yield word

grid = "fxie amlo ewbx astu".split()
print [x for x in solve(grid,D)]

For a dictionary speedup, there is one general transformation/process you can do ahead of time to greatly reduce the number of dictionary comparisons.

Given that the grid above contains only 16 characters, some of them duplicates, you can greatly reduce the total number of keys in your dictionary by simply filtering out entries that contain unattainable characters.

I thought this was the obvious optimization, but seeing that nobody had done it, I'm mentioning it.

Simply during the input pass it reduced my dictionary from 200,000 keys down to only 2,000 keys. This at the very least reduces memory overhead, and that is sure to map to a speed increase somewhere, since memory isn't infinitely fast.
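
A rough Python sketch of that pre-filter (the dict.txt name and the flattened 16-character grid string are assumptions for illustration; the Perl implementation below does the same thing with its early-filter regex):

import re

grid_chars = set('fxieamloewbxastu')

# Any word containing a letter that never appears on the grid can never be spelled.
reject = re.compile('[^' + ''.join(sorted(grid_chars)) + ']')

dictionary = [w.strip().lower() for w in open('dict.txt')]
playable = [w for w in dictionary if w and not reject.search(w)]
print(len(dictionary), '->', len(playable))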

Perl implementation

My implementation is a bit top-heavy because I placed importance on being able to know the exact path of every extracted string, not just whether it is valid.

I also have a few adaptations in there that would theoretically permit a grid with holes in it to function, as well as grids with lines of different sizes (assuming you get the input right and it lines up somehow).

The early filter is by far the most significant bottleneck in my application; as suspected earlier, commenting out that line bloats the runtime from 1.5s to 7.5s.

On execution it appears to think all the single digits are valid words in their own right, but I'm pretty sure that is due to how the dictionary file works.

It's a bit bloated, but at least I reuse Tree::Trie from CPAN.

Some of it was inspired partly by the existing implementations; some of it I had in mind already.

Constructive criticism and ways to improve it are welcome. (/me notes he never searched CPAN for a Boggle solver, but this was more fun to work out.)

Updated for the new criteria

#!/usr/bin/perl 

use strict;
use warnings;

{

  # this package manages a given path through the grid.
  # It's an array of matrix nodes, in order, with
  # convenience functions for pretty-printing the paths
  # and for extending paths as new paths.

  # Usage:
  # my $p = Prefix->new(path=>[ $startnode ]);
  # my $c = $p->child( $extensionNode );
  # print $c->current_word ;

  package Prefix;
  use Moose;

  has path => (
      isa     => 'ArrayRef[MatrixNode]',
      is      => 'rw',
      default => sub { [] },
  );
  has current_word => (
      isa        => 'Str',
      is         => 'rw',
      lazy_build => 1,
  );

  # Create a clone of this object
  # with a longer path

  # $o->child( $successive-node-on-graph );

  sub child {
      my $self    = shift;
      my $newNode = shift;
      my $f       = Prefix->new();

      # Have to do this manually or other recorded paths get modified
      push @{ $f->{path} }, @{ $self->{path} }, $newNode;
      return $f;
  }

  # Traverses $o->path left-to-right to get the string it represents.

  sub _build_current_word {
      my $self = shift;
      return join q{}, map { $_->{value} } @{ $self->{path} };
  }

  # Returns  the rightmost node on this path

  sub tail {
      my $self = shift;
      return $self->{path}->[-1];
  }

  # pretty-format $o->path

  sub pp_path {
      my $self = shift;
      my @path =
        map { '[' . $_->{x_position} . ',' . $_->{y_position} . ']' }
        @{ $self->{path} };
      return "[" . join( ",", @path ) . "]";
  }

  # pretty-format $o
  sub pp {
      my $self = shift;
      return $self->current_word . ' => ' . $self->pp_path;
  }

  __PACKAGE__->meta->make_immutable;
}

{

  # Basic package for tracking node data
  # without having to look on the grid.
  # I could have just used an array or a hash, but that got ugly.

# Once the matrix is up and running it doesn't really care so much about rows/columns,
# Its just a sea of points and each point has adjacent points.
# Relative positioning is only really useful to map it back to userspace

  package MatrixNode;
  use Moose;

  has x_position => ( isa => 'Int', is => 'rw', required => 1 );
  has y_position => ( isa => 'Int', is => 'rw', required => 1 );
  has value      => ( isa => 'Str', is => 'rw', required => 1 );
  has siblings   => (
      isa     => 'ArrayRef[MatrixNode]',
      is      => 'rw',
      default => sub { [] }
  );

# It's not implicitly uni-directional joins. It would be more efficient in theory
# to make the link go both ways at the same time, but that's too hard to program around.
# And besides, this isn't slow enough to bother caring about.

  sub add_sibling {
      my $self    = shift;
      my $sibling = shift;
      push @{ $self->siblings }, $sibling;
  }

  # Convenience method to derive a path starting at this node

  sub to_path {
      my $self = shift;
      return Prefix->new( path => [$self] );
  }
  __PACKAGE__->meta->make_immutable;

}

{

  package Matrix;
  use Moose;

  has rows => (
      isa     => 'ArrayRef',
      is      => 'rw',
      default => sub { [] },
  );

  has regex => (
      isa        => 'Regexp',
      is         => 'rw',
      lazy_build => 1,
  );

  has cells => (
      isa        => 'ArrayRef',
      is         => 'rw',
      lazy_build => 1,
  );

  sub add_row {
      my $self = shift;
      push @{ $self->rows }, [@_];
  }

  # Most of these functions from here down are just builder functions,
  # or utilities to help build things.
  # Some just broken out to make it easier for me to process.
  # All thats really useful is add_row
  # The rest will generally be computed, stored, and ready to go
  # from ->cells by the time either ->cells or ->regex are called.

  # traverse all cells and make a regex that covers them.
  sub _build_regex {
      my $self  = shift;
      my $chars = q{};
      for my $cell ( @{ $self->cells } ) {
          $chars .= $cell->value();
      }
      $chars = "[^$chars]";
      return qr/$chars/i;
  }

  # convert a plain cell ( ie: [x][y] = 0 )
  # to an intelligent cell ie: [x][y] = object( x, y )
  # we only really keep them in this format temporarily
  # so we can go through and tie in neighbouring information.
  # after the neighbouring is done, the grid should be considered inoperative.

  sub _convert {
      my $self = shift;
      my $x    = shift;
      my $y    = shift;
      my $v    = $self->_read( $x, $y );
      my $n    = MatrixNode->new(
          x_position => $x,
          y_position => $y,
          value      => $v,
      );
      $self->_write( $x, $y, $n );
      return $n;
  }

# go through the rows/columns presently available and freeze them into objects.

  sub _build_cells {
      my $self = shift;
      my @out  = ();
      my @rows = @{ $self->{rows} };
      for my $x ( 0 .. $#rows ) {
          next unless defined $self->{rows}->[$x];
          my @col = @{ $self->{rows}->[$x] };
          for my $y ( 0 .. $#col ) {
              next unless defined $self->{rows}->[$x]->[$y];
              push @out, $self->_convert( $x, $y );
          }
      }
      for my $c (@out) {
          for my $n ( $self->_neighbours( $c->x_position, $c->y_position ) ) {
              $c->add_sibling( $self->{rows}->[ $n->[0] ]->[ $n->[1] ] );
          }
      }
      return \@out;
  }

  # given x,y , return array of points that refer to valid neighbours.
  sub _neighbours {
      my $self = shift;
      my $x    = shift;
      my $y    = shift;
      my @out  = ();
      for my $sx ( -1, 0, 1 ) {
          next if $sx + $x < 0;
          next if not defined $self->{rows}->[ $sx + $x ];
          for my $sy ( -1, 0, 1 ) {
              next if $sx == 0 && $sy == 0;
              next if $sy + $y < 0;
              next if not defined $self->{rows}->[ $sx + $x ]->[ $sy + $y ];
              push @out, [ $sx + $x, $sy + $y ];
          }
      }
      return @out;
  }

  sub _has_row {
      my $self = shift;
      my $x    = shift;
      return defined $self->{rows}->[$x];
  }

  sub _has_cell {
      my $self = shift;
      my $x    = shift;
      my $y    = shift;
      return defined $self->{rows}->[$x]->[$y];
  }

  sub _read {
      my $self = shift;
      my $x    = shift;
      my $y    = shift;
      return $self->{rows}->[$x]->[$y];
  }

  sub _write {
      my $self = shift;
      my $x    = shift;
      my $y    = shift;
      my $v    = shift;
      $self->{rows}->[$x]->[$y] = $v;
      return $v;
  }

  __PACKAGE__->meta->make_immutable;
}

use Tree::Trie;

sub readDict {
  my $fn = shift;
  my $re = shift;
  my $d  = Tree::Trie->new();

  # Dictionary Loading
  open my $fh, '<', $fn;
  while ( my $line = <$fh> ) {
      chomp($line);

 # Commenting the next line makes it go from 1.5 seconds to 7.5 seconds. EPIC.
      next if $line =~ $re;    # Early Filter
      $d->add( uc($line) );
  }
  return $d;
}

sub traverseGraph {
  my $d     = shift;
  my $m     = shift;
  my $min   = shift;
  my $max   = shift;
  my @words = ();

  # Inject all grid nodes into the processing queue.

  my @queue =
    grep { $d->lookup( $_->current_word ) }
    map  { $_->to_path } @{ $m->cells };

  while (@queue) {
      my $item = shift @queue;

      # put the dictionary into "exact match" mode.

      $d->deepsearch('exact');

      my $cword = $item->current_word;
      my $l     = length($cword);

      if ( $l >= $min && $d->lookup($cword) ) {
          push @words,
            $item;    # push current path into "words" if it exactly matches.
      }
      next if $l > $max;

      # put the dictionary into "is-a-prefix" mode.
      $d->deepsearch('boolean');

    siblingloop: foreach my $sibling ( @{ $item->tail->siblings } ) {
          foreach my $visited ( @{ $item->{path} } ) {
              next siblingloop if $sibling == $visited;
          }

          # given path y , iterate for all its end points
          my $subpath = $item->child($sibling);

          # create a new path for each end-point
          if ( $d->lookup( $subpath->current_word ) ) {

             # if the new path is a prefix, add it to the bottom of the queue.
              push @queue, $subpath;
          }
      }
  }
  return \@words;
}

sub setup_predetermined { 
  my $m = shift; 
  my $gameNo = shift;
  if( $gameNo == 0 ){
      $m->add_row(qw( F X I E ));
      $m->add_row(qw( A M L O ));
      $m->add_row(qw( E W B X ));
      $m->add_row(qw( A S T U ));
      return $m;
  }
  if( $gameNo == 1 ){
      $m->add_row(qw( D G H I ));
      $m->add_row(qw( K L P S ));
      $m->add_row(qw( Y E U T ));
      $m->add_row(qw( E O R N ));
      return $m;
  }
}
sub setup_random { 
  my $m = shift; 
  my $seed = shift;
  srand $seed;
  my @letters = 'A' .. 'Z' ; 
  for( 1 .. 4 ){ 
      my @r = ();
      for( 1 .. 4 ){
          push @r , $letters[int(rand(25))];
      }
      $m->add_row( @r );
  }
}

# Here is where the real work starts.

my $m = Matrix->new();
setup_predetermined( $m, 0 );
#setup_random( $m, 5 );

my $d = readDict( 'dict.txt', $m->regex );
my $c = scalar @{ $m->cells };    # get the max, as per spec

print join ",\n", map { $_->pp } @{
  traverseGraph( $d, $m, 3, $c ) ;
};

Arch/execution info for comparison:

model name      : Intel(R) Core(TM)2 Duo CPU     T9300  @ 2.50GHz
cache size      : 6144 KB
Memory usage summary: heap total: 77057577, heap peak: 11446200, stack peak: 26448
       total calls   total memory   failed calls
 malloc|     947212       68763684              0
realloc|      11191        1045641              0  (nomove:9063, dec:4731, free:0)
 calloc|     121001        7248252              0
   free|     973159       65854762

Histogram for block sizes:
  0-15         392633  36% ==================================================
 16-31          43530   4% =====
 32-47          50048   4% ======
 48-63          70701   6% =========
 64-79          18831   1% ==
 80-95          19271   1% ==
 96-111        238398  22% ==============================
112-127          3007  <1% 
128-143        236727  21% ==============================

Further mumblings about the regex optimization

The regex optimization I use is useless for a multi-solve dictionary; for multi-solve you'll want a full dictionary, not a pre-trimmed one.

However, that said, for one-off solves it's really fast. (Perl regexes are in C! :))

Here are some of the varying code additions:

sub readDict_nofilter {
  my $fn = shift;
  my $re = shift;
  my $d  = Tree::Trie->new();

  # Dictionary Loading
  open my $fh, '<', $fn;
  while ( my $line = <$fh> ) {
      chomp($line);
      $d->add( uc($line) );
  }
  return $d;
}

sub benchmark_io { 
  use Benchmark qw( cmpthese :hireswallclock );
   # generate a random 16 character string 
   # to simulate there being an input grid. 
  my $regexen = sub { 
      my @letters = 'A' .. 'Z' ; 
      my @lo = ();
      for( 1..16 ){ 
          push @lo , $letters[ int( rand( 25 ) ) ];
      }
      my $c  = join '', @lo;
      $c = "[^$c]";
      return qr/$c/i;
  };
  cmpthese( 200 , { 
      filtered => sub { 
          readDict('dict.txt', $regexen->() );
      }, 
      unfiltered => sub {
          readDict_nofilter('dict.txt');
      }
  });
}
           s/iter unfiltered   filtered
unfiltered   8.16         --       -94%
filtered    0.464      1658%         --

PS: 8.16 * 200 = 27 minutes.
