Many sites offer statistics like "the hottest topics of the last 24 hours". For example, Topix.com shows this in its "News Trends" section, where you can see the topics whose mention counts are growing fastest.

I'd like to compute this kind of "buzz" for a topic as well. How could I do that? The algorithm should weight down topics that are always hot, so that topics which normally (almost) nobody mentions can surface as the hottest ones.

Google offers "Hot Trends", topix.com shows "Hot Topics", fav.or.it shows "Keyword Trends" - all of these services have one thing in common: they only show you rising trends that are abnormally hot right now.

Terms like "Britney Spears", "weather" or "Paris Hilton" won't appear in these lists because they are always hot and high-frequency. The linked article calls this "the Britney Spears problem".

My question: how can I write an algorithm, or use an existing one, to solve this? Given a list of the keywords searched during the last 24 hours, the algorithm should show you the 10 (for example) hottest ones.

I know the article above mentions some kind of algorithm. I tried to code it in PHP, but I don't think it works. It just finds the topics with the highest counts, doesn't it?

I hope you can help me (coding examples would be great).


Current answer

This problem calls for a z-score (standard score), which takes the historic average into account, as others have mentioned, but also the standard deviation of the historic data, making it more robust than using the average alone.

In your case the z-score is calculated by the following formula, where the trend is a rate such as views per day.

z-score = ([current trend] - [average historic trends]) / [standard deviation of historic trends]

When using the z-score, the further it is from zero in either direction, the more abnormal the trend: a strongly positive z-score means the trend is rising abnormally, while a strongly negative one means it is falling abnormally. So once you have computed the z-score of every candidate trend, the 10 highest z-scores correspond to the most abnormally increasing ones.

See Wikipedia for more information about z-scores.

Code

from math import sqrt

def zscore(obs, pop):
    # Size of population.
    number = float(len(pop))
    # Average population value.
    avg = sum(pop) / number
    # Standard deviation of population.
    std = sqrt(sum(((c - avg) ** 2) for c in pop) / number)
    # Zscore Calculation.
    return (obs - avg) / std

Sample output

>>> zscore(12, [2, 4, 4, 4, 5, 5, 7, 9])
3.5
>>> zscore(20, [21, 22, 19, 18, 17, 22, 20, 20])
0.0739221270955
>>> zscore(20, [21, 22, 19, 18, 17, 22, 20, 20, 1, 2, 3, 1, 2, 1, 0, 1])
1.00303599234
>>> zscore(2, [21, 22, 19, 18, 17, 22, 20, 20, 1, 2, 3, 1, 2, 1, 0, 1])
-0.922793112954
>>> zscore(9, [1, 2, 0, 3, 1, 3, 1, 2, 9, 8, 7, 10, 9, 5, 2, 4, 1, 1, 0])
1.65291949506

Notes

You can use this method with a sliding window (e.g. the last 30 days) if you don't want to take too much history into account, which will make short-term trends more pronounced and can cut down on processing time.

You could also apply the z-score to values such as the change in views from one day to the next, to locate abnormal increases/decreases in views per day. This is like using the slope or derivative of the views-per-day graph.

If you keep track of the current size of the population, the current total of the population, and the current total of x^2 of the population, you don't need to recalculate these values, only update them, and hence you only need to keep these running values for the history rather than each data value. The following code demonstrates this.

from math import sqrt

class zscore:
    def __init__(self, pop = []):
        # Running size, sum and sum of squares of the population.
        self.number = float(len(pop))
        self.total = sum(pop)
        self.sqrTotal = sum(x ** 2 for x in pop)
    def update(self, value):
        # Fold a new observation into the running totals.
        self.number += 1.0
        self.total += value
        self.sqrTotal += value ** 2
    def avg(self):
        return self.total / self.number
    def std(self):
        # Population standard deviation from the running totals.
        return sqrt((self.sqrTotal / self.number) - self.avg() ** 2)
    def score(self, obs):
        return (obs - self.avg()) / self.std()

Using this method your workflow would be as follows. For each topic, tag, or page, create floating point fields in your database for the total number of days, the sum of views, and the sum of views squared. If you have historic data, initialize these fields with it; otherwise initialize them to zero. At the end of each day, calculate the z-score of that day's view count against the historic data stored in the three database fields. The topics, tags, or pages with the highest X z-scores are your X "hottest trends" of the day. Finally, update each of the three fields with the day's value and repeat the process the next day.
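As a rough, hypothetical sketch of that end-of-day step (the stats dictionary below stands in for the three per-topic database fields, and all counts are invented for illustration), the ranking could look like this:

from math import sqrt

# Hypothetical in-memory stand-in for the three per-topic database fields:
# [number of days, sum of views, sum of squared views].
stats = {
    "britney spears": [30.0, 150000.0, 800000000.0],
    "some new meme":  [30.0, 600.0, 14000.0],
}

def day_end_ranking(todays_views, stats, top_x=10):
    # Score each topic's views today against its own history.
    scored = []
    for topic, views in todays_views.items():
        number, total, sqr_total = stats.get(topic, [0.0, 0.0, 0.0])
        if number == 0:
            continue                    # no history yet for this topic
        avg = total / number
        variance = sqr_total / number - avg ** 2
        if variance <= 0:
            continue                    # constant history, no meaningful score
        scored.append((topic, (views - avg) / sqrt(variance)))
    # Roll today's values into the history for tomorrow's run.
    for topic, views in todays_views.items():
        number, total, sqr_total = stats.get(topic, [0.0, 0.0, 0.0])
        stats[topic] = [number + 1.0, total + views, sqr_total + views ** 2]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_x]

print(day_end_ranking({"britney spears": 5200, "some new meme": 400}, stats))

In this toy data the small topic with 400 views today easily outranks the big topic with 5200 views, because each is scored against its own history.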

Addition

Normal z-scores as discussed above do not take into account the order of the data and hence the z-score for an observation of '1' or '9' would have the same magnitude against the sequence [1, 1, 1, 1, 9, 9, 9, 9]. Obviously for trend finding, the most current data should have more weight than older data and hence we want the '1' observation to have a larger magnitude score than the '9' observation. In order to achieve this I propose a floating average z-score. It should be clear that this method is NOT guaranteed to be statistically sound but should be useful for trend finding or similar. The main difference between the standard z-score and the floating average z-score is the use of a floating average to calculate the average population value and the average population value squared. See code for details:

Code

from math import sqrt

class fazscore:
    def __init__(self, decay, pop = []):
        self.sqrAvg = self.avg = 0
        # The rate at which the historic data's effect will diminish.
        self.decay = decay
        for x in pop: self.update(x)
    def update(self, value):
        # Set initial averages to the first value in the sequence.
        if self.avg == 0 and self.sqrAvg == 0:
            self.avg = float(value)
            self.sqrAvg = float((value ** 2))
        # Calculate the average of the rest of the values using a 
        # floating average.
        else:
            self.avg = self.avg * self.decay + value * (1 - self.decay)
            self.sqrAvg = self.sqrAvg * self.decay + (value ** 2) * (1 - self.decay)
        return self
    def std(self):
        # Somewhat ad-hoc standard deviation calculation.
        return sqrt(self.sqrAvg - self.avg ** 2)
    def score(self, obs):
        if self.std() == 0: return (obs - self.avg) * float("infinity")
        else: return (obs - self.avg) / self.std()

Sample input/output

>>> fazscore(0.8, [1, 1, 1, 1, 1, 1, 9, 9, 9, 9, 9, 9]).score(1)
-1.67770595327
>>> fazscore(0.8, [1, 1, 1, 1, 1, 1, 9, 9, 9, 9, 9, 9]).score(9)
0.596052006642
>>> fazscore(0.9, [2, 4, 4, 4, 5, 5, 7, 9]).score(12)
3.46442230724
>>> fazscore(0.9, [2, 4, 4, 4, 5, 5, 7, 9]).score(22)
7.7773245459
>>> fazscore(0.9, [21, 22, 19, 18, 17, 22, 20, 20]).score(20)
-0.24633160155
>>> fazscore(0.9, [21, 22, 19, 18, 17, 22, 20, 20, 1, 2, 3, 1, 2, 1, 0, 1]).score(20)
1.1069362749
>>> fazscore(0.9, [21, 22, 19, 18, 17, 22, 20, 20, 1, 2, 3, 1, 2, 1, 0, 1]).score(2)
-0.786764452966
>>> fazscore(0.9, [1, 2, 0, 3, 1, 3, 1, 2, 9, 8, 7, 10, 9, 5, 2, 4, 1, 1, 0]).score(9)
1.82262469243
>>> fazscore(0.8, [40] * 200).score(1)
-inf

Update

As David Kemp correctly pointed out, if a constant series of values is given and then a z-score is requested for an observed value which differs from the other values, the result should be non-zero. In fact, the value returned should be infinite. So I changed this line,

if self.std() == 0: return 0

to:

if self.std() == 0: return (obs - self.avg) * float("infinity")

This change is reflected in the fazscore solution code above. If you prefer not to deal with infinite values, an acceptable solution could be to change the line to:

if self.std() == 0: return obs - self.avg

Other answers

You need an algorithm that measures the velocity of a topic - in other words, if you graph it, you want to show the topics that are going up at an incredible rate.

This is the first derivative of the trend line, and it is not hard to incorporate it as a weighted factor in your overall calculation.

Normalize

One technique you'll need is to normalize all your data. For each topic you are following, keep a very low-pass filter that defines that topic's baseline. Now every data point that comes in about that topic should be normalized - subtract its baseline and you'll get ALL of your topics near 0, with spikes above and below the line. You may instead want to divide the signal by its baseline magnitude, which will bring the signal to around 1.0 - this not only brings all signals in line with each other (normalizes the baseline), but also normalizes the spikes. A Britney spike is going to be magnitudes larger than someone else's spike, but that doesn't mean you should pay attention to it - the spike may be very small relative to her baseline.
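As a minimal sketch of that normalization step (the low-pass filter here is a simple exponential filter, and the hit counts and alpha value are invented, not taken from the answer):

# Keep a slow-moving baseline per topic (very low-pass filter) and
# normalize each new data point by dividing by that baseline.

def update_baseline(baseline, value, alpha=0.05):
    """The baseline drifts only slowly toward new values."""
    return baseline * (1 - alpha) + value * alpha

def normalized(value, baseline):
    """Divide by the baseline so every topic hovers around 1.0."""
    return value / baseline if baseline else 0.0

hits = [5000, 5100, 4900, 5050, 9800]   # hypothetical daily hits for one topic
baseline = hits[0]
for h in hits:
    print(round(normalized(h, baseline), 3))   # the last value stands out near 2.0
    baseline = update_baseline(baseline, h)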

Derive

Once you've normalized everything, figure out the slope of each topic. Take two consecutive points and measure the difference: a positive difference means an upward trend, a negative one a downward trend. Then you can compare the normalized differences and find out which topics are rising in popularity compared with the others - with each topic scaled to its own "normal", which may be orders of magnitude different from other topics.
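A rough sketch of that slope comparison, using invented normalized series for two hypothetical topics:

# Slope of the normalized signal: the difference between consecutive
# normalized points, then topics ranked by their latest slope.
normalized_series = {
    "britney spears": [1.00, 1.01, 0.99, 1.02, 1.03],  # huge baseline, flat signal
    "obscure meme":   [1.00, 1.10, 1.40, 2.50, 4.00],  # tiny baseline, spiking
}

def latest_slope(series):
    return series[-1] - series[-2]

ranking = sorted(normalized_series,
                 key=lambda topic: latest_slope(normalized_series[topic]),
                 reverse=True)
print(ranking)   # ['obscure meme', 'britney spears']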

This is really just a first cut at the problem. You'll also need more advanced techniques later (mostly a combination of the above with other algorithms, weighted to suit your needs), but it should be enough to get you started.

About the article

The article is about topic trending, but it's not about how to calculate what's hot and what's not; it's about how to process the huge amount of information that such an algorithm must handle at places like Lycos and Google. The space and time required to give each topic a counter, and to find each topic's counter when a search on it goes through, is huge. This article is about the challenges one faces when attempting such a task. It does mention the Britney effect, but it doesn't talk about how to overcome it.

As Nixuz pointed out, this is also referred to as a z-score or standard score.

Perhaps a simple gradient of topic frequency would work - a large positive gradient = rapidly growing popularity.

The simplest approach would be to bin the number of searches per day, so that you have

searches = [ 10, 7, 14, 8, 9, 12, 55, 104, 100 ]

and then look at how much it changes from day to day:

hot_factor = [ b-a for a, b in zip(searches[:-1], searches[1:]) ]
# hot_factor is [ -3, 7, -6, 1, 3, 43, 49, -4 ]

Then just apply some kind of threshold so that days where the increase was > 50 are considered "hot". You can make this far more sophisticated if you like: instead of the absolute difference, use the relative difference, so that going from 100 to 150 is considered hot, but 1000 to 1050 isn't. Or use a more complicated gradient that takes trends over more than one day into account.
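A small sketch of the relative-difference variant, reusing the same searches list (the 0.5 threshold is just an illustrative choice):

searches = [10, 7, 14, 8, 9, 12, 55, 104, 100]

# Relative day-to-day change: (today - yesterday) / yesterday.
rel_factor = [(b - a) / float(a) for a, b in zip(searches[:-1], searches[1:])]
# rel_factor is roughly [-0.3, 1.0, -0.43, 0.13, 0.33, 3.58, 0.89, -0.04]

THRESHOLD = 0.5   # illustrative: "hot" means at least a 50% jump over yesterday
hot_days = [i + 1 for i, r in enumerate(rel_factor) if r > THRESHOLD]
print(hot_days)   # days 2, 6 and 7 in this toy series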

Chad Birch and Adam Davis are right that you'll have to look backwards to establish a baseline. Your question, as phrased, suggests that you only want to look at the last 24 hours of data, and that won't quite work.

One way to give your data some memory without having to query a large body of historical data is to use an exponential moving average. The advantage is that you can update it once per period and then flush all the old data, so you only need to remember a single value. So if your period is a day, you have to maintain a "daily average" attribute for each topic, which you can update by:

a_n = a_(n-1)*b + c_n*(1-b)

where a_n is the moving average as of day n, b is some constant between 0 and 1 (the closer to 1, the longer the memory), and c_n is the number of hits on day n. The beauty is that if you perform this update at the end of day n, you can then flush c_n and a_(n-1).

One caveat is that it is initially sensitive to whatever you pick for the initial value of a.

EDIT

If it helps to visualize this approach, take n = 5, a_0 = 1, and b = .9.

Say the new values are 5, 0, 0, 1, 4:

a_0 = 1
c_1 = 5 : a_1 = .9*1 + .1*5 = 1.4
c_2 = 0 : a_2 = .9*1.4 + .1*0 = 1.26
c_3 = 0 : a_3 = .9*1.26 + .1*0 = 1.134
c_4 = 1 : a_4 = .9*1.134 + .1*1 = 1.1206
c_5 = 4 : a_5 = .9*1.1206 + .1*4 = 1.40854

Doesn't look much like an average, does it? Note how the value stays close to 1 even though our inputs included a 5. What's going on? If you expand out the calculation, you get:

a_n = (1-b)*c_n + (1-b)*b*c_(n-1) + (1-b)*b^2*c_(n-2) + ... + (leftover weight)*a_0

What do I mean by leftover weight? In any average, all the weights must sum to 1. If n were infinite and the sum could go on forever, then all the weights would sum to 1. But if n is relatively small, the initial value a_0 still carries a fair amount of weight.
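A quick numeric check of that expansion: with b = .9 and n = 5, as in the worked example above, the weights on the five inputs plus the leftover weight on a_0 sum to 1, and a_0 still holds roughly 59% of the weight, which is why the result stays near 1.

# Weight expansion check: each input c_(n-k) gets weight (1-b)*b^k and
# the initial value a_0 keeps the leftover weight b^n.
b, n = 0.9, 5
input_weights = [(1 - b) * b ** k for k in range(n)]
leftover = b ** n
print(sum(input_weights) + leftover)   # -> 1.0 (up to floating-point rounding)
print(leftover)                        # -> ~0.59, a_0 still dominates after 5 days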

If you study the formula above, you should realize a few things about this approach:

All data contributes to the average forever, although in practice there is a point where the contribution becomes vanishingly small.
Recent values contribute more than older values.
The higher b is, the less important new values are and the longer old values matter. However, the higher b is, the more data you need to water down the initial value of a.

I think the first two properties are exactly what you're looking for. To give you an idea of how simple this is, here is a python implementation (minus all of the database interaction):

>>> class EMA(object):
...  def __init__(self, base, decay):
...   self.val = base
...   self.decay = decay
...   print(self.val)
...  def update(self, value):
...   self.val = self.val*self.decay + (1-self.decay)*value
...   print(self.val)
... 
>>> a = EMA(1, .9)
1
>>> a.update(10)
1.9
>>> a.update(10)
2.71
>>> a.update(10)
3.439
>>> a.update(10)
4.0951
>>> a.update(10)
4.68559
>>> a.update(10)
5.217031
>>> a.update(10)
5.6953279
>>> a.update(10)
6.12579511
>>> a.update(10)
6.513215599
>>> a.update(10)
6.8618940391
>>> a.update(10)
7.17570463519
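The answer above leaves the database side and the actual hotness ranking open; one possible sketch (purely an assumption, not part of the original answer) is to compare each topic's count for the current period against its moving average, for example as a ratio, and rank by that:

# Hypothetical sketch: one EMA per topic as its baseline; hotness is today's
# hits divided by that baseline, and topics are ranked by the ratio.
class EMA(object):
    def __init__(self, base, decay):
        self.val = float(base)
        self.decay = decay
    def update(self, value):
        self.val = self.val * self.decay + (1 - self.decay) * value
        return self.val

baselines = {"britney spears": EMA(5000, 0.9), "new meme": EMA(20, 0.9)}
todays_hits = {"britney spears": 5500, "new meme": 400}

hotness = {topic: todays_hits[topic] / baselines[topic].val for topic in todays_hits}
for topic in sorted(hotness, key=hotness.get, reverse=True):
    print(topic, round(hotness[topic], 2))       # 'new meme' ranks first here
    baselines[topic].update(todays_hits[topic])  # roll the baseline forward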

I worked on a project where my aim was to find trending topics from the live Twitter stream and also to do sentiment analysis on them (finding whether a trending topic is being talked about positively or negatively). I used Storm for handling the Twitter stream. I've published my report as a blog post: http://sayrohan.blogspot.com/2013/06/finding-trending-topics-and-trending.html I used total count and z-score for the ranking. The approach I used is a bit generic, and in the discussion section I've mentioned how the system can be extended for non-Twitter applications. Hope the information helps.

I wonder if it would be possible to use the regular physics acceleration formula in a case like this?

(v2 - v1) / t, or dv/dt

We could treat v1 as the initial rate of likes/votes/comments per hour and v2 as the current "velocity" per hour over the last 24 hours?

This is more of a question than an answer, but it seems it might work. Whatever content is accelerating fastest would be the trending topic...
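A tiny sketch of that idea, treating v1 and v2 as the average hourly rates in two consecutive 24-hour windows (all counts below are made up for illustration):

# Made-up hourly comment counts for one topic over the last 48 hours.
hourly_counts = [3] * 24 + [3, 4, 6, 9, 14, 20] + [28] * 18

v1 = sum(hourly_counts[:24]) / 24.0    # average rate in the previous 24 hours
v2 = sum(hourly_counts[24:]) / 24.0    # average rate in the last 24 hours
t = 24.0                               # hours between the two windows
acceleration = (v2 - v1) / t
print(acceleration)                    # comments per hour, per hour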

I believe this doesn't solve the Britney problem, though :-)