Are there any O(1/n) algorithms?
Or anything else less than O(1)?
Current answer
Here is a simple O(1/n) algorithm. And it even does something interesting!
function foo(list input) {
    int m;
    double output;
    // max_value is some fixed constant, so m shrinks as the input grows
    m = (1 / input.size) * max_value;
    output = 0;
    for (int i = 0; i < m; i++)
        output += random(0, 1);
    return output;
}
O(1/n) is possible, as it describes how the running time of a function changes as the size of the input increases. If we are using the function 1/n to describe the number of instructions a function executes, then there is no requirement that the function take zero instructions for any input size. Rather, for every input size n above some threshold, the number of instructions required is bounded above by a positive constant multiplied by 1/n. As there is no actual n for which 1/n is 0, and the constant is positive, there is no reason why the function would be constrained to take zero or fewer instructions.
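For concreteness, here is a rough, runnable Python version of the pseudocode above (a sketch only; MAX_VALUE is a stand-in for the pseudocode's max_value constant):

import random

MAX_VALUE = 100000  # stand-in constant: iteration count at input size 1

def foo(items):
    # The loop count shrinks as the input grows: m is about MAX_VALUE / len(items).
    m = int((1 / len(items)) * MAX_VALUE)
    output = 0.0
    for _ in range(m):
        output += random.uniform(0, 1)
    return output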
Other answers
Most of the other answers interpret big-O as being exclusively about an algorithm's running time. But since the question doesn't say so, I thought it worth mentioning the other application of big-O, in numerical analysis, which is about error.
Many algorithms can be O(h^p) or O(n^{-p}) depending on whether you're talking about step size (h) or the number of divisions (n). For example, in Euler's method, you look for an estimate of y(h) given that you know y(0) and dy/dx (the derivative of y). Your estimate of y(h) is more accurate the closer h is to 0. So in order to find y(x) for some arbitrary x, one takes the interval from 0 to x, splits it into n pieces, and runs Euler's method at each point to get from y(0) to y(x/n) to y(2x/n), and so on.
So Euler's method is an O(h), or O(1/n), algorithm, where h is usually interpreted as the step size and n as the number of times you divide an interval.
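As a rough illustration (my own sketch, using dy/dx = y with y(0) = 1, so the exact value of y(1) is e), the global error of Euler's method shrinks roughly like 1/n as the number of steps n grows:

import math

def euler(f, y0, x_end, n):
    # March from 0 to x_end in n equal steps of size h = x_end / n.
    h = x_end / n
    y = y0
    for i in range(n):
        y += h * f(i * h, y)
    return y

# The error drops by roughly a factor of 10 each time n grows by 10: O(1/n).
for n in (10, 100, 1000):
    print(n, abs(euler(lambda x, y: y, 1.0, 1.0, n) - math.e))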
In practical numerical analysis applications you can also get O(1/h), because of floating-point rounding error. The smaller your step, the more cancellation occurs in some algorithm implementations, the more significant digits are lost, and therefore the more error propagates through the algorithm.
For Euler's method, if you are using floating point and the step is small enough, you get cancellation: you're adding a small number to a big number, leaving the big number unchanged. For algorithms that estimate the derivative by subtracting two values of a function evaluated at two very close positions, approximating y'(x) with (y(x+h) - y(x)) / h, in smooth functions y(x+h) gets close to y(x), resulting in large cancellation and an estimate of the derivative with fewer significant figures. This will in turn propagate into whatever algorithm you require the derivative for (e.g., a boundary value problem).
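A quick way to see that cancellation (my own sketch): estimate y'(x) for y = sin at x = 1 with a forward difference and watch the error stop improving, then grow, as h keeps shrinking:

import math

def forward_diff(f, x, h):
    # (f(x + h) - f(x)) / h: once h is tiny, f(x + h) and f(x) share most of
    # their leading digits, so the subtraction cancels them away.
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)
for h in (1e-2, 1e-6, 1e-10, 1e-14):
    print(h, abs(forward_diff(math.sin, 1.0, h) - exact))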
I guess anything less than O(1) is not possible: any time at all taken by an algorithm is termed O(1). But what about the function below for O(1/n)? (I know many variants have already appeared in these answers, but I guess they all have some flaws; nothing major, and they explain the concept well.) Here is one, just for the sake of argument:
from time import sleep

def one_by_n(n, C=10):      # n could be a float; C can be any positive number
    if n <= 0.0:            # if the input is actually 0, loop forever
        while True:
            sleep(1)        # or pass
        return              # not needed and unreachable
    delta = 0.0001
    itr = delta
    while itr < C / n:      # runs roughly C / (n * delta) times
        itr += delta
So as n increases, the function takes less and less time. Also, if the input is actually 0, the function will take forever to return.
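A quick check of that claim (a sketch only, timing the one_by_n function above with its default C and delta):

import time

for n in (1, 10, 100, 1000):
    start = time.perf_counter()
    one_by_n(n)  # the loop runs roughly C / (n * delta) times
    print(n, time.perf_counter() - start)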
One might argue that this will be limited by machine precision, and so, since it has an upper bound, it is O(1). But we can bypass that too, by passing n and C as strings and doing the addition and comparison on the strings. That way n can be made arbitrarily small, so the function's running time has no upper bound, even if we ignore n = 0.
I also believe we can't simply say the running time is O(1/n); we should write it as O(1 + 1/n).
What about this:
void FindRandomInList(list l)
{
    while (1)
    {
        // Keep drawing random values until one is in the list;
        // the bigger the list, the fewer draws we expect to need.
        int rand = Random.next();
        if (l.contains(rand))
            return;
    }
}
As the size of the list grows, the expected running time of the program decreases.
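A rough simulation of that claim (my own sketch, drawing integers from a fixed range so that larger lists cover more of it):

import random

def average_tries(list_size, universe=1000, trials=2000):
    # Average number of random draws until we hit something in the list;
    # roughly universe / list_size, so it falls as the list grows.
    values = set(random.sample(range(universe), list_size))
    total = 0
    for _ in range(trials):
        tries = 1
        while random.randrange(universe) not in values:
            tries += 1
        total += tries
    return total / trials

for size in (10, 100, 500):
    print(size, average_tries(size))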
You can't go below O(1), but O(k), where k is less than N, is possible. These are called sublinear time algorithms. For some problems, a sublinear time algorithm can only give an approximate solution. Sometimes, though, an approximate solution is fine, perhaps because the dataset is too large or because computing over all of the data is too computationally expensive.
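One common sublinear pattern is sampling: look at only k elements, with k much smaller than N, and accept an approximate answer (a sketch, estimating a mean from a fixed-size random sample):

import random

def approx_mean(data, k=1000):
    # O(k) work no matter how large data is; only the accuracy of the
    # estimate, not the running time, depends on len(data).
    sample = random.choices(data, k=k)
    return sum(sample) / k

print(approx_mean(list(range(1_000_000))))  # close to 499999.5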
There is nothing smaller than O(1). Big-O notation denotes the largest order of complexity of an algorithm.
If an algorithm's running time is n³ + n² + n + 5, then it is O(n³). The lower powers don't matter at all here, because as n → +∞, n² is irrelevant compared to n³.
Likewise, as n → ∞, O(1/n) is irrelevant compared to O(1), so 3 + O(1/n) is the same as O(1), which makes O(1) the smallest possible computational complexity.
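The same point numerically (a trivial sketch): the ratio of n³ + n² + n + 5 to n³ tends to 1, just as 3 + 1/n tends to 3:

for n in (10, 1000, 100000):
    print(n, (n**3 + n**2 + n + 5) / n**3, 3 + 1 / n)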