I've been poring through .NET disassemblies and the GCC source code, but I can't seem to find the actual implementation of sin() and the other math functions anywhere... they always seem to be referencing something else.

Can anyone help me find them? It seems unlikely that every piece of hardware C runs on supports trig functions in hardware, so there must be a software algorithm somewhere, right?


I'm aware of several ways the functions can be calculated, and for fun I've written my own routines that compute them using Taylor series. I'm curious how real, production languages do it, because all of my implementations are always several orders of magnitude slower, even though I thought my algorithms were pretty clever (apparently they're not).


Current answer

I'll try to answer for the case of sin() in a C program compiled with GCC's C compiler on a current x86 processor (let's say an Intel Core 2 Duo).

In the C language, the standard C library includes common math functions that are not part of the language itself (e.g. pow, sin and cos for power, sine and cosine respectively). They are declared in the math.h header.

Now, on a GNU/Linux system these library functions are provided by glibc (GNU libc, or the GNU C Library). But the GCC compiler wants you to link against the math library (libm.so) with the -lm compiler flag to enable use of these math functions. I'm not sure why it isn't part of the standard C library. These would be the software versions of the floating-point functions, or "soft-float".
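For example, a minimal program like the one below will normally fail to link without -lm (the file name and the exact gcc invocation in the comment are just illustrative; the argument is read from the command line so the compiler cannot simply fold the call away at compile time):

/* sin_demo.c -- illustrative only; build with something like:
 *     gcc sin_demo.c -o sin_demo -lm
 * Leaving off -lm typically produces an "undefined reference to `sin'"
 * error from the linker.
 */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    double x = (argc > 1) ? atof(argv[1]) : 0.5;  /* runtime value, not a constant */
    printf("sin(%g) = %.17g\n", x, sin(x));
    return 0;
}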

An aside: the reason for keeping the math functions separate is historic; as far as I know, it was simply to reduce the size of executable programs on very old Unix systems, possibly before shared libraries were available.

Now the compiler may optimize the standard C library function sin() (provided by libm.so) to be replaced with a call to a native instruction to your CPU/FPU's built-in sin() function, which exists as an FPU instruction (FSIN for x86/x87) on newer processors like the Core 2 series (this is correct pretty much as far back as the i486DX). This would depend on optimization flags passed to the gcc compiler. If the compiler was told to write code that would execute on any i386 or newer processor, it would not make such an optimization. The -mcpu=486 flag would inform the compiler that it was safe to make such an optimization.
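Just to make the hardware path concrete, here is what invoking that FPU instruction yourself looks like with GCC inline assembly. This is purely an illustration of the "native instruction" case (the function name is mine), not what the compiler or glibc actually emits:

/* Illustration only: call the x87 FSIN instruction directly.
 * The "t" constraint is GCC's name for the top of the x87 register
 * stack, st(0); FSIN replaces st(0) with its sine. Requires an
 * x86/x87 target and GCC-style inline assembly.
 */
static double fsin_x87(double x)
{
    __asm__ ("fsin" : "+t" (x));
    return x;
}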

Now, if the program does execute the software version of the sin() function, it will be based on a CORDIC (COordinate Rotation DIgital Computer) or BKM algorithm, or more likely on the table or power-series calculations that are more commonly used nowadays to compute such transcendental functions. (Src: http://en.wikipedia.org/wiki/Cordic#Application)
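To give a flavour of the CORDIC approach mentioned above, here is a floating-point sketch of CORDIC in rotation mode. It is only an illustration of the idea, not glibc's code; a real implementation would use fixed-point arithmetic and a precomputed table of angles rather than calling atan() and sqrt() on the fly as done here:

#include <math.h>

/* CORDIC sketch, rotation mode, for an angle in [-PI/2, PI/2].
 * Each iteration rotates the vector (x, y) by +/- atan(2^-i), driving
 * the residual angle z toward zero. The rotations are not length-
 * preserving, so the accumulated scale factor is divided out at the end.
 */
static void cordic_sincos(double theta, double *sin_out, double *cos_out)
{
    double x = 1.0, y = 0.0, z = theta;
    double pow2 = 1.0;  /* 2^-i */
    double gain = 1.0;  /* product of sqrt(1 + 2^-2i) */

    for (int i = 0; i < 40; i++) {
        double d  = (z >= 0.0) ? 1.0 : -1.0;
        double xn = x - d * y * pow2;
        double yn = y + d * x * pow2;
        z -= d * atan(pow2);
        x = xn;
        y = yn;
        gain *= sqrt(1.0 + pow2 * pow2);
        pow2 *= 0.5;
    }
    *cos_out = x / gain;
    *sin_out = y / gain;
}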

Any recent version of gcc (since around 2.9.x) also offers a built-in version of sin, __builtin_sin(), which it will use in place of the standard call to the C library version, as an optimization.
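You can call that builtin explicitly if you want to see it in isolation; with optimization on, an ordinary call to sin() is normally treated the same way, and -fno-builtin turns the substitution off. This snippet is just an example, and may still need -lm at link time if the compiler decides to emit a real library call:

#include <stdio.h>

int main(void)
{
    double x = 1.0;
    /* No header is needed for the builtin itself; GCC may even fold
     * this to a constant at compile time. */
    printf("%.17g\n", __builtin_sin(x));
    return 0;
}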

I hope that is reasonably clear, but it should at least give you more information than you were expecting, and plenty of jumping-off points to learn more on your own.

Other answers

If you want to look at the actual GNU implementation of those functions in C, check out the latest trunk of glibc. See The GNU C Library.

The actual implementation of the library functions is up to the specific compiler and/or library provider. Whether it's done in hardware or software, whether it's a Taylor expansion or not, and so on, will vary.

I realize that's of absolutely no help.

Improved version of the code from Blindy's answer

#define PI 3.14159265358979323846
#define EPSILON .0000000000001
// this is the smallest effective threshold, at least on my OS (WSL Ubuntu 18)
// possibly because the factorial part turns 0 at some point
// and that happens faster than the series element turning 0;
// validation was made against sin() from <math.h>
double ft_sin(double x)
{
    int k = 2;
    double r;
    double acc = 1;
    double den = 1;
    double num;

//  precision drops rapidly when x is not close to 0,
//  so first reduce x by full periods of 2*PI,
//  then fold it into [-PI/2, PI/2]
    while (x > PI)
        x -= 2 * PI;
    while (x < -PI)
        x += 2 * PI;
    if (x > PI / 2)
        return (ft_sin(PI - x));
    if (x < -PI / 2)
        return (ft_sin(-PI - x));
//  start the series from the reduced argument
    r = x;
    num = x;
//  not using fabs for performance reasons
    while (acc > EPSILON || acc < -EPSILON)
    {
        num *= -x * x;
        den *= k * (k + 1);
        acc = num / den;
        r += acc;
        k += 2;
    }
    return (r);
}
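A quick way to sanity-check the routine against the library version (assuming it is compiled in the same file as the function above and linked with -lm) is something like:

#include <math.h>
#include <stdio.h>

/* Compare ft_sin() against sin() from <math.h> over a range of inputs
 * and report the largest absolute difference seen.
 */
int main(void)
{
    double worst = 0.0;

    for (double x = -10.0; x <= 10.0; x += 0.001) {
        double diff = fabs(ft_sin(x) - sin(x));
        if (diff > worst)
            worst = diff;
    }
    printf("max |ft_sin(x) - sin(x)| = %g\n", worst);
    return 0;
}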

If you want a software implementation rather than a hardware one, the place to look for a definitive answer to this question is Chapter 5 of Numerical Recipes. My copy is in a box, so I can't give details, but the short version (as far as I remember) is that you take tan(theta/2) as your primitive operation and compute the others from there. The computation is done with a series approximation, but one that converges much more quickly than a Taylor series.
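I don't have the book either, but the underlying identities are standard: with t = tan(theta/2), sin(theta) = 2t / (1 + t^2) and cos(theta) = (1 - t^2) / (1 + t^2). A rough sketch of that structure (not the actual Numerical Recipes routine; tan() from <math.h> stands in here for whatever fast series approximation would be used for the primitive, and theta has to stay away from odd multiples of PI, where tan(theta/2) blows up):

#include <math.h>

/* Derive sine and cosine from the single primitive tan(theta/2),
 * using the half-angle identities
 *     sin(theta) = 2t / (1 + t^2)
 *     cos(theta) = (1 - t^2) / (1 + t^2)
 */
static void sincos_from_half_tan(double theta, double *s, double *c)
{
    double t  = tan(theta / 2.0);
    double t2 = t * t;

    *s = 2.0 * t / (1.0 + t2);
    *c = (1.0 - t2) / (1.0 + t2);
}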

Sorry, I can't remember more without getting my hands on the book.

Chebyshev polynomials, as mentioned in another answer, are the polynomials for which the largest difference between the function and the polynomial is as small as possible. That's an excellent start.

In some cases it's not the maximum error you're interested in, but the maximum relative error. For example, for the sine function the error near x = 0 should be much smaller than for larger values; you want a small relative error. So you would calculate the Chebyshev polynomial for sin x / x, and multiply that polynomial by x.

Next you have to figure out how to evaluate the polynomial. You want to evaluate it in such a way that the intermediate values are small and therefore rounding errors are small. Otherwise the rounding errors might become a lot larger than errors in the polynomial. And with functions like the sine function, if you are careless then it may be possible that the result that you calculate for sin x is greater than the result for sin y even when x < y. So careful choice of the calculation order and calculation of upper bounds for the rounding error are needed.

For example, sin x = x - x^3/6 + x^5/120 - x^7/5040... If you naively calculate sin x = x * (1 - x^2/6 + x^4/120 - x^6/5040...), then the function in parentheses is decreasing, and it can happen that if y is the next larger number after x, then sin y is sometimes smaller than sin x. Instead, calculate sin x = x - x^3 * (1/6 - x^2/120 + x^4/5040...), where this cannot happen.
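As a concrete illustration of the two arrangements, with the plain Taylor coefficients standing in for the minimax coefficients a real library would use:

/* Two ways of evaluating the same truncated series for sin x. */

/* naive form: x * (1 - x^2/6 + x^4/120 - x^6/5040) */
static double sin_naive(double x)
{
    double x2 = x * x;
    return x * (1.0 - x2 / 6.0 + x2 * x2 / 120.0 - x2 * x2 * x2 / 5040.0);
}

/* factored form: x - x^3 * (1/6 - x^2/120 + x^4/5040) */
static double sin_factored(double x)
{
    double x2 = x * x;
    return x - x * x2 * (1.0 / 6.0 - x2 / 120.0 + x2 * x2 / 5040.0);
}

Stepping through adjacent doubles with nextafter() from <math.h> and comparing the outputs of the two forms is an easy way to hunt for the monotonicity violations described above.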

For example, when calculating a Chebyshev polynomial you usually need to round the coefficients to double precision. But while the Chebyshev polynomial itself is optimal, the Chebyshev polynomial with its coefficients rounded to double precision is not the optimal polynomial with double-precision coefficients!

For example for sin (x), where you need coefficients for x, x^3, x^5, x^7 etc. you do the following: Calculate the best approximation of sin x with a polynomial (ax + bx^3 + cx^5 + dx^7) with higher than double precision, then round a to double precision, giving A. The difference between a and A would be quite large. Now calculate the best approximation of (sin x - Ax) with a polynomial (b x^3 + cx^5 + dx^7). You get different coefficients, because they adapt to the difference between a and A. Round b to double precision B. Then approximate (sin x - Ax - Bx^3) with a polynomial cx^5 + dx^7 and so on. You will get a polynomial that is almost as good as the original Chebyshev polynomial, but much better than Chebyshev rounded to double precision.

Next you should take into account the rounding errors in the choice of polynomial. You found a polynomial with minimum error in the polynomial ignoring rounding error, but you want to optimise polynomial plus rounding error. Once you have the Chebyshev polynomial, you can calculate bounds for the rounding error. Say f (x) is your function, P (x) is the polynomial, and E (x) is the rounding error. You don't want to optimise | f (x) - P (x) |, you want to optimise | f (x) - P (x) +/- E (x) |. You will get a slightly different polynomial that tries to keep the polynomial errors down where the rounding error is large, and relaxes the polynomial errors a bit where the rounding error is small.

All this will easily get you rounding errors of at most 0.55 times the value of the last bit, where +, -, *, / have rounding errors of at most 0.50 times the value of the last bit.