For example, multiplication and division can be implemented using bitwise operators:

i*2 = i<<1
i*3 = (i<<1) + i
i*10 = (i<<3) + (i<<1)

and so on.

Is multiplying by 10 with, say, (i<<3) + (i<<1) actually faster than using i*10 directly? Is there any sort of input that can't be multiplied or divided in this way?


It depends on the processor and the compiler. Some compilers already optimize code this way, others don't. So you need to check each time your code needs to be optimized this way.

Unless you desperately need to optimize, I wouldn't scramble my source code just to save an assembly instruction or a processor cycle.

Shifting is generally much faster than multiplication at the instruction level, but you may well be wasting your time doing premature optimization. The compiler can perform these optimizations perfectly well at compile time. Doing it yourself hurts readability and may have no effect on performance. It is probably only worth doing if you have profiled and found this to be a bottleneck.
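For instance, a minimal check you can do yourself is to compile both spellings and compare the generated assembly. This is just a sketch; it assumes gcc or clang targeting x86-64, and the function names are mine:

/* mul_check.c - a sketch: build with e.g. "gcc -O2 -S mul_check.c" and
   inspect mul_check.s. On x86-64 both functions typically end up as the
   same lea/add sequence, so the hand-written shifts buy nothing. */

int times10_mul(int i)
{
    return i * 10;               /* plain multiplication - states the intent */
}

int times10_shift(int i)
{
    return (i << 3) + (i << 1);  /* the hand-written shift version */
}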

Actually the division trick known as "magic division" can yield huge payoffs. Again, you should profile first to see if it is needed. But if you do use it, there are useful programs around to help you figure out what instructions are needed for the same division semantics. Here is an example: http://www.masm32.com/board/index.php?topic=12421.0

An example which I have lifted from the OP's thread on MASM32:

include ConstDiv.inc
...
mov eax,9999999
; divide eax by 100000
cdiv 100000
; edx = quotient

Would produce:

mov eax,9999999
mov edx,0A7C5AC47h
add eax,1
.if !CARRY?
    mul edx
.endif
shr edx,16
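The same idea can be written in portable C as well. Here is a minimal sketch (not the ConstDiv macro itself, just the underlying trick) for unsigned division by the constant 10, using the well-known magic constant 0xCCCCCCCD; the constant and the shift amount are specific to this divisor and to 32-bit operands:

#include <stdint.h>

/* Unsigned division by 10 via "multiply by reciprocal":
   0xCCCCCCCD is ceil(2^35 / 10), so (n * 0xCCCCCCCD) >> 35 equals n / 10
   for every 32-bit unsigned n. Optimizing compilers emit exactly this kind
   of code for n / 10 on their own, so this is mainly illustrative. */
static uint32_t div10(uint32_t n)
{
    return (uint32_t)(((uint64_t)n * UINT64_C(0xCCCCCCCD)) >> 35);
}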

As far as I know, on some machines multiplication can take upwards of 16 to 32 machine cycles. So yes, depending on the machine type, bitshift operators are faster than multiplication/division.

However, certain machines do have their math processor, which contains special instructions for multiplication/division.

Short answer: not likely.

Longer answer: your compiler has an optimizer in it that knows how to multiply as quickly as your target processor architecture is capable of. Your best bet is to tell the compiler your intent clearly (i.e. i*2 rather than i << 1) and let it decide what the fastest assembly/machine code sequence is. It is even possible that the processor itself has implemented the multiply instruction as a sequence of shifts and adds in microcode.

Bottom line: don't spend a lot of time worrying about this. If you mean to shift, shift. If you mean to multiply, multiply. Do the thing that is semantically clearest; your coworkers will thank you later. Or, more likely, curse you later if you do otherwise.

Shift and integer multiply instructions have similar performance on most modern CPUs; integer multiply instructions were relatively slow back in the 1980s, but in general this is no longer true. Integer multiply instructions may have higher latency, so there may still be cases where a shift is preferable. Ditto for cases where you can keep more execution units busy (though this can cut both ways).

Integer division is still relatively slow though, so using a shift instead of division by a power of two is still a win, and most compilers will implement this as an optimization. Note however that for this optimization to be valid, the dividend either needs to be unsigned or must be known to be positive. For negative dividends, the shift and the division are not equivalent!

#include <stdio.h>

int main(void)
{
    int i;

    for (i = 5; i >= -5; --i)
    {
        printf("%d / 2 = %d, %d >> 1 = %d\n", i, i / 2, i, i >> 1);
    }
    return 0;
}

Output:

5 / 2 = 2, 5 >> 1 = 2
4 / 2 = 2, 4 >> 1 = 2
3 / 2 = 1, 3 >> 1 = 1
2 / 2 = 1, 2 >> 1 = 1
1 / 2 = 0, 1 >> 1 = 0
0 / 2 = 0, 0 >> 1 = 0
-1 / 2 = 0, -1 >> 1 = -1
-2 / 2 = -1, -2 >> 1 = -1
-3 / 2 = -1, -3 >> 1 = -2
-4 / 2 = -2, -4 >> 1 = -2
-5 / 2 = -2, -5 >> 1 = -3

So if you want to help the compiler, make sure the variable or expression in the dividend is explicitly unsigned.
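As a sketch of how much that hint matters (the exact output will vary with the compiler and the target), compare what a typical x86-64 compiler does with a signed versus an unsigned dividend:

/* For the signed version the compiler must preserve round-toward-zero, so
   at -O2 it typically emits a shift plus a sign-based fixup, roughly:
       shr by 31, add, then an arithmetic shift right by 1.
   For the unsigned version a single logical shift is enough. */
int      half_signed(int x)        { return x / 2; }
unsigned half_unsigned(unsigned x) { return x / 2; }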

Just a concrete point of measure: many years ago, I benchmarked two versions of my hashing algorithm:

unsigned
hash( char const* s )
{
    unsigned h = 0;
    while ( *s != '\0' ) {
        h = 127 * h + (unsigned char)*s;
        ++ s;
    }
    return h;
}

and

unsigned
hash( char const* s )
{
    unsigned h = 0;
    while ( *s != '\0' ) {
        h = (h << 7) - h + (unsigned char)*s;
        ++ s;
    }
    return h;
}

On every machine I benchmarked it on, the first was at least as fast as the second. Somewhat surprisingly, it was sometimes faster (e.g. on a Sun Sparc). When the hardware didn't support fast multiplication (and most didn't back then), the compiler would convert the multiplication into the appropriate combinations of shifts and adds/subtracts. And because it knew the final goal, it could sometimes do so in fewer instructions than when you explicitly wrote out the shifts and the adds/subtracts.

Note that this was something like 15 years ago. Hopefully, compilers have only gotten better since then, so you can pretty much count on the compiler doing the right thing, probably better than you could. (Also, the reason the code looks so C'ish is because it was over 15 years ago. Obviously, I'd use std::string and iterators today.)

Don't do it unless you absolutely need to and your code intent requires shifting rather than multiplication/division.

On a typical day you could potentially save a few machine cycles (or lose some, since the compiler knows better what to optimize), but the cost isn't worth it: you spend time on minor details rather than the actual job, maintaining the code becomes harder, and your co-workers will curse you.

You might need to do it for high-load computations, where every saved cycle means minutes of runtime. But you should optimize one place at a time and run performance tests every time to see whether you really made it faster or broke the compiler's logic.

In addition to all the other good answers here, let me point out another reason not to use shift when you mean divide or multiply. I have never once seen someone introduce a bug by forgetting the relative precedence of multiplication and addition. I have seen bugs introduced when maintenance programmers forgot that "multiplying" via a shift is logically a multiplication but not syntactically of the same precedence as multiplication. x * 2 + z and x << 1 + z are very different! (The latter parses as x << (1 + z).)
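Here is a minimal sketch of that trap; note how the addition binds tighter than the shift:

#include <stdio.h>

int main(void)
{
    int x = 3, z = 4;
    printf("%d\n", x * 2 + z);     /* 10: multiply, then add             */
    printf("%d\n", x << 1 + z);    /* 96: parses as x << (1 + z) = 3<<5  */
    printf("%d\n", (x << 1) + z);  /* 10: what the author actually meant */
    return 0;
}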

If you are working on numbers, use arithmetic operators like + - * / %. If you are working on arrays of bits, use bit twiddling operators like & ^ | >>. Don't mix them; an expression that has both bit twiddling and arithmetic is a bug waiting to happen.

I just compiled this on my machine:

int a = ...;
int b = a * 10;

Disassembling it produces this output:

MOV EAX,DWORD PTR SS:[ESP+1C] ; Move a into EAX
LEA EAX,DWORD PTR DS:[EAX+EAX*4] ; Multiply by 5 without shift !
SHL EAX, 1 ; Multiply by 2 using shift

This version is faster than your hand-optimized code of pure shifting and adding.

You really never know what the compiler is going to come up with, so it's better to simply write a normal multiplication and let it optimize the way it wants to, except in very precise cases where you know the compiler cannot optimize.

It entirely depends on the target device, language, purpose, etc.

Pixel crunching in a video card driver? Very likely, yes!

.NET business application for your department? Absolutely no reason to even look into it.

For a high-performance game on a mobile device it might be worth looking into, but only after easier optimizations have been applied.

In the case of signed integers and right shift vs division, it can make a difference. For negative numbers, the shift rounds towards negative infinity whereas division rounds towards zero. Of course the compiler will change the division to something cheaper, but it will usually change it to something that has the same rounding behavior as division, because it is either unable to prove that the variable won't be negative or it simply doesn't care. So if you can prove that a number won't be negative, or if you don't care which way it will round, you can do that optimization in a way that is more likely to make a difference.

Python test executing the same multiplication 10 million times (N in the code below) against the same random numbers.

>>> from timeit import timeit
>>> setup_str = 'import scipy; from scipy import random; scipy.random.seed(0)'
>>> N = 10*1000*1000
>>> timeit('x=random.randint(65536);', setup=setup_str, number=N)
1.894096851348877 # Time for generating the random #s alone, with no operation

>>> timeit('x=random.randint(65536); x*2', setup=setup_str, number=N)
2.2799630165100098
>>> timeit('x=random.randint(65536); x << 1', setup=setup_str, number=N)
2.2616429328918457

>>> timeit('x=random.randint(65536); x*10', setup=setup_str, number=N)
2.2799630165100098
>>> timeit('x=random.randint(65536); (x << 3) + (x<<1)', setup=setup_str, number=N)
2.9485139846801758

>>> timeit('x=random.randint(65536); x // 2', setup=setup_str, number=N)
2.490908145904541
>>> timeit('x=random.randint(65536); x / 2', setup=setup_str, number=N)
2.4757170677185059
>>> timeit('x=random.randint(65536); x >> 1', setup=setup_str, number=N)
2.2316000461578369

So doing a shift rather than multiplication/division by a power of two in Python yields a slight improvement (~10% for division; ~1% for multiplication). If it's not a power of two, there is likely a considerable slowdown.

Again, these numbers will change depending on your processor and your compiler (or interpreter; this was done in Python for simplicity).

As with everyone else, don't prematurely optimize. Write very readable code, profile if it isn't fast enough, and then try to optimize the slow parts. Remember, your compiler is much better at optimization than you are.

Is multiplying by 10 with, say, (i<<3) + (i<<1), faster than using i*10 directly?

It might or might not be on your machine; if you care, measure in your real-world usage.

A case study: from 486 to Core i7

Benchmarking is very difficult to do meaningfully, but we can look at a few facts. From http://www.penguin.cz/~literakl/intel/s.html#SAL and http://www.penguin.cz/~literakl/intel/i.html#IMUL we get an idea of x86 clock cycles needed for arithmetic shift and multiplication. Say we stick to "486" (the newest one listed), 32 bit registers and immediates, IMUL takes 13-42 cycles and IDIV 44. Each SAL takes 2, and adding 1, so even with a few of those together shifting superficially looks like a winner.

These days, on the Core i7:

(from http://software.intel.com/en-us/forums/showthread.php?t=61481)

The latency is 1 cycle for an integer addition and 3 cycles for an integer multiplication. You can find the latencies and throughput in Appendix C of the "Intel® 64 and IA-32 Architectures Optimization Reference Manual", located at http://www.intel.com/products/processor/manuals/.

(from Intel's marketing material)

Using SSE, the Core i7 can issue simultaneous add and multiply instructions, resulting in a peak rate of 8 floating-point operations (FLOP) per clock cycle.

That gives you an idea of how far things have come. The optimisation trivia - like bit shifting versus * - that was taken seriously even into the 90s is just obsolete now. Bit-shifting is still faster, but for non-power-of-two mul/div by the time you do all your shifts and add the results it's slower again. Then, more instructions means more cache faults, more potential issues in pipelining, more use of temporary registers may mean more saving and restoring of register content from the stack... it quickly gets too complicated to quantify all the impacts definitively, but they're predominantly negative.

Functionality in source code vs. implementation

More generally, your question is tagged C and C++. As 3rd generation languages, they're specifically designed to hide the details of the underlying CPU instruction set. To satisfy their language Standards, they must support multiplication and shifting operations (and many others) even if the underlying hardware doesn't. In such cases, they must synthesize the required result using many other instructions. Similarly, they must provide software support for floating point operations if the CPU lacks it and there's no FPU. Modern CPUs all support * and <<, so this might seem absurdly theoretical and historical, but the significant thing is that the freedom to choose implementation goes both ways: even if the CPU has an instruction that implements the operation requested in the source code in the general case, the compiler's free to choose something else that it prefers because it's better for the specific case the compiler's faced with.

Example (with a hypothetical assembly language)

source           literal approach         optimised approach
#define N 0
int x;           .word x                xor registerA, registerA
x *= N;          move x -> registerA
                 move x -> registerB
                 A = B * immediate(0)
                 store registerA -> x
  ...............do something more with x...............

Instructions like exclusive or (xor) have no relationship to the source code, but xor-ing anything with itself clears all the bits, so it can be used to set something to 0. Source code that implies memory addresses may not entail any being used.

These kind of hacks have been used for as long as computers have been around. In the early days of 3GLs, to secure developer uptake the compiler output had to satisfy the existing hardcore hand-optimising assembly-language dev. community that the produced code wasn't slower, more verbose or otherwise worse. Compilers quickly adopted lots of great optimisations - they became a better centralised store of it than any individual assembly language programmer could possibly be, though there's always the chance that they miss a specific optimisation that happens to be crucial in a specific case - humans can sometimes nut it out and grope for something better while compilers just do as they've been told until someone feeds that experience back into them.

So, even if shifting and adding is still faster on some particular hardware, the compiler writer is likely to have worked out exactly when it is both safe and beneficial.

Maintainability

If your hardware changes you can recompile and it will look at the target CPU and make another best choice, whereas you are unlikely to ever want to revisit your "optimisations" or list which compilation environments should use multiplication and which should shift. Think of all the non-power-of-two bit-shifted "optimisations" written 10+ years ago that are now slowing down the code they're in as it runs on modern processors...!

Thankfully, good optimising compilers like GCC can typically replace a series of bitshifts and arithmetic with a direct multiplication when any optimisation is enabled (i.e. ...main(...) { return (argc << 4) + (argc << 2) + argc; } -> imull $21, 8(%ebp), %eax), so a recompilation may help even without fixing the code, but that is not guaranteed.
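If you want to check this on your own toolchain, that example is easy to reproduce; the exact output will depend on the compiler version, flags and target:

/* times21.c - a sketch: compile with e.g. "gcc -O1 -S times21.c" and look
   for an imull $21 (or an equivalent lea sequence) in place of the shifts. */
int main(int argc, char *argv[])
{
    return (argc << 4) + (argc << 2) + argc;  /* argc*16 + argc*4 + argc == argc*21 */
}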

Crucially, weird bitshifting code that implements a multiplication or division is far less expressive of what you were conceptually trying to achieve, so other developers will be confused by it, and a confused programmer is more likely to introduce bugs or remove something essential in an effort to restore seeming sanity. If you only do non-obvious things when they are really tangibly beneficial, and then document them well (but don't document other stuff that is intuitive anyway), everyone will be happier.

General solutions versus partial solutions

If you have some extra knowledge, such as that your int will really only be storing the values x, y and z, then you may be able to work out some instructions that work for those values and get your result more quickly than when the compiler has no such insight and needs an implementation that works for all int values. For example, consider your question:

multiplication and division can be achieved using bit operators...

You illustrate multiplication, but how about division?

int x;
x >> 1;   // divide by 2?

According to the C++ Standard, 5.8:

-3- The value of E1 >> E2 is E1 right-shifted E2 bit positions. If E1 has an unsigned type or if E1 has a signed type and a non-negative value, the value of the result is the integral part of the quotient of E1 divided by the quantity 2 raised to the power E2. If E1 has a signed type and a negative value, the resulting value is implementation-defined.

So your bit shift has an implementation-defined result when x is negative: it may not work the same way on different machines. But / works much more predictably. (It may not be perfectly consistent either, as different machines may have different representations of negative numbers, and hence different ranges even when there are the same number of bits making up the representation.)
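For what it's worth, when a compiler does lower a signed division by a power of two to a shift, it inserts a small bias first so the rounding matches /. A rough sketch of that shape (it assumes a 32-bit int and that >> of a negative value is an arithmetic shift on your platform, neither of which the Standard guarantees):

/* Roughly what compilers emit for x / 4 with a signed x: add 3 only when
   x is negative, then shift, so the result rounds toward zero like /. */
int div4(int x)
{
    int bias = (x >> 31) & 3;   /* 3 if x < 0, else 0 (assumes arithmetic >>) */
    return (x + bias) >> 2;
}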

You may say "I don't care... that int is storing the age of the employee, it can never be negative". If you have that kind of special insight, then yes - your >> safe optimisation might be passed over by the compiler unless you explicitly do it in your code. But, it's risky and rarely useful as much of the time you won't have this kind of insight, and other programmers working on the same code won't know that you've bet the house on some unusual expectations of the data you'll be handling... what seems a totally safe change to them might backfire because of your "optimisation".

Is there any sort of input that can't be multiplied or divided in this way?

Yes... as mentioned above, negative numbers have implementation-defined behaviour when "divided" by bit-shifting.

I agree with Drew Hall's marked answer. The answer could use some additional notes, though.

For the vast majority of software developers, the processor and compiler are no longer relevant to the question. Most of us are far beyond the 8088 and MS-DOS. It is perhaps only relevant for those who are still developing for embedded processors...

At my software company, Math (add/sub/mul/div) should be used for all mathematics. Shift should be used when converting between data types, e.g. ushort to byte as n >> 8, not n / 256.
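A trivial sketch of that convention (the function name is mine):

/* Type conversion: the shift reads as "take the upper byte", so >> is the
   natural spelling here, not a division by 256. */
unsigned char high_byte(unsigned short n)
{
    return (unsigned char)(n >> 8);
}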

There are optimizations the compiler cannot perform because they only work for a reduced set of inputs.

Below is C++ sample code that can do faster division by using a 64-bit "multiplication by the reciprocal". Both the numerator and the denominator must be below a certain threshold. Note that it must be compiled to use 64-bit instructions to actually be faster than normal division.

#include <stdio.h>
#include <chrono>

static const unsigned s_bc = 32;
static const unsigned long long s_p = 1ULL << s_bc;
static const unsigned long long s_hp = s_p / 2;

static unsigned long long s_f;
static unsigned long long s_fr;

static void fastDivInitialize(const unsigned d)
{
    s_f = s_p / d;
    s_fr = s_f * (s_p - (s_f * d));
}

static unsigned fastDiv(const unsigned n)
{
    return (s_f * n + ((s_fr * n + s_hp) >> s_bc)) >> s_bc;
}

static bool fastDivCheck(const unsigned n, const unsigned d)
{
    // 32 to 64 cycles latency on modern cpus
    const unsigned expected = n / d;

    // At least 10 cycles latency on modern cpus
    const unsigned result = fastDiv(n);

    if (result != expected)
    {
        printf("Failed for: %u/%u != %u\n", n, d, expected);
        return false;
    }

    return true;
}

int main()
{
    unsigned result = 0;

    // Make sure to verify it works for your expected set of inputs
    const unsigned MAX_N = 65535;
    const unsigned MAX_D = 40000;

    const double ONE_SECOND_COUNT = 1000000000.0;

    auto t0 = std::chrono::steady_clock::now();
    unsigned count = 0;
    printf("Verifying...\n");
    for (unsigned d = 1; d <= MAX_D; ++d)
    {
        fastDivInitialize(d);
        for (unsigned n = 0; n <= MAX_N; ++n)
        {
            count += !fastDivCheck(n, d);
        }
    }
    auto t1 = std::chrono::steady_clock::now();
    printf("Errors: %u / %u (%.4fs)\n", count, MAX_D * (MAX_N + 1), (t1 - t0).count() / ONE_SECOND_COUNT);

    t0 = t1;
    for (unsigned d = 1; d <= MAX_D; ++d)
    {
        fastDivInitialize(d);
        for (unsigned n = 0; n <= MAX_N; ++n)
        {
            result += fastDiv(n);
        }
    }
    t1 = std::chrono::steady_clock::now();
    printf("Fast division time: %.4fs\n", (t1 - t0).count() / ONE_SECOND_COUNT);

    t0 = t1;
    count = 0;
    for (unsigned d = 1; d <= MAX_D; ++d)
    {
        for (unsigned n = 0; n <= MAX_N; ++n)
        {
            result += n / d;
        }
    }
    t1 = std::chrono::steady_clock::now();
    printf("Normal division time: %.4fs\n", (t1 - t0).count() / ONE_SECOND_COUNT);

    getchar();
    return result;
}

I think in the one case that you want to multiply or divide by a power of two, you can't go wrong with using bitshift operators, even if the compiler converts them to a MUL/DIV, because some processors microcode (really, a macro) them anyway, so for those cases you will achieve an improvement, especially if the shift is more than 1. Or more explicitly, if the CPU has no bitshift operators, it will be a MUL/DIV anyway, but if the CPU has bitshift operators, you avoid a microcode branch and this is a few instructions less.

I am writing some code right now that requires a lot of doubling/halving operations because it is working on a dense binary tree, and there is one more operation that I suspect might be more optimal than an addition - a left (power of two multiply) shift with an addition. This can be replaced with a left shift and an xor if the shift is wider than the number of bits you want to add, easy example is (i<<1)^1, which adds one to a doubled value. This does not of course apply to a right shift (power of two divide) because only a left (little endian) shift fills the gap with zeros.
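A small sketch of that shift-plus-xor trick, with hypothetical child-index helpers for a dense binary tree (the names are mine, not from any particular library):

/* The low bit of (i << 1) is always 0, so XOR-ing 1 sets it: the same
   result as 2*i + 1, with the addition replaced by an xor. */
static int left_child(int i)  { return (i << 1) ^ 1; }  /* odd index  */
static int right_child(int i) { return  i << 1;      }  /* even index */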

In my code these multiply/divide-by-two and power-of-two operations are used intensively, and because the formulae are already quite short, each instruction that can be eliminated can be a substantial gain. If the processor does not support these bitshift operators, no gain will happen, but neither will there be a loss.

Also, in the algorithms I am writing, they visually represent the movements that occur, so in that sense they are in fact clearer. The left hand side of a binary tree is bigger, and the right is smaller. As well as that, in my code, odd and even numbers have a special significance: all left-hand children in the tree are odd, and all right-hand children, and the root, are even. In some cases, which I haven't encountered yet but may (actually, I hadn't even thought of this), x & 1 may be a more optimal operation compared to x % 2. x & 1 on an even number will produce zero, but will produce 1 for an odd number.

Going a bit deeper, if x & 3 is 0 I know that 4 is a factor of the number, x & 7 for 8, and so on. I know these cases probably have limited utility, but it's nice to know that you can avoid a modulo operation and use a bitwise logic operation instead, because bitwise operations are almost always the fastest, and least likely to be ambiguous to the compiler.
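A couple of those mask tests as a sketch (written for unsigned values; for negative signed values % and & differ):

/* Divisibility by a power of two via a mask instead of the modulo operator. */
int is_odd(unsigned x)         { return (x & 1u) != 0; }  /* same truth value as x % 2 */
int divisible_by_4(unsigned x) { return (x & 3u) == 0; }  /* same as x % 4 == 0 */
int divisible_by_8(unsigned x) { return (x & 7u) == 0; }  /* same as x % 8 == 0 */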

I am pretty much inventing the field of dense binary trees, so I expect people may not grasp the value of this comment, as only very rarely do people want to perform factorizations on only powers of two, or multiply/divide only by powers of two.

Whether it is actually faster depends on the hardware and compiler actually used.

If you compare the output for the x+x, x*2 and x<<1 syntaxes on a gcc compiler, you get the same result in x86 assembly: https://godbolt.org/z/JLpp0j

        push    rbp
        mov     rbp, rsp
        mov     DWORD PTR [rbp-4], edi
        mov     eax, DWORD PTR [rbp-4]
        add     eax, eax
        pop     rbp
        ret

So you can consider gcc smart enough to determine its own best solution independently of what you typed.

I too wanted to see if I could beat the house. This is a more general bitwise multiplication of any number by any number. The macros I made are about 25% slower to twice as slow as normal * multiplication. As said by others, if it's close to a multiple of 2 or is made up of few multiples of 2, you might win. For example, X*23, composed of (X<<4)+(X<<2)+(X<<1)+X, is slower than X*65, composed of (X<<6)+X.

#include <stdio.h>
#include <stdlib.h>  /* for rand, srand */
#include <time.h>

#define MULTIPLYINTBYMINUS(X,Y) (-((X >> 30) & 1)&(Y<<30))+(-((X >> 29) & 1)&(Y<<29))+(-((X >> 28) & 1)&(Y<<28))+(-((X >> 27) & 1)&(Y<<27))+(-((X >> 26) & 1)&(Y<<26))+(-((X >> 25) & 1)&(Y<<25))+(-((X >> 24) & 1)&(Y<<24))+(-((X >> 23) & 1)&(Y<<23))+(-((X >> 22) & 1)&(Y<<22))+(-((X >> 21) & 1)&(Y<<21))+(-((X >> 20) & 1)&(Y<<20))+(-((X >> 19) & 1)&(Y<<19))+(-((X >> 18) & 1)&(Y<<18))+(-((X >> 17) & 1)&(Y<<17))+(-((X >> 16) & 1)&(Y<<16))+(-((X >> 15) & 1)&(Y<<15))+(-((X >> 14) & 1)&(Y<<14))+(-((X >> 13) & 1)&(Y<<13))+(-((X >> 12) & 1)&(Y<<12))+(-((X >> 11) & 1)&(Y<<11))+(-((X >> 10) & 1)&(Y<<10))+(-((X >> 9) & 1)&(Y<<9))+(-((X >> 8) & 1)&(Y<<8))+(-((X >> 7) & 1)&(Y<<7))+(-((X >> 6) & 1)&(Y<<6))+(-((X >> 5) & 1)&(Y<<5))+(-((X >> 4) & 1)&(Y<<4))+(-((X >> 3) & 1)&(Y<<3))+(-((X >> 2) & 1)&(Y<<2))+(-((X >> 1) & 1)&(Y<<1))+(-((X >> 0) & 1)&(Y<<0))
#define MULTIPLYINTBYSHIFT(X,Y) (((((X >> 30) & 1)<<31)>>31)&(Y<<30))+(((((X >> 29) & 1)<<31)>>31)&(Y<<29))+(((((X >> 28) & 1)<<31)>>31)&(Y<<28))+(((((X >> 27) & 1)<<31)>>31)&(Y<<27))+(((((X >> 26) & 1)<<31)>>31)&(Y<<26))+(((((X >> 25) & 1)<<31)>>31)&(Y<<25))+(((((X >> 24) & 1)<<31)>>31)&(Y<<24))+(((((X >> 23) & 1)<<31)>>31)&(Y<<23))+(((((X >> 22) & 1)<<31)>>31)&(Y<<22))+(((((X >> 21) & 1)<<31)>>31)&(Y<<21))+(((((X >> 20) & 1)<<31)>>31)&(Y<<20))+(((((X >> 19) & 1)<<31)>>31)&(Y<<19))+(((((X >> 18) & 1)<<31)>>31)&(Y<<18))+(((((X >> 17) & 1)<<31)>>31)&(Y<<17))+(((((X >> 16) & 1)<<31)>>31)&(Y<<16))+(((((X >> 15) & 1)<<31)>>31)&(Y<<15))+(((((X >> 14) & 1)<<31)>>31)&(Y<<14))+(((((X >> 13) & 1)<<31)>>31)&(Y<<13))+(((((X >> 12) & 1)<<31)>>31)&(Y<<12))+(((((X >> 11) & 1)<<31)>>31)&(Y<<11))+(((((X >> 10) & 1)<<31)>>31)&(Y<<10))+(((((X >> 9) & 1)<<31)>>31)&(Y<<9))+(((((X >> 8) & 1)<<31)>>31)&(Y<<8))+(((((X >> 7) & 1)<<31)>>31)&(Y<<7))+(((((X >> 6) & 1)<<31)>>31)&(Y<<6))+(((((X >> 5) & 1)<<31)>>31)&(Y<<5))+(((((X >> 4) & 1)<<31)>>31)&(Y<<4))+(((((X >> 3) & 1)<<31)>>31)&(Y<<3))+(((((X >> 2) & 1)<<31)>>31)&(Y<<2))+(((((X >> 1) & 1)<<31)>>31)&(Y<<1))+(((((X >> 0) & 1)<<31)>>31)&(Y<<0))
int main()
{
    int randomnumber=23;
    int randomnumber2=23;
    int checknum=23;
    clock_t start, diff;
    srand(time(0));
    start = clock();
    for(int i=0;i<1000000;i++)
    {
        randomnumber = rand() % 10000;
        randomnumber2 = rand() % 10000;
        checknum=MULTIPLYINTBYMINUS(randomnumber,randomnumber2);
        if (checknum!=randomnumber*randomnumber2)
        {
            printf("s %i and %i and %i",checknum,randomnumber,randomnumber2);
        }
    }
    diff = clock() - start;
    int msec = diff * 1000 / CLOCKS_PER_SEC;
    printf("MULTIPLYINTBYMINUS Time %d milliseconds", msec);
    start = clock();
    for(int i=0;i<1000000;i++)
    {
        randomnumber = rand() % 10000;
        randomnumber2 = rand() % 10000;
        checknum=MULTIPLYINTBYSHIFT(randomnumber,randomnumber2);
        if (checknum!=randomnumber*randomnumber2)
        {
            printf("s %i and %i and %i",checknum,randomnumber,randomnumber2);
        }
    }
    diff = clock() - start;
    msec = diff * 1000 / CLOCKS_PER_SEC;
    printf("MULTIPLYINTBYSHIFT Time %d milliseconds", msec);
    start = clock();
    for(int i=0;i<1000000;i++)
    {
        randomnumber = rand() % 10000;
        randomnumber2 = rand() % 10000;
        checknum= randomnumber*randomnumber2;
        if (checknum!=randomnumber*randomnumber2)
        {
            printf("s %i and %i and %i",checknum,randomnumber,randomnumber2);
        }
    }
    diff = clock() - start;
    msec = diff * 1000 / CLOCKS_PER_SEC;
    printf("normal * Time %d milliseconds", msec);
    return 0;
}