I was asked this question in an interview. Both are O(n log n), yet most people use quicksort instead of merge sort. Why is that?
Current answer
As many people have noted, the average-case performance of quicksort is faster than merge sort's. But this is only true under the assumption that accessing any piece of memory on demand takes constant time.
In RAM that assumption is generally not too bad (it is not always true because of caches, but it is not too bad). However, if your data structure is big enough that it has to live on disk, then quicksort gets killed by the fact that your average disk does something like 200 random seeks per second. But that same disk has no trouble reading or writing megabytes per second of data sequentially. Which is exactly what merge sort does.
Therefore, if data has to be sorted on disk, you really, really want to use some variation of merge sort. (Generally you quicksort sublists, then start merging them together above some size threshold.)
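To make that recipe concrete, here is a minimal external merge sort sketch in C++: sort chunks that fit in RAM with std::sort, write each sorted run to a temporary file, then merge the runs with purely sequential reads. The file names and the chunk size are placeholders; a real implementation would tune the run size, buffering, and merge fan-in.

```cpp
// Minimal sketch of external merge sort: in-RAM sort of chunks, then a
// sequential k-way merge of the runs. File names are for illustration only.
#include <algorithm>
#include <cstdio>
#include <fstream>
#include <queue>
#include <string>
#include <vector>

// Phase 1: split "input" into sorted runs of at most chunkSize integers.
std::vector<std::string> makeRuns(const std::string& input, std::size_t chunkSize) {
    std::ifstream in(input);
    std::vector<std::string> runs;
    std::vector<long long> buf;
    long long x;
    auto flush = [&]() {
        if (buf.empty()) return;
        std::sort(buf.begin(), buf.end());          // in-RAM sort of one chunk
        std::string name = "run" + std::to_string(runs.size()) + ".tmp";
        std::ofstream out(name);
        for (long long v : buf) out << v << '\n';   // sequential write
        runs.push_back(name);
        buf.clear();
    };
    while (in >> x) {
        buf.push_back(x);
        if (buf.size() == chunkSize) flush();
    }
    flush();
    return runs;
}

// Phase 2: k-way merge of the runs; every file is read front to back.
void mergeRuns(const std::vector<std::string>& runs, const std::string& output) {
    std::vector<std::ifstream> in;
    for (const auto& r : runs) in.emplace_back(r);
    using Item = std::pair<long long, std::size_t>;  // (value, which run)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> heap;
    long long v;
    for (std::size_t i = 0; i < in.size(); ++i)
        if (in[i] >> v) heap.push({v, i});
    std::ofstream out(output);
    while (!heap.empty()) {
        auto [val, i] = heap.top();
        heap.pop();
        out << val << '\n';
        if (in[i] >> v) heap.push({v, i});           // refill from the same run
    }
    for (const auto& r : runs) std::remove(r.c_str()); // clean up temp files
}

int main() {
    auto runs = makeRuns("input.txt", 1000000);      // ~1M numbers per run
    mergeRuns(runs, "sorted.txt");
}
```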
Furthermore, if you have to do anything with datasets of that size, think hard about how to avoid seeks to disk. For instance, this is why it is standard advice that you drop indexes before doing large data loads in databases, and then rebuild the indexes later. Maintaining the index during the load means constantly seeking to disk. By contrast, if you drop the indexes, then the database can rebuild each index by first sorting the information to be dealt with (using a merge sort, of course!) and then loading it into a B-tree data structure for the index. (B-trees are naturally kept in order, so you can load one from a sorted dataset with few seeks to disk.)
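The bulk-load point can be illustrated with a much simpler stand-in for a B-tree: because the keys already arrive sorted, an ordered index structure can be built in a single pass over them, with no per-key random lookups. A toy sketch, using a balanced binary search tree rather than a real on-disk B-tree:

```cpp
// Illustrative sketch: a balanced binary search tree (standing in for a
// B-tree index) built directly from already-sorted keys, one recursive
// pass, no per-key search-and-insert.
#include <cstddef>
#include <memory>
#include <vector>

struct Node {
    int key;
    std::unique_ptr<Node> left, right;
    explicit Node(int k) : key(k) {}
};

// Pick the middle element as the root; both halves are already sorted,
// so every subtree is built the same way.
std::unique_ptr<Node> buildFromSorted(const std::vector<int>& keys,
                                      std::size_t lo, std::size_t hi) {
    if (lo >= hi) return nullptr;
    std::size_t mid = lo + (hi - lo) / 2;
    auto node = std::make_unique<Node>(keys[mid]);
    node->left  = buildFromSorted(keys, lo, mid);
    node->right = buildFromSorted(keys, mid + 1, hi);
    return node;
}

int main() {
    std::vector<int> keys = {1, 3, 4, 7, 9, 12, 15};    // already merge-sorted
    auto index = buildFromSorted(keys, 0, keys.size()); // balanced "index"
}
```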
On many occasions, knowing how to avoid disk seeks has let me make data-processing jobs take hours rather than days or weeks.
Other answers
Actually, quicksort is O(n²). Its average-case running time is O(n log n), but its worst case is O(n²), which occurs when you run it on a list that contains few unique items. Randomization takes O(n). Of course, this doesn't change its worst case; it just prevents a malicious user from making your sort take a long time.
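A minimal sketch of what that randomization looks like: choose the pivot uniformly at random, so an adversary cannot construct a killer input in advance. (An input with many duplicate keys can still hurt this simple partition; a three-way partition is the usual fix for that case.)

```cpp
// Quicksort with a randomly chosen pivot: the random choice makes it
// practically impossible to force the O(n^2) worst case on purpose.
#include <random>
#include <utility>
#include <vector>

std::mt19937 rng{std::random_device{}()};

void quicksort(std::vector<int>& a, int lo, int hi) {   // sorts a[lo..hi]
    if (lo >= hi) return;
    std::uniform_int_distribution<int> pick(lo, hi);
    std::swap(a[pick(rng)], a[hi]);                      // random pivot to the end
    int pivot = a[hi];
    int i = lo;                                          // Lomuto-style partition
    for (int j = lo; j < hi; ++j)
        if (a[j] < pivot) std::swap(a[i++], a[j]);
    std::swap(a[i], a[hi]);                              // pivot into final place
    quicksort(a, lo, i - 1);
    quicksort(a, i + 1, hi);
}

int main() {
    std::vector<int> v = {5, 2, 9, 1, 5, 6};
    quicksort(v, 0, static_cast<int>(v.size()) - 1);     // v is now sorted
}
```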
Quicksort is more popular because it:
1. sorts in place (merge sort requires extra memory, linear in the number of elements to be sorted);
2. has a small hidden constant.
In the C/C++ land, when not using STL containers, I tend to use quicksort, because it is built into the runtime, while merge sort is not.
So I believe that in many cases, it is simply the path of least resistance.
In addition, performance can be much higher for quicksort in cases where the entire dataset does not fit into the working set.
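For reference, "built into the runtime" means the C standard library's qsort (usually, though not necessarily, implemented as a quicksort), which you can call without any STL containers; when the STL is available, std::sort is the usual choice instead. A small usage sketch:

```cpp
// Sorting a plain array with the runtime's qsort, no STL containers needed.
#include <cstdio>
#include <cstdlib>

static int compareInts(const void* a, const void* b) {
    int x = *static_cast<const int*>(a);
    int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);   // avoids the overflow of a naive x - y
}

int main() {
    int data[] = {42, 7, 19, 3, 25};
    std::qsort(data, 5, sizeof(int), compareInts);
    for (int v : data) std::printf("%d ", v);   // 3 7 19 25 42
    std::printf("\n");
}
```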
A small addition to the quicksort vs. merge sort comparison.
It can also depend on the kind of items being sorted. If accessing items, swapping them, and comparing them are not simple operations, like comparing integers sitting in plain memory, then merge sort can be the preferable algorithm.
For example, we might be sorting items using a network protocol on a remote server.
Also, in custom containers like a linked list, quicksort has no advantage:
1. Merge sort on a linked list doesn't need additional memory (see the sketch after this list).
2. Access to elements in quicksort is not sequential (in memory).
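A sketch of point 1, assuming a hand-rolled singly linked list: both the split and the merge only re-link existing nodes, so no buffer proportional to n is allocated (the recursion itself still uses O(log n) stack).

```cpp
// Merge sort on a singly linked list: nodes are re-linked in place,
// no extra array or buffer is allocated.
struct Node {
    int value;
    Node* next;
};

// Split the list into two halves with slow/fast pointers; returns the second half.
static Node* splitHalf(Node* head) {
    Node* slow = head;
    Node* fast = head->next;
    while (fast && fast->next) {
        slow = slow->next;
        fast = fast->next->next;
    }
    Node* second = slow->next;
    slow->next = nullptr;
    return second;
}

// Merge two sorted lists by re-linking nodes; no allocation at all.
static Node* merge(Node* a, Node* b) {
    Node dummy{0, nullptr};
    Node* tail = &dummy;
    while (a && b) {
        Node*& smaller = (a->value <= b->value) ? a : b;
        tail->next = smaller;
        tail = smaller;
        smaller = smaller->next;
    }
    tail->next = a ? a : b;
    return dummy.next;
}

Node* mergeSort(Node* head) {
    if (!head || !head->next) return head;
    Node* second = splitHalf(head);
    return merge(mergeSort(head), mergeSort(second));
}

int main() {
    Node c{1, nullptr}, b{3, &c}, a{2, &b};   // list: 2 -> 3 -> 1
    Node* sorted = mergeSort(&a);             // list: 1 -> 2 -> 3
    (void)sorted;
}
```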
I would like to add that of the three algorithms mentioned so far (merge sort, quicksort and heap sort), only merge sort is stable. That is, the relative order does not change for values that have the same key. In some cases this is desirable.
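To see what stability means in practice, here is a small sketch using the standard library: std::stable_sort (a merge-sort variant) keeps tied records in their original order, while std::sort gives no such guarantee. The Order record and its values are invented for illustration.

```cpp
// Stability demo: records that compare equal keep their original relative
// order under std::stable_sort; std::sort does not guarantee this.
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

struct Order {
    std::string customer;
    int amount;
};

int main() {
    std::vector<Order> orders = {
        {"alice", 30}, {"bob", 10}, {"carol", 10}, {"dave", 20},
    };
    // Sort by amount only: bob and carol tie on 10.
    std::stable_sort(orders.begin(), orders.end(),
                     [](const Order& a, const Order& b) { return a.amount < b.amount; });
    // bob is guaranteed to stay before carol; with std::sort it might not.
    for (const auto& o : orders)
        std::printf("%s %d\n", o.customer.c_str(), o.amount);
}
```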
But, truth be told, in practical situations most people just need good average performance, and quicksort is... quick =)
All sorting algorithms have their ups and downs. See the Wikipedia article on sorting algorithms for an overview.
One of the reasons is more philosophical. Quicksort follows a top-down philosophy. With n elements to sort, there are n! possible orderings. With two mutually exclusive partitions of m and n-m elements, the number of possibilities goes down by several orders of magnitude: m! * (n-m)! is smaller than n! alone by several orders. Imagine 5! vs. 3! * 2!: 5! has 10 times more possibilities than the two partitions of 2 and 3, and the gap only grows as you extrapolate to a million elements (1,000,000! vs. 900,000! * 100,000!). So instead of worrying about establishing any order within a range or a partition, just establish order at a broader level across partitions and reduce the possibilities within each partition. Any order established earlier within a range will be disturbed later if the partitions themselves are not mutually exclusive.
Any bottom-up approach, like merge sort or heap sort, is like a worker's or employee's approach, where one starts comparing at a micro level early on. But that order is bound to be lost as soon as an element is later found in between them. These approaches are very stable and extremely predictable, but they do a certain amount of extra work.
Quicksort is like a managerial approach, where one is not initially concerned with any exact order, only with meeting a broad criterion. Then the partitions are narrowed until you end up with a sorted set. The real challenge in quicksort is finding a partition or criterion in the dark, when you know nothing about the elements to sort. That is why we either need to spend some effort to find a median value, or pick one at random, or take some arbitrary "managerial" approach. Finding a perfect median can take a significant amount of effort and leads back to a tedious bottom-up approach. So quicksort says: just pick a random pivot and hope it lands somewhere near the middle, or do some work to find the median of 3, 5 or a few more elements for a better pivot, but don't plan to be perfect and don't waste any time ordering things up front. That works out well if you are lucky, and sometimes degrades to n² when you don't get a good median but just take a chance. Anyway, the data is random, right? So I agree more with the top-down logical approach of quicksort, and it turns out that the chance it takes on pivot selection, and the comparisons it saves early on, seem to work out better more often than any meticulous and thorough stable bottom-up approach like merge sort.