For general protocol message exchange, where some packet loss is tolerable: how much more efficient is UDP than TCP?
Current answer
When talking about "what is faster", there are at least two very different aspects: throughput and latency.
If we're talking about throughput - TCP's flow control (mentioned in other answers) is extremely important, and doing anything comparable over UDP, while certainly possible, would be a Big Headache(tm). As a result, using UDP when you need throughput is rarely considered a good idea (unless you want to get an unfair advantage over TCP).
However, if we're talking about latency, the whole thing is very different. In the absence of packet loss, TCP and UDP behave very similarly (any differences, if any, are marginal) - but after a packet is lost, the whole picture changes drastically.
After any packet loss, TCP will wait for a retransmit for at least 200ms (1 sec per paragraph 2.4 of RFC 6298, but practical modern implementations tend to reduce it to 200ms). Moreover, with TCP, even those packets which did reach the destination host will not be delivered to your app until the missing packet is received (i.e., the whole communication is delayed by ~200ms) - BTW, this effect, known as Head-of-Line Blocking, is inherent to all reliable ordered streams, whether TCP or reliable+ordered UDP. To make things even worse - if the retransmitted packet is also lost, then we'll be speaking about a delay of ~600ms (due to so-called exponential backoff, the 1st retransmit comes after 200ms, and the second one after 200*2=400ms). If our channel has 1% packet loss (which is not bad by today's standards), and we have a game with 20 updates per second - such 600ms delays will occur on average every 8 minutes. And as 600ms is more than enough to get you killed in a fast-paced game - well, it is pretty bad for gameplay. These effects are exactly why gamedevs often prefer UDP over TCP.
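To make the arithmetic above concrete, here is a small back-of-the-envelope sketch in plain Python, using the same assumed numbers (200ms initial RTO, 1% loss, 20 updates per second); the constants are illustrative, not measurements.

```python
# Back-of-the-envelope numbers for the double-loss scenario described above.
RTO_MS = 200          # assumed initial TCP retransmission timeout
LOSS = 0.01           # assumed 1% packet loss
UPDATES_PER_SEC = 20  # game sends 20 updates per second

# Delay when the original packet AND its first retransmit are both lost:
# wait RTO for the 1st retransmit, then 2*RTO for the 2nd (exponential backoff).
double_loss_delay_ms = RTO_MS + 2 * RTO_MS          # = 600 ms

# Probability that a given update hits this double loss, and the expected
# time between such 600 ms stalls at 20 updates per second.
p_double = LOSS * LOSS                               # = 1e-4
seconds_between_stalls = 1 / (p_double * UPDATES_PER_SEC)

print(double_loss_delay_ms, "ms stall roughly every",
      seconds_between_stalls / 60, "minutes")        # ~600 ms every ~8.3 min
```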
However, when using UDP to reduce latencies - it is important to realize that merely "using UDP" is not sufficient to get substantial latency improvement, it is all about HOW you're using UDP. In particular, while RUDP libraries usually avoid that "exponential backoff" and use shorter retransmit times - if they are used as a "reliable ordered" stream, they still have to suffer from Head-of-Line Blocking (so in case of a double packet loss, instead of that 600ms we'll get about 1.5*2*RTT - or for a pretty good 80ms RTT, it is a ~250ms delay, which is an improvement, but it is still possible to do better). On the other hand, if using techniques discussed in http://gafferongames.com/networked-physics/snapshot-compression/ and/or http://ithare.com/udp-from-mog-perspective/#low-latency-compression , it IS possible to eliminate Head-of-Line blocking entirely (so for a double-packet loss for a game with 20 updates/second, the delay will be 100ms regardless of RTT).
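As a rough illustration of the "no Head-of-Line Blocking" approach referenced above - each datagram carries the full latest state plus a sequence number, so a lost packet is simply superseded by the next one instead of being retransmitted - here is a minimal sketch; the packet format, helper names, and the idea of draining to the newest snapshot are made up for the example and are not taken verbatim from the linked articles.

```python
import socket
import struct

# Hypothetical wire format: 4-byte sequence number + state blob.
HEADER = struct.Struct("!I")

def send_state(sock, addr, seq, state_blob):
    # Every update carries the complete current state - nothing to retransmit.
    sock.sendto(HEADER.pack(seq) + state_blob, addr)

def receive_latest(sock, last_seq):
    """Drain pending datagrams, keeping only the newest state."""
    sock.setblocking(False)
    latest = None
    while True:
        try:
            packet, _ = sock.recvfrom(65535)
        except BlockingIOError:
            break
        seq = HEADER.unpack_from(packet)[0]
        if seq > last_seq:              # stale/out-of-order packets are ignored
            last_seq, latest = seq, packet[HEADER.size:]
    return last_seq, latest
```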
BTW - if you happen to have access only to TCP and not UDP (such as in a browser, or if your client sits behind one of the 6-9% of ugly firewalls that block UDP) - there seems to be a way to implement UDP-over-TCP without incurring too much latency, see here: http://ithare.com/almost-zero-additional-latency-udp-over-tcp/ (also make sure to read the comments(!)).
Other answers
UDP is faster than TCP for the simple reason that there are no acknowledgement packets (ACKs), which permits a continuous stream of packets, whereas TCP acknowledges sets of packets, calculated using the TCP window size and round-trip time (RTT).
For more information, I recommend the simple but very understandable Skullbox explanation (TCP vs. UDP).
Keep in mind that TCP usually keeps multiple messages in flight on the wire. If you want to do this in UDP, and do it reliably, you will have quite a bit of work ahead of you. Your solution will either be less reliable, slower, or a huge amount of work. There are valid applications of UDP, but if you're asking this question, yours is probably not one of them.
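As a minimal illustration of that fire-and-forget model (no handshake, no application-visible ACKs), here is a tiny sketch; the peer address and payload are placeholders.

```python
import socket

# Fire-and-forget: no connection setup and no waiting for acknowledgements.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"player position update", ("203.0.113.10", 9999))  # placeholder peer
sock.close()
```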
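To give a feel for how much of TCP you end up re-implementing, here is a deliberately naive stop-and-wait retransmit loop over UDP - a sketch with made-up timeout and retry values; real reliability also needs sliding windows, congestion control, connection state, and more.

```python
import socket
import struct

SEQ = struct.Struct("!I")

def reliable_send(sock, addr, seq, payload, timeout=0.2, retries=5):
    """Naive stop-and-wait: resend until the peer echoes the sequence number."""
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(SEQ.pack(seq) + payload, addr)
        try:
            ack, _ = sock.recvfrom(64)
        except socket.timeout:
            continue                      # lost data or lost ACK - try again
        if SEQ.unpack_from(ack)[0] == seq:
            return True
    return False                          # give up after `retries` attempts
```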
Some work has been done to allow programmers to have the benefits of both worlds.
SCTP
It is an independent transport-layer protocol, but it can also be used as a library providing an additional layer on top of UDP. The basic unit of communication is a message (mapped to one or more UDP packets). There is built-in congestion control. The protocol has knobs and twiddles to switch on:
- in-order delivery of messages
- automatic retransmission of lost messages, with user-defined parameters
if any of this is needed for your particular application.
One issue with this is that connection establishment is a complicated (and therefore slow) process.
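For a flavor of the message-oriented API, here is a minimal client sketch. It assumes a Linux host with kernel SCTP support (socket.IPPROTO_SCTP is only defined where the platform provides it) and a hypothetical server at 198.51.100.7:9000; it does not show the SCTP-specific socket options (e.g. unordered delivery or partial reliability), which the standard library does not wrap - third-party bindings such as pysctp expose those knobs.

```python
import socket

# Assumes Linux with kernel SCTP support (lksctp); not available everywhere.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
sock.connect(("198.51.100.7", 9000))     # hypothetical SCTP server

# Unlike TCP, SCTP is message-oriented: each send() below is delivered to the
# peer as a discrete message, not as part of an undifferentiated byte stream.
sock.send(b"first message")
sock.send(b"second message")
sock.close()
```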
Other similar things:
https://en.wikipedia.org/wiki/Reliable_User_Datagram_Protocol
There is also a similar proprietary experiment:
https://en.wikipedia.org/wiki/QUIC
This also tries to improve on TCP's three-way handshake and change the congestion control to better deal with fast links.
Update 2022: QUIC and HTTP/3
QUIC (mentioned above) has since been standardized through RFCs and has even become the basis of HTTP/3, after the original answer was written. There are various libraries such as lucas-clemente/quic-go, microsoft/msquic, google/quiche, or mozilla/neqo (web browsers need to implement this).
These libraries expose reliable, TCP-like streams to the programmer on top of a UDP transport. RFC 9221 (An Unreliable Datagram Extension to QUIC) adds the ability to work with individual, unreliable packets.
People say the main thing TCP gives you is reliability. But that's not really true. The most important thing TCP gives you is congestion control: you can run 100 TCP connections across a DSL link, all going at maximum speed, and all 100 connections will be productive, because they all "sense" the available bandwidth. Try that with 100 different UDP applications, all pushing packets as fast as they can, and see how well things work out for you.
On a larger scale, this TCP behavior is what keeps the Internet from locking up in "congestion collapse".
Things that tend to push applications towards UDP:
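That "sensing" is, at its core, additive-increase/multiplicative-decrease (AIMD) of a congestion window; the toy function below sketches the idea in isolation. The constants and the loss signal are invented for the illustration, and real TCP stacks add slow start, fast retransmit, RTT estimation, and more.

```python
def aimd_step(cwnd, loss_detected, mss=1):
    """One round-trip of textbook AIMD congestion control (toy model).

    cwnd is the congestion window in MSS-sized segments: grow it by one
    segment per RTT while things go well, halve it when loss signals that
    the path is congested. Competing flows doing this converge towards a
    fair share of the bottleneck, which is why 100 parallel TCP flows can
    coexist on one link.
    """
    if loss_detected:
        return max(mss, cwnd / 2)   # multiplicative decrease
    return cwnd + mss               # additive increase

# Toy usage: a flow ramping up, then backing off after a simulated loss.
cwnd = 1.0
for lost in [False] * 10 + [True] + [False] * 5:
    cwnd = aimd_step(cwnd, lost)
```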
- Group delivery semantics: it's possible to do reliable delivery to a group of people much more efficiently than TCP's point-to-point acknowledgement.
- Out-of-order delivery: in lots of applications, as long as you get all the data, you don't care what order it arrives in; you can reduce app-level latency by accepting an out-of-order block.
- Unfriendliness: on a LAN party, you may not care if your web browser functions nicely as long as you're blitting updates to the network as fast as you possibly can.
But even if you care about performance, you probably don't want to go with UDP:
- You're now on the hook for reliability, and a lot of the things you might do to implement reliability can end up being slower than what TCP already does.
- You're now network-unfriendly, which can cause problems in shared environments.
- Most importantly, firewalls will block you.
You can potentially overcome some of the TCP performance and latency issues by "trunking" multiple TCP connections together; iSCSI does this to get around congestion control on local area networks, but you can also do it to create a low-latency "urgent" message channel (TCP's "URGENT" behavior is totally broken).
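A very rough sketch of that trunking idea: keep a small pool of TCP connections to the same peer and reserve one of them (with Nagle disabled) for urgent messages, so they are not queued behind bulk data. The peer address, pool size, and helper names are placeholders invented for the example.

```python
import itertools
import socket

PEER = ("192.0.2.20", 7000)   # placeholder peer address

def open_conn(nodelay=False):
    s = socket.create_connection(PEER)
    if nodelay:
        # Disable Nagle so small urgent messages go out immediately.
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return s

# Bulk data is spread round-robin over several connections, so one congested
# connection does not stall everything; urgent messages get their own channel.
bulk_conns = [open_conn() for _ in range(3)]
bulk_cycle = itertools.cycle(bulk_conns)
urgent_conn = open_conn(nodelay=True)

def send_bulk(chunk: bytes) -> None:
    next(bulk_cycle).sendall(chunk)

def send_urgent(msg: bytes) -> None:
    urgent_conn.sendall(msg)   # never queued behind bulk transfers
```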