As much as I love C and C++, I can't help but scratch my head at the choice of null-terminated strings:

- Length prefixed (i.e. Pascal) strings existed before C.
- Length prefixed strings make several algorithms faster by allowing constant time length lookup.
- Length prefixed strings make it more difficult to cause buffer overrun errors.
- Even on a 32 bit machine, if you allow the string to be the size of available memory, a length prefixed string is only three bytes wider than a null terminated string. On 16 bit machines this is a single byte. On 64 bit machines, 4GB is a reasonable string length limit, but even if you want to expand it to the size of the machine word, 64 bit machines usually have ample memory, making the extra seven bytes sort of a null argument. I know the original C standard was written for insanely poor machines (in terms of memory), but the efficiency argument doesn't sell me here.
- Pretty much every other language (i.e. Perl, Pascal, Python, Java, C#, etc.) uses length prefixed strings. These languages usually beat C in string manipulation benchmarks because they are more efficient with strings.
- C++ rectified this a bit with the std::basic_string template, but plain character arrays expecting null terminated strings are still pervasive. This is also imperfect because it requires heap allocation.
- Null terminated strings have to reserve a character (namely, null), which cannot exist in the string, while length prefixed strings can contain embedded nulls.
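To illustrate that last point, a minimal, self-contained sketch of how the standard library loses everything past the first embedded null (the bytes used are arbitrary):

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* The array holds "foo", an embedded NUL, then "bar", plus the final
       terminator: 8 bytes in total. strlen() stops at the first NUL, so
       the "string" appears to be only three characters long. */
    char data[] = "foo\0bar";
    printf("sizeof: %zu\n", sizeof data);    /* prints 8 */
    printf("strlen: %zu\n", strlen(data));   /* prints 3 */
    return 0;
}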

Some of these points came along later than C, so it makes sense that C would not have known about them. However, several were plainly established well before C came to be. Why were null-terminated strings chosen instead of the obviously better length prefix?

Edit: Since a few people have asked for facts about the efficiency points I mentioned above (they didn't like the ones I had already provided), they stem from the following:

- Concat with null-terminated strings requires O(n + m) time. With length prefixing it often requires only O(m).
- Length with null-terminated strings requires O(n) time. With length prefixing it is O(1) (sketched below).
- Length and concat are by far the most common string operations. There are some cases where null-terminated strings are more efficient, but those come up much less often.
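As a rough illustration of the length point, strlen has to scan for the terminator, while a hypothetical length-prefixed layout (struct lpstr below is invented for illustration, not a real library type) reads the length directly:

#include <stddef.h>

/* O(n): the only way to learn the length is to scan for the terminator. */
size_t nul_strlen(const char *s)
{
    const char *p = s;
    while (*p)
        ++p;
    return (size_t)(p - s);
}

/* Hypothetical length-prefixed layout, for illustration only. */
struct lpstr {
    size_t len;
    char   data[];     /* flexible array member (C99) */
};

/* O(1): the length is stored up front. */
size_t lpstr_len(const struct lpstr *s)
{
    return s->len;
}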

From the answers below, these are some cases where null-terminated strings are more efficient:

- When you need to cut off the start of a string and need to pass it to some method (sketched below). You can't really do this in constant time with length prefixing even if you are allowed to destroy the original string, because the length prefix probably needs to follow alignment rules.
- In some cases where you're just looping through the string character by character you might be able to save a CPU register. Note that this works only in the case that you haven't dynamically allocated the string (because then you'd have to free it, necessitating using that CPU register you saved to hold the pointer you originally got from malloc and friends).
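For the first case, a minimal sketch of why cutting off the start is free with null termination (the string and offset are arbitrary):

#include <stdio.h>

static void show(const char *s)
{
    printf("%s\n", s);
}

int main(void)
{
    const char *s = "prefix:payload";
    /* Dropping the first 7 characters is plain pointer arithmetic; the tail
       is itself a valid NUL-terminated string. A length-prefixed tail would
       need its own header (or a copy). */
    show(s + 7);    /* prints "payload" */
    return 0;
}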

Neither of these is anywhere near as common as length and concat.

There is also one assertion in the answers below:

You need to cut off the end of a string

but this one is incorrect: it takes the same amount of time for null-terminated and length-prefixed strings. (With a null-terminated string you just stick a null where you want the new end to be; with a length prefix you just subtract from the prefix.)
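A minimal sketch of that claim, assuming you are allowed to modify the string in place:

#include <stdio.h>

int main(void)
{
    char s[] = "hello, world";
    /* Cutting a NUL-terminated string down to its first 5 characters is a
       single store; a length-prefixed string would simply overwrite its
       length field instead. Either way the operation is O(1). */
    s[5] = '\0';
    printf("%s\n", s);    /* prints "hello" */
    return 0;
}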


Current answer

Many of the design decisions surrounding C stem from the fact that when it was originally implemented, parameter passing was somewhat expensive. Given a choice between, e.g.:

void add_element_to_next(arr, offset)
  char arr[];
  int offset;
{
  arr[offset] += arr[offset+1];
}

char array[40];

void test()
{
  int i;
  for (i=0; i<39; i++)
    add_element_to_next(array, i);
}

versus

void add_element_to_next(p)
  char *p;
{
  p[0]+=p[1];
}

char array[40];

void test()
{
  int i;
  for (i=0; i<39; i++)
    add_element_to_next(array+i);
}

The latter would have been slightly cheaper (and thus preferred), since it only requires passing one parameter rather than two. If the method being called doesn't need to know the base address of the array nor the index within it, passing a single pointer combining the two is cheaper than passing the values separately.

While there are many reasonable ways in which C could have encoded string lengths, the approaches that had been invented up to that time would have all required functions that should be able to work with part of a string to accept the base address of the string and the desired index as two separate parameters. Using zero-byte termination made it possible to avoid that requirement. Although other approaches would be better with today's machines (modern compilers often pass parameters in registers, and memcpy can be optimized in ways strcpy()-equivalents cannot), enough production code uses zero-byte terminated strings that it's hard to change to anything else.

PS: In exchange for a slight speed penalty on some operations, and a tiny bit of extra overhead on longer strings, it would have been possible to have methods that work with strings accept a pointer directly to a string, to a bounds-checked string buffer, or to a data structure identifying a substring of another string. A function like "strcat" would have looked something like [modern syntax]:

void strcat(unsigned char *dest, unsigned char *src)
{
  struct STRING_INFO d,s;
  str_size_t copy_length;

  get_string_info(&d, dest);
  get_string_info(&s, src);
  if (d.si_buff_size > d.si_length) // Destination is resizable buffer
  {
    copy_length = d.si_buff_size - d.si_length;
    if (s.si_length < copy_length)
      copy_length = s.si_length;
    memcpy(d.buff + d.si_length, s.buff, copy_length);
    d.si_length += copy_length;
    update_string_length(&d);
  }
}

That's a little bigger than the K&R strcat method, but it supports bounds checking, which the K&R method does not. Further, unlike the current method, it would be possible to easily concatenate an arbitrary substring, e.g.

/* Concatenate 10th through 24th characters from src to dest */

void catpart(unsigned char *dest, unsigned char *src)
{
  struct SUBSTRING_INFO *inf;
  src = temp_substring(&inf, src, 10, 24);
  strcat(dest, src);
}

Note that the lifetime of the string returned by temp_substring would be limited by the lifetimes of inf and src, whichever was shorter (which is why the method requires inf to be passed in: if it were local, it would die when the method returned).

In terms of memory cost, strings and buffers up to 64 bytes would have one byte of overhead (same as zero-terminated strings); longer strings would have slightly more (whether one allowed amounts of overhead between two bytes and the maximum required would be a time/space tradeoff). A special value of the length/mode byte would be used to indicate that a string function was given a structure containing a flag byte, a pointer, and a buffer length (which could then index arbitrarily into any other string).
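A sketch of how such a length/mode byte might be read; the concrete encoding below (0xFF marking the descriptor form, and the descriptor layout itself) is an assumption made purely for illustration, not something this answer specifies:

#include <stddef.h>

struct str_descriptor {            /* flag byte + pointer + buffer length */
    unsigned char        mode;     /* 0xFF would mean "this is a descriptor" */
    unsigned char        flags;
    const unsigned char *data;     /* may point into any other string */
    size_t               length;
};

struct str_view {
    const unsigned char *data;
    size_t               length;
};

static struct str_view decode_string(const void *s)
{
    const unsigned char *bytes = s;
    struct str_view v;

    if (bytes[0] == 0xFF) {
        /* Special mode value: the caller actually passed a descriptor. */
        const struct str_descriptor *d = s;
        v.data = d->data;
        v.length = d->length;
    } else {
        /* Short form: the header byte itself holds the length (0..63),
           giving one byte of overhead, as described above. */
        v.data = bytes + 1;
        v.length = bytes[0];
    }
    return v;
}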

Of course, K&R didn't implement anything like this, but that's most likely because they didn't want to spend much effort on string handling; even today, many languages seem rather weak in that area.

Other answers

Assume for a moment that C implemented strings the Pascal way, by prefixing them with a length: is a 7-character string the same data type as a 3-character string? If the answer is yes, then what kind of code should the compiler generate when I assign the former to the latter? Should the string be truncated, or automatically resized? If resized, should that operation be protected by a lock to make it thread safe? The C approach side-stepped all these issues, like it or not.
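To make the dilemma concrete, a rough C rendering of what "length as part of the type" would mean (the struct names are invented for illustration):

#include <string.h>

struct str7 { unsigned char len; char data[7]; };   /* a "7-char string" */
struct str3 { unsigned char len; char data[3]; };   /* a "3-char string" */

static void assign(struct str3 *dst, const struct str7 *src)
{
    /* Truncate? Report an error? Resize (impossible for a fixed field)?
       Somebody has to pick a meaning for this assignment. */
    dst->len = (src->len > 3) ? 3 : src->len;
    memcpy(dst->data, src->data, dst->len);
}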

Obviously for performance and safety, you'll want to keep the length of a string while you're working with it rather than repeatedly performing strlen or the equivalent on it. However, storing the length in a fixed location just before the string contents is an incredibly bad design. As Jörgen pointed out in the comments on Sanjit's answer, it precludes treating the tail of a string as a string, which for example makes a lot of common operations like path_to_filename or filename_to_extension impossible without allocating new memory (and incurring the possibility of failure and error handling). And then of course there's the issue that nobody can agree how many bytes the string length field should occupy (plenty of bad "Pascal string" languages used 16-bit fields or even 24-bit fields which preclude processing of long strings).
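For example, with null termination a path_to_filename-style helper needs no allocation at all, because the tail of a string is itself a string (a minimal sketch, not a standard function):

#include <stdio.h>
#include <string.h>

static const char *path_to_filename(const char *path)
{
    /* Return a pointer into the original buffer, just past the last '/'. */
    const char *slash = strrchr(path, '/');
    return slash ? slash + 1 : path;
}

int main(void)
{
    puts(path_to_filename("/usr/local/bin/cc"));   /* prints "cc" */
    return 0;
}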

C's design of letting the programmer choose if/where/how to store the length is much more flexible and powerful. Of course, the programmer has to be smart. C punishes dumbness with programs that crash, grind to a halt, or let your enemies in.

According to Joel Spolsky in this blog post,

It's because the PDP-7 microprocessor, on which UNIX and the C programming language were invented, had an ASCIZ string type. ASCIZ meant "ASCII with a Z (zero) at the end."

After seeing all the other answers here, I'm convinced that even if this is true, it's only part of the reason for C having null-terminated "strings". That post is quite illuminating about how something as simple as a string can actually be quite hard.

Calavera is right, but since people don't seem to get his point, I'll provide some code examples.

First, let's consider what C is: a simple language, in which all code has a pretty direct translation into machine language. All types fit in registers and on the stack, and it doesn't require an operating system or a big run-time library to run, since it was meant to write those very things (a task to which it is superbly well suited, considering there isn't even a likely competitor to this day).

If C had a string type, like int or char, it would be a type that didn't fit in a register or on the stack, and it would require memory allocation (with all its supporting infrastructure) to be handled in any way. All of which goes against the basic tenets of C.

So, a string in C is:

char *s;

Now, let's assume this is length-prefixed. Let's write the code to concatenate two strings:

#include <stdlib.h>
#include <string.h>

char* concat(char* s1, char* s2)
{
    /* What? What is the type of the length of the string? */
    int l1 = *(int*) s1;
    /* How much? How much must I skip? */
    char *s1s = s1 + sizeof(int);
    int l2 = *(int*) s2;
    char *s2s = s2 + sizeof(int);
    int l3 = l1 + l2;
    char *s3 = (char*) malloc(l3 + sizeof(int));
    char *s3s = s3 + sizeof(int);
    memcpy(s3s, s1s, l1);
    memcpy(s3s + l1, s2s, l2);
    *(int*) s3 = l3;
    return s3;
}

The other way is to use a struct to define a string:

struct {
  int len; /* cannot be left implementation-defined */
  char* buf;
}

At this point, every string operation would require two allocations, which, in practice, means you'd go through a library to do any handling of them.
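A minimal sketch of what "two allocations" means here (make_string is just an illustrative name):

#include <stdlib.h>
#include <string.h>

struct string {
    int   len;
    char *buf;
};

static struct string *make_string(const char *src, int len)
{
    /* One allocation for the header... */
    struct string *s = malloc(sizeof *s);
    if (s == NULL)
        return NULL;
    /* ...and a second one for the character data. */
    s->buf = malloc((size_t)len);
    if (s->buf == NULL) {
        free(s);
        return NULL;
    }
    memcpy(s->buf, src, (size_t)len);
    s->len = len;
    return s;
}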

The funny thing is... such structs do exist in C! They just aren't used for the day-to-day job of displaying messages to the user.

So, that is Calavera's point: there is no string type in C. To do anything with one, you'd have to take a pointer and decode it as a pointer to two different types, and then the size of the string becomes very relevant and cannot just be left "implementation defined".

Now, C can handle memory in any way, and the mem functions in the library (in <string.h>, even!) provide all the tools you need to handle memory as a pair of pointer and size. The so-called "strings" in C were created for just one purpose: displaying messages in the context of writing an operating system intended for text terminals. And for that, null termination is enough.
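A small example of treating memory as a pointer/size pair with the mem functions; the data here is arbitrary:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* The mem* functions work on an explicit (pointer, size) pair; the
       embedded NUL is just another byte, and no terminator is consulted. */
    const char data[] = { 'a', 'b', '\0', 'c', 'd' };
    char copy[sizeof data];

    memcpy(copy, data, sizeof data);                       /* copies all 5 bytes */
    const char *hit = memchr(copy, 'd', sizeof copy);
    printf("found 'd' at offset %d\n", (int)(hit - copy)); /* offset 4 */
    return 0;
}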

Not necessarily a rationale, but rather a counterpoint to length encoding:

- Certain forms of dynamic length encoding are superior to static length encoding as far as memory is concerned; it all depends on usage. Just look at UTF-8 for proof. It's essentially an extensible character array for encoding a single character. This uses a single bit for each extended byte. NUL termination uses 8 bits. Length-prefix can, I think, reasonably be termed infinite length as well by using 64 bits. How often you hit the case of your extra bits is the deciding factor. Only one extremely large string? Who cares if you're using 8 or 64 bits? Many small strings (i.e. strings of English words)? Then your prefix costs are a large percentage.
- Length-prefixed strings allowing time savings is not a real thing. Whether your supplied data is required to have the length provided, you're counting at compile time, or you're truly being provided dynamic data that you must encode as a string, these sizes are computed at some point in the algorithm. A separate variable to store the size of a null-terminated string can be provided, which makes the comparison on time savings moot. One just has an extra NUL at the end... but if the length encoding doesn't include that NUL then there's literally no difference between the two. There's no algorithmic change required at all, just a pre-pass you have to design manually yourself instead of having a compiler/runtime do it for you. C is mostly about doing things manually.
- Length-prefix being optional is a selling point. I don't always need that extra info for an algorithm, so being required to provide it for every string means my precompute+compute time can never drop below O(n). (E.g. a hardware random number generator producing values 1-128. I can pull from an "infinite string". Let's say it only generates characters so fast, so our string length changes all the time. My usage of the data probably doesn't care how many random bytes I have; it just wants the next available unused byte as soon as it can get it after a request. I could be waiting on the device, but I could also have a buffer of characters pre-read. A length comparison is a needless waste of computation; a null check is more efficient.)
- Length-prefix is a good guard against buffer overflow? So is sane usage of library functions and implementations. What if I pass in malformed data? My buffer is 2 bytes long but I tell the function it's 7! Ex: if gets() was intended to be used on known data, it could have had an internal buffer check that tested compiled buffers and malloc() calls and still followed the spec. If it was meant to be used as a pipe for unknown STDIN to arrive at an unknown buffer, then clearly one can't know about the buffer size, which means a length argument is pointless; you need something else here, like a canary check. For that matter, you can't length-prefix some streams and inputs, you just can't. Which means the length check has to be built into the algorithm and not be a magic part of the typing system. TL;DR: NUL-terminated never had to be unsafe; it just ended up that way via misuse.
- Counter-counter point: NUL termination is annoying on binary data. You either need to do length-prefixing here or transform NUL bytes in some way: escape codes, range remapping, etc., which of course means more memory usage, reduced information, and more operations per byte. Length-prefix mostly wins the war here. The only upside of a transform is that no additional functions have to be written to cover the length-prefixed strings, which means your more optimized sub-O(n) routines can automatically act as their O(n) equivalents without adding more code. The downside is, of course, time/memory/compression waste when used on NUL-heavy strings. Depending on how much of your library you end up duplicating to operate on binary data, it may make sense to work solely with length-prefixed strings. That said, one could also do the same with length-prefixed strings... a length of -1 could mean NUL-terminated, and you could use NUL-terminated strings inside length-terminated ones.
- Concat: "O(n+m) vs O(m)". I'm assuming you're referring to m as the total length of the string after concatenating, because they both have to perform that number of operations at minimum (you can't just tack on to string 1; what if you have to realloc?). And I'm assuming n is a mythical amount of operations you no longer have to do because of a pre-compute. If so, then the answer is simple: pre-compute. If you're insisting you'll always have enough memory to not need to realloc, and that's the basis of the big-O notation, then the answer is even simpler: do a binary search on the allocated memory for the end of string 1; clearly there's a large swath of infinite zeros after string 1, so we needn't worry about realloc. There, you easily got n down to log(n), and I barely tried. And if you recall, log(n) is essentially only ever as large as 64 on a real computer, which is essentially like saying O(64+m), which is essentially O(m). (And yes, that logic has been used in run-time analysis of real data structures in use today. It's not bullshit off the top of my head.)
- Concat()/Len() again: memoize results. Easy. This turns all computes into pre-computes if possible/necessary. It is an algorithmic decision, not an enforced constraint of the language. (A sketch of this follows after the list.)
- String suffix passing is easier/possible with NUL termination. Depending on how length-prefixing is implemented, it can be destructive on the original string and can sometimes not even be possible, requiring a copy and an O(n) pass instead of O(1).
- Argument passing/dereferencing is cheaper for NUL-terminated than for length-prefixed strings, obviously because you're passing less information. If you don't need the length, then this saves a lot of footprint and allows optimizations.
- You can cheat. It's really just a pointer. Who says you have to read it as a string? What if you want to read it as a single character or a float? What if you want to do the opposite and read a float as a string? If you're careful, you can do this with NUL termination. You can't do this with a length prefix; it's a data type typically distinct from a pointer. You'd most likely have to build a string byte by byte and get the length. Of course, if you wanted something like an entire float (which probably has a NUL inside it) you'd have to read byte by byte anyway, but the details are left to you to decide.
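A minimal sketch of the "memoize results" point from the list above: nothing stops a C programmer from carrying a cached length next to a NUL-terminated string, paying the O(n) scan once (struct counted is just an illustrative name):

#include <stdio.h>
#include <string.h>

struct counted {
    const char *s;     /* still NUL-terminated */
    size_t      len;   /* computed once, reused afterwards */
};

static struct counted count_once(const char *s)
{
    struct counted c;
    c.s = s;
    c.len = strlen(s);
    return c;
}

int main(void)
{
    struct counted c = count_once("hello, world");
    printf("%zu\n", c.len);    /* 12, without rescanning later */
    return 0;
}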

TL;DR: Are you working with binary data? If not, then null termination allows more algorithmic freedom. If you are, then code volume versus speed/memory/compression is your main concern. A blend of the two approaches, or memoization, may be best.