- It is not a strength to have buffer overflows.
(Again, you're talking about arrays and strings as modern languages define them. In C an array is just a contiguous block of memory consisting of elements of exactly the same type. Nothing more, nothing less.)
Yes, it's a strength when a language doesn't have any forced bounds checking
by default. First of all, remember that var[n] is just another way of expressing *(var + n). It relies on the fact that an array decays into a pointer to its first element; regardless, it's just shorthand for some basic pointer math.
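To make that equivalence concrete, here's a minimal sketch (the variable names are just for illustration):

    #include <stdio.h>

    int main(void) {
        int var[4] = {10, 20, 30, 40};

        /* var decays to a pointer to its first element, so these are identical. */
        printf("%d\n", var[2]);      /* prints 30 */
        printf("%d\n", *(var + 2));  /* prints 30: same address arithmetic */
        printf("%d\n", 2[var]);      /* legal (if ugly), because a[b] is *(a + b) */

        return 0;
    }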
1) How are you going to have some language-wide "checking" system on pointer math? It's not possible, nor is it desirable.
2) What should happen when n exceeds the boundaries of var? Should n get clamped? Should the operation be cancelled? Should some higher-level error be propagated? These are decisions the programmer should make so they can choose the appropriate response for the context (see the sketch after this list). There is no default cost that makes sense to pay, especially if there's no actual way to exceed the boundaries.
3) Why should the programmer expect, or want, to pay for hidden instructions every time they do something as seemingly transparent and simple as pointer math?
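As a rough illustration of point 2, this is what choosing your own policy might look like; get_clamped and get_checked are hypothetical helpers, not anything from the standard library:

    #include <stdbool.h>
    #include <stddef.h>

    /* Policy A: clamp the index to the last valid element (assumes len > 0). */
    static int get_clamped(const int *a, size_t len, size_t n) {
        return a[n < len ? n : len - 1];
    }

    /* Policy B: report failure and let the caller decide what to do about it. */
    static bool get_checked(const int *a, size_t len, size_t n, int *out) {
        if (n >= len)
            return false;
        *out = a[n];
        return true;
    }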
Most languages these days enforce their own paradigm. In C#, using the [] operator means calling into code that uses an exception framework, and if you want to "handle" an out-of-bounds access, you have to deal with exceptions. C lets you handle it yourself (or not handle it at all). There are many times when you are writing loops that simply cannot exceed the boundaries of an array; why pay for any checking in those situations? Again, if your response is "I shouldn't have to make a decision about this..." then a language like C certainly isn't for you.
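For example, in a loop like this the index can never leave the array, so a per-access check would buy nothing:

    #include <stddef.h>

    int sum(void) {
        int totals[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        size_t count = sizeof totals / sizeof totals[0];
        int acc = 0;

        /* The loop bound comes from the array itself, so every access is in range;
           a mandatory per-access bounds check here would be pure overhead. */
        for (size_t i = 0; i < count; i++)
            acc += totals[i];

        return acc;
    }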
A buffer overflow happens when the programmer isn't careful; any modern language will also let you write an infinite loop if you aren't careful. No one considers that a design flaw (in fact, some might argue it's great proof the language is Turing complete). The buffer overflows associated with cstrings are technically infinite loops that only stop when they happen to hit a stray zero byte or trip an OS memory-protection fault.
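To see why, consider a naive strcpy-style copy loop; if src is missing its terminating zero byte, nothing in the loop itself will ever stop it:

    /* A naive strcpy-style copy: it only stops when it reads a zero byte.
       If src is not properly terminated, it walks past both buffers until it
       stumbles on a 0 somewhere in memory or the OS kills the process. */
    void copy_cstring(char *dst, const char *src) {
        while ((*dst++ = *src++) != '\0')
            ;
    }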
Ultimately, though, every language, no matter how "safe" or "unsafe", is doing this kind of pointer math internally. It is absolutely required, and any language that wants decent performance has some way to bypass bounds checking. The question I suppose you are trying to address is "should the programmer also get this choice?"
Finally, generally speaking, it's not like every C programmer is using GCC 4.x and writing their code in Notepad. The ideal way to work with these systems now is to use tooling that helps find errors and does bounds checking in development builds, then turn it off in production builds everywhere that isn't critical. (One of the major reasons buffer overflows are so prevalent is the strange popularity of C; it's a very difficult language to leverage properly across a project, and median-grade programmers should stay far, far away. Yet that's not been the case, historically.)
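One cheap way to get that development/production split is a checked-access helper built on assert, which disappears entirely when the release build defines NDEBUG (checked_get is just an illustrative name):

    #include <assert.h>
    #include <stddef.h>

    /* Development builds: the assert fires on an out-of-range index.
       Production builds compiled with -DNDEBUG: the check is compiled out. */
    static inline int checked_get(const int *a, size_t len, size_t i) {
        assert(i < len);
        return a[i];
    }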
If the standard library had included string functions that take a separate length argument, it would have been a lot easier to use Pascal-style strings when the benefit of storing the length outweighs the cost, and programmers might have been less inclined to always default to cstrings.
The "_s" suffixed variants of string functions introduced in C11 more or less cover this.