The string terminator is a byte containing all 0 bits.
The unsigned int is two or four bytes (depending on your environment), each containing all 0 bits.
The two items are stored at different addresses. Your compiled code performs operations suitable for strings on the former location, and operations suitable for unsigned binary numbers on the latter. (Unless you have either a bug in your code, or some dangerously clever code!)
But all of these bytes look the same to the CPU. Data in memory (in most currently-common instruction set architectures) doesn’t have any type associated with it. That’s an abstraction that exists only in the source code and means something only to the compiler.
Edit: As an example, it is perfectly possible, even common, to perform arithmetic on the bytes that make up a string. If you have a string of ASCII characters (one per 8-bit byte), you can convert the letters in the string between upper and lower case by adding or subtracting 32 (decimal). Or, if you are translating to another character code, you can use their values as indices into an array whose elements provide the equivalent bit coding in the other code.
To the CPU the chars are really just extra-short integers (eight bits each, instead of 16, 32, or 64). To us humans their values happen to be associated with readable characters, but the CPU has no idea of that. Nor does it know anything about the C convention that a null byte ends a string (and, as many have noted in other answers and comments, there are programming environments in which that convention isn’t used at all).
To be sure, there are some instructions in x86/x64 that tend to be used a lot with strings – the REP prefix, for example – but you can just as well use them on an array of integers, if they achieve the desired result.
In short there is no difference (except that an int is 2 or 4 bytes wide and a char just 1).
The thing is that all modern libraries either use the null-terminator technique or store the length of the string. In both cases the program knows it has reached the end of a string when it either reads a null character or has read as many characters as the stored length indicates.

Issues start when the null terminator is missing or the length is wrong, because then the program starts reading from memory it isn’t supposed to read.
There is no difference. Machine code (assembly) does not have variable types; instead, the type of the data is determined by the instruction.

A better example would be `float`: if you have 4 bytes in memory, there is no information about whether they hold an `int` or a `float` (or something else entirely). However, there are two different instructions for integer addition and float addition, so if the integer addition instruction is used on the data, then it’s an integer, and vice versa.

The same goes for strings: if you have code that, say, looks at an address and counts bytes until it reaches a `\0` byte, you can think of it as a function computing a string’s length.
Of course, programming like this would be complete madness, which is why we have higher-level languages that compile to machine code, and almost no one programs in assembly directly.
The scientific single word answer would be: metadata.
The metadata tells the computer whether some data at a certain location is an int, a string, program code, or whatever. This metadata can be part of the program code (as Jamie Hanrahan mentioned) or it can be explicitly stored somewhere.
Modern CPUs can often distinguish between memory regions assigned to program code and data regions (for example, via the NX bit: https://en.wikipedia.org/wiki/NX_bit). Some exotic hardware can also distinguish between strings and numbers, yes. But the usual case is that the software takes care of this issue, either through implicit metadata (in the code) or explicit metadata: object-oriented VMs often store the metadata (type/class information) as part of the data (the object).
An advantage of not distinguishing between different kinds of data is that some operations become very simple. The I/O subsystem does not need to know whether the data it reads from or writes to disk is actually program code, human-readable text, or numbers. It’s all just bits being transported through the machine. Let the program code deal with the fancy typing issues.
It doesn’t. You do it!
Or your compiler/interpreter.
If the instructions tell the computer to add the `0` as a number, it’ll do it. If they tell the computer to stop printing data after reaching the `0`, treated as a `'\0'` char, it’ll do it.
Languages have mechanisms that determine how data is treated. In C, variables have types, like `char`, and the compiler generates the right instructions for each data type. But C allows you to cast data from a variable to another variable of a different type; even a pointer can be used as a number. To the computer, it’s all bits like any other.
A null character is one byte, and an unsigned int is typically two or four bytes, depending on the platform.