I was talking to my friend about these two pieces of code. He said the Python one terminates, but the C++ one doesn't.
Python:
arr = [1, 2, 3]
for i in range(len(arr)):
    arr.append(i)
print("done")
C++:
#include <iostream>
#include <vector>
using namespace std;
int main() {
    vector<int> arr{1, 2, 3};
    for (int i = 0; i < arr.size(); i++) {
        arr.push_back(i);
    }
    cout << "done" << endl;
    return 0;
}
I challenged that and ran it on two computers. The first one ran out of memory (bad_alloc) because it only had 4 GB of RAM. My Mac has 12 GB of RAM and was able to run and terminate just fine. I thought it wouldn't run forever because the type returned by vector's size() is an unsigned integer. Since my Mac is 64-bit, I thought it could store 2^(64-2) = 2^62 ints (which is true), but the unsigned integer used for the size seems to be 32 bits for some reason.
Is this some bug in the C++ compiler that fails to make max_size() reflect the system's hardware, so that the overflow causes the program to terminate? Or is there some other reason?
There is no bug in your C++ compiler manifesting itself here.
int is overflowing (due to the i++), the behaviour of which is undefined. (It's feasible that you'll run out of memory on some platforms before this overflow occurs.) Note that there is no defined behaviour that will make i negative, although that is a common occurrence on machines with 2's complement signed integral types once std::numeric_limits<int>::max() is attained; and if i were, say, -1, then i < arr.size() would be false due to the implicit conversion of i to an unsigned type.
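To make that conversion point concrete, here is a minimal sketch (assuming a typical 64-bit platform where std::size_t is wider than int); the i = -1 value simply stands in for what a wrapped counter might look like on a 2's complement machine, since the overflow itself is undefined:

#include <iostream>
#include <vector>

int main() {
    std::vector<int> arr{1, 2, 3};

    int i = -1;  // stand-in for a counter that has wrapped around
    // In i < arr.size(), i is implicitly converted to the vector's unsigned
    // size_type; -1 becomes a huge unsigned value, so the comparison is
    // false and a loop with this condition would stop.
    std::cout << std::boolalpha << (i < arr.size()) << '\n';  // prints "false"

    // On a 64-bit platform the size type is not a 32-bit unsigned int;
    // it is std::size_t, typically 64 bits wide.
    std::cout << sizeof(std::vector<int>::size_type) * 8 << "-bit size_type\n";
    std::cout << "max_size(): " << arr.max_size() << '\n';
    return 0;
}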
The Python version pre-computes range(len(arr)); that is, subsequent appends do not change that initial value.
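For comparison, here is a minimal C++ sketch that mirrors the Python semantics by capturing the bound once before iterating (the initial_size variable is just an illustrative name):

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> arr{1, 2, 3};

    // Capture the bound once, like Python's range(len(arr)); later
    // push_backs cannot extend the iteration.
    const std::size_t initial_size = arr.size();
    for (std::size_t i = 0; i < initial_size; ++i) {
        arr.push_back(static_cast<int>(i));
    }

    std::cout << "done, final size = " << arr.size() << '\n';  // prints 6
    return 0;
}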