It's been a while since I C++'d but these answers are off a bit.
As far as size goes, 'int' isn't anything concrete. It's the notional "natural" integer type, assumed to be fast for things like loop counters. The standard only requires it to be at least 16 bits; it has no preset size.
So, the answers are correct with respect to the differences between int and uint, but are incorrect when they talk about "how large they are" or what their range is. That size isn't undefined, strictly speaking; it's implementation-defined, meaning it changes with the compiler and platform.
It's never polite to discuss the size of your bits in public.
When you compile a program, int does have a size, as you've taken the abstract C/C++ and turned it into concrete machine code.
So, TODAY, practically speaking, with the most common compilers (where int happens to be 32 bits), they are correct. But do not assume this.
Specifically: if you're writing a 32-bit program, int will be one thing; in a 64-bit program it can be different, and in a 16-bit program it's different again. I've worked through all three, and briefly looked at the 6502 (shudder).
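If you want to see this for yourself, here's a minimal check you can compile anywhere; the numbers it prints depend entirely on your compiler and target, which is exactly the point:

```cpp
#include <iostream>
#include <climits>

int main() {
    std::cout << "sizeof(short)     = " << sizeof(short)     << '\n';
    std::cout << "sizeof(int)       = " << sizeof(int)       << '\n';
    std::cout << "sizeof(long)      = " << sizeof(long)      << '\n';
    std::cout << "sizeof(long long) = " << sizeof(long long) << '\n';
    std::cout << "INT_MAX           = " << INT_MAX           << '\n';
    // 64-bit Linux (LP64): long is 8 bytes. 64-bit Windows (LLP64):
    // long is still 4 bytes. A 16-bit target could print int = 2.
}
```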
A brief Google search turns up this: https://www.tutorialspoint.com/cprogramming/c_data_types.htm This is also good info: https://docs.oracle.com/cd/E19620-01/805-3024/lp64-1/index.html
Use int if you really don't care how large your bits are; it can change.
Use size_t if you want to hold the size of something (ssize_t is the POSIX signed counterpart; standard C++ gives you ptrdiff_t).
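For example, a minimal sketch: size_t is what sizeof and .size() actually give you, and it's guaranteed wide enough for the size of any object, which int is not.

```cpp
#include <cstddef>   // std::size_t
#include <vector>
#include <iostream>

int main() {
    std::vector<double> v(1000);
    // sizeof and .size() both yield std::size_t; using int here risks
    // truncation (and signed/unsigned comparison warnings) on large data.
    std::size_t bytes = v.size() * sizeof(double);
    for (std::size_t i = 0; i < v.size(); ++i) {
        v[i] = static_cast<double>(i);
    }
    std::cout << bytes << " bytes\n";
}
```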
If you're reading or writing binary data, don't use int. Use an explicitly sized type (usually platform/source dependent). WinSDK has plenty of good, maintainable examples of this (DWORD, INT32, and friends). Other platforms do too.
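Something like this sketch, using the portable fixed-width types from <cstdint> (the header layout and field names here are made up for illustration; the WinSDK types serve the same purpose on Windows):

```cpp
#include <cstdint>
#include <cstdio>

// On disk, "int" means nothing, so every field gets an explicit width.
struct RecordHeader {
    uint32_t magic;     // always 4 bytes, on every platform
    uint16_t version;   // always 2 bytes
    uint16_t flags;
    int64_t  timestamp; // always 8 bytes, signed
};

int main() {
    RecordHeader h{0x52454331u, 1, 0, 1700000000};
    if (FILE* f = std::fopen("record.bin", "wb")) {
        // Caveat: for real interchange you'd also pin byte order and
        // struct padding (e.g. write fields individually); fixed-width
        // types only pin the widths.
        std::fwrite(&h, sizeof h, 1, f);
        std::fclose(f);
    }
}
```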
I've spent a LOT of time going through code from people who "SMH" at the idea that this is all just academic/pedantic. These are the people who write unmaintainable code. Sure, it's easy to type 'int' everywhere and skip all the extra darn typing, but figuring out what they really meant later is a lot of work, and a bit mind-numbing.
It's crappy coding when you mix int and uint in the same expression.
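Here's the classic trap, as a minimal sketch of why mixing them bites:

```cpp
#include <iostream>

int main() {
    int      i = -1;
    unsigned u = 1;
    // The usual arithmetic conversions turn i into a huge unsigned
    // value before the comparison, so the "obvious" branch is not taken.
    if (i < u)
        std::cout << "expected\n";
    else
        std::cout << "surprise\n";  // -1 became 4294967295 (with 32-bit int)
    // Most compilers will warn here (-Wsign-compare); take the hint.
}
```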
Use int and uint when you just want a fast integer and don't care about the range (beyond signed vs. unsigned).