You can't, because there is no implicit conversion between the two pointer types.
Remember that you can't pass an array to a function by value: an array parameter is really just a pointer parameter, so the following two are the same:
void f(long a[]);
void f(long* a);
When you pass an array to a function, the array is implicitly converted to a pointer to its initial element. So, given long data[10];, the following two are the same:
f(data);
f(&data[0]);
There is no implicit conversion from short* to long*, so if data were declared as short data[10];, you would get a compilation error.
You would need to explicitly cast the short* to a long* using reinterpret_cast, but that won't convert "an array of N short elements" into "an array of N long elements"; it will reinterpret the pointed-to bytes as "an array of [some number of] long elements," which is almost certainly not what you want.
You need to create a new array of long elements and copy the elements from the array of short into the array of long:
short data[10] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
long data_as_long[10];
std::copy(data, data + 10, data_as_long);
f(data_as_long);
Or, you might consider changing your function to be a template that can accept a pointer to an array of any type of element:
template <typename T> void f(T*);
Or, for a more advanced solution, the most generic way to do this is to have your function take an iterator range. That way you can pass it any kind of iterator (including pointers into an array). It also provides a natural and simple way to ensure the length of the array is passed correctly: the length is not passed explicitly; it's determined by the iterator pair.
template <typename Iterator> void f(Iterator first, Iterator last);
const unsigned length_of_data = 10;
long data_array[length_of_data];
std::vector<long> data_vector(length_of_data);
f(data_array, data_array + length_of_data); // works!
f(data_vector.begin(), data_vector.end()); // also works!
No, a good compiler will give an error. A bad compiler will give strange results.
Casting a single element at a time is common and presents no difficulties, but passing a pointer to a function that expects one type of object when it actually points to another is an I Love Lucy sort of catastrophe.
C# promotes your Int16 operands to Int32 when performing the addition.
Change the following:
Int16 answer = firstNo + secondNo;
into...
Int16 answer = (Int16)(firstNo + secondNo);
Adding two Int16 values results in an Int32 value. You will have to cast it to Int16:
Int16 answer = (Int16) (firstNo + secondNo);
You can avoid this problem by switching all your numbers to Int32.
Any time an integer value is converted to a different integer type, it falls through a deterministic pachinko machine of rules dictated by the standard and, in one case, by the implementation.
The general overview of the integer promotions:
C99 6.3.1.1-p2
If an int can represent all values of the original type (as restricted by the width, for a bit-field), the value is converted to an int; otherwise, it is converted to an unsigned int. These are called the integer promotions. All other types are unchanged by the integer promotions.
That said, let's look at your conversions. The signed short to unsigned int case is covered by the following, since the value being converted falls outside the unsigned int domain:
C99 6.3.1.3-p2
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
This basically means "add UINT_MAX + 1". On your machine UINT_MAX is 4294967295, so this becomes
-1 + 4294967295 + 1 = 4294967295
Regarding your unsigned short to signed int conversion, that is covered by the regular value-preserving promotion. Specifically:
C99 6.3.1.3-p1
When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.
In other words, because the value of your unsigned short falls within the coverable domain of signed int, there is nothing special done and the value is simply saved.
And finally, as mentioned above, something special happens in your declaration of b:
signed short b = 0xFFFF;
The constant 0xFFFF in this case has type int, a signed type. Its decimal value is 65535. However, that value is not representable by a signed short, so yet another conversion happens, one that perhaps you weren't aware of:
C99 6.3.1.3-p3
Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
In other words, your implementation chose to store it as (-1), but you cannot rely on that on a different implementation.
As stated in the question, assume 16-bit short and 32-bit int.
unsigned short a = 0xFFFF;
This initializes a to 0xFFFF, or 65535. The expression 0xFFFF is of type int; it's implicitly converted to unsigned short, and the value is preserved.
signed short b = 0xFFFF;
This is a little more complicated. Again, 0xFFFF is of type int. It's implicitly converted to signed short -- but since the value is outside the range of signed short the conversion cannot preserve the value.
Conversion of an integer to a signed integer type, when the value can't be represented, yields an implementation-defined value. In principle, the value of b could be anything between -32768 and +32767 inclusive. In practice, it will almost certainly be -1. I'll assume for the rest of this that the value is -1.
unsigned int u16tou32 = a;
The value of a is 0xFFFF, which is converted from unsigned short to unsigned int. The conversion preserves the value.
unsigned int s16tou32 = b;
The value of b is -1. It's converted to unsigned int, which clearly cannot store a value of -1. Conversion of an integer to an unsigned integer type (unlike conversion to a signed type) is defined by the language; the result is reduced modulo MAX + 1, where MAX is the maximum value of the unsigned type. In this case, the value stored in s16tou32 is -1 + (UINT_MAX + 1) = UINT_MAX, or 0xFFFFFFFF.
signed int u16tos32 = a;
The value of a, 0xFFFF, is converted to signed int. The value is preserved.
signed int s16tos32 = b;
The value of b, -1, is converted to signed int. The value is preserved.
So the stored values are:
a == 0xFFFF (65535)
b == -1 (not guaranteed, but very likely)
u16tou32 == 0xFFFF (65535)
s16tou32 == 0xFFFFFFFF (4294967295)
u16tos32 == 0xFFFF (65535)
s16tos32 == -1
To summarize the integer conversion rules:
If the target type can represent the value, the value is preserved.
Otherwise, if the target type is unsigned, the value is reduced modulo MAX+1, which is equivalent to discarding all but the low-order N bits. Another way to describe this is that the value MAX+1 is repeatedly added to or subtracted from the value until you get a result that's in the range (this is actually how the C standard describes it). Compilers don't actually generate code to do this repeated addition or subtraction; they just have to get the right result.
Otherwise, the target type is signed and cannot represent the value; the conversion yields an implementation-defined value. In almost all implementations, the result discards all but the low-order N bits using a two's-complement representation. (C99 added a rule for this case, permitting an implementation-defined signal to be raised instead. I don't know of any compiler that does this.)
The language guarantees that int is at least 16 bits, long is at least 32 bits, and long can represent at least all the values that int can represent.
If you assign a long value to an int object, it will be implicitly converted. There's no need for an explicit cast; it would merely specify the same conversion that's going to happen anyway.
On your system, where int and long happen to have the same size and range, the conversion is trivial; it simply copies the value.
On a system where long is wider than int, if the value won't fit in an int, then the result of the conversion is implementation-defined. (Or, starting in C99, it can raise an implementation-defined signal, but I don't know of any compilers that actually do that.) What typically happens is that the high-order bits are discarded, but you shouldn't depend on that. (The rules are different for unsigned types; the result of converting a signed or unsigned integer to an unsigned type is well defined.)
If you need to safely assign a long value to an int object, you can check that it will fit before doing the assignment:
#include <limits.h> /* for INT_MIN, INT_MAX */
/* ... */
int i;
long li = /* whatever */;
if (li >= INT_MIN && li <= INT_MAX) {
i = li;
}
else {
/* do something else? */
}
The details of "something else" are going to depend on what you want to do.
One correction: int and long are always distinct types, even if they happen to have the same size and representation. Arithmetic types are freely convertible, so this often doesn't make any difference, but for example int* and long* are distinct and incompatible types; you can't assign a long* to an int*, or vice versa, without an explicit (and potentially dangerous) cast.
And if you find yourself needing to convert a long value to int, the first thing you should do is reconsider your code's design. Sometimes such conversions are necessary, but more often they're a sign that the int to which you're assigning should have been defined as a long in the first place.
A long can always represent all values of int.
If the value at hand can be represented by the type of the variable you assign to, then the value is preserved.
If it can't be represented, then for a signed destination type the result is implementation-defined, while for an unsigned destination type it is specified as the original value modulo 2^n, where n is the number of bits in the value representation (which is not necessarily all the bits in the destination).
In practice, on modern machines you get wrapping also for signed types.
That's because modern machines use two's complement form to represent signed integers, without any bits used to denote "invalid value" or such – i.e., all bits used for value representation.
With n bits of value representation, any integer value x is mapped to x + K·2^n, with the integer constant K chosen such that the result is in the range where half of the possible values are negative.
Thus, for example, with 32-bit int the value -7 is represented as the bit pattern for -7 + 2^32 = 2^32 − 7, so if you display the number that the bit pattern stands for as an unsigned integer, you get a pretty large number.
The reason this is called two's complement is that it makes sense for the binary numeral system, the base-two numeral system. For the binary numeral system there's also a ones' (note the placement of the apostrophe) complement. Similarly, for the decimal numeral system there are ten's complement and nines' complement. With 4-digit ten's complement representation you would represent -7 as 10000 − 7 = 9993. That's all, really.
I'm using a library function which takes arrays of longs as inputs (and single longs too), and modifies these arrays. As long as all the numbers inside the library stay within the int size limit (i.e. there are no calculations that would overflow an int), is it 100% safe for me to pass it arrays of ints, or should I manually convert my arrays from int to long using for loops before input? And then assume that these arrays still contain ints?
What if this was some other integer type, like unsigned int?
I am worried that, for example, number 5 has different bit representation (and is stored using a different number of bits) across these various data types. Will C figure it out and do the correct thing?
The reason you require an explicit conversion is that not all values of long can be represented as an int. The compiler is more or less saying 'you can do this, but you have to tell me you know what you're doing'.
Per the docs, Convert.ToInt32 will check for and throw an OverflowException if the long cannot be represented as an int. You can see the implementation in the reference source - it's just this check and then a cast.
The other two options, a plain cast and a cast inside unchecked, are (usually) the same: both allow the conversion despite an overflow, since unchecked is the compiler default. If you change the default to checked using the /checked switch, you'll get an OverflowException unless the cast is inside an unchecked block.
As to which is 'best', it depends what your requirement is.
In C#, int is a 32-bit integral type while long is a 64-bit integral type.
long can represent every int value, but not every long value can be represented by an int. Since long is the wider type, the compiler converts int to long implicitly, but not the other way around.
That's why you need to explicitly cast/convert long to int, which may lose information if the long value is not representable as an int.