Answer from Dragos on Stack Overflow:

static_cast will also fail if the compiler doesn't know (or is pretending not to know) about the relationship between the types. If the inheritance between the two classes isn't declared public, the compiler will treat them as unrelated types and give you the same cryptic error.

This just bit me, so I thought I'd share.
Your reasoning has a very common flaw; I think we've all made the same mistake at some point. You are thinking of std::vector<> as an output-only container because that is how you want to use it right now, but it is not.
Just imagine that the following code would compile:
vector<BigObject*>* bigVector = box->ClipObjectInRect(); // OK
ObjList* objVector = static_cast<ObjList*>(bigVector); // Not OK; we'll now see why
objVector->push_back(new SmallObject()); // OUCH
As you can see, allowing that cast would let you try to put a SmallObject* into a container that can only hold BigObject*. This would surely result in a runtime error.
By the way: you can actually cast between arrays of related types. This is a behaviour inherited from C. And it results in runtime errors :)
How does type-casting differ from type conversion by functions?
What are the technical differences between converting types using the type-casting operator found in many C-like languages and type conversion using functions such as str() and int() in Python?
You should use reinterpret_cast for casting pointers, i.e.
r = reinterpret_cast<int*>(p);
Of course this makes no sense unless you want to take an int-level look at a double! You'll get some weird output, and I don't think this is what you intended. If you want to cast the value pointed to by p to an int, then:
*r = static_cast<int>(*p);
Also, r is not allocated, so you can do one of the following:
int *r = new int(0);
*r = static_cast<int>(*p);
std::cout << *r << std::endl;
Or
int r = 0;
r = static_cast<int>(*p);
std::cout << r << std::endl;
Aside from being pointers, double* and int* have nothing in common. You could say the same for Foo* and Bar*, pointers to any two dissimilar structures.
For pointer types, static_cast means that a pointer to the source type can be used as a pointer to the destination type, which requires a subtype (inheritance) relationship between the two.
Ok, I think the comments have covered the matter in enough detail so I'll just try to summarize my best understanding of them here. Most of this is by way of @juanpa.arrivillaga.
A standard Python casting operation like int(x) (more precisely, a type conversion operation) is actually a call to an object's __call__() method. Types like int, float, str, etc. are all classes, and all are instances of the metaclass type. Calling one of these instances of type, e.g. int(x), invokes __call__(), which runs the int constructor; this creates a new instance of that type and initializes it with the arguments passed to __call__().
In short, there is nothing special or different about the common Python "type conversions" (e.g. int(x), str(40)) other than that the int and str objects are included in __builtins__.
And to answer the original question, if type_name is an instance of the type class then the type_name.__call__() function simply declares and initializes a new instance of that type. Thus, one can simply do:
# convert x to type type_name
x = type_name(x)
However, this may raise an exception if x is not a valid input to the type_name constructor.
To cast a value to another type you can use the type itself; you can also pass the type as an argument to a function and call it there, or fetch it from the builtins module if you are sure the type is a builtin:
value = "1"
value = int(value) # set value to 1
value = 1
value = str(value) # set value to "1"
def cast(value, type_):
    return type_(value)

import builtins

def builtin_cast(value, type_):
    type_ = getattr(builtins, type_, None)
    if isinstance(type_, type):
        return type_(value)
    raise ValueError(f"{type_!r} is not a builtin type.")
value = cast("1", int) # set value to 1
value = cast(1, str) # set value to "1"
value = builtin_cast("1", "int") # set value to 1
value = builtin_cast(1, "str") # set value to "1"
My understanding is pretty uneven as a programmer because I'm not actually a programmer; I'm an ME who has moved into finite elements and HPC. That makes me grossly over-qualified as a programmer for an ME, and probably equally under-qualified to be an actual programmer, so there are some pretty basic things I sometimes struggle with. I'm wondering if that's what's happening here.
At the moment, I'm struggling with this. I have a function that's invoked from main, this function itself should only exist on the host, but it should/will launch kernels. At the moment I'm just trying to get device memory to work.
template <int R, int D, typename T>
__host__
void testTimes(const std::string& filename, HRTimer& watch, std::ofstream& output,
const size_t& iter){
myObj<R,D,T> A1;
T *d_arrA1;
//create pointers for other arrays like d_arrA1
cudaMalloc(static_cast<void**>(&d_arrA1), D*sizeof(T));
// do stuff
}
This is failing at the cudaMalloc invocation with
error: invalid type conversion
detected during instantiation of "void testTimes<D,T>(const std::__cxx11::string &, HRTimer &, std::ofstream &, const size_t &) [with D=2, T=float]"
and I have no idea why. Any insight is greatly appreciated.
I'm also curious how cudaMalloc and cudaMemcpy work for objects. I have a template class that's a container similar to std::vector; it takes three template parameters: a dimension, a data type, and one that indicates how many levels of a multi-dimensional array it will create. If I create a pointer to an instance of that class, say
myObj<D,T> A1;
// for loop to fill A1
T *d_A1;
I believe what I need to do to allocate on the device is something like
cudaMalloc(static_cast<void**>(&d_A1), sizeof(A1));
although I know that's not right (as the top of the post says, that's throwing type-cast errors). Regardless, to copy the contents to the device, I'd need something like
cudaMemcpy(d_A1, A1, D*sizeof(T), cudaMemcpyHostToDevice);