Remember that C and C++ are actually completely different languages. They share some common syntax, but C is a procedural language and C++ is object-oriented, so they encourage quite different programming paradigms.
gcc should work just fine as a C compiler. I believe MinGW uses it. It also has flags you can specify to make sure it's using the right version of C (e.g. C99).
If you want to stick with C then you simply won't be able to use new (it's not part of the C language) but there shouldn't be any problems with moving to C++ for a shared library, just so long as you put your Object Oriented hat on when you do.
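To make that distinction concrete, here is a minimal sketch contrasting the two allocation styles. The Point struct is just a made-up example; the malloc/free half is what you would write in plain C, while the new/delete half only compiles as C++.

    #include <cstdio>
    #include <cstdlib>   /* in plain C these would be <stdio.h> and <stdlib.h> */

    struct Point { double x, y; };

    int main(void) {
        /* C-style: malloc/free, no constructors involved; this is the plain-C approach. */
        struct Point *p = (struct Point *)malloc(sizeof *p);
        if (p == NULL) return 1;
        p->x = 1.0; p->y = 2.0;
        printf("p = (%f, %f)\n", p->x, p->y);
        free(p);

        /* C++-only: new/delete run constructors/destructors; not available in C. */
        Point *q = new Point{3.0, 4.0};
        printf("q = (%f, %f)\n", q->x, q->y);
        delete q;
        return 0;
    }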
I'd suggest you just stick with the language you are more comfortable with. The fact that you're using new suggests that will be C++, but it's up to you.
- You can use e.g. GCC as a C compiler. To ensure it's compiling as C, use the -x c option. You can also specify a particular version of the C standard, e.g. -std=c99. To ensure you're not using any GCC-specific extensions, you can use the -pedantic flag. I'm sure other compilers have similar options.
- malloc and calloc are indeed how you allocate memory in C.
- That's up to you.* You say that you want to be cross-platform, but C++ is essentially just as "cross-platform" as C. However, if you're working on embedded platforms (e.g. microcontrollers or DSPs), you may not find C++ compilers for them.
- No, new and delete are not supported in C.

* In my opinion, though, you should strongly consider switching to C++ for any application of non-trivial complexity. C++ has far more powerful high-level constructs than C (e.g. smart pointers, containers, templates) that simplify a lot of the tedious work in C. It takes a while to learn how to use them effectively, but in the long run, they will be worth it.
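As a rough sketch of what those higher-level constructs buy you (the Point type here is again just an illustrative example, and it assumes C++14 or later for std::make_unique), a std::vector and a std::unique_ptr take over the allocation and cleanup that malloc/free or new/delete would otherwise make you do by hand:

    #include <cstdio>
    #include <memory>
    #include <vector>

    struct Point { double x, y; };

    int main() {
        /* The container owns and frees its storage; no malloc/free bookkeeping. */
        std::vector<Point> points;
        points.push_back({1.0, 2.0});
        points.push_back({3.0, 4.0});

        /* The smart pointer deletes the object when it goes out of scope,
           so there is no delete call to forget. */
        std::unique_ptr<Point> origin = std::make_unique<Point>(Point{0.0, 0.0});

        for (const Point &pt : points)
            printf("(%f, %f)\n", pt.x, pt.y);
        printf("origin = (%f, %f)\n", origin->x, origin->y);
        return 0;
    }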
Your observations are correct. C++ is a complicated beast, and the new keyword was used to distinguish between something that needed delete later and something that would be automatically reclaimed. In Java and C#, they dropped the delete keyword because the garbage collector would take care of it for you.
The question, then, is why they kept the new keyword. Without talking to the people who designed the languages, it's difficult to answer for certain. My best guesses are listed below:
- It was semantically correct. If you were familiar with C++, you knew that the new keyword creates an object on the heap. So, why change expected behavior?
- It calls attention to the fact that you are instantiating an object rather than calling a method. Under Microsoft's code style recommendations, method names start with capital letters, so without new a constructor call could be confused with a method call.
Ruby is somewhere in between Python and Java/C# in its use of new. Basically you instantiate an object like this:
f = Foo.new()
It's not a keyword there; it's a method defined on the class. That means that if you want a singleton, you can override the default implementation of new() to return the same instance every time. It's not necessarily recommended, but it's possible.
In short, you are right: the new keyword is superfluous in languages like Java and C#. Here are some insights from Bruce Eckel, who was a member of the C++ Standards Committee in the 1990s and later published books on Java:
[T]here needed to be some way to distinguish heap objects from stack objects. To solve this problem, the new keyword was appropriated from Smalltalk. To create a stack object, you simply declare it, as in Cat x; or, with arguments, Cat x("mittens");. To create a heap object, you use new, as in new Cat; or new Cat("mittens");. Given the constraints, this is an elegant and consistent solution.
Enter Java, after deciding that everything C++ is badly done and overly complex. The irony here is that Java could and did make the decision to throw away stack allocation (pointedly ignoring the debacle of primitives, which I've addressed elsewhere). And since all objects are allocated on the heap, there's no need to distinguish between stack and heap allocation. They could easily have said Cat x = Cat() or Cat x = Cat("mittens"). Or even better, incorporated type inference to eliminate the repetition (but that -- and other features like closures -- would have taken "too long" so we are stuck with the mediocre version of Java instead; type inference has been discussed but I will lay odds it won't happen. And shouldn't, given the problems in adding new features to Java).
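To make the stack-versus-heap syntax in that quote concrete, here is a minimal C++ sketch. The Cat class itself is just an assumed toy type mirroring Eckel's example, not anything from a real codebase.

    #include <iostream>
    #include <string>

    /* Toy class standing in for Eckel's Cat example. */
    class Cat {
    public:
        explicit Cat(std::string name = "unnamed") : name_(std::move(name)) {}
        const std::string &name() const { return name_; }
    private:
        std::string name_;
    };

    int main() {
        /* Stack object: just declare it; destroyed automatically at end of scope. */
        Cat x("mittens");

        /* Heap object: created with new, must be released with delete. */
        Cat *y = new Cat("whiskers");

        std::cout << x.name() << " and " << y->name() << "\n";
        delete y;   /* forgetting this would leak the heap object */
        return 0;
    }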
They both allocate memory on the heap. Am I correct?