I avoid pointers entirely in order to prevent crashes, and I prefer STL containers such as std::map or std::vector instead. I don't use polymorphism either (I prefer data-oriented programming, but that's another story). I still use references, which are safer (how much safer?) but can still crash.
Can using pointer-less C++ code lead to worse performance or higher energy consumption? (Such pointer-less C++ code would still run faster than other languages, I believe?)
And if it does hurt performance, are there coding practices that maximize performance in such pointer-less C++ code? What sorts of optimizations can compilers perform when data is passed by value or returned by value? Are there pitfalls to avoid when never using pointers?
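A minimal sketch of the pointer-less, by-value style in question (the function names here are illustrative, not from any source). Compilers can usually apply named return value optimization (NRVO) to the return, and since C++11 the worst case is a cheap move rather than a deep copy:

```cpp
#include <vector>

// Returning a container by value: NRVO usually removes the copy, and
// since C++11 the fallback is a move, not a deep copy.
std::vector<int> make_squares(int n) {
    std::vector<int> out;
    out.reserve(n);
    for (int i = 0; i < n; ++i)
        out.push_back(i * i);
    return out;  // elided or moved, not copied
}

// Passing by const reference avoids a deep copy without raw pointers.
long long sum(const std::vector<int>& v) {
    long long total = 0;
    for (int x : v) total += x;
    return total;
}

int main() {
    return sum(make_squares(1000)) > 0 ? 0 : 1;
}
```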
Can functional programming in C++, as described in this article, also be relevant when avoiding pointers? http://sevangelatos.com/john-carmack-on/
(I am posting this question here since it was closed on Stack Overflow.)
I see a lot of people using pointers/references instead of an actual variable. I know that pointers are variables that store the memory location of an already existing variable, and that references are not stored in memory because they "reference" a variable (and because of that, a reference can't be changed to refer to something else later).
I can think of only two cases where pointers/references are useful:
- Because I can't return an array, when I want to return one, I can return a reference to it (see the sketch after this list).
- To avoid variable duplication.
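A minimal sketch of those two cases (the names are illustrative; note that returning a reference is only safe when the referenced object outlives the call):

```cpp
#include <iostream>
#include <string>

// Case 1: a built-in array can't be returned by value, but a
// reference to one can be, as long as the array outlives the call
// (here it has static storage duration; never return a local!).
int (&lookup_table())[4] {
    static int table[4] = {1, 2, 4, 8};
    return table;
}

// Case 2: passing by const reference avoids duplicating the argument.
void print(const std::string& s) {  // no copy of the string is made
    std::cout << s << '\n';
}

int main() {
    std::cout << lookup_table()[2] << '\n';  // prints 4
    print("no copy here");
}
```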
Apart from these, I can't think of why I would want to use them.
Did I understand something wrong?
I'm not sure where you get the idea that modern languages don't have pointers. In Ruby, for example, everything is a pointer. It's true that Ruby doesn't have special syntax or special operations for pointers, but that doesn't mean that there are none. Quite the opposite, in fact: because everything is a pointer, there is no need to distinguish between pointers and non-pointers, pointer operations and non-pointer operations. Pointers are so deeply ingrained in the language that you don't even see them.
The same is true for Python, Java, ECMAScript, Smalltalk, and many other languages.
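A rough C++ analogy for this model (purely illustrative; this is not how those runtimes are implemented): every variable is a shared handle to a heap object, and assignment copies the handle, not the object:

```cpp
#include <iostream>
#include <memory>
#include <string>

int main() {
    // Like a Ruby variable: a handle to a heap object.
    auto a = std::make_shared<std::string>("hello");
    auto b = a;               // assignment aliases; it does not copy
    b->append(" world");      // mutate through one handle...
    std::cout << *a << '\n';  // ...and the other sees "hello world"
}
```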
What those languages don't support is pointer arithmetic, or fabricating a pointer out of thin air. But then again, some CPUs don't allow that either.
The original CISC CPU for the AS/400 distinguishes between pointers and integers. You can store pointers and you can dereference pointers, but you cannot create or modify pointers. The only way to get a pointer is if the kernel hands one to you. If you try to do arithmetic on it, you get back an integer, which cannot be converted to or used as a pointer. Even the modern PowerPC and POWER CPUs have a special tagged address mode specifically for running OS/400 / i5/OS / IBM i.
Go has pointers in the more traditional sense, like C. But it also doesn't allow pointer arithmetic.
Other languages have pointers and pointer arithmetic, but impose restrictions that ensure pointers are always valid, always point to initialized memory, and always point to memory owned by the entity performing the arithmetic.
Almost all modern programming languages use indirection extensively under the hood - any instance of a Java type that's derived from Object is referenced through a pointer (or pointer-like object), for example.
The difference is that those programming languages don't expose any pointer types or operations on pointer values to the programmer. You can't take the address of a Java Object instance and examine it directly, nor can you use it to offset an arbitrary number of bytes into the instance (even though the JVM does so internally). The language simply doesn't provide any mechanism for the programmer to do so. It doesn't define a method or operator to obtain an object's address; it doesn't define a method or operator to examine the contents of an arbitrary address; it doesn't define the binary + or - operators to work with address types. The [] operator doesn't just offset from a base address; it's smart enough to throw an exception if you attempt to index past the end of the array.
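For contrast, a short sketch of the operations C++ does expose and Java has no syntax for (the struct and values are purely illustrative):

```cpp
#include <cstdio>

struct Point { int x; int y; };

int main() {
    Point p{10, 20};

    // Take an object's address and view it as raw bytes: legal in
    // C++ via unsigned char*, impossible to express in Java.
    const unsigned char* bytes = reinterpret_cast<const unsigned char*>(&p);
    std::printf("first byte of p: %d\n", bytes[0]);

    // Built-in [] is bare pointer arithmetic: indexing past the end
    // compiles fine and is undefined behavior at runtime, whereas
    // Java's [] would throw ArrayIndexOutOfBoundsException.
    int arr[3] = {1, 2, 3};
    const int* q = arr;
    // int oops = q[5];        // compiles; UB if executed
    std::printf("%d\n", q[1]);  // same as arr[1]
}
```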
Remember that C was developed (at least in part) to implement the Unix operating system; since any OS needs to manage memory, the language needed to provide operations on memory addresses as well as other types.
C became popular for applications programming because C compilers were small, fast, and produced fast code. Being able to manipulate memory directly sped up a number of operations. Unfortunately, being able to manipulate memory directly also opened up a huge can of worms with respect to security, correctness, etc. Everything from the Morris worm to the Heartbleed bug was enabled by C's ability to manipulate memory. Also, C's pointer syntax could be confusing, especially since unary * has lower precedence than postfix operators like [], (), ++, ., ->, etc. The fact that array expressions "decay" to pointer types also leads to problems for people who don't really know the language that well.
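A short sketch of the two syntactic pitfalls just mentioned, operator precedence and array decay:

```cpp
#include <cstdio>

// Array decay: despite the [10] in the declaration, the parameter is
// really an int*, so sizeof(a) here is the size of a pointer.
void f(int a[10]) {
    std::printf("sizeof inside f: %zu\n", sizeof(a));  // sizeof(int*)
}

int main() {
    int a[10] = {1, 2, 3};
    std::printf("sizeof in main: %zu\n", sizeof(a));  // 10 * sizeof(int)
    f(a);  // the array decays to a pointer to its first element

    // Precedence: unary * binds less tightly than postfix ++, so
    // *p++ parses as *(p++), meaning dereference, then advance the
    // pointer. It is not (*p)++, which increments the pointed-to value.
    int* p = a;
    int v = *p++;  // v == 1; p now points at a[1]
    (*p)++;        // a[1] becomes 3
    std::printf("v=%d a[1]=%d\n", v, a[1]);
}
```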
So, to avoid many of these problems, modern programming languages don't expose pointers the way C does. However, note that most of C's contemporaries (Pascal, Fortran, BASIC, etc.) didn't expose operations on pointer values either, even though they used pointer-like semantics under the hood (passing arguments by reference, COMMON blocks, etc.).