How should I return rho, c, k, d and layers [...] ?
Simply do it:
return rho, c, k, d, layers
And then you'd call it like
rho, c, k, d, layers = material()
print(d[1])
Note that the more stuff you're returning, the more likely it is you're going to want to wrap it all together into some structure like a dict (or namedtuple, or class, etc.)
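To make that concrete, here is a minimal sketch of the dict option. The property values are placeholders, just to show the shape of the return:

```python
def material():
    # placeholder values; substitute your real data
    return {"rho": 2400.0, "c": 880.0, "k": 1.4,
            "d": [0.2, 0.01], "layers": 2}

props = material()
print(props["d"][1])      # access fields by name instead of position
```

The caller now names the field it wants, so adding a sixth value later doesn't break every existing call site the way an extra tuple element would.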
return can return multiple values if you separate them with commas:
return rho, c, k, d, layers
This will make material return a tuple containing rho, c, k, d, and layers.
Once this is done, you can access the values returned by material through unpacking:
rho, c, k, d, layers = material()
Here is a demonstration:
>>> def func():
... return [1, 2], [3, 4], [5, 6]
...
>>> a, b, c = func()
>>> a
[1, 2]
>>> b
[3, 4]
>>> c
[5, 6]
>>> a[1]
2
>>>
You can return a tuple of lists, and use sequence unpacking to assign them to two different names when calling the function:
def f():
return [1, 2, 3], ["a", "b", "c"]
list1, list2 = f()
You can return as many values as you want by separating them with commas:
def return_values():
# your code
return value1, value2
You can even wrap them in parentheses as follows:
return (value1, value2)
In order to call the function you can use one of the following alternatives:
value1, value2 = return_values()  # in the case where you return 2 values
values = return_values()          # in this case values will contain a tuple
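A short, self-contained demonstration of both calling styles (the values here are placeholders):

```python
def return_values():
    value1, value2 = "spam", 42   # placeholder values
    return value1, value2

value1, value2 = return_values()  # unpacked into two separate names
values = return_values()          # kept together as a single tuple
print(values[0], values[1])       # indexed access into the tuple
```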
>>> rr,tt = zip(*[(i*10, i*12) for i in xrange(4)])
>>> rr
(0, 10, 20, 30)
>>> tt
(0, 12, 24, 36)
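The one-liner above works because zip(*...) transposes a list of pairs into two tuples of columns. Here is the same idea spelled out step by step (Python 3, so range instead of xrange):

```python
# Build one list of (x, y) pairs, then transpose it with zip(*...)
pairs = [(i * 10, i * 12) for i in range(4)]  # [(0, 0), (10, 12), (20, 24), (30, 36)]
rr, tt = zip(*pairs)
print(rr)  # (0, 10, 20, 30)
print(tt)  # (0, 12, 24, 36)
```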
Creating two list comprehensions is better (at least for long lists). Be aware that the best-voted answer can be even slower than a traditional for loop. List comprehensions are faster and clearer.
$ python -m timeit -n 100 -s 'rr=[];tt = [];' 'for i in range(500000): rr.append(i*10);tt.append(i*12)'
10 loops, best of 3: 123 msec per loop
$ python -m timeit -n 100 'rr,tt = zip(*[(i*10, i*12) for i in range(500000)])'
10 loops, best of 3: 170 msec per loop
$ python -m timeit -n 100 'rr = [i*10 for i in range(500000)]; tt = [i*12 for i in range(500000)]'
10 loops, best of 3: 68.5 msec per loop
It would be nice to see list comprehensions supporting the creation of multiple lists at a time.
However, if you can take advantage of a traditional loop (to be precise, of its intermediate calculations), then it is possible that you will be better off with a loop (or an iterator/generator using yield). Here is an example:
$ python3 -m timeit -n 100 -s 'rr=[];tt=[];' "for i in (range(1000) for x in range(10000)): tmp = list(i); rr.append(min(tmp));tt.append(max(tmp))"
100 loops, best of 3: 314 msec per loop
$ python3 -m timeit -n 100 "rr=[min(list(i)) for i in (range(1000) for x in range(10000))];tt=[max(list(i)) for i in (range(1000) for x in range(10000))]"
100 loops, best of 3: 413 msec per loop
Of course, the comparison in these cases is unfair; in the example, the code and calculations are not equivalent, because the traditional loop stores a temporary result (see the tmp variable). So the list comprehension version does many more internal operations (it computes the tmp value twice!), yet it is only about 25% slower.
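The yield-based alternative mentioned above can be sketched like this: a generator computes the intermediate result once per chunk and yields both derived values together, and the caller splits the stream with zip(*...). The function and variable names here are illustrative, not from the original benchmark:

```python
def min_max_pairs(chunks):
    # compute the intermediate result once, then yield both values together
    for chunk in chunks:
        tmp = list(chunk)          # intermediate result, computed only once
        yield min(tmp), max(tmp)

# transpose the stream of (min, max) pairs into two tuples
rr, tt = zip(*min_max_pairs(range(1000) for _ in range(5)))
print(rr)  # (0, 0, 0, 0, 0)
print(tt)  # (999, 999, 999, 999, 999)
```

This keeps the single-pass efficiency of the loop while still producing the two separate collections in one assignment.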
I think the choices need to be considered strictly from the caller's point of view: what is the consumer most likely to need to do?
And what are the salient features of each collection?
- The tuple is accessed in order and immutable
- The list is accessed in order and mutable
- The dict is accessed by key
The list and tuple are equivalent for access, but the list is mutable. Well, that doesn't matter to me, the caller, if I'm going to immediately unpack the results:
score, top_player = play_round(players)
# or
idx, record = find_longest(records)
There's no reason here for me to care if it's a list or a tuple, and the tuple is simpler on both sides.
On the other hand, if the returned collection is going to be kept whole and used as a collection:
points = calculate_vertices(shape)
points.append(another_point)
# Make a new shape
then it might make sense for the return to be mutable. Homogeneity is also an important factor here. Say you've written a function to search a sequence for repeated patterns. The information I get back is the index in the sequence of the first instance of the pattern, the number of repeats, and the pattern itself. Those aren't the same kinds of thing. Even though I might keep the pieces together, there's no reason that I would want to mutate the collection. This is not a list.
Now for the dictionary.
the last one creates more readable code because you have named outputs
Yes, having keys for the fields makes heterogeneous data more explicit, but it also comes with some encumbrance. Again, for the case of "I'm just going to unpack the stuff", this
round_results = play_round(players)
score, top_player = round_results["score"], round_results["top_player"]
(even if you avoid literal strings for the keys), is unnecessary busywork compared to the tuple version.
The question here is threefold: how complex is the collection, how long is the collection going to be kept together, and are we going to need to use this same kind of collection in a bunch of different places?
I'd suggest that a keyed-access return value starts making more sense than a tuple when there are more than about three members, and especially where there is nesting:
shape["transform"]["raw_matrix"][0, 1]
# vs.
shape[2][4][0, 1]
That leads into the next question: is the collection going to leave this scope intact, somewhere away from the call that created it? Keyed access over there will absolutely help understandability.
The third question -- reuse -- points to a simple custom datatype as a fourth option that you didn't present.
Is the structure solely owned by this one function? Or are you creating the same dictionary layout in many places? Do many other parts of the program need to operate on this structure? A repeated dictionary layout should be factored out to a class. The bonus there is that you can attach behavior: maybe some of the functions operating on the data get encapsulated as methods.
A fifth good, lightweight option is namedtuple(). This is in essence the immutable form of the dictionary return value.
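A minimal sketch of that option, reusing the play_round example from above (the scoring logic is a placeholder, just to make the example runnable):

```python
from collections import namedtuple

# An immutable, keyed return type for the round's results
RoundResult = namedtuple("RoundResult", ["score", "top_player"])

def play_round(players):
    # placeholder logic: pretend the first player won with a fixed score
    return RoundResult(score=42, top_player=players[0])

result = play_round(["alice", "bob"])
print(result.score)           # keyed access, like the dict version
score, top_player = result    # tuple-style unpacking still works
```

You get both idioms at once: named fields for the "kept whole" case and positional unpacking for the "immediately split apart" case, with immutability thrown in for free.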
Don't think about functions returning multiple arguments. Conceptually, it is best to think of functions as both receiving and returning a single argument. A function that appears to accept multiple arguments actually receives just a single argument of tuple (formally product) type. Similarly, a function that returns multiple arguments is simply returning a tuple.
In Python:
def func(a, b, c):
return b, c
could be rewritten as
def func(my_triple):
return (my_triple[1], my_triple[2])
to make the comparison obvious.
The first case is merely syntactic sugar for the latter; both receive a triple as an argument, but the first pattern-matches on its argument to perform automatic destructuring into its constituent components. Thus, even languages without full-on general pattern-matching admit some form of basic pattern matching on some of their types (Python admits pattern-matching on both product and record types).
To return to the question at hand: there is no single answer to your question, because it would be like asking "what should be the return type of an arbitrary function"? It depends on the function and the use case. And, incidentally, if the "multiple return values" are really independent, then they should probably be computed by separate functions.
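That last point can be sketched with a deliberately trivial, hypothetical example: when the two results share no intermediate work, two small functions serve callers better than one function returning a pair.

```python
# Independent results, so independent functions:
def area(w, h):
    return w * h

def perimeter(w, h):
    return 2 * (w + h)

# Each caller asks only for what it actually needs
a = area(3, 4)        # 12
p = perimeter(3, 4)   # 14
```

A caller that only wants the area never has to unpack and discard a perimeter it didn't ask for.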