Does python reuse repeated calculation results?
If I have an expression that I wish to evaluate in Python, such as the expression for r in the code snippet below, will the Python interpreter be smart and reuse the subresult x+y+z, or will it just evaluate it twice? I'd also be interested to know whether the answer would be the same for a compiled language, e.g. C.

x = 1
y = 2
z = 3
r = (x+y+z+1) + (x+y+z+2)

It has been suggested that this question is similar to Does Python automatically optimize/cache function calls?, and I believe it is. However, I believe the linked question is less of a minimal example. Also, in the linked question there is no ambiguity as to the order of operations. In examples similar to this question, where mathematics defines no order for the operations, the order of the individual (ambiguous) function calls may lead to a better or worse job of optimisation. Consider (a*b*b*a)*(a*b*b*a)*(b*a*a*b): there are nested repeated substrings, and depending on the order and amount of preprocessing, many different optimisations could be performed.

python · interpreter · interpreted-language
Possible duplicate of Does Python automatically optimize/cache function calls? – Georgy, Sep 29 at 8:35

Checked with C++, it simply returns 15 because all values are known. If you make the variables dependent on some input so it can't optimize them away, it calculates (x+y+z+3). – Sebastian Wahl, Oct 1 at 13:48

@SebastianWahl Did you mean it will compute 2*(x + y + z) + 3? Or something else? It would also be informative to indicate the compiler that you've used for checking that result. – a_guest, Oct 2 at 9:54

@a_guest I misread the assembly here: it adds the result of (x + y + z) to itself and adds 3 in one instruction, if I understand correctly. It was GCC, and you can see it online and try other compilers here: godbolt.org/z/PwxaxK – Sebastian Wahl, Oct 2 at 11:55
asked Sep 28 at 13:40 by user189076, edited Oct 3 at 20:07
4 Answers
You can check that with dis.dis. The output is:
2 0 LOAD_CONST 0 (1)
2 STORE_NAME 0 (x)
3 4 LOAD_CONST 1 (2)
6 STORE_NAME 1 (y)
4 8 LOAD_CONST 2 (3)
10 STORE_NAME 2 (z)
5 12 LOAD_NAME 0 (x)
14 LOAD_NAME 1 (y)
16 BINARY_ADD
18 LOAD_NAME 2 (z)
20 BINARY_ADD
22 LOAD_CONST 0 (1)
24 BINARY_ADD
26 LOAD_NAME 0 (x)
28 LOAD_NAME 1 (y)
30 BINARY_ADD
32 LOAD_NAME 2 (z)
34 BINARY_ADD
36 LOAD_CONST 1 (2)
38 BINARY_ADD
40 BINARY_ADD
42 STORE_NAME 3 (r)
44 LOAD_CONST 3 (None)
46 RETURN_VALUE
So it won't cache the result of the expression in parentheses. Though caching would be possible in this specific case, in general it is not, since custom classes can define __add__ (or any other binary operation) so that it modifies the object itself. For example:
class Foo:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        self.value += 1
        return self.value + other

x = Foo(1)
y = 2
z = 3
print(x + y + z + 1)  # prints 8
print(x + y + z + 1)  # prints 9
If you have an expensive function whose result you would like to cache, you can do so via functools.lru_cache, for example.
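As a minimal sketch of that approach (the call counter is only there to demonstrate how often the function body actually runs; note that lru_cache is only safe for pure functions):

```python
from functools import lru_cache

call_count = 0  # tracks how often the function body actually executes

@lru_cache(maxsize=None)
def add3(x, y, z):
    global call_count
    call_count += 1
    return x + y + z

r = (add3(1, 2, 3) + 1) + (add3(1, 2, 3) + 2)
print(r, call_count)  # 15 1 -- the second call is served from the cache
```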
On the other hand, the compiler will perform constant folding as can be seen from the following examples:
>>> import dis
>>> dis.dis("x = 'abc' * 5")
1 0 LOAD_CONST 0 ('abcabcabcabcabc')
2 STORE_NAME 0 (x)
4 LOAD_CONST 1 (None)
6 RETURN_VALUE
>>> dis.dis("x = 1 + 2 + 3 + 4")
1 0 LOAD_CONST 0 (10)
2 STORE_NAME 0 (x)
4 LOAD_CONST 1 (None)
6 RETURN_VALUE
EDIT: This answer applies only to the default CPython interpreter of the Python language. It may not apply to other Python implementations that adopt JIT-compilation techniques or use a restricted Python sublanguage that allows static type inference. See Jörg W Mittag's answer for more details.

No, it will not. You can do this to see the compiled code:
from dis import dis
dis("r=(x+y+z+1) + (x+y+z+2)")
Output:
0 LOAD_NAME 0 (x)
2 LOAD_NAME 1 (y)
4 BINARY_ADD
6 LOAD_NAME 2 (z)
8 BINARY_ADD
10 LOAD_CONST 0 (1)
12 BINARY_ADD
14 LOAD_NAME 0 (x)
16 LOAD_NAME 1 (y)
18 BINARY_ADD
20 LOAD_NAME 2 (z)
22 BINARY_ADD
24 LOAD_CONST 1 (2)
26 BINARY_ADD
28 BINARY_ADD
30 STORE_NAME 3 (r)
32 LOAD_CONST 2 (None)
34 RETURN_VALUE
This is partly because Python is dynamically typed, so the types of variables are not easily known at compile time. The compiler also has no way to know whether the + operator, which can be overloaded by user classes, has side effects. Consider the following simple example:
class A:
    def __init__(self, v):
        self.value = v

    def __add__(self, b):
        print(b)  # side effect: eliminating the second x + y would skip this
        return self.value + b

x = A(3)
y = 4
r = (x + y + 1) + (x + y + 2)
For simple expressions, you can just save the intermediate results to a new variable:
z = x + y + 1
r = z + (z + 1)
For function calls, functools.lru_cache is another option, as already indicated by other answers.
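On Python 3.8+ you can also hoist the shared subexpression inline with an assignment expression (the "walrus" operator), which makes the reuse explicit without needing a separate statement; a minimal sketch:

```python
x, y, z = 1, 2, 3

# (s := x + y + z) evaluates the sum once, binds it to s, and yields its value,
# so the second occurrence reuses s instead of recomputing x + y + z
r = ((s := x + y + z) + 1) + (s + 2)
print(r)  # 15
```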
"If I have an expression that I wish to evaluate in Python, such as the expression for r in the code snippet below, will the Python interpreter be smart and reuse the subresult x+y+z, or just evaluate it twice?"
Which Python interpreter are you talking about? There are currently four production-ready, stable Python implementations in widespread use. None of those actually have a Python interpreter, every single one of them compiles Python.
Some of them may or may not be able to perform this optimization for at least some programs under at least some circumstances.
The Python Language Specification neither requires nor forbids this kind of optimization, so any specification-conforming Python implementation is allowed, but not required, to perform it.
I am pretty certain that, contrary to all the other answers which state that Python cannot do this, PyPy is capable of performing this optimization. Also, depending on which underlying platform you use, code executed using Jython or IronPython will also benefit from this optimization, e.g. I am 100% certain that the C2 compiler of Oracle HotSpot does perform this optimization.
I'd also be interested to know if the answer to this question would be the same for a compiled language […].
There is no such thing as a "compiled language". Compilation and interpretation are traits of the compiler or interpreter (duh!) not the language. Every language can be implemented by a compiler, and every language can be implemented by an interpreter. Case in point: there are interpreters for C, and conversely, every currently existing production-ready, stable, widely-used implementation of Python, ECMAScript, Ruby, and PHP has at least one compiler, many even have more than one (e.g. PyPy, V8, SpiderMonkey, Squirrelfish Extreme, Chakra).
A language is an abstract set of mathematical rules and restrictions written on a piece of paper. A language is neither compiled nor interpreted, a language just is. Those concepts live on different layers of abstraction; if English were a typed language, the term "compiled language" would be a type error.
I'd also be interested to know if the answer to this question would be the same for […] e.g. C.
There are many production-ready, stable C implementations in widespread use. Some of them may or may not be able to perform this optimization for at least some programs under at least some circumstances.
The C Language Specification neither requires nor forbids this kind of optimization, so any specification-conforming C implementation is allowed, but not required, to perform it.
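One way to see what the recomputation costs in practice on a given implementation is simply to time both forms. A rough sketch: the absolute numbers, and even which form wins, will depend on the implementation and version, which is exactly the point of this answer.

```python
import timeit

setup = "x, y, z = 1.5, 2.5, 3.5"
recomputed = "(x + y + z + 1) + (x + y + z + 2)"   # subexpression evaluated twice
hoisted = "s = x + y + z\nr = (s + 1) + (s + 2)"   # subexpression evaluated once

t_recomputed = timeit.timeit(recomputed, setup=setup, number=200_000)
t_hoisted = timeit.timeit(hoisted, setup=setup, number=200_000)

# Both forms compute the same value; only the timings differ.
print(f"recomputed: {t_recomputed:.4f}s  hoisted: {t_hoisted:.4f}s")
```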
Are there any references for the PyPy optimization? I cannot see it by using dis.dis. And I wonder how that could be done in Python without knowledge of the runtime types (or maybe the optimization only applies to scenarios where the types can be clearly inferred?). In Python, type hints are just annotations and runtime checking is not enforced at all. – GZ0, Sep 29 at 13:59

"Are there any references for the PyPy optimization? I cannot see it by using dis.dis" – dis.dis will only show you the optimizations that the Python-to-bytecode compiler makes, which is actually a pretty simple and "stupid" compiler that performs almost no optimizations. Also, it is a static ahead-of-time compiler, so it has to contend with all the limitations the Halting Problem and Rice's Theorem entail. However, in PyPy, Jython, and IronPython, that bytecode is usually compiled further to native machine code, and that compiler is much more sophisticated. – Jörg W Mittag, Sep 29 at 14:05

"And I wonder how that could be done in Python without the knowledge of runtime types" – What makes you think an optimizer doesn't have knowledge of runtime types? The whole reason many modern high-performance language execution engines delay compilation until runtime is precisely that the optimizer then has access not only to runtime types, but also to runtime profiling data, data access patterns, branch statistics, etc. Even further, a compiler that is capable of de-optimization (such as HotSpot's) can make speculative unsafe optimizations and just remove them again when it realizes that its speculation was wrong. I.e. it could remove the common subexpression under the assumption that it doesn't have any side effects even if it can't prove that it doesn't, but monitor it for side effects, and when it detects one, just recompile that particular piece of code without CSE. – Jörg W Mittag, Sep 29 at 14:06

Unlike statically-typed languages such as C or Java, Python is dynamically typed, so inferring runtime types is a fairly complicated task. I just checked that it is possible for the RPython subset of Python, which imposes more restrictions to allow type inference. – GZ0, Sep 29 at 14:18
No, Python doesn't do that by default. If you need Python to preserve the result of a certain calculation for you, you have to tell it explicitly, for example by defining a function and using functools.lru_cache (see the docs):
from functools import lru_cache

@lru_cache(maxsize=32)
def add3(x, y, z):
    return x + y + z

x = 1
y = 2
z = 3
r = (add3(x, y, z) + 1) + (add3(x, y, z) + 2)
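To confirm the cache is doing its job, lru_cache attaches a cache_info() method to the wrapped function; a self-contained sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=32)
def add3(x, y, z):
    return x + y + z

x, y, z = 1, 2, 3
r = (add3(x, y, z) + 1) + (add3(x, y, z) + 2)

# One miss for the first call, one hit for the second.
info = add3.cache_info()
print(r, info)  # 15 CacheInfo(hits=1, misses=1, maxsize=32, currsize=1)
```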
"No, python doesn't do that by default since it would use up too much memory ..." - Do you have a source for your claim that memory usage is the reason? Because honestly, I'm pretty sceptical of that.
– marcelm
Sep 28 at 23:30
2
The reason is not that the common subexpression elimination “would use up too much memory”; it is that the transformation is not sound in Python because the language is too dynamic.
– wchargin
Sep 28 at 23:30
@marcelm sorry.
– yukashima huksay
Sep 29 at 4:53
answered Sep 28 at 14:00 by a_guest, edited Sep 28 at 14:09
edited Sep 29 at 15:43
answered Sep 28 at 13:58
– GZ0
If I have an expression that I wish to evaluate in Python, such as the expression for r in the code snippet below, will the Python interpreter be smart and reuse the subresult x + y + z, or just evaluate it twice?
Which Python interpreter are you talking about? There are currently four production-ready, stable Python implementations in widespread use. None of those actually has a Python interpreter; every single one of them compiles Python.
Some of them may or may not be able to perform this optimization for at least some programs under at least some circumstances.
The Python Language Specification neither requires nor forbids this kind of optimization, so any specification-conforming Python implementation would be allowed, but not required, to perform it.
I am pretty certain that, contrary to all the other answers which state that Python cannot do this, PyPy is capable of performing this optimization. Also, depending on which underlying platform you use, code executed using Jython or IronPython will also benefit from this optimization, e.g. I am 100% certain that the C2 compiler of Oracle HotSpot does perform this optimization.
I'd also be interested to know if the answer to this question would be the same for a compiled language […].
There is no such thing as a "compiled language". Compilation and interpretation are traits of the compiler or the interpreter (duh!), not the language. Every language can be implemented by a compiler, and every language can be implemented by an interpreter. Case in point: there are interpreters for C, and conversely, every currently existing production-ready, stable, widely-used implementation of Python, ECMAScript, Ruby, and PHP has at least one compiler; many even have more than one (e.g. PyPy, V8, SpiderMonkey, SquirrelFish Extreme, Chakra).
A language is an abstract set of mathematical rules and restrictions written on a piece of paper. A language is neither compiled nor interpreted; a language just is. Those concepts live on different layers of abstraction; if English were a typed language, the term "compiled language" would be a type error.
I'd also be interested to know if the answer to this question would be the same for […] e.g. C.
There are many production-ready, stable C implementations in widespread use. Some of them may or may not be able to perform this optimization for at least some programs under at least some circumstances.
The C Language Specification neither requires nor forbids this kind of optimization, so any specification-conforming C implementation would be allowed, but not required, to perform it.
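Whether an implementation reuses the subresult is in fact observable from Python code, because + can be overloaded to have side effects. In the sketch below (the Counting class is illustrative), CPython evaluates the shared subexpression once per occurrence; an implementation performing CSE speculatively, as described above, would have to detect and undo it:

```python
class Counting:
    """Int wrapper whose __add__ records how many times it runs."""
    calls = 0

    def __init__(self, v):
        self.v = v

    def __add__(self, other):
        Counting.calls += 1
        return self.v + other  # returns a plain int

x = Counting(3)
y = 4
r = (x + y + 1) + (x + y + 2)
print(r, Counting.calls)  # CPython prints: 17 2  -- x.__add__ ran twice, no CSE
```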
Are there any references for the PyPy optimization? I cannot see it by using dis.dis. And I wonder how that could be done in Python without knowledge of the runtime types (or maybe the optimization only applies to scenarios where the types can be clearly inferred?). In Python, type hints are just annotations and runtime checking is not enforced at all.
– GZ0
Sep 29 at 13:59
"Are there any references for the PyPy optimization? I cannot see it by using dis.dis" – dis.dis will only show you the optimizations that the Python-to-bytecode compiler makes, which is actually a pretty simple and "stupid" compiler that performs almost no optimizations. Also, it is a static ahead-of-time compiler, so it has to contend with all the limitations the Halting Problem and Rice's Theorem entail. However, in PyPy, Jython, and IronPython, that bytecode is usually compiled further to native machine code, and that compiler is much more sophisticated.
– Jörg W Mittag
Sep 29 at 14:05
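The limited ambition of the bytecode compiler is easy to see for yourself; one of the few optimizations it does perform is constant folding, which dis makes visible (a small sketch, CPython-specific):

```python
import dis

# CPython folds 1 + 2 at compile time: the bytecode loads the
# constant 3 directly instead of adding two constants at runtime.
code = compile("x = 1 + 2", "<demo>", "exec")
dis.dis(code)  # shows a single LOAD_CONST for 3
print(3 in code.co_consts)  # True
```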
"And I wonder how that could be done in Python without the knowledge of runtime types" – What makes you think an optimizer doesn't have knowledge of runtime types? The whole reason why many modern high-performance language execution engines delay compilation until runtime is precisely because that way the optimizer has access not only to runtime types, but also to runtime profiling data, data access patterns, branch statistics, etc. Even further, a compiler that is capable of de-optimization (such as HotSpot's) can make speculative unsafe optimizations and just remove them again when …
– Jörg W Mittag
Sep 29 at 14:06
… it realizes that its speculation was wrong. I.e. it could remove the common subexpression under the assumption that it doesn't have any side-effects even if it can't prove that it doesn't have side-effects, but monitor it for side-effects, and when it detects a side-effect, it just recompiles that particular piece of code without CSE.
– Jörg W Mittag
Sep 29 at 14:08
Unlike statically-typed languages such as C or Java, Python is dynamically typed, so inferring runtime types is a fairly complicated task. I just checked that it is possible for the RPython subset of Python, which imposes more restrictions to allow type inference.
– GZ0
Sep 29 at 14:18
answered Sep 29 at 7:28
– Jörg W Mittag
No, Python doesn't do that by default. If you need Python to preserve the result of a certain calculation for you, you need to explicitly tell it to do that. One way to do this is by defining a function and using functools.lru_cache (see the docs):
from functools import lru_cache

@lru_cache(maxsize=32)
def add3(x, y, z):
    return x + y + z

x = 1
y = 2
z = 3
r = (add3(x, y, z) + 1) + (add3(x, y, z) + 2)
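One caveat worth noting with this approach: lru_cache keys the cache on the function's arguments, so they must be hashable. Unhashable arguments such as lists are rejected outright (the total function below is illustrative):

```python
from functools import lru_cache

@lru_cache(maxsize=32)
def total(xs):
    return sum(xs)

print(total((1, 2, 3)))  # 6 -- tuples are hashable, so this caches fine
try:
    total([1, 2, 3])     # lists are unhashable
except TypeError as e:
    print("unhashable arguments are rejected:", e)
```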
"No, python doesn't do that by default since it would use up too much memory ..." - Do you have a source for your claim that memory usage is the reason? Because honestly, I'm pretty sceptical of that.
– marcelm
Sep 28 at 23:30
The reason is not that the common subexpression elimination “would use up too much memory”; it is that the transformation is not sound in Python because the language is too dynamic.
– wchargin
Sep 28 at 23:30
@marcelm sorry.
– yukashima huksay
Sep 29 at 4:53
edited Sep 29 at 4:48
answered Sep 28 at 13:53
– yukashima huksay
Possible duplicate of Does Python automatically optimize/cache function calls?
– Georgy
Sep 29 at 8:35
Checked with C++, it simply returns 15 because all values are known. If you make the variables dependent on some input so it can't optimize them away, it calculates (x+y+z+3).
– Sebastian Wahl
Oct 1 at 13:48
@SebastianWahl Did you mean it will compute 2*(x + y + z) + 3? Or something else? It would also be informative to indicate the compiler that you used for checking that result. – a_guest
@a_guest I misread the assembly here; it adds the result of (x + y + z) with itself and adds 3 in one instruction, if I understand correctly. It was GCC, and you can see it online here and try other compilers: godbolt.org/z/PwxaxK
– Sebastian Wahl
Oct 2 at 11:55
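The identity discussed in these comments, that hoisting the common subexpression turns (x+y+z+1) + (x+y+z+2) into 2*(x+y+z) + 3, is easy to check in Python for integer inputs:

```python
x, y, z = 4, 5, 6
s = x + y + z  # hoist the common subexpression once
# All three forms agree for integers:
assert (x + y + z + 1) + (x + y + z + 2) == (s + 1) + (s + 2) == 2 * s + 3
print(2 * s + 3)  # 33
```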