
Why does (inf + 0j)*1 evaluate to inf + nanj?



































>>> (float('inf')+0j)*1
(inf+nanj)


Why? This caused a nasty bug in my code.



Why isn't 1 the multiplicative identity, giving (inf + 0j)?
































  • 1





    I think the keyword you're looking for is "field". Addition and multiplication are by default defined within a single field, and in this case the only standard field that can accommodate your code is the field of complex numbers, so both numbers need to be treated as complex numbers by default before the operation is well-defined. Which is not to say they couldn't extend these definitions, but apparently they just went with the standard thing and didn't feel an urge to go out of their way to extend the definitions.

    – Mehrdad
    Sep 22 at 8:33







  • 1





    Oh, and if you find these idiosyncrasies frustrating and want to punch your computer, you have my sympathy.

    – Mehrdad
    Sep 22 at 8:36







  • 2





    @Mehrdad once you add those non-finite elements it ceases to be a field. Indeed, as there isn't a multiplicative neutral element anymore, it cannot by definition be a field.

    – Paul Panzer
    Sep 22 at 9:37











  • @PaulPanzer: Yeah I think they just shoved those elements in afterward.

    – Mehrdad
    Sep 22 at 10:03











  • floating point numbers (even if you exclude infinity and NaN) are not a field. Most of the identities that hold for fields do not hold for floating point numbers.

    – plugwash
    Sep 23 at 14:54

















Tags: python, nan, ieee-754














asked Sep 20 at 16:15 by marnix
edited Sep 22 at 1:25 by Francisco Couzo














4 Answers


















Answer (92 votes)



















The 1 is converted to a complex number first, 1 + 0j, which then leads to an inf * 0 multiplication, resulting in nan:

(inf + 0j) * 1
= (inf + 0j) * (1 + 0j)
= inf * 1 + inf * 0j + 0j * 1 + 0j * 0j
#              ^ this is where the nan comes from: inf * 0 = nan
= inf + nanj + 0j - 0
= inf + nanj
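As an illustration (plain CPython, no extra libraries), the schoolbook expansion can be reproduced term by term; the nan comes solely from the inf * 0 product in the imaginary part:

```python
import math

inf = float('inf')

# Schoolbook product (a+bj)*(c+dj) with a=inf, b=0, c=1, d=0:
real = inf * 1 - 0.0 * 0.0   # inf - 0.0  -> inf
imag = inf * 0.0 + 0.0 * 1   # nan + 0.0  -> nan (inf * 0 is undefined)

print(complex(real, imag))   # (inf+nanj), matching (float('inf')+0j)*1
```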



























  • 8





    For answering the question "why...?", probably the most important step is the first one, where 1 is cast to 1 + 0j.

    – Warren Weckesser
    Sep 20 at 16:28







  • 5





    Note that C99 specifies that real floating-point types are not promoted to complex when multiplying by a complex type (section 6.3.1.8 of the draft standard), and as far as I know the same is true of C++'s std::complex. This may partly be for performance reasons but it also avoids unnecessary NaNs.

    – benrg
    Sep 21 at 19:31











  • @benrg In NumPy, array([inf+0j])*1 also evaluates to array([inf+nanj]). Assuming that the actual multiplication happens somewhere in C/C++ code, would this mean that they wrote custom code to emulate the CPython behavior, rather than using _Complex or std::complex?

    – marnix
    Sep 22 at 9:22






  • 1





    @marnix it is more involved than that. numpy has one central class, ufunc, from which almost every operator and function derives. ufunc takes care of broadcasting, managing strides, all the tricky admin that makes working with arrays so convenient. More precisely, the split of labor between a specific operator and the general machinery is that the specific operator implements a set of "innermost loops" for each combination of input and output element types it wants to handle. The general machinery takes care of any outer loops and selects the best-matching innermost loop ...

    – Paul Panzer
    Sep 22 at 9:55






  • 1





    ...promoting any not-exactly-matching types as required. We can access the list of provided inner loops via the types attribute; for np.multiply this yields ['??->?', 'bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l', 'LL->L', 'qq->q', 'QQ->Q', 'ee->e', 'ff->f', 'dd->d', 'gg->g', 'FF->F', 'DD->D', 'GG->G', 'mq->m', 'qm->m', 'md->m', 'dm->m', 'OO->O']. We can see that there are almost no mixed types; in particular, none that mix float "efdg" with complex "FDG".

    – Paul Panzer
    Sep 22 at 10:02


















Answer (30 votes)



















Mechanistically, the accepted answer is, of course, correct, but I would argue that a deeper answer can be given.



First, it is useful to clarify the question, as @PeterCordes does in a comment: "Is there a multiplicative identity for complex numbers that does work on inf + 0j?" In other words: is what the OP sees a weakness in the computer implementation of complex multiplication, or is there something conceptually unsound about inf+0j?



Short answer:



Using polar coordinates, we can view complex multiplication as a scaling and a rotation. Rotating an infinite "arm", even by 0 degrees as in the case of multiplying by one, we cannot expect to place its tip with finite precision.
So indeed, there is something fundamentally not right with inf+0j, namely,
that as soon as we are at infinity, a finite offset becomes meaningless.



Long answer:



Background: The "big thing" around which this question revolves is the matter
of extending a system of numbers (think reals or complex numbers). One reason
one might want to do that is to add some concept of infinity, or to
"compactify" if one happens to be a mathematician. There are other
reasons, too (https://en.wikipedia.org/wiki/Galois_theory, https://en.wikipedia.org/wiki/Non-standard_analysis), but we are not interested in those here.



One point compactification



The tricky bit about such an extension is, of course, that we want these new
numbers to fit into the existing arithmetic. The simplest way is to add a
single element at infinity
(https://en.wikipedia.org/wiki/Alexandroff_extension) and make it equal to anything nonzero divided by zero. This works for the
reals (https://en.wikipedia.org/wiki/Projectively_extended_real_line) and the complex numbers (https://en.wikipedia.org/wiki/Riemann_sphere).



Other extensions ...



While the one-point compactification is simple and mathematically sound, "richer" extensions comprising multiple infinities have been sought. The IEEE 754 standard for real floating point numbers has +inf and -inf (https://en.wikipedia.org/wiki/Extended_real_number_line). This looks
natural and straightforward, but already forces us to jump through hoops and
invent things like -0 (https://en.wikipedia.org/wiki/Signed_zero).



... of the complex plane



What about more-than-one-inf extensions of the complex plane?



In computers, complex numbers are typically implemented by sticking two fp reals together one for the real and one for the imaginary part. That is perfectly fine as long as everything is finite. As soon, however, as infinities are considered things become tricky.



The complex plane has a natural rotational symmetry, which ties in nicely with complex arithmetic, as multiplying the entire plane by e^(i*phi) is the same as a rotation by phi radians around 0.



That annex G thing



Now, to keep things simple, complex fp simply uses the extensions (+/-inf, nan etc.) of the underlying real number implementation. This choice may seem so natural it isn't even perceived as a choice, but let's take a closer look at what it implies. A simple visualization of this extension of the complex plane looks like (I = infinite, f = finite, 0 = 0)



I IIIIIIIII I

I fffffffff I
I fffffffff I
I fffffffff I
I fffffffff I
I ffff0ffff I
I fffffffff I
I fffffffff I
I fffffffff I
I fffffffff I

I IIIIIIIII I


But since a true complex plane is one that respects complex multiplication, a more informative projection would be



 III 
I I
fffff
fffffff
fffffffff
I fffffffff I
I ffff0ffff I
I fffffffff I
fffffffff
fffffff
fffff
I I
III


In this projection we see the "uneven distribution" of infinities that is not only ugly but also the root of problems of the kind the OP has suffered: most infinities (those of the forms (+/-inf, finite) and (finite, +/-inf)) are lumped together at the four principal directions, while all other directions are represented by just four infinities (+/-inf, +/-inf). It shouldn't come as a surprise that extending complex multiplication to this geometry is a nightmare.
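This lumping can be observed in CPython itself; a small sketch using the standard cmath module (phase is just atan2(imag, real)): every infinity of the form (inf, finite) reports the same direction 0, while the single point (inf, inf) has to stand in for every direction strictly between the axes:

```python
import cmath
import math

inf = float('inf')

# All (inf, finite) infinities collapse onto the positive real direction...
print(cmath.phase(complex(inf, 1.0)))     # 0.0
print(cmath.phase(complex(inf, 1e300)))   # still 0.0

# ...while (inf, inf) is the only representative of the whole open
# quadrant between 0 and pi/2:
print(cmath.phase(complex(inf, inf)))     # pi/4
```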



Annex G of the C99 spec tries its best to make it work, including bending the rules on how inf and nan interact (essentially, inf trumps nan). The OP's problem is sidestepped by not promoting reals (and a proposed purely imaginary type) to complex, but having the real 1 behave differently from the complex 1 doesn't strike me as a solution. Tellingly, Annex G stops short of fully specifying what the product of two infinities should be.



Can we do better?



It is tempting to try to fix these problems by choosing a better geometry of infinities. In analogy to the extended real line, we could add one infinity for each direction. This construction is similar to the projective plane but doesn't lump together opposite directions.
Infinities would be represented in polar coordinates as inf x e^(2 pi omega i),
and defining products would be straightforward. In particular, the OP's problem would be solved quite naturally.



But this is where the good news ends. In a way, we can be hurled back to square one by (not unreasonably) requiring that our new-style infinities support functions that extract their real or imaginary parts. Addition is another problem: adding two non-antipodal infinities, we'd have to set the angle to undefined, i.e. nan (one could argue the angle must lie between the two input angles, but there is no simple way of representing that "partial nan-ness").



Riemann to the rescue



In view of all this, maybe the good old one-point compactification is the safest thing to do. Maybe the authors of Annex G felt the same when mandating the function cproj, which lumps all the infinities together.




Here is a related question answered by people more competent on the subject matter than I am.




























  • 5





    Yeah, because nan != nan. I understand that this answer is half-joking, but I fail to see why it should be helpful to the OP the way it's written.

    – cmaster
    Sep 21 at 0:37











  • Given that the code in the question body wasn't actually using == (and given they accepted the other answer), it seems it was just a problem of how the OP expressed the title. I reworded the title to fix that inconsistency. (Intentionally invalidating the first half of this answer because I agree with @cmaster: that isn't what this question was asking about).

    – Peter Cordes
    Sep 21 at 3:02






  • 3





    @PeterCordes that would be troubling because using polar coordinates we can view complex multiplication as a scaling and a rotation. Rotating an infinite "arm" even by 0 degrees as in the case of multiplying by one we cannot expect to place its tip with finite precision. This is in my opinion a deeper explanation than the accepted one, and also one with echos in the nan != nan rule.

    – Paul Panzer
    Sep 21 at 3:22






  • 2





    C99 specifies that real floating-point types are not promoted to complex when multiplying by a complex type (section 6.3.1.8 of the draft standard), and as far as I know the same is true of C++'s std::complex. This means that 1 is a multiplicative identity for those types in those languages. Python should do the same. I would call its current behavior simply a bug.

    – benrg
    Sep 21 at 19:28






  • 2





    @PaulPanzer: I don't, but the basic concept would be that one zero (which I'll call Z) would always uphold x+Z=x and x*Z=Z, and 1/Z=NaN, one (positive infinitesimal) would uphold 1/P=+INF, one (negative infinitesimal) would uphold 1/N=-INF, and (unsigned infinitesimal) would yield 1/U=NaN. In general, x-x would be U unless x is a true integer, in which case it would yield Z.

    – supercat
    Sep 21 at 20:48



















Answer (6 votes)



















This comes down to how complex multiplication is implemented in CPython. Unlike other languages (e.g. C or C++), CPython takes a somewhat simplistic approach:



  1. ints/floats are promoted to complex numbers in multiplication

  2. the simple schoolbook formula is used, which doesn't provide the desired/expected results as soon as infinite numbers are involved:

Py_complex
_Py_c_prod(Py_complex a, Py_complex b)
{
    Py_complex r;
    r.real = a.real*b.real - a.imag*b.imag;
    r.imag = a.real*b.imag + a.imag*b.real;
    return r;
}


One problematic case with the above code would be:



(0.0+1.0*j)*(inf+inf*j) = (0.0*inf - 1.0*inf) + (0.0*inf + 1.0*inf)j
                        = nan + nan*j


However, one would like to have -inf + inf*j as result.
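This case can be checked directly in CPython (a minimal sketch, reproducing the schoolbook formula's failure described above):

```python
import cmath

inf = float('inf')

# Schoolbook formula: real = 0*inf - 1*inf = nan, imag = 0*inf + 1*inf = nan
z = complex(0.0, 1.0) * complex(inf, inf)
print(z)   # (nan+nanj) instead of the mathematically expected -inf + inf*j
```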



In this respect, other languages are not far ahead: complex number multiplication was for a long time not part of the C standard; it was only included in C99, whose Annex G describes how a complex multiplication should be performed, and it is not as simple as the schoolbook formula above! The C++ standard doesn't specify how complex multiplication should work, so most compiler implementations fall back to the C implementation, which might be C99-conforming (gcc, clang) or not (MSVC).



For the above "problematic" example, C99-compliant implementations (which are more complicated than the school formula) would give (see live) the expected result:



(0.0+1.0*j)*(inf+inf*j) = -inf + inf*j 


Even with the C99 standard, an unambiguous result is not defined for all inputs, and the result might differ even between C99-compliant implementations.



Another side effect of float not being promoted to complex in C99 is that multiplying inf+0.0j with 1.0 or 1.0+0.0j can lead to different results (see here live):



  • (inf+0.0j)*1.0 = inf+0.0j


  • (inf+0.0j)*(1.0+0.0j) = inf-nanj; the imaginary part being -nan rather than nan (as in CPython) doesn't matter here, because all quiet NaNs are equivalent (see this), even though some of them have the sign bit set (and are thus printed with "-", see this) and some do not.

Which is at least counter-intuitive.




My key take-away from this is: there is nothing simple about "simple" complex number multiplication (or division), and when switching between languages or even compilers one must brace oneself for subtle bugs/differences.

































  • I know there are lots of nan bit patterns. Didn't know the sign bit thing, though. But I meant semantically How is -nan different from nan? Or should I say more different than nan is from nan?

    – Paul Panzer
    Sep 22 at 8:29











  • @PaulPanzer This is just an implementation detail of how printf and similar functions work with double: they look at the sign bit to decide whether "-" should be printed or not (no matter whether it is nan or not). So you are right, there is no meaningful difference between "nan" and "-nan"; fixing this part of the answer soon.

    – ead
    Sep 22 at 8:55












  • Ah, good. Was worrying for a mo that everything I thought I knew about fp was not actually correct...

    – Paul Panzer
    Sep 22 at 8:58











  • Sorry for being annoying but are you sure this "there is no imaginary 1.0, i.e. 1.0j which is not the same as 0.0+1.0j with respect to multiplication." is correct? That annex G seems to specify a purely imaginary type (G.2) and also to prescribe how it should be multiplied etc. (G.5.1)

    – Paul Panzer
    Sep 22 at 9:14











  • @PaulPanzer No, thank you for pointing out the issues! As a C++ coder, I mostly see the C99 standard through C++ glasses; it slipped my mind that C is a step ahead here. You are right, obviously, once again.

    – ead
    Sep 22 at 9:39


















Answer (3 votes)



















A funny definition from Python. If we solve this with pen and paper, I would say the expected result is indeed (inf + 0j), as you pointed out, because we know that we mean the norm of 1, so (float('inf')+0j)*1 should equal (inf+0j):



But that is not the case as you can see... when we run it we get:



>>> (float('inf')+0j)*1
(inf+nanj)


Python understands this *1 as a complex number, not as the norm of 1, so it interprets it as *(1+0j), and the problem appears when we try to do inf * 0j = nanj, since inf*0 can't be resolved.



What you actually want to do (assuming 1 is the norm of 1):



Recall that if z = x + iy is a complex number with real part x and imaginary part y, the complex conjugate of z is defined as z* = x − iy, and the absolute value, also called the norm of z is defined as:



|z| = sqrt(x^2 + y^2)



Assuming 1 is the norm of 1 we should do something like:



>>> c_num = complex(float('inf'),0)
>>> value = 1
>>> realPart=(c_num.real)*value
>>> imagPart=(c_num.imag)*value
>>> complex(realPart,imagPart)
result: (inf+0j)


Not very intuitive, I know... but sometimes programming languages are defined differently from what we are used to in day-to-day math.



































    4 Answers
    4






    active

    oldest

    votes








    4 Answers
    4






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    92



















    The 1 is converted to a complex number first, 1 + 0j, which then leads to an inf * 0 multiplication, resulting in a nan.



    (inf + 0j) * 1
    (inf + 0j) * (1 + 0j)
    inf * 1 + inf * 0j + 0j * 1 + 0j * 0j
    # ^ this is where it comes from
    inf + nan j + 0j - 0
    inf + nan j





    share|improve this answer






















    • 8





      For answering the question "why...?", probably the important most step is the first one, where 1 is cast to 1 + 0j.

      – Warren Weckesser
      Sep 20 at 16:28







    • 5





      Note that C99 specifies that real floating-point types are not promoted to complex when multiplying by a complex type (section 6.3.1.8 of the draft standard), and as far as I know the same is true of C++'s std::complex. This may partly be for performance reasons but it also avoids unnecessary NaNs.

      – benrg
      Sep 21 at 19:31











    • @benrg In NumPy, array([inf+0j])*1 also evaluates to array([inf+nanj]). Assuming that the actual multiplication happens somewhere in C/C++ code, would this mean that they wrote custom code to emulate the CPython behavior, rather than using _Complex or std::complex?

      – marnix
      Sep 22 at 9:22






    • 1





      @marnix it is more involved than that. numpy has one central class ufunc from which almost every operator and function derives. ufunc takes care of broadcasting managing strides all that tricky admin that makes working with arrays so convenient. More precisely the split of labor between a specific operator and the general machinery is that the specific operator implements a set of "innermost loops" for each combination of input and output element types it wants to handle. The general machinery takes care of any outer loops and selects the best match innermost loop ...

      – Paul Panzer
      Sep 22 at 9:55






    • 1





      ...promoting any not exactly matching types as required. We can access the list of provided inner loops via the types attribute for np.multiply this yields ['??->?', 'bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l', 'LL->L', 'qq->q', 'QQ->Q', 'ee->e', 'ff->f', 'dd->d', 'gg->g', 'FF->F', 'DD->D', 'GG->G', 'mq->m', 'qm->m', 'md->m', 'dm->m', 'OO->O'] we can see that there are almost no mixed types, in particular, none that mix float "efdg" with complex "FDG".

      – Paul Panzer
      Sep 22 at 10:02















    92



















    The 1 is converted to a complex number first, 1 + 0j, which then leads to an inf * 0 multiplication, resulting in a nan.



    (inf + 0j) * 1
    (inf + 0j) * (1 + 0j)
    inf * 1 + inf * 0j + 0j * 1 + 0j * 0j
    #          ^ inf * 0j is where the nan comes from
    inf + nanj + 0j - 0
    inf + nanj
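    The expansion above can be reproduced directly in Python. For contrast, the snippet also scales the real and imaginary parts separately, which avoids the inf * 0 term (this componentwise scaling is just an illustration, not what CPython does):

    ```python
    import math

    z = complex(math.inf, 0.0)

    # CPython promotes the real 1 to (1+0j) and multiplies componentwise,
    # so the imaginary part becomes inf*0 + 0*1 = nan:
    promoted = z * 1
    print(promoted)            # (inf+nanj)

    # Scaling the real and imaginary parts separately never forms inf*0:
    componentwise = complex(z.real * 1, z.imag * 1)
    print(componentwise)       # (inf+0j)
    ```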



























    • 8





      For answering the question "why...?", probably the most important step is the first one, where 1 is cast to 1 + 0j.

      – Warren Weckesser
      Sep 20 at 16:28







    • 5





      Note that C99 specifies that real floating-point types are not promoted to complex when multiplying by a complex type (section 6.3.1.8 of the draft standard), and as far as I know the same is true of C++'s std::complex. This may partly be for performance reasons but it also avoids unnecessary NaNs.

      – benrg
      Sep 21 at 19:31











    • @benrg In NumPy, array([inf+0j])*1 also evaluates to array([inf+nanj]). Assuming that the actual multiplication happens somewhere in C/C++ code, would this mean that they wrote custom code to emulate the CPython behavior, rather than using _Complex or std::complex?

      – marnix
      Sep 22 at 9:22






    • 1





      @marnix it is more involved than that. numpy has one central class, ufunc, from which almost every operator and function derives. ufunc takes care of broadcasting, managing strides, all that tricky admin that makes working with arrays so convenient. More precisely, the split of labor between a specific operator and the general machinery is that the specific operator implements a set of "innermost loops" for each combination of input and output element types it wants to handle. The general machinery takes care of any outer loops and selects the best-matching innermost loop ...

      – Paul Panzer
      Sep 22 at 9:55






    • 1





      ...promoting any not exactly matching types as required. We can access the list of provided inner loops via the types attribute; for np.multiply this yields ['??->?', 'bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l', 'LL->L', 'qq->q', 'QQ->Q', 'ee->e', 'ff->f', 'dd->d', 'gg->g', 'FF->F', 'DD->D', 'GG->G', 'mq->m', 'qm->m', 'md->m', 'dm->m', 'OO->O']. We can see that there are almost no mixed types; in particular, none that mix float "efdg" with complex "FDG".

      – Paul Panzer
      Sep 22 at 10:02













    edited Oct 3 at 14:18

























    answered Sep 20 at 16:25









    Marat

    6,066 · 1 gold badge · 23 silver badges · 35 bronze badges














    30



















    Mechanistically, the accepted answer is, of course, correct, but I would argue that a deeper answer can be given.



    First, it is useful to clarify the question as
    @PeterCordes does in a comment: "Is there a multiplicative identity for
    complex numbers that does work on inf + 0j?"
    In other words: is what the OP
    sees a weakness in the computer implementation of complex multiplication, or is
    there something conceptually unsound about inf+0j?



    Short answer:



    Using polar coordinates, we can view complex multiplication as a scaling and a rotation. Rotating an infinite "arm", even by 0 degrees as in the case of multiplying by one, we cannot expect to place its tip with finite precision.
    So indeed, there is something fundamentally not right with inf+0j, namely
    that as soon as we are at infinity, a finite offset becomes meaningless.
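    The contrast between the two views can be shown with cmath (a small sketch):

    ```python
    import cmath
    import math

    z = complex(math.inf, 0.0)

    # In polar form the infinite "arm" looks unproblematic:
    # modulus inf, angle exactly 0.
    print(cmath.polar(z))    # (inf, 0.0)

    # But rectangular multiplication by (1+0j) must compute the cross
    # terms, and inf * 0 poisons the imaginary part:
    print(z * (1 + 0j))      # (inf+nanj)
    ```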



    Long answer:



    Background: The "big thing" around which this question revolves is the matter
    of extending a system of numbers (think reals or complex numbers). One reason
    one might want to do that is to add some concept of infinity, or to
    "compactify" if one happens to be a mathematician. There are other
    reasons, too (https://en.wikipedia.org/wiki/Galois_theory, https://en.wikipedia.org/wiki/Non-standard_analysis), but we are not interested in those here.



    One point compactification



    The tricky bit about such an extension is, of course, that we want these new
    numbers to fit into the existing arithmetic. The simplest way is to add a
    single element at infinity
    (https://en.wikipedia.org/wiki/Alexandroff_extension) and make it the result of dividing anything but zero by zero. This works for the
    reals (https://en.wikipedia.org/wiki/Projectively_extended_real_line) and the complex numbers (https://en.wikipedia.org/wiki/Riemann_sphere).



    Other extensions ...



    While the one point compactification is simple and mathematically sound, "richer" extensions comprising multiple infinities have been sought. The IEEE 754 standard for real floating point numbers has +inf and -inf (https://en.wikipedia.org/wiki/Extended_real_number_line). That looks
    natural and straightforward, but already forces us to jump through hoops and
    invent stuff like -0 (https://en.wikipedia.org/wiki/Signed_zero).



    ... of the complex plane



    What about more-than-one-inf extensions of the complex plane?



    In computers, complex numbers are typically implemented by sticking two fp reals together, one for the real and one for the imaginary part. That is perfectly fine as long as everything is finite. As soon as infinities are considered, however, things become tricky.



    The complex plane has a natural rotational symmetry, which ties in nicely with complex arithmetic as multiplying the entire plane by e^phij is the same as a phi radian rotation around 0.



    That annex G thing



    Now, to keep things simple, complex fp simply uses the extensions (+/-inf, nan etc.) of the underlying real number implementation. This choice may seem so natural it isn't even perceived as a choice, but let's take a closer look at what it implies. A simple visualization of this extension of the complex plane looks like (I = infinite, f = finite, 0 = 0)



    I IIIIIIIII I

    I fffffffff I
    I fffffffff I
    I fffffffff I
    I fffffffff I
    I ffff0ffff I
    I fffffffff I
    I fffffffff I
    I fffffffff I
    I fffffffff I

    I IIIIIIIII I


    But since a true complex plane is one that respects complex multiplication, a more informative projection would be



            III
         I     I
           fffff
          fffffff
         fffffffff
       I fffffffff I
       I ffff0ffff I
       I fffffffff I
         fffffffff
          fffffff
           fffff
         I     I
            III


    In this projection we see the "uneven distribution" of infinities that is not only ugly but also the root of problems of the kind the OP has suffered: most infinities (those of the forms (+/-inf, finite) and (finite, +/-inf)) are lumped together at the four principal directions; all other directions are represented by just four infinities (+/-inf, +/-inf). It shouldn't come as a surprise that extending complex multiplication to this geometry is a nightmare.



    Annex G of the C99 spec tries its best to make it work, including bending the rules on how inf and nan interact (essentially inf trumps nan). OP's problem is sidestepped by not promoting reals and a proposed purely imaginary type to complex, but having the real 1 behave differently from the complex 1 doesn't strike me as a solution. Tellingly, Annex G stops short of fully specifying what the product of two infinities should be.



    Can we do better?



    It is tempting to try and fix these problems by choosing a better geometry of infinities. In analogy to the extended real line, we could add one infinity for each direction. This construction is similar to the projective plane, but doesn't lump together opposite directions.
    Infinities would be represented in polar coordinates as inf x e^(2 pi omega i), and
    defining products would be straightforward. In particular, OP's problem would be solved quite naturally.
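    A toy sketch of such directed infinities (entirely hypothetical, not any existing API): represent inf x e^(2 pi omega i) by its angle omega alone, measured in turns, so that multiplying by a finite nonzero number is just a rotation.

    ```python
    import cmath
    import math

    def mul_directed_inf(omega, w):
        """Multiply the directed infinity inf * e^(2*pi*omega*i) by a
        finite, nonzero complex w; only the direction omega changes."""
        if w == 0:
            return float('nan')  # inf * 0 stays undefined even here
        return (omega + cmath.phase(w) / (2 * math.pi)) % 1.0

    # Multiplying by 1 rotates by 0 turns, so 1 acts as an identity:
    print(mul_directed_inf(0.0, 1))    # 0.0
    print(mul_directed_inf(0.0, 1j))   # 0.25 (a quarter turn)
    ```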



    But this is where the good news ends. In a way, we can be hurled back to square one by---not unreasonably---requiring that our new-style infinities support functions that extract their real or imaginary parts. Addition is another problem: adding two non-antipodal infinities, we'd have to set the angle to undefined, i.e. nan (one could argue the angle must lie between the two input angles, but there is no simple way of representing that "partial nan-ness").



    Riemann to the rescue



    In view of all this maybe the good old one point compactification is the safest thing to do. Maybe the authors of Annex G felt the same when mandating a function cproj that lumps all the infinities together.
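    Python has no cproj, but its effect is easy to sketch. The function below is modeled on the C99 Annex G description (lump every infinity into a single point, here rendered as positive real infinity with the sign of the imaginary zero preserved); it is an illustration, not a drop-in replacement:

    ```python
    import math

    def cproj(z: complex) -> complex:
        """Sketch of C99's cproj: project z onto the Riemann sphere,
        mapping every infinity to one point."""
        if math.isinf(z.real) or math.isinf(z.imag):
            return complex(math.inf, math.copysign(0.0, z.imag))
        return z

    print(cproj(complex(math.inf, 0.0)))    # (inf+0j)
    print(cproj(complex(3.0, -math.inf)))   # (inf-0j)
    print(cproj(complex(3.0, 4.0)))         # (3+4j)
    ```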




    Here is a related question answered by people more competent on the subject matter than I am.




























    • 5





      Yeah, because nan != nan. I understand that this answer is half-joking, but I fail to see why it should be helpful to the OP the way it's written.

      – cmaster
      Sep 21 at 0:37











    • Given that the code in the question body wasn't actually using == (and given they accepted the other answer), it seems it was just a problem of how the OP expressed the title. I reworded the title to fix that inconsistency. (Intentionally invalidating the first half of this answer because I agree with @cmaster: that isn't what this question was asking about.)

      – Peter Cordes
      Sep 21 at 3:02






    • 3





      @PeterCordes that would be troubling, because using polar coordinates we can view complex multiplication as a scaling and a rotation. Rotating an infinite "arm", even by 0 degrees as in the case of multiplying by one, we cannot expect to place its tip with finite precision. This is in my opinion a deeper explanation than the accepted one, and also one with echoes in the nan != nan rule.

      – Paul Panzer
      Sep 21 at 3:22






    • 2





      C99 specifies that real floating-point types are not promoted to complex when multiplying by a complex type (section 6.3.1.8 of the draft standard), and as far as I know the same is true of C++'s std::complex. This means that 1 is a multiplicative identity for those types in those languages. Python should do the same. I would call its current behavior simply a bug.

      – benrg
      Sep 21 at 19:28






    • 2





      @PaulPanzer: I don't, but the basic concept would be that one zero (which I'll call Z) would always uphold x+Z=x and x*Z=Z, and 1/Z=NaN, one (positive infinitesimal) would uphold 1/P=+INF, one (negative infinitesimal) would uphold 1/N=-INF, and (unsigned infinitesimal) would yield 1/U=NaN. In general, x-x would be U unless x is a true integer, in which case it would yield Z.

      – supercat
      Sep 21 at 20:48
















    30



















    Mechanistically, the accepted answer is, of course, correct, but I would argue that a deeper ansswer can be given.



    First, it is useful to clarify the question as
    @PeterCordes does in a comment: "Is there a multiplicative identity for
    complex numbers that does work on inf + 0j?"
    or in other words is what OP
    sees a weakness in the computer implementation of complex multiplication or is
    there something conceptually unsound with inf+0j



    Short answer:



    Using polar coordinates we can view complex multiplication as a scaling and a rotation. Rotating an infinite "arm" even by 0 degrees as in the case of multiplying by one we cannot expect to place its tip with finite precision.
    So indeed, there is something fundamentally not right with inf+0j, namely,
    that as soon as we are at infinity a finite offset becomes meaningless.



    Long answer:



    Background: The "big thing" around which this question revolves is the matter
    of extending a system of numbers (think reals or complex numbers). One reason
    one might want to do that is to add some concept of infinity, or to
    "compactify" if one happens to be a mathematician. There are other
    reasons, too (https://en.wikipedia.org/wiki/Galois_theory, https://en.wikipedia.org/wiki/Non-standard_analysis), but we are not interested in those here.



    One point compactification



    The tricky bit about such an extension is, of course, that we want these new
    numbers to fit into the existing arithmetic. The simplest way is to add a
    single element at infinity
    (https://en.wikipedia.org/wiki/Alexandroff_extension) and make it equal anything but zero divided by zero. This works for the
    reals (https://en.wikipedia.org/wiki/Projectively_extended_real_line) and the complex numbers (https://en.wikipedia.org/wiki/Riemann_sphere).



    Other extensions ...



    While the one point compactification is simple and mathematically sound, "richer" extensions comprising multiple infinties have been sought. The IEEE 754 standard for real floating point numbers has +inf and -inf (https://en.wikipedia.org/wiki/Extended_real_number_line). Looks
    natural and straightforward but already forces us to jump through hoops and
    invent stuff like -0 https://en.wikipedia.org/wiki/Signed_zero



    ... of the complex plane



    What about more-than-one-inf extensions of the complex plane?



    In computers, complex numbers are typically implemented by sticking two fp reals together one for the real and one for the imaginary part. That is perfectly fine as long as everything is finite. As soon, however, as infinities are considered things become tricky.



    The complex plane has a natural rotational symmetry, which ties in nicely with complex arithmetic as multiplying the entire plane by e^phij is the same as a phi radian rotation around 0.



    That annex G thing



    Now, to keep things simple, complex fp simply uses the extensions (+/-inf, nan etc.) of the underlying real number implementation. This choice may seem so natural it isn't even perceived as a choice, but let's take a closer look at what it implies. A simple visualization of this extension of the complex plane looks like (I = infinite, f = finite, 0 = 0)



    I IIIIIIIII I

    I fffffffff I
    I fffffffff I
    I fffffffff I
    I fffffffff I
    I ffff0ffff I
    I fffffffff I
    I fffffffff I
    I fffffffff I
    I fffffffff I

    I IIIIIIIII I


    But since a true complex plane is one that respects complex multiplication, a more informative projection would be



     III 
    I I
    fffff
    fffffff
    fffffffff
    I fffffffff I
    I ffff0ffff I
    I fffffffff I
    fffffffff
    fffffff
    fffff
    I I
    III


    In this projection we see the "uneven distribution" of infinities that is not only ugly but also the root of problems of the kind OP has suffered: Most infinities (those of the forms (+/-inf, finite) and (finite, +/-inf) are lumped together at the four principal directions all other directions are represented by just four infinities (+/-inf, +-inf). It shouldn't come as a surprise that extending complex multiplication to this geometry is a nightmare.



    Annex G of the C99 spec tries its best to make it work, including bending the rules on how inf and nan interact (essentially inf trumps nan). OP's problem is sidestepped by not promoting reals and a proposed purely imaginary type to complex, but having the real 1 behave differently from the complex 1 doesn't strike me as a solution. Tellingly, Annex G stops short of fully specifying what the product of two infinities should be.



    Can we do better?



    It is tempting to try and fix these problems by choosing a better geometry of infinities. In analogy to the extended real line we could add one infinity for each direction. This construction is similar to the projective plane but doesn't lump together opposite directions.
    Infinities would be represented in polar coordinates inf x e^2 omega pi i,
    defining products would be straightforward. In particular, OP's problem would be solved quite naturally.



    But this is where the good news ends. In a way we can be hurled back to square one by---not unreasonably---requiring that our newstyle infinities support functions that extract their real or imaginary parts. Addition is another problem; adding two nonantipodal infinities we'd have to set the angle to undefined i.e. nan (one could argue the angle must lie between the two input angles but there is no simple way of representing that "partial nan-ness")



    Riemann to the rescue



    In view of all this maybe the good old one point compactification is the safest thing to do. Maybe the authors of Annex G felt the same when mandating a function cproj that lumps all the infinities together.




    Here is a related question answered by people more competent on the subject matter than I am.






    share|improve this answer






















    • 5





      Yeah, because nan != nan. I understand that this answer is half-joking, but I fail to see why it should be helpful to the OP the way it's written.

      – cmaster
      Sep 21 at 0:37











    • Given that the code in the question body wasn't actually using == (and given they accepted the other answer), it seems it was just a problem of how the OP expressed the title. I reworded the title to fix that inconsistency. (Intentionally invaliding the first half of this answer because I agree with @cmaster: that isn't what this question was asking about).

      – Peter Cordes
      Sep 21 at 3:02






    • 3





      @PeterCordes that would be troubling because using polar coordinates we can view complex multiplication as a scaling and a rotation. Rotating an infinite "arm" even by 0 degrees as in the case of multiplying by one we cannot expect to place its tip with finite precision. This is in my opinion a deeper explanation than the accepted one, and also one with echos in the nan != nan rule.

      – Paul Panzer
      Sep 21 at 3:22






    • 2





      C99 specifies that real floating-point types are not promoted to complex when multiplying by a complex type (section 6.3.1.8 of the draft standard), and as far as I know the same is true of C++'s std::complex. This means that 1 is a multiplicative identity for those types in those languages. Python should do the same. I would call its current behavior simply a bug.

      – benrg
      Sep 21 at 19:28






    • 2





      @PaulPanzer: I don't, but the basic concept would be that one zero (which I'll call Z) would always uphold x+Z=x and x*Z=Z, and 1/Z=NaN, one (positive infinitesimal) would uphold 1/P=+INF, one (negative infinitesimal) would uphold 1/N=-INF, and (unsigned infinitesimal) would yield 1/U=NaN. In general, x-x would be U unless x is a true integer, in which case it would yield Z.

      – supercat
      Sep 21 at 20:48














    30















    30











    30









    Mechanistically, the accepted answer is, of course, correct, but I would argue that a deeper ansswer can be given.



    First, it is useful to clarify the question as
    @PeterCordes does in a comment: "Is there a multiplicative identity for
    complex numbers that does work on inf + 0j?"
    or in other words is what OP
    sees a weakness in the computer implementation of complex multiplication or is
    there something conceptually unsound with inf+0j



    Short answer:



    Using polar coordinates we can view complex multiplication as a scaling and a rotation. Rotating an infinite "arm" even by 0 degrees as in the case of multiplying by one we cannot expect to place its tip with finite precision.
    So indeed, there is something fundamentally not right with inf+0j, namely,
    that as soon as we are at infinity a finite offset becomes meaningless.



    Long answer:



    Background: The "big thing" around which this question revolves is the matter
    of extending a system of numbers (think reals or complex numbers). One reason
    one might want to do that is to add some concept of infinity, or to
    "compactify" if one happens to be a mathematician. There are other
    reasons, too (https://en.wikipedia.org/wiki/Galois_theory, https://en.wikipedia.org/wiki/Non-standard_analysis), but we are not interested in those here.



    One point compactification



    The tricky bit about such an extension is, of course, that we want these new
    numbers to fit into the existing arithmetic. The simplest way is to add a
    single element at infinity
    (https://en.wikipedia.org/wiki/Alexandroff_extension) and make it equal anything but zero divided by zero. This works for the
    reals (https://en.wikipedia.org/wiki/Projectively_extended_real_line) and the complex numbers (https://en.wikipedia.org/wiki/Riemann_sphere).



    Other extensions ...



    While the one point compactification is simple and mathematically sound, "richer" extensions comprising multiple infinties have been sought. The IEEE 754 standard for real floating point numbers has +inf and -inf (https://en.wikipedia.org/wiki/Extended_real_number_line). Looks
    natural and straightforward but already forces us to jump through hoops and
    invent stuff like -0 https://en.wikipedia.org/wiki/Signed_zero



    ... of the complex plane



    What about more-than-one-inf extensions of the complex plane?



    In computers, complex numbers are typically implemented by sticking two fp reals together one for the real and one for the imaginary part. That is perfectly fine as long as everything is finite. As soon, however, as infinities are considered things become tricky.



    The complex plane has a natural rotational symmetry, which ties in nicely with complex arithmetic as multiplying the entire plane by e^phij is the same as a phi radian rotation around 0.



    That annex G thing



    Now, to keep things simple, complex fp simply uses the extensions (+/-inf, nan etc.) of the underlying real number implementation. This choice may seem so natural it isn't even perceived as a choice, but let's take a closer look at what it implies. A simple visualization of this extension of the complex plane looks like (I = infinite, f = finite, 0 = 0)



    I IIIIIIIII I

    I fffffffff I
    I fffffffff I
    I fffffffff I
    I fffffffff I
    I ffff0ffff I
    I fffffffff I
    I fffffffff I
    I fffffffff I
    I fffffffff I

    I IIIIIIIII I


    But since a true complex plane is one that respects complex multiplication, a more informative projection would be



     III 
    I I
    fffff
    fffffff
    fffffffff
    I fffffffff I
    I ffff0ffff I
    I fffffffff I
    fffffffff
    fffffff
    fffff
    I I
    III


    In this projection we see the "uneven distribution" of infinities that is not only ugly but also the root of problems of the kind OP has suffered: Most infinities (those of the forms (+/-inf, finite) and (finite, +/-inf) are lumped together at the four principal directions all other directions are represented by just four infinities (+/-inf, +-inf). It shouldn't come as a surprise that extending complex multiplication to this geometry is a nightmare.



    Annex G of the C99 spec tries its best to make it work, including bending the rules on how inf and nan interact (essentially inf trumps nan). OP's problem is sidestepped by not promoting reals and a proposed purely imaginary type to complex, but having the real 1 behave differently from the complex 1 doesn't strike me as a solution. Tellingly, Annex G stops short of fully specifying what the product of two infinities should be.



    Can we do better?



    It is tempting to try and fix these problems by choosing a better geometry of infinities. In analogy to the extended real line we could add one infinity for each direction. This construction is similar to the projective plane but doesn't lump together opposite directions.
    Infinities would be represented in polar coordinates as inf x e^(2 pi omega i), and defining products would be straightforward. In particular, OP's problem would be solved quite naturally.
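    A minimal sketch of such directed infinities in Python (the class name and representation are hypothetical, invented for illustration): each infinity stores only its direction omega, measured in turns, and a product simply adds angles.

    ```python
    import math
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DirInf:
        """Hypothetical directed infinity: inf * e^(2*pi*omega*i)."""
        omega: float  # direction in turns, normalized to [0, 1)

        def __mul__(self, other):
            if isinstance(other, DirInf):
                # inf*e^(2 pi a i) * inf*e^(2 pi b i) = inf*e^(2 pi (a+b) i)
                return DirInf((self.omega + other.omega) % 1.0)
            # a finite, nonzero complex factor just rotates the infinite arm
            turns = math.atan2(other.imag, other.real) / (2 * math.pi)
            return DirInf((self.omega + turns) % 1.0)

    # OP's case: the infinity along the positive real axis times 1 stays put.
    print(DirInf(0.0) * complex(1.0, 0.0))   # DirInf(omega=0.0)
    print(DirInf(0.25) * DirInf(0.5))        # DirInf(omega=0.75)
    ```

    Multiplication is clean in this model; it is addition and real/imaginary-part extraction that break it, as the next paragraph explains.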



    But this is where the good news ends. In a way we are hurled back to square one by requiring, not unreasonably, that our new-style infinities support functions that extract their real or imaginary parts. Addition is another problem: when adding two non-antipodal infinities we'd have to set the angle to undefined, i.e. nan (one could argue the angle must lie between the two input angles, but there is no simple way of representing that "partial nan-ness").



    Riemann to the rescue



    In view of all this, maybe the good old one-point compactification is the safest thing to do. Maybe the authors of Annex G felt the same when mandating a function cproj that lumps all the infinities together.
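    Python's cmath module has no cproj, but C99's cproj behavior (every infinity maps to the single point at infinity, keeping only the sign of the zero imaginary part) is easy to sketch:

    ```python
    import math

    def cproj(z: complex) -> complex:
        """Sketch of C99's cproj: projection onto the Riemann sphere.

        Any value with an infinite component is mapped to the one point at
        infinity, represented as inf + copysign(0, imag)j; finite values
        pass through unchanged.
        """
        if math.isinf(z.real) or math.isinf(z.imag):
            return complex(math.inf, math.copysign(0.0, z.imag))
        return z

    print(cproj(complex(-math.inf, 1.0)))    # (inf+0j)
    print(cproj(complex(0.0, -math.inf)))    # (inf-0j)
    print(cproj(complex(1.0, 2.0)))          # (1+2j)
    ```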




    Here is a related question answered by people more competent on the subject matter than I am.










    edited Sep 24 at 5:37
    answered Sep 20 at 17:59

    Paul Panzer
    39.9k 2 gold badges 26 silver badges 60 bronze badges


    • 5





      Yeah, because nan != nan. I understand that this answer is half-joking, but I fail to see why it should be helpful to the OP the way it's written.

      – cmaster
      Sep 21 at 0:37











    • Given that the code in the question body wasn't actually using == (and given they accepted the other answer), it seems it was just a problem of how the OP expressed the title. I reworded the title to fix that inconsistency. (Intentionally invaliding the first half of this answer because I agree with @cmaster: that isn't what this question was asking about).

      – Peter Cordes
      Sep 21 at 3:02






    • 3





      @PeterCordes that would be troubling because using polar coordinates we can view complex multiplication as a scaling and a rotation. Rotating an infinite "arm" even by 0 degrees as in the case of multiplying by one we cannot expect to place its tip with finite precision. This is in my opinion a deeper explanation than the accepted one, and also one with echos in the nan != nan rule.

      – Paul Panzer
      Sep 21 at 3:22






    • 2





      C99 specifies that real floating-point types are not promoted to complex when multiplying by a complex type (section 6.3.1.8 of the draft standard), and as far as I know the same is true of C++'s std::complex. This means that 1 is a multiplicative identity for those types in those languages. Python should do the same. I would call its current behavior simply a bug.

      – benrg
      Sep 21 at 19:28






    • 2





      @PaulPanzer: I don't, but the basic concept would be that one zero (which I'll call Z) would always uphold x+Z=x and x*Z=Z, and 1/Z=NaN, one (positive infinitesimal) would uphold 1/P=+INF, one (negative infinitesimal) would uphold 1/N=-INF, and (unsigned infinitesimal) would yield 1/U=NaN. In general, x-x would be U unless x is a true integer, in which case it would yield Z.

      – supercat
      Sep 21 at 20:48













    This is a consequence of how complex multiplication is implemented in CPython. Unlike other languages (e.g. C or C++), CPython takes a somewhat simplistic approach:



    1. ints/floats are promoted to complex numbers in multiplication

    2. the simple school formula is used, which doesn't provide the desired/expected results as soon as infinite numbers are involved:

    Py_complex
    _Py_c_prod(Py_complex a, Py_complex b)
    {
        Py_complex r;
        r.real = a.real*b.real - a.imag*b.imag;
        r.imag = a.real*b.imag + a.imag*b.real;
        return r;
    }



    One problematic case with the above code would be:



    (0.0+1.0*j)*(inf+inf*j) = (0.0*inf-1*inf)+(0.0*inf+1.0*inf)j
    = nan + nan*j


    However, one would like to have -inf + inf*j as result.
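    This case can be reproduced directly in Python; both components come out nan because the 0.0*inf = nan terms contaminate each part of the school formula (CPython behavior at the time of writing):

    ```python
    import math

    inf = float("inf")

    # real part: 0.0*inf - 1.0*inf = nan - inf = nan
    # imag part: 0.0*inf + 1.0*inf = nan + inf = nan
    z = complex(0.0, 1.0) * complex(inf, inf)

    print(z)  # (nan+nanj)
    ```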



    In this respect other languages are not far ahead: complex number multiplication was for a long time not part of the C standard; it was included only in C99, whose Annex G describes how a complex multiplication should be performed - and it is not as simple as the school formula above! The C++ standard doesn't specify how complex multiplication should work, so most compiler implementations fall back to the C implementation, which might be C99-conforming (gcc, clang) or not (MSVC).



    For the above "problematic" example, C99-compliant implementations (which are more complicated than the school formula) would give (see live) the expected result:



    (0.0+1.0*j)*(inf+inf*j) = -inf + inf*j 


    Even with the C99 standard, the result is not unambiguously defined for all inputs, and it may differ even between C99-compliant implementations.



    Another side effect of float not being promoted to complex in C99 is that multiplying inf+0.0j with 1.0 or with 1.0+0.0j can lead to different results (see here live):



    • (inf+0.0j)*1.0 = inf+0.0j


    • (inf+0.0j)*(1.0+0.0j) = inf-nanj. That the imaginary part is -nan rather than nan (as in CPython) doesn't matter here, because all quiet nans are equivalent (see this); some of them just have the sign bit set (and thus print with a "-", see this) and some don't.

    Which is at least counter-intuitive.
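    The sign bit of a nan can be made visible in Python as well; here is a small sketch that sets the sign bit directly in the IEEE 754 bit pattern (note that CPython prints every nan as plain nan, but math.copysign reveals the sign):

    ```python
    import math
    import struct

    # Reinterpret a double as its 64-bit pattern, set the sign bit, go back.
    bits = struct.unpack("<Q", struct.pack("<d", float("nan")))[0]
    neg_nan = struct.unpack("<d", struct.pack("<Q", bits | (1 << 63)))[0]

    print(neg_nan)                       # prints: nan (sign not shown)
    print(math.copysign(1.0, neg_nan))   # prints: -1.0 (the sign bit is set)
    ```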




    My key take-away from this: there is nothing simple about "simple" complex number multiplication (or division), and when switching between languages or even compilers one must brace oneself for subtle bugs/differences.

































    • I know there are lots of nan bit patterns. Didn't know the sign bit thing, though. But I meant semantically How is -nan different from nan? Or should I say more different than nan is from nan?

      – Paul Panzer
      Sep 22 at 8:29











    • @PaulPanzer This is just an implementation detail of how printf and similar works with double: they look at the sign-bit to decide whether "-" should be printed or not (no matter whether it is nan or not). So you are right, there is no meaningful difference between "nan" and "-nan", fixing this part of the answer soon.

      – ead
      Sep 22 at 8:55












    • Ah, good. Was worrying for a mo that everything I thought I knew about fp was not actually correct...

      – Paul Panzer
      Sep 22 at 8:58











    • Sorry for being annoying but are you sure this "there is no imaginary 1.0, i.e. 1.0j which is not the same as 0.0+1.0j with respect to multiplication." is correct? That annex G seems to specify a purely imaginary type (G.2) and also to prescribe how it should be multiplied etc. (G.5.1)

      – Paul Panzer
      Sep 22 at 9:14











    • @PaulPanzer No, thank you for pointing out the issues! As c++-coder, I mostly see C99-standard through C++-glases - it slipped my mind, that C is a step ahead here - you are right obviously, once again.

      – ead
      Sep 22 at 9:39


















    edited Sep 23 at 15:32
    Toby Speight
    19.6k 13 gold badges 49 silver badges 74 bronze badges

    answered Sep 22 at 6:50
    ead
    19k 2 gold badges 41 silver badges 74 bronze badges















    • I know there are lots of nan bit patterns. Didn't know the sign bit thing, though. But I meant semantically How is -nan different from nan? Or should I say more different than nan is from nan?

      – Paul Panzer
      Sep 22 at 8:29











    • @PaulPanzer This is just an implementation detail of how printf and similar works with double: they look at the sign-bit to decide whether "-" should be printed or not (no matter whether it is nan or not). So you are right, there is no meaningful difference between "nan" and "-nan", fixing this part of the answer soon.

      – ead
      Sep 22 at 8:55












    • Ah, good. Was worrying for a mo that everything I thought I knew about fp was not actually correct...

      – Paul Panzer
      Sep 22 at 8:58











    • Sorry for being annoying but are you sure this "there is no imaginary 1.0, i.e. 1.0j which is not the same as 0.0+1.0j with respect to multiplication." is correct? That annex G seems to specify a purely imaginary type (G.2) and also to prescribe how it should be multiplied etc. (G.5.1)

      – Paul Panzer
      Sep 22 at 9:14











    • @PaulPanzer No, thank you for pointing out the issues! As c++-coder, I mostly see C99-standard through C++-glases - it slipped my mind, that C is a step ahead here - you are right obviously, once again.

      – ead
      Sep 22 at 9:39

















    • I know there are lots of nan bit patterns. Didn't know the sign bit thing, though. But I meant semantically How is -nan different from nan? Or should I say more different than nan is from nan?

      – Paul Panzer
      Sep 22 at 8:29











    • @PaulPanzer This is just an implementation detail of how printf and similar works with double: they look at the sign-bit to decide whether "-" should be printed or not (no matter whether it is nan or not). So you are right, there is no meaningful difference between "nan" and "-nan", fixing this part of the answer soon.

      – ead
      Sep 22 at 8:55












    • Ah, good. Was worrying for a mo that everything I thought I knew about fp was not actually correct...

      – Paul Panzer
      Sep 22 at 8:58











    • Sorry for being annoying but are you sure this "there is no imaginary 1.0, i.e. 1.0j which is not the same as 0.0+1.0j with respect to multiplication." is correct? That annex G seems to specify a purely imaginary type (G.2) and also to prescribe how it should be multiplied etc. (G.5.1)

      – Paul Panzer
      Sep 22 at 9:14











    • @PaulPanzer No, thank you for pointing out the issues! As c++-coder, I mostly see C99-standard through C++-glases - it slipped my mind, that C is a step ahead here - you are right obviously, once again.

      – ead
      Sep 22 at 9:39



































    A funny definition from Python. If we were solving this with pen and paper, I would say the expected result is (inf + 0j), as you pointed out, because we know that we mean the real number 1, so (float('inf')+0j)*1 should equal (inf+0j).



    But that is not the case as you can see... when we run it we get:



    >>> complex(float('inf'), 0) * 1
    (inf+nanj)


    Python understands this *1 as multiplication by the complex number (1+0j), not by the real number 1. The imaginary part of the product is then inf*0 + 0*1, and inf*0 cannot be resolved, so it evaluates to nan.



    What you actually want to do (treating 1 as a real scalar):



    Recall that if z = x + iy is a complex number with real part x and imaginary part y, the complex conjugate of z is defined as z* = x − iy, and the absolute value, also called the norm of z is defined as:



    |z| = √(x² + y²)
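In Python, the built-in abs() computes exactly this norm for a complex number, so the definition can be checked directly:

```python
# |z| = sqrt(x**2 + y**2); for z = 3 + 4j this is sqrt(9 + 16) = 5
z = complex(3.0, 4.0)
print(abs(z))  # 5.0
```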



    Treating 1 as a real scalar, we should do something like:



    >>> c_num = complex(float('inf'), 0)
    >>> value = 1
    >>> real_part = c_num.real * value
    >>> imag_part = c_num.imag * value
    >>> complex(real_part, imag_part)
    (inf+0j)


    Not very intuitive, I know... but programming languages are sometimes defined differently from what we are used to in day-to-day mathematics.
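Where the nanj comes from can be reproduced with plain floats by spelling out the componentwise multiplication rule (the helper name complex_mul is just illustrative):

```python
def complex_mul(a_re, a_im, b_re, b_im):
    """Multiply (a_re + a_im*j) by (b_re + b_im*j) componentwise."""
    re = a_re * b_re - a_im * b_im
    im = a_re * b_im + a_im * b_re
    return re, im

# (inf + 0j) * (1 + 0j): the imaginary part contains inf * 0.0, which is nan
re, im = complex_mul(float('inf'), 0.0, 1.0, 0.0)
print(re, im)  # inf nan
```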






        edited Oct 7 at 13:56

























        answered Oct 3 at 22:10









        costargc

        1398 bronze badges





























