Are variable time comparisons always a security risk in cryptography code?


I came across the cryptography python package, and noticed it had the following about reporting security vulnerabilities:




Examples of things we wouldn’t consider security issues:
Using a variable time comparison somewhere, if it’s not possible to articulate any particular program in which this would result in problematic information disclosure.




I thought avoiding variable time comparisons was one of the properties cryptography implementations guaranteed in order to be secure. Should this be a goal of all secure cryptography libraries? If not, under what conditions does it make no difference to security whether or not a constant time comparison is used?










  • About "variable time comparison" and "constant time comparison" algorithms: sjoerdlangkemper.nl/2017/11/08/…

    – bolov
    Apr 15 at 11:40












  • TLS 1.3 dropped support for AES-CBC, which was broken and then fixed many times, because it was too hard to implement without timing attacks. Here is a paper about how hard it is to do all of the operations properly in constant time: imperialviolet.org/2013/02/04/luckythirteen.html. More discussion is at security.stackexchange.com/a/207414/45960.

    – simbo1905
    Apr 16 at 6:00

















Tags: cryptography, timing-attack, side-channel






edited Apr 15 at 10:51 by Anders (52.2k reputation, 22 gold badges, 148 silver badges, 174 bronze badges)

asked Apr 15 at 1:29 by curiousgeorge7 (61 reputation, 1 silver badge, 4 bronze badges)















5 Answers








































No, they are not always a risk; indeed, it is impossible to build a practical cryptographic library in which every function is constant time.

Take, for instance, a decryption function. In a real-world cryptographic library, it is going to be faster to decrypt a 10-byte message than a 10-gigabyte message. You could theoretically create a library where decryption took constant time for any message of supported size, but that would be absurd, so you have a variable time function that leaks (useless) information about the size of the message being decrypted. One specific reason this isn't an issue is that the assumption that the attacker may know the message size is built into the security proofs. Security properties are generally defined only for messages of a specific size, and since they do not apply to messages of different sizes, there is no impact on the designed security of a system if message size affects how long a function takes.

In other cases, timing may be potentially sensitive, but a function can be designed to eliminate any information an attacker could use against it. Double-MAC'ing is one example. Evaluation of a MAC is very definitely sensitive to timing attacks: in an encrypt-then-MAC scheme, for example, you compare the MAC sent with a ciphertext against a MAC freshly computed over the ciphertext, and a timing attack could allow an adversary to repeatedly submit modified ciphertext/MAC combinations and iteratively compute a valid MAC without knowing the secret key. If, however, on every comparison you re-MAC both MACs with an ephemeral key and compare those, it doesn't matter if the comparison leaks timing: the fresh ephemeral key generated for each evaluation thwarts any effort to iteratively discover valid bits.


























  • 17





    I always assumed that constant time functions were in regards to the content of m, not the length of m. So for example, a comparison should not be faster if the first bit is unequal, than as if everything is identical.

    – MechMK1
    Apr 15 at 9:21












  • I guess that's a question of context. Generally the size of the message is leaked by other means (e.g. by just counting the bytes sent), so a time constant function for ciphertext with different sizes would be useless. I'd at least indicate explicitly if the size of the ciphertext needs to be obfuscated by any constant time function; most cryptographers would not expect that to be the case.

    – Maarten Bodewes
    Apr 15 at 13:15











  • @MechMK1 Exactly... The example does not sound convincing. It would be way more convincing when thinking about compression+encryption and the attacker having control over part of the plaintext since the timing can give an idea of the compression rate and that gives an idea of what is in the rest of the plaintext.

    – Bakuriu
    Apr 15 at 21:23


































Variable time comparisons will leak information. If there is no harm in revealing the data being compared, then there is no issue with using variable time comparisons. If it is crucial that it remain secret, then use constant time comparison.



Some variable time comparisons that are exploitable in theory might be hard to exploit in practice. The quality of timing data that a hacker is able to collect might not be good enough for a timing attack to succeed in certain real world contexts.



For example, a timing attack is more difficult if the software processing secret information runs on hardware that already has inconsistent, unpredictable run times for identical inputs, even if someone has access to nanosecond-precise timing data.



If the secret information also isn't that sensitive and has a short lifespan, then the risk associated with such a timing attack might be too small to worry about.



However, there are ways to work around noisy timing data. Maybe a hacker uses other side channels to supplement timing data. Maybe he sends your server specific requests that make timing more predictable. And of course he can perform the attack many times and average out the noise.



(As a developer, if you're not sure, then use constant time operations. Don't just assume it's too hard to exploit.)
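In Python, for instance, the standard library already ships a constant-time comparison, so there is little reason to roll your own. A minimal sketch (the tag value and function name here are made up):

```python
import hmac

# Hypothetical expected MAC tag; in practice this would be
# computed over the message with a secret key.
EXPECTED_TAG = b"expected-mac-bytes"

def check_tag(candidate: bytes) -> bool:
    # hmac.compare_digest takes time independent of where the
    # inputs first differ, unlike the short-circuiting == operator.
    return hmac.compare_digest(candidate, EXPECTED_TAG)
```

`hmac.compare_digest` (or `secrets.compare_digest`, an alias) is the idiomatic choice whenever the compared values are secret.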




Things that definitely should not be compared using a variable time algorithm:



  • Secret (symmetric) keys or private (asymmetric) keys

  • Message authentication tags

  • Messages (or hashes of messages) containing secrets

  • Hard-coded (or user configurable) secrets

Examples for which it may or may not matter:



  • Public keys

  • Ciphertext

  • Public IP addresses

  • Public identifiers (excluding personally identifiable information)


Remember that compromising user privacy can be a security issue. (Even if you don't personally think you need to keep some kinds of information private.) Therefore developers should still consider side channel attacks, even when working with data you might not think is that sensitive.
























































    First, cryptography is a broad term, and the Python cryptography library does not only include primitives but also X.509 handling and the like.

    The main question is whether the timing issues can be used by an attacker to get secrets, or to reduce the complexity of the problem (and thus the time needed to get secrets) to a value which makes attacks realistic. For one, this means the attacker must be able to measure the time with sufficient granularity in the first place. That is usually not the case with complex web applications, where variations in network latency, execution time of SQL queries, and so on will likely make small timing differences in the cryptographic operations impossible to measure remotely. Note that early use of cryptography, as in the TLS handshake, might still be affected, since at that early stage the other overhead and variation in the application do not yet have much effect.

    It also means that the attacker must be able to run enough experiments to finally extract the secrets. The details depend on how much information is actually leaked by the timing differences, but it is often practically infeasible.

    And then there might simply be less complex attacks possible. If it is a local application, it might be cheaper to just reverse engineer it instead of treating it mostly as a black box. Of course, if the application is included in some tamper-resistant enclosure (as with smartcards), then timing attacks or other side channels (radiation, variations in power draw, ...) might actually be the only way.

    In summary: resistance against timing attacks is relevant if a) the timing difference can be measured with sufficient granularity, b) enough experiments can be done, and c) no obviously cheaper attacks are possible. Given that a) and b) in particular depend a lot on the actual use case, it is all the more important that the cryptographic primitives themselves are resistant, since they might be used in a broad variety of use cases.
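To make the "enough experiments" point concrete, here is a hypothetical simulation (all numbers invented) of how averaging many measurements can recover a 1 microsecond timing signal buried in 100 microseconds of jitter:

```python
import random
import statistics

random.seed(0)  # deterministic for the example

TRUE_DIFF_US = 1.0   # real timing difference (hypothetical)
NOISE_SD_US = 100.0  # jitter from network, scheduler, etc. (hypothetical)

def measure(diff_us: float) -> float:
    """One noisy timing measurement of an operation."""
    return diff_us + random.gauss(0.0, NOISE_SD_US)

n = 100_000  # number of experiments the attacker can run
baseline = [measure(0.0) for _ in range(n)]
target = [measure(TRUE_DIFF_US) for _ in range(n)]

# Each sample's noise is 100x the signal, but the difference of
# means has standard error NOISE_SD_US * sqrt(2 / n), about 0.45 us
# here, so the 1 us difference becomes measurable.
estimate = statistics.mean(target) - statistics.mean(baseline)
```

This is why "the network is too noisy" on its own is a weak defense: the attacker's cost scales with the noise, but rarely becomes impossible.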






















































      The primary purpose of avoiding variable time comparisons is to prevent unintended information disclosure that leads to timing-based attacks on the application.

      For example, take a look at the Padding Oracle attack and time-based Blind SQL Injection attacks. Eliminating every variable time comparison in a program comes at a certain cost, and the cryptography Python package is trying to avoid that cost in the cases where no information disclosure results.

      It is easy to miss such use cases in security-relevant code, so I'd prefer to avoid variable time comparisons.

































      • "Clearing up all the variable time comparisons in the program comes at a certain cost which the cryptography python package is trying to avoid since they don't see any information disclosure due to those cases." Sorry, what is the cryptography python package trying to avoid? Costs? Please clarify that section as it is extremely hard to read.

        – Maarten Bodewes
        Apr 15 at 13:21


































      The important thing about leaked information is to identify what information is leaked, given a certain set of circumstances, and what the attacker would know and/or control.

      For example, let's take password checking. In the bad old days, plaintext passwords were often compared one character at a time, by a library with no thought about security. This means that if you got the first letter of the password right, the check would take slightly longer than if you got it wrong (since it would go on to check the second letter). You could try every possible first letter until you hit the right one, then that first letter plus every possible second letter, and so forth. This way, you could reduce the work needed from 64^6 to 64*6 (assuming 6 symbols from a set of 64 possible symbols).
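A short sketch of both the short-circuiting comparison and the cost arithmetic from the paragraph above (the function name is illustrative):

```python
def variable_time_equal(a: str, b: str) -> bool:
    # Returns on the first mismatch, so the comparison takes
    # longer the more leading characters match.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

# Cost of attacking a 6-symbol password over a 64-symbol alphabet:
brute_force = 64 ** 6   # try every full password: 68,719,476,736
timing_attack = 64 * 6  # try every symbol per position: 384
```

The gap between 64^6 and 64*6 guesses is what turns a tiny per-character timing difference into a practical attack.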



      Even if you hash the password before checking it, you could apply the same idea using precomputed rainbow tables: find a password whose hash starts with byte 00; if that takes longer to check than a password whose hash starts with 01, the stored hash likely starts with 00, and you can eliminate all passwords whose hashes do not share that prefix.



      In the example of encryption, if the message being decrypted is supplied by the attacker, then they know how big it is. If it takes twice as long to read a message twice as long, they don't learn anything new from that.



      If, on the other hand, decryption takes longer when the key is different, then the time it took to decrypt a message gives the attacker information about the key that they would not otherwise have.

































        5 Answers
        5






        active

        oldest

        votes








        5 Answers
        5






        active

        oldest

        votes









        active

        oldest

        votes






        active

        oldest

        votes









        12
















        No, they are not always a risk, and indeed, it is impossible to build a practical cryptographic library where every function is constant time.



        Take, for instance, a decryption function. In a real-world cryptographic library, it is going to be faster to decrypt a 10 byte message than a 10 gigabyte message. You could theoretically create a library where decryption was constant time for any message of supported size, but that would be absurd, so you have a variable time function that leaks (useless) information about the size of the message being decrypted. One of the specific reasons that this isn't an issue is the assumption that the attacker may have information about the message size is built into the security proofs. Security properties are generally designed to only assume security for messages of a specific size, and since they don't apply to messages of different sizes, there is no impact to the designed security of a system if message size affects function completion timing.



        In other cases, timing may be potentially sensitive, but a function may be designed to eliminate any information an attacker could use against the function. Double-MAC'ing is one example of this. Evaluation of a MAC is very definitely a function that is sensitive to timing attacks. In an (for example) encrypt then MAC scheme, you compare a MAC sent with a ciphertext with a MAC freshly computed against the ciphertext, a timing attack could allow an adversary to repeated submit modified ciphertext/MAC combinations and iteratively compute a valid MAC without knowing the correct secret key. If, however, on every comparison you re-MAC the MAC with an ephemeral key and re-compare, it doesn't matter if there are timing issues. The ephemeral key in the second MAC will prevent an adversary from being able to iteratively produce a correct MAC because the new ephemeral key produced for each subsequent evaluation will thwart the effort to discover valid bits.






        share|improve this answer




















        • 17





          I always assumed that constant time functions were in regards to the content of m, not the length of m. So for example, a comparison should not be faster if the first bit is unequal, than as if everything is identical.

          – MechMK1
          Apr 15 at 9:21












        • I guess that's a question of context. Generally the size of the message is leaked by other means (e.g. by just counting the bytes sent), so a time constant function for ciphertext with different sizes would be useless. I'd at least indicate explicitly if the size of the ciphertext needs to be obfuscated by any constant time function; most cryptographers would not expect that to be the case.

          – Maarten Bodewes
          Apr 15 at 13:15











        • @MechMK1 Exactly... The example does not sound convincing. It would be way more convincing when thinking about compression+encryption and the attacker having control over part of the plaintext since the timing can give an idea of the compression rate and that gives an idea of what is in the rest of the plaintext.

          – Bakuriu
          Apr 15 at 21:23















        12
















        No, they are not always a risk, and indeed, it is impossible to build a practical cryptographic library where every function is constant time.



        Take, for instance, a decryption function. In a real-world cryptographic library, it is going to be faster to decrypt a 10 byte message than a 10 gigabyte message. You could theoretically create a library where decryption was constant time for any message of supported size, but that would be absurd, so you have a variable time function that leaks (useless) information about the size of the message being decrypted. One of the specific reasons that this isn't an issue is the assumption that the attacker may have information about the message size is built into the security proofs. Security properties are generally designed to only assume security for messages of a specific size, and since they don't apply to messages of different sizes, there is no impact to the designed security of a system if message size affects function completion timing.



        In other cases, timing may be potentially sensitive, but a function may be designed to eliminate any information an attacker could use against the function. Double-MAC'ing is one example of this. Evaluation of a MAC is very definitely a function that is sensitive to timing attacks. In an (for example) encrypt then MAC scheme, you compare a MAC sent with a ciphertext with a MAC freshly computed against the ciphertext, a timing attack could allow an adversary to repeated submit modified ciphertext/MAC combinations and iteratively compute a valid MAC without knowing the correct secret key. If, however, on every comparison you re-MAC the MAC with an ephemeral key and re-compare, it doesn't matter if there are timing issues. The ephemeral key in the second MAC will prevent an adversary from being able to iteratively produce a correct MAC because the new ephemeral key produced for each subsequent evaluation will thwart the effort to discover valid bits.






        share|improve this answer




















        • 17





          I always assumed that constant time functions were in regards to the content of m, not the length of m. So for example, a comparison should not be faster if the first bit is unequal, than as if everything is identical.

          – MechMK1
          Apr 15 at 9:21












        • I guess that's a question of context. Generally the size of the message is leaked by other means (e.g. by just counting the bytes sent), so a time constant function for ciphertext with different sizes would be useless. I'd at least indicate explicitly if the size of the ciphertext needs to be obfuscated by any constant time function; most cryptographers would not expect that to be the case.

          – Maarten Bodewes
          Apr 15 at 13:15











        • @MechMK1 Exactly... The example does not sound convincing. It would be way more convincing when thinking about compression+encryption and the attacker having control over part of the plaintext since the timing can give an idea of the compression rate and that gives an idea of what is in the rest of the plaintext.

          – Bakuriu
          Apr 15 at 21:23













        12














        12










        12









        No, they are not always a risk, and indeed, it is impossible to build a practical cryptographic library where every function is constant time.



        Take, for instance, a decryption function. In a real-world cryptographic library, it is going to be faster to decrypt a 10 byte message than a 10 gigabyte message. You could theoretically create a library where decryption was constant time for any message of supported size, but that would be absurd, so you have a variable time function that leaks (useless) information about the size of the message being decrypted. One of the specific reasons that this isn't an issue is the assumption that the attacker may have information about the message size is built into the security proofs. Security properties are generally designed to only assume security for messages of a specific size, and since they don't apply to messages of different sizes, there is no impact to the designed security of a system if message size affects function completion timing.



        In other cases, timing may be potentially sensitive, but a function may be designed to eliminate any information an attacker could use against the function. Double-MAC'ing is one example of this. Evaluation of a MAC is very definitely a function that is sensitive to timing attacks. In an (for example) encrypt then MAC scheme, you compare a MAC sent with a ciphertext with a MAC freshly computed against the ciphertext, a timing attack could allow an adversary to repeated submit modified ciphertext/MAC combinations and iteratively compute a valid MAC without knowing the correct secret key. If, however, on every comparison you re-MAC the MAC with an ephemeral key and re-compare, it doesn't matter if there are timing issues. The ephemeral key in the second MAC will prevent an adversary from being able to iteratively produce a correct MAC because the new ephemeral key produced for each subsequent evaluation will thwart the effort to discover valid bits.






        share|improve this answer













        No, they are not always a risk, and indeed, it is impossible to build a practical cryptographic library where every function is constant time.



        Take, for instance, a decryption function. In a real-world cryptographic library, it is going to be faster to decrypt a 10 byte message than a 10 gigabyte message. You could theoretically create a library where decryption was constant time for any message of supported size, but that would be absurd, so you have a variable time function that leaks (useless) information about the size of the message being decrypted. One of the specific reasons that this isn't an issue is the assumption that the attacker may have information about the message size is built into the security proofs. Security properties are generally designed to only assume security for messages of a specific size, and since they don't apply to messages of different sizes, there is no impact to the designed security of a system if message size affects function completion timing.



        In other cases, timing may be potentially sensitive, but a function can be designed to eliminate any information an attacker could use against it. Double-MACing is one example of this. Evaluation of a MAC is very definitely an operation that is sensitive to timing attacks. In, for example, an encrypt-then-MAC scheme, you compare the MAC sent with a ciphertext against a MAC freshly computed over that ciphertext; a timing attack on this comparison could allow an adversary to repeatedly submit modified ciphertext/MAC combinations and iteratively construct a valid MAC without knowing the secret key. If, however, on every comparison you re-MAC both values with an ephemeral key and compare the results, it doesn't matter if the comparison has timing issues. The ephemeral key in the second MAC prevents the adversary from iteratively producing a correct MAC, because the fresh ephemeral key generated for each evaluation thwarts any attempt to discover valid bits.
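        The double-MAC trick described above can be sketched in a few lines of Python using only the standard library (the function and variable names here are illustrative, not taken from any particular library):

```python
import hashlib
import hmac
import os

def verify_tag(expected_tag: bytes, received_tag: bytes) -> bool:
    """Compare two MAC tags without leaking usable timing information."""
    # A fresh ephemeral key is generated for every verification attempt,
    # so timing differences in the final comparison only reveal facts
    # about values the attacker never sees twice.
    ephemeral_key = os.urandom(32)
    a = hmac.new(ephemeral_key, expected_tag, hashlib.sha256).digest()
    b = hmac.new(ephemeral_key, received_tag, hashlib.sha256).digest()
    # Even a naive, early-exit comparison is safe here: the attacker
    # cannot iterate on it, since the key changes on every call.
    return a == b
```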

















        answered Apr 15 at 2:49









        Xander
















          I always assumed that constant time functions were in regards to the content of m, not the length of m. So for example, a comparison should not be faster if the first bit is unequal than if everything is identical.

          – MechMK1
          Apr 15 at 9:21












        • I guess that's a question of context. Generally the size of the message is leaked by other means (e.g. by just counting the bytes sent), so a time constant function for ciphertext with different sizes would be useless. I'd at least indicate explicitly if the size of the ciphertext needs to be obfuscated by any constant time function; most cryptographers would not expect that to be the case.

          – Maarten Bodewes
          Apr 15 at 13:15











        • @MechMK1 Exactly... The example does not sound convincing. It would be way more convincing when thinking about compression+encryption and the attacker having control over part of the plaintext since the timing can give an idea of the compression rate and that gives an idea of what is in the rest of the plaintext.

          – Bakuriu
          Apr 15 at 21:23









































        Variable time comparisons will leak information. If there is no harm in revealing the data being compared, then there is no issue with using variable time comparisons. If it is crucial that it remain secret, then use constant time comparison.



        Some variable time comparisons that are exploitable in theory might be hard to exploit in practice. The quality of timing data that a hacker is able to collect might not be good enough for a timing attack to succeed in certain real world contexts.



        For example, a timing attack is more difficult if the software processing secret information is running on hardware that already has inconsistent, unpredictable run times (for identical inputs); even if someone has access to nanosecond precise timing data.



        If the secret information also isn't that sensitive and has a short lifespan, then the risk associated with such a timing attack might be too small to worry about.



        However, there are ways to work around noisy timing data. Maybe a hacker uses other side channels to supplement timing data. Maybe he sends your server specific requests that make timing more predictable. And of course he can perform the attack many times and average the noise out.
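        The "average out the noise" point can be illustrated with a toy simulation (all numbers below are made up for illustration; this is not a real attack):

```python
import random
import statistics

random.seed(0)  # deterministic for the example

def noisy_measurement(signal_ns: float, jitter_ns: float = 50.0) -> float:
    """One timing sample: a secret-dependent signal buried in random jitter."""
    return signal_ns + random.gauss(0.0, jitter_ns)

# A single sample cannot distinguish a 100 ns code path from a 105 ns one
# under 50 ns of jitter, but the mean of many samples can.
samples_a = [noisy_measurement(100.0) for _ in range(10_000)]
samples_b = [noisy_measurement(105.0) for _ in range(10_000)]
print(statistics.mean(samples_a) < statistics.mean(samples_b))
```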



        (As a developer, if you're not sure, then use constant time operations. Don't just assume it's too hard to exploit.)
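        In Python, for example, the standard library already ships a constant-time comparison, so there is rarely a reason to roll your own; the naive version below is shown only to make the leak visible:

```python
import hmac

def naive_compare(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: runtime depends on the length of the common prefix."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False  # leaks the position of the first mismatch
    return True

def safe_compare(a: bytes, b: bytes) -> bool:
    """hmac.compare_digest examines every byte regardless of mismatches."""
    return hmac.compare_digest(a, b)
```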




        Things that definitely should not be compared using a variable time algorithm:



        • Secret (symmetric) keys or private (asymmetric) keys

        • Message authentication tags

        • Messages (or hashes of messages) containing secrets

        • Hard-coded (or user configurable) secrets

        Examples for which it may or may not matter:



        • Public keys

        • Ciphertext

        • Public IP addresses

        • Public identifiers (excluding personally identifiable information)


        Remember that compromising user privacy can be a security issue, even if you don't personally think some kinds of information need to be kept private. Developers should therefore still consider side channel attacks, even when working with data that might not seem sensitive.













































            edited Apr 15 at 3:07

























            answered Apr 15 at 2:51









            Future Security









































                First, cryptography is a broad term, and the Python cryptography library includes not only primitives but also X.509 handling etc.



                The main question is whether the timing issues can be used by an attacker to get secrets, or to reduce the complexity of the problem (and thus the time needed to recover secrets) to a value which makes attacks realistic. For one thing, this means the attacker must be able to measure the time with sufficient granularity in the first place. This is usually not the case with, for example, complex web applications, where variations in network latency and in the execution time of SQL queries etc. will likely make small timing differences in the cryptographic operations impossible to measure remotely. Note that early use of cryptography, as in the TLS handshake, might still be affected, since at that early stage all the other overhead and variation in the application has little effect yet.



                It also means that the attacker must be able to run enough experiments to finally extract the secrets. The details depend on how much information is actually leaked through the timing differences, but this is often practically infeasible.



                And then there might simply be less complex attacks available. If it is a local application, it might be cheaper to just reverse engineer it instead of treating it mostly as a black box. Of course, if the application is, for example, enclosed in some tamper-resistant packaging (as with smartcards), then timing attacks or other side channels (radiation, variations in power consumption, ...) might actually be the only way.



                In summary: resistance against timing attacks is relevant if a) the timing difference can be measured with sufficient granularity, b) enough experiments can be done, and c) no obviously cheaper attacks are possible. Given that a) and b) in particular depend a lot on the actual use case, it is more important that the cryptographic primitives themselves are resistant, since they might be used in a broad variety of use cases.













































                    answered Apr 15 at 2:49









                    Steffen Ullrich









































                        The primary purpose of avoiding variable time comparisons is to prevent unintended information disclosure that enables timing-based attacks on the application.



                        For example, take a look at Padding Oracle attack and Time based Blind SQL Injection attack. Clearing up all the variable time comparisons in the program comes at a certain cost which the cryptography python package is trying to avoid since they don't see any information disclosure due to those cases.



                        It is easy to miss such use cases for security relevant code, so I'd prefer to avoid variable time comparisons.

































                        • "Clearing up all the variable time comparisons in the program comes at a certain cost which the cryptography python package is trying to avoid since they don't see any information disclosure due to those cases." Sorry, what is the cryptography python package trying to avoid? Costs? Please clarify that section as it is extremely hard to read.

                          – Maarten Bodewes
                          Apr 15 at 13:21























                        edited Apr 15 at 13:20









                        Maarten Bodewes











                        answered Apr 15 at 3:11









                        karanb192











































                        The important thing about leaked information is to identify what information is leaked under a given set of circumstances, and what the attacker would know and/or control.



                        For example, let's take password checking. In the bad old days, plaintext passwords were often compared one character at a time by library code written with no thought about security. This means that if you got the first letter of the password right, the check would take slightly longer than if you got the first letter wrong (since it would go on to check the second letter). You could try every possible first letter until you hit the right one, then that first letter plus every possible second letter, and so forth. This way, you could reduce the work needed from 64^6 to 64*6 guesses (assuming 6 symbols from a set of 64 possible symbols).
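                        The work-factor reduction in the paragraph above is easy to check directly (using that answer's assumption of 6 symbols drawn from a 64-symbol alphabet):

```python
symbols = 64
length = 6

# Without the timing oracle: try every full password.
blind_guesses = symbols ** length
# With the per-character oracle: fix one character at a time.
oracle_guesses = symbols * length

print(blind_guesses)   # 68719476736
print(oracle_guesses)  # 384
```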



                        Even if you hash the password before checking it, you can apply the same idea using precalculated rainbow tables: find a password whose hash starts with byte 00. If that takes longer to check than a password whose hash starts with 01, the stored hash must start with 00, and you can eliminate all passwords whose hashes do not start with 00.



                        In the encryption example, if the message being decrypted is supplied by the attacker, then they already know how big it is. If a message twice as long takes twice as long to decrypt, they learn nothing new from that.



                        If, on the other hand, decryption takes longer when the KEY is different, then the time taken to decrypt a message gives the attacker information about the key, information they could not otherwise obtain.
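As an aside, the usual defence against the prefix leak described earlier is a constant-time comparison that always examines every byte. Python's standard library ships one; a minimal sketch (the function and parameter names here are illustrative):

```python
import hmac

def check_password_hash(stored: bytes, supplied: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of where the
    # first mismatch occurs, so the running time does not depend on the
    # length of the matching prefix.
    return hmac.compare_digest(stored, supplied)
```

This is the kind of compare a security-conscious library uses in place of `==` when the result of the comparison touches a secret.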






                            answered Apr 15 at 20:42









                            AMADANON Inc.

                            1,471 · 7 silver badges · 9 bronze badges






























