


Why are oscilloscope input impedances so low?

































My question is two-fold:



Where does the input impedance come from?



I'm wondering where the input impedance of your average multimeter or oscilloscope comes from. Is it just the input impedance of the device's input stage (such as an amplifier or ADC input stage), or is it the impedance of an actual resistor? If it is an actual resistor, why is there a resistor at all? Why not just the input circuitry?

I measured the input impedance of my oscilloscope with a DMM. When the scope was turned off, the DMM measured about $1.2\,\mathrm{M\Omega}$. However, when the scope was turned on, the DMM measured almost exactly $1\,\mathrm{M\Omega}$ (I could even see the 1 V test signal applied by the DMM on the oscilloscope screen!). This suggests to me that active circuitry is involved in the scope's input impedance. If that is true, how can the input impedance be controlled so precisely? As I understand it, the input impedance of active circuitry depends somewhat on the exact transistor characteristics.

Why can't the input impedance be much higher?

Why is the input impedance of an oscilloscope a standard $1\,\mathrm{M\Omega}$? Why can't it be higher than that? FET input stages can achieve input impedances on the order of teraohms! Why have such a low input impedance?

I suppose one benefit of a precise, standard $1\,\mathrm{M\Omega}$ is that it allows 10X probes and the like, which only work if the scope has a precise input impedance that isn't unreasonably large (like that of a FET input stage). However, even if the scope had a really high input impedance (e.g., teraohms), it seems you could still have 10X probes just by putting a 10:1 voltage divider inside the probe itself, with the scope measuring across a $1\,\mathrm{M\Omega}$ resistor inside the probe. With a scope input impedance on the order of teraohms, this seems feasible.
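To sanity-check the divider-in-the-probe idea numerically, here is a minimal sketch. All component values are my own illustrative assumptions, not taken from any real probe:

```python
# Sketch of the hypothetical probe described above: a 10:1 divider built
# entirely inside the probe (9 Mohm series, 1 Mohm shunt), feeding a scope
# whose own input resistance is on the order of a teraohm.
# All values are illustrative assumptions, not taken from a real probe.

def parallel(a, b):
    """Equivalent resistance of two resistors in parallel."""
    return a * b / (a + b)

R_SERIES = 9e6   # series resistor in the probe tip (assumed)
R_SHUNT = 1e6    # shunt resistor the scope measures across (assumed)
R_SCOPE = 1e12   # hypothetical FET-input scope resistance

# The lower leg of the divider is the probe's 1 Mohm in parallel with the
# scope's (huge) input resistance, so the scope barely disturbs the ratio.
r_leg = parallel(R_SHUNT, R_SCOPE)
ratio = r_leg / (R_SERIES + r_leg)

print(f"division ratio = {ratio:.6f}")  # very close to 0.1, i.e. 10:1
```

With a teraohm scope the error in the 10:1 ratio from scope loading is about one part per million, which supports the "seems feasible" claim at DC at least.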



Am I misunderstanding the input circuitry of a scope? Is it more complicated than I'm making it out to be? What are your thoughts on this?



The reason I thought of this is that I've recently been trying to measure the common-mode input impedance of an emitter-coupled differential pair, which is much larger than the scope input impedance, so it made me wonder why the input impedance can't be larger.






























  • The topic is much more complex than you might think. You seem to be considering only the DC response, but in fact a scope must have a flat response all the way up to its specified bandwidth. This is a huge challenge, and standardizing on 1 MΩ/50 Ω makes the problem at least somewhat tractable for probe manufacturers.
    – Dave Tweed, May 4 at 11:16

  • Would you like to use my old scope? It can be configured for a 100 ohm input impedance. On the other hand, it was built in 1965, and the standard setup for it is a 1 MΩ input impedance. 1 M seems to have been standard for quite a while.
    – JRE, May 4 at 11:24

  • Don't forget that a ×10 probe has an input impedance of 10 MΩ.
    – D Duck, May 4 at 11:24

  • @DaveTweed So it is not feasible to have a FET input stage with high enough bandwidth? What are the input stages of scopes actually like?
    – hddh, May 4 at 11:26

  • Is it directly into the ADC? No: how would a scope be able to measure both 1 mV and 100 V? The usual configuration is BNC, then input protection and switchable attenuation, then the input stage (often FET-based), then the ADC. So yes, many are FET-based. You would not have an active device define the input impedance; there's a 1 M resistor to set it properly. I highly recommend that you study how things are done and ask yourself WHY before assuming "it must be..." or "it cannot be...", because you will confuse yourself.
    – Bimpelrekkie, May 4 at 12:03


















oscilloscope measurement probe input-impedance






asked May 4 at 11:03









hddh
223 · 1 silver badge · 10 bronze badges
















3 Answers



































I would say a combination of a few factors.



  1. The input stages of an oscilloscope are a difficult compromise. They need to have a wide range of gains/attenuations, they need to be tolerant of user errors, and they need to pass high bandwidths. Adding a requirement for a very high DC resistance would just further complicate matters. In particular, the attenuators needed to handle the higher end of the scope's input level range would become much more complex/sensitive if they also needed a very high DC resistance.

  2. It's a de facto standard; changing to something else would lead to incompatibilities with existing probes etc.

  3. There wouldn't be much benefit anyway.

To expand on point 3: at moderate frequencies (from a few kilohertz upwards) the 1 megohm DC resistance of the scope input is not the dominant factor in the overall input impedance. The dominant factor is the capacitance, with the cable probably making the largest contribution.
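To put rough numbers on this, here is a sketch of the magnitude of a 1 MΩ input in parallel with its capacitance; the 20 pF total (scope input plus cable) is an assumed, typical figure, not a measured one:

```python
# Rough numbers for how capacitance swamps the 1 Mohm input resistance at AC.
# The 20 pF total (scope input + cable) is an assumed, typical figure.
import math

R_IN = 1e6       # scope DC input resistance
C_IN = 20e-12    # assumed total input capacitance (scope + cable)

def z_mag(f_hz):
    """Magnitude of R_IN in parallel with C_IN at frequency f_hz."""
    z_c = 1 / complex(0, 2 * math.pi * f_hz * C_IN)
    z = (R_IN * z_c) / (R_IN + z_c)
    return abs(z)

for f in (1e3, 100e3, 1e6, 10e6):
    print(f"{f/1e6:7.3f} MHz : |Z| ~ {z_mag(f)/1e3:8.1f} kOhm")
# At 1 kHz the 1 Mohm resistor still dominates; by 1 MHz the
# capacitance has dragged |Z| below 10 kOhm.
```

So above a few hundred kilohertz the shunt capacitance, not the 1 MΩ resistor, sets what the circuit under test actually sees.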



(In fact, at UHF/microwave frequencies it's common to reduce the scope input impedance to 50 ohms, so the inductance of the cable balances out the capacitance and the cable becomes a properly matched transmission line.)



What this means is that if a high input impedance is desirable, it's much better to deal with it at the point of probing than at the scope. The typical compromise of cost, flexibility, and input impedance for general use is a x10 passive probe.



If you need a really high DC resistance, the solution is to add a FET-based amplifier in front of the scope, preferably as close to the point of measurement as possible.
































  • Is the input capacitance also specifically engineered, like the 1 MΩ input resistance, or is it just a parasitic element that is measured? (A non-precise input capacitance wouldn't be a problem, since attenuating probes have variable capacitors.) Would I be correct in saying that if attenuation circuitry were not needed, and we didn't worry about impedance matching at higher frequencies (in which case you might switch the input to 50 ohms), then it would be fine to feed the input directly into a high-impedance FET stage? Just trying to get the different reasons for this straight in my head.
    – hddh, May 4 at 23:56

  • I guess even then you'd still have probe/cable capacitance to worry about, but in that case adding 1 meg across it just makes the impedance even lower. And 10X probes could just have their own 1 meg resistor in parallel with the probe output. So basically: ignoring attenuating probes, impedance matching, and attenuation circuitry, I don't see any other reason for an input resistance as low as 1 meg, since it just makes the capacitance-dominated input impedance even lower (and the impedance-matching ship has already sailed at 1 meg input impedance anyway).
    – hddh, May 5 at 0:25

  • So my understanding so far: a 1 meg input resistance is preferable due to (a) the required attenuation circuitry, (b) the input impedance being dominated by capacitance anyway, and (c) simpler attenuating probe design. Impedance matching doesn't seem to be a reason, since you'd go down to 50 ohms in such cases anyway. Makes me wonder about multimeter input impedances (normally 10 meg), where only (a) seems to apply.
    – hddh, May 5 at 0:31

  • Another issue with high-impedance inputs is "phantom" voltages when they are not connected to anything. Even at 10 meg this can sometimes be noticeable. Some high-end multimeters do actually have the option to switch out the 10 meg resistor; I have access to such a meter but I don't think I've ever felt the need to use that feature.
    – Peter Green, May 5 at 0:40

  • @PeterGreen See if you can disable the 50/60 Hz suppression too, and you have a random number generator instead of a voltmeter while it is not connected to something.
    – rackandboneman, May 5 at 7:28



































A lot of things are the way they are because of history and de facto standardisation.



A general purpose oscilloscope input is a difficult compromise between not loading the circuit, not being damaged by high voltage, having reasonably low noise, and being able to maintain a decent bandwidth.



1 MΩ in parallel with 15 pF to 30 pF satisfies a lot of people for a lot of applications. There's little incentive for manufacturers to build a general-purpose oscilloscope with a different input to address tiny parts of the market.



When you do need lower noise, or a differential input, or a higher input impedance, you use a custom pre-amp. When you need a wider bandwidth, you switch to a 50 ohm input impedance.



There are special purpose oscilloscopes made at high prices that do address niche applications.






























  • Fair enough. So the input impedance (of a scope or meter) does not come from an actual resistor, but from active circuitry instead? (Am I crazy for not being sure about this?) Makes me wonder how they can control it precisely. I wonder if there are any schematics of scope input stages/front ends floating around the internet that I could have a look at.
    – hddh, May 4 at 11:31

  • @hddh "I still find it surprising that a FET input stage of sufficient bandwidth can't be engineered." Says who? There are FET probes with more than 1 GHz BW, for example: keysight.com/main/… Perhaps what you mean is that you want it inside the scope. That could be made, yet it would be unusable that way! You need a cable to connect your test point to your scope. That cable has capacitance. The whole point of a FET probe is that it has a low capacitance.
    – Bimpelrekkie, May 4 at 11:55

  • Pointers: EEVblog! Also, there are plenty of schematics to be found in the service manuals of, for example, older Tektronix scopes. "It clearly can't be a FET with a 1 MΩ input impedance (right?)" No, wrong: the input impedance is set by a resistor, then (often) a FET amplifier is used to amplify the voltage across that resistor. The 1 M is needed to have a properly defined impedance. Here's Dave reverse engineering the popular Rigol DS1054Z scope: youtube.com/watch?v=lJVrTV_BeGg&t=989s Its design is typical of many modern scopes.
    – Bimpelrekkie, May 4 at 12:08

  • And here's a service manual for a Tektronix 2215 analog scope; it has a block diagram and all the circuits. Yes, it is an old design, but the input stage will be very similar to many modern scopes: tek.com/manual/2215 For study purposes, this is very useful.
    – Bimpelrekkie, May 4 at 12:17

  • "...an ADC with a FET input stage isn't feasible because of the attenuation required before it to achieve the desired dynamic range?" Yes, dynamic range is indeed the answer. A variable attenuator helps to bring the signal into a range that is appropriate to both the input amplifier and the ADC.
    – Bimpelrekkie, May 4 at 12:18



































Actually, it is ridiculously high for a wideband input.



There is no practical connector or cable that actually has a characteristic impedance (in the transmission-line sense familiar to coaxial cablers, gold platers, and waveguide plumbers: the RF dudes) of 1 megaohm, leaving the input utterly mismatched. Even worse, a 15-45 pF capacitor across a 1 megaohm (transmission-line impedance) input would mismatch it to oblivion.
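A quick sketch of just how badly mismatched this is, using the standard reflection coefficient Γ = (ZL − Z0)/(ZL + Z0); the component values are assumed for illustration:

```python
# How badly a scope-style input terminates a 50 ohm line: the reflection
# coefficient is Gamma = (ZL - Z0) / (ZL + Z0). Values are illustrative.
import math

Z0 = 50.0  # characteristic impedance of ordinary coax

def gamma_mag(z_load):
    """Magnitude of the reflection coefficient for a load z_load."""
    return abs((z_load - Z0) / (z_load + Z0))

# A pure 1 Mohm load looks like an open circuit to the line:
print(gamma_mag(1e6))    # ~0.9999, near-total reflection

# 20 pF (an assumed input capacitance) at 100 MHz is a pure reactance,
# and a lossless reactance reflects all of the incident power:
z_cap = 1 / complex(0, 2 * math.pi * 100e6 * 20e-12)
print(gamma_mag(z_cap))  # 1.0
```

Either way, essentially everything arriving down the cable bounces back, which is what "mismatched to oblivion" means in numbers.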



The reason it is 1 megaohm is to support standard 10:1 probes, which you indeed need so as not to overload the kind of circuit that carries audio-frequency signals at high impedance and with a high DC offset (think audio vacuum tube circuits; the probe designs are from just that era).
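The standard compensated 10:1 probe can be sketched as a frequency-independent divider. This is an idealized model; the 20 pF scope-plus-cable capacitance is an assumed figure:

```python
# A standard 10:1 passive probe against a 1 Mohm scope input: 9 Mohm in the
# tip with a trimming capacitor C_TIP. When R_TIP*C_TIP == R_SCOPE*C_SCOPE
# the division ratio is 10:1 at every frequency (a "compensated" probe).
# The 20 pF scope-plus-cable capacitance is an assumed figure.
import math

R_TIP, R_SCOPE = 9e6, 1e6
C_SCOPE = 20e-12
C_TIP = R_SCOPE * C_SCOPE / R_TIP   # compensation condition -> ~2.22 pF

def divider(f_hz):
    """Magnitude of the probe's division ratio at frequency f_hz."""
    w = 2 * math.pi * f_hz
    z_tip = R_TIP / complex(1, w * R_TIP * C_TIP)          # R_TIP || C_TIP
    z_scope = R_SCOPE / complex(1, w * R_SCOPE * C_SCOPE)  # R_SCOPE || C_SCOPE
    return abs(z_scope / (z_tip + z_scope))

for f in (0.0, 1e3, 1e6, 100e6):
    print(f"{f:>11.0f} Hz : ratio = {divider(f):.6f}")
# With the time constants matched, the ratio stays at 0.1 from DC to RF.
```

Note that this trick only works because the scope end is a known, standardized 1 MΩ; the probe's trimmer is there to absorb the unit-to-unit spread in the scope's actual input capacitance.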



However, once you are dealing with RF or fast digital circuitry, the parallel capacitance of the scope input (which you can't make too small, again because of probes, cables, and connectors) will dominate, bringing the actual input impedance down to 5-10 kiloohms at one megahertz and 500-1000 ohms at 10 megahertz. Reach VHF (hint: ACMOS or F-TTL circuitry is VHF stuff even if you don't clock it at VHF) and you would be better off with a matched 50 ohm input, since you could connect a (within reason) long 50 ohm cable and still have a 50 ohm input at the circuit end, instead of an even bigger capacitive burden.



With the conventional kind of probe and input, you will easily overload RF circuitry. RF-optimized oscilloscopes tend to have inputs that can be switched to 50 ohm input impedance (any oscilloscope input can be, with a parallel/feed-through terminator), which is, interestingly, BETTER suited, since now you can use probes (e.g. Z0 probes or active FET probes) that actually can be made to present much higher effective input impedances at the probe point. Or just provide a reliable 50 ohm connection to your circuit with any old RG58 cable.
































  • If I understand correctly: So you’re saying that 1 megohm doesn’t help with impedance matching, and you’d be better off with 50 ohm inputs in those cases. So if the impedance-matching ship has sailed with 1 meg, then why is a low input impedance of 1 meg necessary? The reason I’ve gathered for this from other answers is that the required input attenuation circuitry makes this infeasible. Are there other reasons? (Also, is the scope input capacitance intentional like the 1 meg, or is it parasitic? - i.e., could it easily be reduced?)
    – hddh
    May 5 at 0:08










  • @hddh it was parasitic once, then it likely became intentional :)
    – rackandboneman
    May 5 at 7:24












3 Answers








I would say a combination of a few factors.



  1. The input stages of an oscilloscope are a difficult compromise. They need to have a wide range of gains/attenuations, they need to be tolerant of user errors, and they need to pass high bandwidths. Adding a requirement for a very high DC resistance would further complicate matters. In particular, the attenuators needed to handle the higher end of the scope's input level range would become much more complex and sensitive if they had to present a very high DC resistance.

  2. It's a de-facto standard, changing to something else would lead to incompatibilities with existing probes etc.

  3. There wouldn't be much benefit anyway.

To further explain point 3: at moderate frequencies (from a few kilohertz upwards) the 1 megohm DC resistance of the scope input is not the dominant factor in the overall input impedance. The dominant factor is the capacitance, with the cable probably making the largest contribution.



(In fact, at UHF/microwave frequencies it's common to reduce the scope input impedance to 50 ohms, so that the inductance of the cable balances its capacitance and the cable becomes a properly matched transmission line.)



What this means is that if a high input impedance is desirable, it's much better to deal with that at the point of probing than at the scope. The typical compromise of cost/flexibility/input impedance for general use is a x10 passive probe.
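As a sketch of why the x10 passive probe works out so neatly: the 9 Mohm tip resistor against the scope's 1 Mohm gives the 10:1 ratio, and the probe's trimmer capacitor matches the two RC time constants for a flat response. (The 100 pF scope-plus-cable capacitance below is an assumed round number.)

```python
R_tip, R_scope = 9e6, 1e6    # standard 10:1 passive probe divider
C_scope = 100e-12            # assumed scope input + cable capacitance

dc_ratio = (R_tip + R_scope) / R_scope  # resistive division: 10:1
# A frequency-flat response needs matched time constants,
# R_tip * C_tip == R_scope * C_scope -- this is what the trimmer adjusts.
C_tip = R_scope * C_scope / R_tip
# Capacitance loading the circuit: C_tip in series with C_scope.
C_eff = C_tip * C_scope / (C_tip + C_scope)
print(dc_ratio, C_tip * 1e12, C_eff * 1e12)  # 10:1, ~11 pF tip, ~10 pF load
```

So besides raising the resistive load to 10 Mohm, the probe cuts the capacitive load on the circuit by roughly a factor of ten, at the cost of a 10x smaller signal at the scope.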



If you need a really high DC resistance then the solution is to add a FET based amplifier in front of the scope, preferably as close to the point of measurement as possible.






  • Is the input capacitance also specifically engineered, like the 1 Mohm input impedance, or is it just a parasitic element that is measured? (A non-precise input capacitance wouldn’t be a problem since attenuating probes have variable capacitors.) Would I be correct in saying that: if attenuation circuitry was not needed, and we didn’t worry about impedance matching at higher frequencies (in which case you might have an input switchable to 50 ohms), then it would be fine to feed the input directly into a high-impedance FET stage? Just trying to get the different reasons for this straight in my head.
    – hddh
    May 4 at 23:56










  • I guess even then, you’d still have probe/cable capacitance to worry about, but in that case adding 1meg across it is just going to make the impedance even lower. And 10X probes could just have their own 1meg resistor in parallel with the probe output. So basically: ignoring attenuating probes, impedance matching, and attenuation circuitry, I don’t see any other reasons for an input resistance as low as 1meg, since it would just make the input impedance due to capacitance even lower (and the impedance-matching ship would have already sailed at 1meg input impedance anyway).
    – hddh
    May 5 at 0:25










  • So my understanding so far: 1meg input resistance is preferable due to: (a) required attenuation circuitry, (b) input impedance is dominated by capacitance anyway, (c) it makes attenuating probe design simpler. Impedance matching doesn’t seem to be a reason since you’d go down to 50ohms in such cases anyway. Makes me wonder about multimeter input impedances (normally 10meg), where only (a) seems to apply.
    – hddh
    May 5 at 0:31






  • Another issue with high impedance inputs is "phantom" voltages when they are not connected to anything. Even at 10 meg this can be noticeable sometimes. Some high-end multimeters do actually have the option to switch out the 10 meg resistor; I have access to such a meter but I don't think I've ever felt the need to use said feature.
    – Peter Green
    May 5 at 0:40










  • @PeterGreen see if you can disable the 50/60Hz suppression too, and you have a random number generator instead of a voltmeter while it is not connected to something.
    – rackandboneman
    May 5 at 7:28















edited May 4 at 16:38
answered May 4 at 14:01
Peter Green
12.8k 1 gold badge 22 silver badges 42 bronze badges















A lot of things are the way they are because of history, and de facto standardisation.



A general purpose oscilloscope input is a difficult compromise between not loading the circuit, not being damaged by high voltage, having reasonably low noise, and being able to maintain a decent bandwidth.



1 Mohm in parallel with 15 pF to 30 pF satisfies a lot of people for a lot of applications. There's little incentive for manufacturers to build a general purpose oscilloscope with a different input, to address tiny parts of the market.



When you do need better noise, or a differential input, or a higher input impedance, then you use a custom pre-amp. When you need wider bandwidth, you switch to a 50 ohm input impedance.



There are special purpose oscilloscopes made at high prices that do address niche applications.






  • Fair enough. So the input impedance (to a scope or meter) does not come from an actual resistor, but from active circuitry instead? (Am I crazy for not being sure about this?) Makes me wonder how they can precisely control it. I wonder if there are any schematics of scope input stages/front ends floating around the internet that I could have a look at.
    – hddh
    May 4 at 11:31






  • @hddh I still find it surprising that a FET input stage of sufficient bandwidth can't be engineered Says who? There are FET probes with more than 1 GHz BW, for example: keysight.com/main/… Perhaps what you mean is that you want it inside the scope. That could be made, yet it would be unusable that way! You need a cable to connect your testpoint to your scope. That cable has capacitance. The whole point of a FET probe is that it has a low capacitance.
    – Bimpelrekkie
    May 4 at 11:55







  • Pointers: EEVBlog ! Also there are plenty of schematics to be found in service manuals of, for example, older Tektronix scopes. It clearly can’t be a FET with a 1Mohm input impedance (right?). No, wrong: the input impedance is set by a resistor, then (often) a FET amplifier is used to amplify the voltage across that resistor. The 1 M is needed to have a properly defined impedance. Here's Dave reverse engineering the popular Rigol DS1054Z scope: youtube.com/watch?v=lJVrTV_BeGg&t=989s Its design is typical of many modern scopes.
    – Bimpelrekkie
    May 4 at 12:08







  • And here's a service manual for a Tektronix 2215 analog scope; it has a block diagram and all the circuits. Yes, it is an old design, but the input stage will be very similar to many modern scopes: tek.com/manual/2215 for study purposes, this is very useful.
    – Bimpelrekkie
    May 4 at 12:17






  • ..ADC w/ FET input stage isn’t feasible is because of the attenuation required before it to achieve the desired dynamic range? Yes, dynamic range is indeed the answer. A variable attenuator helps to bring the signal into a range that is appropriate to both the input amplifier and the ADC.
    – Bimpelrekkie
    May 4 at 12:18















answered May 4 at 11:15









Neil_UK

87.7k rep, 2 gold badges, 90 silver badges, 203 bronze badges




  • $begingroup$
    Fair enough. So the input impedance (to a scope or meter) does not come from an actual resistor, but from active circuitry instead? (Am I crazy for not being sure about this?) Makes me wonder how they can precisely control it. I wonder if there are any schematics of scope input stages/front ends floating around the internet that I could have a look at.
    $endgroup$
    – hddh
    May 4 at 11:31






  • 4




    $begingroup$
    @hddh "I still find it surprising that a FET input stage of sufficient bandwidth can't be engineered" Says who? There are FET probes with more than 1 GHz BW, for example: keysight.com/main/… Perhaps what you mean is that you want it inside the scope. That could be made, yet it would be unusable that way! You need a cable to connect your test point to your scope. That cable has capacitance. The whole point of a FET probe is that it has a low capacitance.
    $endgroup$
    – Bimpelrekkie
    May 4 at 11:55







  • 2




    $begingroup$
    Pointers: EEVBlog! Also there are plenty of schematics to be found in service manuals of, for example, older Tektronix scopes. "It clearly can’t be a FET with a 1 Mohm input impedance (right?)" No, wrong: the input impedance is set by a resistor, then (often) a FET amplifier is used to amplify the voltage across that resistor. The 1 Mohm is needed to have a properly defined impedance. Here's Dave reverse engineering the popular Rigol DS1054Z scope: youtube.com/watch?v=lJVrTV_BeGg&t=989s Its design is typical of many modern scopes.
    $endgroup$
    – Bimpelrekkie
    May 4 at 12:08







  • 2




    $begingroup$
    And here's a service manual for a Tektronix 2215 analog scope; it has a block diagram and all the circuits. Yes, it is an old design, but the input stage will be very similar to many modern scopes: tek.com/manual/2215 For study purposes, this is very useful.
    $endgroup$
    – Bimpelrekkie
    May 4 at 12:17






  • 1




    $begingroup$
    ..ADC w/ FET input stage isn’t feasible is because of the attenuation required before it to achieve the desired dynamic range? Yes, dynamic range is indeed the answer. A variable attenuator helps to bring the signal into a range that is appropriate to both the input amplifier and the ADC.
    $endgroup$
    – Bimpelrekkie
    May 4 at 12:18
















5
















$begingroup$

Actually, it is ridiculously high for a wideband input.



There is no practical connector or cable that actually has a characteristic impedance (impedance in the transmission-line sense; it would just be resistance, except to coaxial cablers, gold platers, and waveguide plumbers: the RF dudes) of 1 megaohm, leaving the input utterly mismatched. Even worse, a 15–45 pF capacitor across a 1 megaohm (transmission-line impedance) input would mismatch it to oblivion.



The reason it is 1 megaohm is to support standard 10:1 probes, which you indeed need so as not to overload the kind of circuit that carries audio-frequency signals at high impedance and with a high DC offset (think audio vacuum-tube circuits; the probe designs are from just that era).
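A rough sketch of why the 1 megaohm input and the 10:1 probe go together. The values here are the textbook ones (9 Mohm tip resistor, ~20 pF of scope input plus cable capacitance lumped together), assumed for illustration rather than taken from any particular probe. When the tip's compensation capacitor satisfies R_tip * C_comp = R_scope * C_scope, the division ratio is flat with frequency, and the circuit under test sees only the series combination of the two capacitances:

```python
# Textbook values for a standard 10:1 passive probe into a 1 Mohm scope input
# (assumed for illustration; real probes lump cable capacitance in here too).
R_TIP = 9e6        # series resistor in the probe tip, ohms
R_SCOPE = 1e6      # scope input resistance, ohms
C_SCOPE = 20e-12   # scope input + cable capacitance, farads (lumped)

# DC division ratio of the resistive divider: (9M + 1M) / 1M = 10
attenuation = (R_TIP + R_SCOPE) / R_SCOPE

# Frequency compensation: the tip cap must satisfy R_TIP * C_COMP = R_SCOPE * C_SCOPE
c_comp = R_SCOPE * C_SCOPE / R_TIP

# With the divider compensated, the circuit under test sees the two caps in series
c_seen = c_comp * C_SCOPE / (c_comp + C_SCOPE)

print(f"attenuation        : {attenuation:.0f}:1")
print(f"compensation cap   : {c_comp * 1e12:.2f} pF")
print(f"capacitance at tip : {c_seen * 1e12:.2f} pF")
```

With these numbers the probe tip presents about 2 pF instead of the scope's 20 pF, which is exactly the loading advantage a 10:1 probe buys at the cost of signal amplitude.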



However, once you are dealing with RF or fast digital circuitry, the parallel capacitance of the scope input (which you can't make too small, again because of probes, cables, and connectors) will dominate, and bring the actual input impedance of that input down to 5 to 10 kiloohms by the time you reach one megahertz, and 500 to 1000 ohms at 10 megahertz. Reach VHF (hint: ACMOS or F-TTL circuitry is VHF stuff even if you don't clock it at VHF) and you would be better off with a matched 50 ohm input, since you could connect a (within reason) long 50 ohm cable and still have a 50 ohm input at the circuit end, instead of an even bigger capacitive burden.
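The figures in the paragraph above follow directly from capacitive reactance, |Xc| = 1/(2*pi*f*C). A quick check over the typical 15–30 pF input-capacitance range reproduces them:

```python
import math

def x_c(f_hz, c_farad):
    """Capacitive reactance magnitude: |Xc| = 1 / (2*pi*f*C)."""
    return 1.0 / (2 * math.pi * f_hz * c_farad)

# Assumed 15-30 pF of scope input capacitance, as in typical data sheets
for c in (15e-12, 30e-12):
    for f in (1e6, 10e6, 100e6):
        print(f"C = {c * 1e12:2.0f} pF, f = {f / 1e6:5.0f} MHz -> |Xc| ~ {x_c(f, c):7.0f} ohm")
```

That gives roughly 5.3–10.6 kohm at 1 MHz and 530–1060 ohm at 10 MHz, matching the 5–10 kiloohm and 500–1000 ohm figures; by 100 MHz the input looks like well under 100 ohm of shunt reactance.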



With the conventional kind of probe and input, you will easily overload RF circuitry. RF-optimized oscilloscopes tend to have inputs that can be switched to 50 ohm input impedance (any oscilloscope input can be, with a parallel or feed-through terminator), which is, interestingly, BETTER suited, since now you can use probes (e.g. Z0 probes or active FET probes) that actually can be made to present much higher effective input impedances at the probe point. Or just provide a reliable 50 ohm connection to your circuit with any old RG58 cable.






$endgroup$














edited May 4 at 19:51

























answered May 4 at 19:42









rackandboneman

2,245 rep, 5 silver badges, 9 bronze badges




  • $begingroup$
    If I understand correctly: So you’re saying that 1megaohm doesn’t help with impedance matching, and you’d be better with 50ohm inputs in those cases. So if the impedance-matching ship has sailed with 1meg, then why is a low input impedance of 1meg necessary? The reason I’ve gathered for this from other answers is that the required input attenuation circuitry makes this infeasible. Are there other reasons? (Also is the scope input capacitance intentional like the 1meg, or is it parasitic? - i.e., could it be easily be reduced?)
    $endgroup$
    – hddh
    May 5 at 0:08










  • $begingroup$
    @hddh it was parasitic once, then it likely became intentional :)
    $endgroup$
    – rackandboneman
    May 5 at 7:24















