


Are space camera sensors usually round, or square?


A series of round lenses produces a round image on some sort of sensor or sensor array.

When it comes to cameras and telescopes out in space, are the sensors also round?

It seems like most of the images I've seen from space telescopes and cameras are rectangular, but I'm not sure if that's cropping in post-processing or something more raw.

General answers are fine, as are specific well-known examples (like HST, to pick one at random).

Tags: space-telescope, telescope

asked Sep 12 at 14:24 by Roger


  • lijat (Sep 12 at 19:21, +7): Many earth observation satellites are linear and capture the ground by moving over it.

  • uhoh (Sep 13 at 14:24, +1): @lijat space.stackexchange.com/search?q=pushbroom

  • Tombola (Sep 14 at 13:29, +2): Great question! Why are images presented as rectangles, instead of circles or pentagons or gingerbread-man shapes? Light is, if anything, round. The Sun is round, waves have no corners, our eyes are round; everything is round in the world of imaging, except pictures on our flat TV screens. What is this all about? I doubt that HST's mirror is designed to concentrate all light onto a rectangular CCD. I can demonstrate for myself that my reading glasses in sunlight do not concentrate light to a rectangle. Basic optics, as far as I understand it. Something needs to be explained here.

  • Sean (Sep 14 at 22:42, +1): @Tombola: Images are made up of pixels and displayed on flat surfaces. Pixels have to be triangular, square, or hexagonal, as these are the only regular shapes that can tessellate a flat plane without leaving gaps in between. Of these three possible shapes, squares (or rectangles) are used because they have fourfold (90-degree) symmetry, making it easy to identify pixels using x and y coordinates. And once you have square/rectangular pixels, a square/rectangular screen, displaying square/rectangular images, is the only shape that lets you avoid leaving sawtoothed gaps around the edges.

  • Mast (Sep 15 at 7:56, +1): @Tombola My monitor's surface is optimized for rectangular pictures. What does yours look like?

4 Answers
Answer 1 (score 29)
Kepler



The Kepler space telescope uses a bank of 21 rectangular CCD modules, each with two 2200×1024-pixel CCDs. Each module covers 5 square degrees on the sky.



https://keplerscience.arc.nasa.gov/the-kepler-space-telescope.html



[image: the Kepler focal plane CCD array]



This gives a field of view of:



[image: Kepler's field of view on the sky]
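As a quick sanity check on those numbers, here is a minimal Python sketch, using only the figures quoted above (21 modules, 2 CCDs per module, 2200×1024 pixels per CCD, 5 square degrees per module); everything else is plain arithmetic:

```python
# Back-of-the-envelope totals for the Kepler focal plane,
# based on the per-module figures quoted in this answer.
modules = 21
ccds_per_module = 2
ccd_width_px, ccd_height_px = 2200, 1024
fov_per_module_sq_deg = 5

total_ccds = modules * ccds_per_module                    # 42 CCDs
total_pixels = total_ccds * ccd_width_px * ccd_height_px  # ~94.6 million pixels
total_fov = modules * fov_per_module_sq_deg               # ~105 square degrees

print(f"{total_ccds} CCDs, {total_pixels / 1e6:.1f} Mpx, "
      f"~{total_fov} sq. deg. field of view")
```

Running it gives 42 CCDs, roughly 94.6 megapixels, and about 105 square degrees of sky coverage, consistent with the focal plane image above.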



Hubble



For Hubble, the wide-field camera CCD sensor is again rectangular/square:



https://www.teledyne-e2v.com/news/e2v-ccd-imaging-sensors-to-enable-nasas-hubble-space-telescope-to-explore-the-nature-and-history-of-our-universe-with-greater-capability-than-ever-before/



[image: e2v CCD imaging sensor for Hubble]



This WFC3 (Wide Field Camera 3) sensor covers just a small portion of Hubble's field of view compared to its full instrumentation coverage.



[image: footprints of all the instruments in Hubble's field of view]




This illustration shows the “footprints” of all the instruments in Hubble’s field of view. These include:



  • the fine guidance sensors (FGSs)

  • the Near Infrared Camera and Multi-Object Spectrometer (NICMOS)

  • the Space Telescope Imaging Spectrograph (STIS)

  • the Cosmic Origins Spectrograph (COS)

  • the Wide Field Camera 3 (WFC3)

  • the Advanced Camera for Surveys (ACS), which includes the Solar Blind Channel (SBC).

WFC3 and ACS are the two instruments involved in the Frontier Fields program.







edited Sep 13 at 5:54; answered Sep 12 at 14:43 by user21233

  • uhoh (Sep 12 at 15:15, +14): that's a really beautiful example! It's round(ish) in outline, and a (piecewise) curved surface as well, for field curvature.

  • Skyler (Sep 13 at 13:02, +12): I never realised that the "CMOS" in NICMOS did not refer to the camera sensor technology.

  • Tombola (Sep 14 at 14:13): What fraction of the light collected by the mirror is wasted between and around those rectangular CCD modules? 15% lost, perhaps?

  • Hagen von Eitzen (Sep 15 at 10:22, +1): @Tombola Much more light is wasted by not even entering ...

Answer 2 (score 10)

Silicon wafers are sliced from a giant single crystal of silicon called a boule, which is grown from a seed crystal dipped in and then slowly pulled from molten silicon.



Circuits such as CCDs (and everything else) are patterned on silicon wafers aligned to the crystal axes of the wafers, indicated by the wafer flat or notch (1, 2: see alignment flat on bottom; 3: see alignment notch on left).



This can sometimes be important for electrical reasons, but it is very important for mechanical reasons: you need to "dice" the thin, delicate wafers into individual die, and that is a lot easier to do along crystal planes than it would be trying to cut out a circle. Crystals like to break when cut off-axis; tiny microscopic cracks propagate along crystal planes, especially when the die are cut off-axis.



So if you have a rectilinear silicon die and a rectilinear circuit pattern and rectilinear readout system, there's absolutely no benefit to building a single-die circular sensor. (However, multi-die arrays are a different matter, as nicely illustrated in @Snow's answer!)



If your useful optical field is circular due to optical vignetting or aberration then you can mask your data electronically during processing.



A circle has 21% less area than the square in which it is inscribed:



$$ 1 - \frac{\pi}{4} \approx 21\% $$



so you could speed up data transmission from a spacecraft by 27% if you only sent back the data from an inscribed circular field of a square sensor:



$$ \frac{4}{\pi} - 1 \approx 27\%. $$



That's a meaningful amount of time savings, considering that some deep-space spacecraft (e.g. New Horizons) can spend months sending back all the image and other data from a flyby photoshoot. However, I think instead that they make the optics good enough to provide good image quality out to the corners and keep the whole square (or rectangular) image data.
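For concreteness, here is a small Python/NumPy sketch of both ideas: it evaluates the two area ratios above, and builds the kind of circular mask this answer mentions for discarding corner pixels during processing. The 1024×1024 sensor size is purely an illustrative assumption, not any real mission's format:

```python
import numpy as np

# Area ratios from the two formulas above.
saving = 1 - np.pi / 4    # ~0.215: circle has ~21% less area than its bounding square
overhead = 4 / np.pi - 1  # ~0.273: square holds ~27% more pixels than inscribed circle
print(f"circle vs. square: {saving:.1%} less area; "
      f"square vs. circle: {overhead:.1%} more data")

# Hypothetical square sensor; mask out everything outside the inscribed circle.
n = 1024                              # illustrative size, not a real instrument's
y, x = np.ogrid[:n, :n]               # broadcastable row/column coordinate grids
r = n / 2
mask = (x - r + 0.5) ** 2 + (y - r + 0.5) ** 2 <= r ** 2

kept = mask.sum() / mask.size         # fraction of pixels inside the circle
print(f"pixels kept by the circular mask: {kept:.1%} (ideal: {np.pi / 4:.1%})")
```

The mask keeps almost exactly π/4 ≈ 78.5% of the pixels, matching the 21% savings argued for above.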






edited Sep 13 at 2:07; answered Sep 13 at 1:48 by uhoh

  • Uwe (Sep 13 at 8:33, +2): The crystal drawn out of molten silicon and the crystal which is cut into wafers are not the same crystal. Many steps of purifying by zone melting are necessary to get silicon that may be used to make camera sensors. Molten silicon in a crucible is not pure enough.

  • uhoh (Sep 13 at 8:38, +3): @Uwe Thanks for that! en.wikipedia.org/wiki/Zone_melting I like to think of it as the "same crystal" because the recrystallizing areas are still lattice-coherent with the crystal that was there before melting. It's a little bit like humans, in that the stuff in our bodies is not the stuff that was there when we were younger, but we're still the same people (or at least like to believe we are). Our brain cells still remember the same stuff they knew before, even though a lot of their molecules are a lot younger than the memories.

  • leftaroundabout (Sep 14 at 10:16): I don't find this answer very convincing as it stands. Yes, for commercially-produced cameras the sensor is cut rectangularly from a wafer for cost and other reasons. But the wafer itself is circular, and for something as expensive as a space telescope the cost of using a single wafer of the appropriate size, rather than a piece of a larger wafer, should easily be trumped by the advantages of the circular sensor – if it has any.

  • uhoh (Sep 14 at 23:09, +1): @LocalFluff oh I didn't notice that, but that's probably easy. The diffraction pattern of a square aperture would be more difficult to manage in the images than that of a circle. Any high-resolution optical imaging system (telescopes, microscopes, cameras...) will have circular lenses and apertures, especially the entrance pupil. It's true that variable-diameter irises are not exactly circular but have six or more blades, but on good systems they work hard to approximate circles by rounding them: i.stack.imgur.com/BvKmj.jpg

  • uhoh (Sep 14 at 23:25, +1): @LocalFluff there's a new question, Wide angle camera of Lunar Reconnaissance Orbiter, were rectangular lenses used? If the lenses turn out to be rectangular, we will still see that the system's acceptance will still be defined by a circular aperture, and the diffraction limit also defined by that circle. They might cut out unused sections of the glass to save weight or space, but that missing glass would never have contributed to the image.









Answer 3 (score 6)

As another example, the Gaia mission illustrates that modern CCD production techniques allow form to follow function: it uses a creatively laid-out array of CCDs to integrate multiple functions into a single instrument:




[T]he three functions are built into a single instrument by using common telescopes and a shared focal plane:



  • The Astrometric instrument (ASTRO) is devoted to star angular position measurements, providing the five astrometric parameters [...]

  • The Photometric instrument provides continuous star spectra for astrophysics in the band 320-1000 nm and the ASTRO chromaticity calibration

  • The Radial Velocity Spectrometer (RVS) provides radial velocity and high resolution spectral data in the narrow band 847-874 nm

Each function is achieved within a dedicated area on the focal plane.




The result:



[image: the Gaia focal plane] (source)

[image: the Gaia CCD array] (source)




























Answer 4 (score 4)

There were a lot of square-format cameras:



    • The Voyager cameras had 800×800 pixels.

    • The LORRI camera of New Horizons had 1024×1024 pixels.

    • The Galileo cameras had 800×800 pixels.

    • The Cassini WAC and NAC cameras had 1024×1024 pixels.

    • The narrow and wide angle OSIRIS cameras of Rosetta had 2048×2048 pixels.

    • The FC camera of Dawn had 1024×1024 pixels.


    But there are also camera sensors that are neither square nor round. The narrow angle cameras of the Lunar Reconnaissance Orbiter use a line sensor with 1×5064 pixels. The maximum image size is 2.5 × 26 km at an altitude of 50 km. The pixel scale is 0.5 m per pixel, so the 2.5 km wide swath is resolved into 5000 pixels. The 26 km image length is recorded as the camera moves along its orbit around the Moon. The resulting image has 5000 × 52,000 pixels, as the sketch below reproduces.
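The pushbroom geometry described above is easy to reproduce. Here is a minimal Python sketch using only the numbers quoted in this answer; the function name and parameter names are for illustration only:

```python
def pushbroom_image_size(swath_km, along_track_km, pixel_scale_m, line_px):
    """Return (width_px, height_px) of a pushbroom image.

    swath_km        cross-track swath width on the ground
    along_track_km  distance covered while the sensor records lines
    pixel_scale_m   ground size of one pixel
    line_px         pixels in the line sensor (upper bound on image width)
    """
    width_px = min(int(swath_km * 1000 / pixel_scale_m), line_px)
    height_px = int(along_track_km * 1000 / pixel_scale_m)
    return width_px, height_px

# LRO NAC figures quoted above: 1x5064-pixel line sensor, 0.5 m/px,
# 2.5 km swath, 26 km along-track -> (5000, 52000) pixels.
print(pushbroom_image_size(swath_km=2.5, along_track_km=26,
                           pixel_scale_m=0.5, line_px=5064))
```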






    share|improve this answer












    $endgroup$
















      Your Answer








      StackExchange.ready(function()
      var channelOptions =
      tags: "".split(" "),
      id: "508"
      ;
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function()
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled)
      StackExchange.using("snippets", function()
      createEditor();
      );

      else
      createEditor();

      );

      function createEditor()
      StackExchange.prepareEditor(
      heartbeatType: 'answer',
      autoActivateHeartbeat: false,
      convertImagesToLinks: false,
      noModals: true,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: null,
      bindNavPrevention: true,
      postfix: "",
      imageUploader:
      brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
      contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/4.0/"u003ecc by-sa 4.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
      allowUrls: true
      ,
      noCode: true, onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      );



      );














      draft saved

      draft discarded
















      StackExchange.ready(
      function ()
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fspace.stackexchange.com%2fquestions%2f38740%2fare-space-camera-sensors-usually-round-or-square%23new-answer', 'question_page');

      );

      Post as a guest















      Required, but never shown


























      4 Answers
      4






      active

      oldest

      votes








      4 Answers
      4






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      29

















      $begingroup$

      Kepler



      The Kepler space telescope uses a bank of 21 rectangular CCD modules - each with two 2200x1024 pixel CCDs). Each module covers 5 square degrees on the sky.



      https://keplerscience.arc.nasa.gov/the-kepler-space-telescope.html



      enter image description here



      Giving a field of vision of:



      enter image description here



      Hubble



      For Hubble, the wide field camera CCD sensor is again rectangular/square



      https://www.teledyne-e2v.com/news/e2v-ccd-imaging-sensors-to-enable-nasas-hubble-space-telescope-to-explore-the-nature-and-history-of-our-universe-with-greater-capability-than-ever-before/



      enter image description here



      This WFC3 (Wide Field Camera 3) forms just a small portion of Hubble's field of view from its full instrumentation coverage.



      enter image description here




      This illustration shows the “footprints” of all the instruments in Hubble’s field of view. These include:



      • the fine guidance sensors (FGSs)

      • the Near Infrared Camera and Multi-Object Spectrometer (NICMOS)

      • the Space Telescope Imaging Spectrograph (STIS)

      • the Cosmic Origins Spectrograph (COS)

      • the Wide Field Camera 3 (WFC3)

      • the Advanced Camera for Surveys (ACS), which includes the Solar Blind Channel (SBC).

      WFC3 and ACS are the two instruments involved in the Frontier Fields program.







      share|improve this answer












      $endgroup$










      • 14




        $begingroup$
        that's a really beautiful example! It's round(ish) in outline and a (piecewise) curved surface as well for field curvature.
        $endgroup$
        – uhoh
        Sep 12 at 15:15







      • 12




        $begingroup$
        I never realised the the "CMOS" in NICMOS did not refer to the camera sensor technology.
        $endgroup$
        – Skyler
        Sep 13 at 13:02










      • $begingroup$
        What fraction of the light collected by the mirror is wasted between and around those rectangular CCD modules? 15% lost perhaps?
        $endgroup$
        – Tombola
        Sep 14 at 14:13






      • 1




        $begingroup$
        @Tombola Much more light is wasted by not even entering ...
        $endgroup$
        – Hagen von Eitzen
        Sep 15 at 10:22















      29

















      $begingroup$

      Kepler



      The Kepler space telescope uses a bank of 21 rectangular CCD modules - each with two 2200x1024 pixel CCDs). Each module covers 5 square degrees on the sky.



      https://keplerscience.arc.nasa.gov/the-kepler-space-telescope.html



      enter image description here



      Giving a field of vision of:



      enter image description here



      Hubble



      For Hubble, the wide field camera CCD sensor is again rectangular/square



      https://www.teledyne-e2v.com/news/e2v-ccd-imaging-sensors-to-enable-nasas-hubble-space-telescope-to-explore-the-nature-and-history-of-our-universe-with-greater-capability-than-ever-before/



      enter image description here



      This WFC3 (Wide Field Camera 3) forms just a small portion of Hubble's field of view from its full instrumentation coverage.



      enter image description here




      This illustration shows the “footprints” of all the instruments in Hubble’s field of view. These include:



      • the fine guidance sensors (FGSs)

      • the Near Infrared Camera and Multi-Object Spectrometer (NICMOS)

      • the Space Telescope Imaging Spectrograph (STIS)

      • the Cosmic Origins Spectrograph (COS)

      • the Wide Field Camera 3 (WFC3)

      • the Advanced Camera for Surveys (ACS), which includes the Solar Blind Channel (SBC).

      WFC3 and ACS are the two instruments involved in the Frontier Fields program.







      share|improve this answer












      $endgroup$










      • 14




        $begingroup$
        that's a really beautiful example! It's round(ish) in outline and a (piecewise) curved surface as well for field curvature.
        $endgroup$
        – uhoh
        Sep 12 at 15:15







      • 12




        $begingroup$
        I never realised the the "CMOS" in NICMOS did not refer to the camera sensor technology.
        $endgroup$
        – Skyler
        Sep 13 at 13:02










      • $begingroup$
        What fraction of the light collected by the mirror is wasted between and around those rectangular CCD modules? 15% lost perhaps?
        $endgroup$
        – Tombola
        Sep 14 at 14:13






      • 1




        $begingroup$
        @Tombola Much more light is wasted by not even entering ...
        $endgroup$
        – Hagen von Eitzen
        Sep 15 at 10:22













      29















      29











      29







      $begingroup$

      Kepler



      The Kepler space telescope uses a bank of 21 rectangular CCD modules - each with two 2200x1024 pixel CCDs). Each module covers 5 square degrees on the sky.



      https://keplerscience.arc.nasa.gov/the-kepler-space-telescope.html



      enter image description here



      Giving a field of vision of:



      enter image description here



      Hubble



      For Hubble, the wide field camera CCD sensor is again rectangular/square



      https://www.teledyne-e2v.com/news/e2v-ccd-imaging-sensors-to-enable-nasas-hubble-space-telescope-to-explore-the-nature-and-history-of-our-universe-with-greater-capability-than-ever-before/



      enter image description here



      This WFC3 (Wide Field Camera 3) forms just a small portion of Hubble's field of view from its full instrumentation coverage.



      enter image description here




      This illustration shows the “footprints” of all the instruments in Hubble’s field of view. These include:



      • the fine guidance sensors (FGSs)

      • the Near Infrared Camera and Multi-Object Spectrometer (NICMOS)

      • the Space Telescope Imaging Spectrograph (STIS)

      • the Cosmic Origins Spectrograph (COS)

      • the Wide Field Camera 3 (WFC3)

      • the Advanced Camera for Surveys (ACS), which includes the Solar Blind Channel (SBC).

      WFC3 and ACS are the two instruments involved in the Frontier Fields program.







      share|improve this answer












      $endgroup$



      Kepler



      The Kepler space telescope uses a bank of 21 rectangular CCD modules - each with two 2200x1024 pixel CCDs). Each module covers 5 square degrees on the sky.



      https://keplerscience.arc.nasa.gov/the-kepler-space-telescope.html



      enter image description here



      Giving a field of vision of:



      enter image description here



      Hubble



      For Hubble, the wide field camera CCD sensor is again rectangular/square



      https://www.teledyne-e2v.com/news/e2v-ccd-imaging-sensors-to-enable-nasas-hubble-space-telescope-to-explore-the-nature-and-history-of-our-universe-with-greater-capability-than-ever-before/



      enter image description here



      This WFC3 (Wide Field Camera 3) forms just a small portion of Hubble's field of view from its full instrumentation coverage.



      enter image description here




      This illustration shows the “footprints” of all the instruments in Hubble’s field of view. These include:



      • the fine guidance sensors (FGSs)

      • the Near Infrared Camera and Multi-Object Spectrometer (NICMOS)

      • the Space Telescope Imaging Spectrograph (STIS)

      • the Cosmic Origins Spectrograph (COS)

      • the Wide Field Camera 3 (WFC3)

      • the Advanced Camera for Surveys (ACS), which includes the Solar Blind Channel (SBC).

      WFC3 and ACS are the two instruments involved in the Frontier Fields program.








      share|improve this answer















      share|improve this answer




      share|improve this answer








      edited Sep 13 at 5:54

























      answered Sep 12 at 14:43







      user21233user21233

















      • 14




        $begingroup$
        that's a really beautiful example! It's round(ish) in outline and a (piecewise) curved surface as well for field curvature.
        $endgroup$
        – uhoh
        Sep 12 at 15:15







      • 12




        $begingroup$
        I never realised the the "CMOS" in NICMOS did not refer to the camera sensor technology.
        $endgroup$
        – Skyler
        Sep 13 at 13:02










      • $begingroup$
        What fraction of the light collected by the mirror is wasted between and around those rectangular CCD modules? 15% lost perhaps?
        $endgroup$
        – Tombola
        Sep 14 at 14:13






      • 1




        $begingroup$
        @Tombola Much more light is wasted by not even entering ...
        $endgroup$
        – Hagen von Eitzen
        Sep 15 at 10:22












      • 14




        $begingroup$
        that's a really beautiful example! It's round(ish) in outline and a (piecewise) curved surface as well for field curvature.
        $endgroup$
        – uhoh
        Sep 12 at 15:15







      • 12




        $begingroup$
        I never realised the the "CMOS" in NICMOS did not refer to the camera sensor technology.
        $endgroup$
        – Skyler
        Sep 13 at 13:02










      • $begingroup$
        What fraction of the light collected by the mirror is wasted between and around those rectangular CCD modules? 15% lost perhaps?
        $endgroup$
        – Tombola
        Sep 14 at 14:13






      • 1




        $begingroup$
        @Tombola Much more light is wasted by not even entering ...
        $endgroup$
        – Hagen von Eitzen
        Sep 15 at 10:22







      14




      14




      $begingroup$
      that's a really beautiful example! It's round(ish) in outline and a (piecewise) curved surface as well for field curvature.
      $endgroup$
      – uhoh
      Sep 12 at 15:15





      $begingroup$
      that's a really beautiful example! It's round(ish) in outline and a (piecewise) curved surface as well for field curvature.
      $endgroup$
      – uhoh
      Sep 12 at 15:15





      12




      12




      $begingroup$
      I never realised the the "CMOS" in NICMOS did not refer to the camera sensor technology.
      $endgroup$
      – Skyler
      Sep 13 at 13:02




      $begingroup$
      I never realised the the "CMOS" in NICMOS did not refer to the camera sensor technology.
      $endgroup$
      – Skyler
      Sep 13 at 13:02












      $begingroup$
      What fraction of the light collected by the mirror is wasted between and around those rectangular CCD modules? 15% lost perhaps?
      $endgroup$
      – Tombola
      Sep 14 at 14:13




      $begingroup$
      What fraction of the light collected by the mirror is wasted between and around those rectangular CCD modules? 15% lost perhaps?
      $endgroup$
      – Tombola
      Sep 14 at 14:13




      1




      1




      $begingroup$
      @Tombola Much more light is wasted by not even entering ...
      $endgroup$
      – Hagen von Eitzen
      Sep 15 at 10:22




      $begingroup$
      @Tombola Much more light is wasted by not even entering ...
      $endgroup$
      – Hagen von Eitzen
      Sep 15 at 10:22













      10

















      $begingroup$

      Silicon wafers are sliced from a giant single crystal of silicon called a boule, which is grown from a seed crystal dipped in and then slowly pulled from molten silicon.



      Circuits such as CCDs (and everything else) are patterend on silicon wafers aligned to the crystal axes of the wafers indicated by the wafer flat or notch (1, 2 see alignment flat on bottom, 3 see alignment notch on left)



      This can sometimes be important for electrical reasons but it is very important for mechanical reasons because you need to "dice" the thin, delicate wafers into individual die and that is a lot easier to do allong crystal planes than it would be trying to cut out a circle. Crystals like to break when cut off-axis, tiny microscopic cracks propagate along crystal planes especially when the die are cut off-axis.



      So if you have a rectilinear silicon die and a rectilinear circuit pattern and rectilinear readout system, there's absolutely no benefit to building a single-die circular sensor. (However, multi-die arrays are a different matter, as nicely illustrated in @Snow's answer!)



      If your useful optical field is circular due to optical vignetting or aberration then you can mask your data electronically during processing.



      A circle has 21% less area than the square in which is it circumscribed:



      $$ 1 - fracpi4 approx text21%$$



      so you could speed up data transmission from a spacecraft by 27% if you only sent back the data from an inscribed circular field of a square sensor:



      $$ frac4pi - 1 approx text27%.$$



      That's a meaningful amount of time savings, considering that some deep-space spacecraft (e.g. New Horizons) can spend months sending back all the image and other data from a flyby photoshoot. However, I think instead that they make the optics good enough to provide good image quality out to the corners and keep the whole square (or rectangular) image data.






      share|improve this answer












      $endgroup$










      • 2




        $begingroup$
        The crystal drawn out of molten silicon and the crystal which is cut into wafers are not the same crystal. Many steps of purifing by zone melting are necessary to get silicon that may be used to make camera sensors. Molten silicon in a crucible is not pure enough.
        $endgroup$
        – Uwe
        Sep 13 at 8:33






      • 3




        $begingroup$
        @Uwe Thanks for that! en.wikipedia.org/wiki/Zone_melting I like to think of it as the "same crystal" because the recrystallizing areas are still lattice-coherent with the crystal that was there before melting. It's a little bit like humans in that the stuff in our bodies is not the stuff that was there when we were younger, but we're still the same people (or at least like to believe we are). Our brain cells still remember the same stuff they knew before even though a lot of their molecules are a lot younger than the memories.
        $endgroup$
        – uhoh
        Sep 13 at 8:38











      • $begingroup$
        I don't find this answer very convincing as it stands. Yes, for commercially-produced cameras the sensor is cut rectangularly from a wafer for cost and other reasons. But the wafer itself is circular, and for something as expensive as a space telescope the cost of using a single wafer of the appropriate size, rather than a piece of a larger wafer, should easily be trumped by the advantages of the circular sensor – if it has any.
        $endgroup$
        – leftaroundabout
        Sep 14 at 10:16






      • 1




        $begingroup$
        @LocalFluff oh I didn't notice that, but that's probably easy. The diffraction pattern of a square aperture would be more difficult to manage in the images than that of a circle. Any high resolution optical imaging system (telescopes, microscopes, cameras...) will have circular lenses and apertures, especially the entrance pupil. It's true that for variable diameter irises they are not exactly circular but have six or more blades, but on good systems they work hard to approximate circles by rounding them i.stack.imgur.com/BvKmj.jpg
        $endgroup$
        – uhoh
        Sep 14 at 23:09






      • 1




        $begingroup$
        @LocalFluff there's a new question Wide angle camera of Lunar Reconnaissance Orbiter, were rectangular lenses used? If the lenses turn out to be rectangular, we will still see that the system's acceptance will still be defined by a circular aperture and the diffraction limit also defined by that circle. They might cut out unused sections of the glass to save weight or space, but that missing glass would never have contributed to the image.
        $endgroup$
        – uhoh
        Sep 14 at 23:25
















      10

















      $begingroup$

      Silicon wafers are sliced from a giant single crystal of silicon called a boule, which is grown from a seed crystal dipped in and then slowly pulled from molten silicon.



      Circuits such as CCDs (and everything else) are patterend on silicon wafers aligned to the crystal axes of the wafers indicated by the wafer flat or notch (1, 2 see alignment flat on bottom, 3 see alignment notch on left)



      This can sometimes be important for electrical reasons but it is very important for mechanical reasons because you need to "dice" the thin, delicate wafers into individual die and that is a lot easier to do allong crystal planes than it would be trying to cut out a circle. Crystals like to break when cut off-axis, tiny microscopic cracks propagate along crystal planes especially when the die are cut off-axis.



      So if you have a rectilinear silicon die and a rectilinear circuit pattern and rectilinear readout system, there's absolutely no benefit to building a single-die circular sensor. (However, multi-die arrays are a different matter, as nicely illustrated in @Snow's answer!)



      If your useful optical field is circular due to optical vignetting or aberration then you can mask your data electronically during processing.



      A circle has 21% less area than the square in which is it circumscribed:



      $$ 1 - fracpi4 approx text21%$$



      so you could speed up data transmission from a spacecraft by 27% if you only sent back the data from an inscribed circular field of a square sensor:



      $$ frac4pi - 1 approx text27%.$$



      That's a meaningful amount of time savings, considering that some deep-space spacecraft (e.g. New Horizons) can spend months sending back all the image and other data from a flyby photoshoot. However, I think instead that they make the optics good enough to provide good image quality out to the corners and keep the whole square (or rectangular) image data.






      share|improve this answer












      $endgroup$










      • 2




        $begingroup$
        The crystal drawn out of molten silicon and the crystal which is cut into wafers are not the same crystal. Many steps of purifing by zone melting are necessary to get silicon that may be used to make camera sensors. Molten silicon in a crucible is not pure enough.
        $endgroup$
        – Uwe
        Sep 13 at 8:33






      • 3




        $begingroup$
        @Uwe Thanks for that! en.wikipedia.org/wiki/Zone_melting I like to think of it as the "same crystal" because the recrystallizing areas are still lattice-coherent with the crystal that was there before melting. It's a little bit like humans in that the stuff in our bodies is not the stuff that was there when we were younger, but we're still the same people (or at least like to believe we are). Our brain cells still remember the same stuff they knew before even though a lot of their molecules are a lot younger than the memories.
        $endgroup$
        – uhoh
        Sep 13 at 8:38











      • $begingroup$
        I don't find this answer very convincing as it stands. Yes, for commercially-produced cameras the sensor is cut rectangularly from a wafer for cost and other reasons. But the wafer itself is circular, and for something as expensive as a space telescope the cost of using a single wafer of the appropriate size, rather than a piece of a larger wafer, should easily be trumped by the advantages of the circular sensor – if it has any.
        $endgroup$
        – leftaroundabout
        Sep 14 at 10:16






      • 1




        $begingroup$
        @LocalFluff oh I didn't notice that, but that's probably easy. The diffraction pattern of a square aperture would be more difficult to manage in the images than that of a circle. Any high resolution optical imaging system (telescopes, microscopes, cameras...) will have circular lenses and apertures, especially the entrance pupil. It's true that for variable diameter irises they are not exactly circular but have six or more blades, but on good systems they work hard to approximate circles by rounding them i.stack.imgur.com/BvKmj.jpg
        $endgroup$
        – uhoh
        Sep 14 at 23:09






      • 1




        $begingroup$
        @LocalFluff there's a new question Wide angle camera of Lunar Reconnaissance Orbiter, were rectangular lenses used? If the lenses turn out to be rectangular, we will still see that the system's acceptance will still be defined by a circular aperture and the diffraction limit also defined by that circle. They might cut out unused sections of the glass to save weight or space, but that missing glass would never have contributed to the image.
        $endgroup$
        – uhoh
        Sep 14 at 23:25














      10















      10











      10







      $begingroup$

      Silicon wafers are sliced from a giant single crystal of silicon called a boule, which is grown from a seed crystal dipped in and then slowly pulled from molten silicon.



      Circuits such as CCDs (and everything else) are patterend on silicon wafers aligned to the crystal axes of the wafers indicated by the wafer flat or notch (1, 2 see alignment flat on bottom, 3 see alignment notch on left)



      This can sometimes be important for electrical reasons but it is very important for mechanical reasons because you need to "dice" the thin, delicate wafers into individual die and that is a lot easier to do allong crystal planes than it would be trying to cut out a circle. Crystals like to break when cut off-axis, tiny microscopic cracks propagate along crystal planes especially when the die are cut off-axis.



      So if you have a rectilinear silicon die and a rectilinear circuit pattern and rectilinear readout system, there's absolutely no benefit to building a single-die circular sensor. (However, multi-die arrays are a different matter, as nicely illustrated in @Snow's answer!)



      If your useful optical field is circular due to optical vignetting or aberration then you can mask your data electronically during processing.



      A circle has 21% less area than the square in which is it circumscribed:



      $$ 1 - fracpi4 approx text21%$$



      so you could speed up data transmission from a spacecraft by 27% if you only sent back the data from an inscribed circular field of a square sensor:



      $$ frac4pi - 1 approx text27%.$$



      That's a meaningful amount of time savings, considering that some deep-space spacecraft (e.g. New Horizons) can spend months sending back all the image and other data from a flyby photoshoot. However, I think instead that they make the optics good enough to provide good image quality out to the corners and keep the whole square (or rectangular) image data.






      share|improve this answer












      $endgroup$



      Silicon wafers are sliced from a giant single crystal of silicon called a boule, which is grown from a seed crystal dipped in and then slowly pulled from molten silicon.



      Circuits such as CCDs (and everything else) are patterend on silicon wafers aligned to the crystal axes of the wafers indicated by the wafer flat or notch (1, 2 see alignment flat on bottom, 3 see alignment notch on left)



      This can sometimes be important for electrical reasons but it is very important for mechanical reasons because you need to "dice" the thin, delicate wafers into individual die and that is a lot easier to do allong crystal planes than it would be trying to cut out a circle. Crystals like to break when cut off-axis, tiny microscopic cracks propagate along crystal planes especially when the die are cut off-axis.



      So if you have a rectilinear silicon die and a rectilinear circuit pattern and rectilinear readout system, there's absolutely no benefit to building a single-die circular sensor. (However, multi-die arrays are a different matter, as nicely illustrated in @Snow's answer!)



      If your useful optical field is circular due to optical vignetting or aberration then you can mask your data electronically during processing.



      A circle has 21% less area than the square in which is it circumscribed:



      $$ 1 - fracpi4 approx text21%$$



      so you could speed up data transmission from a spacecraft by 27% if you only sent back the data from an inscribed circular field of a square sensor:



      $$ frac4pi - 1 approx text27%.$$



      That's a meaningful amount of time savings, considering that some deep-space spacecraft (e.g. New Horizons) can spend months sending back all the image and other data from a flyby photoshoot. However, I think instead that they make the optics good enough to provide good image quality out to the corners and keep the whole square (or rectangular) image data.







      share|improve this answer















      share|improve this answer




      share|improve this answer








      edited Sep 13 at 2:07

























      answered Sep 13 at 1:48









      uhohuhoh

      90.8k28 gold badges223 silver badges710 bronze badges




      90.8k28 gold badges223 silver badges710 bronze badges










      • 2




        $begingroup$
        The crystal drawn out of molten silicon and the crystal which is cut into wafers are not the same crystal. Many steps of purifing by zone melting are necessary to get silicon that may be used to make camera sensors. Molten silicon in a crucible is not pure enough.
        $endgroup$
        – Uwe
        Sep 13 at 8:33






      • 3




        $begingroup$
        @Uwe Thanks for that! en.wikipedia.org/wiki/Zone_melting I like to think of it as the "same crystal" because the recrystallizing areas are still lattice-coherent with the crystal that was there before melting. It's a little bit like humans in that the stuff in our bodies is not the stuff that was there when we were younger, but we're still the same people (or at least like to believe we are). Our brain cells still remember the same stuff they knew before even though a lot of their molecules are a lot younger than the memories.
        $endgroup$
        – uhoh
        Sep 13 at 8:38











      • $begingroup$
        I don't find this answer very convincing as it stands. Yes, for commercially-produced cameras the sensor is cut rectangularly from a wafer for cost and other reasons. But the wafer itself is circular, and for something as expensive as a space telescope the cost of using a single wafer of the appropriate size, rather than a piece of a larger wafer, should easily be trumped by the advantages of the circular sensor – if it has any.
        $endgroup$
        – leftaroundabout
        Sep 14 at 10:16






      • 1




        $begingroup$
        @LocalFluff oh I didn't notice that, but that's probably easy. The diffraction pattern of a square aperture would be more difficult to manage in the images than that of a circle. Any high resolution optical imaging system (telescopes, microscopes, cameras...) will have circular lenses and apertures, especially the entrance pupil. It's true that for variable diameter irises they are not exactly circular but have six or more blades, but on good systems they work hard to approximate circles by rounding them i.stack.imgur.com/BvKmj.jpg
        $endgroup$
        – uhoh
        Sep 14 at 23:09






      • 1




        $begingroup$
        @LocalFluff there's a new question Wide angle camera of Lunar Reconnaissance Orbiter, were rectangular lenses used? If the lenses turn out to be rectangular, we will still see that the system's acceptance will still be defined by a circular aperture and the diffraction limit also defined by that circle. They might cut out unused sections of the glass to save weight or space, but that missing glass would never have contributed to the image.
        $endgroup$
        – uhoh
        Sep 14 at 23:25













      • 2




        $begingroup$
        The crystal drawn out of molten silicon and the crystal which is cut into wafers are not the same crystal. Many steps of purifing by zone melting are necessary to get silicon that may be used to make camera sensors. Molten silicon in a crucible is not pure enough.
        $endgroup$
        – Uwe
        Sep 13 at 8:33






      • 3




        $begingroup$
        @Uwe Thanks for that! en.wikipedia.org/wiki/Zone_melting I like to think of it as the "same crystal" because the recrystallizing areas are still lattice-coherent with the crystal that was there before melting. It's a little bit like humans in that the stuff in our bodies is not the stuff that was there when we were younger, but we're still the same people (or at least like to believe we are). Our brain cells still remember the same stuff they knew before even though a lot of their molecules are a lot younger than the memories.
        $endgroup$
        – uhoh
        Sep 13 at 8:38











      6

















      $begingroup$

As another example: the Gaia mission illustrates that modern CCD production techniques allow form to follow function: it uses a creatively laid-out array of CCDs to integrate multiple functions in a single instrument:




      [T]he three functions are built into a single instrument by using common telescopes and a shared focal plane:



      • The Astrometric instrument (ASTRO) is devoted to star angular position measurements, providing the five astrometric parameters [...]

• The Photometric instrument provides continuous star spectra for astrophysics in the band 320-1000 nm and the ASTRO chromaticity calibration

      • The Radial Velocity Spectrometer (RVS) provides radial velocity and high resolution spectral data in the narrow band 847-874 nm

      Each function is achieved within a dedicated area on the focal plane.




      The result:



      Gaia focal plane



      (The Gaia focal plane; source)



      Gaia CCD array



      (The Gaia CCD array; source)
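
As a toy illustration of "dedicated areas on a shared focal plane": the region names, purposes, and bands below are taken from the quote above, while the data structure itself is only a schematic stand-in, not Gaia's actual CCD count or geometry.

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class FocalPlaneRegion:
        """One functionally dedicated area of a shared focal plane."""
        instrument: str
        purpose: str
        band_nm: Optional[Tuple[int, int]] = None   # wavelength range, where quoted

    # Names, purposes, and bands come from the quoted mission description.
    regions = [
        FocalPlaneRegion("ASTRO", "star angular positions (five astrometric parameters)"),
        FocalPlaneRegion("Photometric", "continuous star spectra + ASTRO chromaticity calibration", (320, 1000)),
        FocalPlaneRegion("RVS", "radial velocities, high-resolution spectra", (847, 874)),
    ]

    for region in regions:
        band = f"{region.band_nm[0]}-{region.band_nm[1]} nm" if region.band_nm else "(band not quoted)"
        print(f"{region.instrument:12s} {band:18s} {region.purpose}")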






$endgroup$
– Ludo (answered Sep 13 at 10:20, edited Sep 14 at 12:32)
              4

















              $begingroup$

There were a lot of square-format cameras.



              • The Voyager cameras had 800*800 pixels.


              • The LORRI cameras of New Horizons had 1024*1024 pixels.


              • The Galileo cameras had 800*800 pixels.


              • The Cassini WAC and NAC cameras had 1024*1024 pixels.


• The narrow- and wide-angle OSIRIS cameras of Rosetta had 2048*2048 pixels.


              • The FC camera of Dawn had 1024*1024 pixels.


But there are also camera sensors that are neither square nor round. The narrow angle cameras of the Lunar Reconnaissance Orbiter use a line sensor of 1*5064 pixels. The maximum image size is 2.5 x 26 km at an altitude of 50 km. At a pixel scale of 0.5 m per pixel, the 2.5 km swath width is resolved into roughly 5000 pixels, and the 26 km image length is built up line by line as the camera moves along its orbit around the Moon. The resulting image has 5000*52,000 pixels.
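
The arithmetic behind those numbers, as a minimal sketch using only the figures quoted in this answer:

    # Pushbroom (line-scan) imaging arithmetic for the LRO narrow angle camera,
    # using the figures quoted above.
    pixels_across = 5064            # pixels in the single sensor line
    pixel_scale_m = 0.5             # ground distance covered by one pixel
    swath_length_km = 26.0          # along-track length accumulated in one pass

    swath_width_km = pixels_across * pixel_scale_m / 1000.0          # ~2.5 km across
    lines_recorded = int(swath_length_km * 1000.0 / pixel_scale_m)   # 52,000 lines

    print(f"swath width : {swath_width_km:.2f} km")
    print(f"image size  : {pixels_across} x {lines_recorded} pixels")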






$endgroup$
– Uwe (answered Sep 12 at 22:14, edited Sep 13 at 14:14)


















