What is fractionally-strided convolution layer?


In the paper Generating High-Quality Crowd Density Maps using Contextual Pyramid CNNs, Section 3.4 says:


    Since the aim of this work is to estimate high-resolution and
    high-quality density maps, F-CNN is constructed using a set of
    convolutional and fractionally-strided convolutional layers. The set
    of fractionally-strided convolutional layers helps us to restore
    details in the output density maps. The following structure is used
    for F-CNN: CR(64,9)-CR(32,7)-TR(32)-CR(16,5)-TR(16)-C(1,1), where C
    is a convolutional layer, R is a ReLU layer, T is a fractionally-strided
    convolution layer, and the first number inside every brace indicates
    the number of filters while the second indicates the filter size.
    Every fractionally-strided convolution layer increases the input
    resolution by a factor of 2, thereby ensuring that the output
    resolution is the same as that of the input.


I would like to know the details of the fractionally-strided convolution layer.

Tags: deep-learning, convnet, computer-vision, convolution
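For concreteness, here is a minimal sketch (not the authors' code) of the F-CNN structure described in the quote, assuming PyTorch. The quote does not give the kernel size of the fractionally-strided (TR) layers, so a 4x4 kernel with stride 2 and padding 1 is assumed, and in_channels is a placeholder for whatever feature maps F-CNN receives:

    # Hypothetical sketch of the quoted F-CNN layer sequence; kernel size and
    # padding of the TR layers are assumptions, not taken from the paper.
    import torch
    import torch.nn as nn

    class FCNNSketch(nn.Module):
        def __init__(self, in_channels=3):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(in_channels, 64, kernel_size=9, padding=4), nn.ReLU(),  # CR(64,9)
                nn.Conv2d(64, 32, kernel_size=7, padding=3), nn.ReLU(),           # CR(32,7)
                nn.ConvTranspose2d(32, 32, kernel_size=4, stride=2, padding=1),   # TR(32): doubles H and W
                nn.Conv2d(32, 16, kernel_size=5, padding=2), nn.ReLU(),           # CR(16,5)
                nn.ConvTranspose2d(16, 16, kernel_size=4, stride=2, padding=1),   # TR(16): doubles H and W again
                nn.Conv2d(16, 1, kernel_size=1),                                  # C(1,1): single-channel density map
            )

        def forward(self, x):
            return self.body(x)

    x = torch.randn(1, 3, 56, 56)    # e.g. a quarter-resolution feature map
    print(FCNNSketch()(x).shape)     # torch.Size([1, 1, 224, 224])

The two fractionally-strided layers each double the spatial resolution, which matches the quote's statement that the final output resolution ends up the same as the input's.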










1 Answer







Here is an animation of fractionally-strided convolution (from this GitHub project), in which the dashed white cells are zero rows/columns padded between the input cells (blue). These animations are visualizations of the mathematical formulas from the article below:



          A guide to convolution arithmetic for deep learning



          Here is a quote from the article:




          Figure [..] helps understand what fractional strides involve: zeros
          are inserted between input units, which makes the kernel move around
          at a slower pace than with unit strides [footnote: doing so is
          inefficient and real-world implementations avoid useless
          multiplications by zero, but conceptually it is how the transpose of a
          strided convolution can be thought of.]
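
To make the zero-insertion picture concrete, here is a small numerical check (a sketch using PyTorch; only F.conv_transpose2d, F.conv2d, and torch.flip are real library calls, the rest is illustrative): a stride-2 fractionally-strided convolution gives the same result as stretching the input with zeros and running an ordinary stride-1 convolution with the spatially flipped kernel.

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    x = torch.randn(1, 1, 4, 4)   # single-channel 4x4 input
    w = torch.randn(1, 1, 3, 3)   # 3x3 kernel

    # 1) Built-in fractionally-strided (transposed) convolution, stride 2.
    out_transposed = F.conv_transpose2d(x, w, stride=2)   # shape (1, 1, 9, 9)

    # 2) The same thing "by hand": insert one zero between neighbouring input
    #    cells, pad with k-1 = 2 zeros, then apply an ordinary convolution
    #    with the kernel rotated by 180 degrees.
    stretched = torch.zeros(1, 1, 7, 7)
    stretched[:, :, ::2, ::2] = x
    out_manual = F.conv2d(stretched, torch.flip(w, dims=[2, 3]), padding=2)

    print(torch.allclose(out_transposed, out_manual, atol=1e-6))   # True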





Also, there is a post on this site asking "What are deconvolutional layers?", which is the same operation under a different name.



          And here are two quotes from a post by Paul-Louis Pröve on different types of convolutions:




          Transposed Convolutions (a.k.a. deconvolutions or fractionally strided
          convolutions)




          and




          Some sources use the name deconvolution, which is inappropriate
          because it’s not a deconvolution [..] An actual deconvolution reverts the process of a convolution.
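
In line with that, current deep-learning frameworks expose this layer under the "transposed convolution" name rather than "deconvolution", for example torch.nn.ConvTranspose2d in PyTorch and tf.keras.layers.Conv2DTranspose in Keras.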






