What is a fractionally-strided convolution layer?
In the paper Generating High-Quality Crowd Density Maps using Contextual Pyramid CNNs, Section 3.4 says:

Since, the aim of this work is to estimate high-resolution and high-quality density maps, F-CNN is constructed using a set of convolutional and fractionally-strided convolutional layers. The set of fractionally-strided convolutional layers help us to restore details in the output density maps. The following structure is used for F-CNN: CR(64,9)-CR(32,7)-TR(32)-CR(16,5)-TR(16)-C(1,1), where, C is convolutional layer, R is ReLU layer, T is fractionally-strided convolution layer and the first number inside every brace indicates the number of filters while the second number indicates filter size. Every fractionally-strided convolution layer increases the input resolution by a factor of 2, thereby ensuring that the output resolution is the same as that of input.

I would like to understand in detail how a fractionally-strided convolution layer works.
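For concreteness, below is how I would tentatively write that structure in PyTorch. This is only my reading of the excerpt: the input channel count, the padding of the C layers, and the kernel size and stride of the T layers are not stated, so the values used here (for example kernel_size=4, stride=2, padding=1 for the TR layers, which gives exact 2x upsampling) are my assumptions.

```
# Tentative sketch of the F-CNN structure CR(64,9)-CR(32,7)-TR(32)-CR(16,5)-TR(16)-C(1,1).
# ASSUMPTIONS (not given in the excerpt): input channel count, "same" padding for the
# C layers, and kernel_size=4, stride=2, padding=1 for the fractionally-strided (TR) layers.
import torch
import torch.nn as nn

def f_cnn(in_channels):
    return nn.Sequential(
        nn.Conv2d(in_channels, 64, kernel_size=9, padding=4), nn.ReLU(),            # CR(64,9)
        nn.Conv2d(64, 32, kernel_size=7, padding=3), nn.ReLU(),                     # CR(32,7)
        nn.ConvTranspose2d(32, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),  # TR(32): 2x upsampling
        nn.Conv2d(32, 16, kernel_size=5, padding=2), nn.ReLU(),                     # CR(16,5)
        nn.ConvTranspose2d(16, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),  # TR(16): 2x upsampling
        nn.Conv2d(16, 1, kernel_size=1),                                            # C(1,1): density map
    )

net = f_cnn(in_channels=3)           # hypothetical channel count for the incoming feature maps
x = torch.randn(1, 3, 28, 28)
print(net(x).shape)                  # torch.Size([1, 1, 112, 112]) -- 4x the 28x28 input
```

What I do not understand is what the ConvTranspose2d (fractionally-strided) layers actually compute.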
deep-learning convnet computer-vision convolution
asked Apr 15 at 3:26 by Haha TTpro · edited Apr 15 at 6:33 by Esmailian
1 Answer
Here is an animation of a fractionally-strided convolution (from this GitHub project), in which the dashed white cells are zero rows/columns padded between the input cells (blue). These animations visualize the mathematical formulas from the article below:

A guide to convolution arithmetic for deep learning

Here is a quote from the article:

Figure [..] helps understand what fractional strides involve: zeros are inserted between input units, which makes the kernel move around at a slower pace than with unit strides [footnote: doing so is inefficient and real-world implementations avoid useless multiplications by zero, but conceptually it is how the transpose of a strided convolution can be thought of.]
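To make the quote concrete, here is a minimal sketch (assuming PyTorch) that checks the equivalence numerically: a stride-2 transposed convolution gives the same result as inserting zero rows/columns between the input cells and then running an ordinary stride-1, fully padded convolution with the flipped kernel.

```
# Fractionally-strided (transposed) convolution vs. explicit zero insertion.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 1, 4, 4)              # one sample, one channel, 4x4 input
w = torch.randn(1, 1, 3, 3)              # 3x3 kernel (single in/out channel)

# 1) Built-in fractionally-strided convolution with stride 2.
out_transposed = F.conv_transpose2d(x, w, stride=2)          # output is 9x9

# 2) Equivalent view: place the input values on a stride-2 grid (the "dashed
#    white cells" in the animation are the inserted zeros), then run a normal
#    stride-1 convolution with the flipped kernel and full padding (k-1 = 2).
x_dilated = torch.zeros(1, 1, 7, 7)
x_dilated[:, :, ::2, ::2] = x
w_flipped = torch.flip(w, dims=[2, 3])
out_manual = F.conv2d(x_dilated, w_flipped, stride=1, padding=2)

print(torch.allclose(out_transposed, out_manual, atol=1e-6))  # True
# (With more than one channel, the in/out channel axes of the weight would
#  also have to be swapped between conv_transpose2d and conv2d.)
```

In practice you would use the built-in transposed convolution directly; as the footnote above says, the explicit zero insertion is only a conceptual (and inefficient) way to see what the layer does.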
Also, here is a post on this site asking "What are deconvolutional layers?", which covers the same concept.

And here are two quotes from a post by Paul-Louis Pröve on different types of convolutions:

Transposed Convolutions (a.k.a. deconvolutions or fractionally strided convolutions)

and

Some sources use the name deconvolution, which is inappropriate because it’s not a deconvolution [..] An actual deconvolution reverts the process of a convolution.
answered Apr 15 at 6:08 by Esmailian · edited Apr 15 at 9:04