Is it a good idea to use CNN to classify 1D signal?
I am working on sleep stage classification. I have read some research articles on this topic, and many of them used SVMs or ensemble methods. Is it a good idea to use a convolutional neural network to classify a one-dimensional EEG signal?
I am new to this kind of work, so pardon me if anything I ask is wrong.
neural-networks svm conv-neural-network signal-processing
asked Apr 17 at 6:00 by Fazla Rabbi Mashrur
– MSalters (Apr 18 at 11:21): A 1D signal can be transformed into a 2D signal by breaking the signal up into frames and taking the FFT of each frame. For audio this is quite common.
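For reference, here is a minimal sketch of that frame-and-FFT transform using scipy's spectrogram; the sampling rate, window length, and overlap below are illustrative assumptions, not recommended values.

import numpy as np
from scipy.signal import spectrogram

fs = 100                          # assumed sampling rate (Hz)
x = np.random.randn(30 * fs)      # stand-in for one 30-second 1D epoch

# f: frequency bins, t: frame times, Sxx: 2D time-frequency image
f, t, Sxx = spectrogram(x, fs=fs, nperseg=2 * fs, noverlap=fs)
print(Sxx.shape)                  # (n_freqs, n_frames), i.e. a 2D "image"

The resulting 2D array can then be fed to an ordinary 2D CNN, exactly as one would treat an image.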
4 Answers
I guess that by "1D signal" you mean time-series data, where you assume temporal dependence between the values. In such cases convolutional neural networks (CNNs) are one of the possible approaches. The most popular neural-network approach to such data is to use recurrent neural networks (RNNs), but you can alternatively use CNNs, or a hybrid approach (quasi-recurrent neural networks, QRNNs), as discussed by Bradbury et al. (2016); the figure from their paper illustrating the three architectures is not reproduced here. There are also other approaches, such as using attention alone, as in the Transformer network described by Vaswani et al. (2017), where information about time is passed via sinusoidal (Fourier-like) positional features.
With an RNN, you would use a cell that takes the previous hidden state and the current input value as input, and returns an output and a new hidden state, so the information flows via the hidden states. With a CNN, you would use sliding windows of some width that look for certain (learned) patterns in the data, and stack such windows on top of each other, so that higher-level windows look for patterns within the lower-level patterns. Using such sliding windows can be helpful for finding things such as repeating patterns in the data (e.g. seasonal patterns). QRNN layers mix both approaches. In fact, one of the advantages of CNN and QRNN architectures is that they are faster than RNNs.
answered Apr 17 at 6:54 by Tim♦ (edited Apr 18 at 12:33)
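To make the two families concrete, here is a minimal Keras sketch of a CNN classifier and an RNN classifier for fixed-length 1D windows; the input length, layer sizes, and class count are illustrative assumptions, not tuned values.

from tensorflow import keras
from tensorflow.keras import layers

n_timesteps, n_classes = 3000, 5   # assumed: e.g. 30 s epochs at 100 Hz, 5 stages

# CNN variant: stacked sliding windows (convolutions) plus pooling
cnn = keras.Sequential([
    layers.Input(shape=(n_timesteps, 1)),
    layers.Conv1D(32, kernel_size=50, strides=6, activation="relu"),
    layers.MaxPooling1D(8),
    layers.Conv1D(64, kernel_size=8, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(n_classes, activation="softmax"),
])

# RNN variant: information flows through the hidden state
rnn = keras.Sequential([
    layers.Input(shape=(n_timesteps, 1)),
    layers.LSTM(64),
    layers.Dense(n_classes, activation="softmax"),
])

for model in (cnn, rnn):
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # then model.fit(X_train, y_train, ...) with X of shape (n_samples, n_timesteps, 1)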
You can certainly use a CNN to classify a 1D signal. Since you are interested in sleep stage classification, see this paper. It describes a deep neural network called DeepSleepNet, which uses a combination of 1D convolutional and LSTM layers to classify EEG signals into sleep stages.
The architecture (the figure from the paper is not reproduced here) has two parts:
Representation learning layers: These consist of two convolutional networks in parallel. The main difference between the two networks is the kernel size and the max-pooling window size. The left one uses kernel size $F_s/2$ (where $F_s$ is the sampling rate of the signal), whereas the one on the right uses kernel size $F_s \times 4$. The intuition is that one network learns "fine" (high-frequency) features while the other learns "coarse" (low-frequency) features.
Sequential learning layers: The embeddings (learnt features) from the convolutional layers are concatenated and fed into LSTM layers to learn temporal dependencies between the embeddings.
At the end there is a 5-way softmax layer to classify the time series into one of five classes corresponding to sleep stages.
answered Apr 17 at 14:06 by kedarps (edited Apr 17 at 14:18)
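The following is a rough Keras sketch in the spirit of that two-branch design; it is not the authors' implementation, and the filter counts, strides, and pooling sizes are placeholder assumptions.

from tensorflow import keras
from tensorflow.keras import layers

fs = 100                       # assumed sampling rate Fs (Hz)
n_timesteps, n_classes = 30 * fs, 5

inp = keras.Input(shape=(n_timesteps, 1))

# "Fine" branch: small kernel (~Fs/2) for high-frequency features
fine = layers.Conv1D(64, kernel_size=fs // 2, strides=fs // 16,
                     activation="relu")(inp)
fine = layers.MaxPooling1D(8)(fine)

# "Coarse" branch: large kernel (~Fs*4) for low-frequency features
coarse = layers.Conv1D(64, kernel_size=fs * 4, strides=fs // 2,
                       activation="relu")(inp)
coarse = layers.MaxPooling1D(4)(coarse)

# Concatenate the two embeddings along the time axis, model temporal
# dependencies with an LSTM, and finish with a 5-way softmax
merged = layers.Concatenate(axis=1)([fine, coarse])
seq = layers.LSTM(128)(merged)
out = layers.Dense(n_classes, activation="softmax")(seq)

model = keras.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")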
I want to emphasize the use of a stacked hybrid approach (CNN + RNN) for processing long sequences:
As you may know, 1D CNNs are not sensitive to the order of timesteps beyond a local scale; of course, by stacking many convolution and pooling layers on top of each other, the final layers can observe longer sub-sequences of the original input. Still, that might not be an effective way to model long-term dependencies. CNNs are, however, very fast compared to RNNs.
On the other hand, RNNs are sensitive to the order of timesteps and can therefore model temporal dependencies very well. However, they are known to be weak at modeling very long-term dependencies, where a timestep may depend on timesteps very far back in the input. Further, they are very slow when the number of timesteps is large.
So an effective approach might be to combine CNNs and RNNs as follows: first use convolution and pooling layers to reduce the dimensionality of the input. This gives a rather compressed representation of the original input with higher-level features. Then feed this shorter 1D sequence to the RNN for further processing. This way we take advantage of the speed of CNNs and the representational capabilities of RNNs at the same time. As with any other method, you should experiment on your specific use case and dataset to find out whether it is effective.
Here is a rough illustration of this method:
--------------------------
-                        -
-    long 1D sequence    -
-                        -
--------------------------
            |
            |
            v
==========================
=                        =
= Conv + Pooling layers  =
=                        =
==========================
            |
            |
            v
---------------------------
-                         -
- Shorter representations -
-    (higher-level        -
-     CNN features)       -
-                         -
---------------------------
            |
            |
            v
===========================
=                         =
=  (stack of) RNN layers  =
=                         =
===========================
            |
            |
            v
===============================
=                             =
= classifier, regressor, etc. =
=                             =
===============================
answered Apr 17 at 19:53 by today
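Here is a minimal Keras sketch of this CNN-then-RNN stack; the input length and all layer sizes are illustrative assumptions, not tuned values.

from tensorflow import keras
from tensorflow.keras import layers

n_timesteps, n_classes = 3000, 5   # assumed: long 1D input, 5 output classes

model = keras.Sequential([
    layers.Input(shape=(n_timesteps, 1)),
    # Conv + pooling layers: compress the long sequence into a shorter
    # sequence of higher-level features
    layers.Conv1D(32, kernel_size=7, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(64, kernel_size=7, activation="relu"),
    layers.MaxPooling1D(4),
    # RNN layer: model temporal dependencies over the shorter sequence
    layers.LSTM(64),
    # classifier head
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")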
FWIW, I recommend checking out the Temporal Convolutional Network (TCN) from this paper (I am not the author). It is a neat way of using CNNs for time-series data: the convolutions are causal, so the model is sensitive to time order, and stacked dilated convolutions let it cover very long sequences (although, unlike an RNN, it has no memory).
answered Apr 17 at 20:51 by kampta
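To give a flavor of the idea, here is a minimal causal, dilated Conv1D stack in Keras; it is a sketch, not the paper's reference implementation (residual connections, weight normalization, and dropout are omitted, and all sizes are assumptions).

from tensorflow import keras
from tensorflow.keras import layers

n_timesteps, n_classes = 3000, 5   # assumed problem size

model = keras.Sequential([layers.Input(shape=(n_timesteps, 1))])
# Stacking dilation rates 1, 2, 4, ... grows the receptive field
# exponentially, so a few layers can see a very long history, while the
# causal padding ensures each output depends only on past timesteps.
for dilation in (1, 2, 4, 8, 16):
    model.add(layers.Conv1D(32, kernel_size=3, padding="causal",
                            dilation_rate=dilation, activation="relu"))
model.add(layers.GlobalAveragePooling1D())
model.add(layers.Dense(n_classes, activation="softmax"))
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")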