Tags: deep-learning, time-series, conv-neural-network, anomaly-detection, faster-rcnn

Deep learning for acoustic emission in concrete fracture specimens: regression of onset time and classification of failure type


How can I use deep learning for both regression and classification tasks?

I am working on acoustic emission from fracture tests on concrete specimens. The objective is to automatically find the onset time instant (the time at which the acoustic emission begins) and the slope up to the peak value, in order to determine the kind of fracture (mode I or mode II, based on the rise angle RA). [Figure: definition of onset time and rise angle used to classify the fracture]
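For reference, a minimal sketch of how the rise-angle quantities could be computed once the onset and the peak are known, assuming the common acoustic-emission convention RA = rise time / peak amplitude; the function name and inputs are illustrative, not part of any existing library:

```python
import numpy as np

def rise_angle_features(signal, fs, onset_idx):
    """Rise time, peak amplitude and RA value for one acoustic-emission hit.

    signal    : 1-D array of amplitude samples (illustrative input)
    fs        : sampling frequency in Hz
    onset_idx : sample index of the onset (marked manually or predicted)
    """
    peak_idx = onset_idx + int(np.argmax(np.abs(signal[onset_idx:])))
    rise_time = (peak_idx - onset_idx) / fs        # seconds from onset to peak
    peak_amplitude = float(np.abs(signal[peak_idx]))
    ra_value = rise_time / peak_amplitude          # a calibration-dependent threshold on RA
    return rise_time, peak_amplitude, ra_value     # separates mode I from mode II
```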

I have tried a region-based CNN working on images of the signals (Fine-tuning Faster-RCNN using pytorch), but unfortunately the results have not been satisfactory so far.

Reference: Object detection with Faster-RCNN pytorch

I would like to work with sequences (time series) of amplitude data sampled at a given frequency, but each recording has a different length. How can I deal with this?
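For illustration, a minimal sketch of one way to batch recordings of different lengths in PyTorch, zero-padding them to the longest one while keeping the true lengths; the lengths below are placeholders:

```python
import torch
from torch.nn.utils.rnn import pad_sequence

# Placeholder recordings of different lengths (1-D amplitude tensors)
recordings = [torch.randn(n) for n in (12000, 8500, 20000)]

# Zero-pad to the longest recording so they can be stacked into one batch
batch = pad_sequence(recordings, batch_first=True)       # shape (3, 20000)
lengths = torch.tensor([r.numel() for r in recordings])  # keep the true lengths

# Alternative: cut each recording into fixed-length windows (see the next sketch)
```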

Can I build a 1D CNN that performs a kind of anomaly detection, supervised by the points I can mark manually on the training examples?

I have a number of recordings, sampled at 100 Hz, that I would like to use to train the model. In anomaly-detection examples such as Timeseries anomaly detection using an Autoencoder, a single time series is used and a window is slid one time step at a time to obtain about 3,700 sub-sequences for training the network. In my case, instead, I have several recordings (time series), each with its own onset time instant and a different overall length in seconds. How can I manage this?
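For illustration, a minimal sketch of how recordings of different lengths could each be cut into fixed-length sliding windows, labelling a window by whether it contains the onset; the 100 Hz rate matches the question, while the window and step sizes are placeholders:

```python
import numpy as np

def make_windows(recordings, onsets_s, fs=100, win_s=10.0, step_s=1.0):
    """recordings : list of 1-D numpy arrays (different lengths)
    onsets_s   : onset time in seconds for each recording
    Returns X of shape (n_windows, win_len) and y = 1 if the window
    contains the onset, 0 otherwise."""
    win, step = int(win_s * fs), int(step_s * fs)
    X, y = [], []
    for sig, onset in zip(recordings, onsets_s):
        onset_idx = int(onset * fs)
        for start in range(0, len(sig) - win + 1, step):
            X.append(sig[start:start + win])
            y.append(1 if start <= onset_idx < start + win else 0)
    return np.stack(X), np.array(y)
```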

I actually need the time instant at which the signal begins and the position of the maximum in order to define the rise angle and classify the type of fracture. Can a CNN perform the classification directly, simultaneously with the regression of the onset time instant?
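For illustration, a minimal sketch of a 1D CNN with two output heads, one regressing the onset position within a window and one classifying mode I vs. mode II; the architecture and layer sizes are placeholders, trained with a summed MSE + cross-entropy loss:

```python
import torch
import torch.nn as nn

class OnsetAndModeNet(nn.Module):
    """Shared 1-D convolutional backbone with a regression head (onset
    position in the window) and a classification head (mode I / mode II)."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.onset_head = nn.Linear(32, 1)   # regression output
        self.mode_head = nn.Linear(32, 2)    # two-class logits

    def forward(self, x):                    # x: (batch, 1, window_length)
        z = self.backbone(x)
        return self.onset_head(z).squeeze(-1), self.mode_head(z)

# Joint training objective (sketch):
# loss = F.mse_loss(onset_pred, onset_true) + F.cross_entropy(mode_logits, mode_true)
```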

Thank you in advance!


Solution

  • I finally solved it, thanks to the fundamental suggestion by @JonNordby, using a Sound Event Detection method. We adopted and re-adapted the code from the GitHub repository by YashNita.

    I labelled the data according to the following image: [figure omitted]

    Then I extracted features by computing the spectrogram of the input signals (a feature-extraction sketch follows after this answer): [figure omitted]

    And finally we obtained a more precise recognition of the seismic events, which maps directly onto the acoustic emission events (a sketch of this post-processing step also follows after this answer): [figure omitted]

    For the moment only the event-recognition phase has been completed, but it would be straightforward to adapt the pipeline to also classify mode I vs. mode II cracking.
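For reference, a minimal sketch of spectrogram-style feature extraction like the one described above, using scipy; the window parameters are placeholders, not the values used in the adapted repository:

```python
import numpy as np
from scipy.signal import spectrogram

def spectrogram_features(signal, fs=100, nperseg=64, noverlap=48):
    """Log-magnitude spectrogram to be fed to the detection network."""
    f, t, Sxx = spectrogram(signal, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return f, t, np.log(Sxx + 1e-10)  # frequencies, frame times, (freq_bins, time_frames)
```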
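And a minimal sketch of how frame-wise event probabilities produced by a sound-event-detection network could be turned into a single onset time; the threshold and minimum duration are illustrative:

```python
import numpy as np

def onset_from_frame_probs(frame_probs, frame_times, threshold=0.5, min_frames=3):
    """Return the time of the first run of at least `min_frames` consecutive
    frames whose event probability exceeds `threshold`, or None if no event."""
    active = np.asarray(frame_probs) >= threshold
    run = 0
    for i, is_active in enumerate(active):
        run = run + 1 if is_active else 0
        if run >= min_frames:
            return frame_times[i - min_frames + 1]  # time of the first frame in the run
    return None
```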