
Author(s)

Name: Luca Rei
Institution: INFN Sezione di Genova
Mail Address: luca.rei@ligo.org

How to Obtain Support

Mail: luca.rei@ligo.org

General Information

ML/DL Technologies: LSTM
Science Fields: General Relativity
Difficulty: Low
Language: English
Type: fully annotated / runnable / external resource / ...

Software and Tools

Programming Language: Python
ML Toolset: Keras + TensorFlow
Suggested Environments: bare Linux node

Needed datasets

Data Creator: Virgo/LIGO
Data Type: real acquisition
Data Size: 1 GB
Data Source: IGWN collaboration


Short Description of the Use Case

A simple example of how to use an autoencoder for efficient data coding and compression.

"An autoencoder is a type of artificial neural network used to learn efficient data codings in an unsupervised manner. The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore signal “noise”."

In the following we will use an autoencoder to analyse a gravitational-wave frame file and learn to ignore some sources of noise, obtaining a cleaned and size-reduced signal.
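The bottleneck idea described above can be sketched in a few lines of Keras. This is a minimal illustration, not the notebook's actual model: the segment length (256 samples), the code size (32), and the random training data standing in for whitened strain segments are all assumptions made for the example.

```python
# Minimal dense autoencoder sketch in Keras. Layer sizes and the random
# stand-in data are illustrative assumptions, not the notebook's values.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

segment_len = 256   # assumed length of each input segment
code_len = 32       # size of the compressed representation

inputs = keras.Input(shape=(segment_len,))
encoded = layers.Dense(code_len, activation="relu")(inputs)        # encoder
decoded = layers.Dense(segment_len, activation="linear")(encoded)  # decoder

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

# Train the network to reproduce its own input: the narrow middle layer
# forces it to learn a compact coding of the data.
x = np.random.randn(512, segment_len).astype("float32")
autoencoder.fit(x, x, epochs=2, batch_size=64, verbose=0)

# The encoder alone maps a segment to its 32-number code.
encoder = keras.Model(inputs, encoded)
codes = encoder.predict(x[:4], verbose=0)
print(codes.shape)  # (4, 32)
```

Because the decoder must rebuild the full segment from only 32 numbers, the code tends to keep the dominant structure and discard incoherent noise, which is the effect exploited in this use case.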

All data files used for this exercise are public and can be obtained from the LIGO website at https://www.gw-openscience.org/archive/O2_16KHZ_R1/ in GWF or HDF5 format.

At https://www.gw-openscience.org/ you can find many interesting tutorials on how to read, plot, and analyse gravitational-wave files with standard techniques.
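For the HDF5 variant, the public files keep the time series in a `strain/Strain` dataset, with the sample spacing stored as the `Xspacing` attribute. The sketch below first writes a tiny mock file with that layout (so it is self-contained; the file name and 4096-sample length are made up for illustration), then reads it back the way one would read a real download.

```python
# Sketch of reading strain data from a GWOSC-style HDF5 file with h5py.
# A small mock file mimicking the public layout is created first so the
# example runs without downloading anything.
import h5py
import numpy as np

# --- create a mock file with the assumed GWOSC layout ---
with h5py.File("mock_strain.hdf5", "w") as f:
    dset = f.create_dataset("strain/Strain", data=np.random.randn(4096))
    dset.attrs["Xspacing"] = 1.0 / 4096  # sample spacing in seconds

# --- read it back as one would read a real file ---
with h5py.File("mock_strain.hdf5", "r") as f:
    strain = f["strain/Strain"][:]            # the strain time series
    dt = f["strain/Strain"].attrs["Xspacing"] # seconds between samples

print(strain.shape, dt)
```

Once loaded, `strain` can be segmented and fed to the autoencoder as ordinary NumPy arrays.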

How to execute it

Download the data files (any file from https://www.gw-openscience.org/data/ will do) and execute the Jupyter notebook (https://github.com/luca-rei/ml-genoa). For convenience, the notebook assumes HDF5 files. The interesting part is how the output of an encoded signal (GW) differs from encoded noise: compare their sizes and their entropy. For example, try to encode different data...
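The size comparison suggested above can be tried even without the notebook. The sketch below quantises two toy signals to 8 bits and compares their zlib-compressed sizes; the sine wave standing in for a structured signal and the white noise are illustrative assumptions, not real strain data.

```python
# Illustrative size comparison: a structured signal compresses far
# better than white noise once both are quantised to 8 bits.
import zlib
import numpy as np

def compressed_size(x):
    """Byte count of the zlib-compressed 8-bit quantisation of x."""
    q = (255 * (x - x.min()) / (x.max() - x.min())).astype(np.uint8)
    return len(zlib.compress(q.tobytes()))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 16384)
signal = np.sin(2 * np.pi * 50 * t)   # structured, GW-like stand-in
noise = rng.standard_normal(16384)    # white noise

# The structured signal yields a much smaller compressed size.
print(compressed_size(signal), compressed_size(noise))
```

The same contrast shows up when comparing the autoencoder's codes for a GW segment against codes for pure noise, which is the experiment the notebook proposes.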

Annotated Description

References

https://blog.keras.io/building-autoencoders-in-keras.html
