Commit 59c4ebe6 authored by Marius Kriegerowski's avatar Marius Kriegerowski

Merge branch 'fft_convolve' into 'master'

ifc: use scipy fftconvolve instead of numpy convolve.

Performance increases significantly for long time windows when using fftconvolve instead of convolve. I tested two data sets: ASPO and the alentejo example data set. Check out the attached terminal output for the alentejo example.
At the top: the current master branch version. The numbers between the lines are the processing times needed for the respective time window.
At the bottom: the proposed version using fftconvolve. It is a bit faster, as you can see.
However, the ifc pre-processing of Jose's ASPO application accelerates by a factor of about 35 when using fftconvolve!
The maximum relative (!) difference in trace amplitude between the two versions is about 1e-6. Hence negligible, I think. Also, the detections are identical.
...Just faster.
[lassie_compare.txt](/uploads/3f7e1cf3054ca9e8f355e0df6419244f/lassie_compare.txt)
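
For reference, a minimal sketch (not part of this merge request) of how the timing and equivalence check described above could be reproduced on a synthetic array. The array and taper sizes are arbitrary placeholders, and the 1e-6 figure quoted above refers to the real data sets, not to this toy signal:

import time
import numpy as num
from scipy.signal import fftconvolve

data = num.random.randn(100000)     # stand-in for a squared, band-passed trace
taper = num.hanning(501)            # smoothing window

t0 = time.time()
direct = num.convolve(data, taper)  # current master: time-domain convolution
t_direct = time.time() - t0

t0 = time.time()
fast = fftconvolve(data, taper)     # proposed: FFT-based convolution
t_fast = time.time() - t0

# maximum difference relative to the peak amplitude of the direct result
rel_diff = num.max(num.abs(direct - fast)) / num.max(num.abs(direct))
print('speedup: %.1fx, max. relative difference: %g' % (t_direct / t_fast, rel_diff))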

See merge request !3
parents a9b0c4b1 1e254428
import sys
import logging
import numpy as num
from scipy.signal import fftconvolve
from pyrocko.guts import Object, String, Float
from pyrocko import trace, autopick, util
from lassie import shifter, common
@@ -60,13 +61,12 @@ class WavePacketIFC(IFC):
         tr.highpass(4, self.fmin, demean=True)
         tr.lowpass(4, self.fmax, demean=False)
         tr.ydata = tr.ydata**2
         n = int(num.round(1./fsmooth / tr.deltat))
         taper = num.hanning(n)
-        tr.set_ydata(num.convolve(tr.get_ydata(), taper))
+        tr.set_ydata(fftconvolve(tr.get_ydata(), taper))
         tr.set_ydata(num.maximum(tr.ydata, 0.0))
         tr.shift(-(n/2.*tr.deltat))
         try:
             tr.downsample_to(deltat_cf, snap=True, demean=False)
         except util.UnavailableDecimation as e:
......
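
For readers unfamiliar with the ifc smoothing step touched here, below is a hedged, standalone sketch of what the changed lines do, using a plain numpy array instead of a pyrocko trace; deltat and fsmooth are placeholder values, not taken from the merge request:

import numpy as num
from scipy.signal import fftconvolve

deltat = 0.01                         # sample spacing [s], placeholder
fsmooth = 2.0                         # smoothing frequency [Hz], placeholder
ydata = num.random.randn(50000)**2    # stands in for the squared, band-passed trace

# taper length corresponds to the smoothing period 1/fsmooth
n = int(num.round(1./fsmooth / deltat))
taper = num.hanning(n)

# FFT-based convolution replaces num.convolve; the results agree up to tiny
# floating point differences but scale much better for long time windows
smoothed = fftconvolve(ydata, taper)
smoothed = num.maximum(smoothed, 0.0)  # clip small negative artefacts

# the 'full' convolution output is n-1 samples longer and delayed by about
# n/2 samples; the original code compensates with tr.shift(-(n/2.*tr.deltat))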