Thread: Re: a problem while downsampling sac data with decimate

Started: 2010-05-26 16:13:10
Last activity: 2010-07-09 23:01:13
Topics: SAC Help
Thanks, Milton, for the quick reply.

Since phase plays an important role in our later data processing, I am
wondering whether interp or decimate preserves the phase at low frequencies,
for example below 0.4 Hz. It seems that the decimate command applies an
anti-alias filter, but the documentation for interp does not mention one. By
comparing the original data with the interpolated and decimated versions over
a short time window, say two seconds, we find that all the decimated data
points appear in the original data, while the interpolated output adds new
points that differ from the original data.

So does interp pre-filter the original data to low frequencies in the same
way as the decimate command? Are both commands phase-preserving?

Thanks again for all your suggestions.

weitao
2010/5/25 Milton P. Plasencia Linares <mplasencia<at>ogs.trieste.it>


Hi Weitao,

Yes, when you decimate (5-5-2-2) the header DELTA is 9.99999E-01
(SAC 101.3b, 64-bit, Fedora Linux).

I ran a test and have attached the PS file. I downsampled using the
decimate and interpolate commands (see figure). With its default values,
decimate applies an anti-aliasing FIR filter; interpolate uses the
Wiggins method. I see that changing the header with 'ch delta 1.0' does
not affect the quantity or form of the data.
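
In outline, the test can be reproduced with a macro like the following
(the file names here are placeholders, not the ones actually used):

  * compare decimate (anti-aliasing FIR) against interpolate (Wiggins)
  read raw.bhz.sac
  decimate 5
  decimate 5
  decimate 2
  decimate 2
  * header DELTA now reads 9.99999E-01
  write dec.sac
  read raw.bhz.sac
  interpolate delta 1.0
  write int.sac
  * overlay the original and the two downsampled versions
  read raw.bhz.sac dec.sac int.sac
  plot1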

In the figure, for this case, I think that interpolate does the better
job; at least it did not completely filter out the EQ signal.

I hope this helps you.

Cheers,

Milton

**********************************
Milton P. PLASENCIA LINARES

Dipartimento Centro di Ricerche Sismologiche
Istituto Nazionale di Oceanografia e di Geofisica Sperimentale - OGS

Borgo Grotta Gigante 42/C
(34010) Sgonico - TRIESTE - ITALIA
Tel: +39-040-2140136
Fax: +39-040-327307

E-mail: mplasencia<at>ogs.trieste.it

ASAIN (Antarctic Seismographic Argentinean Italian Network)
*********************************



Quoting "weitao wang" <wangwtustc<at>gmail.com>:

Hi All

When I tried to downsample SAC data using the decimate command, I
encountered a curious problem with the delta change. Suppose we have data
sampled at 100 Hz and we want to downsample it to 1 Hz. The commands I
used were:

decimate 5   (now delta = 0.05)
decimate 5   (now delta = 0.25)
decimate 2   (now delta = 0.5)
decimate 2   (now delta = 0.999999)

In the final step the delta is 0.999999 instead of 1.0. Since our later
processing needs to check the consistency of sachdr.delta, and we have some
LHZ SAC files with delta = 1.0, we need the downsampled data's delta to be
1.0, not 0.999999.

Is there any way to avoid the 0.999999 value? Or can we forcibly change
delta to 1.0 using 'ch delta 1.0' without bad effects on later processing?

And there is another question: can the SAC command interp be used to
downsample the original data? How does it differ from decimate?

Thanks for all your help in advance.

wt



  • Based on looking at the code, I can confirm that interpolate does not
    itself apply an anti-aliasing filter. You would have to do that first
    yourself.

    Sometimes one wants to increase the number of points. In some
    applications I increased npts to be an exact power of 2.

    If you want to compare the effects of interpolate and decimate, here is a
    suggestion. (I have not tried it, so ...)

    Take a file for which you have a RDSEED RESP file and use the evalresp
    response in transfer with FREQLIMITS set to filter the low frequencies
    but not the high. That will give you a lot of amplitude at high
    frequencies in the saved file.

    Take that file and apply interpolate and decimate separately and compare
    the outputs.
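
    A sketch of that test as a SAC macro (the data file and RESP names here
    are placeholders; the FREQLIMITS values assume 100 samples/s data, i.e.
    a 50 Hz Nyquist):

        read test.bhz.sac
        rmean; rtrend; taper
        * remove the instrument response, cutting frequencies below about
        * 0.05 Hz but passing essentially everything up to the Nyquist
        transfer from evalresp fname RESP.IU.ANMO.00.BHZ to none freqlimits 0.02 0.05 45 50
        write hf.sac
        * downsample the high-frequency-rich file two ways
        read hf.sac
        decimate 5; decimate 5; decimate 2; decimate 2
        write hf.dec.sac
        read hf.sac
        interpolate delta 1.0
        write hf.int.sac
        * compare the two outputs
        read hf.dec.sac hf.int.sac
        plot1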




    • This thread was in late May. Partly as a result of it, I looked more
      closely at the INTERPOLATE command. For version 101.4, we replaced the
      truncate with a round-off when calculating NPTS, but since then I have
      found other things that should probably be changed. If one uses the current
      version carefully, it should work satisfactorily, so a patch is not
      required. However, I thought I should post to this list to see if anyone
      has any problems with my proposed changes. I am attaching a PDF version
      of the revised HELP file.

      In the current version, there is a default DELTA of 0.025 s. Hence, if
      one enters INTERPOLATE NPTS 4096, the end time will change to a value that
      depends on how the input DELTA compares to 0.025. I think that the start
      and end times should not change with INTERPOLATE, and do not understand
      the need/use of a default DELTA. In the revised version, if NPTS is
      called for as an argument, DELTA is calculated -- just as if DELTA is
      input, NPTS is calculated.
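
      As a minimal sketch of the two calling forms under the proposed
      behavior (the header values here are hypothetical: B = 0, E = 40,
      DELTA = 0.01, NPTS = 4001):

          interpolate delta 0.02
          * NPTS is recalculated to 2001; B and E are unchanged
          interpolate npts 8001
          * DELTA is recalculated to 0.005; B and E are unchanged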

      I do not understand when one would use BEGIN. It seems to me more logical
      to use CUT before calling INTERPOLATE. I have left it in, so it can be
      used with either NPTS or DELTA.
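
      For example, to window first and then resample (the times and file
      name are placeholders):

          cut 10 30
          read test.bhz.sac
          * only the 20 s window that was read in gets resampled
          interpolate delta 0.05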

      In the current HELP file, there are logical arguments for NPTS and BEGIN
      that do not need to be mentioned explicitly.

      Another calling argument is EPSILON. It is incorrectly described in both
      the current HELP file and the source code. It is effectively a "water
      level" that gives a lower limit to the ratios that are used in the Wiggins
      procedure. I have used this procedure for many years, both in my own
      programs and in those written by others. I have never seen a reason to
      vary EPSILON, and tests show that for a reasonable time series there is
      no effect from changing EPSILON by factors of 100. I have left it in, but not
      advertised it in the HELP file.

      One final comment about this interpolation routine: when Wiggins wrote it
      in 1976, it was intended to be used for hand-digitized data. Typically,
      one included extrema and (perhaps) inflection points. A difference between
      this scheme and cubic splines is that the output file will not have greater
      extrema than the input data points. It is also not as smooth as a cubic
      spline interpolation because second derivatives are not forced to be
      continuous. This scheme works well for data if the DELTA is not changed
      too much. Following up on the original thread, it should probably not be
      done if one is down-sampling by a large amount both because the fit will
      probably not be very good and because there is no anti-aliasing filtering
      done within INTERPOLATE. For such a case, it is best to use DECIMATE.
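
      As a sketch of that advice (file names are placeholders; DECIMATE
      accepts factors of 2 through 7, hence the cascade):

          * modest change: resample 40 samples/s data onto a 50 samples/s grid
          read test40.sac
          interpolate delta 0.02
          * large downsampling, e.g. 100 samples/s to 1 sample/s: use DECIMATE
          read test100.sac
          decimate 5; decimate 5; decimate 2; decimate 2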

      Please comment -- especially if you disagree with anything I have said.

      Arthur Snoke
      snoke<at>vt.edu