Kuang He
2008-09-04 03:38:20
Dear Brian,
Probably folks here have much more experience building testing
suites than I do, but since you asked, I'll share my two cents.
If I understand you correctly, the idea behind the macro approach you
mentioned is to use something like Expect
(http://expect.nist.gov/) to interact with SAC as if a human being were
operating it, or, more simply, to write shell scripts that feed
commands to SAC through *nix pipes. This is certainly doable, but I
agree with you that it does not offer fine-grained control over all
the functions we want to test, and it could be harder to
implement (using Expect would surely make things even more complicated).
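For concreteness, here is a minimal sketch of the pipe-driven variant. I
have written it in Perl to match the test examples below, but a plain shell
here-document would work just as well; the SAC commands and the output file
name are purely illustrative, not a proposed test.

    #!/usr/bin/perl
    # Sketch only: drive the sac binary through a pipe, the way a user
    # would type commands interactively.  The SAC commands and the
    # output file name are illustrative.
    use strict;
    use warnings;

    open(my $sac, '|-', 'sac') or die "cannot start sac: $!";
    print $sac "funcgen seismogram\n";      # generate the built-in sample seismogram
    print $sac "rmean\n";                   # remove the mean
    print $sac "write sample_rmean.sac\n";  # write the result to disk
    print $sac "quit\n";
    close($sac) or warn "sac exited with nonzero status: $?";

The obvious drawback, as you point out, is that we can only observe SAC
through whatever it prints or writes to disk, not the individual functions
inside it.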
Many open source software packages already ship a testing suite with
their source code. I think they cannot afford not to have one, since
most of them are under active development. Humans are not perfect, so
no matter how careful we are, we are bound to introduce new bugs while
developing new features, fixing old bugs, and so on. I have looked at
Perl's testing suite before; it lives in a directory called t in the
source tree. I put one of the files here as an example, so that people
can get a feel for what such tests look like:
http://maxwell.phys.uconn.edu/~icrazy/sac/num.pl.txt
For example, we can generate a seismogram and compute its FFT with an
FFT implementation that is known to work well, then store both the
original values and the resulting spectrum in the FFT part of the
testing suite. Each time we run the suite, it computes a spectrum from
that same dataset using SAC's FFT function and compares the result with
the known-good one. If the two sets of results agree (because of
floating-point precision, we consider them equal as long as they differ
by no more than a small tolerance), the suite prints something like
"FFT ok", otherwise "FFT not ok". We can write similar tests for the
other functions in SAC. It will take quite some time to implement the
testing suite, but I think it is well worth it.
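To make the comparison step concrete, here is a rough sketch in the spirit
of Perl's t/ tests. It assumes the reference spectrum and the spectrum just
computed by SAC have already been dumped to plain-text files with one
amplitude value per line; the file names and the tolerance are made up for
the example.

    #!/usr/bin/perl
    # Sketch only: compare SAC's FFT output against a stored reference
    # within a small tolerance and report in an "ok"/"not ok" style.
    # File names and tolerance are illustrative.
    use strict;
    use warnings;

    my $TOL = 1e-5;    # relative tolerance for floating-point comparison

    sub read_column {
        my ($file) = @_;
        open(my $fh, '<', $file) or die "cannot open $file: $!";
        chomp(my @vals = <$fh>);
        close($fh);
        return @vals;
    }

    my @ref = read_column('fft_reference.txt');   # known-good spectrum
    my @got = read_column('fft_output.txt');      # spectrum from SAC's FFT

    my $ok = (@ref == @got);                      # same number of points?
    if ($ok) {
        for my $i (0 .. $#ref) {
            my $denom = abs($ref[$i]) > $TOL ? abs($ref[$i]) : 1.0;
            if (abs($got[$i] - $ref[$i]) / $denom > $TOL) {
                $ok = 0;
                last;
            }
        }
    }
    print $ok ? "FFT ok\n" : "FFT not ok\n";

A real version could of course use Perl's Test::More and emit proper TAP
output, so that many such checks can be run and summarized together.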
By the way, how come "Solaris is the trouble one in the group"? What
is wrong with Solaris? (I don't use Solaris.)
As a related note, I was wondering how we are handling builds and tests
across different systems, since SAC needs to support Linux, OSX and
Solaris. There is a tool called Buildbot (http://buildbot.net/trac)
that automatically rebuilds and tests the source tree on all available
systems each time something changes (e.g., whenever someone commits a
change to the CVS repository), so that build problems are pinpointed
quickly. Buildbot is used by well-known open source projects such as
KDE, Python, Zope and OpenOffice. For example, here is one of the
Buildbot summary pages for Python, to give you an idea of what it looks
like:
http://www.python.org/dev/buildbot/stable/
Best regards,
--
Kuang He
Department of Physics
University of Connecticut
Storrs, CT 06269-3046
Tel: +1.860.486.4919
Web: http://www.phys.uconn.edu/~he/
On Wed, Sep 3, 2008 at 3:11 PM, Brian Savage <savage<at>uri.edu> wrote:
A testing suite is currently only residing on my system.
I have used it minimally to test the output of certain commands, but it
would be nice to have a better or standard way
to create and test the system before it is released. This is especially
true for the sacio and sac libraries, as well as the sacswap program
(thank you very much, by the way).
I can see a testing suite as two different beasts. One version might be an
interface to SAC itself and the other would be to specific functions within
SAC. A macro vs micro approach. Keep in mind that the tests need to be run
on Linux, OSX and Solaris. Solaris is the trouble one in the group. My
feelings are that a micro approach to the functions would be easier to
implement and allow us finer grain control on how they behave.
If you would like to give your input, it would be most helpful.