Chad Trabant
2021-01-05 09:35:15
Dear Colleagues,
Have you ever collected, stored or processed large volumes of seismic data that required non-traditional strategies (HPC, cloud, distributed frameworks, etc.)? If so, please consider submitting an abstract (https://www.seismosoc.org/meetings/submission-system/) to our session at the SSA 2021 Annual Meeting in April to share your experiences.
Applications and Technologies in Large-Scale Seismic Analysis
The growth and maturation of technologies that make it easier to analyze large volumes of data have enabled new areas of research in seismology. Computational frameworks like Apache Spark and Dask augment existing tools like MPI. New programming languages like Julia, along with the emergence of scalable analysis capabilities in languages like Java and Python, supplement traditional languages like C and Fortran. New platforms like the commercial cloud offer alternatives to existing high-performance computing platforms. Technologies like these open access to a new scale of inquiry, making large-scale research in seismology more tractable than ever before. In this session, we invite researchers and data providers to share work on data-hungry applications; approaches to large-scale data collection, storage and access; and experiences with processing platforms and architectures.
Conveners
Jonathan K. MacCarthy, Los Alamos National Laboratory (jkmacc<at>lanl.gov)
Chad Trabant, IRIS Data Services (chad<at>iris.washington.edu)
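To give a flavor of the kind of workflow this session has in mind, here is a minimal illustrative sketch (not part of the session description) using Dask to fan a simple per-file measurement out across many waveform files in parallel. The directory path and the peak-amplitude metric are placeholders chosen for illustration; any per-file or per-trace computation could stand in their place.

    # Minimal sketch: parallel per-file processing with dask.delayed.
    # Assumes ObsPy and Dask are installed; "waveforms/*.mseed" is a placeholder path.
    import glob

    import dask
    import numpy as np
    from obspy import read


    @dask.delayed
    def peak_amplitude(path):
        """Read one miniSEED file and return its largest absolute amplitude."""
        stream = read(path)
        return max(np.abs(trace.data).max() for trace in stream)


    # Build a lazy task graph over all files, then execute it in parallel.
    tasks = [peak_amplitude(p) for p in glob.glob("waveforms/*.mseed")]
    peaks = dask.compute(*tasks)

The same task graph can be executed unchanged on a laptop, an HPC allocation or a cloud-hosted Dask cluster, which is the kind of portability across platforms the session aims to explore.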