Commit 0d8e2b83, authored Jan 16, 2017 by Marius Isken

Merge branch 'master' of https://gitext.gfz-potsdam.de/heimann/grond

Parents: d67b9aa6 ef24b71b
Changes: 8
README.md @ 0d8e2b83
# Grond
*What do you want to bust today?!*
A bootstrap-based probabilistic battering ram to explore solution spaces in
earthquake source parameter estimation problems.
## Installation

First, install [Pyrocko](http://pyrocko.org/current/install.html), then install
Grond:

```bash
git clone https://gitext.gfz-potsdam.de/heimann/grond.git
cd grond
sudo python setup.py install
```
## Updating an existing installation

```bash
cd grond  # change to the directory where you cloned grond to initially
git pull origin master
sudo python setup.py install
```
## Basic usage

Grond can be run as a command line tool or by calling Grond's library functions
from a Python script. For a brief description of the available options of
Grond's command line tool, run `grond --help` or `grond <subcommand> --help`.

Once dataset and configuration are ready, the command
`grond go <configfile> <eventname>` starts the optimization algorithm for a
selected event. Before running the optimization, use
`grond check <configfile> <eventname>` to debug problems with the dataset and
configuration. To get a list of event names available in a configured setup,
run `grond events <configfile>`.

During the optimization, results are aggregated in a directory, referred to in
the configuration as `<rundir>`. To visualize the results, run
`grond plot <plotnames> <rundir>`. The results can be exported in various ways
by running the subcommand `grond export <what> <rundir>`.
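
For illustration, a typical session might look like the sketch below. The
configuration file name `config.yml` is hypothetical, `gfz2015sfdd` stands in
for one of your configured event names, and the run directory follows the
`rundir_template` and `name_template` settings of the example configuration
below.

```bash
# list the events defined in the configuration
grond events config.yml

# check dataset and configuration for a selected event
grond check config.yml gfz2015sfdd

# run the optimization
grond go config.yml gfz2015sfdd

# visualize and export results from the run directory
grond plot <plotnames> gruns/cmt_surface_wave_gfz2015sfdd.run
grond export best gruns/cmt_surface_wave_gfz2015sfdd.run
```
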
## Example configuration file
...
...
```diff
@@ -9,7 +49,7 @@
 ---
 !grond.Config

 # Path, where to store output (run-directories)
-rundir_template: 'gruns/%(problem_name)s.run'
+rundir_template: 'gruns/${problem_name}.run'

 # -----------------------------------------------------------------------------
```
...
...
```diff
@@ -19,20 +59,23 @@ rundir_template: 'gruns/%(problem_name)s.run'
 dataset_config: !grond.DatasetConfig

   # List of files with station coordinates
-  stations_stationxml_paths: [meta/gfz2015sfdd/responses.stationxml]
+  stations_stationxml_paths:
+  - 'events/${event_name}/responses.stationxml'

   # File with hypocenter information and possibly reference solution
-  events_path: catalogs/gfz2015sfdd.txt
+  events_path: 'events/${event_name}/event.txt'

   # List of directories with raw waveform data
-  waveform_paths: [data/gfz2015sfdd/raw]
+  waveform_paths:
+  - 'data/${event_name}/raw'

   # List of files with instrument response information
-  responses_stationxml_paths: [meta/gfz2015sfdd/responses.stationxml]
+  responses_stationxml_paths:
+  - 'meta/${event_name}/responses.stationxml'

   # List with station/channel codes to exclude
   blacklist: [OTAV, RCBR, PTGA, AXAS2, SNAA, PAYG, RAR, SDV, VAL, XMAS, ODZ,
-              MSVF, SHEL, SUR, ROSA, 'IU.PTCN.00.T', 'C1.VA02..T', RPN]
+              Z51A, MSVF, SHEL, SUR, ROSA, 'IU.PTCN.00.T', 'C1.VA02..T', RPN]

 # -----------------------------------------------------------------------------
```
...
...
```diff
@@ -46,7 +89,8 @@ engine_config: !grond.EngineConfig
   gf_stores_from_pyrocko_config: false

   # List of directories with GF stores
-  gf_store_superdirs: ['gf_stores']
+  gf_store_superdirs:
+  - 'gf_stores'

 # -----------------------------------------------------------------------------
```
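
Taken together, the `${event_name}`-based paths in this example imply a
per-event directory layout roughly like the following sketch (the event name
`gfz2015sfdd` is carried over from the old example paths and is purely
illustrative):

```
.
├── events/
│   └── gfz2015sfdd/
│       ├── event.txt               # hypocenter information (events_path)
│       └── responses.stationxml    # station coordinates (stations_stationxml_paths)
├── data/
│   └── gfz2015sfdd/
│       └── raw/                    # raw waveform data (waveform_paths)
├── meta/
│   └── gfz2015sfdd/
│       └── responses.stationxml    # instrument responses (responses_stationxml_paths)
└── gf_stores/                      # Green's function stores (gf_store_superdirs)
```
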
...
...
```diff
@@ -61,10 +105,10 @@ target_configs:
 - !grond.TargetConfig

   # Name of the super-group to which this contribution belongs
-  super_group: time_domain
+  super_group: 'time_domain'

   # Name of the group to which this contribution belongs
-  group: rayleigh
+  group: 'rayleigh'

   # Minimum distance of stations to be considered
   distance_min: 1000e3
```
...
...
```diff
@@ -73,7 +117,7 @@ target_configs:
   distance_max: 10000e3

   # List with names of channels to be considered
-  channels: [Z]
+  channels: ['Z']

   # How to weight stations from this contribution in the global misfit
   weight: 1.0
```
...
...
```diff
@@ -82,49 +126,49 @@ target_configs:
   inner_misfit_config: !grond.InnerMisfitConfig

     # Frequency band [Hz] of acausal filter (flat part of frequency taper)
-    fmin: 0.001
-    fmax: 0.005
+    fmin: 0.002
+    fmax: 0.008

     # Factor defining fall-off of frequency taper
     # (zero at fmin/ffactor, fmax*ffactor)
     ffactor: 1.5

     # Time window to include in the data fitting. Times can be defined offset
-    # to given phase arrivals. E.g. 'begin-600' would mean 600 s before arrival
+    # to given phase arrivals. E.g. 'begin-100' would mean 100 s before arrival
     # of the phase named 'begin', which must be defined in the travel time
     # tables in the GF store.
-    tmin: 'begin-600'
-    tmax: 'end+600'
+    tmin: '{stored:begin}-100'
+    tmax: '{stored:end}+100'

     # How to fit the data (available choices: 'time_domain',
     # 'frequency_domain', 'absolute', 'envelope', 'cc_max_norm')
-    domain: time_domain
+    domain: 'time_domain'

     # How to interpolate the Green's functions (available choices:
     # 'nearest_neighbor', 'multilinear')
-    interpolation: nearest_neighbor
+    interpolation: 'nearest_neighbor'

     # Name of GF store to use
-    store_id: global_20s_shallow
+    store_id: 'global_20s_shallow'

 # A second contribution to the misfit function (for descriptions, see above)
 - !grond.TargetConfig
-  super_group: time_domain
-  group: love
+  super_group: 'time_domain'
+  group: 'love'
   distance_min: 1000e3
   distance_max: 10000e3
-  channels: [T]
+  channels: ['T']
   weight: 1.0
   inner_misfit_config: !grond.InnerMisfitConfig
-    fmin: 0.001
-    fmax: 0.005
+    fmin: 0.002
+    fmax: 0.008
     ffactor: 1.5
-    tmin: 'begin-600'
-    tmax: 'end+600'
-    domain: time_domain
-    interpolation: nearest_neighbor
-    store_id: global_20s_shallow
+    tmin: '{stored:begin}-100'
+    tmax: '{stored:end}+100'
+    domain: 'time_domain'
+    interpolation: 'nearest_neighbor'
+    store_id: 'global_20s_shallow'

 # -----------------------------------------------------------------------------
```
...
...
```diff
@@ -135,7 +179,7 @@ target_configs:
 problem_config: !grond.CMTProblemConfig

   # Name used when creating output directory
-  name_template: cmt_surface_wave_%(event_name)s
+  name_template: 'cmt_surface_wave_${event_name}'

   # Definition of model parameter space to be searched in the optimization
   ranges:
```
...
...
```diff
@@ -170,7 +214,7 @@ problem_config: !grond.CMTProblemConfig
   nbootstrap: 100

   # Type of moment tensor to restrict to (choices: 'full', 'deviatoric')
-  mt_type: deviatoric
+  mt_type: 'deviatoric'

   # Whether to apply automatic weighting to balance the effects of geometric
   # spreading etc.
```
...
...
```diff
@@ -184,9 +228,9 @@ problem_config: !grond.CMTProblemConfig
 solver_config: !grond.SolverConfig

   # Distribution used when drawing new candidate models (choices: 'normal',
-  # 'multivariate_normal'). Used in 'transition', 'explorative', and
-  # 'non_explorative' phase
-  sampler_distribution: normal
+  # 'multivariate_normal') (used in 'transition', 'explorative', and
+  # 'non_explorative' phase)
+  sampler_distribution: 'normal'

   # Number of iterations to operate in 'uniform' mode
   niter_uniform: 1000
```
...
...
apps/grond @ 0d8e2b83
...
...
```diff
@@ -24,22 +24,24 @@ def d2u(d):
 subcommand_descriptions = {
     'init': 'print example configuration',
     'events': 'print available event names for given configuration',
     'check': 'check data and configuration',
     'go': 'run Grond optimization',
     'forward': 'run forward modelling',
     'harvest': 'manually run harvesting',
     'map-geometry': 'make station map',
     'plot': 'plot optimization result',
     'export': 'export results',
 }

 subcommand_usages = {
     'init': 'init [options]',
-    'check': 'check <configfile> [options]',
-    'go': 'go <configfile> [options]',
-    'forward': 'forward <rundir> [options]',
+    'events': 'events <configfile>',
+    'check': 'check <configfile> <eventnames> ... [options]',
+    'go': 'go <configfile> <eventnames> ... [options]',
+    'forward': (
+        'forward <rundir> [options]',
+        'forward <configfile> <eventnames> ... [options]'),
     'harvest': 'harvest <rundir> [options]',
     'map-geometry': 'map-geometry <configfile> [options]',
     'plot': 'plot <plotnames> <rundir> [options]',
     'export': 'export (best|mean|ensemble|stats) <rundirs> ... [options]',
 }
```
...
...
```diff
@@ -56,11 +58,11 @@ usage = '''%(program_name)s <subcommand> [options] [--] <arguments> ...

 Subcommands:

     init            %(init)s
     events          %(events)s
     check           %(check)s
     go              %(go)s
     forward         %(forward)s
     harvest         %(harvest)s
     map-geometry    %(map_geometry)s
     plot            %(plot)s
     export          %(export)s
```
...
...
```diff
@@ -79,9 +81,9 @@ def add_common_options(parser):
         type='choice',
         choices=('critical', 'error', 'warning', 'info', 'debug'),
         default='info',
         help='set logger level to '
              '"critical", "error", "warning", "info", or "debug". '
              'Default is "%default".')


 def process_common_options(options):
```
...
...
```diff
@@ -174,33 +176,68 @@ def command_init(args):
     print config


+def command_events(args):
+    def setup(parser):
+        pass
+
+    parser, options, args = cl_parse('events', args, setup)
+    if len(args) != 1:
+        help_and_die(parser, 'missing arguments')
+
+    config_path = args[0]
+    config = grond.read_config(config_path)
+
+    for event_name in grond.get_event_names(config):
+        print event_name
+
+
 def command_check(args):
     def setup(parser):
         parser.add_option(
-            '--event', dest='event_name', metavar='EVENTNAME',
-            help='process only event EVENTNAME')
+            '--target-ids', dest='target_string_ids', metavar='TARGET_IDS',
+            help='process only selected targets. TARGET_IDS is a '
+                 'comma-separated list of target IDs. Target IDs have the '
+                 'form SUPERGROUP.GROUP.NETWORK.STATION.LOCATION.CHANNEL.')

-    parser, options, args = cl_parse('check', args, setup)
-    if len(args) != 1:
-        help_and_die(parser, 'no config file given')
+        parser.add_option(
+            '--plot', dest='show_plot', action='store_true',
+            help='plot sample synthetics and data.')
+
+        parser.add_option(
+            '--waveforms', dest='show_waveforms', action='store_true',
+            help='show raw, restituted, projected, and processed waveforms')
+
+        parser.add_option(
+            '--nrandom', dest='n_random_synthetics', metavar='N', type='int',
+            default=10,
+            help='set number of random synthetics to forward model (default: '
+                 '10). If set to zero, create synthetics for the reference '
+                 'solution.')

-    event_names = None
-    if options.event_name:
-        event_names = [options.event_name]
+    parser, options, args = cl_parse('check', args, setup)
+    if len(args) < 2:
+        help_and_die(parser, 'missing arguments')
+
+    config_path = args[0]
+    event_names = args[1:]

     config = grond.read_config(config_path)

-    grond.check(config, event_names=event_names)
+    target_string_ids = None
+    if options.target_string_ids:
+        target_string_ids = options.target_string_ids.split(',')
+
+    grond.check(
+        config,
+        event_names=event_names,
+        target_string_ids=target_string_ids,
+        show_plot=options.show_plot,
+        show_waveforms=options.show_waveforms,
+        n_random_synthetics=options.n_random_synthetics)


 def command_go(args):
     def setup(parser):
-        parser.add_option(
-            '--event', dest='event_name', metavar='EVENTNAME',
-            help='process only event EVENTNAME')
         parser.add_option(
             '--force', dest='force', action='store_true',
             help='overwrite existing run directory')
```
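
For context, a hypothetical invocation of the reworked `check` subcommand shown
in this hunk could look as follows; the config file name, event name and target
ID are placeholders:

```bash
# placeholder config file, event name and target ID
grond check config.yml gfz2015sfdd \
    --target-ids=time_domain.rayleigh.GE.STU..Z \
    --waveforms --plot --nrandom=5
```
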
...
...
```diff
@@ -212,20 +249,18 @@ def command_go(args):
             help='set number of events to process in parallel')

     parser, options, args = cl_parse('go', args, setup)
-    if len(args) != 1:
-        help_and_die(parser, 'no config file given')
+    if len(args) < 2:
+        help_and_die(parser, 'missing arguments')

     config_path = args[0]
+    event_names = args[1:]
     config = grond.read_config(config_path)

     if options.status == 'quiet':
         status = ()
     else:
         status = tuple(options.status.split(','))

-    event_names = None
-    if options.event_name:
-        event_names = [options.event_name]
-
     grond.go(
         config,
         event_names=event_names,
```
...
...
```diff
@@ -236,19 +271,18 @@ def command_go(args):
 def command_forward(args):
     def setup(parser):
-        parser.add_option(
-            '--event', dest='event_name', metavar='EVENTNAME',
-            help='process only event EVENTNAME')
+        pass

     parser, options, args = cl_parse('forward', args, setup)
-    if len(args) != 1:
-        help_and_die(parser, 'incorrect number of arguments')
+    if len(args) < 1:
+        help_and_die(parser, 'missing arguments')

-    event_names = None
-    if options.event_name:
-        event_names = [options.event_name]
+    event_names = args[1:]

-    run_path, = args
+    if not event_names:
+        help_and_die(parser, 'no event names given')
+
+    run_path = args[0]
     grond.forward(run_path, event_names=event_names)
```
...
...
```diff
@@ -278,19 +312,6 @@ def command_harvest(args):
         weed=options.weed)


-def command_map_geometry(args):
-    from grond import plot
-    parser, options, args = cl_parse('map-geometry', args)
-    if len(args) != 2:
-        help_and_die(parser, 'two arguments required')
-
-    config_path = args[0]
-    output_path = args[1]
-    config = grond.read_config(config_path)
-    plot.map_geometry(config, output_path)
-
-
 def command_plot(args):
     from grond import plot
```
...
...
```diff
@@ -304,8 +325,8 @@ def command_plot(args):
         help='comma-separated list of ouptut formats (default: pdf)')

     parser.add_option(
-        '--dpi', dest='dpi', type=float, default=72.,
-        help='DPI setting for raster formats (default=72)')
+        '--dpi', dest='dpi', type=float, default=120.,
+        help='DPI setting for raster formats (default=120)')

     plotnames_avail = plot.available_plotnames()
```
...
...
src/__init__.py @ 0d8e2b83
```python
from core import *  # noqa
from cmt import *  # noqa
from double_dc import *  # noqa
from dataset import *  # noqa
```
src/cmt.py @ 0d8e2b83
...
...
```diff
@@ -37,7 +37,6 @@ class CMTProblem(core.Problem):
         core.Parameter('rel_moment_iso', label='$M_{0}^{ISO}/M_{0}$'),
         core.Parameter('rel_moment_clvd', label='$M_{0}^{CLVD}/M_{0}$')]

     base_source = gf.Source.T()
     targets = List.T(core.MisfitTarget.T())
     ranges = Dict.T(String.T(), gf.Range.T())
```
...
...
```diff
@@ -164,28 +163,45 @@ class CMTProblem(core.Problem):
         return out

-    def evaluate(self, x, result_mode='sparse'):
+    def evaluate(self, x, result_mode='sparse', mask=None):
         source = self.unpack(x)
         engine = self.get_engine()
         for target in self.targets:
             target.set_result_mode(result_mode)

-        resp = engine.process(source, self.targets)
+        if mask is not None:
+            assert len(mask) == len(self.targets)
+            targets_ok = [
+                target for (target, ok) in zip(self.targets, mask) if ok]
+        else:
+            targets_ok = self.targets

-        data = []
-        results = []
-        for target, result in zip(self.targets, resp.results_list[0]):
+        resp = engine.process(source, targets_ok)
+
+        if mask is not None:
+            ires_ok = 0
+            results = []
+            for target, ok in zip(self.targets, mask):
+                if ok:
+                    results.append(resp.results_list[0][ires_ok])
+                    ires_ok += 1
+                else:
+                    results.append(
+                        gf.SeismosizerError(
+                            'skipped because of previous failure'))
+        else:
+            results = list(resp.results_list[0])
+
+        data = []
+        for target, result in zip(self.targets, results):
             if isinstance(result, gf.SeismosizerError):
                 logger.debug(
                     '%s.%s.%s.%s: %s' % (target.codes + (str(result),)))

                 data.append((None, None))
                 results.append(None)
             else:
                 data.append((result.misfit_value, result.misfit_norm))
                 results.append(result)

         ms, ns = num.array(data, dtype=num.float).T

         if result_mode == 'full':
```
...
...
```diff
@@ -200,7 +216,7 @@ class CMTProblem(core.Problem):
         resp = engine.process(source, plain_targets)
         results = []
-        for target, result in zip(self.targets, resp.result_list[0]):
+        for target, result in zip(self.targets, resp.results_list[0]):
             if isinstance(result, gf.SeismosizerError):
                 logger.debug(
                     '%s.%s.%s.%s: %s' % (target.codes + (str(result),)))
```
...
...
```diff
@@ -232,7 +248,6 @@ class CMTProblem(core.Problem):
     def inter_group_weights2(self, ns):
         group, ngroups = self.get_group_mask()
         ws = num.zeros(ns.shape)
         for igroup in xrange(ngroups):
             mask = group == igroup
```
...
...
```diff
@@ -269,7 +284,6 @@ class CMTProblem(core.Problem):
     def global_misfits(self, misfits):
         ws = self.get_target_weights()[num.newaxis, :] * \
             self.inter_group_weights2(misfits[:, :, 1])
         gms = num.sqrt(
             num.nansum((ws*misfits[:, :, 0])**2, axis=1) /
             num.nansum((ws*misfits[:, :, 1])**2, axis=1))
         return gms
```
...
...
```diff
@@ -303,7 +317,7 @@ class CMTProblemConfig(core.ProblemConfig):
             event_time=util.time_to_str(event.time))

         problem = CMTProblem(
-            name=self.name_template % subs,
+            name=core.expand_template(self.name_template, subs),
             apply_balancing_weights=self.apply_balancing_weights,
             base_source=base_source,
             targets=targets,
```
...
...
src/core.py @ 0d8e2b83
(This diff is collapsed and not shown.)
src/dataset.py @ 0d8e2b83
```diff
 import copy
 import logging
 from collections import defaultdict

 import numpy as num

-from pyrocko import util, pile, model, config, trace, snuffling, gui_util
+from pyrocko import util, pile, model, config, trace, \
+    marker as pmarker
 from pyrocko.fdsn import enhanced_sacpz, station as fs
 from pyrocko.guts import Object, Tuple, String, Float, dump_all, load_all
```
...
...
```diff
@@ -16,11 +18,22 @@ class InvalidObject(Exception):
 class NotFound(Exception):
-    def __init__(self, *value):
-        self.value = value
+    def __init__(self, reason, codes=None, time_range=None):
+        self.reason = reason
+        self.time_range = time_range
+        self.codes = codes

     def __str__(self):
-        return ' '.join([str(v) for v in self.value])
+        s = self.reason
+        if self.codes:
+            s += ' (%s)' % '.'.join(self.codes)
+
+        if self.time_range:
+            s += ' (%s - %s)' % (
+                util.time_to_str(self.time_range[0]),
+                util.time_to_str(self.time_range[1]))
+
+        return s


 class DatasetError(Exception):
```
...
...
```diff
@@ -47,7 +60,7 @@ def dump_station_corrections(station_corrections, filename):
 class Dataset(object):

-    def __init__(self):
+    def __init__(self, event_name=None):
         self.events = []
         self.pile = pile.Pile()
         self.stations = {}
```
...
...
```diff
@@ -66,6 +79,7 @@ class Dataset(object):
         self.synthetic_test = None
         self._picks = None
         self._cache = {}
+        self._event_name = event_name

     def empty_cache(self):
         self._cache = {}
```
...
...
```diff
@@ -84,15 +98,20 @@ class Dataset(object):
                 self.stations[station.nsl()] = station

         if pyrocko_stations_filename is not None:
             logger.debug(
                 'Loading stations from file %s' % pyrocko_stations_filename)

             for station in model.load_stations(pyrocko_stations_filename):
                 self.stations[station.nsl()] = station

         if stationxml_filenames is not None and len(stationxml_filenames) > 0:
-            logger.debug(
-                'Loading stations from StationXML file %s' %
-                stationxml_filenames)
             for stationxml_filename in stationxml_filenames:
+                logger.debug(
+                    'Loading stations from StationXML file %s' %
+                    stationxml_filename)
                 sx = fs.load_xml(filename=stationxml_filename)
                 for station in sx.get_pyrocko_stations():
                     self.stations[station.nsl()] = station
```
...
...
```diff
@@ -116,7 +135,7 @@ class Dataset(object):
             fileformat=fileformat,
             show_progress=show_progress)

     def add_responses(self, sacpz_dirname=None, stationxml_filenames=None):
         if sacpz_dirname:
             logger.debug('Loading SAC PZ responses from %s' % sacpz_dirname)
             for x in enhanced_sacpz.iload_dirname(sacpz_dirname):
```
...
...
```diff
@@ -124,13 +143,15 @@ class Dataset(object):
         if stationxml_filenames:
             for stationxml_filename in stationxml_filenames:
                 logger.debug(
                     'Loading StationXML responses from %s' %
                     stationxml_filename)

                 self.responses_stationxml.append(
                     fs.load_xml(filename=stationxml_filename))

     def add_clippings(self, markers_filename):
-        markers = snuffling.load_markers(markers_filename)
+        markers = pmarker.load_markers(markers_filename)
         clippings = {}
         for marker in markers:
             nslc = marker.one_nslc()
```
...
...
@@ -151,15 +172,27 @@ class Dataset(object):