Well data are read from files named *TVDS*_all_xyz.csv. The
alphabetical order of these files must match the order of the
horizons, starting at the top. The files should be comma-separated,
with the columns:
X, Y, Depth (TVDSS or TVDSRD), Well name, Well time (from checkshot
or equivalent)
The well time values are optional.
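For illustration, a hypothetical 01_TVDSS_all_xyz.csv (the header
line, well names and values here are invented) might contain:
X,Y,Depth,Well,Time
3561125.56,5855129.57,2134.5,WELL_A1,1823.4
3562410.22,5856077.91,2141.0,WELL_B2,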
If the subset wells are being treated separately (i.e. if subset =
1), then a separate set of well data files named
*TVDS*_subset_xyz.csv is required. The format for these is the same
as for the regular well data (above).
Time horizons can be supplied in two ways. The new way uses the time
array which is passed into SPRINT; this array contains the file
names of the time horizons, and needs to match the method array.
If the time array is not supplied, the time horizons are read from
files called *time.dat. The order and number of these files must
match the well data.
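As a sketch, assuming three horizons (the file names here are
illustrative), the time array might be set up in the driver script as:
my @time = ( '01_TopA_time.dat',    # shallowest horizon first
             '02_TopB_time.dat',
             '03_TopC_time.dat' );  # order must match the method array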
Stacking velocity data should be as output from Velit, with the
following columns:
column 1: line name / line number / inline
column 2: CDP / shotpoint / crossline
column 3: time
column 7: VRMS
column 8: X
column 9: Y
Other grid data may be specified, for example external drift grids.
If running volumetrics, mask grids may be supplied; these should be 1
inside the area of interest and 0 outside.
If the --spillpoint flag is given, the coordinates of the inside and
outside points should be given in a file called
spillpoint_locations.txt. This is a space-separated file with the X
and Y coordinates of the inside point on the first line and those of
the outside point on the second line.
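For example, a hypothetical spillpoint_locations.txt (coordinates are
illustrative) would contain two lines, the inside point first:
3561125.56 5855129.57
3570000.00 5860000.00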
Note that if full-to-spill volumetrics are desired, then the --volumetrics
flag should NOT be used.
To combine the spillpoint calculation with a known OWC, use the
--spillpoint-owc option.
The first and second arguments to sprint_layering() and similar
functions define whether, and what sort of, volumetrics are to be
computed. The first argument is the base grid and the second is the
contact grid. In the simplest case, both the base grid and the
contact will be grid file names. However, more complicated
configurations are possible, as described below.
If $basegrid->{constant_base_depth} is defined, then this value is
used as a constant base depth.
If $basegrid->{base_grid_from_isochore} is defined, then
$basegrid->{isochore} is taken to be the filename of an isochore
grid.
If $basegrid->{base_grid_is_constant_isochore} is defined, then
$basegrid->{isochore} is taken to be a constant isochore value.
If the contact is specified as $contact->{constant_owc} = nnnn then
this will be used as the contact.
If the contact is specified in the driver script along with the
method array as $contact->[$t][$b][$m] = nnnnn then this will be
used as a constant OWC for this layer. A grid file name can also be
supplied in this way.
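A minimal sketch of some of these specifications, which would then be
passed as the first and second arguments to sprint_layering() (the
flag values of 1, the numbers and the file names are illustrative):
# Constant base depth with a constant OWC:
my $basegrid = { constant_base_depth => 2500 };
my $contact  = { constant_owc        => 2380 };
# Base grid derived from an isochore grid file instead:
$basegrid = { base_grid_from_isochore => 1,
              isochore                => 'reservoir_isochore.grd' };
# Base grid built from a constant isochore value:
$basegrid = { base_grid_is_constant_isochore => 1,
              isochore                       => 150 };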
If $options->{cross_validate} or the --cross-validate command-line
option is set, the run is modified to drop out wells and then
predict them. If a number is specified, then that number of wells is
dropped (if 1 is specified, a deterministic loop over all wells is
used; if more than 1, a Monte Carlo selection of the wells to be
dropped is used).
If the text string "specified" is used, this indicates that the list
of wells to be dropped is given in a file. This file is specified as
$options->{cross_validate_specified} and cannot be given on the
command line. In this case, only one run is performed. Care must be
taken to ensure that in the situation where multiple well-horizon
intersections occur, the SPRINT technique of appending asterisks to
the well names does not cause problems. The suggested solution is to
ensure that all well names in the input files are unique.
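For example (a sketch; the file name shown is illustrative):
# Drop one well at a time, looping deterministically over all wells:
$options->{cross_validate} = 1;
# ... or drop exactly the wells listed in a file, as a single run:
$options->{cross_validate}           = "specified";
$options->{cross_validate_specified} = "wells_to_drop.txt";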
If a file called "cross_validation_supplement.txt" exists, then it is assumed
to be a renamed cross_validation_stats.txt file from a previous run and is
used to set the weighting factors for the --not-montecarlo option for methods
which do not already exist in the current run.
To ensure statistically valid results, a radius can be specified
using --validation-radius; wells within this distance of an
excluded well will also be dropped.
Setting $options->{cross_validation_weights}{well_name} causes a
weighted sum to be used in the RMS calculation. Distances from a
field location can be computed using a Perl one-liner, e.g.:
perl -F, -ane 'next if /^X/; $F[3] =~ s/ /_/g; $d = sqrt(($F[0]-3561125.56)**2 + ($F[1]-5855129.57)**2); print "$F[3] $d\n"' 08_TVDSS_all_xyz.csv > distance_from_Field.txt
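The resulting distances could then be loaded as weights in the driver
script, for example (a sketch; the inverse-distance weighting shown
here is purely illustrative):
open my $fh, '<', 'distance_from_Field.txt' or die $!;
while (<$fh>) {
    my ($well, $distance) = split;
    # Weight each well by inverse distance (an illustrative choice)
    $options->{cross_validation_weights}{$well} = 1 / ($distance + 1);
}
close $fh;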
A grid of k values can be used if the grid is read in by the driver
script.
... documentation incomplete
$velocity[$hrz]{min_vint} : Clip the minimum resulting Vint to this
$velocity[$hrz]{max_vint} : Clip the maximum resulting Vint to this
$velocity[$hrz]{isochron_min_threshold} : Value in msTWT; if the
isochron is thinner than this then exclude this well from the
computations
$velocity[$hrz]{vint_min_threshold} : Exclude wells whose interval velocity is slower than this
$velocity[$hrz]{vint_max_threshold} : Exclude wells whose interval velocity is faster than this
$velocity[$hrz]{fault_shadow_max_isochron}: If this value is
specified, then the isochron will be used as a mask to delete the
average velocities where the isochron is thinner than this
value. (Depending on the setting of fault_shadow_erode_nodes, this
deleted area can be expanded). The deleted area will then be
in-filled, and the filled average velocity grid will be used to
re-depth-convert. Following this, a revised interval velocity grid
will be computed. This process occurs *after* the min_vint and
max_vint thresholds are applied, so it is possible (probable) that the
resulting interval velocity grid will contain values outside the
specified range.
$velocity[$hrz]{fault_shadow_erode_nodes}: This value only has any
effect if fault_shadow_max_isochron is specified. It serves to expand
the deleted area. If any node on the average velocity grid is within n
nodes of a deleted node, it will also be deleted.
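A sketch of how these per-horizon quality controls might be set in
the driver script (all numeric values are illustrative):
$velocity[$hrz]{min_vint}                  = 1500;  # clip slow interval velocities
$velocity[$hrz]{max_vint}                  = 5500;  # clip fast interval velocities
$velocity[$hrz]{isochron_min_threshold}    = 10;    # drop wells where the isochron is thinner than 10 msTWT
$velocity[$hrz]{fault_shadow_max_isochron} = 20;    # mask and in-fill Vavg where the isochron is < 20 msTWT
$velocity[$hrz]{fault_shadow_erode_nodes}  = 2;     # expand the masked area by 2 grid nodes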
$velocity[$hrz]{error_correct}: Force this horizon to tie to the
wells (otherwise error correction is only done for the final horizon).
$velocity[$hrz]{error_correct_threshold}: Correct the depth map so that the
remaining errors are less than this threshold. The value is expressed as a
percentage of the depth; typical values would be between 2 and 5. This parameter
only has an effect when error correction is occurring anyway (due to
$velocity[$hrz]{error_correct} being specified, or for the last layer). Note
that using this parameter does not enforce a tie, it simply reduces the
residual to be less than the threshold.
$velocity[$hrz]{keep_zero_thicknesses}: Ensure that the error
correction does not separate areas which have zero isochron, and does
not create negative thicknesses.
$velocity[$hrz]{v0}: Vo constant value or grid to be used when appropriate
$velocity[$hrz]{fixed_k}: Fixed k value to be used when appropriate
$velocity[$hrz]{minimum_k}
$velocity[$hrz]{maximum_k}: Limits for automatic Vo,k determination
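For example (illustrative values, assuming metric units):
$velocity[$hrz]{v0}        = 1800;   # constant Vo (a grid file name could be used instead)
$velocity[$hrz]{fixed_k}   = 0.35;   # fixed k
$velocity[$hrz]{minimum_k} = 0.1;    # limits for automatic Vo,k determination
$velocity[$hrz]{maximum_k} = 1.0;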
$velocity[$hrz]{v0_referenced_to_other_horizon}: Used when the
supplied Vo map is referenced to another horizon rather than zero
$velocity[$hrz]{intercept}
$velocity[$hrz]{gradient} : For a method which requires running a
regression, use these parameters rather than actually running the
regression.
$velocity[$hrz]{isochore} : For the "supplied isochore" depth
conversion method, this is the filename of the isochore grid.
$velocity[$hrz]{depth} : For the "supplied depth" depth
conversion method, this is the filename of the depth grid.
Geostatistical parameters:
$velocity[$hrz]{variogram_model}->{variogram_range} : variogram range
for this horizon
$velocity[$hrz]{variogram_model}->{sk_mean} : Use simple kriging and
force the mean to this
$velocity[$hrz]{variogram_model}->{azimuth} : Use geometrical
anisotropy; this specifies the angle, from North, clockwise, in
degrees, of the major axis (the major axis length is
variogram_range). If this value is not specified, a circular variogram
model will be used.
$velocity[$hrz]{variogram_model}->{minor_range} : Length of the minor
axis (either this or the anisotropy ratio can be specified)
$velocity[$hrz]{variogram_model}->{anisotropy} : Ratio (between 0 and
1) of the major and minor axis lengths
$velocity[$hrz]{variogram_model}->{model} : Variogram model type. Any
of the types in gstat can be used. Common ones are 'Sph' for
spherical and 'Exp' for exponential.
$velocity[$hrz]{residual_variogram_model} : Same bits as
variogram_model; apply to error correction when done per-layer.
$velocity[$hrz]{residual_variogram_model_final} : Same bits as
variogram_model; apply to error correction when done at the very end.
Overrides residual_range. Must be in method 0 if using more than one
method.
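A sketch of a full anisotropic variogram specification (all values
are illustrative):
$velocity[$hrz]{variogram_model} = {
    model           => 'Sph',   # spherical model
    variogram_range => 5000,    # major axis length
    azimuth         => 45,      # major axis direction, degrees clockwise from North
    anisotropy      => 0.5,     # minor axis is half the major axis (or give minor_range instead)
};
The residual_variogram_model and residual_variogram_model_final
hashes take the same keys.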
$velocity[$hrz]{vint_map} : replacement for the @vint_map array in the
calling parameter line. Used to pass external drift grids to the geostatistics,
for use with krige_vint_to_tie_with_external_drift and friends.
This is a file name, not a grid object.
If the $contact parameter is specified as
$contact[$hrz] = nnnnn; # Contact depth as a number
... then the following parameters can optionally be used to invoke the
spillpoint algorithm (the out2 parameter is also available if there is
more than one potential spillpoint):
$velocity[$hrz]{spillpoint}{in}[0] = 2342345; # X coordinate of point within structure
$velocity[$hrz]{spillpoint}{in}[1] = 62342345; # Y coordinate of point within structure
$velocity[$hrz]{spillpoint}{out}[0] = 3342345; # X coordinate of point outside structure
$velocity[$hrz]{spillpoint}{out}[1] = 63342345; # Y coordinate of point outside structure
$options->{polygon_file} = "Polygonfile" : Use a polygon file when
creating pictures of grids in reports. The polygon file format is
similar to that used by Velit, CPS3 and others, but it must have the
line
-65535.0 -65535.0 0 0
between each polygon segment.
$velocity[$hrz]{shift_by_average}: Normally the error correction tends
to zero away from the wells (Simple Kriging with a mean of zero is
used). If this parameter is set, then the error correction grid will
instead tend to the average of all the wells (Ordinary Kriging is
used).
$velocity[$hrz]{shift_vavg_then_error_correct}: Useful for velocity
data from seismic processing etc.: back out the average velocity from
the grids and from the wells, and bulk-shift the average velocity
grid to match the mean of the well average velocities, before
re-depth-converting. This should ensure that the residual depth
errors have a mean close to zero.
The arrays listed above have an extra dimension, so they are of the
form $velocity[$top][$base]{feature}. All possible layering
combinations using the supplied data will be computed, in order.
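For example, to force the layer whose top is horizon 2 and whose base
is horizon 3 to tie at the wells (the indices and parameter shown are
illustrative):
$velocity[2][3]{error_correct} = 1;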
The arrays are as for sprint_layering; however, the layering choice
and method choice are chosen randomly for each iteration. There is
currently no Latin hypercube sampling, so depth conversions using
about n_horizons/2 layers will be chosen much more often than those
using all horizons or a single horizon, because each layering
combination has an equal probability of being chosen.
This entry point is provided to ease the task of building driver
scripts post-cross-validation. All that is required is a list of
time horizons, and a cross-validation results file. The methods are
run in order, from best to worst.
--starting-iteration=n set starting iteration (default is 0)
--end-iteration=n set end iteration (default is the last one)
--force force recalculation of specified iterations
--iteration-report output a report (iteration_report.csv) on methods / layers for each iteration, without doing any calculations
--depth-statistics output mean, standard deviation, min, and max grids for the corrected and uncorrected base depth grids
--depth-stats-checkpoint=n output the depth stats every n, just in case
--spillpoint perform the spillpoint calculations and output mask grids
--spillpoint-sum compute the sum of the spillpoint mask grids
--spillpoint-checkpoint=n output the checkpoint sum every n, just in case
--spillpoint-flexed-depth use the error corrected depth for the spillpoint analysis (default is to use the uncorrected depth)
--spillpoint-owc=n use the specified OWC value in volumetrics, rather than assuming the structure is full-to-spill
--spillpoint-statistics output a summary of the spillpoint calculations to spill_statistics.csv
--depth-stdev-threshold=n only do the depth stats / spillpoint if the stdev of depth errors is less than n
--report=n output an html report on iteration n to report.html.
--report-all output an html report on all the iterations run. Implies skip-if-locked and skip-if-done-already
--report-skip-if-done-already Normally, specifying a report forces the computation to be re-done. This option makes it possible to skip iterations for which the report has already been run.
--output-intermediate-files save all the intermediate maps to disk (this includes the depth maps plus all the velocities etc.)
--output-intermediate-maps save all the intermediate maps to disk (this includes the depth maps plus all the velocities etc.)
--nooutput-intermediate-maps Don't.
--output-intermediate-depth-maps save all the intermediate depth maps to disk
--nooutput-intermediate-depth-maps Don't.
--threshold=n set the grid paging threshold. Default is 200.
--overnight only run the numbercrunching overnight
--cross-validate=s Cross-validate by dropping out up to n wells. If the value "specified" is used then all the wells listed in the file given in --cross-validate-specified will be dropped out at once, as a one-shot.
--cross-validate-specified=s Cross-validate wells listed in this file. File has one well per line.
--validation-radius=n Drop out wells which are within this distance of each other as a cluster
--xval-rms-threshold=n only do this iteration if the RMS error from the cross validation is less than this value. Note that this flag should not be used at the same time as the --cross-validate flag; use --cross-validate first to build up the statistics, and then run with this flag to exclude poor realisations.
--xval-threshold-corrected use the RMS of the corrected values in thresholding. Default is to use the uncorrected values
--cross-validation-stats update the stats file and then exit
--cross-validation-max-runs=n stop after this number of cross-validation runs (one run is a complete dropping-out set of several iterations)
--max-rms Same as --xval-rms-threshold, but when using the entry point sprint_best_from_crossvalidation
--skip=n Skip this number of iterations when the current one is locked. Only useful when using more than one CPU.
--unlock Forcibly unlock everything. Do not use this if running on more than one CPU
--output-final-vavg-map Output a map of average velocity to the error-corrected final horizon. Mainly for use with Pete's Sausage machine.
--nooutput-final-vavg-map Don't
--restart Delete the timestamp file and start from the beginning again (otherwise, results are cached on disk). Use this when the input data have changed.
--fill-time-horizons If an input time horizon is undefined anywhere, use values from the previous time horizon to fill in. Useful for tilted eroded reservoir sequences, or large-throw faults.
--make-conformal Ensure there are no negative isochrons. Where the horizons cross, truncate the deeper against the shallower.
--verbose=n How verbose to be. Value between 0 and 9 inclusive. 0 means no output at all. 5 is normal. 9 is extra debug.
--output-wedge Output a wedge-grid for each horizon_iteration combination.
--volumetrics Run volumetrics. Only use this when the deepest time horizon is the Top Reservoir horizon. If a Base Reservoir time horizon exists then per-layer volumetrics using the contact array need to be used.
--min-layering-layers When running different layering scenarios, ignore cases which have fewer than this number of layers.
--max-layering-layers When running different layering scenarios, ignore cases which have more than this number of layers.
--not-montecarlo Use heuristics based on cross_validation_stats.txt to choose methods, rather than pure monte-carlo, when cross-validating
--weight-not-monte=n Weighting factor for non-montecarlo crossvalidation method selection
--power-min-not-monte=n ditto
--power-count-not-monte=n ditto
--multiplier-not-monte=n ditto
--unused-multiplier=n ditto
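As an illustration, a driver script (here called run_sprint.pl, a
hypothetical name) might be invoked as:
perl run_sprint.pl --cross-validate=1 --validation-radius=500 --verbose=5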