Best-Fit Computing - COLUMBUS Network Adjustment Software



Quick Tips for Using COLUMBUS

Input Files


Observation and Station descriptions within your COLUMBUS input file

You can add observation descriptions (feature coding) to your COLUMBUS input files. Observation descriptions allow you to more easily identify duplicate observations (or observation sets) measured between the same stations.

To add an observation description, place the following keyword (with description) just before the observation definition in the file. Observation descriptions can be up to 60 characters in length.

      $OBS_DESC; Bearing taken from plat 43, 1947
      $BEARING_ONLY; 1; 2; 0.24000000; 15.000; NE


      $OBS_DESC; Poor visibility at this site
      $AZ_COMPACT; 5010; TIN CUP; NOOBS; NOOBS; 89.36158000; 0.500; 39879.30000; 0.30000; 0.00000; 0.00000

Station descriptions continue to be supported in the old multi-line format ($STADESC) as well as the new single-line format ($STA_DESC) shown below. Both can now be up to 60 characters in length and can occur anywhere in the file.

Sample for station 'PEAR RIDGE'

      $STA_DESC; PEAR RIDGE; Sits high on a ridge - good visibility



Commenting out whole sections of your COLUMBUS input file

You can comment out entire sections of your COLUMBUS input files using the $BEG_SKIP and $END_SKIP keywords.

To comment out a section of your input file, begin the section with $BEG_SKIP and end it with $END_SKIP:

$BEG_SKIP
      Lines to be skipped
      Lines to be skipped
      Lines to be skipped
      Lines to be skipped
$END_SKIP

You can quickly re-enable the skipped lines by placing an exclamation point (the comment character) in front of the two keywords:

!$BEG_SKIP
      These lines will now be parsed and the data loaded into COLUMBUS
      These lines will now be parsed and the data loaded into COLUMBUS
      These lines will now be parsed and the data loaded into COLUMBUS
      These lines will now be parsed and the data loaded into COLUMBUS
!$END_SKIP

You can also nest your comment blocks using these two keywords:
$BEG_SKIP
      Lines to be skipped
      Lines to be skipped
      Lines to be skipped
      Lines to be skipped


$BEG_SKIP
      Lines to be skipped
      Lines to be skipped
      Lines to be skipped
      Lines to be skipped
$END_SKIP


      Lines to be skipped
      Lines to be skipped
$END_SKIP



Single observation input file formats

You can define many observation types individually within the COLUMBUS ASCII (Text) input file. Alternatively, you can use the observation set format (for example, horizontal angle, zenith angle, chord distance, and so on) and set any unused observation fields to "NOOBS."

The single observation formats for Azimuths, Directions, Bearings, Horizontal Angles, Zenith Angles, Horizontal Distances, Chord (slope) Distances and Height Difference observations are shown below. Additional samples of these formats can be found at the bottom of some of the demo *.TXT input files shipped with COLUMBUS.

! AT Station Name; TO Station Name; Azimuth; Azimuth SD; Instr Hgt; Targ Hgt
$AZIMUTH_ONLY; BENTLY; 5010; 149.49170000; 1.000; 0.00000; 0.00000


! AT Station Name; TO Station Name; Direction; Direction SD; Instr Hgt; Targ Hgt; Set Number
$DIRECTION_ONLY; 20; 21; 92.49510000; 6.000; 0.00000; 0.00000; 1


! AT Station Name; TO Station Name; Bearing; Bearing SD; Quadrant
$BEARING_ONLY; 1; 2; 0.24000000; 15.000; NE


! AT Station Name; TO Station Name; BS Station Name; Hor Angle; Hor Angle SD; Instr Hgt; Targ Hgt
$HORIZ_ANGLE_ONLY; 5010; BEND; TIN CUP; 148.48430000; 1.700; 0.00000; 0.00000


! AT Station Name; TO Station Name; Zenith; Zenith SD; Instr Hgt; Targ Hgt
$ZENITH_ONLY; 5010; TIN CUP; 89.36158000; 0.500; 0.00000; 0.00000


! AT Station Name; TO Station Name; Hor Dist; Hor Dist SD
$HORIZ_DIST_ONLY; 1; 2; 810.10520; 0.20120


! AT Station Name; TO Station Name; Chord (Slope Dist); Chord SD; Instr Hgt; Targ Hgt
$CHORD_ONLY; 5010; TIN CUP; 39879.30000; 0.30000; 0.00000; 0.00000


! AT Station Name; TO Station Name; Hgt Diff; Hgt Diff SD
! Note: This has always been a single observation record
$HGTDIFF_COMPACT; BEND; BENTLY; 399.63000; 0.03000

If you need to edit these observations from within COLUMBUS, enter the applicable Data | Observations grid:

  • $AZIMUTH_ONLY, $ZENITH_ONLY, and $CHORD_ONLY in the Azimuth Observation Set grid
  • $DIRECTION_ONLY in the Direction Observation Set grid
  • $BEARING_ONLY and $HORIZ_DIST_ONLY in the Bearing Observation Set grid
  • $HORIZ_ANGLE_ONLY in the Horizontal Angle Observation Set grid
  • $HGTDIFF_COMPACT in the Height Difference Observation Set grid



Loading two or more data files using the $INCLUDE_FILE keyword

You can organize and load all your common data files by using the $INCLUDE_FILE keyword.

Use this keyword in your file to include other files into the loading process. For example, when you load file A.txt, which includes file B.txt, both file A.txt and file B.txt will be loaded automatically.

Another, more tedious, way of performing this operation is to Open file A.txt, then Append file B.txt and each subsequent file. For large projects with dozens of input files, this method is cumbersome; using the $INCLUDE_FILE keyword is a far better approach.

SCENARIO

Your project consists of five files (file A.txt, B.txt, C.txt, D.txt, and E.txt), and you want to load them automatically by selecting only one file using the File | Open command.

SOLUTION

There are a number of possible solutions, but perhaps the easiest is the following: create a file, Master.txt, and include the five files within it:

! Top of file 'Master.txt' which is located in the path c:\stations\
$INCLUDE_FILE; A.txt
$INCLUDE_FILE; obs\B.txt
$INCLUDE_FILE; obs\C.txt
$INCLUDE_FILE; obs\D.txt
$INCLUDE_FILE; c:\gps\obs\E.txt
! Bottom of file 'Master.txt'

To load all these files, use the File | Open command and select file, Master.txt.

NOTES

  1. If the included file name is a relative file name (in other words, does not contain a full path), COLUMBUS will append the relative path provided to the path of file Master.txt to obtain the search path for this file. In the example shown above, the search path for file A.txt will become c:\stations\A.txt.

    For the next three files, the search paths will become c:\stations\obs\B.txt, c:\stations\obs\C.txt, and c:\stations\obs\D.txt.

    Because E.txt is specified with a full path, it will be searched for at c:\gps\obs\E.txt.

  2. Files that are included in the current file being loaded will not be loaded until the current file is completely loaded.

    In the above example, file Master.txt will be processed in its entirety before file A.txt, B.txt, C.txt, D.txt, or E.txt are loaded.

    Since stations should generally be loaded prior to the observations that reference them, be sure to structure your file loading accordingly (see the sketch below).
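
    For example, a hypothetical Master.txt could list the station file ahead of the observation files. The file names below are illustrative only, and this sketch assumes included files are loaded in the order they are listed:

    ! Hypothetical Master.txt: station data first, observation data afterward
    $INCLUDE_FILE; stations\Control.txt
    $INCLUDE_FILE; obs\Day1_Terrestrial.txt
    $INCLUDE_FILE; obs\Day2_GPS.txt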



Support for GPS baseline files

COLUMBUS supports baseline files from many GPS post-processing systems. In addition, many hardware vendors' software packages can create NGS (National Geodetic Survey) Bluebook files for their GPS vector data. You can convert these files directly into COLUMBUS-compatible input files using our Bluebook conversion program.

GPS systems directly or indirectly supported by COLUMBUS include:

  • Trimble Navigation - Baseline extraction from binary SSF, SSK files, and text-based Trimble Geomatics Office ASC files.

  • Ashtech - 'O' binary files created by Ashtech Solution software. Some non-Ashtech vendors also create this file type (e.g., *.OBN files).

  • Leica - SKI post-processing text files.

  • Topcon - With the Topcon Tools software, you can create Ashtech 'O'-compatible files and NGS Bluebook files; both file types are supported by COLUMBUS.

  • Sokkia - With the Sokkia post-processing software, you can create GeoLab "IOB" files. These files can be quickly converted to a COLUMBUS-compatible input file using our IOB Conversion Tool.

  • Thales Navigation - Ashtech 'O' binary files (may have a *.OBN extension).



Creating single-line observation set data files

COLUMBUS is pre-set to create single-line input files. However, should you change this setting, you can restore it by doing the following:

  1. Run COLUMBUS.

  2. From the File menu, select New.

  3. From the Options menu, select Save.

  4. Click the Compress ASCII check box, then click OK.

Whenever you save COLUMBUS data to an ASCII (Text) file (using either the File | Save or File | Save As command), the station and observation data will be written in the single-line, observation set format.
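
For reference, compact (single-line) observation set records look like the $AZ_COMPACT and $HGTDIFF_COMPACT samples used elsewhere in these tips, for example:

$AZ_COMPACT; 5010; TIN CUP; NOOBS; NOOBS; 89.36158000; 0.500; 39879.30000; 0.30000; 0.00000; 0.00000
$HGTDIFF_COMPACT; BEND; BENTLY; 399.63000; 0.03000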

Note: Regardless of whether the Compress ASCII setting is enabled, COLUMBUS will load data files in either the compressed format or the traditional expanded format.



COLUMBUS-compatible ASCII (text) input file sections

There are four primary sections to a COLUMBUS-compatible ASCII (Text) input file:

Datum, Units, Stations, and Observations

The four sections should be defined in the order shown above, and can be repeated as many times as needed. For example, if you want to put linear data in the file with some entries in meters and others in U.S. Feet, you can repeat the $UNITS section just above the point in the file where the applicable linear units change.

    The Datum section (denoted by the $DATUM keyword) identifies the datum with which to associate the data (which follows). If this section is not defined, then COLUMBUS will associate the loaded data with whatever datum is active in COLUMBUS at the time (specified in the Options | Datums dialog).

    The Units section (denoted by the $UNITS keyword) identifies the linear units, angular units and the format of DMS-type fields. If this section is not defined, then COLUMBUS will assume the linear units of the file data to be the same as the active linear units in COLUMBUS at the time (specified in the Options | Units dialog).

    The Station section (consisting of several keyword types; for example, $GEO, $GEO_COMPACT, $STATE_ELEV_COMPACT, and so on) is generally defined before any observations that reference these stations.

    The Observation section (consisting of several keyword types; for example, $HOR, $HOR_COMPACT, $GPS, $GPS_COMPACT, and so on) is usually defined last.
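
A sketch of this ordering is shown below. The $DATUM and $UNITS records are represented by placeholder comments only (their exact fields are omitted here; see Appendix A), and the two observation records are copied from the samples earlier in these tips:

! $DATUM record here (identifies the datum for the data that follows)
! $UNITS record here (linear units, angular units, DMS format)
! Station records next (for example, $GEO or $GEO_COMPACT definitions)
! Observation records last, for example:
$HGTDIFF_COMPACT; BEND; BENTLY; 399.63000; 0.03000
$AZIMUTH_ONLY; BENTLY; 5010; 149.49170000; 1.000; 0.00000; 0.00000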

COLUMBUS also supports numerous additional codes to scale applicable observations, define default observation standard deviations and define instrument centering errors.

For detailed information regarding the COLUMBUS input format, see Appendix A in the User Manual.



Declaring an observation set with only partial data

COLUMBUS organizes data into observation sets. Each set is a logical grouping of observations that may be measured in the field for each setup. For example, while in the field you may set up at pt. 1000, backsight pt. 2000, then foresight pt. 3000. In the process you will probably measure the horizontal angle, zenith angle and slope (chord) distance. These three observations then make up a set.

Some software packages only allow one observation per set. In other words, you need to re-supply the AT station, TO station and instrument/target height for each measured observation. In some cases this is convenient, but often it is redundant.

However, there are times when you occupy a setup and you only measure one observation (for example, a slope distance). To define an individual observation (or partial observation set) within the COLUMBUS ASCII (Text) data file, simply replace unmeasured fields (and their standard deviation field) with the text "NOOBS" (no observation measured).

Sample: Define a compact observation set in which only the azimuth and chord distance were observed (no zenith angle), along with the instrument and target heights from station "AAA" to station "BBB" (assume linear quantities are in meters):

$AZ_COMPACT; AAA; BBB; 70.304; 5.0; NOOBS; NOOBS; 1700.5; 0.025; 1.5; 1.0
Where the above values are:
      Azimuth: 70-30-40.000 (SD 5.0 seconds)
      Chord distance: 1700.50 m (SD 0.025 m)
      Instrument height: 1.50 m
      Target height: 1.00 m
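
Following the same rule, here is a sketch (an assumption based on the field order shown above) of a set in which only the zenith angle was measured, again with a 1.50 m instrument height and a 1.00 m target height:

$AZ_COMPACT; AAA; BBB; NOOBS; NOOBS; 89.36158000; 0.500; NOOBS; NOOBS; 1.5; 1.0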



Opening more than one file into a COLUMBUS project

Often, you will want to create separate data files and then load them all into COLUMBUS, effectively merging the data into one project. There are many reasons for doing this, including:

  1. Keeping the GPS and terrestrial observation data in different files.

  2. Having multiple GPS or terrestrial data files with different linear units, centering errors, default standard deviations, and so on, defined at the top of each file.

  3. Taking the work for each day and easily merging it with the current master project.

COLUMBUS makes this operation simple. Let's say you have three COLUMBUS input files:

  • a GPS vector file
  • a terrestrial observation file using U.S. Survey Feet as the linear units
  • a terrestrial observation file using Meters as the linear units

In the latter two files, you would specify the units at the top of each file using the $UNITS record. The GPS vector file would typically include a $UNITS record as well, most likely specifying meters.

You now want to load all three files into COLUMBUS and perform a network adjustment. All you need to do to merge the data is the following:

  1. Run COLUMBUS.

  2. From the File menu, select Open and load one of the three files.

  3. From the File menu, select Append and load the remaining two files.

All your data is now combined into one project. If you were to save the data out to a new file by selecting Save As from the File menu, all data would be put into that file for loading at a later time. If you prefer, simply leave the files as separate entities, and follow the steps above each time you want to work with them as a whole.

NOTE: Be sure you use consistent station names across all files. In other words, if station "HAT" is defined in one file, that same station must be "HAT" and not some other name (for example, "HATT") in any other files. Otherwise, COLUMBUS will treat them as completely different stations.



Setting up COLUMBUS input file templates for different instrument types

COLUMBUS supports a wide range of file variables which can be used to create file input templates for your projects. These variables can be used to define default observation standard deviations, instrument/target centering errors, linear observation scalers, and so on.

Example:

Suppose you have two different total stations, each with its own inherent measuring errors. When you use instrument 'A', you want to set observation standard deviations one way and when you use instrument 'B', you want to set them another way. To do this, you can create two template input files: one for instrument 'A' and one for instrument 'B'. At the top of each file, you would define the default standard deviations for your observation types (assume horizontal angles, zenith angles, and chord (slope) distances). For this example, assume the active linear units are set to Meters and angular units are set to Degrees.

!Top of 'A' File Template - ! Denotes a comment line

!Horizontal angle SD of 5.0 seconds
$G_HOR_SD
5.0


!Zenith angle SD of 10.0 seconds
$G_ZEN_SD
10.0


!Chord (slope) distance SD of 0.0025m
$G_CRD_SD
0.0025


!Define all project specific instrument 'A' observations below here

!Top of 'B' File Template - ! Denotes a comment line

!Horizontal angle SD of 2.0 seconds
$G_HOR_SD
2.0


!Zenith angle SD of 4.0 seconds
$G_ZEN_SD
4.0


!Chord (slope) distance SD of 0.0015m
$G_CRD_SD
0.0015


!Define all project specific instrument 'B' observations below here

When working with instrument 'A', simply make a copy of the 'A' template, then define all the observations below the default standard deviation definitions. Do the same when using instrument 'B'. The default standard deviations defined at the top of each file will override the individual observation standard deviations defined within each observation set record.

If your project uses both instrument 'A' and instrument 'B' in the same network, load the first file ('A' file) using the File | Open command. Load the second file ('B' file) with the File | Append command. Both files will be automatically merged into one project.

You can make this as simple or as complete as you like. You can even change the default observation standard deviation values midway through the file to affect observations below that point, or turn off the defaults by setting them to zero midway through the file, as shown in the sketch below.
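
For example, here is a sketch (using the same keyword as above, and assuming a zero value disables the default as described in this tip) that turns off the default horizontal angle standard deviation partway through a file:

!Observations above this point use the default horizontal angle SD defined earlier
!Turn off the default horizontal angle SD for the observations below this point
$G_HOR_SD
0.0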





