ASSET 5.0 Model Tuning Guide v1.0
RADIO ENGINEERING SOLUTIONS
How-To Guide for Model Calibration Summer 2004
Introduction
This document outlines and provides selected details about building an empirical model in ASSET using drive test data. Data collection is not extensively discussed, as its procedures vary with the equipment used; in general, it is assumed that DTI equipment is used to collect BCCH data on a live network. An overview of the various data collection methodologies, data filtering guidelines, and acceptable data formats is presented. The document illustrates how to navigate between the various modules of ASSET and offers tips on various user-dependent options. It provides general guidelines to help an ASSET user better calibrate the propagation model, but it does not address every possible approach to model calibration.
CW Measurements
Traditionally, CW field measurements are carried out using a spectrum analyzer, which measures the output of a test transmitter producing a continuous wave (CW) output at the desired frequency and output power. This document does not discuss traditional CW-type drive testing, but the data preparation, import, and analysis are essentially the same. In carrying out CW-type measurements, the engineer has full control of the transmit facility and knows with great certainty the site power and antenna parameters. Most often this is an omni antenna, so azimuth and downtilt become irrelevant. Unfortunately, in a live network there may be some errors associated with the site databases used for this work. While CW measurements may only involve 2 or 3 site locations (and 1-5k sample points), BCCH measurements can utilize as many site locations as time permits, and the number of sample points can be magnitudes larger (100-200k). With the large diversity of site locations that may be used, it will be more difficult to achieve the traditional error limit of 8 dB (std. dev.). If this is a limiting factor for your work, reduce the number of sites used in the analysis.
Live System/BCCH Measurements
The need to carry out measurements on the modulated Broadcast Channel (BCCH) arises from the long setup time involved in CW measurements and from the large overhead of data collection over repeated routes in the same location. Modulated BCCH measurement involves using a scanner that carries out fast multiple-frequency scanning and is also able to decode the Base Station Identity Code (BSIC) and the Transmitter ID. The scanning is carried out on LIVE networks and does not use up system resources. The scanner scans all the frequencies that are used as a Broadcast Channel and logs the position, the frequency, the BSIC, and the Transmitter ID. The major advantage of this method is the near-nil setup time and the ease of data collection. This enables data collection over many sites, and hence a more accurate model calibration. There is also the flexibility to choose any site for model tuning, even after the data collection is completed. There are a few disadvantages in carrying out modulated BCCH measurements for model calibration:
• The most prominent disadvantage is the use of directional antennae with very narrow vertical beamwidths and appreciable vertical downtilts. This tends to distort the radiation pattern of the antenna, which has a significant effect on the model developed.
• In dense urban areas, antennae are often below the surrounding clutter, with the boresight of the antenna pointing down the street. This leads to tunneling of the signal along the street, with a very high roll-off of signal strength on streets perpendicular to the main street.
• Calibrating this type of propagation is very difficult, if not impossible, using a slope-intercept model. Also, modeling the data collected in the back lobe of the directional antenna is very difficult and tends to introduce error into the model. This problem can be addressed by using appropriate antenna filtering, i.e., a filter to exclude points outside the 3 dB beamwidth of the antenna.
There are advantages and disadvantages to each method.
Post-Processing of Scanned Modulated BCCH Data
Post-processing of the data involves assigning each measurement of a particular frequency to its respective transmitter using the unique BSIC–BCCH–Transmitter ID combination. For measurements in which the BSIC and/or the Transmitter ID are not decoded, the assignment is made on the basis of knowledge of the site location, the EIRP of the cell, the antenna pattern, the antenna height, and basic propagation fundamentals. Post-processing requires the complete site database, which must contain the individual cell IDs, the parent site ID, location information in latitude and longitude, BCCH, BSIC, and Transmitter/Cell ID. Also required are the antenna type, antenna height, and EIRP. When using the DTI Clarify product, the post-processed output will be in a form similar to that shown below:

Longitude       Latitude      Drive_Number  Sector_Rx_Power  C_I         BER       RXQUAL
-74.20389187    40.52577304   61            -99.3            16.32775    2.468901  4
-74.21020485    40.52476605   61            -112.7           -0.4478591  33        7
-74.21688479    40.52201996   61            -110.36          -0.9116459  33        7
-74.20082879    40.52710476   61            -83.99           27          0.19      0
-74.19245104    40.53154366   61            -51.69           27          0.19      0
-74.18919493    40.53318964   61            -54.8            27          0.19      0
-74.17711683    40.5362465    61            -91.71           27          0.19      0
This data is produced, on a per-sector basis, in MS Access format (*.mdb). In order to import it into ASSET, it must be edited (with a text editor or, more simply, MS Excel) and put into a form that ASSET will recognize. Although there are several formats that ASSET can read, a common one for model tuning is the Signia format.

The Signia Format
The Signia format is used because it is convenient and easy to create (MS Excel is the most likely editor). Two files are needed for Signia: a Header (*.hd) file and a Data (*.dat) file. The Header and Data files are linked by an identical file name.
Hint: The header and data file must be in the same folder, and the folder can have any path.
Header Files (*.hd)
A Header file is a tab-delimited text file containing information regarding the individual cells. Although there are many fields, only a few of them are critical for model tuning.
Hint: If the header file does not load, check the format, spacing, and EOF marker (carriage return) for errors. Remove any tags or units (degrees, feet, etc.) from the input.
Hint: Be sure the ANTENNA_TYPE is located in the antenna database file, or an error message will be generated when loading. TX_HEIGHT should be in meters, and TX_POWER in dBm. TX_POWER is the site EIRP, not the hatchplate power.
Hint: To facilitate file management, make the SITE_ID, the header file name, and the data file name identical.
Data Files (*.dat)
The Data file is a tab-delimited text file containing the longitude, latitude, and RSSI of each cell as measured by the receiver device. Decimal Lat-Long (DLL) formatting is required, and each line represents one measurement location. There is no limitation to the number of measurement points in a Data file; however, if MS Excel is used as the text editor, there will be a limit of 65k points. The columns are:

Longitude (DLL)    Latitude (DLL)    Received Signal Strength Indicator (RSSI), dBm

[Figure: example Data file, showing the column tags followed by single data entries, one per line.]
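As an illustration of this conversion step, the short Python sketch below writes a tab-delimited *.dat file in the column order described above. The sample values are taken from the Clarify example earlier in this document and the file name is illustrative; the exact header (*.hd) field layout is not reproduced here, and whether ASSET expects a tag line in the *.dat file should be checked against the Signia format documentation.

```python
# Minimal sketch. Assumption: per-sector samples have already been exported
# from the Clarify *.mdb as (longitude, latitude, rssi_dbm) in decimal degrees.
samples = [
    (-74.20389187, 40.52577304, -99.3),
    (-74.21020485, 40.52476605, -112.7),
]

def write_signia_dat(path, rows):
    """Write a tab-delimited Signia Data (*.dat) file: Long, Lat, RSSI per line."""
    with open(path, "w") as f:
        for lon, lat, rssi in rows:
            f.write(f"{lon:.8f}\t{lat:.8f}\t{rssi:.1f}\n")

# The data file name should match the header (*.hd) file name, e.g. DN03504C.hd
write_signia_dat("DN03504C.dat", samples)
```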
Loading Drive Data into ASSET
The drive data is loaded from the main menu: 'Tools' -> 'CW Measurements Analysis'. This opens a pop-up window as shown below.
The callouts on the screenshot correspond to the window's controls:
• Add/Remove - adds or removes individual sector drive data files.
• Info - displays the sector information contained in the header (*.hd) file.
• Options - pops up the Filtering and Model Selection window.
• Graph - pops up a graph window for RxLev vs. distance, or mean error vs. distance.
• Analyse - begins regression curve fitting and prompts the user for the Error Report Type.
• Autotune - pops up the AutoTuner window, showing the initial parameter set.
• RANOPT control - not applicable here; for use with the RANOPT optimization tool.
Hint: When a sector drive file is added, the user is prompted for 'Bin Averaging', which averages all the samples found within a map bin. This feature is usually not selected, but it may be applicable for drives with a high number of samples, such as in a Dense Urban area where the test vehicle was moving very slowly.
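The bin-averaging idea can be sketched as follows. This is an illustrative sketch only: the bin width and the direct averaging of dBm values are assumptions, and ASSET's own implementation follows the loaded map grid and may average differently.

```python
import math
from collections import defaultdict

def bin_average(samples, bin_size_deg=0.00025):
    """Average the RSSI of all samples falling in the same lat/long bin.

    samples: iterable of (lon, lat, rssi_dbm) tuples.
    bin_size_deg is an assumed bin width in degrees (roughly 25 m of latitude).
    """
    bins = defaultdict(list)
    for lon, lat, rssi in samples:
        key = (math.floor(lon / bin_size_deg), math.floor(lat / bin_size_deg))
        bins[key].append(rssi)
    # Return one averaged sample per bin, placed at the bin centre.
    return [((kx + 0.5) * bin_size_deg, (ky + 0.5) * bin_size_deg, sum(v) / len(v))
            for (kx, ky), v in bins.items()]
```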
Add/Remove Buttons
These add or remove individual sector data files for analysis. The loaded sectors and their file path/name are shown in the main window. The Add button pops up a standard Windows file browser to add a file.
Info Button
Once a sector file is loaded, the site information pertaining to that cell can be reviewed. This includes the location of the test site, output power, antenna height, cable and connector types and losses, and the antenna type. If there is cause or need to edit this information, it can be done through this window.
Hint: Most often, site parameters such as FEEDER_LENGTH, TYPE, CONNECTOR_LOSS, etc. may not be known. Insert the site EIRP value as the transmit power (TX_POWER) value and zero the cable and connector losses.
Hint: If site parameter changes are made in these windows, the changes will be applied, but not committed. You must manually change the header file if you want any parameter changes to be permanent.
Options/Filter Tab
This window provides filtering options that the user may wish to employ, depending on the task involved. Distance, signal level, Line-of-Sight, and Antenna Filtering are shown. Also given is the option of removing specific data points assigned to particular clutter types. More on the usage of these filters is given in the Tuning and Analysis section of this document.
Graph Button
The tool will also produce a graph of the sample data vs. distance. This graph shows a numerical intercept and gradient value for the data, but does not typically give useful insight for calibration. A graph of mean error vs. distance is also available.
Analyse Button
Generating an analysis of the chosen base model versus the actual data points produces the Initial Statistics for the data loaded. The 'Analyse' button prompts for Analysis Report Options, which are chosen based on the user's needs. (In the report options window, the Display Mode is only valid for the Bin Info Report; the Analysis tab is used to generate the Initial Statistics.)
Note: The report can be generated in either MS Excel (the best option) or any text editor. The following shows examples of the report options generated upon completion of an analysis.

File Summary - provides a summary on a per-file or per-sector basis (i.e., per drive test). The file summary identifies the various sites loaded into the system for analysis, along with a site-wise breakdown of the data points (Num. Bins) collected for each sector, the Mean Error, the Root Mean Square Error (RMS Error), the Standard Deviation (Std. Dev. Error), and the correlation coefficient (Corr. Coeff). It helps the user assess the model on a site-by-site basis and also helps the user, if required, reclassify certain sites under a different morphology class.

Example File Summary
Site ID    Site Name   Num. Bins   Mean Error   RMS Error   Std.Dev. Error   Corr. Coeff.
DN03504C   DN03504C    923         -15.6        18.8        10.5             0.8272

Overall Summary - gives an overall summary of all drive files loaded. The overall summary provides the combined statistics of how the model compares with the collected data. The values provided in the overall summary are the key figures by which the model is evaluated.

Example Model Summary
Model            Num. Bins   Mean Error   RMS Error   Std.Dev. Error   Corr. Coeff.
DEN_SU_1004_v1   923         -15.6        18.8        10.5             0.8272
Clutter Summary - gives a breakdown of error based on clutter type. The clutter summary provides a clutter-wise distribution of mean error and standard deviation. This table is very useful for tuning the clutter parameters.

Example Clutter Summary
Clutter                  Num. Bins   Mean Error   RMS Error   Std.Dev. Error   Corr. Coeff.
Forest                   13          -13.0        14.5        6.8              0.9117
OpenLand                 373         -20.8        23.2        10.3             0.7280
LowDensityBuilding       281         -8.6         10.7        6.4              0.8976
MediumDensityBuilding    204         -13.0        15.8        9.0              0.7547
Transportation           52          -27.0        27.8        6.9              0.2984
• Num. Bins - the number of RSSI samples within the sector file, broken down by clutter class.
• Mean Error - the calculated mean error between the measured and predicted values. A negative value indicates the model is underpredicting.
• RMS Error - the root-mean-squared error; a measure of the overall magnitude of the error between the measured and predicted values (it includes the effect of any bias).
• Std. Dev. Error - the classic measure of 'goodness' in model tuning; a measure of the spread of the error about its mean.
• Correlation Coefficient - between 1.0 and -1.0, a statistical measure of the degree of linear relationship between the measured and predicted values, or how well the sample points fit the model curve. The higher the value, the better the relationship. A value of 0.7 is typical.
These reports are useful for tuning your model and guiding parameter changes. Error values (high or low) are not meaningful with a small sample size (i.e., fewer than 200-300 points).
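For reference, the statistics in these reports can be reproduced from paired measured and predicted values roughly as follows. This is a sketch using standard definitions; the exact sign convention and estimator details used by ASSET are not documented here.

```python
import math

def error_stats(measured, predicted):
    """Mean error, RMS error, std. dev. of error and correlation coefficient
    for paired measured/predicted signal levels (dBm)."""
    errors = [m - p for m, p in zip(measured, predicted)]  # sign convention assumed
    n = len(errors)
    mean_err = sum(errors) / n
    rms_err = math.sqrt(sum(e * e for e in errors) / n)
    std_err = math.sqrt(sum((e - mean_err) ** 2 for e in errors) / n)
    # Pearson correlation between measured and predicted values
    mm, mp = sum(measured) / n, sum(predicted) / n
    cov = sum((m - mm) * (p - mp) for m, p in zip(measured, predicted))
    corr = cov / math.sqrt(sum((m - mm) ** 2 for m in measured)
                           * sum((p - mp) ** 2 for p in predicted))
    return mean_err, rms_err, std_err, corr
```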
Autotune Button
Once the header files are loaded and the Autotune button is selected, a Model Calibration Utility window will appear. In the Status Log, the data files will load individually and the tool will compute the Initial Statistics based on the model selected under 'Options'.
Notes: [Placeholder - insert picture of the model parameters window; discuss use of the Height Profiler tool in analysis.]
Preparing the Data

File Screening
File screening refers to reviewing a sector file within ASSET and making a subjective call as to acceptance or rejection of the data set. As a first step, each drive file must be screened. Individual sector files are loaded and inspected in the ASSET map window. Screening of individual sectors is performed to check for anomalies such as a possible blocked antenna, or errors in the site database such as:
• Incorrect antenna orientation
• Excessive downtilt (greater than 10 degrees for a very narrow (less than 4 degree beamwidth) antenna)
• Low antenna height (less than 10 meters, but dependent on the cluster average)
• Low EIRP (less than 33 dBm)
• Low number of data points (less than 300 samples)
Sectors that are discarded are summarized on a Mortality List with specific reasons, and recommendations are made based on them.

Example of a Failed Sector - Azimuth Error:
The site data indicates an azimuth of 340 degrees, but the plot shows very little correlation. This sample set may be discarded and added to the mortality list. A suggestion was made to the market to check for a possible sector cabling error and/or verify the antenna azimuth.
Data Filtering
After screening each sector file and accepting the data, the sample file needs to be filtered to remove data points that are either unreliable or not desirable for model tuning. These include data points that are within a certain radius of the antenna or beyond a certain radius of the antenna, data points with an RSSI outside specified signal-strength limits, and in particular data points with an RSSI considered to be weaker than the noise floor of the scanner. The filtering process helps exclude data points that lie outside the linear region of the scanner's amplifier, and hence outside the propagation region of interest. The values for the power levels and distances are largely based on equipment specs and site specs respectively. Other filtering options can be applied based on Line-of-Sight (LOS) or Non-Line-of-Sight (NLOS) data points. This filtering is based on terrain data, but it can also take into consideration building vector heights and clutter heights if they are assigned; it is used to compute the effect of diffraction.

Exclude Clutter
This removes samples based on clutter type. Often, clutter types with an insufficient number of samples (for reliability reasons) may also be excluded from analysis. This is done by selecting those clutter types for exclusion in the filter window.
Filter window callouts:
• Distance filter - removes samples outside of the given distances. Values vary with morphology, site height, and terrain.
• Signal-level filter - removes samples based on signal strength. The values shown are typical.
• LOS filter - removes either LOS or Non-LOS samples, based on terrain data only. Useful for evaluation of K7.
• Antenna filter - removes samples outside of the antenna beamwidth. Checked when using directional antennas.
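To make the intent of the distance and signal-level filters concrete, the sketch below applies them to individual samples. The threshold values are placeholders only; in practice they come from the scanner specifications and the site/morphology considerations discussed above.

```python
import math

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between the site and a sample point (km)."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def keep_sample(dist_km, rssi_dbm,
                min_km=0.2, max_km=10.0,        # assumed distance limits
                min_dbm=-105.0, max_dbm=-40.0): # assumed noise floor / linear range
    """Return True if the sample passes the distance and signal-level filters."""
    return (min_km <= dist_km <= max_km) and (min_dbm <= rssi_dbm <= max_dbm)
```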
Antenna Beamwidth Filtering
When using live-system or BCCH drive data, or when using directional antennas, it is necessary to filter data points outside of the main antenna beamwidth. This removes the sample points outside of the calculated 3 dB beamwidth of the antenna (the beamwidth is determined by ASSET by reading in the antenna pattern and cannot be altered by the label in the antenna pattern file). Inclusion of these points will distort the model, as
there will be a wide spread of signal values vs. distance. The influence on K1 and K2 may be substantial, as there will be a wider spread of sample points relative to distance from the site. See the example graphs below.

Plot of unfiltered drive data:
[Figure: unfiltered drive data; antenna azimuth = 340 degrees]
Plot of data file with 3dB Antenna filter
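The azimuth-based part of the antenna filter can be sketched as follows: keep only samples whose bearing from the site lies within the 3 dB beamwidth centred on the antenna azimuth. The bearing calculation below uses a simple equirectangular approximation, and the beamwidth value is assumed to come from the antenna pattern as read by ASSET.

```python
import math

def within_beamwidth(site_lat, site_lon, lat, lon, azimuth_deg, beamwidth_deg):
    """True if the sample's bearing from the site is inside the 3 dB beamwidth.

    Simplified bearing using an equirectangular approximation; adequate for the
    short distances involved in drive-test filtering.
    """
    dx = (lon - site_lon) * math.cos(math.radians(site_lat))  # east component
    dy = lat - site_lat                                       # north component
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0
    # Smallest angular difference between bearing and antenna azimuth
    off_boresight = abs((bearing - azimuth_deg + 180.0) % 360.0 - 180.0)
    return off_boresight <= beamwidth_deg / 2.0

# Example: antenna azimuth 340 deg, assumed 65 deg horizontal 3 dB beamwidth
# kept = [s for s in samples if within_beamwidth(site_lat, site_lon, s[1], s[0], 340.0, 65.0)]
```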
Options/Model Tab
This tab selects the Map Resolution at which the model is to be tuned and the model to be tuned (see 'Adding a Base Model'). ASSET can hold digital maps with more than one resolution (typically 25 m and 100 m, or 30 m and 90 m). Since model calibration is done on a bin-by-bin basis, a selection of the Map Resolution is needed.
Hint: If there is only a single map resolution, that resolution is the default; otherwise, a selection needs to be made. Choose the map with the higher resolution to produce a more finely tuned model, but note that any drive samples falling in bins covered only by the lower resolution will not be included in the analysis.
Adding a Base Model

ASSET Default Model
After setup of the Autotuner and Filtering Options, the user must define a 'default' model. The default will be renamed after tuning according to market requirements and the categorization of the sample files. This is usually based on build-up, with each market defining its morphology classes such as Dense Urban, Urban, Suburban, and Rural. The default model contains all the standard (untuned) values in the model, such as frequency, Effective Antenna Height Algorithm, Diffraction Methodology, etc. These are seen under 'Configuration' -> 'Propagation Models'.

Note: The 'Macrocell 3 Model' is used as a base model with its defaults for 1900 MHz. For a model in the PCS band, the frequency is set to 1920 MHz. The Effective Base Station Antenna Height algorithm used is the Relative algorithm (the calculated height between the base station antenna and the mobile antenna, which is the most accurate representation). Diffraction loss is calculated using the Epstein-Peterson method without merging any of the knife-edges along the path of the terrain database.

Macrocell 3 Model Defaults
K1        K2       K3       K4      K5        K6       K7
160.00    40.00    -2.55    0.00    -13.82    -6.55    0.80
K1 (near) = 0    K2 (near) = 0    D = 0.0 km

Effective Antenna Height Algorithm:    Relative
Diffraction Loss Calculation Method:   Epstein-Peterson
Mobile Antenna Height (m):             1.5

Clutter Types             Through Clutter Loss (dB/km)   Clutter Offset (dB)
Water                     -6.0                           0.0
Forest                    6.0                            0.0
Open                      0.0                            0.0
Low Density Building      3.0                            0.0
Med Density Building      6.0                            0.0
High Density Building     9.0                            0.0
Major Transportation      -3.0                           0.0
Airport                   -3.0                           0.0
Through Clutter Loss Distance (m):     800
[Screenshots: General tab and Path Loss tab of the propagation model window, including the two-piece model parameters.]
[Screenshot: Clutter tab of the propagation model window.]
Tuning the Model

Setup of AutoTuner Parameters
Without setting limits for the tool, it will return results that have statistical merit but are not necessarily sound from an engineering standpoint. The initial parameter values, iteration limits, Delta Ranges (which limit the change the tool can make to a particular parameter), and the fixing (locking) of parameters that the user does not want changed during the auto routine all need to be initialized. These may be narrowed as the user progresses towards a final value.

For the initial setup of the Optimiser Parameters:
• Max Iterations - 100
• Conv. Accuracy - 0.001

For the Delta Ranges of the K-parameters:
• K2 Delta Range - 1.0, changing to 0.1 when narrowing.
• K7 Delta Range - 0.01
• Zero all Through Clutter settings and fix them.
• Fix K5 through K7 at their default values.
• Do not change K3 or K4 from their default values.

(Window callouts: Iteration Limiter, Value Limiter, Lock Values.)
Initializing K1 and K2
The first step in model calibration is determining initial (or base) values of K1 and K2. The idea at this point is to determine a best-fit line for the most likely mobile location (dominant clutter type). K1 will be changed at several points during the tuning process and should be adjusted periodically to prevent other parameters from deviating too far from their final values.
• Load the header files for a single morphology type.
• Determine the most significant clutter type for the given morphology.
• Apply the Autotuner defaults (given above).
• Apply the changes and record the initial error values.
Turn off all clutter except the type determined to be most dominant, and Analyse. The Autotuner will return initial values for K1 and K2. If the value for K2 is reasonable, commit the values and continue. Remember these values are just preliminary, and further tweaking will be necessary at the end of the calibration process.
Hint: K1 is the parameter used to zero the mean error. When the mean error is positive, the model is underpredicting compared to the drive data and K1 should be reduced; when the mean error is negative, the opposite applies.
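The initial K1/K2 fit is essentially a least-squares line of pathloss against log10(distance). The sketch below shows the equivalent hand calculation; it is illustrative only and ignores the K3-K7 and clutter terms, which are held fixed at this stage.

```python
import math

def fit_k1_k2(distances_km, pathloss_db):
    """Least-squares fit of pathloss = K1 + K2*log10(d), with d in km.

    Sketch only: assumes the K3-K7 and clutter contributions are zero or
    already removed, as in the initial tuning step described above.
    """
    x = [math.log10(d) for d in distances_km]
    y = pathloss_db
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    k2 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    k1 = my - k2 * mx
    return k1, k2
```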
Tuning for K7
The next step is tuning for diffraction loss and shadowing effects caused by the terrain. In urban or flat terrain areas this may not be a significant factor, but it must be investigated. K7 is a multiplying factor that alters the impact of the diffraction loss, and its value is always less than 1.0. To assess the diffraction effect, data points that are non-LOS to the transmitting antenna are chosen. This is done by deselecting the LOS data points in the filter options.
(Filter settings for this step: uncheck LOS, check Non-LOS.)
The Non-LOS data is then auto-tuned to return a value for K7 alone, by locking ALL other parameters in the Auto-Tune Module.
The Delta value for K7 can be set to 0.01, and the iteration limit to 100. If the returned value for K7 is acceptable, the value is manually applied to the model. To check the effects of the change in K7 on the overall model, the LOS data points are included again and the analysis is rerun. A change in the standard deviation, and/or a change in the mean error and correlation coefficient, will be observed. If the statistics show improvement, then the changes are committed.
Note: The change in the K7 value may result in an increased mean error in the analysis report. Do not worry about changing K1 until later in the process, as other changes are still necessary. The mean error is zeroed out as a final step of model calibration.
Tuning K3 - K6
Model coefficients K3 – K6 are constants which alter the effect of the BS Effective Height Gain and the MS Antenna Gain. K3 and K4 are used to modify the effect of the mobile antenna height on the received signal strength. In most mobile networks, the mobile height is considered to be fixed at 1.5 m above the terrain height. The default values for 1900 MHz systems are K3 = -2.55 and K4 = 0.00. These values are not altered when model tuning. K5 and K6 are used to modify the effect of the base station antenna height gain on the received signal strength. Since the Effective Height Algorithm used is the Relative method, the effect of the terrain data is more prominent than the absolute base station antenna height. The default values for K5 and K6 are hence not generally altered.
Tuning for Clutter

Clutter Thru-Loss
When using high-resolution clutter data, a more accurate model can be developed by utilizing the Thru-Loss algorithm within the Macrocell 3 model. A reduction in std. dev. of 1-2 dB can usually be achieved if it is applied properly. Tuning for Thru-Loss, like the model constants, is partly a manual and iterative process, but the Autotuner can help the user make the initial assignments. As with other model parameters, the user must help guide the autotune process through the use of delta ranges and the fixing of parameters. After a sanity check of the values, and noting cause and effect, the values can be applied. Review the Clutter Summary to get the number of sample points used by the Autotuner to make each assignment. Some clutter types will have a very low number of samples and will need some manipulation by the user. Clutter types with a high number of samples are generally reasonable for assignment; use those values to help guide manual assignment of the ones with few sample points, as there should be a trend in the values. Lastly, round values up or down to maintain simplicity (e.g., 5.94 to 6.0). Use the default values table for sanity checking of assignments (the same 'Macrocell 3 Model Defaults' table given under 'Adding a Base Model'). The values will vary slightly from model to model, but will maintain a trend as mentioned (values will be higher for an urban, or more built-up, area and lower for a more rural or open area).
Once all the Thru-Loss assignments have been made, Thru-Loss Distance is examined. Thru-Loss Distance is based on morphology, but is also influenced by the average antenna height. Typical values for Thru-Loss Distance range from 500 to 1000 meters. Use the Autotuner results for initial guidance and finalize based on error results. Finally, tweak the Thru-Loss values by examining the Clutter Summary.
Clutter Offsets
Lastly, clutter offsets are assigned. Unlike Thru-Loss, clutter offsets have no trend and will often be very close to zero when Thru-Loss is used. This is a final offset made by the Autotuner to reduce the
mean error, but as above, it is meaningless for clutter types with very few sample points. Use the returned values with the same discretion as all other values. Clutter Offsets are end-point offsets associated with each clutter type, based on a statistical analysis that makes the final adjustments to the Through Clutter Loss – slope/intercept model. The value is used as a balancing mechanism to minimize the mean error; hence its values may not appear intuitive or follow the trend of the Through Clutter Loss values. Clutter offsets work best to characterize Oceans, Lakes, and Rivers (Water). An assignment that deviates from the Autotuner value is most often required here, as the Autotuner will otherwise mis-characterize the cross-water effect. A value of -6.0 dB is typical.
Final Tuning of K1, K2 and Clutter Offsets
After all the Thru-Loss values and the Thru-Clutter Distance are tweaked, finalized, and locked down, final adjustments need to be made to K1 and K2. Repeating the process used for the initial adjustments of K1 and K2 returns the final values. This ensures a null mean error and null clutter mean errors, for the best slope possible. In very few cases does K2 require a change; it is more likely to be needed if the thru-loss of the dominant clutter types was changed greatly in the steps above, depending on the direction (positive or negative) of those changes. Because clutter offsets cannot be fixed (or locked), the Autotuner assigns or updates them each time. The offsets will not be valid for clutter types that have too few data points for a statistically reasonable assignment; in these cases the Clutter Offset must be manually assigned and reviewed based on the trend shown.
Error Analysis
Determining whether your model is sound and reasonable is difficult. What if your results give a standard deviation greater than 8 dB? Where do you look for errors? What if none are found? If the error values are not acceptable, consider the possibility of a two-piece model. Eventually you will have to stop your analysis, as every possible parameter will have been tweaked and modified. Investigate:
• Effective Height Algorithm - select a different effective height algorithm and recalculate the K5 and K6 parameters.
• Diffraction - choose a different diffraction algorithm and retune the diffraction parameter (K7). Also investigate merging knife-edges. The Height Profile window and the drive test signal and signal error on the Map View provide valuable visual aids for identifying possible areas where merging may be required, and by how much.
• Clutter Heights, Separation, and Mobile Heights - adding clutter heights and a separation value (which must be > 0) can be of occasional aid when modeling urban environments. Clutter Separation has the effect of modeling the 'urban canyon' situation of a mobile being at street level. Lastly, mobile height models the situation of the mobile being at a specified height within the clutter.
Comparison Test
Using comparison plots to change the model parameters is difficult, but it is effective for developing more reliable models. Changes are made manually, and the results, including their effect on other parameters, are noted. If the changes are reasonable relative to the other tests and show an improvement in the statistics, the changes are committed. Example of a Comparison Plot:
Hint: When producing a comparison plot, after you have determined the number and signal level for the respective bands, it is convenient to represent the bands with a color that is a lighter shade of that used in the drive test. This helps make the comparison more intuitive and easier to visualize.
Relative Parameter Test
Are the individual K parameters and Thru-Loss values reasonable? How do the values compare from model to model? It is reasonable to assume K1 and K2 are larger for urban environments than rural environments. Does this trend hold?
Error Test
Overall, the mean error, RMS error, and standard deviation produced by the regression analysis are used to quantify the results. Individually, the results are not significant, but they have depth when viewed collectively.

Mean Error
In all cases, the overall mean error should be zero, or very close to it. When it is not, it is an indication that the model is overpredicting or underpredicting cell coverage. In most cases it will range from +15 to -15 when viewed on a per-cell basis. If a cell's mean error is significant, for example -20, it may indicate an operational problem with the cell, and the site should be removed from the analysis list. Alternatively, the morphology of the cell could be mis-classified compared to the other cells and it is simply not a good fit.

Std. Dev. and RMS Error
Std. Dev. and RMS error are almost the same, and it is usually user preference as to which one is most important. Error stats alone are not sufficient, as an 8 dB std. dev. may be impossible given the environment and the number of sites/sample points. If you look at the site statistics, you will see some sites above 8 dB and some that are below. Modeling error can be broken down into two parts: the error due to signal fading, and the human error tied to the accuracy of the databases used to model the source of the signal.
Single Slope Model vs. Dual Slope Model
Sometimes it is more appropriate to model the data distribution with a 2-piece model. A two-piece model will fill in coverage near the site if the drive data shows this trend and, occasionally, can improve error results (1 to 2 dB std. dev.). It is most applicable to rural environments, as man-made reflections mask this behavior in urban settings. The characteristics of the radio propagation differ at the near-end and the far-end of the site. This model has a second K1 and K2, which serve to characterize near-antenna coverage; then, after some breakpoint distance, the model trends a line with a shallower K2 value. This is demonstrated in the figure below.
[Figure: Two-part model - receive level vs. distance from the base station, with Intercept 1 / Slope 1 before the break point and Intercept 2 / Slope 2 after it.]

Hint: For the above graph to be theoretically valid, K1 (near) has to be less than K1 (far), and K2 (near) has to be greater (3-5 dB) than K2 (far).
There is no clear way to tell whether the drive data is single- or dual-slope other than close visual and analytical inspection of the data. Start with a single-slope model and, if it does not give the desired results, investigate a two-piece model. The breakpoint distance is best determined by inspection of the drive data for a good number of sectors under test. Most often the break point distance is seen to be between 1.5 and 2 km for typical cells; however, it will vary based on antenna height, EIRP, and morphology class.
Developing a 2-Piece Model
A base model may be retuned to achieve the desired error statistics while concentrating on a best fit between the drive data and the propagation at the far-end of the site. Having calculated the various "K" values, Clutter Thru-Loss, and Clutter Offsets, proceed to develop a model for the near-end by tweaking only the K1 and K2 values, specifying K1 (near) and K2 (near) to achieve a best fit for the near-end of the site. The near-end of the site is determined by a factor called the break point distance (D).
• Analyze the data as a whole, as normal, and come to a stopping point based on the final error statistics.
• Break the data into two parts, near and far, by filtering the data on the breakpoint distance. Approximate the breakpoint using 4·H1·H2/λ (the two-ray breakpoint approximation), where H1 and H2 are the base and mobile antenna heights; a worked example follows this list.
• Analyze the near-field data to obtain K1 (near) and K2 (near).
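As a worked example of the 4·H1·H2/λ breakpoint approximation, with assumed values of a 30 m base antenna, a 1.5 m mobile antenna, and 1900 MHz:

```python
# Two-ray breakpoint approximation, d = 4*h1*h2/lambda (example values assumed)
c = 3.0e8            # speed of light, m/s
f = 1.9e9            # 1900 MHz
wavelength = c / f   # ~0.158 m
h1, h2 = 30.0, 1.5   # base and mobile antenna heights, m (assumed)
d_break = 4 * h1 * h2 / wavelength
print(f"Breakpoint distance ~ {d_break:.0f} m")   # ~1140 m, broadly in line with the 1.5-2 km observed
```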
Hint: It is also important to develop a smooth transition from the near end to the far end. There should be no abrupt changes in signal level, and care must be taken to 'feather' the transition to ensure a satisfactory coverage plot. Most often a problem becomes apparent when K2 (near) and K2 (far) get too far apart (>10 dB is typical). To smooth the transition, slight adjustments to K1 and K2 may be needed after inspection.
Sources of Error
The RF environment behaves very strangely at times. Direct and ground-reflected waves, as well as reflections from buildings, all impinge upon the mobile and produce a signal that is widely
varying. Diffraction, shadowing, tunneling, and cross-water effects contribute as well. Combine this with the speed of the mobile and its movement across the sector face, with varying antenna gain, and you have an incredibly varying, random environment. Fortunately, this randomness follows some sort of order and can be quantified by regression analysis and statistics. The 8 dB figure mentioned above is a starting point, but it attempts to capture both the randomness described and the FIT of a slope/intercept line across a space containing that randomness. In some cases this is very difficult, especially in areas of dense urban buildup. Database control is obviously critical. Sanity checks of system information and the databases used in model tuning can be performed at key intervals, but some errors go unidentified. All databases (site, antenna, channelization, land-use, and terrain) are possible sources of error. Using site locations with antennas that are unobstructed in their near-field is essential. All these factors add to the error of the model, but they are generally averaged out if enough sites and samples are observed and added to the sample set.
Conclusion
Model tuning is an iterative process that requires time and patience, but most importantly a deliberate approach. This document attempts to give one such approach that the authors have found successful. The overall strategy to maintain, regardless of approach, is to find a middle point and then apply successive tweaking, trying to improve the results. Check your results, and tweak again. If you go off course, revert back to a known good result and try again, this time in another direction or with another parameter. This document has tried to mention, if not discuss, every parameter available for model tuning, and to give some insight on how best to apply each. Model tuning is part science, part art. The science is knowledge of radio behavior and statistical merit. The art comes with adjusting some parameters and seeing their effect on others. Trial and error is the only way to become adept.
APPENDIX 1 - About the ASSET Macrocell 3 Model

Introduction
The form and parameters of the base Macrocell 3 model are based on the ETSI-Hata/COST231 model, with a few additional features incorporating algorithms for diffraction loss and effective base station heights. It also provides an accurate antenna masking process through interpolation and quantization of the antenna mask. The base Macrocell 3 slope/intercept model is of the form:

Ploss = K1 + K2*log(d) + K3*(Hms) + K4*log(Hms) + K5*log(Heff) + K6*log(Heff)*log(d) + K7*Diffn + Closs

where Ploss is the pathloss, Hms is the mobile station height, Heff is the base station effective height, Diffn is the diffraction loss, and Closs is the clutter loss. Distance (d) is in kilometers.
• K1 is the intercept and can be thought of as the amount of pathloss encountered at the 1 km point.
• K2 is the slope in dB/decade and typically takes a value between 30 and 40.
• K3 and K4 are modifiers to the gain effect seen when the mobile antenna is near the ground. K4 is typically zero, however.
• K5 and K6 are modifiers to the effective height gain of the base station.
• K7 is a modifier to the calculated diffraction loss and is usually less than 1.0.

Each clutter type can also be assigned an associated Thru-Loss in dB/km, used in conjunction with a Thru-Loss Distance. A clutter-offset parameter is utilized as a final adjustment to minimize the mean error associated with a clutter type. Other parameters associated with clutter are clutter heights and separation (the average distance from obstruction to mobile). See Appendix B below for a detailed explanation of the Thru-Loss algorithm.

Effective Height
There are four Effective Antenna Height Algorithms within ASSET, each suited to different terrain and network characteristics.
• The Absolute method is not widely used in cellular networks but is used in certain broadcast systems.
• The Average method works well in flat or gently rolling terrain.
• The Relative method works well in rolling-to-hilly terrain where the base station is normally above the mobile.
• The Slope method works well in hilly and severely hilly areas where the other algorithms consistently over-estimate the Heff.
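The slope/intercept form given at the start of this appendix translates directly into code. The sketch below evaluates the formula using the Macrocell 3 defaults quoted in this guide; the diffraction and clutter-loss terms are treated as inputs, since they are computed by ASSET's own algorithms.

```python
import math

def macrocell3_pathloss(d_km, h_ms, h_eff, diff_loss_db, clutter_loss_db,
                        k=(160.0, 40.0, -2.55, 0.0, -13.82, -6.55, 0.80)):
    """Evaluate Ploss = K1 + K2*log(d) + K3*Hms + K4*log(Hms)
    + K5*log(Heff) + K6*log(Heff)*log(d) + K7*Diffn + Closs.

    Defaults are the 1900 MHz Macrocell 3 values quoted in this guide; the
    diffraction and clutter losses are inputs here, computed elsewhere by ASSET.
    """
    k1, k2, k3, k4, k5, k6, k7 = k
    logd = math.log10(d_km)
    return (k1 + k2 * logd + k3 * h_ms + k4 * math.log10(h_ms)
            + k5 * math.log10(h_eff) + k6 * math.log10(h_eff) * logd
            + k7 * diff_loss_db + clutter_loss_db)

# Example: 2 km from a site with 25 m effective height, no diffraction or clutter loss
# print(macrocell3_pathloss(2.0, 1.5, 25.0, 0.0, 0.0))
```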
Diffraction The diffraction algorithm determines how a loss figure is calculated when multiple knife-edges are detected along the terrain profile from base station to mobile. There are four methods within ASSET:
• Epstein-Peterson - the loss from each knife edge is calculated and then summed together.
• Bullington - this method replaces the real terrain with a single equivalent knife edge.
• Deygout - loss is calculated relative to the main obstruction.
• Japanese Atlas - similar to Epstein-Peterson, but the height of the transmitter is altered.
Hint: Typically, Epstein-Peterson or Bullington are the most popular, but the choice is user preference. When using high-resolution terrain, merge knife-edges less than zero.
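To illustrate the Epstein-Peterson idea (the loss from each knife edge is calculated and then summed), the sketch below sums a standard single knife-edge loss approximation over a list of Fresnel diffraction parameters. It is a conceptual illustration only; ASSET's path geometry and edge-merging behaviour are not reproduced here.

```python
import math

def knife_edge_loss_db(v):
    """Approximate single knife-edge diffraction loss (ITU-R P.526 style) for
    Fresnel parameter v; loss is taken as negligible below v = -0.78."""
    if v <= -0.78:
        return 0.0
    return 6.9 + 20.0 * math.log10(math.sqrt((v - 0.1) ** 2 + 1.0) + v - 0.1)

def epstein_peterson_loss_db(fresnel_params):
    """Epstein-Peterson: sum the individual knife-edge losses along the path."""
    return sum(knife_edge_loss_db(v) for v in fresnel_params)

# Example: two modest obstructions along the profile (assumed v values)
# print(epstein_peterson_loss_db([0.5, 1.2]))
```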
Appendix B - More on Clutter Thru-Loss and Thru-Loss Distance
Clutter offsets are a fixed end-point correction factor that improves the correlation between measured and predicted pathloss. This improves the standard deviation appreciably, but it takes into consideration only the clutter type at the point for which the pathloss is being computed, not the loss due to the different clutter types along the propagation path.

Through Clutter Loss is the additional loss attributed to the clutter types that the signal propagates through. The total thru-loss for a prediction point is calculated by examining the clutter lying between the mobile and the base over a defined distance, the Through Clutter Loss Distance, d. When calculating thru-loss, the individual bins are weighted linearly so that the ones closest to the mobile have the greatest effect and the ones at distance d have the minimum. The value of Through Clutter Loss varies for different environments and depends largely on the clutter through which the signal has already passed. The effects of the clutter types in the path tend to have a residual effect on the value of the Through Clutter Loss parameter. It is not a quantitative measure of the additive loss associated with the clutter type, but rather a value that shapes the straight line to better fit the measured data, and hence may not be intuitively assigned or predicted. Through Clutter Distance represents the distance from the mobile towards the base station through which the signal penetrates the clutter; over the remaining distance, the signal is assumed to propagate above the clutter.

The Through Clutter Loss and Distance algorithm works as follows:
• Through Clutter Loss is added to the computed pathloss after applying a weighting factor.
• The weighting is applied linearly, with a weighting factor of 1.0 for the bin closest to the mobile antenna and a weighting factor of 0.0 at the bin that is at the distance defined by Through Clutter Distance.
• The Clutter Offset is used as an end-point correction factor to balance the effect of Through Clutter Loss in order to minimize the mean error.

The Through Clutter algorithm provides a smooth transition, or averaging effect, of pathloss between clutter areas. For example, consider a water edge next to a tree line. The loss does not jump immediately from the forest value to the water value, but changes gradually bin by bin. Through-Clutter models this more accurately than offsets alone. It is similar to path profile algorithms found in other propagation tools.
The following example illustrates the operation of the Clutter Through Loss correction. Assume the following:
Bin size: 100 x 100 meters
Through Clutter Distance: 1000 meters (1 km)
Clutter thru-loss values: F = Forest (6 dB/km), B = Buildings (10 dB/km)

The 1 km path from the TX towards the mobile crosses 11 bins. The weighting factor rises linearly from 0.0 at the bin furthest from the mobile (at the Through Clutter Distance) to 1.0 at the bin containing the mobile:

Bin (TX -> Mobile)   F     F     F     B    B    B    F     F     F     B    B
Weight               0.0   0.1   0.2   0.3  0.4  0.5  0.6   0.7   0.8   0.9  1.0
Through Loss (dB)    0.0   0.06  0.12  0.3  0.4  0.5  0.36  0.42  0.48  0.9  1.0

Each bin's contribution is its clutter thru-loss scaled by the bin length and the weight; for example, for the Forest bin with weight 0.7: Loss = 6 dB/km * (100/1000) km * 0.7 = 0.42 dB.

Total Through Clutter Loss correction = 0 + 0.06 + 0.12 + 0.3 + 0.4 + 0.5 + 0.36 + 0.42 + 0.48 + 0.9 + 1.0 = 4.54 dB.
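The same worked example can be reproduced in a few lines of code. This is a sketch of the weighting described above; the linear weight step and bin handling are simplified assumptions based on this example.

```python
def through_clutter_loss_db(clutter_loss_per_km, bin_size_m=100.0):
    """Weighted through-clutter loss for the bins between the Through Clutter
    Distance (first element, weight 0.0) and the mobile (last element, weight 1.0).

    clutter_loss_per_km: per-bin thru-loss values in dB/km, ordered TX -> mobile.
    """
    n = len(clutter_loss_per_km)
    bin_km = bin_size_m / 1000.0
    total = 0.0
    for i, loss_per_km in enumerate(clutter_loss_per_km):
        weight = i / (n - 1)          # 0.0 at the far bin, 1.0 at the mobile
        total += loss_per_km * bin_km * weight
    return total

# Bins from the example: Forest (6 dB/km) and Buildings (10 dB/km), TX -> mobile
bins = [6, 6, 6, 10, 10, 10, 6, 6, 6, 10, 10]
print(round(through_clutter_loss_db(bins), 2))   # 4.54 dB, matching the example above
```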
Glossary of Terms
• Bins - the mapping resolution of the tool, usually 25x25 meters. The smallest measurable unit in the modeling tool. Several sample points could be located in a bin.
• CW Measurements - a generic term used to describe propagation testing done with an unmodulated carrier, supported from a temporary TX facility, i.e., usually low power and an omni transmit antenna.
• Filtering - removal of select data points within a sector file.
• Intercept/Slope - the basic form of the prediction model, defined by the equation K1 + K2*log(d).
• Mean Error - a statistical measure of the average error calculated by examining the differences between the predicted and measured values. It should be very close to zero in most cases.
• Morphology - a general description for the area in which test sites are grouped, based on the surrounding buildup or concentration of land-usage. For example, Suburban-Hilly or Rural-Coastal.
• Mortality List - test locations that have been rejected for analysis due to discovery of a database error, lack of samples, or an unreasonable coverage footprint.
• RMS Error (Root-Mean-Square) - a statistical measure of the overall magnitude of the error, calculated by examining the difference between the predicted and measured values. Its value should be close to the Standard Deviation when the mean error is small.
• Sample Points - individual measurement points derived for a particular channel, for example: "The sector file is made up of 600 sample points."
• Screening - review of sector files on the whole and determining acceptance or rejection of the file for modeling.
• Standard Deviation (Std Dev) - the most common statistical measure of model error, most often stated as 8 dB. The Std Dev is similar to RMS error, and the two are identical when the mean error is zero.
• Two-Piece Model - a pathloss model with 2 distinct profiles separated at a break point distance. Identified by separate K1 and K2 values (near and far). Numerically, K2 (near) is the higher value, applying close to the cell.
• Tweaking - making a small change to the model and examining the results.
Quick Guide for ASSET Model Tuning
To calibrate a macrocell model, perform these initial steps:
• Inspect the drive test data to verify its validity and filter out any erroneous data.
• Ensure that sufficient data points are available for each clutter class. In most situations it is desirable for the data to be evenly distributed with respect to log(distance) from the site, the clutter classes, and the sectors.
• Enter a set of default values as an initial step.
Rough Calibration of the Standard Macrocell Model
Having performed the initial recommended steps, use these recommended steps as a guide to roughly calibrating the standard macrocell model:
• Load one or more drive test files and use the filtering to remove questionable data and obtain an unbiased data set. For example, filter out readings with a signal level below the noise floor, or clutter types with too little data to be statistically meaningful.
• Derive an estimate of the Slope Value (k2) from a plot of the Received Level vs. 10*log(distance) using the Measurement Graph facility. Then fine-tune this value.
• Adjust the k1 parameter to a value that will lower the mean error to 0. When the analysis report shows a positive mean error, it means the propagation model is pessimistic when compared to the drive test data by the reported value; in this case, you should lower the k1 value by the reported amount. Where a negative value is reported, the opposite applies.
• Diffraction effects (k7) occur only when there is no Line of Sight from the site to the mobile. Therefore, to determine the k7 parameter, filter the dataset to include only the non-LOS points and determine a value using the process described in the section above. As a rule of thumb, if the mean error is greater than 0, decrease k7; otherwise increase it.
• Modify the filter back to its original setting (to include LOS data as well in the analysis).
• Readjust the k1 value if the reported mean in the analysis report has increased or decreased after the k7 change.
• Adjust the k6 value, again using the process in the section above. It is useful to view the graphs and the Signal Error plot on the Map View to identify trends with successive parameter changes.
• Readjust the k1 value if the reported mean in the analysis report has increased or decreased after the k6 change.
• Adjust each clutter offset in turn, trying to get the mean error of that particular clutter to 0.
• Modify the k3, k4 and k5 parameters until the reported error is lowered.
• Now you can fine tune the model.
Fine Tuning the Standard Macrocell Model
When you have performed the initial and rough tuning steps, use these recommended guidelines when fine tuning the standard macrocell model. The objective is to identify what may be causing the differences between the propagation model and the actual drive test data and act to minimize the error. Use the analysis, filtering, and graph features to help you. Investigate:
• Effective Height Algorithm – Select a different effective height algorithm and recalculate the k5 and k6 parameters.
• Diffraction – Choose a different diffraction algorithm and retune the diffraction parameter (k7). Also investigate merging knife-edges. The Height Profile window and the drive test signal and signal error on the Map View provide valuable visual aids for identifying possible areas where merging may be required, and by how much.
• Two-Slope Model – Define alternate values for the intercept (k1) and the slope parameter (k2) to be used within a defined radius from the antenna. Typically a higher slope is used close to the antenna and a shallower slope further away.
• Inspect the survey data and use the graphs and the drive test data Signal and Signal Error displays on the Map View to determine where the breakpoint (d) may be.
• When a breakpoint distance has been found, calculate k1 (near) and k2 (near) in the same way as k1 and k2, but using only the subset of the survey readings which have a distance of less than the breakpoint (d).
• Clutter Heights, Separation and Mobile Heights – Adding different clutter heights and selecting a clutter separation (clutter separation must be > 0) can be of aid when modelling urban environments. The clutter separation has the effect of modelling the 'urban canyon' situation of a mobile being at street level. Finally, mobile height models the situation of the mobile being at the specified height for the particular clutter.