Recovery Potential Screening

RPS Methodology, Step 6: Refine Your Assessment

In Steps 1 through 5, you compiled your data, customized your Recovery Potential Screening (RPS) Tool, and completed at least one initial screening run. You used the techniques in Step 5 to compare differences revealed by your screening, and from these tasks you have been gaining a more specific understanding of how certain indicators and indices affect your watershed comparisons. You may also have generated several alternative screening runs and now wonder which one to use, or how to use multiple results together. In Step 6, you revisit your draft results to date in light of your original screening purpose and determine the course of action that best meets your needs.

Revisit screening purpose and products. It is not unusual for an RPS project to begin with a very general goal, such as 'identify restorability differences', and to refine or even change that goal as the screening steps proceed. For example, an original plan such as 'identify and target the most restorable watersheds for restoration' might have evolved into 'identify the watersheds that have good RPI scores as well as moderately heavy nutrient loads and target these for TMDL implementation'. A single original goal may also have transformed into multiple, more specific goals, such as 'target watersheds for CWA Section 319 nonpoint source projects that have impairments and a high social index score', 'target watersheds with pathogen TMDLs as well as nearby drinking water intakes for expedited restoration', and 'target watersheds with impaired but restorable aquatic habitats for collaborative restoration with fisheries programs'. Because creating alternative screening runs is so easy, having multiple goals is not a problem unless it is necessary to narrow them down to one. In such a case, a close look at the outputs of screening runs designed to address each goal may help inform the selection process.

Some considerations for revisiting purpose and products include:

  • Has the original screening purpose remained the same?
  • Will the originally planned screening approach and indicators still meet the intended purpose?
  • Have new goals or purposes emerged during the screening process, and do indicators and screening runs exist that address them?
  • Will the products of the screening identify or compare waters or watersheds by way of a transparent, reproducible process?
  • Will interim screening results help inform any needed decisions or strategies?
  • Will specific products help in communicating decisions or strategies?

In brief, if these questions reveal additional data or screening needs, you can repeat Steps 3, 4, and 5 to fill those gaps. If they point instead to sufficient data with a need to choose among alternatives, proceed as follows.

Consolidate the screening runs. After conducting multiple screening runs and revisiting the scoring process, you may have multiple versions that identify different sets of watersheds as more restorable. On the other hand, it is likely that you have observed some degree of consistency or pattern in the results: variations in indicator selection and weights often still point to many of the same watersheds as high-scoring. The opportunity to explore different combinations of indicators has been informative, but your challenge now is to distill your findings into the form you will need to inform and support action. This is a project-specific task without a single, universal solution for all settings; nevertheless, the following approaches may suit your own circumstances:

  • Select the preferred screening run. Reaching consensus on one screening run may be desirable as a decisive solution to evaluating many alternatives. During the project, your working group has likely established preferences for one (or very few) of the screening runs and indicator selections, drawing from their expert judgment and familiarity with the area being assessed. Or, your group's discussions of candidate and preferred indicators early in the assessment may have led to general agreement and interest in performing just one screening. Selection of a specific screening run should consider the quality of the source and indicator data, the evaluation of its component indicators against reference sites, and how well the screening run addresses the assessment purposes and products discussed above.
  • Integrate multiple screening runs if necessary. Should your group feel that no single run is ideal, consider integrating the results of the best runs available. Averaging or summing the RPI scores of multiple runs are two ways to combine results if the objective is to reach one set of final scores. Further, it is possible to combine the results from multiple screening runs with the results of an entirely independent prioritization process (see for instance Table 7 in the screening example (PDF) (12 pp, 861 K)). Yet another approach involves targeting watersheds that surpass a defined set of scoring thresholds (e.g., all watersheds with highest-quartile ecological scores, lowest-quartile stressor scores and above-mean social context scores in three of five screening runs). Statistical methods may provide a systematic, repeatable way to vary the combinations of indicators in multiple screening runs and observe which watersheds consistently score well.
  • Maintain multiple alternatives intentionally. Sometimes the most useful output is, in fact, multiple sets of results rather than one. An assessment designed to assist a complex decision-making process, for example, might intentionally not select or rank-order specific watersheds but instead provide a few well documented alternatives to decision-makers for their final action. Different screenings could also reflect contingency plans for different budget scenarios, or different priority watershed selections for collaboration with different program partners. If your screening approach involved subsets such as separately screening lake and stream watersheds, or screening the three most common impairments separately, maintaining and using multiple sets of results is appropriate.
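
The averaging and threshold-counting approaches above can be sketched in a few lines of code. This is an illustrative example only, not part of the RPS Tool; the run names, watershed IDs, and RPI scores are hypothetical, and a real project would read scores from the saved Tool files.

```python
# Sketch: two ways to consolidate RPI scores from alternative screening runs.
# All data below are hypothetical placeholders.
from statistics import mean, quantiles

# RPI scores (0-100) for each watershed under three alternative runs
runs = {
    "run_a": {"WS01": 82, "WS02": 55, "WS03": 71, "WS04": 40},
    "run_b": {"WS01": 78, "WS02": 60, "WS03": 65, "WS04": 35},
    "run_c": {"WS01": 90, "WS02": 48, "WS03": 80, "WS04": 50},
}

# Approach 1: average the RPI scores across runs to reach one final set
combined = {
    ws: mean(run[ws] for run in runs.values())
    for ws in next(iter(runs.values()))
}

# Approach 2: a consistency-based threshold rule -- flag watersheds that
# score in the top quartile in at least two of the three runs
def top_quartile_ids(scores):
    cutoff = quantiles(scores.values(), n=4)[2]  # 75th percentile score
    return {ws for ws, s in scores.items() if s >= cutoff}

counts = {}
for run in runs.values():
    for ws in top_quartile_ids(run):
        counts[ws] = counts.get(ws, 0) + 1
consistent = sorted(ws for ws, c in counts.items() if c >= 2)
```

The threshold rule in Approach 2 generalizes directly to the 'three of five runs' style of criterion described above, and can be extended to require separate ecological, stressor, and social cutoffs per run.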

Match screening selections to specific actions. Reconciling several alternative screening runs may have clarified the outcome of your screening assessment without yet providing the specifics you require, usually a subset of the screened watersheds that is specifically matched to an intended action. Again, screening goals and project circumstances can vary so widely that making final selections is a project-specific task. The following are some simple options that use different types of Recovery Potential Screening products to target selected watersheds:

  • Select a defined portion from rank-ordered screening results. Resource-limited programs often are aware of their capacity in terms of yearly number of projects, funding available or other measures. It is possible to match annual or long-term capacity estimates to rank-ordered screening scores in identifying priorities for restoration. Alternatively, prioritization might use rank-ordered sites more generally (such as 'within the top quartile RPI score' or 'ranking above the state mean in ecological and social index score') or in combination with other qualifying factors (such as 'also an Environmental Justice area' or 'needing three projects per sub-state region'). It is also possible to avoid selecting a priority subset at all, and use the complete set of rank-ordered recovery potential scores purely to inform day-to-day activities on the likely level of effort and related factors that may be expected if working on a given location.
  • Use bubble plot quadrants to help select priorities. Step 5 discussed how bubble plot median axes bisect the plot in horizontal and vertical dimensions, thereby creating quadrants that offer a basis for different types of prioritization strategies. Selecting the upper left quadrant, for example, would target all watersheds with ecological and stressor scores better than the median. The upper right quadrant, on the other hand, might be selected if priorities lean toward high ecological scores paired with elevated and potentially threatening stressor scores. The lower right quadrant, where high stressor and low ecological scores converge, would be a place where potentially more expensive and complex restoration challenges could be found. In any of the above, prioritizing the larger 'bubbles' within the selected quadrant would specifically target better social context for restoration as well.
  • Use geographic and scoring factors together. If geographic proximity-based factors like 'adjacency to green infrastructure hub or corridor' were not already included in the screening indicators, selection of priorities might also include a mapping component. Prioritization might start with rank-ordering or bubble plot quadrants as above, then target further with geographically observable factors. For example, impaired watershed priorities might include watersheds in the best quartile based on RPI score but also contiguous with a larger patch or corridor of healthy watersheds. Another approach might prioritize restoring specific impaired watersheds to expand the downstream extent of healthy headwaters above highly significant and economically valuable water bodies.
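
The quadrant-based selection described above can be sketched as follows. This is an illustrative example with hypothetical watershed IDs and scores; it assumes the usual RPS bubble plot layout of stressor score on the horizontal axis, ecological score on the vertical axis, and social score as bubble size.

```python
# Sketch: selecting a bubble plot quadrant and prioritizing by bubble size.
# Watershed IDs and index scores are hypothetical placeholders.
from statistics import median

watersheds = {
    "WS01": {"eco": 85, "stress": 30, "social": 70},
    "WS02": {"eco": 60, "stress": 75, "social": 40},
    "WS03": {"eco": 72, "stress": 25, "social": 55},
    "WS04": {"eco": 45, "stress": 80, "social": 65},
    "WS05": {"eco": 50, "stress": 55, "social": 30},
}

# Median axes bisect the plot into quadrants
eco_med = median(w["eco"] for w in watersheds.values())
str_med = median(w["stress"] for w in watersheds.values())

# Upper left quadrant: ecological AND stressor scores better than the median
# (higher ecological score, lower stressor score)
upper_left = {
    ws: v for ws, v in watersheds.items()
    if v["eco"] > eco_med and v["stress"] < str_med
}

# Within the quadrant, prioritize the larger "bubbles" (higher social score)
priorities = sorted(upper_left, key=lambda ws: upper_left[ws]["social"],
                    reverse=True)
```

Changing the two comparison operators selects the other quadrants (e.g., `v["stress"] > str_med` with `v["eco"] > eco_med` gives the upper right quadrant of high ecological scores paired with elevated stressor scores).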

QA/QC your results. Many of the QA/QC steps you first addressed in Step 3 after completing your customized RPS Tool are again appropriate at this stage, with modifications to account for the additional screenings performed:

  • File naming. Verify that you now have a well-organized archive of RPS Tool files, each clearly named as to its location, purpose, content and date. These should include a master (i.e., unused blank copy) tool file and a file copy for each saved screening run. You should also have well-named and organized file copies of downloaded map and bubble plot images related to these screening runs, as these will be useful in reports and presentations as well as in informal planning discussions.
  • Completeness. If you have added indicator data, check that each indicator has been added to the right category (Base, Ecological, Stressor, Social) in all the screening runs you have saved. Also, be sure that the new indicator names and their definitions have been added to the INDICATOR INFO tab; adding the data alone does not provide this important metadata.
  • Availability. If you have added indicators correctly, their names will appear in the drop-down menus on the SETUP tab in the correct category. Recheck to be certain that indicators used in all the screenings remain available and up to date in the master Tool file, and that no metrics added during the project are missing.
  • Use reference watersheds to validate results. Your evaluation procedures should also compare screening results to reference sites of known quality, including healthy as well as impaired waters or watersheds. One common approach is to spot-check sample watersheds, manually checking raw values where watershed conditions are known from alternate indicators that were not used in the screening itself, particularly if examples of what should be high- and low-scoring reference watersheds are available. For each set of index values, observe whether the index performed as expected with regard to these reference watersheds. For example, did a high percentage of healthy reference sites score in the top quartile, as expected, for a specific RPS index? Unexpected results should trigger a much closer review of indicator inputs and assumptions throughout the screening process.
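
The reference-watershed check above can be expressed as a simple tally. This is an illustrative sketch with hypothetical RPI scores and reference lists; in practice the scores would come from a saved screening run and the reference lists from monitoring programs or local expert knowledge.

```python
# Sketch: did known-healthy reference watersheds land in the top RPI quartile?
# Scores and reference designations are hypothetical placeholders.
from statistics import quantiles

rpi = {"WS01": 88, "WS02": 35, "WS03": 76, "WS04": 52,
       "WS05": 91, "WS06": 44, "WS07": 68, "WS08": 29}
healthy_refs = {"WS01", "WS05"}    # known healthy reference watersheds
impaired_refs = {"WS02", "WS08"}   # known impaired reference watersheds

cutoff = quantiles(rpi.values(), n=4)[2]  # 75th percentile RPI score

healthy_in_top = sum(1 for ws in healthy_refs if rpi[ws] >= cutoff)
impaired_in_top = sum(1 for ws in impaired_refs if rpi[ws] >= cutoff)

pct_healthy = 100 * healthy_in_top / len(healthy_refs)
# A low pct_healthy, or impaired references scoring in the top quartile,
# is the kind of unexpected result that should trigger a closer review
# of indicator inputs and assumptions.
```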