
Commit

[1pt] PR: Function to extract HAND and hydroid values at point locations for SRC adjustment tool (#444)

This merges in the script adjust_rc_with_feedback.py, which will be expanded in future issues. The primary function that performs the HAND value and hydroid extraction is ingest_points_layer(), but this may change as the overall process evolves.

- Added adjust_rc_with_feedback.py with ingest_points_layer(), a function to extract HAND and hydroid values for use in an automatic synthetic rating curve updating mechanism.

This resolves #437.
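For context on usage: the script's argparse interface (shown in full at the bottom of this diff) takes a points layer, the parent FIM output directory, and a HUC6 WBD layer, so a run would look roughly like `python3 tools/adjust_rc_with_feedback.py -p /data/points/water_edge_points.shp -d /data/outputs/fim_run -w /data/inputs/WBD_National.gpkg`, where all three paths are placeholders.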
BradfordBates-NOAA authored Aug 19, 2021
1 parent 8949dc3 commit 7472238
Showing 5 changed files with 136 additions and 2 deletions.
21 changes: 21 additions & 0 deletions docs/CHANGELOG.md
@@ -2,6 +2,27 @@ All notable changes to this project will be documented in this file.
We follow the [Semantic Versioning 2.0.0](http://semver.org/) format.
<br/><br/>

## v3.0.22.0 - 2021-08-19 - [PR #444](https://github.com/NOAA-OWP/cahaba/pull/444)

This adds a script, `adjust_rc_with_feedback.py`, which will be expanded in future issues. The primary function that performs the HAND value and hydroid extraction is `ingest_points_layer()`, but this may change as the overall synthetic rating curve automatic update mechanism evolves.

## Additions
- Added `adjust_rc_with_feedback.py` with `ingest_points_layer()`, a function to extract HAND and hydroid values for use in an automatic synthetic rating curve updating mechanism.

<br/><br/>

## Changes
- Remove `Dockerfile.prod`, rename `Dockerfile.dev` to just `Dockerfile`, and remove `.dockerignore`.
- Clean up `Dockerfile` and remove any unused* packages or variables.
- Remove any unused* Python packages from the `Pipfile`.
- Move the `CHANGELOG.md`, `SECURITY.md`, and `TERMS.md` files to the `/docs` folder.
- Remove any unused* scripts in the `/tools` and `/src` folders.
- Move `tools/preprocess` scripts into `tools/`.
- Ensure all scripts in the `/src` folder have their code in functions and are being called via a `__main__` function (This will help with implementing memory profiling fully).
- Changed memory-profiling to be an optional flag `-m` for `fim_run.sh`.
- Updated FIM API to save all outputs during a "release" job.


## v3.0.21.0 - 2021-08-18 - [PR #433](https://github.com/NOAA-OWP/cahaba/pull/433)

General repository cleanup, made memory-profiling an optional flag, API's release feature now saves outputs.
2 changes: 1 addition & 1 deletion fim_run.sh
@@ -150,7 +150,7 @@ else
fi

# identify missing HUCs
time python3 /foss_fim/tools/fim_completion_check.py -i $hucList -o $outputRunDataDir
# time python3 /foss_fim/tools/fim_completion_check.py -i $hucList -o $outputRunDataDir

echo "$viz"
if [[ "$viz" -eq 1 ]]; then
2 changes: 1 addition & 1 deletion src/add_crosswalk.py
@@ -228,7 +228,7 @@ def add_crosswalk(input_catchments_fileName,input_flows_fileName,input_srcbase_f
print('Note: NOT using bathy estimation approach to modify the SRC...')

# make hydroTable
output_hydro_table = output_src.loc[:,['HydroID','feature_id','Stage','Discharge (m3s-1)']]
output_hydro_table = output_src.loc[:,['HydroID','feature_id','Stage','Discharge (m3s-1)', 'HydraulicRadius (m)', 'WetArea (m2)', 'SLOPE', 'ManningN']]
output_hydro_table.rename(columns={'Stage' : 'stage','Discharge (m3s-1)':'discharge_cms'},inplace=True)

if output_hydro_table.HydroID.dtype != 'str': output_hydro_table.HydroID = output_hydro_table.HydroID.astype(str)
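The four columns added to the hydroTable above are exactly the terms of Manning's equation, which the feedback tool in this PR plans to use when recalculating roughness (see the numbered comments in `adjust_rc_with_feedback.py` below). A minimal sketch of that relationship, assuming the final `hydroTable.csv` keeps these column names and that the SRC discharge was computed directly from Manning's equation (the path is a placeholder):

```python
# Q = (1/n) * A * R^(2/3) * sqrt(S): reproduce discharge for one hydroTable row
# from the newly exported columns. Illustrative only.
import pandas as pd

htable = pd.read_csv('hydroTable.csv')                    # placeholder path
row = htable[htable['HydraulicRadius (m)'] > 0].iloc[0]   # skip zero-stage rows
q = (row['WetArea (m2)']
     * row['HydraulicRadius (m)'] ** (2.0 / 3.0)
     * row['SLOPE'] ** 0.5) / row['ManningN']
print(q, row['discharge_cms'])                            # should roughly agree
```

Inverting the same relationship for an observed flow is what would yield a calibrated `ManningN`, which is the eventual goal of `adjust_rc_with_feedback.py`.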
1 change: 1 addition & 0 deletions src/run_by_unit.sh
@@ -400,6 +400,7 @@ echo -e $startDiv"Finalize catchments and model streams $hucNumber"$stopDiv outp
date -u
Tstart
[ ! -f $outputHucDataDir/gw_catchments_reaches_filtered_addedAttributes_crosswalked.gpkg ] && \
echo python3 -m memory_profiler $srcDir/add_crosswalk.py -d $outputHucDataDir/gw_catchments_reaches_filtered_addedAttributes.gpkg -a $outputHucDataDir/demDerived_reaches_split_filtered.gpkg -s $outputHucDataDir/src_base.csv -u $input_bathy_bankfull -v $outputHucDataDir/bathy_crosswalk_calcs.csv -e $outputHucDataDir/bathy_stream_order_calcs.csv -g $outputHucDataDir/bathy_thalweg_flag.csv -i $outputHucDataDir/bathy_xs_area_hydroid_lookup.csv -l $outputHucDataDir/gw_catchments_reaches_filtered_addedAttributes_crosswalked.gpkg -f $outputHucDataDir/demDerived_reaches_split_filtered_addedAttributes_crosswalked.gpkg -r $outputHucDataDir/src_full_crosswalked.csv -j $outputHucDataDir/src.json -x $outputHucDataDir/crosswalk_table.csv -t $outputHucDataDir/hydroTable.csv -w $outputHucDataDir/wbd8_clp.gpkg -b $outputHucDataDir/nwm_subset_streams.gpkg -y $outputHucDataDir/nwm_catchments_proj_subset.tif -m $manning_n -z $input_nwm_catchments -p $extent -k $outputHucDataDir/small_segments.csv
python3 -m memory_profiler $srcDir/add_crosswalk.py -d $outputHucDataDir/gw_catchments_reaches_filtered_addedAttributes.gpkg -a $outputHucDataDir/demDerived_reaches_split_filtered.gpkg -s $outputHucDataDir/src_base.csv -u $input_bathy_bankfull -v $outputHucDataDir/bathy_crosswalk_calcs.csv -e $outputHucDataDir/bathy_stream_order_calcs.csv -g $outputHucDataDir/bathy_thalweg_flag.csv -i $outputHucDataDir/bathy_xs_area_hydroid_lookup.csv -l $outputHucDataDir/gw_catchments_reaches_filtered_addedAttributes_crosswalked.gpkg -f $outputHucDataDir/demDerived_reaches_split_filtered_addedAttributes_crosswalked.gpkg -r $outputHucDataDir/src_full_crosswalked.csv -j $outputHucDataDir/src.json -x $outputHucDataDir/crosswalk_table.csv -t $outputHucDataDir/hydroTable.csv -w $outputHucDataDir/wbd8_clp.gpkg -b $outputHucDataDir/nwm_subset_streams.gpkg -y $outputHucDataDir/nwm_catchments_proj_subset.tif -m $manning_n -z $input_nwm_catchments -p $extent -k $outputHucDataDir/small_segments.csv
Tcount

112 changes: 112 additions & 0 deletions tools/adjust_rc_with_feedback.py
@@ -0,0 +1,112 @@
import argparse
import geopandas as gpd
from geopandas.tools import sjoin
import os
import rasterio

temp_workspace = r''
HAND_CRS = 'EPSG:3857'

def update_rating_curve(grouped_median, huc6):
pass


def ingest_points_layer(points_layer, fim_directory, wbd_path):

# Read wbd_path and points_layer to determine which HUC6 each point is in.
wbd_huc8_read = gpd.read_file(wbd_path, layer='WBDHU6')
points_layer_read = gpd.read_file(points_layer)

# Update CRS of points_layer_read.
points_layer_read = points_layer_read.to_crs(HAND_CRS)
wbd_huc8_read = wbd_huc8_read.to_crs(HAND_CRS)

# Spatial join the two layers.
water_edge_df = sjoin(points_layer_read, wbd_huc8_read)

# Convert to GeoDataFrame.
gdf = gpd.GeoDataFrame(water_edge_df)

# Add two columns for X and Y.
gdf['X'] = gdf['geometry'].x
gdf['Y'] = gdf['geometry'].y

# Extract information into dictionary.
huc6_list = []
for index, row in gdf.iterrows():
huc6 = row['HUC6']
if huc6 not in huc6_list:
huc6_list.append(huc6)

# Define coords variable to be used in point raster value attribution.
coords = [(x,y) for x, y in zip(water_edge_df.X, water_edge_df.Y)]

# Define paths to relevant HUC6 HAND data.
for huc6 in huc6_list:
print(huc6)

# Define paths to relevant HUC6 HAND data and get necessary metadata for point rasterization.
hand_path = os.path.join(fim_directory, huc6, 'hand_grid_' + huc6 + '.tif')
if not os.path.exists(hand_path):
print("HAND grid for " + huc6 + " does not exist.")
continue
catchments_path = os.path.join(fim_directory, huc6, 'catchments_' + huc6 + '.tif')
if not os.path.exists(catchments_path):
print("Catchments grid for " + huc6 + " does not exist.")
continue

# water_edge_df = water_edge_df[water_edge_df['HUC6'] == huc6]

# Use point geometry to determine pixel values at catchment and HAND grids.
hand_src = rasterio.open(hand_path)
water_edge_df['hand'] = [h[0] for h in hand_src.sample(coords)]
hand_src.close()
catchments_src = rasterio.open(catchments_path)
water_edge_df['hydroid'] = [c[0] for c in catchments_src.sample(coords)]

# Get median HAND value for appropriate groups.
water_edge_median_ds = water_edge_df.groupby(["hydroid", "flow", "submitter", "coll_time", "flow_unit"])['hand'].median()

output_csv = os.path.join(fim_directory, huc6, 'user_supplied_n_vals_' + huc6 + '.csv')

water_edge_median_ds.to_csv(output_csv)

# 1. Loop and find the corresponding hydroids in the Hydrotable
# 2. Grab slope, wetted area, hydraulic radius, and feature_id that correspond with the matching hydroids and HAND value for the nearest stage
# 3. Calculate new column for new roughness using the above info
# 3b. If multiple flows exist per hydroid, aggregate the resulting Manning Ns
# 3c. If range of resulting Manning Ns is high, notify human
# 4. Update Hydrotable
# 4a. Copy default flow and N columns to new columns with "_default" in the field name
# 4b. Overwrite the official flow and N columns with the new calculated values
# 4c. Add last_updated column with timestamp where values were changed, also add "submitter" column
# 5. What do we do in catchments that match the feature_id?
# 5a. If these catchments already have known data, then let it use those. If not, use new calculated Ns.

update_rating_curve(water_edge_median_ds, huc6)



if __name__ == '__main__':

# Parse arguments.
parser = argparse.ArgumentParser(description='Adjusts rating curve given a shapefile containing points of known water boundary.')
parser.add_argument('-p','--points-layer',help='Path to points layer containing known water boundary locations',required=True)
parser.add_argument('-d','--fim-directory',help='Parent directory of FIM-required datasets.',required=True)
parser.add_argument('-w','--wbd-path', help='Path to national HUC6 layer.',required=True)

# Assign variables from arguments.
args = vars(parser.parse_args())
points_layer = args['points_layer']
fim_directory = args['fim_directory']
wbd_path = args['wbd_path']

ingest_points_layer(points_layer, fim_directory, wbd_path)

# Open catchment, HAND, and point grids and determine pixel values for Hydroid, HAND value, and discharge value, respectively.

# Open rating curve file(s).

# Use three values to determine the hydroid rating curve(s) to update, then update them using a variation of Manning's Equation.

# Ensure the JSON rating curve is updated and saved (overwritten). Consider adding attributes to document what was performed.
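`update_rating_curve()` is still a stub in this commit, so purely as an illustration of where the numbered plan above could lead, here is a rough sketch, not the author's implementation. The hydroTable location and output filename, the extra `fim_directory` argument, the `N_SPREAD_WARNING` threshold, and the function name are all assumptions; the column names follow the hydroTable produced by `add_crosswalk.py`, and `flow` is assumed to already be numeric in m³/s (no `flow_unit` conversion shown).

```python
# Rough sketch only -- not part of this commit. Follows steps 1-4 in the plan above.
import os

import pandas as pd

N_SPREAD_WARNING = 0.1  # hypothetical threshold for step 3c


def update_rating_curve_sketch(water_edge_median_ds, huc6, fim_directory):
    # water_edge_median_ds: Series of median HAND values indexed by
    # (hydroid, flow, submitter, coll_time, flow_unit), as produced above.
    htable_path = os.path.join(fim_directory, huc6, 'hydroTable.csv')  # assumed location
    htable = pd.read_csv(htable_path, dtype={'HydroID': str})

    # 1-3. For each observation, find the rating-curve row whose stage is nearest
    # the median HAND value and back-calculate roughness with Manning's equation:
    # n = A * R^(2/3) * sqrt(S) / Q.
    n_samples = {}
    for (hydroid, flow, submitter, coll_time, flow_unit), hand in water_edge_median_ds.items():
        rows = htable[htable['HydroID'] == str(hydroid)]
        if rows.empty or flow <= 0:
            continue
        nearest = rows.loc[(rows['stage'] - hand).abs().idxmin()]
        n = (nearest['WetArea (m2)']
             * nearest['HydraulicRadius (m)'] ** (2.0 / 3.0)
             * nearest['SLOPE'] ** 0.5) / flow
        n_samples.setdefault(str(hydroid), []).append(n)

    # 3b/3c. Aggregate repeated observations per hydroid; flag wide spreads for review.
    calibrated_n = {}
    for hydroid, values in n_samples.items():
        if max(values) - min(values) > N_SPREAD_WARNING:
            print('Wide range of calculated n values for hydroid ' + hydroid + ', needs review.')
        calibrated_n[hydroid] = float(pd.Series(values).median())

    # 4a/4b. Keep the default columns, then overwrite n and recompute discharge
    # for calibrated hydroids only.
    htable['ManningN_default'] = htable['ManningN']
    htable['discharge_cms_default'] = htable['discharge_cms']
    mask = htable['HydroID'].isin(calibrated_n)
    htable.loc[mask, 'ManningN'] = htable.loc[mask, 'HydroID'].map(calibrated_n)
    htable.loc[mask, 'discharge_cms'] = (htable.loc[mask, 'WetArea (m2)']
                                         * htable.loc[mask, 'HydraulicRadius (m)'] ** (2.0 / 3.0)
                                         * htable.loc[mask, 'SLOPE'] ** 0.5
                                         / htable.loc[mask, 'ManningN'])

    # 4c. Record provenance and write to a side file rather than overwriting.
    htable.loc[mask, 'last_updated'] = pd.Timestamp.now(tz='UTC').isoformat()
    htable.to_csv(os.path.join(fim_directory, huc6, 'hydroTable_calibrated.csv'), index=False)
```

Taking the median across repeated observations (step 3b) keeps a single bad point from skewing the calibrated roughness, and the spread check in step 3c is where a human review could be triggered.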
