
[1pt] PR: Fix error msg of stats object not referenced #801

Merged 3 commits on Jan 27, 2023
12 changes: 12 additions & 0 deletions docs/CHANGELOG.md
@@ -1,6 +1,18 @@
All notable changes to this project will be documented in this file.
We follow the [Semantic Versioning 2.0.0](http://semver.org/) format.

## v4.0.19.5 - 2023-01-24 - [PR#801](https://github.com/NOAA-OWP/inundation-mapping/pull/801)

When running `tools/test_case_by_hydro_id.py`, it throws the error "local variable 'stats' referenced before assignment".

### Changes

- `tools`
    - `pixel_counter.py`: declare the `stats` list before the raster loop and remove the `GA_ReadOnly` flag from `gdal.Open()`.
    - `test_case_by_hydro_id.py`: add start/end timestamps, a run-duration summary, and more detailed progress logging.

<br/><br/>

## v4.0.19.4 - 2023-01-25 - [PR#802](https://github.com/NOAA-OWP/inundation-mapping/pull/802)

This revision includes a slight alteration to the filtering technique used to trim/remove lakeid nwm_reaches that exist at the upstream end of each branch network. By keeping a single lakeid reach at the branch level, we can avoid issues with the branch headwater point starting at a lake boundary. This ensures the headwater catchments for some branches are properly identified as a lake catchment (no inundation produced).
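The failure described in the changelog entry above is a standard Python scoping pitfall rather than anything GDAL-specific: a name first bound inside a loop body stays unbound whenever the loop body never runs (empty input, or every iteration being skipped), so a later reference raises `UnboundLocalError`. A minimal sketch of the pattern and of the fix applied here (illustrative code only, not the project's functions):

```python
def summarize_buggy(groups):
    for group in groups:
        stats = []                 # only bound if the loop body executes
        stats.append(len(group))
    return stats                   # UnboundLocalError when 'groups' is empty

def summarize_fixed(groups):
    stats = []                     # bound once, before the loop (the PR's fix)
    for group in groups:
        stats.append(len(group))
    return stats

print(summarize_fixed([]))         # []
print(summarize_buggy([]))         # local variable 'stats' referenced before assignment
```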
6 changes: 4 additions & 2 deletions tools/pixel_counter.py
@@ -118,6 +118,8 @@ def bbox_to_pixel_offsets(gt, bbox):
# Main function that determines zonal statistics of raster classes in a polygon area
def zonal_stats(vector_path, raster_path_dict, nodata_value=None, global_src_extent=False):

+ stats = []

# Loop through different raster paths in the raster_path_dict and
# perform zonal statistics on the files.
for layer in raster_path_dict:
@@ -130,7 +132,7 @@ def zonal_stats(vector_path, raster_path_dict, nodata_value=None, global_src_extent=False):


# Opens raster file and sets path
- rds = gdal.Open(raster_path, GA_ReadOnly)
+ rds = gdal.Open(raster_path)

assert rds
rb = rds.GetRasterBand(1)
@@ -174,7 +176,7 @@ def zonal_stats(vector_path, raster_path_dict, nodata_value=None, global_src_extent=False):

# Loop through vectors, as many as exist in file
# Creates new list to contain their stats
- stats = []

feat = vlyr.GetNextFeature()
while feat is not None:

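Taken together, the two hunks above move the `stats` accumulator to the top of `zonal_stats()` and drop the explicit access flag from `gdal.Open()`; read-only is already GDAL's default open mode, so the behavior is unchanged and the code no longer depends on the bare `GA_ReadOnly` name being available. A condensed sketch of the corrected control flow, with the per-feature statistics reduced to a placeholder and the empty-path skip shown only to illustrate how an iteration could bypass the old in-loop assignment:

```python
from osgeo import gdal, ogr


def zonal_stats(vector_path, raster_path_dict, nodata_value=None, global_src_extent=False):
    stats = []  # accumulates results across all raster layers; bound before any loop

    for layer in raster_path_dict:
        raster_path = raster_path_dict[layer]
        if not raster_path:
            continue                      # a skipped layer can no longer leave 'stats' unbound

        rds = gdal.Open(raster_path)      # default access mode is read-only
        assert rds is not None, f"could not open {raster_path}"
        rb = rds.GetRasterBand(1)

        vds = ogr.Open(vector_path)
        vlyr = vds.GetLayer(0)

        # Loop through vectors, as many as exist in the file
        feat = vlyr.GetNextFeature()
        while feat is not None:
            # ... compute zonal statistics for this feature against 'rb' ...
            stats.append({"layer": layer, "fid": feat.GetFID()})
            feat = vlyr.GetNextFeature()

    return stats
```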
26 changes: 23 additions & 3 deletions tools/test_case_by_hydro_id.py
@@ -6,6 +6,8 @@
import pandas as pd
import geopandas as gpd
import argparse
+ from datetime import datetime

from pixel_counter import zonal_stats
from tools_shared_functions import compute_stats_from_contingency_table
from run_test_case import test_case
@@ -130,7 +132,7 @@ def assemble_hydro_alpha_for_single_huc(stats,huc8,mag,bench):
help='The fim version to use. Should be similar to fim_3_0_24_14_ms',
required=True)
parser.add_argument('-g', '--gpkg',
- help='filepath and filename to hold exported gpkg (and csv) file. Similar to /data/path/output.gpkg Need to use gpkg as output. ',
+ help='filepath and filename to hold exported gpkg (and csv) file. Similar to /data/path/fim_performance_catchments.gpkg Need to use gpkg as output. ',
required=True)


@@ -139,6 +141,13 @@ def assemble_hydro_alpha_for_single_huc(stats,huc8,mag,bench):
benchmark_category = args['benchmark_category']
version = args['version']
csv = args['gpkg']

print("================================")
print("Start test_case_by_hydroid.py")
start_time = datetime.now()
dt_string = datetime.now().strftime("%m/%d/%Y %H:%M:%S")
print (f"started: {dt_string}")
print()

# Execution code
csv_output = gpd.GeoDataFrame(columns=['HydroID', 'huc8','contingency_tot_area_km2',
@@ -162,7 +171,7 @@ def assemble_hydro_alpha_for_single_huc(stats,huc8,mag,bench):

for agree_rast in agreement_dict:

- print('performing_zonal_stats')
+ print(f'performing_zonal_stats for {agree_rast}')

branches_dir = os.path.join(test_case_class.fim_dir,'branches')
for branches in os.listdir(branches_dir):
@@ -216,6 +225,17 @@ def assemble_hydro_alpha_for_single_huc(stats,huc8,mag,bench):
print('writing_to_csv')
csv_output.to_csv(csv_path_dot) # Save to CSV

print("================================")
print("End test_case_by_hydroid.py")

end_time = datetime.now()
dt_string = datetime.now().strftime("%m/%d/%Y %H:%M:%S")
print (f"ended: {dt_string}")

# calculate duration
time_duration = end_time - start_time
print(f"Duration: {str(time_duration).split('.')[0]}")
print()
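One small idiom in the new logging is worth noting: subtracting two `datetime` objects yields a `timedelta`, whose `str()` form looks like `H:MM:SS.microseconds`, so splitting on the first `.` is a cheap way to report the duration without microseconds. A standalone illustration with made-up timestamps:

```python
from datetime import datetime, timedelta

start_time = datetime(2023, 1, 24, 9, 0, 0)
end_time = start_time + timedelta(hours=1, minutes=23, seconds=45, microseconds=617000)

time_duration = end_time - start_time
print(str(time_duration))                  # 1:23:45.617000
print(str(time_duration).split('.')[0])    # 1:23:45  (what the script logs as "Duration")
```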