
QAQC implementation for Water Storage #697

Open
jenniferRapp opened this issue Feb 2, 2021 · 41 comments

@jenniferRapp

jenniferRapp commented Feb 2, 2021

Use the code that Lindsay and others have developed as a starting point to provide QAQC of the modeled variables that contribute to the water storage mapper. Brainstorm ways that we could simply provide a visual interface for modelers to see the values of individual parameters and highlight HRUs that are 'out of bounds' in some way. Make comparisons with authoritative datasets from other sources that can place the model output into context.

@jenniferRapp
Author

jenniferRapp commented Feb 2, 2021

QAQC of the modeled variables that contribute to the water storage mapper.

  • Brainstorm ways that we could simply provide a visual interface for modelers to see the values of individual parameters and highlight HRUs that are 'out of bounds' in some way. % change from the previous model run?

  • Provide a written report of HRUs and variables that are out of bounds in relation to the quantiles for the variable.

possible future activities:

  • Make comparisons with authoritative datasets from other sources that can place the model output into context.
  • Work with modelers to identify those author datasets and determine how we can make comparisons. Is it just making a layer available within a mapper?

@jenniferRapp
Author

@mhines-usgs @mwernimont @lindsayplatt Can you all do a little brainstorming to help me understand what you think would be possible for these diagnostic tools?

@mhines-usgs
Contributor

mhines-usgs commented Feb 3, 2021

initial thoughts on what we would need to understand to create a tool that looks at the data beyond the simple threshold checks or known-bad-hru validations we currently have:
can the modelers identify/provide us with 1) which values 'might go wrong', and 2) easy-to-implement guard-rail values for every known or potential problem data type, for us to compare with incoming model results?

with those identified and organized, I think we can create visuals or summaries really easily of how the current daily data check out against those known facts.

I think it would be easy to create some sharp, nightly-updating visuals in Tableau with the output from our data processing, as compared to the provided guard rails or known thresholds.

@lindsayplatt
Contributor

In addition to Megan's comments above, I would offer the following:

  1. What are the expected "real-world" limits that can be applied to each data type (temperature, snowpack, soil moisture, etc.)? We have a rough check of 10,000 mm, but that is just a "we know this means something is REALLY wrong" check.
  2. How do they troubleshoot issues? I imagine that they would want to know the HRU ID or the segment ID where the bad values were detected and the actual bad values. Would be useful to know their process for troubleshooting so that we could build something with all the info they need.
  3. I think a tool that allows you to see the full viz (so what would be pushed out to prod) for water storage and water temp, so that you can see general patterns and explore the data, would be awesome. Then, we add an additional feature where you can filter the spatial view to just those that violated some of those "real-world" data checks.

@jenniferRapp
Author

jenniferRapp commented Feb 10, 2021

A few comments from Rich McDonald: I think the key here is to have a modern (in format and accessibility) archive of historical and daily operational output that is easily available through an api. This would enable easy development of various dashboards as well as aid the development of tools to compare simulated and measured data.

Regarding input data checks. For both input and output, it would be great to have the complete (historical and current operational) output available, either on denali or via THREDDS (currently the NHGF project is working on this). If the data were easily accessible then the input climate forcings could be compared to the historical record and flagged if within some percentile of the historical record.

@jenniferRapp
Author

Maintaining an archive of the full modeled data sounds like it should happen in THREDDS or similar. What do you think about working with data coming out of THREDDS?

@mhines-usgs
Contributor

It sounds like the data modelers are in need of data management tools and processes more than data validation, doesn't it? Is that something we want to be involved in? I personally feel like data management is something to stay away from (like GCMRC AHHHHHH, RUNNNN)

If they want their model data to go into a THREDDS server, that definitely seems possible (I'm fairly sure Tim Kern had asked Ivan to set up a THREDDS server on an EC2 over a year ago or more... maybe it was for this?) but I don't think any of us on your current team have experience managing a THREDDS server nor putting data into it in a programmatic way. Once data are in THREDDS though, THREDDS offers an interface that maybe you could consider an API?

@jenniferRapp
Author

I think that the modelers are working with HiTest to get the data into THREDDS. I would just suggest that we work with the output and develop an interface that would highlight useful characteristics of the data, recent or historical.

@jenniferRapp jenniferRapp reopened this Mar 11, 2021
@jenniferRapp
Author

@mhines-usgs @lindsayplatt
I'm hoping Megan can work on the quantile evaluations. Nicole said she could free up some of your time from the SD project if needed.

Refine the numeric values that our processes use to check against the model output. Alert folks about the individual values of the 6 variables.

  • The output of those checks provides tables of the values and HRUs in question that Steve could examine in his Jupyter notebooks. Identify ways to link to the S3 output table within the email warning to make it more accessible.

  • Calculate one set of quantiles per variable for all of CONUS.

  • Report out HRUs that have values 150% of the max value/90th value. [this may be somewhat iterative as I'm concerned that generalizing across the US will miss some HRUs that typically have very low maximum values for a variable.] I would like to take a look at the quantiles to understand the range of values that exists for each variable. Could we run the code against some of the days of data that were impacted in the Winter?
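The flagging rule described in these bullets could be sketched roughly as follows. The HRU IDs and values are hypothetical, and the `percentile` helper is a plain linear-interpolation stand-in for whatever the production quantile scripts compute; this is an illustration of the rule, not the pipeline code:

```python
# Sketch of "report out HRUs that have values above 150% of the 90th value".
# Hypothetical data; the real check runs on ~40 years of per-HRU model output.
def percentile(values, p):
    """Linear-interpolation percentile, p in [0, 100]."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

def flag_hrus(historic, today, p=90, multiplier=1.5):
    """historic: {hru_id: [historic values]}, today: {hru_id: daily value}.
    Returns {hru_id: (value, threshold)} for HRUs whose daily value
    exceeds multiplier * (pth percentile of that HRU's history)."""
    flagged = {}
    for hru, value in today.items():
        threshold = multiplier * percentile(historic[hru], p)
        if value > threshold:
            flagged[hru] = (value, threshold)
    return flagged

historic = {"hru_001": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
            "hru_002": [10, 20, 30, 40, 50]}
today = {"hru_001": 20.0, "hru_002": 45.0}
print(flag_hrus(historic, today))
```

Because the quantiles are per HRU, an HRU with typically low values gets a correspondingly low threshold, which partly addresses the concern above about generalizing across the US.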

@mhines-usgs
Contributor

I'm happy to try, I may beg for some Lindsay help if I get stuck, I'm not a big data whiz like her team!

@lindsayplatt
Contributor

Happy to help as needed and do peer code reviews, too

@mhines-usgs
Contributor

I've generated quantiles for each variable using the historic data from Steve that we had used to generate the latest set of total storage quantiles, plus the same scripts and slurm files on yeti, but operating on only one variable at a time. The quantile outputs are pretty large overall (6 rds files between 80MB and 3.6GB) and are being uploaded into s3, but I am not sure if you have access there, Jen? If not, we'll have to figure out another way for you to access them if you want to review them. I will start writing some comparison tests to include in our pipeline using your guidelines (values 150% of max/90th quantile) Monday. The quantiles in s3 are being uploaded to: s3://prod-owi-resources/resources/Application/wbeep/model_output/test/variables/

@mhines-usgs
Contributor

@lindsayplatt does that seem like a "legal use" of the existing scripts to generate quantiles for each variable?

@lindsayplatt
Contributor

Yes, I think so. These are adding additional "validation" steps, so it makes sense to me

@mhines-usgs
Contributor

@jenniferRapp can you confirm the comparisons you're looking for? I may be interpreting it wrong right now, but it won't be hard to fix.

Using the scripts, we have calculated quantile values for each variable, for each hru, between 0-100 at each 5% increment. I then took the 90th% value you mentioned and multiplied it by 150% to get a new max value, which I compare the daily value to in order to determine whether it's flagged. Seeing your new comment in Teams, I think I'm interpreting that wrong. Do you mean two different comparisons? e.g., check whether today's value > 90th% value, then check whether today's value > max value? (But what max value are you referring to? Just the max for the day of year for the hru, or the max hru value regardless of day of year?)
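The quantile scheme described here (values at each 5% increment from 0 to 100 per HRU, with the 90th value then scaled by 1.5) could be sketched like this. The data and the `percentile` helper are illustrative stand-ins, not the actual yeti scripts:

```python
# Sketch of the per-HRU quantile table: quantiles from 0 to 100 at 5%
# increments, from which the 90th value is pulled and scaled by 1.5.
# Illustrative only; the real tables come from ~40 years of daily output.
def percentile(values, p):
    """Linear-interpolation percentile, p in [0, 100]."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

def quantile_table(values, step=5):
    """Quantiles at each `step` percent from 0 to 100 inclusive."""
    return {p: percentile(values, p) for p in range(0, 101, step)}

historic = list(range(1, 21))        # stand-in for one HRU's history
table = quantile_table(historic)
flag_threshold = table[90] * 1.5     # the comparison value for daily data
print(table[0], table[50], table[90], flag_threshold)
```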

@jenniferRapp
Author

Megan, you are doing it the way I imagined. I was just also curious if the 'raw' max values were very different from the 90th percentile. I tend to think the 150% of the 90th will be sufficient.

@mhines-usgs
Contributor

here are outputs from my current comparisons using the newly calculated quantiles for each variable + calculated max values (1.5*90th) for what we know was a bad day on 2020-12-20

2020-12-20_daily_vs_historic_comparison.zip

each variable gets a csv output with the following columns: hruid, 90%, max_value, today

  • hruid: the HRU identifier
  • 90%: the 90th-percentile value from the calculated historic variable-specific quantiles
  • max_value: the 90% value multiplied by 1.5
  • today: the value from today's daily model output, compared against max_value

only rows where the today value is larger than the max_value are included in the output
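A minimal sketch of producing that csv, with made-up values (the real files are generated per variable by the pipeline; only the column names and the keep-only-flagged-rows rule come from the comment above):

```python
# Sketch of the per-variable csv output: columns hruid, 90%, max_value,
# today, keeping only rows where today's value exceeds max_value
# (= 1.5 x the 90th-percentile value). Data here are hypothetical.
import csv
import io

rows = [
    {"hruid": "hru_001", "90%": 9.1,  "today": 20.0},   # exceeds 1.5 * 9.1
    {"hruid": "hru_002", "90%": 46.0, "today": 45.0},   # within bounds
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["hruid", "90%", "max_value", "today"])
writer.writeheader()
for row in rows:
    max_value = row["90%"] * 1.5
    if row["today"] > max_value:        # only flagged rows are kept
        writer.writerow({**row, "max_value": max_value})

print(buf.getvalue())
```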

@jenniferRapp
Author

jenniferRapp commented Mar 15, 2021 via email

@mhines-usgs
Contributor

no rush on my part! i have other work to fall back on.

(presuming this is after they fixed the bug) I grabbed the most recent data that went through the pipeline for 2021-03-13; here are those outputs as well. I don't think we can get much more seasonal variation, given that this version of the model has only been running since December (with a fix pushed out somewhere in there, though I'm not clear on exactly when). So I only have daily data between those dates (Dec 18th or so until 'today') to run through and compare with the historical.
2021-03-13_daily_vs_historic_comparison.zip

@jenniferRapp
Author

Really interesting. The HRUs that are > max_value are different for each variable for the Dec 20th date. I think we might need some check for zero values. I'm still poking around.

@jenniferRapp
Author

There are quite a few HRUs that seem above the max_value today. Many of the daily values are close to the max_value though. Since I don't know what the actual maximum values are for each day it's hard to say whether these might be reasonable predictions and just a little higher than the long term data. I'm leaning toward running this with the 95th quantile to see if we get a lot fewer HRUs returned? Some of the PKWater_equiv HRUs still have pretty outlandish looking snow pack values compared to the 90th percentile, but they are improved compared to the December data. We also have to remember that the majority of the HRUs passed this test and were not returned.

@mhines-usgs
Contributor

I can do a comparison against the max HRU value (regardless of day of year), and separately modify and post results for 95th quantile. in that case, do we just compare the value of 95th with today's value? (No multiplier like 90th*1.5?)

@jenniferRapp
Author

I would still use the multiplier 1.5 with the 95th percentile.

@mhines-usgs
Contributor

Here are comparisons for the two dates using the 95th instead of 90th
2020-12-20_daily_vs_historic_comparison_95th.zip
2021-03-13_daily_vs_historic_comparison_95th.zip

will get the comparison with max attached shortly.

@mhines-usgs
Contributor

comparisons for those two dates against the max value for each variable for each HRU

maxes calculated using: this script on yeti
the maxes:
max_variables_hru.zip

2021-03-13_daily_vs_max_comparison.zip

2020-12-20_daily_vs_max_comparison.zip

@mhines-usgs
Contributor

summarizing what I see running the comparisons for daily values against max with some on-screen logging:

for 2020-12-20
There were 84 values above their highest max for soil_moist_tot.
There were 2 values above their highest max for hru_intcpstor.
There were 9 values above their highest max for pkwater_equiv.
There were 1 values above their highest max for hru_impervstor.
There were 2 values above their highest max for gwres_stor.
There were 0 values above their highest max for dprst_stor_hru.

for 2021-03-13
There were 93 values above their highest max for soil_moist_tot.
There were 7 values above their highest max for hru_intcpstor.
There were 2242 values above their highest max for pkwater_equiv.
There were 1 values above their highest max for hru_impervstor.
There were 195 values above their highest max for gwres_stor.
There were 0 values above their highest max for dprst_stor_hru.
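The per-variable counts above come from a check like the following sketch (hypothetical data for two variables and two HRUs; the real comparison runs over every HRU in the model, using each HRU's historical max regardless of day of year):

```python
# Sketch of the daily-vs-historical-max check that produces the counts
# logged above. All values here are made up for illustration.
historic_max = {                 # {variable: {hru: max over all days/years}}
    "soil_moist_tot": {"hru_001": 100.0, "hru_002": 80.0},
    "pkwater_equiv":  {"hru_001": 500.0, "hru_002": 250.0},
}
today = {                        # {variable: {hru: today's model output}}
    "soil_moist_tot": {"hru_001": 120.0, "hru_002": 75.0},
    "pkwater_equiv":  {"hru_001": 480.0, "hru_002": 300.0},
}

counts = {}
for variable, maxes in historic_max.items():
    # count HRUs whose daily value is above that HRU's historical max
    counts[variable] = sum(
        1 for hru, value in today[variable].items() if value > maxes[hru]
    )
    print(f"There were {counts[variable]} values above their highest max "
          f"for {variable}.")
```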

@mhines-usgs
Contributor

I haven't merged it yet, but I added additional comparisons against each variable's calculated max value (of all time, regardless of day of year) to the current daily data validations, adjusted the final text log output so it's more complete for all the comparisons performed, and adjusted the pipeline to push all the comparison output (csvs and the text log) up to s3 into a tier-specific output folder. Eventually, if this seems like useful information, we could consider creating an html output instead of a text file, which would give us a place to put direct links to the csv files, embed the tabular summaries right within it, or create quick and dirty map renderings of which hrus are having issues, etc.

@jenniferRapp
Author

are you retaining the 90th or 95th comparisons? am going to look at the 95th now.

@mhines-usgs
Contributor

currently it's set in this pull request with the 95th! that and anything else can change!!

@jenniferRapp
Author

I think that sounds good, Megan. It will be interesting to have Jacob take a look and give us some feedback. Could we name the max_value "max_value150x95Q" or something like that? or even max_value150xQ. I am looking for a way to express units and be explicit for what the columns contain.

@mhines-usgs
Contributor

sure, I went with max_value150x95Q to be most explicit, and updated the language in the text log too.

@jenniferRapp
Author

Can we run it for December and March and I will share with Jacob. Or you can direct him to the files? What is the best way to get a review of the output and proposed process?

@mhines-usgs
Contributor

sure I can re-run it for him with the updated column names and post those files to where they would be going if we were running it and pushing out to s3. once we merge it i could share the email example too.

@mhines-usgs
Contributor

here are the output files in s3 for 2020-12-20:

Test result summary:
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/order_of_magnitude_test_2020-12-20.txt

Review model output comparison outputs:
Daily values exceeding historical quantiles 95th%x1.5:
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/dprst_stor_hru_higher_than_max_2020-12-20.csv
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/soil_moist_tot_higher_than_max_2020-12-20.csv
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/hru_intcpstor_higher_than_max_2020-12-20.csv
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/pkwater_equiv_higher_than_max_2020-12-20.csv
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/hru_impervstor_higher_than_max_2020-12-20.csv
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/gwres_stor_higher_than_max_2020-12-20.csv

Daily values exceeding historical max:
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/dprst_stor_hru_higher_than_ever_2020-12-20.csv
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/soil_moist_tot_higher_than_ever_2020-12-20.csv
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/hru_intcpstor_higher_than_ever_2020-12-20.csv
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/pkwater_equiv_higher_than_ever_2020-12-20.csv
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/hru_impervstor_higher_than_ever_2020-12-20.csv
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/gwres_stor_higher_than_ever_2020-12-20.csv

For 2021-03-13:

Test result summary:
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/order_of_magnitude_test_2021-03-13.txt

Review model output comparison outputs:
Daily values exceeding historical quantiles 95th%x1.5:
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/dprst_stor_hru_higher_than_max_2021-03-13.csv
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/soil_moist_tot_higher_than_max_2021-03-13.csv
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/hru_intcpstor_higher_than_max_2021-03-13.csv
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/pkwater_equiv_higher_than_max_2021-03-13.csv
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/hru_impervstor_higher_than_max_2021-03-13.csv
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/gwres_stor_higher_than_max_2021-03-13.csv

Daily values exceeding historical max:
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/dprst_stor_hru_higher_than_ever_2021-03-13.csv
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/soil_moist_tot_higher_than_ever_2021-03-13.csv
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/hru_intcpstor_higher_than_ever_2021-03-13.csv
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/pkwater_equiv_higher_than_ever_2021-03-13.csv
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/hru_impervstor_higher_than_ever_2021-03-13.csv
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/gwres_stor_higher_than_ever_2021-03-13.csv

@jenniferRapp
Author

@mhines-usgs I don't have access to those links.

@mhines-usgs
Contributor

are you on VPN? the wbeep-test site is internal only

@jenniferRapp
Author

works now.

@jenniferRapp
Author

Why do you think soil moisture has more values greater than the max value ever (84) than the number of HRUs greater than 150X95Q (36)? Also, the max-value-ever file for soils has the HRU column in the second position instead of the first.

Is the max value ever defined as the maximum 'soil moisture' value measured across all days and years for a given HRU, or as the maximum 'soil moisture' value measured across all days, years, and HRUs (anywhere across the country)?

@mhines-usgs
Contributor

I think if you look closely at the numbers in the csv for higher-than-ever, they are almost identical, just slightly higher than the historical max, which is why they are flagged as greater.

[screenshot comparing csv values]

The max value ever is the maximum soil moisture value for any day and year for a given HRU. It is HRU-specific: just the maximum value that appeared in the whole 40/41-year historical dataset.

@mhines-usgs
Contributor

Jacob liked the idea, but also wanted to see the data on maps, here's a quick and dirty first attempt
usgs-makerspace/wbeep-processing#186
with the output that I ran locally here in s3 for 2021-03-13 as an example
https://wbeep-test-website.s3-us-west-2.amazonaws.com/estimated-availability/date/test_results/order_of_magnitude_test_2021-03-13.html
