Authors:
Owners: This resource does not have an owner who is an active HydroShare user. Contact CUAHSI (help@cuahsi.org) for information on this resource.
Type: Resource
Storage: 2.0 MB
Created: Sep 06, 2017 at 8:29 p.m.
Last updated: Dec 04, 2018 at 9:09 p.m.
Citation: See how to cite this resource
Sharing Status: Public
Views: 2398
Downloads: 117
+1 Votes: 1 other +1 this
Comments: No comments (yet)
Abstract
These Jupyter notebooks demonstrate the workflow for obtaining and processing gridded meteorology data files with the Observatory for Gridded Hydrometeorology (OGH) Python library.
Using the Sauk-Suiattle, Elwha, and Upper Rio Salado watersheds as the study sites of interest, each notebook guides the user through assembling the datasets and analyses from each of seven gridded data products.
In Usecase 1, users can inspect their study site of interest through summary spatial visualizations. The treatgeoself() function generates a mapping file for each study site, reducing the gridded cell centroids to the subset that intersects the study area (i.e., within the watershed). Within treatgeoself(), the user can set how much buffer space to include around the study site (the default is a 0.06-degree buffer).
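As a minimal sketch of this step, the call below assumes the ogh package is importable and that treatgeoself() accepts a watershed shapefile, an output mapping-file name, and a buffer distance; the parameter names and the shapefile path are illustrative rather than confirmed against the library's exact signature.

```python
# Minimal sketch of the Usecase 1 mapping-file step.
# Assumptions: the parameter names (shapefile, outfilename, buffer_distance)
# and the input path are illustrative; check the ogh documentation for the
# exact signature.
import ogh

# Watershed boundary for one study site (placeholder path).
watershed_shapefile = 'sauk_suiattle_watershed.shp'

# Reduce the gridded cell centroids to those intersecting the watershed,
# padded with the stated default 0.06-degree buffer.
mappingfile = ogh.treatgeoself(shapefile=watershed_shapefile,
                               outfilename='Sauk_mappingfile.csv',
                               buffer_distance=0.06)
```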
In Usecase 2, each mapping file is used to guide data retrieval from each of the gridded data products. A series of _get_ functions downloads the files to designated subfolders, and the resulting file paths are catalogued in the mapping file, which can then be summarized for data availability along the elevation gradient using the mappingfileSummary() function. The downloaded files are compressed into tar.gz archives and migrated, together with their respective mapping files, as content files into a new HydroShare resource for ease of collaborative use.
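A sketch of the Usecase 2 retrieval and packaging steps follows. The downloader name getDailyMET_livneh2013() and all argument names are assumptions standing in for the per-product _get_ functions described above, the mappingfileSummary() arguments are likewise illustrative, and the archive step uses the Python standard library rather than any OGH helper.

```python
# Sketch of the Usecase 2 retrieval, summary, and packaging steps.
# The get-function name, argument names, and folder names are assumptions.
import shutil
import ogh

mappingfile = 'Sauk_mappingfile.csv'

# Download one gridded data product into a designated subfolder; the resulting
# file paths are catalogued back into the mapping file.
ogh.getDailyMET_livneh2013(homedir='.', mappingfile=mappingfile)

# Summarize data availability along the elevation gradient.
summary = ogh.mappingfileSummary(listofmappingfiles=[mappingfile],
                                 listofwatershednames=['Sauk-Suiattle'])
print(summary)

# Compress the downloaded files for migration into a new HydroShare resource
# (the subfolder name is a placeholder).
shutil.make_archive('livneh2013_Sauk', 'gztar', root_dir='livneh2013')
```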
In Usecase 3, the files downloaded in Usecase 2 are processed into spatial and temporal summary statistics. The gridclim_dict() function compiles and computes daily, monthly, annual, and monthly-yearly average values for each variable described in the gridded data product metadata (e.g., the ogh_meta class dictionary). Monthly averages are then visualized as time-series plots, and spatial averages as heatmaps. Finally, the dictionary of dataframes produced by these spatial-temporal analyses is saved to a JSON file and migrated as a content file into a new HydroShare resource.
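The sketch below assumes gridclim_dict() takes the mapping file, the ogh_meta metadata dictionary, and a dataset key, and that the resulting dictionary of dataframes can be serialized to JSON by hand; the argument names, the dataset key, and the serialization approach are assumptions, not the notebook's exact code.

```python
# Sketch of the Usecase 3 summary-statistics step.
# Argument names and the dataset key are assumptions; the JSON serialization
# below is a generic stand-in for however the notebook writes its output file.
import json
import ogh

meta = ogh.ogh_meta()  # gridded data product metadata (assumed accessor)

# Compile daily, monthly, annual, and monthly-yearly averages per variable.
ldict = ogh.gridclim_dict(mappingfile='Sauk_mappingfile.csv',
                          metadata=meta,
                          dataset='dailymet_livneh2013')

# Persist the dictionary of dataframes for upload to a new HydroShare resource.
serializable = {name: df.to_json() for name, df in ldict.items()}
with open('Sauk_analysis.json', 'w') as f:
    json.dump(serializable, f)
```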
Subject Keywords
Content
Credits
Funding Agencies
This resource was created using funding from the following sources:
Agency Name | Award Title | Award Number
---|---|---
Bureau of Indian Affairs | |
Contributors
People or organizations that contributed technically, materially, or financially, or that provided general support for the creation of the resource's content but are not considered authors.
Name | Organization | Address | Phone | Author Identifiers
---|---|---|---|---
Sauk-Suiattle Indian Tribe | | | |
Skagit Climate Consortium | | | |
How to Cite
This resource is shared under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
http://creativecommons.org/licenses/by/4.0/
Comments
There are currently no comments