Ayman Nassar

Utah State University

Subject Areas: Hydroinformatics, Cyberinfrastructure, Hydrologic Modeling, Remote Sensing, GIS, Physical Hydrology, Machine Learning


 Contact

Mobile 4358905557
Resources
Resource
Progress report
Created: Oct. 26, 2017, 3:35 a.m.
Authors: Ayman Nassar

ABSTRACT:

This is the progress report of Ayman Nassar.

Resource
Soil_properties_netcdf
Created: Aug. 31, 2021, 9:08 p.m.
Authors: Nassar, Ayman

ABSTRACT:

Soil Properties

Resource
FlowFromSnow-Sciunit
Created: Nov. 29, 2021, 10:20 p.m.
Authors: Nassar, Ayman · Tarboton, David · Ahmad, Raza

ABSTRACT:

This resource illustrates how data and code can be combined to support hydrologic analyses. It was developed in June 2020 as part of a HydroLearn Hackathon.

Resource
Sciunit simple demo
Created: Dec. 17, 2021, 12:35 a.m.
Authors: Tarboton, David

ABSTRACT:

Sciunit simple demo

Resource
Sciunit testing
Created: Jan. 18, 2022, 12:38 a.m.
Authors: Tarboton, David

ABSTRACT:

Illustration of the general idea of a use case for a sciunit container.
1. User creates a sciunit (sciunit create Project1)
2. User initiates interactive capturing (sciunit exec -i)
3. User does their work. For now, assume this is a series of shell commands.
4. User saves or copies the sciunit.
5. User opens the sciunit on a new computer and can re-execute the commands exactly as they would have on the old computer, from the command line, from bash shell escapes, or from Python in Jupyter.
6. User sees a list of the commands captured in the sciunit and can edit them to reproduce the work.

The setup.
On CUAHSI JupyterHub the user has a resource (the one above) with some code that is a simple example for modeling the relationship between streamflow and snow.

There is a Python "dependency", GetDatafunctions.py, in a folder on CUAHSI JupyterHub. This is not part of the directory where the user is working. It is added to the Python path for the programs to execute. This is a simple example of a dependency the user may not be fully aware of (e.g., if it is part of the CUAHSI JupyterHub platform but not part of other platforms).

An export PYTHONPATH command is used to add the dependency to the Python path.

Then the analysis is illustrated outside of sciunit.

Then sciunit is installed and the analysis repeated using sciunit exec.

Finally, sciunit copy copies the sciunit to the sciunit repository.

Then, on a new computer:

sciunit open retrieves the sciunit.

After repeating one of the executions, the sciunit directory has the dependency container unpacked.

Setting the PYTHONPATH to the unpacked dependency allows the commands to be run on the new computer, just as they were on the old computer.

This is the vision: running on the new computer with dependencies from the old computer resolved.

We would like the dependencies to be "installed" on the new computer so that they work with Jupyter and with Jupyter bash escape commands.

Everything is done from the command line; the Jupyter Notebook is just used as a convenient notepad.
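
As a rough sketch of the command sequence described above (the project name, script name, and dependency path are placeholders, and sciunit must already be installed; this is not the exact set of steps in this resource):

```python
import os
import subprocess

# Placeholder folder containing GetDatafunctions.py; added to PYTHONPATH so the
# analysis script can import it (mirrors the "export PYTHONPATH" step above).
env = dict(os.environ, PYTHONPATH="/path/to/dependency_folder")

# Create a sciunit project and capture one execution of a (placeholder) analysis script.
subprocess.run(["sciunit", "create", "Project1"], check=True)
subprocess.run(["sciunit", "exec", "python", "flow_from_snow_analysis.py"], env=env, check=True)

# Push the captured sciunit to the sciunit repository.
subprocess.run(["sciunit", "copy"], check=True)

# On the new computer: retrieve the sciunit and repeat the first captured execution.
# subprocess.run(["sciunit", "open", "<token-from-copy>"], check=True)
# subprocess.run(["sciunit", "repeat", "e1"], check=True)
```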

Resource

ABSTRACT:

This notebook demonstrates how to prepare a WRFHydro model on CyberGIS-Jupyter for Water (CJW) for execution on a supported High-Performance Computing (HPC) resource via the CyberGIS-Compute service. First-time users are highly encouraged to go through the [NCAR WRFHydro Hands-on Training on CJW](https://www.hydroshare.org/resource/d2c6618090f34ee898e005969b99cf90/) to get familiar with WRFHydro model basics, including compilation of the source code, preparation of forcing data, and typical model configurations. This notebook will not cover those topics and assumes users already have hands-on experience with local model runs.

CyberGIS-Compute is a CyberGIS-enabled web service that sits between CJW and HPC resources. It acts as a middleman that takes user requests (e.g., submission of a model) originating from CJW, carries out the actual submission of the model job on the target HPC resource, monitors job status, and retrieves outputs when the model execution has completed. The functionality of CyberGIS-Compute is exposed as a series of REST APIs. A Python client, the [CyberGIS-Compute SDK](https://github.com/cybergis/cybergis-compute-python-sdk), has been developed for use in the CJW environment and provides a simple GUI to guide users through the job submission process. Prior to job submission, model configuration and input data should be prepared and arranged in a way that meets specific requirements, which vary by model and its implementation in CyberGIS-Compute. We walk through the requirements for WRFHydro below.
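
As a minimal sketch of how the SDK's GUI is typically launched from a notebook cell on CJW (the service URL and constructor options shown here are assumptions and may differ from what this notebook actually uses):

```python
# Assumed usage of the CyberGIS-Compute Python SDK; verify the service URL and
# options against the notebook itself before relying on them.
from cybergis_compute_client import CyberGISCompute

cybergis = CyberGISCompute(url="cgjobsup.cigi.illinois.edu", isJupyter=True,
                           protocol="HTTPS", port=443)
cybergis.show_ui()  # opens the interactive GUI used to configure and submit the job
```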

The general workflow for WRFHydro in CyberGIS-Compute works as follows:

1. User picks a Model_Version of WRFHydro to use;
2. User prepares configuration files and data for the model on CJW;
3. User submits configuration files and data to CyberGIS-Compute;
4. CyberGIS-Compute transfers configuration files and data to target HPC;
5. CyberGIS-Compute downloads the chosen Model_Version of the WRFHydro codebase on HPC;
6. CyberGIS-Compute applies compile-time configuration files to the codebase, and compiles the source code on the fly;
7. CyberGIS-Compute applies run-time configuration files and data to the model;
8. CyberGIS-Compute submits the model job to HPC scheduler for model execution;
9. CyberGIS-Compute monitors job status;
10. CyberGIS-Compute transfers model outputs from HPC to CJW upon user request;
11. User performs post-processing work on CJW;

Some steps in this notebook require user interaction. "Run cell by cell" is recommended. "Run All" may not work as expected.

How to run the notebook:
1) Click on the "Open With" button in the upper-right corner;
2) Select "CyberGIS-Jupyter for Water";
3) Open the notebook and follow the instructions.

Resource
NWM_IGUIDE_DEMO
Created: Aug. 31, 2022, 3:43 p.m.
Authors: Nassar, Ayman · Tarboton, David · Kalyanam, Rajesh · Li, Zhiyu/Drew · Baig, Furqan

ABSTRACT:

This notebook demonstrates the setup for a typical WRF-Hydro model on HydroShare, leveraging different tools and services throughout the entire end-to-end modeling workflow. The notebook is designed so that the user/modeler can retrieve datasets relevant only to a user-defined spatial domain, for example a watershed of interest, and time domain, using a graphical user interface (GUI) linked to HPC. To help users submit a job on HPC to run the model, they are provided with a user-friendly interface that abstracts away the details and complexities of HPC use, such as authorization, authentication, monitoring and scheduling of jobs, data and job management, and transferring data back and forth. Users can interact with this GUI to perform their modeling work. The GUI allows users to 1) select the remote server where the HPC job will run, 2) upload the simulation directory, which contains the configuration files, 3) specify the parameters of the HPC job that the user is allowed to utilize, 4) set parameters related to the model compilation, 5) follow up on the submitted job status, and 6) retrieve the model output files back to the local workspace. Once the model execution is complete, users can easily access the model outputs on HPC and retrieve them to the local workspace for visualization and analysis.

Resource
AORC Subsetting and Simulation (AWS vs. Cheyenne)
Created: Oct. 25, 2022, 3:52 p.m.
Authors: Nassar, Ayman

ABSTRACT:

This HydroShare resource was developed to compare two AORC data sources. We retrieved the old version of the AORC dataset from an Amazon S3 bucket, while the new version (AORC v1.1) was obtained from the Cheyenne supercomputer. The Jupyter Notebook includes a couple of functions that help in subsetting and manipulating netCDF datasets and are not restricted to the AORC data.
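
As a generic illustration of the kind of netCDF subsetting these functions perform (the file name, time window, and coordinate names below are placeholders, not necessarily the actual AORC conventions):

```python
import xarray as xr

# Open a local (placeholder) AORC-style netCDF file.
ds = xr.open_dataset("aorc_sample.nc")

# Subset by a time window and a latitude/longitude bounding box.
# Note: if latitude is stored in descending order, reverse the slice bounds.
subset = ds.sel(
    time=slice("2015-01-01", "2015-01-31"),
    latitude=slice(40.0, 42.0),
    longitude=slice(-112.5, -110.5),
)

# Write the subset out to a new netCDF file.
subset.to_netcdf("aorc_subset.nc")
```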

Resource
WBD HUC12 - Great Basin
Created: Feb. 8, 2023, 7:08 p.m.
Authors: Castronova, Anthony M.

ABSTRACT:

These are the HUC 12 watershed boundaries for the Great Basin.

Resource
FLINC_DEMO_May_08_2023
Created: May 8, 2023, 6:40 p.m.
Authors: Nassar, Ayman

ABSTRACT:

This is a demo for FLINC using "FlowfromSnow" as an example.

Resource
CONUS_HUC12
Created: May 11, 2023, 7:35 p.m.
Authors: Nassar, Ayman

ABSTRACT:

This HydroShare resource includes a shapefile of HUC12 watershed boundaries for the entire CONUS.

Resource
WBD12
Created: May 30, 2023, 8:46 p.m.
Authors: Nassar, Ayman

ABSTRACT:

This HydroShare resource includes a shapefile of HUC12 for the entire CONUS.

Resource
AORC Subset
Created: Nov. 14, 2023, 5:40 p.m.
Authors: Nassar, Ayman · Tarboton, David · Castronova, Anthony M.

ABSTRACT:

The objective of this HydroShare resource is to query AORC v1.0 forcing data stored on HydroShare's THREDDS server and create a subset of this dataset for a designated watershed and timeframe. The user is prompted to define a temporal frame of interest, which specifies the start and end dates for the data subset, and a spatial frame of interest, which can be a bounding box or a shapefile, to subset the data spatially.

Before the subsetting is performed, the data are queried and geospatial metadata is added to ensure that the data are correctly aligned with the corresponding location on the Earth's surface. To achieve this, two separate notebooks were created, [this notebook](https://github.com/CUAHSI/notebook-examples/blob/main/thredds/query-aorc-thredds.ipynb) and [this notebook](https://github.com/CUAHSI/notebook-examples/blob/main/thredds/aorc-adding-spatial-metadata.ipynb), which explain in detail how to query the dataset and how to add geospatial metadata to AORC v1.0 data, respectively. In this notebook, we call functions from the AORC.py script to perform these preprocessing steps, resulting in a cleaner notebook that focuses solely on the subsetting process.
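
As a minimal sketch of opening THREDDS-hosted data with xarray via OPeNDAP (the endpoint URL and variable name below are placeholders; the real URLs and preprocessing are handled by the helper functions in AORC.py):

```python
import xarray as xr

# Placeholder OPeNDAP endpoint on the HydroShare THREDDS server.
url = "https://thredds.hydroshare.org/thredds/dodsC/aorc/example_file.nc"

# Lazily open the remote dataset; only the requested slices are transferred.
ds = xr.open_dataset(url)

# Temporal subset of an assumed precipitation variable; spatial subsetting by
# bounding box or shapefile mask follows once geospatial metadata is attached.
precip = ds["RAINRATE"].sel(time=slice("2016-06-01", "2016-06-30"))
```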

Resource

ABSTRACT:

This HydroShare resource provides Jupyter Notebooks with instructions and code for accessing and subsetting the NOAA Analysis of Record for Calibration (AORC) Dataset. There are two Jupyter Notebooks:
1. AORC_Point_Data_Retrieval.ipynb
2. AORC_Zone_Data_Retrieval.ipynb
The first retrieves data for a point within the covered US area, specified using geographic coordinates. The second retrieves data for areas specified via an uploaded polygon shapefile.
These notebooks programmatically retrieve the data from Amazon Web Services (https://registry.opendata.aws/noaa-nws-aorc/) and, in the case of shapefile data retrieval, average the data over the shapes in the given shapefile.
The notebooks provided are coded to retrieve data from AORC version 1.1 released in ZARR format in December 2023.
The Analysis of Record for Calibration (AORC) is a gridded record of near-surface weather conditions covering the continental United States and Alaska and their hydrologically contributing areas (https://registry.opendata.aws/noaa-nws-aorc/). It is defined on a latitude/longitude spatial grid with a mesh length of 30 arc seconds (~800 m) and a temporal resolution of one hour. Elements include hourly total precipitation, temperature, specific humidity, terrain-level pressure, downward longwave and shortwave radiation, and west-east and south-north wind components. It spans the period from 1979 across the Continental U.S. (CONUS) and from 1981 across Alaska, to the near-present (at all locations). This suite of eight variables is sufficient to drive most land-surface and hydrologic models and is used as input to the National Water Model (NWM) retrospective simulation. While the original NOAA process generated AORC data in netCDF format, the data have been post-processed to create a cloud-optimized, Zarr-formatted equivalent that NOAA also disseminates.
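
As a minimal sketch of the point-retrieval pattern against the cloud-hosted Zarr data (the store path, variable name, and coordinate names shown are assumptions; the notebooks themselves give the authoritative paths):

```python
import xarray as xr  # requires s3fs for anonymous S3 access

# Assumed layout: one Zarr store per year under the public AORC bucket.
store = "s3://noaa-nws-aorc-v1-1-zarr-x0.25/2020.zarr"
ds = xr.open_zarr(store, storage_options={"anon": True})

# Nearest-grid-cell hourly precipitation series at a point of interest
# (variable and coordinate names assumed; check the notebooks).
point = ds["APCP_surface"].sel(latitude=41.74, longitude=-111.83, method="nearest")
series = point.sel(time=slice("2020-06-01", "2020-06-30")).load()
```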

Resource

ABSTRACT:

This HydroShare resource provides Jupyter Notebooks with instructions and code for accessing and subsetting the NOAA National Water Model CONUS Retrospective Dataset. There are two Jupyter Notebooks:
1. NWM_output_variable_retrieval_with_FeatureID.ipynb
2. NWM_output_variable_retrieval_with_shapefile.ipynb
The first retrieves data for one point (feature ID). The second retrieves data for areas specified interactively or via an uploaded shapefile.
These notebooks programmatically retrieve the data from Amazon Web Services (https://registry.opendata.aws/nwm-archive/) and, in the case of zone data retrieval, average the data over the specified zones.
The notebooks provided are coded to retrieve data from NWM retrospective analysis version 3.0 released in ZARR format in December 2023.
The NOAA National Water Model Retrospective dataset contains input and output from multi-decade CONUS retrospective simulations (https://registry.opendata.aws/nwm-archive/). These simulations used meteorological input from retrospective data. The output frequency and fields available in this historical NWM dataset differ from those contained in the real-time operational NWM forecast model. Additionally, note that no streamflow or other data assimilation is performed within any of the NWM retrospective simulations.
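
As a minimal sketch of the feature-ID retrieval pattern against the cloud-hosted Zarr archive (the store path and the feature_id value are assumptions/placeholders; the notebooks give the authoritative paths):

```python
import xarray as xr  # requires s3fs for anonymous S3 access

# Assumed path to the NWM v3.0 retrospective CHRTOUT (streamflow) Zarr store.
store = "s3://noaa-nwm-retrospective-3-0-pds/CONUS/zarr/chrtout.zarr"
ds = xr.open_zarr(store, storage_options={"anon": True})

# Hourly streamflow series for a single river reach (placeholder feature_id).
q = ds["streamflow"].sel(feature_id=101, time=slice("1990-01-01", "1990-12-31")).load()
```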

Resource
HydroData Data Retrieval
Created: June 10, 2024, 6:44 p.m.
Authors: Nassar, Ayman · Tarboton, David

ABSTRACT:

The HydroData data catalog, the associated Python package hf_hydrodata, and the API are products of the HydroFrame project and are designed to provide easy access to a variety of gridded model input datasets and point observations, as well as to national hydrologic simulations generated using the National ParFlow model (ParFlow-CONUS1 and ParFlow-CONUS2).
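
As a minimal sketch of how gridded data are typically requested with hf_hydrodata (the function names, option keys, and values are assumptions drawn from the hf_hydrodata documentation and should be verified against it; the email/PIN are placeholders):

```python
from hf_hydrodata import register_api_pin, get_gridded_data

# One-time registration with the HydroFrame API (placeholder credentials).
register_api_pin("user@example.com", "1234")

# Assumed option keys/values following the HydroData catalog conventions.
options = {
    "dataset": "CW3E",
    "variable": "precipitation",
    "temporal_resolution": "hourly",
    "start_time": "2005-10-01",
    "end_time": "2005-10-02",
    "grid_bounds": [375, 239, 487, 329],  # (i_min, j_min, i_max, j_max) on the CONUS2 grid
}

data = get_gridded_data(options)  # returns a NumPy array for the requested subset
```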

Collection

ABSTRACT:

This HydroShare collection contains multiple Jupyter notebooks that enable users to retrieve data from different data sources.

Resource
HydroFabric Subsetter and Retrieval
Created: June 24, 2024, 7:08 p.m.
Authors: Nassar, Ayman · Tarboton, David

ABSTRACT:

This HydroShare resource is developed to subset and retrieve the HydroFabric dataset (Johnson, J. M. (2022), https://lynker-spatial.s3-us-west-2.amazonaws.com/copyright.html) needed to execute the NOAA Next Generation (NextGen) Water Resource Modeling framework. The NextGen hydrofabric describes the representation, discretization, and topology of the hydrologic landscape and drainage network as a three-part data product that includes: (1) catchment and flowpath features, (2) their connectivity, and (3) the attribute sets needed to execute models. For more details about the HydroFabric data, please visit this website: https://noaa-owp.github.io/hydrofabric/
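
As a rough sketch of the kind of geospatial subsetting involved (the GeoPackage file, layer names, and watershed boundary file below are placeholders; consult the hydrofabric documentation and the notebooks in this resource for the actual data layout):

```python
import geopandas as gpd

# Placeholder local copy of a NextGen hydrofabric GeoPackage.
gpkg = "nextgen_hydrofabric.gpkg"
divides = gpd.read_file(gpkg, layer="divides")      # catchment divides (assumed layer name)
flowpaths = gpd.read_file(gpkg, layer="flowpaths")  # flowpath network (assumed layer name)

# Subset to the features intersecting a watershed boundary of interest.
watershed = gpd.read_file("watershed_boundary.shp").to_crs(divides.crs)
boundary = watershed.geometry.union_all()  # use unary_union on older GeoPandas versions
subset_divides = divides[divides.intersects(boundary)]
subset_flowpaths = flowpaths[flowpaths.intersects(boundary)]
```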
