Authors:
Owners: This resource does not have an owner who is an active HydroShare user. Contact CUAHSI (help@cuahsi.org) for information on this resource.
Type: Resource
Storage: The size of this resource is 53.4 KB
Created: Jan 02, 2018 at 8:38 p.m.
Last updated: Jan 02, 2018 at 11:32 p.m.
Citation: See how to cite this resource
Sharing Status: Public
Views: 2239
Downloads: 62
+1 Votes: Be the first one to +1 this.
Comments: No comments (yet)
Abstract
This dataset includes R code, specifically using the WaterML package, to download water quality data from iUTAH GAMUT station sensors installed to monitor water quality and quantity along three montane-to-urban watersheds: Logan River, Red Butte Creek, and Provo River. An explanation of the GAMUT sensor network can be found at gamut.iutahepscor.org. The code requires installation of the packages 'plyr' and 'WaterML'. Instructions for modifying the code to extract sensor data for your time point of interest are included in the README file. The code has the option to write sensor data to .csv files in your working directory.
Additional code available at https://github.com/erinfjones/GAMUTdownload
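As a rough illustration of the workflow described in the abstract, the sketch below uses the WaterML package to download one variable for one site and write it to a .csv file. The service URL, site code, and variable code are placeholders rather than the actual GAMUT identifiers used in this resource; the real ones can be discovered with GetSites() and GetVariables() or at gamut.iutahepscor.org.

    library(WaterML)

    # Placeholder WaterOneFlow (SOAP) endpoint for the GAMUT network
    server <- "http://example-gamut-server/cuahsi_1_1.asmx?WSDL"

    # List the sites and variables the service offers
    sites <- GetSites(server)
    variables <- GetVariables(server)

    # Download one variable for one site over a short window (codes are placeholders)
    values <- GetValues(server,
                        siteCode = "iutah:EXAMPLE_SITE",
                        variableCode = "iutah:EXAMPLE_VARIABLE",
                        startDate = "2016-08-15",
                        endDate = "2016-08-16")

    # Write the downloaded sensor data to a .csv in the working directory
    write.csv(values, "EXAMPLE_SITE_2016-08-15.csv", row.names = FALSE)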
Subject Keywords
Content
README.txt
### iUTAH GAMUT Aquatic Station ### Grab Sample download ### Version 1.1 ###
### Written by: Erin Fleming Jones, Contact at: erinfjones3@gmail.com ###
### Last updated: 3/3/2016 ###

To Begin: This R code is designed to expedite the process of extracting GAMUT aquatic sensor data at specific time points, for example to pair with grab sample data collected at a GAMUT site. To use this code you will need RStudio, available for free download at https://www.rstudio.com/products/rstudio/download/, and R, available at https://cran.r-project.org/. You will need to install the packages 'plyr' and 'WaterML'; you can install packages from the Packages tab at the top of the bottom-right panel in RStudio.

Navigating: You can collapse portions of the code by clicking the arrows on lines beginning with #####. You can also use a drop-down menu by clicking at the very bottom of the script frame (if you haven't clicked anywhere in the text it will read [Top Level]). If you are interested in data from a single watershed, you can highlight the relevant collapsed sections and click the "Run" button. Logan River begins on line 55, Red Butte on line 385, and Provo on line 719, or use the drop-down menu items Franklin Basin, Knowlton Fork, and Soapstone, respectively.

Set working directory: This code will write sensor data into a .csv file in your working directory. If you are not familiar with how to set a working directory in RStudio, the options are described at https://support.rstudio.com/hc/en-us/articles/200711843-Working-Directories-and-Workspaces.

Date time set-up: Example sample collection dates and times are entered in the top section of the code labeled ### Set Date and times ###. Each watershed has a date included in a commented section of code that is used to keep files separate when multiple sampling points are being downloaded; use find and replace (Ctrl+F) to replace the example date (e.g. 15Aug) with your own unique identifier. In the next line, if samples at multiple sites were collected on the same day, enter your date of interest for a watershed after "StartDate=" and the next calendar day as "EndDate". If the samples within a watershed span multiple days, use the day after the last sampling day as the EndDate. Then enter the date and time for each individual site sampled in the designated lines. Times need to be rounded to the nearest 15-minute interval (e.g. 07:45:00 and 15:30:00 are okay; 10:32:00 and 16:15:37 are not). Three Provo sites need to be rounded to the nearest hour; version 1.2 will include code to allow 15-minute intervals. Be sure to use the YYYY-MM-DD HH:MM:SS format as demonstrated.

Running just a portion: If you are not using all of the sites or watersheds, you will need to delete or comment out (i.e. add a # in front of the line) the unused dataframe names (e.g. CGrove17AugGrabSample) contained in the "Spreadsheet" section after all the sites in a watershed. If there is an unused site, the function that consolidates the data into a single dataframe will not run and will return an error.
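The timestamp rules in the "Date time set-up" section (times on 15-minute marks, or on the hour for three Provo sites) can be checked with a small helper before editing the script. The function below is an illustrative sketch and not part of the original code; its name and the example timestamps are made up.

    # Round a "YYYY-MM-DD HH:MM:SS" timestamp to the nearest interval (in minutes)
    round_to_interval <- function(timestamp, minutes = 15, tz = "America/Denver") {
      t <- as.POSIXct(timestamp, format = "%Y-%m-%d %H:%M:%S", tz = tz)
      secs <- minutes * 60
      rounded <- as.POSIXct(round(as.numeric(t) / secs) * secs,
                            origin = "1970-01-01", tz = tz)
      format(rounded, "%Y-%m-%d %H:%M:%S")
    }

    round_to_interval("2016-08-15 10:32:00")                # "2016-08-15 10:30:00"
    round_to_interval("2016-08-15 16:15:37")                # "2016-08-15 16:15:00"
    round_to_interval("2016-08-15 10:32:00", minutes = 60)  # "2016-08-15 11:00:00" (Provo sites)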
How to Cite
This resource is shared under the Creative Commons Attribution CC BY license.
http://creativecommons.org/licenses/by/4.0/