What causes anomalous river level readings?

I'm working with the Real Time flood-monitoring API, and I noticed something odd this morning. I'm interested in the Hereford Bridge station on the River Wye.

There's a spike of 1.9m at 22:15 UTC on the 25th July.
Pretty sure it's impossible for the river level at the bridge to jump 1.7m in 15 minutes… what happens to cause these sorts of readings?
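For reference, this is roughly how I'm pulling the readings. A minimal sketch: `STATION_REF` is a placeholder for the station's reference code (you can find the real one via the API's stations search endpoint), and the date here is just an example.

```python
from datetime import datetime, timezone

BASE = "https://environment.data.gov.uk/flood-monitoring"

def readings_url(station_ref: str, since: datetime) -> str:
    """Build a readings query for one station since a given UTC instant.

    `_sorted` asks the API to return readings newest-first.
    """
    return (f"{BASE}/id/stations/{station_ref}/readings"
            f"?since={since:%Y-%m-%dT%H:%M:%SZ}&_sorted")

# Example: readings since 21:00 UTC on an example date.
url = readings_url("STATION_REF",
                   datetime(2024, 7, 25, 21, 0, tzinfo=timezone.utc))
```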
Henry Todd

5 comments

You often see this sort of artifact in pre-quality-assessed records. Depending on the instrument type, it can be a blocked stilling well, debris becoming entangled near the sensor, ultrasonics having a moment, or even children playing near the sensor at low flows (though hopefully not at that location at 22:15). I think it's safe to assume the data doesn't reflect reality :-)

adam parkes

Interesting! I'd thought that these sorts of "hiccups" in the data would be rectified after a day or so when there's enough data to discern them. I don't know *why* I thought that :-)

How does the data QA process work? How long, typically, does it take for data like this to be corrected?

I'm trying to keep as full a history as I can of the river levels at this particular bridge. So far I've only been querying for the readings that happened since my last known reading, and I was thinking about having a separate process to retrieve entire days running a few days behind the main job, to catch this sort of thing. This is the first time I'm seeing an anomaly "in the wild" though, and haven't tackled it yet. How far back should I have the historical process reading to correct for this sort of thing?
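For the lagged pass, I was imagining something as simple as a step filter to flag candidate anomalies for re-checking. A rough sketch; the 0.5 m threshold is just a guess I'd tune per station, and the input format is my own:

```python
def flag_spikes(readings, max_step=0.5):
    """Flag readings whose level changes by more than `max_step` metres
    from the previous reading.

    `readings` is a chronological list of (timestamp, level_m) pairs.
    """
    flagged, prev = [], None
    for t, level in readings:
        if prev is not None and abs(level - prev) > max_step:
            flagged.append((t, level))
        prev = level
    return flagged

# This morning's spike would show up as a jump up and a jump back down:
sample = [("22:00", 0.20), ("22:15", 1.90), ("22:30", 0.21)]
flagged = flag_spikes(sample)  # flags the 22:15 jump and the 22:30 return
```

A flagged reading wouldn't be deleted outright, just marked until the lagged job re-reads that day and confirms or corrects it.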

Henry Todd

As a regular user of gauge data, until the APIs became available I would request gauge data directly from the EA (the WISKI database). This would usually come with a "quality flag" and helpful comments provided by the EA Hydrometry and Telemetry (H&T) team.

I'm not clear on how the quality review process works with the API data, or on the relationship between the database the API draws on versus WISKI. Would be very interested to hear!

adam parkes

Thanks Adam, I didn't know about the existence of WISKI, or even that there are potentially multiple sources for the data :-) I wonder if QA corrections/adjustments after the fact even show up in API queries…

Henry Todd

Hi Henry,

Thank you for your question.

Spikes can occur for a number of reasons: debris in front of sensors, communication issues causing data to get mixed up, power spikes, etc. Generally we have processes in place to minimise their occurrence, but occasionally data spikes come through.

We are working with the business to publish more of the validated data from WISKI. The real-time telemetry API data has not been validated, as it comes directly from the telemetry systems. Telemetry teams review the data in an ongoing process and comment on quality; this is held in the WISKI database.

I hope this helps.

Ella Fotheringham

Environment Agency

Data Services Platform team