
Hi, it seems that most of the time, if I call for the level readings and try to use the "_sorted" parameter to get the readings in an already descending order, the request times out. For example, I am using this to call all readings for a single day with the following...
https://environment.data.gov.uk/flood-monitoring/data/readings?parameter=level&today&_sorted
.... it does not seem to matter if a limit is set to a small number either. Any ideas and/or help?
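For reference, this is roughly how I am making the call (a minimal sketch in Python using the requests library; the 60-second timeout is just what I happen to test with):

```python
import requests

# The exact request that intermittently times out: all of today's level
# readings, pre-sorted into descending order by the _sorted flag.
URL = ("https://environment.data.gov.uk/flood-monitoring/data/readings"
       "?parameter=level&today&_sorted")

try:
    resp = requests.get(URL, timeout=60)  # 60s is just the value I test with
    resp.raise_for_status()
    readings = resp.json()["items"]
    print(f"{len(readings)} readings returned")
except requests.exceptions.Timeout:
    print("Request timed out")
```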
Hi Dale,
Thank you for your question - I will pass this onto our suppliers to investigate.
Many thanks
Ella Fotheringham
Environment Agency
Hi Dale,
We are still investigating this issue; in the meantime, are you able to try the following link:
https://environment.data.gov.uk/flood-monitoring/data/readings?parameter=level&today&_sorted
Please let me know how you get on.
Many thanks
Ella Fotheringham
Environment Agency
Hi Dale,
I have a further update for you:
This is a large query, and it seems that if the server is particularly busy it may occasionally time out.
The limit is applied after the sort operation (to ensure the latest data is included), so changing it does not affect how long the query takes to run.
Depending on your need for the most recent figures, the archives might be a good option. These do not include the latest day's data (the files are generated at 22:00 the following day) but are the most efficient and quickest way to get the data in bulk. See https://environment.data.gov.uk/flood-monitoring/archive
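For illustration, fetching one day's file might look something like the sketch below (the readings-YYYY-MM-DD.csv filename pattern is an assumption on my part, so please check the archive page above for the exact file listing):

```python
import csv
import io
from datetime import date, timedelta

import requests

# A given day's file is generated at 22:00 the following day, so the most
# recent file guaranteed to exist is the day before yesterday (or yesterday,
# if this runs after 22:00).
day = date.today() - timedelta(days=2)

# Assumed filename pattern; check the archive page for the actual listing.
url = ("https://environment.data.gov.uk/flood-monitoring/archive/"
       f"readings-{day:%Y-%m-%d}.csv")

resp = requests.get(url, timeout=120)
resp.raise_for_status()

rows = list(csv.DictReader(io.StringIO(resp.text)))
print(f"{len(rows)} readings archived for {day}")
```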
Alternatively, you could request an offline supply by completing the 'report an issue' form with details of your request: https://support.environment.data.gov.uk/hc/en-gb/requests/new
Many thanks
Hi. Many thanks for looking into this. I had been retrying and logging some timings, but found only that the problem is intermittent, so, as you say, it is likely down to server load at the time. What I will do is pull the current data from the API as normal, then once per day run a cronjob to download the previous day's archive in CSV and parse it to top up anything missing.
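Roughly what I have in mind for the top-up step is sketched below (known_keys and store are placeholders for whatever storage I end up using, and I am assuming the archive CSV carries measure and dateTime columns matching the API fields):

```python
import csv
import io
from datetime import date, timedelta

import requests

def top_up_from_archive(day, known_keys, store):
    """Download one day's archive CSV and store any readings not yet held.

    known_keys is a set of (measure, dateTime) pairs already fetched from
    the live API; store is a callback taking the raw CSV row. Both stand
    in for whatever persistence layer is actually used.
    """
    # Assumed filename pattern; check the archive page for the real listing.
    url = ("https://environment.data.gov.uk/flood-monitoring/archive/"
           f"readings-{day:%Y-%m-%d}.csv")
    resp = requests.get(url, timeout=120)
    resp.raise_for_status()

    added = 0
    for row in csv.DictReader(io.StringIO(resp.text)):
        key = (row["measure"], row["dateTime"])
        if key not in known_keys:
            store(row)
            added += 1
    return added

# Run from cron after 22:00, once the previous day's file has been generated.
if __name__ == "__main__":
    yesterday = date.today() - timedelta(days=1)
    seen = set()  # placeholder: load the keys already captured from the API
    n = top_up_from_archive(yesterday, seen, store=lambda row: None)
    print(f"Topped up {n} missing readings for {yesterday}")
```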