Biodiversity Information Science and Standards :
Conference Abstract
Corresponding author: John Waller (jwaller@gbif.org)
Received: 23 Sep 2021 | Published: 27 Sep 2021
© 2021 John Waller, Nikolay Volik, Federico Mendez, Andrea Hahn
This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Citation:
Waller J, Volik N, Mendez F, Hahn A (2021) GBIF Data Processing and Validation. Biodiversity Information Science and Standards 5: e75686. https://doi.org/10.3897/biss.5.75686
GBIF (Global Biodiversity Information Facility) is the largest data aggregator of biological occurrences in the world. GBIF was officially established in 2001 and has since aggregated 1.8 billion occurrence records from almost 2000 publishers. GBIF relies heavily on Darwin Core (DwC) for organising the data it receives.
GBIF Data Processing Pipelines
Every occurrence record published to GBIF goes through a series of three processing steps before it becomes available on GBIF.org.
Once all records are available in the standard verbatim form, they go through a set of interpretations.
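The verbatim-then-interpret pattern described above can be sketched as follows. This is an illustrative Python sketch only, not GBIF's actual pipeline code (the real pipelines are a separate project); the function name, record layout, and the exact issue-flag strings are assumptions for illustration, though they mirror the kinds of flags GBIF applies to coordinates and dates.

```python
# Hypothetical sketch of the verbatim-then-interpret pattern: each record
# keeps its untouched verbatim Darwin Core terms, while interpretation
# derives typed values and records issue flags for anything problematic.
from datetime import date

def interpret(verbatim: dict) -> dict:
    """Interpret a verbatim Darwin Core record into typed values plus issue flags."""
    issues = []
    interpreted = {}

    # Interpret dwc:decimalLatitude / dwc:decimalLongitude
    try:
        lat = float(verbatim.get("decimalLatitude", ""))
        lon = float(verbatim.get("decimalLongitude", ""))
        if -90 <= lat <= 90 and -180 <= lon <= 180:
            interpreted["decimalLatitude"] = lat
            interpreted["decimalLongitude"] = lon
        else:
            issues.append("COORDINATE_OUT_OF_RANGE")
    except ValueError:
        issues.append("COORDINATE_INVALID")

    # Interpret dwc:eventDate (this sketch only accepts ISO year-month-day)
    try:
        y, m, d = (int(p) for p in verbatim.get("eventDate", "").split("-"))
        interpreted["eventDate"] = date(y, m, d).isoformat()
    except ValueError:
        issues.append("RECORDED_DATE_INVALID")

    return {"verbatim": verbatim, "interpreted": interpreted, "issues": issues}

record = {"decimalLatitude": "51.5", "decimalLongitude": "200", "eventDate": "2021-09-23"}
result = interpret(record)
# The longitude of 200 is out of range, so the record is flagged but the
# valid eventDate is still interpreted; nothing in the verbatim view changes.
```

The key design point the sketch illustrates is that interpretation never discards the verbatim data: a flag marks a suspect value, and the original publisher-supplied text remains available alongside the interpreted version.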
In 2018, GBIF processing underwent a significant rewrite in order to improve speed and maintainability. One of the main goals of this rewrite was to improve the consistency between GBIF's processing and that of the Living Atlases. As a result of this rewrite, GBIF's current data validator fell out of sync with GBIF pipelines processing.
New GBIF Data Validator
The current GBIF data validator is a service that allows anyone with a GBIF-relevant dataset to receive a report on the syntactical correctness and the validity of the content contained within the dataset. By submitting a dataset to the validator, users can go through the validation and interpretation procedures usually associated with publishing in GBIF and quickly identify potential issues in their data, without having to publish it. GBIF is planning to rework the validator because it no longer matches current GBIF pipelines processing.
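The kind of pre-publication report described above can be illustrated with a minimal local sketch. This is not GBIF's actual validator service or API; the required-term set, function name, and report fields are assumptions chosen for the example, checking only a couple of structural issues (missing required Darwin Core columns and duplicate occurrence IDs) on a tab-separated occurrence table.

```python
# Minimal sketch of a pre-publication structural check, in the spirit of the
# GBIF validator: produce a report on a dataset without publishing anything.
# The REQUIRED set and report keys are illustrative assumptions.
import csv
import io

REQUIRED = {"occurrenceID", "basisOfRecord", "scientificName"}

def validate_table(tsv_text: str) -> dict:
    """Report missing required Darwin Core terms and duplicate occurrenceIDs."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    missing = sorted(REQUIRED - set(reader.fieldnames or []))
    seen, duplicates, rows = set(), 0, 0
    for row in reader:
        rows += 1
        oid = row.get("occurrenceID")
        if oid in seen:
            duplicates += 1
        seen.add(oid)
    return {"rows": rows, "missingRequiredTerms": missing, "duplicateIDs": duplicates}

data = "occurrenceID\tscientificName\n1\tPuma concolor\n1\tPuma concolor\n"
report = validate_table(data)
# The report notes the absent basisOfRecord column and the repeated ID "1",
# giving the publisher actionable feedback before any data goes live.
```

A real validator performs far more checks (full interpretation, vocabulary matching, taxonomic lookups), but the shape is the same: a dataset goes in, a structured report of issues comes out, and nothing is published.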
Planned Changes
The new validator will match the processing of the GBIF pipelines project.
Suggested Changes/Ideas
One of the main guiding philosophies for the new validator user interface will be avoiding information overload. The current validator is often quite verbose in its feedback, highlighting data issues that may or may not be fixable or particularly important. The new validator will aim to reduce this verbosity and focus feedback on the issues that matter most.
We see the hosted portal environment as a way to quickly implement a pre-publication validation environment that is interactive and visual.
Potential New Data Quality Flags
The GBIF team has been compiling a list of new data quality flags. Not all of the suggested flags are easy to implement, so GBIF cannot promise that all of them will be implemented, even when they are good ideas. The advantage of the new processing architecture is that almost any new data quality flag or processing step added to pipelines will also be available to the data validator.
Some of the suggested flags would be easy to implement. It is also desirable for a data quality flag to have an escape hatch, such that a data publisher can clear a false positive or remove a flag by filling in a value.
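The escape-hatch idea can be made concrete with a short sketch. The flag names and the specific term used here (dwc:coordinateUncertaintyInMeters) are illustrative assumptions, not GBIF's actual flag definitions: the point is that the flag exists only while the value is missing or unusable, so the publisher can remove it by supplying a sensible value.

```python
# Sketch of a data quality flag with an "escape hatch" (illustrative names):
# a record is flagged when coordinateUncertaintyInMeters is absent, and the
# flag disappears as soon as the publisher fills in a valid positive value.
def uncertainty_flags(record: dict) -> list:
    flags = []
    value = record.get("coordinateUncertaintyInMeters", "").strip()
    if not value:
        # Escape hatch: supplying any valid value clears this flag.
        flags.append("COORDINATE_UNCERTAINTY_MISSING")
    else:
        try:
            if float(value) <= 0:
                flags.append("COORDINATE_UNCERTAINTY_INVALID")
        except ValueError:
            flags.append("COORDINATE_UNCERTAINTY_INVALID")
    return flags
```

For example, a record with no uncertainty value is flagged as missing, while the same record with `"30"` filled in produces no flags at all, which gives publishers a clear, self-service path to a clean record.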
Some batch-type validations are feasible for pipelines, but probably not for the validator.
Conclusion
Data quality and data processing are moving targets. Variable source data will always be an issue when aggregating large amounts of data. With GBIF's new processing architecture, we hope that new features and data quality flags can be added more easily. Time and staffing resources are always in short supply, so we plan to prioritise the feedback we give to publishers, in order for them to work on correcting the most important and fixable issues. With new GBIF projects like the vocabulary server, we also hope that GBIF data processing can have more community participation.
Keywords: tool, API, data quality, issue flag, workflow, reporting, hosted portals, data publication, occurrence data, publisher support, Darwin Core
Presenting author: John Waller
Presented at: TDWG 2021