Where is my data, where did it come from and how was it obtained? Improving Access to Geoanalytical Research Data
Data forms the backbone of our research discussions, conclusions and solutions to the problems nature presents us with. Did the apple fall to the ground, once or twice? It always does.
In today’s age, with the most advanced laboratory capabilities, the (geo)science community produces data at ever-increasing levels of precision, resolution and volume. For example, laser ablation ICP mass spectrometry systems can produce 1,000 geochronology dates a day at 30 µm spatial resolution with <5% precision, amounting to less than 1 GB; Large Hadron Collider (LHC) detectors generate about one petabyte of collision data per second (~1 MB per collision). Most of these analytical data are highly variable and lack standardised, community-agreed metadata.
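As a rough back-of-envelope illustration (taking the figures above at face value and assuming decimal prefixes, i.e. $10^{15}$ bytes per petabyte and $10^{6}$ bytes per megabyte):

\[
\frac{\sim 1\ \mathrm{PB\,s^{-1}}}{\sim 1\ \mathrm{MB\ per\ collision}} \approx \frac{10^{15}\ \mathrm{B\,s^{-1}}}{10^{6}\ \mathrm{B}} = 10^{9}\ \text{collisions per second},
\]

while 1,000 geochronology dates per day at well under 1 MB each remain comfortably below 1 GB per day.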
The greatest challenge pertaining to laboratory analytical research is to collate, store and make these data publicly available in standardised and machine-accessible form. But do we have to? Do we want to?
Bringing this year’s assembly debate to a virtual audience, this webinar puts the questions, problems, challenges and opportunities around geoanalytical research data centre stage, debate-style, addressing a topic that concerns researchers from almost every scientific division. Short opening statements from a panel representing the Earth, environmental, planetary and space sciences are followed by a discussion on how to improve the situation for EGU members who work with and on laboratory analytical data.
Discussion topics may include:
Community development of systems that facilitate easy and efficient research data management, and the need for greater user buy-in.
The push from publishers and journals that increasingly require access to the supporting data in a trusted repository prior to publication of manuscripts.
When should data, initially collected in a researcher’s private domain, become public?
The need for, and current lack of, global standards, best practices and protocols for analytical data management and exchange, so that scientists can better share their data through a global network of distributed databases.
When to capture analytical data: raw (in the lab), reduced (private/collaborative), or polished (published).
To ensure the long-term impact of these data, they need to be efficiently managed and losslessly transferred from laboratory instruments in the “Private” domain, through a “Collaboration” domain, to the “Public” domain, complete with all relevant information about the analytical process and its uncertainty, and with cross-references to the originating samples and publications.
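As a purely illustrative sketch of what such a machine-readable record might look like (this is not the schema of EarthChem, IGSN or any other existing system; all field and function names below are assumptions made for the example), a single analytical result could carry its sample reference, uncertainty and publication cross-reference as it moves from the private, to the collaboration, to the public domain:

```python
from dataclasses import dataclass, asdict
from enum import Enum
from typing import Optional
import json


class Domain(str, Enum):
    """Lifecycle stages a measurement passes through, as framed in this debate."""
    PRIVATE = "private"              # raw data on the laboratory instrument
    COLLABORATION = "collaboration"  # reduced data shared within a project
    PUBLIC = "public"                # polished data in a trusted repository


@dataclass
class AnalyticalResult:
    """Hypothetical record for one measurement; field names are illustrative only."""
    sample_id: str                   # e.g. an IGSN registered for the originating sample
    instrument: str                  # instrument or method used to obtain the value
    analyte: str                     # what was measured
    value: float
    uncertainty: float               # reported analytical uncertainty, same units as value
    units: str
    domain: Domain = Domain.PRIVATE
    publication_doi: Optional[str] = None  # cross-reference once the data are published

    def promote(self, to: Domain, doi: Optional[str] = None) -> "AnalyticalResult":
        """Return a copy of the record moved to another domain, optionally linking a DOI."""
        data = asdict(self)
        data.update(domain=to, publication_doi=doi or self.publication_doi)
        return AnalyticalResult(**data)


# Example: a made-up U-Pb date travelling from the lab to the public domain.
raw = AnalyticalResult(sample_id="IGSN:XXXXXXXXX", instrument="LA-ICP-MS",
                       analyte="206Pb/238U age", value=251.9, uncertainty=0.5, units="Ma")
published = raw.promote(Domain.PUBLIC, doi="10.xxxx/example")
print(json.dumps(asdict(published), indent=2))
```

The point is not this particular structure, but that sample identifiers, uncertainties and publication links travel with the value itself rather than having to be reconstructed later.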
This webinar will host five speakers: Steven L Goldstein, Olivier Pourret, Katy Chamberlain, Simon Marshall, and Shaunna Morrison. The debate will be moderated by Kerstin Lehnert.
About the panelists:
Steven L Goldstein is Higgins Professor of Earth and Environmental Sciences at Columbia University in New York City and at Columbia’s Lamont-Doherty Earth Observatory. Goldstein is a geochemist who uses the products of natural radioactive decay in rocks and waters as process tracers and to determine absolute ages, in research ranging from magmatic processes to chemical oceanography, and from the history of the early Earth to recent climate change. Goldstein has actively promoted best practices for the reporting of geochemical data in the literature, for example through the Editors Roundtable, which he helped to establish.
Olivier Pourret is an associate professor at UniLaSalle, Beauvais (France). He is a hydrogeochemist with particular interest in trace metal fractionation in low-temperature aqueous systems. He is also an advocate for open and inclusive science, spanning the full range from data to publications to recognition of scientific achievements.
Katy Chamberlain is a lecturer at the University of Derby (United Kingdom). She is an igneous petrologist and field volcanologist specialising in the use of in situ microanalytical techniques. Katy is also passionate about changing the data culture in geochemistry and making geochemical data FAIR.
Simon Marshall is currently global chief geochemist for Newmont, based in Australia. He has over 20 years of experience in applied exploration geochemistry across multiple continents, in data-rich and data-poor environments. Simon will provide an industry perspective on the opportunities and challenges of managing and accessing data in exploration.
Shaunna Morrison is a Research Scientist at the Earth and Planets Laboratory of the Carnegie Institution of Washington. Morrison is a mineralogist and planetary scientist with expertise in crystallography, crystal chemistry, and the application of data-driven methods, including advanced analytics and machine learning, to better understand the complex relationships among Earth and planetary materials, their formational environments through deep time, and their coevolution with the biosphere.
Kerstin Lehnert is Doherty Senior Research Scientist at the Lamont-Doherty Earth Observatory of Columbia University and Director of the Geoinformatics Research Group. Kerstin’s work centres on the development and operation of community-driven data infrastructure for the Earth and space sciences and, in particular, on using cyberinfrastructure to improve access to and sharing of data generated by the study of physical samples. Kerstin leads the EarthChem data facility for geochemistry, petrology and volcanology (NSF funded); the Astromaterials Data System (NASA funded); and the System for Earth Sample Registration (NSF funded). Kerstin is currently a member of the NASEM Division Committee for the Gulf Research Program; a member of the NOAA Science Advisory Board’s Data Archive & Access Requirements Working Group; chair of the EarthCube Council of Data Facilities; and President of the IGSN e.V.
You can view the online event here (YouTube).
If you have any questions about ‘Where is my data, where did it come from and how was it obtained? Improving Access to Geoanalytical Research Data’, please contact us via webinars@egu.eu.