The Nordic e-Infrastructure Collaboration (NeIC) facilitates the development and operation of high-quality e-Infrastructure solutions in areas of joint Nordic interest. NeIC is a distributed organization consisting of technical experts from academic institutions across the Nordic countries. NeIC's activities improve e-infrastructure and enable better research in the Nordic-Baltic region. The NeIC collaboration is steered by the national e-Infrastructure providers in the Nordic countries and Estonia.
The NeIC conferences are organized biennially, bringing together e-infrastructure experts, researchers, policymakers, funders, and national e-infrastructure providers from the Nordics and beyond. The NeIC 2024 conference theme is “Nordic e-Infrastructure Tomorrow”.
The Call for Papers and conference registration are now open.
The conference proceedings will be published by Springer in the Communications in Computer and Information Science (CCIS) series with Gold Open Access.
Important Dates
The conference is co-sponsored by
Keynote by the Head of Research Infrastructure at the Ministry of Education and Research
This track focuses on the critical elements of fostering expertise and collaboration within the realm of electronic infrastructures. As e-Infrastructures evolve to support increasingly complex research and innovation endeavors, the development of competencies among users and administrators alike becomes essential. This track invites submissions that explore various aspects of competence building, including training programs, skill development initiatives, community engagement strategies, and best practices for maximizing the utilization of e-Infrastructures. The focus is on how to establish an efficient framework for competence building and training of e-Infrastructure staff, in addition to the establishment of repositories for sharing portable workflows and tools, e.g. as libraries or containers. Additionally, we welcome contributions that examine collaboration environments within e-Infrastructures, encompassing tools, platforms, methodologies, and experiences that facilitate interdisciplinary teamwork and knowledge exchange. We encourage submissions from researchers, educators, practitioners, and policymakers, presenting insights, case studies, lessons learned, and innovative approaches to cultivate competence and foster collaboration in diverse e-Infrastructure ecosystems, ultimately advancing scientific discovery and societal impact.
This session includes two tracks.
This track is dedicated to exploring the principles and practices of managing data in a manner that is Findable, Accessible, Interoperable, and Reusable (FAIR). As the volume and diversity of data generated across disciplines continue to grow, ensuring that data is managed in accordance with FAIR principles becomes crucial for enabling its effective discovery, access, and reuse. This track invites submissions that address various aspects of FAIR data stewardship, including but not limited to metadata standards, data curation, data provenance, data integration, data interoperability, and data sharing policies.
Through this track, we aim to foster discussions and collaborations aimed at advancing the adoption of FAIR data practices across scientific domains, thereby maximizing the value and impact of research data.
The main focus of this track is the role of technological solutions in promoting the principles of Findability, Accessibility, Interoperability, and Reusability (FAIR) within data management frameworks. As the volume and complexity of data continue to grow exponentially across various domains, ensuring that data are FAIR becomes imperative for driving scientific reproducibility, collaboration, and innovation. This track focuses on cutting-edge technologies, architectures, standards, and methodologies aimed at facilitating FAIR data management practices. Topics of interest include metadata standards and ontologies, data integration and interoperability frameworks, data repositories and data management platforms, data discovery and access mechanisms, as well as data citation and provenance tracking mechanisms. The scope includes novel approaches, case studies, best practices, and challenges encountered in implementing FAIR data management solutions at the technology level.
This track aims to explore the evolving landscape of distributed resource management in federated electronic infrastructures, with a focus on how to manage cross-border access to both compute and storage resources. Federated environments bring together diverse resources, including computing, storage, networking, and data services, across multiple administrative domains to support collaborative research and innovation. This track welcomes submissions that address the complexities and opportunities inherent in federated resource management, including but not limited to resource allocation, scheduling, provisioning, monitoring, accounting, and security. We invite contributions from researchers, practitioners, e-Infrastructure providers, and industry experts that propose novel approaches, frameworks, algorithms, architectures, and tools to enhance the efficiency, scalability, interoperability, and sustainability of federated e-infrastructures. Additionally, we encourage submissions that explore use cases, best practices, lessons learned, and case studies highlighting successful deployments and challenges encountered in federated resource management across diverse scientific, academic, and industrial domains.
This track focuses on e-Infrastructure for computing, storage, long-term preservation, and archiving of data with restricted access. As HPC systems become increasingly integral to processing large-scale datasets across various domains, the need to safeguard sensitive information, including personal, proprietary, and confidential data, becomes paramount. This track invites submissions that explore the unique challenges, methodologies, and solutions related to sensitive data management in HPC environments, covering aspects such as data privacy, confidentiality, integrity, and compliance with regulatory frameworks. Interesting questions include how to leverage the experience from existing sensitive data archiving federation services, e.g. the Federated EGA (European Genome-phenome Archive), and complement this with processing of sensitive data on large-scale HPC clusters, e.g. EuroHPC supercomputers. We welcome contributions from researchers, practitioners, and industry experts that propose novel techniques, algorithms, architectures, and best practices for securely handling sensitive data in HPC workflows. Additionally, we encourage submissions that discuss case studies, lessons learned, and practical experiences in mitigating risks and ensuring data protection within HPC ecosystems, fostering a deeper understanding of the complex interplay between computational performance and data security.
This track aims to explore the pivotal role of advanced electronic infrastructure in driving breakthroughs and advancements in Artificial Intelligence (AI) and Machine Learning (ML) domains. As AI and ML applications proliferate across various sectors, the demand for robust, scalable, and efficient e-infrastructures becomes increasingly apparent. This track invites submissions that delve into the latest developments, challenges, and opportunities in leveraging e-infrastructures to support AI and ML workflows. Topics of interest include but are not limited to distributed computing platforms, cloud computing architectures, high-performance computing (HPC) systems, data storage and management solutions, networking technologies, and security frameworks tailored for AI and ML applications. We welcome contributions from researchers, practitioners, and industry experts that offer insights, methodologies, case studies, and best practices to foster the convergence of e-infrastructure and AI/ML technologies, ultimately enabling transformative innovation across diverse domains.
This track invites submissions that delve into various aspects of HPC containerization, including container orchestration, performance optimization, reproducibility, security, and interoperability with existing HPC ecosystems.
Invited presentations:
- HPC Containers for the LUMI supercomputer - Abdulrahman Azab
- History of containers on the Slovenian clusters - Dejan Lesjak
This track invites submissions that present compelling use cases, exemplifying how e-Infrastructures facilitate breakthroughs in fields such as physics, biology, chemistry, environmental science, engineering, social sciences, and beyond. We welcome contributions that demonstrate the unique capabilities of e-Infrastructures in enabling large-scale simulations, data-driven discoveries, collaborative research endeavors, and innovative solutions to real-world problems. Submissions may include case studies, research findings, methodologies, best practices, and lessons learned, offering insights into the transformative impact of e-Infrastructures on scientific exploration and knowledge dissemination. Researchers, practitioners, and domain experts are encouraged to share their experiences and insights, inspiring the broader community to harness the full potential of e-Infrastructures for scientific advancement.
This track is dedicated to exploring the critical role of electronic infrastructure in advancing the field of quantum computing. E-infrastructure for quantum computing is developing rapidly, with ever more qubits becoming available. The question tackled here is how to make a high-quality quantum computing service available. The focus is on providing software environments for quantum computing, enabling integration with existing HPC clusters, e.g. EuroHPC supercomputers, and providing end-to-end management of authentication and authorization, as well as allocation of resources on quantum computers. This track invites submissions that explore various facets of quantum computing e-Infrastructure, including hardware and software architectures, resource provisioning, programming models, simulation environments, and middleware frameworks. We welcome contributions from researchers, engineers, and practitioners, addressing challenges and proposing solutions to enable efficient utilization, management, and orchestration of quantum computing resources within e-Infrastructure environments. Additionally, we encourage submissions showcasing novel applications, use cases, and collaborative efforts leveraging quantum computing e-Infrastructure to advance scientific research, industrial innovation, and societal impact.
Invited Talks:
Quantum computing in the service of satellite data processing, Piotr Gawron
Earth observation data are constantly being produced by a growing number of satellites. Processing these data efficiently constitutes a major challenge, and not all of the produced data are processed and analyzed. At the same time, Earth observation provides important information about our ecosystems in the age of a rapidly changing climate. For this reason, research on the application of quantum computing to Earth observation data analysis has been initiated by several research institutions. For me personally, participation in this field allows me to study how a variety of quantum algorithmic techniques can be applied to image processing, and to join efforts aiming at reducing the impact of climate change. I will present a short review of ideas and activities that aim at finding new, possibly impactful, methods of satellite data processing using quantum computing techniques.
There are many potential uses for AI in the biomedical sciences. The field of bioinformatics, which deals with the analysis of biomedical data, is still catching up with the skill set needed to use ML/AI effectively in its workflows.
In this tutorial, we will demystify some basic machine learning concepts needed to get started, and then show some examples where these concepts are already being applied. We will then move on to hands-on experience with a selected tool, to motivate you to learn and explore the possibilities of using ML/AI in your own work.
Prerequisites: a basic understanding of concepts such as sequence assembly, variants, and protein folding. Familiarity with the Python language would be an advantage, but we will provide copy-paste examples just in case.
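To give a flavour of the kind of concept the tutorial demystifies, here is a minimal, plain-Python sketch of a classifier applying a 1-nearest-neighbour rule to toy data. The dataset, labels, and function names are illustrative inventions, not the tutorial's actual tool or material.

```python
# Toy illustration of a basic ML concept: classify a point by the label
# of its nearest training example. Plain Python, no ML library required.
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict_1nn(train, point):
    """Return the label of the training sample closest to `point`."""
    _, label = min(train, key=lambda item: euclidean(item[0], point))
    return label

# Hypothetical 2-D dataset: (features, label) pairs.
train = [([0.0, 0.0], "benign"), ([0.2, 0.1], "benign"),
         ([1.0, 1.0], "pathogenic"), ([0.9, 1.2], "pathogenic")]
test = [([0.1, 0.0], "benign"), ([1.1, 0.9], "pathogenic")]

correct = sum(predict_1nn(train, x) == y for x, y in test)
print(f"accuracy: {correct / len(test):.2f}")  # prints "accuracy: 1.00"
```

Real workflows would add a proper train/test split, feature scaling, and a library implementation, but the decision rule itself is this simple.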
The ARC Compute Element (CE) is a distributed compute front-end on top of a conventional computing resource (e.g. a Linux cluster or a standalone workstation). It enables remote batch system job submission, and seamlessly handles data staging of any remote input files. ARC-CEs can work in a grid of compute resources, removing the need for the end-user to specify which resource their job should run on. Direct submission to a specific HPC resource is also possible.
ARC has been one of several recommended compute element technologies of the Worldwide LHC Computing Grid (WLCG) since 2002, and is now one of the two remaining recommended ones, together with HTCondor-CE.
The tutorial demonstrates the installation and configuration of an ARC-CE front-end for use in a distributed grid infrastructure, such as WLCG. Particular focus will be on supporting high-performance systems, using experience from Vega EuroHPC, the Nordic WLCG Tier1, and other HPC centres. The tutorial addresses primarily system administrators, but also serves as a demonstrator of seamless access to HPC resources for extended user communities, such as Life Sciences, Climate and Biodiversity, Astrophysics, Materials Science, and others.
The tutorial will demonstrate the installation of ARC 7, focusing on an ARC-CE set up for token support.
A handful of test-clusters will be set up to allow attendees to type along.
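As a concrete illustration of the job submission described above, an ARC job can be described in the xRSL language. The sketch below is a minimal example with hypothetical file names; the exact attribute set supported depends on the ARC version, so consult the NorduGrid ARC documentation before relying on it.

```
&(executable="run.sh")
 (jobName="hello-arc")
 (stdout="hello.out")
 (stderr="hello.err")
 (inputFiles=("data.txt" ""))
 (outputFiles=("result.txt" ""))
 (cpuTime="10 minutes")
```

A description like this would then be submitted with the ARC client tool `arcsub`, either to the grid at large or pointed at a specific CE endpoint, with ARC handling the staging of `data.txt` in and `result.txt` out.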
Prerequisite: own laptop recommended; no previous experience with quantum computing expected. To use LUMI and Helmi participants will be required to have an institutional or company email address.
13:00-13:30 Alberto Lanzanova: The NordIQuEst HPC-QC ecosystem in detail
13:30-14:30 Jake Muff: Introduction to using a quantum computer
14:30-15:00 Break (coffee & tea)
15:00-17:00 Jake Muff: Hands-on Variational Quantum Eigensolver
In this workshop, we will have a look at the convergence of high-performance computing and quantum computing. Computational modelling is one field that is expected to be accelerated by quantum computers in the future.
We start with a presentation of the NeIC project Nordic-Estonian Quantum Computing e-Infrastructure Quest (NordIQuEst) by Alberto Lanzanova. NordIQuEst is a cross-border collaboration of seven partners from five NeIC member states that will combine several HPC resources and quantum computers into one unified Nordic quantum computing platform.
This is followed by a practical approach to quantum programming. To make use of quantum computers in the future, novel quantum algorithms are required, and these can, and should, be developed already now. In this part of the workshop, participants will get a chance to submit a quantum job to a real quantum computer. Participants will be shown how to entangle multiple qubits and be given tips on getting the most out of quantum computers today.
This will be followed by an introduction to a hybrid quantum-classical algorithm: the Variational Quantum Eigensolver. This workshop will utilise the EuroHPC supercomputer LUMI and Finland’s 5-qubit quantum computer Helmi.
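To make the entanglement step concrete, the sketch below builds a two-qubit Bell state with a toy plain-Python state-vector simulator. This is an illustration only, not the SDK used in the workshop (jobs on Helmi go through its own software stack).

```python
# Toy two-qubit state-vector simulator: amplitudes ordered [|00>, |01>, |10>, |11>].
import math

def apply_h_on_q0(state):
    """Hadamard on qubit 0 (the left qubit), mixing |0x> and |1x> amplitudes."""
    s = 1 / math.sqrt(2)
    a00, a01, a10, a11 = state
    return [s * (a00 + a10), s * (a01 + a11), s * (a00 - a10), s * (a01 - a11)]

def apply_cnot(state):
    """CNOT with qubit 0 as control: swaps the |10> and |11> amplitudes."""
    a00, a01, a10, a11 = state
    return [a00, a01, a11, a10]

# Start in |00>, apply H then CNOT: the Bell state (|00> + |11>) / sqrt(2).
state = [1.0, 0.0, 0.0, 0.0]
state = apply_cnot(apply_h_on_q0(state))
probs = [abs(a) ** 2 for a in state]
print(probs)  # only |00> and |11> carry probability, ~0.5 each
```

Measuring such a state yields perfectly correlated outcomes on the two qubits, which is the entanglement effect the hands-on session demonstrates on real hardware.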
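The core loop of a Variational Quantum Eigensolver can be sketched in a few lines: a parameterised circuit prepares a trial state, the quantum device estimates the energy, and a classical optimiser updates the parameters. The toy below does this for a single qubit with Hamiltonian Z and ansatz Ry(theta)|0>, using a naive grid search in place of both the device and a real optimiser; all names are illustrative.

```python
# Toy VQE for H = Z with ansatz Ry(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>.
import math

def expectation_z(theta):
    """Analytic <psi(theta)|Z|psi(theta)> = cos(theta); on hardware this
    would be estimated from repeated measurements."""
    return math.cos(theta)

def vqe_minimise(steps=1000):
    """Naive grid search over theta, standing in for a classical optimiser."""
    best_theta, best_e = 0.0, expectation_z(0.0)
    for i in range(1, steps + 1):
        theta = 2 * math.pi * i / steps
        e = expectation_z(theta)
        if e < best_e:
            best_theta, best_e = theta, e
    return best_theta, best_e

theta, energy = vqe_minimise()
print(energy)  # approaches the ground-state energy of Z, which is -1
```

The hybrid character is visible even in this sketch: only the energy evaluation belongs on the quantum device, while the parameter update is purely classical, which is why VQE suits today's small, noisy machines attached to HPC systems.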
When working with personal data, the General Data Protection Regulation (GDPR) poses a challenging situation for European researchers: the protection of the personal data of the study subjects can limit the transparency and sustainability of the scientific process by restricting data sharing and reuse. In recent years, national legislation has regulated the reuse of data originally collected for medical purposes, but the process for sharing non-medical research personal data is still unclear.
The workshop aims to identify ethical and GDPR-compliant solutions for sharing research personal data with the global scientific community. During the workshop, we will present some existing solutions and show how to adopt them in practice. Participants are encouraged to share their ideas and experiences to explore a wide range of available options, especially in the Nordic countries. A goal of the workshop is to produce a whitepaper towards a unified process for sharing personal data in non-medical research.
The dCache project provides open-source software deployed internationally to satisfy the most demanding storage requirements. Its multifaceted approach provides an integrated way of supporting different use cases with the same storage, from high-throughput data ingest and data sharing over wide area networks to efficient access from HPC clusters and long-term data persistence on tertiary storage. Though it was originally developed for HEP experiments, today dCache is used by various scientific communities, including astrophysics, biomedicine, life science, and many others.
The dCache tutorial will describe the key components and architectural design of dCache, and demonstrate how to set up a minimal dCache installation, with small examples explaining the configuration.
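To hint at what a minimal installation involves, a dCache deployment is described by a layout file listing the services running in each domain. The sketch below shows a plausible single-domain layout with hypothetical pool names and paths; the exact service list varies by dCache version, so treat it as illustrative rather than authoritative and refer to the dCache Book for the real configuration.

```
# layout file: all core services in one domain
[dCacheDomain]
[dCacheDomain/zookeeper]
[dCacheDomain/pnfsmanager]
[dCacheDomain/poolmanager]
[dCacheDomain/webdav]
[dCacheDomain/pool]
name=pool1
path=/srv/dcache/pool1
```

Splitting these services across several domains and hosts is how the same configuration mechanism scales from a single-node test setup to a large production deployment.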
We will start Sensitive Data session 2 with a discussion of the requirements for a sensitive data processing environment, and present a plan for the LUMI EuroHPC supercomputer.
You will learn to use an encryption tool on your laptop.
You will get an overview of the production systems and development plans for some of the HPC systems in the Nordics.
Finally, we will discuss how to collaborate across the Nordics, and hear some examples of ongoing activities.