The Nordic e-Infrastructure Collaboration (NeIC) facilitates the development and operation of high-quality e-infrastructure solutions in areas of joint Nordic interest. NeIC is a distributed organization consisting of technical experts from academic institutions across the Nordic countries. NeIC's activities improve e-infrastructure and enable better research in the Nordic-Baltic region. The NeIC collaboration is steered by the national e-infrastructure providers in the Nordic countries and Estonia.
The NeIC conferences are organized biennially, bringing together e-infrastructure experts, researchers, policymakers, funders, and national e-infrastructure providers from the Nordics and beyond. The NeIC 2024 conference theme is “Nordic e-Infrastructure Tomorrow”.
The conference proceedings will be published by Springer Nature in the Communications in Computer and Information Science (CCIS) series with Gold Open Access.
Important Dates
Click here to download the PowerPoint presentation template.
The conference is co-sponsored by
Keynote by the Head of Research Infrastructure at the Ministry of Education and Research
Authors: Hemanadhan Myneni, Xin Liu, Jóhannes Nordal, Þór Arnar Curtis, Rakesh Sarma, Ebba Þóra Hvannberg, Helmut Neukirchen, Matthias Book, Andreas Lintermann, and Morris Riedel
Henric Zazzi
The conference dinner takes place at the Seaplane Harbour (Lennusadam), a popular maritime museum. The museum is located in a building originally constructed as a hangar for seaplanes in the area of Peter the Great's Naval Fortress. The main attraction in the museum is the submarine Lembit, which was ordered by Estonia from the United Kingdom and built in 1936.
The Seaplane Harbour is located at Vesilennuki 6, 2.5 km from the conference venue. You can get there by taxi or by walking over with a group.
The singer-musician duo Merlin Purge and Margus Vaher will add a cheerful mood to the dinner.
Information on the NeIC run: https://indico.neic.no/event/259/page/107-neic-run
There are many potential uses for AI in the biomedical sciences. The field of bioinformatics, which deals with the analysis of biomedical data, is still catching up with the skill set needed to use ML/AI effectively in its workflows.
In this tutorial, we will demystify some basic machine learning concepts needed to get started, and then show some examples where these concepts are already being applied. We will then move on to hands-on experience with a selected tool, to motivate you to learn and explore the possibilities of using ML/AI in your own work.
Prerequisites: A basic understanding of concepts such as sequence assembly, variants, and protein folding. Familiarity with the Python language would be an advantage, but we will provide copy-paste examples just in case.
The workshop runs from 13:00 to 17:00 with a 30-minute break in the middle.
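The hands-on part will use the selected tool; as a flavour of the kind of task involved, here is a minimal, self-contained sketch (plain Python, no ML libraries; the sequences and labels are invented for illustration) of a nearest-neighbour classifier over k-mer counts of DNA sequences:

```python
from collections import Counter

def kmer_counts(seq, k=2):
    """Count overlapping k-mers in a DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def distance(a, b):
    """L1 distance between two k-mer count vectors."""
    keys = set(a) | set(b)
    return sum(abs(a[x] - b[x]) for x in keys)

def nearest_neighbour(query, training):
    """Return the label of the training sequence closest to the query."""
    counts = kmer_counts(query)
    return min(training, key=lambda item: distance(counts, kmer_counts(item[0])))[1]

# Invented toy data: AT-rich vs GC-rich sequences.
training = [
    ("ATATATATAT", "AT-rich"),
    ("TATATTATAA", "AT-rich"),
    ("GCGCGGCGCC", "GC-rich"),
    ("CGGCGCCGCG", "GC-rich"),
]

print(nearest_neighbour("ATTATATTAT", training))  # prints "AT-rich"
```

Real workflows replace the toy distance with a trained model, but the shape of the problem (turn sequences into features, then classify) is the same.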
The ARC Compute Element (CE) is a distributed compute front-end on top of a conventional computing resource (e.g. a Linux cluster or a standalone workstation). It enables remote batch-system job submission, and seamlessly handles data staging of any remote input files. ARC-CEs can work in a grid of compute resources, removing the need for the end-user to specify on which resource their job should run. Direct submission to a specific HPC resource is also possible.
ARC has been one of several recommended compute element technologies of the Worldwide LHC Computing Grid (WLCG) since 2002, and is now one of the two recommended ones together with HTCondor-CE.
The tutorial demonstrates the installation and configuration of an ARC-CE front-end for use in a distributed grid infrastructure, such as WLCG. We will show that ARC supports both high-performance systems, like Vega EuroHPC, Nordic WLCG Tier1 and other HPC centres, but also smaller community grids and cloud HPC resources. The tutorial addresses primarily system administrators, but also serves as a demonstrator of a seamless access to HPC resources to extended user communities, such as Life Sciences, Climate and Biodiversity, Astrophysics, Materials science and others.
The tutorial will demonstrate the installation of ARC 7, focusing on an ARC-CE set up for token support.
A handful of test-clusters will be set up to allow attendees to type along.
If time allows, two more items will be discussed:
a) A demonstration of the new ARC cluster setup in the EGI Infrastructure Manager. The EGI IM ARC integration allows admins to set up a compute cluster running Slurm and ARC with a few clicks.
b) With ARCHERY, we will show how a research community that uses a set of ARC-enabled resources (HPC/grid/cloud) to run computations can set up its own community grid without needing a central job server. This allows researchers to submit jobs to all their available ARC CEs, without having to choose and specify a particular CE for each job.
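For attendees who have not used ARC before, jobs are typically described in the xRSL language and submitted with the ARC client tools. A minimal sketch of a job description (attribute names follow the NorduGrid xRSL documentation; the job itself is a placeholder) could look like:

```
& (executable = "/bin/echo")
  (arguments = "hello from ARC")
  (jobname = "hello-arc")
  (stdout = "stdout.txt")
  (stderr = "stderr.txt")
```

Such a file would be submitted with the ARC client, e.g. `arcsub -c ce.example.org hello.xrsl` (the CE hostname here is a placeholder); consult the ARC client documentation for the exact options of your installed version.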
Prerequisite: own laptop recommended; no previous experience with quantum computing expected. To use LUMI and Helmi participants will be required to have an institutional or company email address.
13:00–13:30 Alberto Lanzanova: The NordIQuEst HPC-QC ecosystem in detail
13:30–14:30 Jake Muff: Introduction to using a Quantum Computer
14:30–15:00 Break (coffee & tea)
15:00–17:00 Jake Muff: Hands-on Variational Quantum Eigensolver
In this workshop, we will look at the convergence of high-performance computing and quantum computing. Computational modelling is one field that is expected to be accelerated by quantum computers in the future.
We start with a presentation of the NeIC project Nordic-Estonian Quantum Computing e-Infrastructure Quest (NordIQuEst) by Alberto Lanzanova. NordIQuEst is a cross-border collaboration of seven partners from five NeIC member states that will combine several HPC resources and quantum computers into one unified Nordic quantum computing platform.
A practical approach to quantum programming follows. To use quantum computers in the future, novel quantum algorithms are required, and these can, and should, be developed already now. In this part of the workshop, participants will get a chance to submit a quantum job to a real quantum computer. Participants will be shown how to entangle multiple qubits and be given tips on getting the most out of quantum computers today.
This will be followed by an introduction to a hybrid quantum-classical algorithm: the Variational Quantum Eigensolver. The workshop will utilise the EuroHPC supercomputer LUMI and Finland’s 5-qubit quantum computer Helmi.
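The hands-on sessions will use the software stack provided for LUMI and Helmi; as a self-contained taste of what "entangling qubits" means, here is a minimal state-vector sketch in plain Python (no quantum SDK assumed) that prepares a two-qubit Bell state with a Hadamard gate followed by a CNOT:

```python
import math

# Two-qubit state vector over the basis |00>, |01>, |10>, |11>.
state = [1.0, 0.0, 0.0, 0.0]  # start in |00>

def apply_hadamard_q0(s):
    """Apply a Hadamard gate to the first qubit (basis order |q0 q1>)."""
    h = 1 / math.sqrt(2)
    return [
        h * (s[0] + s[2]),  # |00>
        h * (s[1] + s[3]),  # |01>
        h * (s[0] - s[2]),  # |10>
        h * (s[1] - s[3]),  # |11>
    ]

def apply_cnot(s):
    """CNOT with qubit 0 as control: swaps the |10> and |11> amplitudes."""
    return [s[0], s[1], s[3], s[2]]

bell = apply_cnot(apply_hadamard_q0(state))
# Only |00> and |11> carry amplitude: the two qubits are now entangled,
# so measuring one immediately determines the outcome of the other.
probs = [abs(a) ** 2 for a in bell]
print(probs)
```

On real hardware such as Helmi the same two-gate circuit is expressed in a quantum SDK and submitted as a job; the simulation above just shows the ideal result.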
When working with personal data, the General Data Protection Regulation (GDPR) poses a challenging situation for European researchers: the protection of the personal data of the study subjects can limit the transparency and sustainability of the scientific process by restricting data sharing and reuse. In recent years, national legislation has regulated the reuse of data originally collected for medical purposes, but the process for sharing non-medical research personal data is still unclear.
The workshop aims to identify ethical and GDPR-compliant solutions for sharing research personal data with the global scientific community. During the workshop, we will present some existing solutions and show how to adopt them in practice. Participants are encouraged to share their ideas and experiences to explore a wide range of available options, especially in the Nordic countries. The aim of the workshop is to produce a whitepaper towards a unified process for sharing personal data in non-medical research.
The dCache project provides open-source software deployed internationally to satisfy the most demanding storage requirements. Its multifaceted approach provides an integrated way of supporting different use cases with the same storage: high-throughput data ingest, data sharing over wide-area networks, efficient access from HPC clusters, and long-term data persistence on tertiary storage. Though it was originally developed for high-energy physics (HEP) experiments, today it is used by various scientific communities, including astrophysics, biomedicine, life science, and many others.
The dCache tutorial will describe the key components and architectural design of dCache, and demonstrate how to set up a minimal dCache installation, with small examples explaining the configuration.
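To give a taste of what such configuration examples look like: dCache is configured through a layout file that lists the domains and services to run on a host. A minimal single-domain sketch (service names and property keys follow the dCache Book; the pool name and path here are illustrative) might be:

```ini
# /etc/dcache/layouts/mylayout.conf -- minimal single-domain layout
# (illustrative sketch; see the dCache Book for the authoritative
# list of services and properties)
[dCacheDomain]
[dCacheDomain/zookeeper]
[dCacheDomain/pnfsmanager]
[dCacheDomain/poolmanager]
[dCacheDomain/webdav]
[dCacheDomain/pool]
pool.name=pool1
pool.path=/srv/dcache/pool1
```

The layout is selected in `/etc/dcache/dcache.conf` (e.g. `dcache.layout=mylayout`); the tutorial will walk through a working configuration in full.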
We will start (Sensitive Data) session 2 with a discussion of the requirements for a sensitive-data processing environment and present a plan for the LUMI EuroHPC supercomputer.
You will learn to use an encryption tool on your laptop.
You will get an overview of the production and development plans for some of the HPC systems in the Nordics.
At the end, we will discuss how to collaborate in the Nordics, and we will hear some examples of ongoing activities.