Leaders: V.V. Korenkov, S.V. Shmatov
Deputies: A.G. Dolbilov, D.V. Podgainy, T.A. Strizh
Participating Countries and International organizations:
Armenia, Azerbaijan, Belarus, Bulgaria, CERN, China, Egypt, France, Georgia, Kazakhstan, Mexico, Mongolia, Russia, Slovakia, South Africa, Taiwan, USA, Uzbekistan.
The problem under study and the main purpose of the research:
The main objective of the MICC is to meet, to the fullest extent possible, the needs of the JINR scientific community in solving urgent tasks, from theoretical research and the processing, storage and analysis of experimental data to applied tasks in the field of life sciences. Priority will be given to the tasks of the NICA project, the neutrino programme, the processing of data from the experiments at the LHC and other large-scale experiments, as well as to the support of users from the JINR Laboratories and the Member States.
The project includes two activities, which, like the project itself, are aimed at meeting the requirements of a large number of research and administrative personnel:
- development of the digital platform "JINR Digital EcoSystem", which integrates existing and future services to support scientific, administrative and social activities, as well as to maintain the engineering and IT infrastructures of the Institute, which in turn will provide reliable and secure access to different types of data and enable a comprehensive analysis of information using modern technologies of Big Data and artificial intelligence;
- creation of a multi-purpose hardware and software platform for Big Data analytics based on hybrid hardware accelerators; machine learning algorithms; tools for analytics, reports and visualization; support of user interfaces and tasks.
Project:
Name of the Project | Project Leaders | Project code |
---|---|---|
1. MICC Multifunctional Information and Computing Complex | V.V. Korenkov, S.V. Shmatov (Deputies: A.G. Dolbilov, D.V. Podgainy, T.A. Strizh) | 06-6-1118-1-2014/2030 |

Laboratory | Responsible from laboratories |
---|---|
MLIT | K.N. Angelov, A.I. Anikina, O.A. Antonova, A.I. Balandin, N.A. Balashov, A.V. Baranov, D.V. Belyakov, T.Zh. Bezhanyan, S.V. Chashchin, A.I. Churin, O.Yu. Derenovskaia, V.P. Dergunov, A.T. Dzakhoev, A.V. Evlanov, V.Ya. Fariseev, M.Yu. Fetisov, S.V. Gavrilov, A.P. Gavrish, T.M. Goloskokova, A.O. Golunov, L.I. Gorodnicheva, E.A. Grafov, E.N. Grafova, N.I. Gromova, A.E. Gushchin, A.V. Ilyina, N.N. Karpenko, I.I. Kalagin, A.S. Kamensky, I.A. Kashunin, M.Kh. Kirakosyan, A.A. Kokorev, G.A. Korobova, S.A. Kretova, N.A. Kutovsky, I.V. Kudasova, O.N. Kudryashova, E.Yu. Kulpin, A.E. Klochiev, A.V. Komkov, V.I. Kulakov, A.A. Lavrentiev, A.M. Levitin, Yu.M. Legashchev, M.A. Lyubimova, M.A. Maksimov, V.N. Markov, S.V. Marchenko, M.A. Matveev, A.N. Makhalkin, Ye. Mazhitova, A.A. Medyantsev, V.V. Mitsyn, N.N. Mishchenko, A.N. Mityukhin, A.N. Moibenko, I.K. Nekrasova, V.N. Nekrasov, D.A. Oleinik, V.V. Ovechkin, S.S. Parzhitsky, I.S. Pelevanyuk, D.I. Pryakhina, A.Sh. Petrosyan, D.S. Polezhaev, L.A. Popov, T.V. Rozhkova, Ya.I. Rozenberg, D.V. Rogozin, R.N. Semenov, A.S. Smolnikova, E.V. Solovieva, I.G. Sorokin, I.N. Stamat, V.P. Sheiko, D.A. Shpotya, B.B. Stepanov, A.M. Shvalev, M.L. Shishmakov, O.I. Streltsova, I.A. Sokolov, Sh.G. Torosyan, V.V. Trofimov, N.V. Trubchaninov, E.O. Tsamtsurov, V.Yu. Usachev, S.I. Vedrov, A.S. Vorontsov, N.N. Voytishin, A.Yu. Zakomoldin, S.E. Zhabkova, M.I. Zuev |
VBLHEP | K.V. Gertsenberger, A.O. Golunov, Yu.I. Minaev, A.N. Moshkin, O.V. Rogachevsky, I.V. Slepnev, I.P. Slepov |
BLTP | A.A. Sazonov |
FLNP | G.A. Sukhomlinov |
FLNR | A.S. Baginyan, A.G. Polyakov, V.V. Sorokoumov |
DLNP | A.S. Zhemchugov, Yu.P. Ivanov, V.A. Kapitonov |
LRB | V.N. Chausov |
UC | I.N. Semenyushkin |
Associated personnel MICC | A.V. Anisenkov, A.K. Kiryanov |
The concept of the development of information technology, scientific computing and Data Science in the JINR Seven-Year Plan provides for the creation of a scientific IT infrastructure that combines a multitude of various technological solutions, trends and methods. The IT infrastructure implies the coordinated development of interconnected IT technologies and computational methods aimed at maximizing the number of JINR strategic tasks to be solved that require intensive data computing. The large research infrastructure project "Multifunctional Information and Computing Complex" holds a special place in this concept.
The MICC LRIP main objective for 2024-2030 is to carry out a set of actions aimed at the modernization and development of the major hardware and software components of the computing complex and the creation of a state-of-the-art software platform enabling the solution of a wide range of research and applied tasks in accordance with the JINR Seven-Year Plan. The rapid development of information technology and new user requirements stimulate the development of all MICC components and platforms. The MICC computing infrastructure encompasses four advanced software and hardware components, namely, the Tier1 and Tier2 grid sites, the hyperconverged "Govorun" supercomputer, the cloud infrastructure and the distributed multi-layer data storage system. This set of components ensures the uniqueness of the MICC on the global landscape and allows the scientific community of JINR and its Member States to use all progressive computing technologies within one computing complex that provides multifunctionality, scalability, high performance, reliability and availability in 24x7x365 mode with the multi-layer data storage system for different user groups.
The MICC LRIP provides for supporting the operation of all MICC hardware and software components, i.e., the Tier1 and Tier2 grid sites, the cloud infrastructure, the hyperconverged "Govorun" supercomputer, the multi-layer data storage system, the network infrastructure, and the power supply and climate control systems, as well as for modernizing/reconstructing these components in accordance with new trends in the development of IT technologies and user requirements. In addition, it is required to ensure high-speed telecommunications, a modern local area network infrastructure and a reliable engineering infrastructure that provides guaranteed power supply and air conditioning for the server equipment.
Expected results upon completion of the project:
Modernization of the JINR MICC engineering infrastructure (reconstruction in accordance with modern requirements of the machine hall of the 4th floor of MLIT).
Modernization and development of the distributed computing platform for the NICA project with the involvement of the computing centres of the NICA collaboration.
Creation of a Tier0 grid cluster for the experiments of the NICA megaproject to store experimental and simulated data. Expansion of the performance and storage capacity of the Tier1 and Tier2 grid clusters as data centres for the experiments of the NICA megaproject, the JINR neutrino programme and the experiments at the LHC.
Enlargement of the JINR cloud infrastructure to broaden the range of services provided to users on the basis of containerization technologies. Automation of the deployment of cloud technologies in the JINR Member States’ organizations.
Expansion of the HybriLIT heterogeneous platform, including the "Govorun" supercomputer, as a hyperconverged software-defined environment with a hierarchical data storage and processing system.
Design and elaboration of a distributed software-defined high-performance computing platform that combines supercomputer (heterogeneous), grid and cloud technologies for the effective use of novel computing architectures.
Development of a computer infrastructure protection system based on fundamentally new paradigms, including quantum cryptography, neurocognitive principles of data organization and data object interaction, global integration of information systems, universal access to applications, new Internet protocols, virtualization, social networks, mobile device data and geolocation.
Expected results for the project in the current year:
Provision of the stable, secure and integrated functioning of the JINR information and telecommunication network (the backbone network (2x100 Gbps), the transport network of the NICA megaproject (4x100 Gbps), the MLIT mesh network (100 Gbps), the backbone external telecommunication channels (3x100 Gbps) and the Wi-Fi network at the Institute's sites) in 24x7x365 mode. Support of standard network services: email, file sharing, security, user database support and maintenance, IPDB network element database support, etc. Elaboration of a methodology and technology for dual authorization and certification authorities. Development of a project for alternative routes of the external network infrastructure. Elaboration of a project of a dedicated optical network for the NICA collaborations.
Operation of the guaranteed power supply (diesel generators, uninterruptible power supplies) and climate control systems (chillers, dry coolers, inter-row air conditioners, etc.), as well as the fire safety system, of the MICC computing infrastructure in 24x7x365 mode. Maintenance of the full-scale and optimal functioning of the MICC engineering equipment. Modernization of Modules 1 and 2 of the machine hall on the 2nd floor. Design and implementation of the first stage of modernization of the server room in the hall of the 4th floor of the MLIT building.
Expansion of the performance and storage capacity of the MICC basic components, namely, the Tier1 center up to 23,000 CPU cores and 16,000 TB, Tier2/CICC up to 12,000 CPU cores, and the EOS system up to 35 PB. Modernization of the EOS-based data lake. Enlargement and maintenance of the unified storage and access system for common software (CVMFS). Support of the software system for working with tape robots (CTA). Support and maintenance of the operation of WLCG virtual organizations, the NICA, COMPASS, NOvA, ILC and other experiments, and local user groups on the MICC Tier1 and Tier2 resources. Implementation of a regional center for the JUNO experiment on top of the MICC resources.
Development of prototypes of fully functional Tier0, Tier1 centers for the experiments at the NICA accelerator complex. Creation of basic services for Tier0, Tier1 and third-party Tier2 centers: registration of users and resources; authorization and support for the security of resource use and user work in the distributed system; problem fixing and notification of resource users and administrators; systems for combining distributed computing resources; systems for combining distributed data storage resources.
Extension of the number of users and participants of the distributed information and computing environment (DICE) on the basis of the cloud resources of the JINR Member States' organizations. Enlargement of the computing resources of the MICC cloud (where technically possible), including resources acquired by the Baikal-GVD, JUNO and NOvA/DUNE experiments, and their maintenance. Update of all software components of the JINR cloud infrastructure and services to the latest versions. Implementation of a system for the automated testing of servers before putting them into operation. Enhancement of the HTCondor cluster monitoring system to monitor the status of multi-core jobs. Transfer of the system for alerting and monitoring the current state of cloud infrastructure components from Icinga to the Grafana/Prometheus stack.
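The planned move from Icinga to the Grafana/Prometheus stack replaces host-centric service checks with metric queries evaluated against alert rules. As a minimal hedged sketch (the metric labels, hostnames and the threshold below are illustrative assumptions, not actual MICC monitoring configuration), the core of such a rule evaluation can be expressed as:

```python
# Hedged sketch of a Prometheus-style threshold alert rule evaluation.
# Label sets, hostnames and the 80-degree threshold are illustrative
# assumptions, not taken from the MICC monitoring setup.

def evaluate_rule(samples, threshold):
    """Return the label sets whose latest value breaches the threshold,
    mimicking an alerting rule of the form `expr: metric > threshold`."""
    return [labels for labels, value in samples if value > threshold]

# Canned instant-query result: (labels, latest value) pairs.
samples = [
    ({"instance": "wn001.example"}, 74.0),
    ({"instance": "wn002.example"}, 92.5),
]

print(evaluate_rule(samples, 80.0))  # -> [{'instance': 'wn002.example'}]
```

In the real stack this logic lives in the Prometheus rule engine; the sketch only illustrates the shift from per-host checks to threshold queries over collected time series.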
Enhancement of the efficiency of using the distributed heterogeneous computing environment built on top of the DIRAC software by developing and introducing into the system a methodology for analyzing the performance of jobs running in the distributed environment. Optimization of the job launch mechanism via the use of the DIRAC software environment preinstalled in CVMFS. Conducting mass data production sessions within the BM@N experiment; technical support for launching jobs of the MPD and SPD experiments.
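A typical starting point for such a job-performance methodology is the CPU efficiency of each job, i.e., consumed CPU time divided by the CPU time available during its wall-clock run. The sketch below is a hedged illustration; the record fields are hypothetical and do not reflect the actual DIRAC job-record schema:

```python
# Hedged sketch of a per-job CPU-efficiency metric of the kind a
# performance-analysis methodology for distributed jobs might compute.
# Field names ("cpu", "wall", "cores") are illustrative assumptions.

def cpu_efficiency(cpu_seconds, wall_seconds, cores=1):
    """CPU efficiency of a (possibly multi-core) job: consumed CPU time
    divided by the CPU time available during the wall-clock run."""
    if wall_seconds <= 0 or cores <= 0:
        raise ValueError("wall_seconds and cores must be positive")
    return cpu_seconds / (wall_seconds * cores)

jobs = [
    {"id": 1, "cpu": 3500.0, "wall": 3600.0, "cores": 1},
    {"id": 2, "cpu": 9000.0, "wall": 3600.0, "cores": 8},
]
for job in jobs:
    eff = cpu_efficiency(job["cpu"], job["wall"], job["cores"])
    print(job["id"], round(eff, 3))  # -> 1 0.972, then 2 0.312
```

Aggregating this ratio per site or per experiment makes chronically inefficient job classes (e.g., multi-core jobs that fail to use their allocated cores) visible.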
Development of a system for automating the jobs of deploying and configuring the system software of the HybriLIT platform. Development of a system for analyzing the load on computing resources to solve the tasks of modernizing and optimizing the configuration of the "Govorun" supercomputer. Testing and implementation of parallel and distributed data storage and processing systems such as MinIO, Apache Ignite, etc. to enhance the efficiency of working with model and experimental data on the HybriLIT platform. Development and integration of a system for collecting and analyzing statistics on the usage of application software by HybriLIT heterogeneous platform users via the Modules system. Enhancement of the GPU components of the "Govorun" supercomputer to provide advanced computing architectures for the current needs of users and planned research within the NICA experiments, as well as for the development of the ML/DL/HPC ecosystem, including the quantum computing polygon.
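The usage statistics collected via the Modules system amount to counting which application modules users load and how often. A minimal hedged sketch (the log-line format is an illustrative assumption, not the actual Modules system output) of that aggregation:

```python
# Hedged sketch of usage statistics over "module load" events.
# The log format below is an illustrative assumption, not the actual
# output of the Modules system on the HybriLIT platform.
from collections import Counter

def module_usage(log_lines):
    """Count how often each application module was loaded."""
    counts = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[0] == "module" and parts[1] == "load":
            counts[parts[2]] += 1
    return counts

log = [
    "module load root/6.30",
    "module load gcc/12.2",
    "module load root/6.30",
]
print(module_usage(log).most_common(1))  # -> [('root/6.30', 2)]
```

Statistics of this kind inform which application packages to prioritize when configuring and modernizing the "Govorun" supercomputer.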
Trial operation of the prototype of a data storage and processing system for the SPD experiment using the MICC resources (cloud infrastructure for hosting middleware services, CICC computing infrastructure for performing jobs, EOS for data storage). Testing of work with the MICC tape storage.
Enlargement of the LITmon monitoring system through the integration of local systems for monitoring electrical equipment (diesel generators, transformers and uninterruptible power supplies) and refrigeration systems (cooling towers, pumps, water circuits, heat exchangers, chillers). Introduction of new MICC equipment in the monitoring system. Creation of a prototype of a control room for the MICC engineering infrastructure with a single access point. Elaboration of a prototype of a unified MICC accounting system based on the accounting systems of the complex components and a system for monitoring logs of serial consoles of MICC servers.
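A unified accounting system of this kind merges usage records produced independently by the accounting systems of the individual complex components. As a hedged sketch (component names and the record format are illustrative assumptions, not the actual MICC accounting schema), the core aggregation can look like:

```python
# Hedged sketch of merging per-component accounting records into unified
# per-user totals; component names and the (component, user, cpu_hours)
# record format are illustrative assumptions.
from collections import defaultdict

def merge_usage(records):
    """Aggregate CPU-hours per user across per-component accounting records."""
    totals = defaultdict(float)
    for component, user, cpu_hours in records:
        totals[user] += cpu_hours
    return dict(totals)

records = [
    ("Tier1", "alice", 120.0),
    ("Govorun", "alice", 30.0),
    ("Cloud", "bob", 45.5),
]
print(merge_usage(records))  # -> {'alice': 150.0, 'bob': 45.5}
```

The same pattern extends to grouping by component or by user group, which is what a single access point over the component accounting systems would expose.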
Activities of the infrastructure:

Name of the activity | Leaders | Implementation period |
---|---|---|
Within the activity, two main directions of work are planned: the creation of the basic infrastructure of the digital platform (including the software-hardware and methodological support of its functioning) and the creation of different digital services. In addition to service support, digital services for scientific collaborations whose activity is related to JINR's basic facilities will be developed and maintained for use by the Institute's staff members.
Brief annotation and scientific rationale:
Expected results upon completion of the activity:
Expected results of the activity in the current year:
Collaboration

Country or International Organization | City | Institute or laboratory |
---|---|---|
Armenia | Yerevan | IIAP NAS RA |
Azerbaijan | Baku | ADA |
 | | IP ANAS |
Belarus | Minsk | INP BSU |
 | | JIPNR-Sosny NASB |
 | | UIIP NASB |
Bulgaria | Sofia | INRNE BAS |
 | | SU |
CERN | Geneva | CERN |
China | Beijing | IHEP CAS |
Egypt | Cairo | ASRT |
 | Giza | CU |
France | Marseille | CPPM |
Georgia | Tbilisi | GRENA |
 | | GTU |
 | | TSU |
 | | UG |
Kazakhstan | Almaty | INP |
 | Astana | BA INP |
Mexico | Mexico City | UNAM |
Mongolia | Ulaanbaatar | IMDT MAS |
Russia | Chernogolovka | SCC IPCP RAS |
 | Dubna | Dubna State Univ. |
 | | SCC "Dubna" |
 | | SEZ "Dubna" |
 | Gatchina | NRC KI PNPI |
 | Moscow | BMSTU |
 | | FRC IM RAS |
 | | IITP RAS |
 | | ISP RAS |
 | | ITEP |
 | | JSCC RAS |
 | | KIAM RAS |
 | | MPEI |
 | | MSK-IX |
 | | MSU |
 | | NRNU "MEPhI" |
 | | NRC KI |
 | | NRU HSE |
 | | PRUE |
 | | RCC MSU |
 | | RSCC |
 | | SINP MSU |
 | Moscow, Troitsk | INR RAS |
 | Novosibirsk | BINP SB RAS |
 | | ICMMG SB RAS |
 | | SKIF |
 | Protvino | IHEP |
 | Pushchino | IMPB RAS |
 | Saint Petersburg | FIP |
 | | ITMO |
 | | SPbSPU |
 | | SPbSU |
 | Samara | SU |
 | Vladikavkaz | NOSU |
 | Vladivostok | IACP FEB RAS |
Slovakia | Kosice | IEP SAS |
South Africa | Cape Town | UCT |
Taiwan | Taipei | ASGCCA |
USA | Arlington, TX | UTA |
 | Batavia, IL | Fermilab |
 | Upton, NY | BNL |
Uzbekistan | Tashkent | AS RUz |
 | | INP AS RUz |