06-6-1118-2014/2030
   
 
 MICC
Multifunctional Information and Computing Complex

Leaders: V.V. Korenkov
S.V. Shmatov

Deputies: A.G. Dolbilov
D.V. Podgainy
T.A. Strizh

Participating Countries and International organizations:
Armenia, Azerbaijan, Belarus, Bulgaria, CERN, China, Egypt, France, Georgia, Kazakhstan, Mexico, Mongolia, Russia, Slovakia, South Africa, Taiwan, USA, Uzbekistan.


The problem under study and the main purpose of the research:
The main objective of the MICC is to meet, to the maximum extent possible, the needs of the JINR scientific community in solving urgent tasks, from theoretical research and experimental data processing, storage and analysis to applied tasks in the field of life sciences. The priorities will be the tasks of the NICA project, the neutrino programme, the processing of data from the experiments at the LHC and other large-scale experiments, as well as support for users of the JINR Laboratories and the Member States.

The project includes two activities, which, like the project itself, are aimed at meeting the requirements of a large number of research and administrative personnel:
- development of the digital platform "JINR Digital EcoSystem", which integrates existing and future services to support scientific, administrative and social activities, as well as to maintain the engineering and IT infrastructures of the Institute, which in turn will provide reliable and secure access to different types of data and enable a comprehensive analysis of information using modern technologies of Big Data and artificial intelligence;

- creation of a multi-purpose hardware and software platform for Big Data analytics based on hybrid hardware accelerators; machine learning algorithms; tools for analytics, reports and visualization; support of user interfaces and tasks.

Project:

1. MICC (Multifunctional Information and Computing Complex)
Project Leaders: V.V. Korenkov, S.V. Shmatov
Deputies: A.G. Dolbilov, D.V. Podgainy, T.A. Strizh
Project code: 06-6-1118-1-2014/2030
Status: Realization

Responsible from laboratories:
MLIT K.N. Angelov, A.I. Anikina, O.A. Antonova, A.I. Balandin, N.A. Balashov, A.V. Baranov, D.V. Belyakov, T. Zh. Bezhanyan, S.V. Chashchin, A.I. Churin, O.Yu. Derenovskaia, V.P. Dergunov, A.T. Dzakhoev, A.V. Evlanov, V.Ya. Fariseev, M.Yu. Fetisov, S.V. Gavrilov, A.P. Gavrish, T.M. Goloskokova, A.O. Golunov, L.I. Gorodnicheva, E.A. Grafov, E.N. Grafova, N.I. Gromova, A.E. Gushchin, A.V. Ilyina, N.N. Karpenko, I.I. Kalagin, A.S. Kamensky, I.A. Kashunin, M.Kh. Kirakosyan, A.A. Kokorev, G.A. Korobova, S.A. Kretova, N.A. Kutovsky, I.V. Kudasova, O.N. Kudryashova, E.Yu. Kulpin, A.E. Klochiev, A.V. Komkov, V.I. Kulakov, A.A. Lavrentiev, A.M. Levitin, Yu.M. Legashchev, M.A. Lyubimova, M.A. Maksimov, V.N. Markov, S.V. Marchenko, M. A. Matveev, A.N. Makhalkin, Ye. Mazhitova, A.A. Medyantsev, V.V. Mitsyn, N.N. Mishchenko, A.N. Mityukhin, A.N. Moibenko, I.K. Nekrasova, V.N. Nekrasov, D.A. Oleinik, V.V. Ovechkin, S.S. Parzhitsky, I.S. Pelevanyuk, D.I. Pryakhina, A.Sh. Petrosyan, D.S. Polezhaev, L.A. Popov, T.V. Rozhkova, Ya.I. Rozenberg, D.V. Rogozin, R.N. Semenov, A.S. Smolnikova, E. V. Solovieva, I.G. Sorokin, I.N. Stamat, V.P. Sheiko, D.A. Shpotya, B.B. Stepanov, A.M. Shvalev, M.L. Shishmakov, O.I. Streltsova, I.A. Sokolov, Sh.G. Torosyan, V.V. Trofimov, N.V. Trubchaninov, E.O. Tsamtsurov, V.Yu. Usachev, S.I. Vedrov, A.S. Vorontsov, N.N. Voytishin, A.Yu. Zakomoldin, S.E. Zhabkova, M.I. Zuev

VBLHEP K.V. Gertsenberger, A.O. Golunov, Yu.I. Minaev, A.N. Moshkin, O.V. Rogachevsky, I.V. Slepnev, I.P. Slepov

BLTP A.A. Sazonov

FLNP G.A. Sukhomlinov

FLNR A.S. Baginyan, A.G. Polyakov, V.V. Sorokoumov

DLNP A.S. Zhemchugov, Yu.P. Ivanov, V.A. Kapitonov

LRB V.N. Chausov

UC I.N. Semenyushkin

Associated personnel (MICC): A.V. Anisenkov, A.K. Kiryanov

  
Brief annotation and scientific rationale:
To attain the major goals of JINR’s flagship projects, a huge amount of experimental data will have to be processed: by a rough estimate, tens of thousands of processor cores and hundreds of petabytes of storage will be required. The experiments of the NICA project and the JINR neutrino programme (Baikal-GVD, JUNO, etc.) require Tier0, Tier1 and Tier2 grid infrastructures. To achieve these goals, it is essential to develop distributed multi-layer heterogeneous computing environments, including on top of the resources of the participants of other projects and collaborations.

The concept of the development of information technology, scientific computing and Data Science in the JINR Seven-Year Plan provides for the creation of a scientific IT infrastructure that combines a multitude of various technological solutions, trends and methods. The IT infrastructure implies the coordinated development of interconnected IT technologies and computational methods aimed at maximizing the number of JINR strategic tasks to be solved that require intensive data computing. The large research infrastructure project "Multifunctional Information and Computing Complex" holds a special place in this concept.


The MICC LRIP main objective for 2024-2030 is to carry out a set of actions aimed at modernizing and developing the major hardware and software components of the computing complex and at creating a state-of-the-art software platform enabling the solution of a wide range of research and applied tasks in accordance with the JINR Seven-Year Plan. The rapid development of information technology and new user requirements stimulate the development of all MICC components and platforms. The MICC computing infrastructure encompasses four advanced software and hardware components, namely, the Tier1 and Tier2 grid sites, the hyperconverged "Govorun" supercomputer, the cloud infrastructure and the distributed multi-layer data storage system. This set of components makes the MICC unique on the global landscape and allows the scientific community of JINR and its Member States to use all progressive computing technologies within one computing complex that provides multifunctionality, scalability, high performance, reliability and availability in 24x7x365 mode, with a multi-layer data storage system for different user groups.

Within the MICC LRIP, support is provided for the operation of all MICC hardware and software components, i.e., the Tier1 and Tier2 grid sites, the cloud infrastructure, the hyperconverged "Govorun" supercomputer, the multi-layer data storage system, the network infrastructure, the power supply and climate control systems, as well as for the modernization/reconstruction of these components in accordance with new trends in the development of IT technologies and user requirements. In addition, it is required to ensure high-speed telecommunications, a modern local area network infrastructure and a reliable engineering infrastructure that provides guaranteed power supply and air conditioning for the server equipment.

Expected results upon completion of the project:
Modernization of the JINR MICC engineering infrastructure (reconstruction of the machine hall on the 4th floor of MLIT in accordance with modern requirements).


Modernization and development of the distributed computing platform for the NICA project with the involvement of the computing centres of the NICA collaboration.

Creation of a Tier0 grid cluster for the experiments of the NICA megaproject to store experimental and simulated data. Expansion of the performance and storage capacity of the Tier1 and Tier2 grid clusters as data centres for the experiments of the NICA megaproject, the JINR neutrino programme and the experiments at the LHC.

Enlargement of the JINR cloud infrastructure to broaden the range of services provided to users on the basis of containerization technologies. Automation of the deployment of cloud technologies in the JINR Member States’ organizations.

Expansion of the HybriLIT heterogeneous platform, including the "Govorun" supercomputer, as a hyperconverged software-defined environment with a hierarchical data storage and processing system.

Design and elaboration of a distributed software-defined high-performance computing platform that combines supercomputer (heterogeneous), grid and cloud technologies for the effective use of novel computing architectures.

Development of a computer infrastructure protection system based on fundamentally new paradigms, including quantum cryptography, neurocognitive principles of data organization and data object interaction, global integration of information systems, universal access to applications, new Internet protocols, virtualization, social networks, mobile device data and geolocation.


Expected results for the project in the current year:
Provision of the stable, safe and integral functioning of the JINR information and telecommunication network (the backbone network (2x100 Gbps), the transport network of the NICA megaproject (4x100 Gbps), the MLIT mesh network (100 Gbps), backbone external telecommunication channels (3x100 Gbps) and the Wi-Fi network at the Institute’s sites) in 24x7x365 mode. Support of standard network services: email, file sharing, security, user database support and maintenance, IPDB network element database support, etc. Elaboration of a methodology and technology for dual authorization and certification authorities. Development of a project for alternative routes of the external network infrastructure. Elaboration of a project of a dedicated optical network for NICA collaborations.

Operation of the guaranteed power supply (diesel generators, uninterruptible power supplies) and climate control systems (chillers, dry coolers, inter-row air conditioners, etc.), as well as the fire safety system, of the MICC computing infrastructure in 24x7x365 mode. Maintenance of the full-scale and optimal functioning of the MICC engineering equipment. Modernization of Modules 1 and 2 of the machine hall on the 2nd floor. Design and implementation of the first stage of modernization of the server room in the hall of the 4th floor of the MLIT building.

Expansion of the performance and storage system of the MICC basic components, namely, the Tier1 center up to 23,000 CPU cores and 16,000 TB, Tier2/CICC up to 12,000 CPU cores, the EOS system up to 35 PB. Modernization of the EOS-based data lake. Enlargement and maintenance of the unified storage and access system for common software (CVMFS). Support of the software system for working with tape robots (CTA). Support and maintenance of the operation of WLCG virtual organizations, the NICA, COMPASS, NOvA, ILC and other experiments, local user groups on the MICC Tier1 and Tier2 resources. Implementation of a regional center for the JUNO experiment on top of the MICC resources.

Development of prototypes of fully functional Tier0, Tier1 centers for the experiments at the NICA accelerator complex. Creation of basic services for Tier0, Tier1 and third-party Tier2 centers: registration of users and resources; authorization and support for the security of resource use and user work in the distributed system; problem fixing and notification of resource users and administrators; systems for combining distributed computing resources; systems for combining distributed data storage resources.

Extension of the number of users and participants of the distributed information and computing environment (DICE) on the basis of the cloud resources of the JINR Member States’ organizations. Enlargement of the computing resources of the MICC cloud (if technically possible), including at the expense of resources acquired by the Baikal-GVD, JUNO, NOvA/DUNE experiments, and their maintenance. Update of all software components of the JINR cloud infrastructure and services to the latest versions. Implementation of a system for the automated testing of servers before putting them into operation. Enhancement of the HTCondor cluster monitoring system to monitor the status of multi-core jobs. Transfer of the system for alerting and monitoring the current state of cloud infrastructure components from Icinga to the Grafana/Prometheus stack.
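As a hedged illustration of the planned move to the Grafana/Prometheus stack, the text exposition format that Prometheus scrapes from an exporter can be sketched in a few lines; the metric and label names below are illustrative assumptions, not the actual cloud-infrastructure metrics.

```python
# Minimal sketch of the Prometheus text exposition format that a cloud-component
# exporter would serve on /metrics; the metric name "cloud_vm_running" and the
# "hypervisor" label are illustrative assumptions.
def render_metrics(name, help_text, samples):
    """samples: list of (labels_dict, value) -> exposition-format text."""
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} gauge"]
    for labels, value in samples:
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines) + "\n"

text = render_metrics(
    "cloud_vm_running",
    "Number of running virtual machines per hypervisor.",
    [({"hypervisor": "hv01"}, 42), ({"hypervisor": "hv02"}, 17)],
)
print(text)
```

A Grafana dashboard would then query such gauges through Prometheus rather than through Icinga checks, which is the design change the migration implies.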

Enhancement of the efficiency of using the distributed heterogeneous computing environment built on top of the DIRAC software by developing and introducing into the system a methodology for analyzing the performance of jobs running in the distributed environment. Optimization of the job launch mechanism via the use of the DIRAC software environment preinstalled in CVMFS. Conducting mass data production sessions within the BM@N experiment, technical support for launching jobs of the MPD and SPD experiments.

Development of a system for automating the jobs of deploying and configuring the system software of the HybriLIT platform. Development of a system for analyzing the load on computing resources to solve the tasks of modernizing and optimizing the configuration of the "Govorun" supercomputer. Testing and implementation of parallel and distributed data storage and processing systems such as MinIO, Apache Ignite, etc. to enhance the efficiency of working with model and experimental data on the HybriLIT platform. Development and integration of a system for collecting and analyzing statistics on the usage of application software by HybriLIT heterogeneous platform users via the Modules system. Enhancement of the GPU components of the "Govorun" supercomputer to provide advanced computing architectures for the current needs of users and planned research within the NICA experiments, as well as for the development of the ML/DL/HPC ecosystem, including the quantum computing polygon.
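The usage-statistics idea can be sketched as a small aggregator over module-load events; the log line format ("timestamp user module/version") is an assumption for illustration, not the actual format used by the HybriLIT Modules system.

```python
# Hypothetical sketch: aggregate application-software usage counts from
# module-load log lines of the assumed form "timestamp user module/version".
from collections import Counter

def count_module_usage(log_lines):
    """Return a Counter of module names (version stripped) per load event."""
    usage = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) != 3:
            continue  # skip malformed lines
        module = parts[2].split("/")[0]  # "root/6.30" -> "root"
        usage[module] += 1
    return usage

log = [
    "2024-03-01T10:00 user1 root/6.30",
    "2024-03-01T10:05 user2 root/6.28",
    "2024-03-01T11:00 user1 gcc/12.2",
]
print(count_module_usage(log))  # Counter({'root': 2, 'gcc': 1})
```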

Trial operation of the prototype of a data storage and processing system for the SPD experiment using the MICC resources (cloud infrastructure for hosting middleware services, CICC computing infrastructure for performing jobs, EOS for data storage). Testing of work with the MICC tape storage.

Enlargement of the LITmon monitoring system through the integration of local systems for monitoring electrical equipment (diesel generators, transformers and uninterruptible power supplies) and refrigeration systems (cooling towers, pumps, water circuits, heat exchangers, chillers). Introduction of new MICC equipment in the monitoring system. Creation of a prototype of a control room for the MICC engineering infrastructure with a single access point. Elaboration of a prototype of a unified MICC accounting system based on the accounting systems of the complex components and a system for monitoring logs of serial consoles of MICC servers.
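The unified accounting system described above would, at its core, merge usage records from the accounting systems of the individual complex components. A minimal sketch, assuming a hypothetical record shape (the real MICC accounting schema is not specified here):

```python
# Hypothetical sketch of a unified accounting view: sum cpu_hours per
# (user, component) across per-component accounting sources. Record fields
# ("user", "component", "cpu_hours") are assumptions for illustration.
from collections import defaultdict

def unify_accounting(*component_records):
    """Merge accounting records into {(user, component): total_cpu_hours}."""
    totals = defaultdict(float)
    for records in component_records:
        for rec in records:
            totals[(rec["user"], rec["component"])] += rec["cpu_hours"]
    return dict(totals)

tier1 = [{"user": "alice", "component": "Tier1", "cpu_hours": 120.0}]
govorun = [
    {"user": "alice", "component": "Govorun", "cpu_hours": 40.0},
    {"user": "bob", "component": "Govorun", "cpu_hours": 10.0},
]
print(unify_accounting(tier1, govorun))
```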

Activities of the infrastructure:
1. The digital ecosystem (Digital JINR)
Leaders: V.V. Korenkov, S.D. Belov
Implementation period: 2024-2026
Status: Realization

Responsible from laboratories:
MLIT E.V. Antonov, A.A. Artamonov, N.A. Balashov, N.E. Belyakova, O.V. Belyakova, A.S. Bondyakov, N.A. Davydova, I.A. Filozova, L.A. Kalmykova, E.N. Kapitonova, A.O. Kondratiev, E.S. Kuznetsova, E.K. Kuzmina, S.V. Kunyaev, L.D. Kuchugurnaya, D.V. Neapolitanskiy, I.K. Nekrasova, M.M. Pashkova, L.V. Popkova, Ya.I. Popova, A.V. Prikhodko, T.F. Sapozhnikova, V.S. Semashko, S.V. Semashko, I.A. Sokolov, E.V. Sheiko, G.V. Shestakova, T.S. Syresina, D.Yu. Usov, P.V. Ustenko, T.N. Zaikina

VBLHEP
V.V. Morozov, I.V. Slepnev, A.V. Trubnikov

DSDD A.V. Sheiko, M.P. Vasiliev


Brief annotation and scientific rationale:
The activity is related to the creation of an Institute-wide digital platform "JINR Digital EcoSystem". The main objective is the organization of a digital space with a single access and data exchange between electronic systems, as well as the transition of actions that previously required a personal or written request to a digital form. The platform is designed to ensure the integration of existing and future services to support scientific, administrative and social activities, as well as to maintain the engineering and IT infrastructures of the Institute.

Within the activity, two main directions of work are planned: the creation of the basic infrastructure of the digital platform (including the software-hardware and methodological support of its functioning) and different digital services. In addition to service support, digital services for scientific collaborations, whose activity is related to JINR’s basic facilities, will be developed and maintained for use by the Institute’s staff members.

Expected results upon completion of the activity:
Creation of a hardware-software and methodological basis for the functioning of the Institute-wide digital platform.

Development and implementation of digital services for distributed access to resources (information, computing, administrative, organizational ones) in a unified environment.

Transition of the processes of getting permits, approvals and applications of different types into a digital form.

Creation of a catalogue and a distributed storage of data related to the scientific and technical aspects of the Institute’s activity, as well as of tools for their analysis, presentation and the construction of predictive models.

Expected results of the activity in the current year:
Creation of a unified environment for the data storage and management of basic and applied DES services, integration with the Big Data infrastructure to analyze the specified data.

Commissioning of a deeply redesigned version of the PIN system integrated with the DES and the repository of JINR publications.

Integration of the JINR institutional repository of publications with other DES services, provision of publication data for automated processing and analysis.

Automation of the deployment, monitoring and support of reliable and safe operation of basic DES services.

Organization of a user support system, including various means of interaction with users and service administrators, electronic application services, knowledge bases and documentation organization tools.

Ongoing support and development of the "Dubna" EDMS. Preparation for the transfer of procurement processes to the document management system created by the Development of Digital Services Department.

Implementation of additional capabilities in the geoinformation system to support the activities of JINR technology services and departments upon their request. Integration of the geoinformation system with other DES services.

Implementation of the system "Management of buildings, premises and workplaces": organization of the workspace based on the digital twin of the building. Map of workplaces, their assignment to departments, status and usage calendar. Labor Code compliance monitoring of working conditions.

Creation and development of digital collaboration services (scientific documentation database, calendars, project management, etc.).

2. The multi-purpose hardware and software platform for Big Data analytics
Leader: P.V. Zrelov
Implementation period: 2024-2026
Status: Realization

Responsible from laboratories:

MLIT
A.A. Artamonov, D.A. Baranov, S.D. Belov, I.A. Filozova, Yu.E. Gavrilenko, A.V. Ilyina, I.A. Kashunin, M.A. Matveev, D.V. Neapolitanskiy, I.S. Pelevanyuk, R.N. Semenov, T.M. Solovieva, E.V. Sheiko, V.A. Tarabrin, T.N. Zaikina, D.P. Zrelova

Brief annotation and scientific rationale:
The activity provides for the creation of a multi-purpose hardware and software platform for Big Data analytics, which implements a full cycle of continuous processing, from data acquisition to the visualization of processing and analysis results, forecasts, recommendations and instructions, within the JINR MICC. One of the tasks planned to be solved using the platform is the elaboration of an analytical system for managing the MICC resources and data flows to enhance the efficiency of using computing and storage resources and optimize experimental data processing, as well as the development of the intelligent monitoring of distributed computing systems and data centres. Another essential task is the creation and development of analytics tools for the services of the JINR Digital EcoSystem.

Expected results upon completion of the activity:
Creation of a universal core of a Big Data mining platform.

Development and implementation of a number of standard software solutions for different classes of tasks within the platform.

Elaboration and development of analytics tools for the JINR Digital EcoSystem.

Development of methods and creation of complex solutions for analysing the security of data and computer systems.

Development of artificial intelligence methods within the analytical platform and creation of a software environment for work with technical and scientific information.

Elaboration of common solutions based on Big Data analytics for expert and recommendation systems, including for the optimization of the processes of functioning of the MICC components.

Expected results of the activity in the current year:
Creation of a custom Big Data infrastructure based on CPU and GPU computing resources using software for organizing computing and open-source libraries for analysis, modeling and visualization.

Methodology and software tools for the intelligent processing of scientific and technical information on the Institute’s topics (scientific publications, patents, materials for registering programs and databases, digital traces of projects, materials for the development of human resources).

Software tools and infrastructure for the intelligent monitoring of distributed computing systems based on the analysis of information about the system functioning (logs, state metrics, structure information, etc.) using large language models (LLMs).
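Before logs and state metrics can be handed to an anomaly detector or an LLM, they are typically reduced to compact per-service features. A minimal sketch, assuming a hypothetical "LEVEL service: message" log line format (service names below are illustrative):

```python
# Hypothetical sketch: extract a per-service error-rate feature from log lines
# as a preprocessing step for intelligent monitoring. The "LEVEL service: msg"
# line format and the service names are assumptions for illustration.
from collections import defaultdict

def error_rates(log_lines):
    """Return {service: fraction of ERROR lines} for each service seen."""
    total = defaultdict(int)
    errors = defaultdict(int)
    for line in log_lines:
        level, rest = line.split(" ", 1)
        service = rest.split(":", 1)[0]
        total[service] += 1
        if level == "ERROR":
            errors[service] += 1
    return {s: errors[s] / total[s] for s in total}

log = [
    "INFO dcache: pool online",
    "ERROR dcache: checksum mismatch",
    "INFO htcondor: negotiation cycle done",
]
rates = error_rates(log)
print(rates)  # dcache at 0.5, htcondor at 0.0
```

Such summaries, rather than raw log streams, are what would be passed into the LLM-based analysis layer.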

Elaboration of a mechanism for monitoring and analyzing the security of network connections to resources hosted in the JINR MLIT cloud environment.

Acceleration of data processing by the ROOT framework using distributed computing on the Spark cluster.

Analytical tools, hardware and software infrastructure, methods for integrating and analyzing data from Digital EcoSystem services.


Collaboration

Country or International Organization City Institute or laboratory
Armenia Yerevan IIAP NAS RA
Azerbaijan Baku ADA
    IP ANAS
Belarus Minsk INP BSU
    JIPNR-Sosny NASB
    UIIP NASB
Bulgaria Sofia INRNE BAS
    SU
CERN Geneva CERN
China Beijing IHEP CAS
Egypt Cairo ASRT
  Giza CU
France Marseille CPPM
Georgia Tbilisi GRENA
    GTU
    TSU
    UG
Kazakhstan Almaty INP
  Astana BA INP
Mexico Mexico City UNAM
Mongolia Ulaanbaatar IMDT MAS
Russia Chernogolovka SCC IPCP RAS
  Dubna Dubna State Univ.
    SCC "Dubna"
    SEZ "Dubna"
  Gatchina NRC KI PNPI
  Moscow BMSTU
    FRC IM RAS
    IITP RAS
    ISP RAS
    ITEP
    JSCC RAS
    KIAM RAS
    MPEI
    MSK-IX
    MSU
    NNRU "MEPhI"
    NRC KI
    NRU HSE
    PRUE
    RCC MSU
    RSCC
    SINP MSU
  Moscow, Troitsk INR RAS
  Novosibirsk BINP SB RAS
    ICMMG SB RAS
    SKIF
  Protvino IHEP
  Puschino IMPB RAS
  Saint Petersburg FIP
    ITMO
    SPbSPU
    SPbSU
  Samara SU
  Vladikavkaz NOSU
  Vladivostok IACP FEB RAS
Slovakia Kosice IEP SAS
South Africa Cape Town UCT
Taiwan Taipei ASGCCA
USA Arlington, TX UTA
  Batavia, IL Fermilab
  Upton, NY BNL
Uzbekistan Tashkent AS RUz
    INP AS RUz