Research Cloud Computing Ecosystem in Armenia
Abstract
Growing needs for computational resources and data storage within higher-education institutions, combined with the large investments and financial resources such facilities require, make the concept of a “National Research Cloud Platform (NRCP)” crucial for providing the necessary IT support for educational, research, and development activities. The platform allows access to advanced IT infrastructure, data centers, and applications while protecting sensitive information. In this article we illustrate the concept of the NRCP, its background, deployment stages, and architecture, and finally some use cases.
Keywords
IaaS, NRCP, Openstack, ArmCloud, ArmCluster, ArmGrid, Earth science, Life Science, VM
1. Introduction
Virtualization has transformed the IT industry landscape by providing capabilities to run various virtual machines (VMs) on the same hardware, enhancing resource sharing and improving performance [1]. The low overhead of implementing this technology, the high and constantly growing demand for computing resources, and the need to provide more flexible services have led to a transition from bare-metal servers towards the provisioning of virtualized resources (virtual machines, storage, and even network infrastructures) that are easier to scale and provide a sufficient level of reliability. Meanwhile, the Cloud computing environment has proven to be the base of these changes, increasing the demand for Cloud services and computing resources throughout scientific institutions and universities [2]. The term was brought into wide use by Amazon in 2008.
Later, this novel technology was developed and provided as a service by GAFA (Google, Apple, Facebook, and Amazon) and other public cloud providers [3]. The main approach of public cloud providers is to deliver on-demand services through the Internet to anyone who registers and pays for them. In contrast to public clouds, private cloud infrastructures are built for one or a few institutions or companies, which host the facilities on their side [4]. For instance, national research cloud platforms provide cloud services to the academic and research community on top of the research and education networks. It is also possible to combine the public and private cloud deployment models to create a synergy, called a hybrid cloud. Usually, public cloud resources supply the elasticity of computational resources when needed.
Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) are the leading cloud computing service layers [5]. IaaS provides infrastructure such as VMs and other resources like VM disk image libraries, block and file-based storage, firewalls, load balancers, IP addresses, or virtual local area networks. IaaS is the basic layer of the cloud computing models and is widely used, for example through Amazon Web Services (AWS) Elastic Compute Cloud (EC2) and Simple Storage Service (S3) [6]. PaaS, or platform as a service, delivers computing platforms typically including an operating system (OS), a programming language execution environment, a database, and a web server. Technically it is a layer on top of IaaS providing a platform for building applications, like Microsoft Azure, which is used to build, test, deploy, and manage applications and services through Microsoft-managed data centers [7]. SaaS provides a service delivery model for accessing application services over the web without worrying about installing, maintaining, or coding the software. The SaaS provider manages the software setup and maintenance; therefore, the software is available to access and operate without downloading or installing anything.
In addition to the leading cloud computing service layers, a wide range of services can be provided by the Cloud with an extra layer of flexibility and scalability, such as provisioning high-demand virtual high-performance computing (HPC) resources [8]. The critical challenge of deploying such cloud services is the complexity and cost of purchasing and maintaining the computing resources, which requires significant human effort to keep all the services up to date and reliable. The cost is a significant limitation for developing countries, as in the case of Armenia. In 2018, the Institute for Informatics and Automation Problems of the National Academy of Sciences of the Republic of Armenia (IIAP) launched the “National Research Cloud Platform (NRCP)” initiative. The NRCP aims to deliver on-demand, cost-effective Cloud computing resources and services to local institutions and research communities.
Market analyses conducted with scientific communities and stakeholders aimed to identify the demands and the complexity of the scientific problems they face, and to gather information about the communities’ tools and packages. As a result, IIAP deployed user-oriented Cloud services to fulfill almost all types of demands in Armenia, ranging from general to domain-specific services. The rest of the article is organized as follows: Section 2 presents the architecture and design of the NRCP. Section 3 presents some scientific communities and cloud services benefiting from the infrastructure, while conclusions and discussions of future work follow in Section 4.
2. National Research Cloud Platform
In the first stage, a federated cloud infrastructure was deployed in the Black Sea region, enabling user communities from the participating countries (Armenia, Georgia, Moldova, and Romania) to join their local virtualized resources and providing them with VMs, networks, and storage [9]. The federated infrastructure allows user communities to use local or remote resources and makes regional collaboration easier. Based on OpenNebula middleware, the federated cloud platform addresses regional problems that require large amounts of computational resources, even when the actual simulations do not happen in the zone where the data is stored. In the next stage, virtualization was widely implemented for the core services of the Armenian National Grid (ArmGrid) infrastructure, providing on-demand access to a sustainable computing platform [10]. The ArmGrid infrastructure consists of seven Grid sites located in the leading research centers and universities of Armenia, with approximately 450 CPU cores in total. Unlike the single-system ArmCluster (Armenian Cluster), ArmGrid is an autonomous, decentralized system with distributed job management and scheduling capabilities [11]. Finally, a hybrid research computing platform was deployed combining HPC with Grid and Cloud computing, based on the ArmCluster HPC cluster, the resource-sharing ArmGrid, and the on-demand service provisioning of federated cloud infrastructures. Each infrastructure defines its own rules for provisioning resources and executing applications, such as resource ownership and sizing, application portability, or resource allocation policy.
Based on these experimental infrastructures, the NRCP was proposed in Armenia, aiming at better hardware utilization, increased storage reliability and manageability, and higher-level services with virtualization support. The infrastructure provides VMs and networking services and consists of a cloud core service and scheduler, application programming interfaces, databases, and the nodes where VMs run. Full virtualization has been implemented on each computational node using the kernel-based VM (KVM) hypervisor [12].
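As a small illustration of this node-level virtualization, the sketch below uses the libvirt Python bindings to list the VMs a KVM hypervisor is running; it assumes the bindings are installed on a compute node and is illustrative only, not part of the NRCP tooling.

```python
# A minimal sketch, assuming the libvirt Python bindings (libvirt-python)
# on a KVM compute node; it lists the domains (VMs) the hypervisor runs.
import libvirt

conn = libvirt.open("qemu:///system")  # local KVM/QEMU hypervisor
try:
    for dom in conn.listAllDomains():
        state, _reason = dom.state()
        status = "running" if state == libvirt.VIR_DOMAIN_RUNNING else f"state={state}"
        print(f"{dom.name()}: {status}")
finally:
    conn.close()
```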
The NRCP consists of three different Zones: Cloud resources, graphics processing unit (GPU) resources, and a data lake (see Fig. 1). Combining these three solutions under a single umbrella provides domain-specific services with high availability and scalability. The NRCP is a critical element of the Armenian e-infrastructure [9], a complex national IT infrastructure consisting of both communication and distributed computing infrastructures. Most importantly, all the input and output data reside on the NRCP side, reducing the time needed to process the data and allowing data to be shared between different scientific groups.
Figure 1: National Research Cloud Platform
The NRCP architecture is mainly built on multiple Cloud controllers dedicated to different scientific communities, splitting the Cloud resources and Cloud storage across several scientific domains. The technical specification of the computational resources is summarized in Table 1.
Table 1
NRCP technical specification
Server type   Quantity   CPU/GPU model           CPU/GPUs   Cores   RAM (GB)   Total cores
Thin          4          Intel Xeon E5-2630 v4   2          20      256        80
Fat           2          Intel Xeon Gold 6138    4          80      512        160
Accelerated   2          Intel Core i9-10900KF   1          10      128        20
              2          Intel Xeon E5-2680 v3   2          24      128        48
                         Intel Xeon Phi 7120P    2          122                244
              2          Intel Xeon Gold 5218    2          32      192        64
                         Nvidia V100 32GB        2          10240              20480
Total (cores)                                                                  21096
(CPU/GPUs, cores, and RAM are given per server; the Xeon Phi and V100 accelerators are installed in the accelerated servers listed directly above them.)
Overall, the NRCP provides compute services consisting of 616 physical CPU cores and 20,480 GPU cores, about 3 terabytes of memory, and 1,620 terabytes of data storage (see Table 2).
Table 2
The breakdown of storage facilities
Brand        Model            Type          Quantity   Raw capacity (TB)   Total capacity (TB)
HPE          MSA 2052         All-flash     2          8                   16
NetApp       E2824            Hybrid        1          12                  12
NetApp       E5760            Hybrid        2          720                 1440
QNAP         TS-809U-RP       NAS           1          12                  12
Supermicro   JBOD Enclosure   NAS           1          40                  40
HPE          MSL 2024         Tape (cold)   1          100                 100
Total (TB)                                                                 1620
For instance, all Earth science production groups are consolidated under a single Zone with the same storage node, enabling them to share data easily when needed and opening the door to better collaboration.
2.1 OpenStack IaaS
The open-source OpenStack cloud computing platform provisions IaaS in private and public clouds, offering AWS-like services and supporting several hypervisors, load balancing, migration, and other features [10]. OpenStack has been deployed and customized for the NRCP, providing the orchestration needed to virtualize servers, storage, and networking. The current deployment is based on the OpenStack Rocky release using the CentOS 7 Linux distribution on all servers. In general, the deployment is automated using Puppet and Linux Bash scripts to simplify adding more Zones in the future [11]. Controller, compute, network, and storage components have been used for the deployment (see Fig. 2).
Figure 2: NRCP OpenStack Cloud platform architecture
Based on username and password authentication, the open-source Horizon dashboard gives administrators an overview of the cloud environment, including resources and instance pools. Compute (Nova) provides on-demand computing resources by provisioning and managing VMs. Various VM flavors are provided, ranging from small instances with 2 CPUs, 2 GB RAM, and a 40 GB HDD to very large instances with 128 CPU cores, 256 GB RAM, and a 1 TB HDD. The OS distributions across instances are quite diverse, including Ubuntu (18.04, 20.04), Debian (10, 11, 12), and CentOS (7, 8). Networking (Neutron) manages an IP address pool, including floating IP assignment via the dynamic host configuration protocol, load balancing, firewalls, and virtual private networks.
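As an illustration, a community user could provision one of these flavors programmatically through the OpenStack API. The following is a minimal sketch using the openstacksdk Python client; the cloud profile, image, flavor, network, and key-pair names are hypothetical placeholders, not the platform’s actual catalog.

```python
# A minimal sketch using openstacksdk; assumes a "nrcp" profile in
# clouds.yaml and placeholder image/flavor/network/key names.
import openstack

conn = openstack.connect(cloud="nrcp")

image = conn.compute.find_image("ubuntu-20.04")
flavor = conn.compute.find_flavor("small-2c-2g")   # e.g., 2 vCPUs, 2 GB RAM, 40 GB HDD
network = conn.network.find_network("private-net")

server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
    key_name="my-ssh-key",                         # SSH key pair registered in Nova
)
server = conn.compute.wait_for_server(server)
print(server.status)                               # "ACTIVE" once the VM is ready
```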
2.2 Data lake store
The Data Lake provides a scalable and secure platform that allows all users to upload and download their data at high speed, process the data in real time, use the data for different simulations, and share the data between groups.
For instance, in the domain of astrophysics, the infrastructure’s core is the Armenian Virtual Observatory (VO) repository, providing an advanced experimental platform for data archiving, extraction, acquisition, correlation, reduction, and use. The Armenian VO has been ported to the distributed computing infrastructure as a critical tool in the analysis of the vast amounts of data provided by surveys such as the Digitized First Byurakan Survey, the largest and first systematic objective-prism survey of the extragalactic sky [12]. The survey consists of 1874 photographic plates containing about 40 million low-dispersion spectra and 20 million objects covering 17,000 square degrees.
Another example is the Armenian Data Cube [13], a complete and up-to-date archive of Earth Observation (EO) data (e.g., Landsat, Sentinel). EO, with its precise and reliable data, is a critical element in addressing different environmental challenges concerning water, soil, or plants. The Armenian Data Cube contains three years (2016-2019) of Landsat 7-8 and Sentinel 2-5P analysis-ready imagery over Armenia; full coverage of the country comprises 11 Sentinel-2 and 9 Landsat 7-8 scenes. Because satellite images are voluminous, gathering and processing these large files typically requires HPC resources.
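For illustration, data held in such a cube is typically queried through the Open Data Cube Python API; the sketch below loads the red and near-infrared Sentinel-2 bands over a small extent. The product name and coordinates are hypothetical placeholders, not the Armenian Data Cube’s actual catalog.

```python
# A minimal sketch, assuming the Open Data Cube API (datacube-core) with an
# indexed Sentinel-2 product; product name and extent are placeholders.
import datacube

dc = datacube.Datacube(app="eo-demo")

ds = dc.load(
    product="s2_l2a_armenia",            # hypothetical product name
    x=(45.0, 45.4), y=(40.2, 40.6),      # lon/lat box around Lake Sevan
    time=("2016-01-01", "2019-12-31"),
    measurements=["red", "nir"],
    output_crs="EPSG:32638",             # UTM zone 38N, covering Armenia
    resolution=(-10, 10),                # 10 m Sentinel-2 pixels
)
print(ds)                                # an xarray.Dataset of the time series
```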
The Cloud storage facility is mounted on demand on the VMs under specific quotas so that simulations can use the stored data smoothly. The data inside the storage is replicated to keep it safe from any end-user errors.
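The on-demand mounting can be sketched as creating a block-storage volume and attaching it to a running VM; the names and sizes below are illustrative placeholders under the same openstacksdk assumptions as above.

```python
# A minimal sketch: create a quota-limited volume and attach it to a VM
# so a simulation can read the shared data; names and sizes are placeholders.
import openstack

conn = openstack.connect(cloud="nrcp")

volume = conn.block_storage.create_volume(name="eo-data", size=500)   # size in GB
conn.block_storage.wait_for_status(volume, status="available")

server = conn.compute.find_server("wrf-worker-01")
# Newer SDK releases also accept the volume object directly.
conn.compute.create_volume_attachment(server, volume_id=volume.id)
```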
2.3 Accelerated Computing and Deep learning
Dedicated servers with Tesla V100 cards and Docker containers have been deployed with the following machine learning and deep learning tools to conduct experiments quickly:
• Python: a popular language with high-quality machine learning and data analysis libraries.
• R: a language for statistical computing and graphics.
• Pandas: a Python data analysis library enhancing analytics and modeling.
• Jupyter Notebook: a free web application for interactive computing, enabling users to develop and execute code and to create and share documents with live code.
A dedicated storage volume is mounted on the container, enabling users to use the Cloud storage data for their experiments as required.
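As a sketch of how such a GPU container might be launched, the example below assumes the docker-py SDK and the NVIDIA container runtime on the host; the image, command, and mount paths are illustrative placeholders.

```python
# A minimal sketch using the docker Python SDK (docker-py); assumes the
# NVIDIA container runtime so the container can see the GPUs.
import docker

client = docker.from_env()

logs = client.containers.run(
    "tensorflow/tensorflow:latest-gpu",                 # placeholder image
    command="python train.py",
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
    volumes={"/mnt/cloud-storage": {"bind": "/data", "mode": "rw"}},  # dedicated volume
    remove=True,                                        # clean up after the run
)
print(logs.decode())
```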
In general, the system allows large datasets to be ingested and managed so that algorithms can be trained efficiently. It enables deep learning models to scale efficiently and at lower cost using GPU processing power. By leveraging distributed networks, deep learning in the cloud allows users to design, develop, and train deep learning applications faster.
Users usually use SSH keys to access the predefined VMs containing all the necessary tools, libraries, and packages. This approach decreases the number of users who need access to the Horizon dashboard.
In general, everything is consolidated and harmonized at the controller level in each Zone. The GPU Zone is a dedicated environment where GPU usage is mandatory to increase the effectiveness of scientific experiments in fields such as biology or machine learning. This part is not consolidated under the OpenStack umbrella; instead, Docker containers are provided, which users can access to run their tasks directly, since all the necessary packages and tools are already installed inside the container.
2.4 Monitoring
Cloud monitoring is a critical enabler for providers and consumers to manage and control hardware and software infrastructures by providing information and key performance indicators, such as workload performance, quality of service, or service-level agreement compliance.
Prometheus monitors all the resources, recording real-time metrics in a time-series database (allowing for high dimensionality) built using an HTTP pull model, with flexible queries and real-time alerting. The data is also sent to Grafana to provide a complete overview of Cloud resource usage and to push the utilization metrics to the maximum level: as soon as any inactivity is recognized for a resource, we contact the user to confirm whether the resource is still needed, so that it can be delivered to other users of the system.
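As an illustration of this inactivity check, idle instances can be found by querying Prometheus’s standard HTTP API; the endpoint and the recording-rule name below are assumptions made for the sketch, not the platform’s actual configuration.

```python
# A minimal sketch querying the Prometheus HTTP API for instances whose
# average CPU utilisation was under 5% over the last week; the endpoint
# and metric name are assumed placeholders.
import requests

PROMETHEUS = "http://monitor.example.org:9090"                     # assumed endpoint
query = "avg_over_time(instance:cpu_utilisation:ratio[7d]) < 0.05"

resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": query}, timeout=10)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    print("possibly idle:", series["metric"].get("instance"))
```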
3. Scientific Communities
For the last three years, the NRCP has served multiple scientific projects and communities. Figure 3 represents the different subject or domain-specific areas served by the platform. This article highlights only the Earth science and life science communities.
Figure 3: NRCP scientific communities
3.1 Earth Science Community
The Earth science user community addresses several critical societal challenges, such as weather prediction, air quality monitoring and prediction, water quality and quantity monitoring, or Earth observation. The GRASS geographic information system (GIS), Quantum GIS, and Data Cube codes and tools are used for remote sensing processing, covering vector and raster geospatial data management, geoprocessing, spatial modelling, and visualization. Several domain-specific services have been developed using large-scale simulations, which are transformed into SaaS HPC solutions, including:
• Shoreline change monitoring service, using single-band and multi-band methods based on water object identification and shoreline delineation, provides indicators of visible water and of changes in the surrounding environment, accessible through a Jupyter notebook [14]. The service identifies the location of the shoreline and its changes over time using remote sensing. As a case study, the service was validated on Lake Sevan, giving sufficiently reliable results.
• Normalized Difference Vegetation Index (NDVI) time series geoprocessing web service monitors the state of plants as a measure of the greenness of biomes [14]. The NDVI time series analysis uses HPC resources to process a large number of high-resolution multispectral satellite images. The service hides the difficulties of dealing with geoprocessing workflows and avoids the time needed to search, collect, and upload input data sets. The service can quickly compare the NDVI time series simulations with the available spatial and temporal environmental field data sets. Thirteen vegetation indices have been studied to find an optimal parallelization approach for our infrastructure (a minimal sketch of the underlying band arithmetic follows this list).
• Regional-scale weather forecasting service serves operational forecasting and atmospheric research needs using different weather prediction models and parameterizations [16]. The service’s core is the next-generation mesoscale Advanced Weather Research and Forecasting (WRF-ARW) modeling system, running initial and boundary conditions derived from Global Forecast System analyses and forecasts at 0.25 deg resolution [17]. The service runs many times for the same region with various initial conditions. It addresses many environmental and weather challenges, such as predicting high temperatures in the southern region of Armenia or analyzing wintertime cold-air pools.
• Hydrological modeling “Desktop as a Service (DaaS)” service studies, predicts, and manages water resources [18]. Hydrological models paired with meteorological models allow us to carry out long-term simulations of large watersheds using coarse spatial and temporal resolution. The service’s core is the river-basin-scale Soil and Water Assessment Tool (SWAT) model, consisting of sensitivity analysis, calibration, and validation parts [23]. The simulation is CPU-intensive and uses many input data sets, including temperature, wind speed, precipitation, land use, soil data, and a digital elevation model. The most sensitive parameters for calibration, such as runoff processes and the baseflow recession coefficient, have been studied and selected. As a case study, the service was validated for the Sotk watershed of Lake Sevan to assess the feasibility of watershed modeling in that region.
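As referenced in the NDVI item above, the index itself reduces to simple band arithmetic; the sketch below shows the computation on numpy arrays of the red and near-infrared bands (e.g., read from a Sentinel-2 scene), independent of the service’s actual parallel HPC pipeline.

```python
# A minimal sketch of the NDVI band arithmetic on numpy arrays; the
# service's actual parallel HPC pipeline is out of scope here.
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), with NaN where both bands are zero."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    denom = nir + red
    # Values fall in [-1, 1]; dense green vegetation is typically above 0.2.
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(denom == 0.0, np.nan, (nir - red) / denom)
```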
3.2 Life Science Community
Modern life sciences research depends on traditional HPC systems, data analytics, and the management of massive datasets. HPC has a vital role in genomic analysis, which processes large amounts of data (for instance, from next-generation sequencing), and in studies of biomolecular systems that carry out Molecular Dynamics (MD) or molecular mechanics simulations. Our aim is to transform services into SaaS solutions and to optimize the infrastructure by adopting advanced technology, including GPU computing and machine learning capabilities, for molecular modeling, molecular biology, statistical analysis and bioinformatics, and increasingly large biomolecular systems. For instance, the modeling and MD study service for complex systems, based on the classical treatment of interactions among atoms, offers a detailed picture of the structure and dynamics of multicomponent systems, which is of particular interest for improving our knowledge and understanding of biological and chemical processes [19]. The NAMD and GROMACS MD packages are customized for HPC simulation of large biomolecular systems, such as proteins, lipids, or nucleic acids.
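For illustration, a typical GROMACS run of the kind wrapped by this service boils down to two commands; the sketch below drives them from Python with placeholder input files and is not the service’s actual implementation.

```python
# A minimal sketch driving a GROMACS MD run via subprocess; file names
# and thread counts are placeholders, not the NRCP service internals.
import subprocess

# Pre-process topology, coordinates, and run parameters into a run input file.
subprocess.run(
    ["gmx", "grompp", "-f", "md.mdp", "-c", "conf.gro", "-p", "topol.top", "-o", "md.tpr"],
    check=True,
)
# Run the MD engine; -ntomp sets the number of OpenMP threads per rank.
subprocess.run(["gmx", "mdrun", "-deffnm", "md", "-ntomp", "8"], check=True)
```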
4. Conclusion and lessons learned
The article has summarized the experience gained so far and highlighted a few scientific use cases where the community is intensively using NRCP resources. Throughout the deployment and implementation phases of the NRCP for diverse scientific communities with specific domain-oriented approaches, a list of recommendations has been collected:
• To consider the complete infrastructure and its capabilities before the deployment, helping to choose the best possible options and tools to satisfy the needs.
• To conduct benchmarks and experiments before putting the solution into production, confirming the system’s reliability under different scenarios, even if some packages need to be deployed several times.
• To prepare a well-documented tutorial with exact details of all services and solutions, considering that not every user has an IT background when using the system.
• To conduct a training campaign with potential communities and explain the opportunities and challenges. It will help to understand the benefits of such solutions by boosting scientific experiments and simulations.
• To minimize the manual deployment as much as possible. For instance, it is planned to implement multiple bash scripts and Puppet automation.
• To maximize the overall usage of computing, networking, and storage resources while minimizing energy consumption.
• To use federated identity authentication based on SAML 2.0 (Security Assertion Markup Language) to ease user access [25].
It is planned to develop and provide user-specific high-level services, like SaaS solutions, for all those communities, so that they can conduct experiments without directly accessing the computing resources. Instead, the communities may use a browser to access any domain-specific service and run experiments from it, further simplifying cloud resource usage. OpenStack Ironic will be implemented for the economic and most efficient use of computing resources, focusing on the provisioning of HPC Cloud solutions based on fully virtual, bare-metal, and hybrid architectures. The ultimate future goal is the establishment of a National Open Science Cloud Initiative and its further integration with the European Open Science Cloud and European Research Infrastructures, like the European Life-science Infrastructure for Biological Information or the European Open Science Infrastructure [26].
5. Acknowledgement
This paper is supported by the European Union’s Horizon 2020 research infrastructures programme under grant agreement No 857645, project NI4OS Europe (National Initiatives for Open Science in Europe), and the State Committee of Science of Armenia under the State Target Project “Deployment of a cloud infrastructure to tackle scientific-applied problems”.
6. References
[1] M.F. Mergen, V. Uhlig, O. Krieger, J. Xenidis. ”Virtualization for high-performance computing.” ACM SIGOPS Operating Systems Review. 2006 Apr 1;40(2):8-11.
[2] M. Armbrust, A. Fox, R. Griffith, A.D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, M. Zaharia, A view of cloud computing, Communications of the ACM 53.4 (2010) 50-58. doi:10.1145/1721654.1721672
[3] V. Chang, G. Wills, D. De Roure, A review of cloud business models and sustainability, in: 2010 IEEE 3rd International Conference on Cloud Computing, IEEE, 2010, pp. 43-50.
[4] Y. Jadeja, K. Modi, Cloud computing - concepts, architecture and challenges, in: 2012 International Conference on Computing, Electronics and Electrical Technologies (ICCEET), IEEE, 2012, pp. 877-880.
[5] S.K. Sowmya, P. Deepika, J. Naren, Layers of Cloud–IaaS, PaaS and SaaS: A Survey, International Journal of Computer Science and Information Technologies 5.3 (2014) 4477-4480.
[6] J. Murty, Programming Amazon Web Services: S3, EC2, SQS, FPS, and SimpleDB, O'Reilly Media, 2008.
[7] L. Qian, Z. Luo, Y. Du , L. Guo. (2009) Cloud Computing: An Overview. In: Jaatun M.G., Zhao G., Rong C. (eds) Cloud Computing. CloudCom 2009. Lecture Notes in Computer Science, vol 5931. Springer, Berlin, Heidelberg.
[8] R.R. Expósito, G.L. Taboada, S. Ramos, J. Touriño, R. Doallo, Performance analysis of HPC applications in the cloud, Future Generation Computer Systems 29.1 (2013) 218-229. doi: 10.1016/j.future.2012.06.009
[9] H Astsatryan, A Hayrapetyan, W Narsisian, V Sahakyan, Yu Shoukourian, G Neagu and A Stanciu. Environmental science federated cloud platform in the BSEC region, International Journal of Scientific & Engineering Research 1.1 (2014) 1130–1133.
[10] Hrachya Astsatryan, Yuri Shoukouryan and Vladimir Sahakyan. ”Grid activities in Armenia.” In Proceedings of the International Conference Parallel Computing Technologies (PAVT’2009). Novgorod, Russia, March, 2009.
[11] H.V. Astsatryan, Yu Shoukourian and V. Sahakyan. ”Creation of High-Performance Computation Cluster and DataBases in Armenia.” In Proceedings of the Second International Conference on Parallel Computations and Control Problems (PACO ‘2004), pages 466–470, 2004.
[12] Y. Yamato, OpenStack hypervisor, container and baremetal servers performance comparison, IEICE Communications Express 4.7 (2015) 228-232. doi: 10.1587/comex.4.228.
[13] Hrachya Astsatryan, Vladimir Sahakyan, Yuri Shoukourian, Pierre-Henri Cros, Michel Dayde, Jack Dongarra, Per Oster. ”Strengthening Compute and Data intensive Capacities of Armenia." IEEE Proceedings of 14th RoEduNet International Conference - Networking in Education and Research (NER'2015), Craiova, Romania, pp. 28-33, September 24-26 2015, DOI: 10.1109/RoEduNet.2015.7311823.
[14] O. Sefraoui, M. Aissaoui, M. Eleuldj, OpenStack: toward an open-source solution for cloud computing, International Journal of Computer Applications 53.3 (2012) 38-42. doi: 10.5120/8738-2991.
[15] J. Loope, Managing Infrastructure with Puppet: Configuration Management at Scale, O'Reilly Media, 2011.
[16] A.M. Mickaelian, H.V. Astsatryan, A.V. Knyazyan, T. Yu. Magakian, G.A. Mikayelyan, L.K. Erastova, L.R. Hovhannisyan, L.A. Sargsyan and P.K. Sinamyan. Ten Years of the Armenian Virtual Observatory. ASPC, vol. 505, no. 16, 2016
[17] Sh. Asmaryan, A. Saghatelyan, H. Astsatryan, L. Bigagli, P. Mazzetti, S. Nativi, Y. Guigoz, P. Lacroix, G. Giuliani and N. Ray. Leading the way toward an environmental National Spatial Data Infrastructure in Armenia. South-Eastern European Journal Issue of Earth Observation and Geomatics 3 (2014) 53–62.
[18] Shushanik Asmaryan, Vahagn Muradyan, Garegin Tepanosyan, Azatuhi Hovsepyan, Armen Saghatelyan, Hrachya Astsatryan, Hayk Grigoryan, Rita Abrahamyan, Yaniss Guigoz and Gregory Giuliani, Paving the way towards an armenian data cube, Data 4.3 (2019) 1–10. doi: 10.3390/data4030117.
[19] Hrachya Astsatryan, Andranik Hayrapetyan, Wahi Narsisian, Shushanik Asmaryan, Armen Saghatelyan, Vahagn Muradyan, Gregory Giuliani, Yaniss Guigoz and Nicolas Ray, An interoperable cloud-based scientific GATEWAY for NDVI time series analysis, Elsevier Computer Standards & Interfaces 41 (2015). doi: 10.1016/j.csi.2015.02.001.
[20] H. Astsatryan, A. Shakhnazaryan, V. Sahakyan, Yu. Shoukourian, V. Kotroni, Z. Petrosyan, R. Abrahamyan and H. Melkonyan. ”WRF-ARW Model for Prediction of High Temperatures in South and South East Regions of Armenia.” In IEEE 11th International Conference on e-Science, pages 207–213. IEEE, 2015.
[21] Michael C Coniglio, James Correia Jr, Patrick T Marsh and Fanyou Kong, Verification of convection-allowing WRF model forecasts of the planetary boundary layer using sounding observations, Weather and Forecasting 28.3 (2013) 842–862. doi: 10.1175/WAF-D-12-00103.1.
[22] H. Astsatryan, W. Narsisian and Sh. Asmaryan, SWAT hydrological model as a DaaS cloud service, Springer Earth Science Informatics 9.3 (2016) 401–407. doi: 10.1007/s12145-016-0254-6.
[23] Arnold JG, Moriasi DN, Gassman PW, Abbaspour KC, White MJ, Srinivasan R, Santhi C, Harmel RD, Van Griensven A, Van Liew MW, Kannan N. SWAT: Model use, calibration, and validation, Transactions of the ASABE 55.4 (2012) 1491-1508. doi: 10.13031/2013.42256.
[24] Armen Poghosyan, Levon Arsenyan and Hrachya Astsatryan, Dynamic Features of Complex Systems: A Molecular Simulation Study, Springer High-Performance Computing Infrastructure for South East Europe’s Research Communities, pages 117–121, 2014.
[25] D.W. Chadwick, K. Siu, C. Lee, Y. Fouillat, Germonville D, Adding federated identity management to openstack, Journal of Grid Computing 12.1 (2014) 3-27. doi: 10.1007/s10723-013-9283-2.
[26] P. Budroni, J. Claude-Burgelman, M. Schouppe, Architectures of knowledge: the European open science cloud, ABI Technik 39.2 (2019) 130-41. doi: 10.1515/abitech-2019-2006.