Matter Modeling Asked on August 19, 2021
For a matter modelling person, the most valuable resource is computing power. For many of us, computing power at hand limits the scale of problems we can solve. There are many national supercomputing facilities for academics. What are the resources available in each country?
Before Compute Canada (Antiquity)
Supercomputing in Canada began with several disparate regional consortia, among them WestGrid and SHARCNET.
Also in 2003, a high-speed optical link was established between WestGrid and SHARCNET (west and east).
Amalgamation into Compute Canada (CC)
Throughout this time, SHARCNET and the others continued expanding to include more universities, colleges, and research institutions. ComputeOntario added HPC4Health, and the sub-groups of CC grew.
HPC facilities offered
Because CC is an amalgamation of several pre-existing consortia, a full account of all the systems it offers would need a separate question. The following storage was made available on the new national systems after the formation of CC:
- /home: 250 TB total; /scratch: 3.7 PB total (Lustre); /project: 10 PB total.
- /home: 64 TB total; /scratch: 3.6 PB total (Lustre); /project: 16 PB total.

Correct answer by Nike Dattani on August 19, 2021
Supercomputing in India started in the 1980s. After difficulties in obtaining supercomputers from abroad for weather forecasting and academic work (due to their potential for dual use), it was decided to build indigenous supercomputing facilities.
Supercomputers were made by C-DAC (Center for Development of Advanced Computing, est. 1987) Pune, in several 'Missions', leading to the production of the PARAM (PARAllel Machine, also 'supreme' in Sanskrit) series.
Examples include PARAM 8000 (1990; several models, including exports to Germany, the UK, and Russia), PARAM 9000 (1994), PARAM Padma (2002), PARAM ISHAN (2016, IIT Guwahati campus) and PARAM Brahma (2020, IISER Pune campus). These supercomputers are interfaced with via PARAMNet. (IITs (Indian Institutes of Technology) and IISERs (Indian Institutes of Science Education and Research) are families of premier Indian research and technical institutes.)
There is also a project under the 12th five-year plan handled by the Indian Institute of Science (IISc), Bangalore.
The National Supercomputing Mission, jointly implemented by the Department of Science and Technology (DST), the Ministry of Electronics and Information Technology (MeitY), IISc, and C-DAC, is creating 70 supercomputers in various academic and research institutes linked by a high-speed network.
Three supercomputers were built during 2015-19, and 17 more are being built in 2020.
As per C-DAC's website:
C-DAC has commissioned and operates three national supercomputing facilities for the HPC user community.
These are:
C-DAC also provides high-performance computing facilities in the form of PARAM SHAVAK.
Other than the facilities directly hosted by C-DAC, most premier academic institutions have supercomputing facilities. Examples:
Top of the line supercomputers are also available with other organizations. Examples:
The above list is not complete; many other institutions also operate supercomputing facilities (for instance, IIT Roorkee has a PARAM 10000), and those that don't often have lower-powered server clusters offering computing power to researchers (IIT Indore operates an IBMx Intel HPC cluster).
Answered by Devashish on August 19, 2021
In Switzerland, the Swiss National Supercomputing Centre (CSCS) provides most of the computing power. Refer to the Wikipedia article for a list of all its computing resources; it started with a 2-processor computer in 1992. Most notably, since December 2012 it has been the provider of Piz Daint, which in 2016, after an upgrade, became the third most powerful supercomputer in the world at 25 petaflops. Piz Daint is a Cray XC50/XC40 system featuring Nvidia Tesla P100 GPUs. The title of "third most powerful supercomputer in the world" is unfortunately no longer current. At the time of writing, CSCS also provides four other active clusters. The CSCS computers are used by universities and research facilities, including meteorology/weather services, and by private stakeholders.
Of course, many universities and sub-departments have their own smaller clusters for high-performance and specialized applications. For example, when studying at ETH Zürich, I had access to a cluster for students of D-CHAB (the chemistry department) called Realbeaver, to the ETH computer cluster Euler, which is currently in stage VII of its expansion, and to Piz Daint, mentioned above. For the latter two, the computing resources are limited according to shareholder agreements. For students, the resources generally depend on the course they are taking or the group they do their project in.
Answered by BernhardWebstudio on August 19, 2021
Finland has a long history in supercomputing: CSC, the Finnish IT Center for Science, administered by the Finnish Ministry of Education and Culture, has provided computing services since 1971, starting with a Univac computer.
The strategy in Finland has been to pool national resources from the start, and this has given Finnish researchers access to up-to-date computing resources for many decades. The policy of CSC has been to update their supercomputers regularly, and they have been a near-permanent presence on the Top500 list of the world's supercomputers.
Although many universities and departments in Finland also operate their own computer clusters, anyone with an academic affiliation in Finland can get a CSC user account, and apply for their computational resources with a relatively easy procedure. This has greatly aided computational work (especially in matter modeling!) for a long time.
CSC is currently installing new supercomputers. In addition to the recently installed Puhti supercomputer (an Atos BullSequana X400 system, 1.8 petaflops, 682 nodes with two 20-core Xeon Gold 6230 processors each, i.e. 27,280 cores in total, a mix of memory sizes across the nodes, and a 4+ PB Lustre storage system), the upcoming Mahti and Lumi supercomputers will bring a huge increase in computing power.
Mahti is an Atos BullSequana XH2000 supercomputer with 1404 nodes and a peak performance of 7.5 petaflops. Each node has two 64-core AMD EPYC 7H12 (Rome) processors with a 2.6 GHz base frequency (3.3 GHz max boost) and 256 GB of memory. Mahti will also have an 8.7 PB Lustre parallel storage system. Mahti should become generally available to Finnish users in August 2020.
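As a rough sanity check on the quoted figures (a sketch under my own assumptions, not CSC's numbers): the core counts follow directly from the node counts, and Mahti's peak can be reproduced by assuming 16 double-precision FLOPs per cycle per core for AMD Rome (two 256-bit FMA units), a figure not stated above.

```python
# Reproducing the CSC core counts and Mahti's peak from the node specs above.

# Puhti: 682 nodes, each with two 20-core Xeon Gold 6230 CPUs.
puhti_cores = 682 * 2 * 20
print(puhti_cores)                      # 27280, matching the quoted total

# Mahti: 1404 nodes, each with two 64-core AMD EPYC 7H12 (Rome) CPUs at 2.6 GHz.
mahti_cores = 1404 * 2 * 64             # 179712 cores

# Assumption: 16 double-precision FLOPs per cycle per core (two 256-bit FMAs).
mahti_peak_flops = mahti_cores * 2.6e9 * 16
print(f"{mahti_peak_flops / 1e15:.2f} PFLOPS")  # ~7.48 PFLOPS, consistent with the quoted 7.5
```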
Lumi is a EuroHPC supercomputer with over 200 petaflops of computing power and over 60 PB of storage; it will become available in early 2021. Although this is a European joint project, since the supercomputer is based in Finland it will have a quota for Finnish users.
Answered by Susi Lehtola on August 19, 2021
Users can apply for time on nationally shared computing resources (e.g. TAIWANIA 1). Unfortunately, only a limited amount of the support is available in English (most of it is in Traditional Chinese).
Answered by taciteloquence on August 19, 2021
Several other answers mention US centers at national labs and the NSF's XSEDE. There is another NSF-funded project for high-throughput computing (HTC)*, as opposed to traditional high-performance computing (HPC):
OSG (Open Science Grid)
The OSG is a distributed, international network of computing facilities aimed at providing high throughput computing. Rather than having a large central system, they utilize the unused cycles of computers in their network (some of which are traditional HPC systems, whereas others are closer to commodity resources).
Because OSG focuses on HTC across a distributed network, they have particular criteria about what sorts of jobs they can support. For example, parallelized parameter sweeps or image processing on discrete datasets would benefit from HTC/OSG, whereas jobs that share a large dataset or are otherwise tightly coupled wouldn't benefit much.
Nonetheless, a lot of analyses can be broken into small, independent jobs to run opportunistically on the network, so they have a lot of usage in the science communities.
*Briefly, HTC differs from HPC in that HTC focuses on the sustained execution of many discrete "jobs" over longer periods of time (months/years), compared to the shorter time scales (seconds/days) of HPC-centric systems. For HTC, metrics like FLOPS or peak performance are not very relevant; instead, the number of operations completed over weeks/months/years is of interest. HTCondor has more about HTC and is used in the OSG.
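To make the distinction concrete, here is a minimal, hypothetical sketch of the kind of workload that suits OSG-style HTC: each invocation handles one parameter point on its own and writes its own output, so thousands of copies can be queued independently (e.g. as an HTCondor job array) with no communication between them. The script name and arguments are illustrative, not an OSG-specific API.

```python
#!/usr/bin/env python3
"""Hypothetical embarrassingly parallel task: one parameter point per job.

Each instance is submitted separately (e.g. via an HTCondor queue), reads its
own index from the command line, and writes its own output file -- there is no
communication between jobs, which is what makes this a good HTC fit.
"""
import json
import sys


def run_single_point(index: int) -> dict:
    # Placeholder for the real per-point work (one geometry, one temperature,
    # one image, ...); kept trivial here on purpose.
    parameter = 0.1 * index
    return {"index": index, "parameter": parameter, "result": parameter ** 2}


if __name__ == "__main__":
    job_index = int(sys.argv[1])        # e.g. $(Process) supplied by the scheduler
    with open(f"result_{job_index}.json", "w") as fh:
        json.dump(run_single_point(job_index), fh)
```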
Answered by ascendants on August 19, 2021
NATIONAL SUPERCOMPUTING CENTER IN CHANGSHA
Hunan University is responsible for operation and management, and the National University of Defense Technology is responsible for technical support.
The peak performance of the whole system is 1372 teraflops, of which the CPUs contribute 317.3 teraflops and the GPUs 1054.7 teraflops.
The system is configured with 2048 blade nodes forming a computing array. Each node uses two 6-core Intel Xeon (Westmere-EP) processors with a 2.93 GHz clock frequency and 48 GB of memory, and is equipped with an Nvidia Tesla M2050 GPU. A single compute node has a peak CPU performance of 140.64 GFLOPS and a peak GPU performance of 515 GFLOPS.
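The per-node CPU peak quoted above can be reproduced from the node description if one assumes 4 double-precision FLOPs per cycle per core for that Westmere-EP generation (128-bit SSE multiply plus add); that FLOPs-per-cycle figure is my assumption, not part of the source.

```python
# Peak CPU performance of one compute node, from the figures quoted above.
sockets = 2           # 2-way node
cores_per_socket = 6  # 6-core Xeon (Westmere-EP)
clock_ghz = 2.93
flops_per_cycle = 4   # assumption: SSE multiply + add per cycle for this generation

peak_gflops = sockets * cores_per_socket * clock_ghz * flops_per_cycle
print(f"{peak_gflops:.2f} GFLOPS")  # 140.64 GFLOPS, matching the quoted per-node CPU peak
```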
NATIONAL SUPERCOMPUTER CENTER IN TIANJIN
NATIONAL SUPERCOMPUTER CENTER IN JINAN
NATIONAL SUPERCOMPUTER CENTER IN GUANGZHOU
NATIONAL SUPERCOMPUTING CENTER IN SHENZHEN
NATIONAL SUPERCOMPUTING CENTER IN WUXI
Answered by Franksays on August 19, 2021
Universities have supercomputers of smaller magnitude, but they serve the same function. A supercomputer is not a fancy state-of-the-art setup; its processing power is determined by the number of independent processors it is equipped with. A real supercomputer may even use obsolete, years-old processors (whose acquisition value is insignificant), since using state-of-the-art processors would make supercomputers even more ridiculously expensive than they already are. A state-of-the-art Intel Xeon processor, for example, costs thousands of dollars, and acquiring the chips needed to build a supercomputer could cost over 2 billion dollars for the chips alone. Obsolete chips from disposed computers cost virtually nothing. With the advent of mainframe computing, companies that specialized in supercomputer architectures either went out of business or folded, like Cray, Wang, etc.
Common mainframes can be built: a simple motherboard is equipped with several processors, the motherboards are inserted into boxes (shelves) mounted vertically on a rack, and the mainframe chassis are then linked together. A supercomputer does what your computer at home does, just with tens of thousands of processors, some dedicated exclusively to graphics/physics engines.
With distributed computing and cloud setups, processing without the need for large mainframes is becoming more common. Google rents supercomputer time, and one company, Cycle Computing, has assembled a makeshift supercomputer by linking old mainframes, at a cost of about $1,300 per hour.
The biggest detriment to supercomputing now is energy consumption. The proliferation of more and more computing power has led to an exponential rise in energy demand. Processors get hot: for every watt of energy dedicated to actual processing, three watts are needed to mechanically move the waste heat away from the system. As more and more systems are added, more and more heat must be removed. Air-based heat exchangers in cold climates can help with this (the Thor Data Center in Reykjavik, Iceland, runs its supercomputer's air-cooling units outdoors). In the mid-1990s a top-10 supercomputer required on the order of 100 kilowatts; in 2010 the top-10 supercomputers required between 1 and 2 megawatts. Larger-scale supercomputing means vaster energy requirements, with much of that energy dedicated solely to heat dissipation.
Answered by LazyReader on August 19, 2021
ARCHER (Advanced Research Computing High End Resource)
ARCHER is, as of today, the UK's national supercomputing service, run by the EPCC (Edinburgh Parallel Computing Centre). It has been operating since late 2013 and is based around a Cray XC30 supercomputer. Note, however, that ARCHER is right at the end of its life cycle. It was due to shut down in February of this year, but things are slightly behind schedule. (In fact, ARCHER2 is currently being set up and is due to be operational shortly; see below.)
Here is a brief overview of its capabilities from the hardware & software informational page.
ARCHER compute nodes contain two 2.7 GHz, 12-core E5-2697 v2 (Ivy Bridge) series processors. Each of the cores in these processors can support 2 hardware threads (Hyperthreads). Within the node, the two processors are connected by two QuickPath Interconnect (QPI) links.
Standard compute nodes on ARCHER have 64 GB of memory shared between the two processors. There are a smaller number of high-memory nodes with 128 GB of memory shared between the two processors. The memory is arranged in a non-uniform access (NUMA) form: each 12-core processor is a single NUMA region with local memory of 32 GB (or 64 GB for high-memory nodes). Access to the local memory by cores within a NUMA region has a lower latency than accessing memory on the other NUMA region.
There are 4544 standard memory nodes (12 groups, 109,056 cores) and 376 high memory nodes (1 group, 9,024 cores) on ARCHER giving a total of 4920 compute nodes (13 groups, 118,080 cores). (See the "Aries Interconnect" section below for the definition of a group.)
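The node and core totals quoted above are mutually consistent; the short check below simply multiplies them out, using the 24 cores per node implied by the two 12-core Ivy Bridge processors.

```python
# Consistency check on the ARCHER node/core counts quoted above.
cores_per_node = 2 * 12                    # two 12-core Ivy Bridge processors

standard_nodes, high_memory_nodes = 4544, 376
print(standard_nodes * cores_per_node)     # 109056 cores in standard nodes
print(high_memory_nodes * cores_per_node)  # 9024 cores in high-memory nodes
print((standard_nodes + high_memory_nodes) * cores_per_node)  # 118080 cores in total
```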
ARCHER2
The successor to ARCHER is currently being installed at the EPCC. See the news section on the website.
Again, here is a brief overview from the hardware & software informational page.
ARCHER2 will be a Cray Shasta system with an estimated peak performance of 28 PFLOP/s. The machine will have 5,848 compute nodes, each with dual AMD EPYC Zen2 (Rome) 64 core CPUs at 2.2GHz, giving 748,544 cores in total and 1.57 PBytes of total system memory.
ARCHER2 should be capable on average of over eleven times the science throughput of ARCHER, based on benchmarks which use five of the most heavily used codes on the current service. As with all new systems, the relative speedups over ARCHER vary by benchmark. The ARCHER2 science throughput codes used for the benchmarking evaluation are estimated to reach 8.7x for CP2K, 9.5x for OpenSBLI, 11.3x for CASTEP, 12.9x for GROMACS, and 18.0x for HadGEM3.
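The "over eleven times" headline is consistent with averaging the five per-code estimates quoted above. The source does not say which average was used, but both the arithmetic and geometric means of those figures come out above eleven, as the small check below shows.

```python
from statistics import geometric_mean

# Per-code ARCHER2-vs-ARCHER speedup estimates quoted above.
speedups = {"CP2K": 8.7, "OpenSBLI": 9.5, "CASTEP": 11.3, "GROMACS": 12.9, "HadGEM3": 18.0}

arithmetic = sum(speedups.values()) / len(speedups)
geometric = geometric_mean(speedups.values())
print(f"arithmetic mean: {arithmetic:.1f}x")  # ~12.1x
print(f"geometric mean:  {geometric:.1f}x")   # ~11.7x -- either way, over eleven times
```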
MMM Hub (Materials and Molecular Modelling Hub)
This one couldn't be more suited to the concerns of this SE, as is evident from the name!
The Hub hosts a high performance computing facility known as Thomas. Thomas is a 17,000 core machine based around Lenovo 24 core Intel x86-64 nodes. It is designed to support small to medium sized capacity computing focusing on materials and molecular modelling. 75% of Thomas is reserved for Tier-2 use by MMM Hub partners who are contributing towards the running costs of the facility. The other 25% of the machine is available free of charge to materials and molecular modelling researchers from anywhere in the UK.
The Hub is operated through the partnership of eight of the UK’s leading universities (UCL, Imperial College London, Queen Mary University of London, Queen’s University Belfast, the University of Kent, King’s College London, the University of Southampton and the University of Oxford) and OCF Plc.
See the page for the Thomas supercomputer for points of contact at each institution.
See the above link for other (Tier 2) services. Note that some like DiRAC are domain-specific (targeted at particle physics and astronomy research), though paid access is available for users outside of these fields.
Answered by Noldorin on August 19, 2021
Other answers have addressed National Science Foundation (NSF) resources via XSEDE here and Department of Energy (DOE) resources here within the United States. Another set of computing resources in the US are those via the Department of Defense (DoD).
HPCMP (High Performance Computing Modernization Program)
The DoD High Performance Computing Modernization Program (HPCMP) handles the computing centers administered by the DoD. As might be expected, the DoD HPCMP resources are meant for research that aligns with DoD mission statements. For those interested, the Army Research Laboratory (ARL), Air Force Research Laboratory (AFRL), and Navy Research Laboratory (NRL) all put out broad agency announcements (BAAs) that describe the current areas of research. An example for the Army Research Office can be found here.
Access to DoD HPCMP resources is generally restricted to those that already receive research funding from the DoD, so they are not as easy to get access to as NSF's XSEDE or DOE's NERSC. However, it is a major source of research computing in the US all the same. The DoD HPCMP has several machines that are meant for unclassified research that academics can get access to, provided they are supported by the DoD. These machines are outlined here and include many of the top computing machines in the world. As an example, the US Air Force's Mustang is currently #80 on the TOP500 list.
Answered by Andrew Rosen on August 19, 2021
Kan Balam (2007): Universidad Nacional Autónoma de México (UNAM)
Aitzaloa (2008): Universidad Autónoma Metropolitana (UAM)
Atócatl (2011): Universidad Nacional Autónoma de México (UNAM)
Abacus (2014): Centro de Investigación y Estudios Avanzados (CINVESTAV)
Miztli (2013): Universidad Nacional Autónoma de México (UNAM)
Yoltla (2014): Universidad Autónoma Metropolitana (UAM). Reaches 45 TFLOPS.
Xiuhcoatl (2012): Centro de Investigación y Estudios Avanzados (CINVESTAV). Connected via optical fiber to Kan Balam and Aitzaloa; combined, they exceed 7000 CPUs and 300 TFLOPS.
The supercomputers mentioned so far are owned by universities or university research centers. Additionally, Mexico has a National Supercomputing Laboratory, which provides service to users nationwide. It is hosted by the Benemérita Universidad Autónoma de Puebla (BUAP) and is called the "Laboratorio Nacional de Supercómputo" (LNS). Their full infrastructure page is here, and below is a summary of Cuetlaxcoapan, the main system.
Cuetlaxcoapan: LNS
Answered by Etienne Palos on August 19, 2021
NERSC (National Energy Research Scientific Computing Center)
NERSC, located at Lawrence Berkeley National Laboratory, is the primary computing facility for the DOE. Currently its main HPC system is Cori, a Cray XC40 at #16 on the Top500 list, but a new Cray system named Perlmutter is due to be installed from late 2020 through mid-2021. Both systems have (or will have) both GPU-accelerated and pure-CPU nodes. NERSC also provides a good number of training opportunities for its users, some in cooperation with the leadership facilities mentioned below.
From their mission statement:
The mission of the National Energy Research Scientific Computing Center (NERSC) is to accelerate scientific discovery at the DOE Office of Science through high performance computing and data analysis.
From their website:
More than 7,000 scientists use NERSC to perform basic scientific research across a wide range of disciplines, including climate modeling, research into new materials, simulations of the early universe, analysis of data from high energy physics experiments, investigations of protein structure, and a host of other scientific endeavors.
All research projects that are funded by the DOE Office of Science and require high performance computing support are eligible to apply to use NERSC resources. Projects that are not funded by the DOE Office of Science, but that conduct research that supports the Office of Science mission may also apply.
DOE also has two so-called leadership computing facilities. The point of these is not to support typical, small-scale computational research. Instead, they deliberately target a limited number of large-scale projects in need of large allocations, projects that may not be possible elsewhere. From experience with OLCF there is often also a need to demonstrate that your code can take advantage of the hardware offered.
OLCF (Oak Ridge Leadership Computing Facility)
The Oak Ridge Leadership Computing Facility (formerly known as the National Leadership Computing Facility), located at Oak Ridge National Laboratory, is home to the Summit supercomputer, which debuted at #1 on the Top500 list but was recently dethroned to #2. Its next supercomputer, Frontier, is expected to reach exascale performance and to open to users in 2022.
ALCF (Argonne Leadership Computing Facility)
The Argonne Leadership Computing Facility (at Argonne National Laboratory) has a similar role. Currently, its main supercomputer is Theta (#34 on the Top500 list). Their planned exascale supercomputer Aurora is coming in 2021.
Answered by Anyon on August 19, 2021
CENAPAD stands for Centro Nacional de Processamento de Alto Desempenho (National High-Performance Processing Center). The CENAPADs form a supercomputing network instituted by the Ministry of Science, Technology and Innovation (MCTI) and coordinated by the National High-Performance Processing System (SINAPAD).
Some of them are:
Below is the distribution of the SINAPAD-related centers.
Just as a curiosity, the image below shows the CPU use by Brazilian states between 1995 and 2015.
Answered by Camps on August 19, 2021
XSEDE (Extreme Science and Engineering Discovery Environment)
XSEDE (pronounced like "exceed") provides access to both computational resources and training on HPC. These may be especially useful if your institution does not provide good support for scientific computing.
From their website:
- XSEDE provides live and recorded training on a wide range of research computing topics.
- XSEDE programs offer our users in-depth collaborations and on-campus facilitators.
- Most US-based researchers are eligible for no-cost XSEDE allocations. Get started in two weeks or less!
Answered by taciteloquence on August 19, 2021