Equipment and IT Infrastructure

In 2014, the ICMAT purchased a cluster, co-financed by European ERDF funds, that marked a breakthrough in high-performance computing for the center. The cluster has 20 compute nodes (2 × 8 cores, 32 GB RAM each), one node with Xeon Phi coprocessors, one node with two GPGPUs, and three fat nodes (2 × 8 cores, 512 GB RAM each).

Storage is based on Lustre over an InfiniBand network, providing 36 TB of high-speed, fault-tolerant, low-latency network-attached storage.

Cluster LOVELACE:

Total general-purpose computing cores: 400
Total Xeon Phi cores: 480
Total memory: 2,688 GiB

With a total of 400 general-purpose computing cores, plus 480 Xeon Phi cores, the layout of the cluster is as follows:

Cluster layout:

20 compute nodes, each with:
    2 × Intel Xeon E5-2640 v3 @ 2.60 GHz (8 cores, 20 MB L3 cache) [16 cores per node]
    32 GB RAM at 1.8 GHz

3 fat nodes, each with:
    2 × Intel Xeon E5-2640 v3 @ 2.60 GHz (8 cores, 20 MB L3 cache) [16 cores per node]
    512 GB RAM at 1.8 GHz

1 GPGPU node, with:
    2 × Intel Xeon E5-2640 v3 @ 2.60 GHz (8 cores, 20 MB L3 cache) [16 cores]
    256 GB RAM at 1.8 GHz
    2 × NVIDIA GK110GL [Tesla K20m] GPUs (rev a1)

1 Xeon Phi node, with:
    2 × Intel Xeon E5-2640 v3 @ 2.60 GHz (8 cores, 20 MB L3 cache) [16 cores]
    256 GB RAM at 1.8 GHz
    2 × Intel Xeon Phi 5100-series coprocessors (rev 11)
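As a quick consistency check, the headline totals can be re-derived from the per-node figures above. The following is a minimal Python sketch using only the numbers stated in the text; the interpretation of the 480 Xeon Phi "cores" in a comment is an assumption:

```python
# Re-derive the LOVELACE totals from the per-node layout.
nodes = [
    # (node count, general-purpose cores per node, RAM per node in GiB)
    (20, 16, 32),   # standard compute nodes
    (3,  16, 512),  # fat nodes
    (1,  16, 256),  # GPGPU node
    (1,  16, 256),  # Xeon Phi node
]

total_cores = sum(count * cores for count, cores, _ in nodes)
total_mem   = sum(count * ram   for count, _, ram in nodes)

print(total_cores)  # 400 general-purpose cores
print(total_mem)    # 2688 GiB total memory

# Assumption: the quoted 480 Xeon Phi cores presumably count hardware
# threads on the two 5100-series cards (2 cards x 60 cores x 4 threads),
# since each card has 60 physical cores.
```

Both printed values match the totals quoted above (400 cores, 2,688 GiB).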

ICMAT researchers also have access to the FINIS TERRAE supercomputer at CESGA (Santiago de Compostela), which was half funded by the CSIC. This supercomputer is one of the largest in Europe, with almost 20,000 GB of RAM and 2,600 processors.