{"id":11473,"date":"2024-07-16T08:59:27","date_gmt":"2024-07-16T08:59:27","guid":{"rendered":"https:\/\/hri.bigzero.co.in\/?page_id=11473"},"modified":"2025-02-01T13:11:09","modified_gmt":"2025-02-01T13:11:09","slug":"hri-hpc-facilities","status":"publish","type":"page","link":"https:\/\/hri.bigzero.co.in\/hi\/hri-hpc-facilities\/","title":{"rendered":"HRI-HPC Facilities"},"content":{"rendered":"
\n\t\t\t\t
\n\t\t
\n\t\t\t\t
\n\t\t\t\t
\n\t\t\t

HRI-HPC Facilities<\/h2>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t
\n\t\t\t\t
\n\t\t\t\t\t
\n\t\t\t\n\t\t\t\t\t\t<\/span>\n\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t
\n\t\t\t\t\t
\n\t\t\t\t
\n\t\t\t\t
\n\t\t\t\t\t
\n\t\t\t
<\/div>\n\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t
\n\t\t\t\t\t
\n\t\t\t\t
\n\t\t\t\t
\n\t\t\t\t\t\t\t
Clusters<\/span><\/strong><\/td>OEM\/Vendor<\/span><\/strong><\/td>Interconnect<\/span><\/strong><\/td>CPU\u2010Core \/ Nodes<\/span><\/strong><\/td>Memory<\/span><\/strong><\/td>Description<\/span><\/strong><\/td><\/tr>
C6\u2010Cluster<\/a><\/td>SuperMicro\/<\/strong>Netweb<\/a><\/td>1\u2010gigabit<\/td>4\u2010Core per node\u00a0\/<\/strong>
55 nodes<\/td>
16 GB<\/td>Grid Mathematica
jobs<\/td><\/tr>
C8\u2010Cluster<\/a><\/td>HP\u00a0\/<\/strong>Technet<\/a><\/td>1\u2010gigabit<\/td>12\u2010Core per node\/<\/strong>
49 nodes<\/td>
48 GB<\/td>Sequential\u00a0\/<\/strong>\u00a0Distributed
memory jobs<\/td><\/tr>
C9\u2010Cluster<\/a><\/td>IBM\u00a0\/<\/strong>Wipro<\/a><\/td>QDR\u2010InfiniBand
1\u2010gigabit<\/td>
16\u2010Core per node\/<\/strong>
49 nodes<\/td>
128 GB<\/td>Shared Memory\u00a0\/<\/strong>
MPI jobs<\/td><\/tr>
C10\u2010Cluster<\/a><\/td>IBM\u00a0\/<\/strong>Wipro<\/a><\/td>1\u2010gigabit<\/td>20\u2010Core per node\/<\/strong>
14 nodes<\/td>
64 GB<\/td>Sequential\u00a0\/<\/strong>\u00a0Distributed
memory jobs<\/td><\/tr>
C11\u2010Cluster<\/a><\/td>Fujitsu\/<\/strong>Locuz<\/a><\/td>QDR\u2010InfiniBand
1\u2010gigabit<\/td>
24\u2010Core per node\/<\/strong>
40 nodes<\/td>
96 GB<\/td>Shared Memory\u00a0\/<\/strong>
MPI jobs<\/td><\/tr>
C12\u2010Cluster<\/a><\/td>Fujitsu\/<\/strong>Micropoint<\/a><\/td>QDR\u2010InfiniBand
1\u2010gigabit<\/td>
24\u2010Core per node\/<\/strong>
44 nodes<\/td>
96 GB<\/td>Shared Memory\u00a0\/<\/strong>
MPI jobs<\/td><\/tr>
C13\u2010Cluster<\/a><\/td>Fujitsu\/<\/strong>Locuz<\/a><\/td>Omni\u2010Path (OPA)
100\u2010gigabit<\/td>
32\u2010Core per node\/<\/strong>
32 nodes<\/td>
192 GB<\/td>Shared Memory\u00a0\/<\/strong>
MPI jobs<\/td><\/tr><\/tbody><\/table>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t
\n\t\t\t\t
\n\t\t\t\t\t\t\t

This facility has been funded through the Five Year Plan grants received in response to proposals from faculty members at HRI, starting with the X Plan (2002-2007), continuing with the XI Plan (2008-2013) and the ongoing XII Plan (2013-2018).
The first cluster was set up in August 2000 using twelve desktop machines as compute nodes. Each node was a Pentium III computer (CPU speed: 550 MHz, memory: 256 MB), and the nodes were connected to each other via an Ethernet switch. This cluster was used more for learning parallel programming and cluster administration than for anything else. It was retired in April 2002, and the machines were used as desktops for another three years.

The second cluster used sixteen Pentium 4 computers (CPU speed: 1.6 GHz, memory: 1 GB), again with Ethernet as the interconnect. The peak performance of each node was 2.2 GFLOPS. This cluster was used very heavily, as each node was more powerful than any other machine available on the HRI network at the time. It was retired in late 2005.<\/p>


The third cluster was Kabir, a 42-node cluster of dual-processor servers, each with two 2.4 GHz Intel Xeon processors and 2 GB of RAM. And the journey continues…<\/p>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t

\n\t\t\t\t
\n\t\t\t\t\t
\n\t\t\t
<\/div>\n\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>","protected":false},"excerpt":{"rendered":"

HRI-HPC Facilities Clusters OEM\/Vendor Interconnect CPU\u2010Core \/ Nodes Memory Description C6\u2010Cluster SuperMicro\/Netweb 1\u2010gigabit 4\u2010Core per node \/ 55 nodes 16 GB Grid Mathematica jobs C8\u2010Cluster HP \/ Technet 1\u2010gigabit 12\u2010Core per node \/ 49 nodes 48 GB Sequential \/ Distributed memory jobs C9\u2010Cluster IBM \/ Wipro QDR\u2010InfiniBand 1\u2010gigabit 16\u2010Core per node \/ 49 nodes 128 GB Shared Memory \/ MPI jobs C10\u2010Cluster IBM \/ Wipro 1\u2010gigabit 20\u2010Core per node \/ 14 nodes 64 GB Sequential \/ Distributed memory jobs C11\u2010Cluster Fujitsu\/Locuz QDR\u2010InfiniBand 1\u2010gigabit 24\u2010Core per node \/ 40 nodes 96 GB Shared Memory \/ MPI jobs C12\u2010Cluster Fujitsu\/Micropoint QDR\u2010InfiniBand 1\u2010gigabit 24\u2010Core per node \/ 44 nodes 96 GB Shared Memory \/ MPI jobs C13\u2010Cluster Fujitsu\/Locuz Omni\u2010Path (OPA) 100\u2010gigabit 32\u2010Core per node \/ 32 nodes 192 GB Shared Memory \/ MPI jobs This facility has been funded through the Five Year Plan grants received in response to proposals from faculty members at HRI, starting with the X Plan (2002-2007), continuing with the XI Plan (2008-2013) and the ongoing XII Plan (2013-2018). The first cluster was set up in August 2000 using twelve desktop machines as compute nodes. Each node was a Pentium III computer (CPU speed: 550 MHz, memory: 256 MB), and the nodes were connected to each other via an Ethernet switch. This cluster was used more for learning parallel programming and cluster administration than for anything else. It was retired in April 2002, and the machines were used as desktops for another three years. The second cluster used sixteen Pentium 4 computers (CPU speed: 1.6 GHz, memory: 1 GB), again with Ethernet as the interconnect. The peak performance of each node was 2.2 GFLOPS.
This cluster was used very heavily, as each node was more powerful than any other machine available on the HRI network at the time. It was retired in late 2005. The third cluster was Kabir, a 42-node cluster of dual-processor servers, each with two 2.4 GHz Intel Xeon processors and 2 GB of RAM. And the journey continues…<\/p>","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"elementor_header_footer","meta":{"_acf_changed":false,"_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"class_list":["post-11473","page","type-page","status-publish","hentry"],"acf":[],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/hri.bigzero.co.in\/hi\/wp-json\/wp\/v2\/pages\/11473","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hri.bigzero.co.in\/hi\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/hri.bigzero.co.in\/hi\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/hri.bigzero.co.in\/hi\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/hri.bigzero.co.in\/hi\/wp-json\/wp\/v2\/comments?post=11473"}],"version-history":[{"count":49,"href":"https:\/\/hri.bigzero.co.in\/hi\/wp-json\/wp\/v2\/pages\/11473\/revisions"}],"predecessor-version":[{"id":23194,"href":"https:\/\/hri.bigzero.co.in\/hi\/wp-json\/wp\/v2\/pages\/11473\/revisions\/23194"}],"wp:attachment":[{"href":"https:\/\/hri.bigzero.co.in\/hi\/wp-json\/wp\/v2\/media?parent=11473"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}