1. Articles in category: Supercomputer

    49-72 of 89 « 1 2 3 4 »
    1. LBNL Plans For the Exascale Data Center

      Last week, Lawrence Berkeley National Laboratory (LBNL) broke ground on a facility that will house its vision for the supercomputer of the future. The 140,000 square foot data center will overlook the San Francisco Bay from a hill above the UC Berkeley campus. It may also provide the first view into exascale – the new frontier for supercomputing. In planning for supercomputers that can surpass current petaflop levels and reach exaFLOPS (1,000,000,000,000,000,000 floating point operations per second), the U.S. Department of Energy has recognized that the energy consumed in powering that compute load is a particular challenge.

      Read Full Article
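      A quick sketch of the prefix arithmetic behind the figures above: the petaflop and exaflop scales differ by a factor of 1,000.

```python
# FLOPS scale prefixes referenced above; each prefix step is a factor of 1,000.
PETAFLOP = 10**15  # floating point operations per second
EXAFLOP = 10**18

# An exascale machine does 1,000x the work of a 1-petaflop system.
print(EXAFLOP // PETAFLOP)  # 1000
```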
    2. Illinois supercomputing center gets LEED Gold

      The newly built data center at the University of Illinois that will soon support one of the world’s most powerful supercomputers received LEED Gold certification from the US Green Building Council. Even with 24MW of critical load, the National Petascale Computing Facility on the site of the university’s National Center for Supercomputing Applications (NCSA) was given the USGBC’s second-highest award in recognition of an energy efficient design and a construction process that minimized impact on the surrounding environment.
      Read Full Article
    3. Blue Waters Data Center Achieves LEED Gold

      The National Center for Supercomputing Applications (NCSA) announced that the National Petascale Computing Facility (NPCF) at the University of Illinois has earned a Gold-level certification under the LEED (Leadership in Energy and Environmental Design) rating program for energy-efficient buildings. Constructed in 2010, the NPCF data center was opened by the University of Illinois and NCSA as the home to supercomputers and other high-performance systems operated by NCSA and used by scientists and engineers across the country. The Blue Waters project encompassed the NPCF and a 10 petaflop supercomputer, which was initially a venture with IBM. In 2011 NCSA and IBM determined that the project was too complex to proceed; IBM pulled the plug, and NCSA later awarded the contract to Cray to build an XE6 system.
      Read Full Article
      Mentions: IBM LEED
    4. TACC Builds Data Center for New Supercomputer

      The Texas Advanced Computing Center (TACC) at The University of Texas at Austin announced that it is expanding the center’s current high performance computing data center to house the new Stampede supercomputer, which will be built in late 2012 and go into full production for the national science community in January 2013. The $56 million project will encompass a machine room and raised floor expansion, a separate building for the transformer yard, a location to house the chillers, compressors and cooling towers, a tank for thermal energy storage, and an additional seminar room for training. The funds will also pay for long-term upgrades to support the infrastructure of future projects. In this video Dan Stanzione, deputy director of the Texas Advanced Computing Center, talks about the power and cooling requirements of the expanded facility. Run time is about 2 minutes, 45 seconds.
      Read Full Article
    5. HPC News, SGI, Blue Waters, Dell

      Here’s our review of today’s noteworthy links for the High Performance Computing (HPC) industry: Cray delivers first Blue Waters cabinet. On December 1 Cray delivered the first full cabinet for the NCSA Blue Waters system. A photo gallery of the installation day can be found in the NCSA Facebook album, where the comments confirm that the cabinets will be water-cooled. The National Science Foundation’s Blue Waters project was awarded to Cray last month after NCSA and IBM terminated the original contract last summer. Dell’s HPC strategy. The Register reports on how Dell is going to engage the market to grow its HPC business. Dell’s primary focus is on smaller HPC systems, where projects are well-bounded, workloads are known, and the customers are ones Dell knows and understands. Dell is putting together recipes for popular HPC apps in small, medium ...
      Read Full Article
    6. NCSA Blue Waters Project Awarded To Cray

      Cray announced that it has finalized a contract with the University of Illinois’ National Center for Supercomputing Applications (NCSA) to provide the supercomputer for the National Science Foundation’s Blue Waters project. Back in August NCSA and IBM jointly announced that IBM had terminated its contract with the University of Illinois. The multi-phase, multi-year project was awarded to Cray for $188 million and will start with a Cray XE6 system, upgrading to the recently announced Cray XK6 with built-in GPU computing capability. Bill Kramer, deputy project director of the Blue Waters project at the NCSA at the University of Illinois, told The Register that Blue Waters was not a specific system, but rather a complete set of infrastructure, including a data center, plus computation, networking, and storage and, most importantly given the software goals of the NCSA, code that scales to real-world petaflops performance.
      Read Full Article
    7. IBM Files Patent For 100 Petaflop Supercomputer

      IBM has filed a patent for a massive supercomputing system that could reach 107 petaflops, more than 12 times the compute power of the current leader in the Top 500 supercomputer rankings. Last month IBM unveiled the Blue Gene/P and /Q systems that will use the A2 processing core and achieve upwards of 20 petaflops (quadrillion floating point operations per second). The new patent describes interconnected ASIC nodes using a five-dimensional torus network, and the system is listed as being “capable of achieving 107 petaflop with up to 8,388,608 cores, or 524,288 nodes, or 512 racks is provided.”
      Read Full Article
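      The core, node, and rack counts quoted from the patent filing are consistent powers of two; a quick check of the implied ratios (figures as quoted above, not independently verified):

```python
# Figures as quoted from the patent summary above.
cores, nodes, racks = 8_388_608, 524_288, 512

print(cores // nodes)  # 16 cores per node
print(nodes // racks)  # 1024 nodes per rack
```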
    8. IBM To Power 20 Petaflop Supercomputer

      IBM lifted the curtain on its Blue Gene/Q SoC last week in Santa Clara and noted that it will soon be installed in two of the most powerful Blue Gene systems ever deployed. With the plug pulled on the 10 petaflop Power7-based Blue Waters for NCSA, IBM is working with two Department of Energy labs on a 10 petaflop “Mira” system at Argonne National Lab and a 20 petaflop “Sequoia” at Lawrence Livermore. The current top supercomputer in the world, the Japanese K, can sustain 8.162 petaflops. The Power7 chip was set to perform at 256 gigaflops across 8 cores while consuming 200 watts, whereas the Blue Gene/Q SoC will deliver 204 gigaflops per processor, with an 18-core count, and consume 55 watts at peak. With a significant increase in performance the Blue Gene/Q chip delivers 15 times as many peak ...
      Read Full Article
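      A quick check of the performance-per-watt arithmetic implied by the chip figures quoted above (numbers as quoted in the summary, not independently verified):

```python
# Per-chip figures as quoted in the summary above.
power7_gflops, power7_watts = 256, 200  # 8-core Power7
bgq_gflops, bgq_watts = 204, 55         # 18-core Blue Gene/Q SoC

power7_eff = power7_gflops / power7_watts  # ~1.28 gigaflops per watt
bgq_eff = bgq_gflops / bgq_watts           # ~3.71 gigaflops per watt

print(round(bgq_eff / power7_eff, 1))  # roughly 2.9x more work per watt
```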
    9. NSA building $896M supercomputing center

      The NSA's new High Performance Computing Center, slated to be complete by December 2015, will be designed with energy efficiency, security, and lots of "state-of-the-art" computing horsepower in mind, according to unclassified specs found in the documents, which detail numerous military construction project budgets, including several NSA efforts. The NSA has long been a supercomputing powerhouse. The secretive signals intelligence agency purchased the first Cray supercomputer in 1976, and even keeps two Cray supercomputers on display at its National Cryptologic Museum alongside spy gadgets such as centuries-old code books and a working German Enigma machine from World War II. The specs for the new supercomputing center read much like the NSA is building a massive data center, with typical requirements for raised flooring, chilled water systems, fire suppression, and alarms. Power requirements are 60 megawatts, equivalent to the power requirements of Microsoft's recently completed 700,000 square foot ...
      Read Full Article
    10. Building A Sturdy Data Center Roof

      We’ve seen a lot of videos that look at various aspects of data center construction, including many time-lapse videos providing an accelerated view of the process. Here’s a new one: a video that focuses on the construction of the data center roof for the new Swiss National Supercomputing Centre (CSCS) in Lugano. This 1-minute clip provides a sense of the infrastructure for a strong roof, which is an important consideration in buildings where heavy equipment will be stored on the rooftop. Each roof beam is 35 meters long and weighs 50 tons, and the beams are moved into place by a mobile crane that weighs 380 tons.
      Read Full Article
    11. Biggest Problem for Exascale Computing: Power

      Read Full Article
    12. Google Unveils Earth Engine to Save World’s Forests

      Protecting the world’s forests will be a crucial way to fight climate change, given that deforestation contributes more carbon emissions than all vehicles combined. Now Google has emerged as a key warrior in the deforestation battle. On Thursday morning in Cancun, Mexico, at the COP 16 U.N. climate negotiations, the search engine giant unveiled Google Earth Engine, a product that combines an open API, a computing platform and 25 years of satellite imagery available to researchers, scientists, organizations and government agencies. While the software and satellite imagery in Google Earth are already being used to look at world climate change data, Google Earth Engine offers tools and parallel processing power that let groups use satellite imagery to analyze environmental conditions and make sustainability decisions.
      Read Full Article
    13. Tiny Supercomputers The Size of a Sugarcube

      The world's most powerful supercomputer could be the size of a sugar cube and more energy efficient than you might ever imagine. Researchers at IBM's Zurich Labs have developed a prototype supercomputer called the Aquasar that uses a water-cooling principle to keep the system from overheating. The Aquasar is a normal-sized computer; there's nothing tiny about it. But IBM thinks that the water-cooling technology that's proven effective in this supercomputer could work just as well in a vastly smaller machine. The processors in today's computers get very hot, and they have to be cooled off, usually by air. IBM found that using water to cool off a computer's processors is 4,000 times more efficient than using air.
      Read Full Article
      Mentions: IBM
    14. IBM Research A Clear Winner in Green 500

      A system from IBM Research is the most energy efficient supercomputer in the world, finishing atop the Green 500 list released today at the SC10 supercomputing conference in New Orleans. The Green 500 list recognizes the systems with the best performance-per-watt to raise awareness about the power consumption of high-performance clusters and “ensure that supercomputers only simulate climate change, not create it.” The IBM Research system proved more efficient than the more powerful Tsubame 2.0 from the Tokyo Institute of Technology, which placed second. The Chinese Tianhe-1A system, which took the top spot in the Top 500 rankings for overall supercomputing power, finished 10th in the Green 500. IBM’s system had a Linpack benchmark of 653 teraflops but got 1,684 Mflops of performance from every watt, easily outdistancing Tsubame 2.0’s efficiency of 948 Mflops per watt.
      Read Full Article
      Mentions: IBM
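      The quoted Linpack score and efficiency figure let you back out the system's approximate power draw during the benchmark run; a sketch using the numbers as quoted above:

```python
# Implied power draw = performance / efficiency, figures as quoted above.
linpack_mflops = 653 * 10**6  # 653 teraflops expressed in Mflops
mflops_per_watt = 1684

watts = linpack_mflops / mflops_per_watt
print(round(watts / 1000))  # ~388 kW during the Linpack run
```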
    15. China’s Tianhe-1A Achieves 2.507 Petaflops

      With the bi-annual Top500 supercomputer list just around the corner, the National Supercomputing Center in Tianjin, China announced that its new Tianhe-1A supercomputer has set a new performance record of 2.507 petaflops on the LINPACK benchmark. The benchmarks have been submitted to Top500.org, where the current top spot from June 2010 is held by Jaguar, the Cray XT5-HE Opteron system at Oak Ridge National Laboratory. Jaguar clocked in at 1.7 petaflops (1 petaflop is 10^15 floating point operations per second), and China’s Nebulae jumped to number two on the June list at 1.271 petaflops.
      Read Full Article
    16. Using an iPad to Manage A Supercomputer

      The Apple iPad is proving popular with executives who want to travel lighter. Can the iPad manage all the tasks you’d normally perform on a notebook computer? In this video, Steve Finn from BAE Systems demonstrates how he configured his iPad to remotely manage his 20,000-processor Altix ICE high performance computing system. This demo was posted at The Rich Report, the YouTube channel for HPC industry veteran Rich Brueckner and the InsideHPC web site, which Brueckner recently acquired from founder John West. This video runs about 4 minutes.
      Read Full Article
    17. Finland to get new supercomputing data centre for research and education

      The CSC’s current stock of supercomputers in Otaniemi. The IT Center for Science (CSC) in Finland said it will create a ‘state of the art’ data centre for supercomputers, data storage and other IT systems, which will be completed in 2012. Finland’s Minister for Education and Culture Henna Virkkunen said the center will focus on energy efficiency, and IT Center for Science managing director Kimmo Koski said it will also look to reduce data center costs. ‘The datacenter project is extremely important for Finland,’ Virkkunen said. ‘It will strengthen the international competitive ability of Finnish research by providing an eco-efficient location for the new supercomputer and the services it will make available in Finland.’
      Read Full Article
      Mentions: CSC
    18. Cray’s Rack-Mounted Supercomputer

      Cray is one of the most venerable names in supercomputing, but it’s still finding new initiatives. In March Cray announced the launch of the CX1000 system, a “rack-mounted supercomputer” that gives the company a supercomputing solution at every level of the high performance computing (HPC) server market. In this five-minute video, Cray President and CEO Peter Ungaro and Richard Dracott, Intel’s General Manager of High Performance Computing, discuss the introduction of the Cray CX1000 supercomputer and its fit with Cray’s Adaptive Supercomputing vision.
      Read Full Article
      Mentions: Intel Cray
    19. New Supercomputer Will Track Climate Change

      The National Center for Atmospheric Research (NCAR) broke ground yesterday on a new data center in Cheyenne, Wyoming that will house one of the world’s most powerful supercomputers. The future NCAR-Wyoming Supercomputing Center (NWSC) will be a 171,000 square foot facility in North Range Business Park. Scientists will use the supercomputing center to accelerate research into climate change, examining how it might affect agriculture, water resources, energy use, sea levels and extreme weather events, including hurricanes.
      Read Full Article
    20. IBM liquid-cooled supercomputer heats building

      An IBM supercomputer is doubling as a space heater via a technique that reduces energy use by 40 percent and dramatically lowers the overall carbon footprint. Based at Swiss university ETH Zurich and dubbed Aquasar, the liquid-cooled supercomputer went live on Thursday and started analyzing fluid dynamics while simultaneously providing heat for the building. In a typical data center, about half of the energy is used for cooling.
      Read Full Article
      Mentions: IBM
    21. Super Guzzlers?

      Environmentally conscious people often support a slower pace of life, with lesser amounts of consumption. Stopping to smell the roses may be great, but in the data center industry it could spell disaster for a company. Ergo, enter the monsters of the cyber world – the supercomputers. On one hand, you need supercomputers to cope with the necessity to operate faster and faster and crunch through ever-increasing amounts of data; on the other hand, there are mounting concerns about the power required to drive and cool these mammoths. Imagine hot CPUs scrunched together in one small space, throwing off heat in vast amounts, and imagine the kind of energy that will need to go into keeping them cool. Even if they are on the Green500 list, they still need vast amounts of energy to operate. Take, for example, the Cray XT5-HE at the Oak Ridge National Laboratory. It ...
      Read Full Article
    22. Do SuperComputers Turn a Green Data Center Gray? By Doug Maloney

      Green data center technology and supercomputing aren't two things that go well together, and there's a reason why. Faster computational performance in a densely packed space requires the hottest – literally – CPUs available packed very close together. All that heat needs somewhere to go, which requires cooling and adds more dollars to the power bill. Exhibit A for this is IBM's Blue Waters supercomputer being built at the University of Illinois Urbana-Champaign campus, as reported by News.com. The machine is getting its own 88,000 square foot building and will be theoretically capable of speeds of 10 petaflops, about 10 times as fast as the fastest supercomputer today. Blue Waters will use lots and lots of brand new IBM Power7 processors expected out in the first half of 2010 – a total of 16,384 chips together. Each Power7 processor integrates eight processing cores in one chip package and each ...
      Read Full Article
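      The chip figures quoted above imply the total core count directly; a quick check (counts as quoted in the summary, not independently verified):

```python
# Total cores = chips x cores per chip, as quoted above.
chips = 16_384
cores_per_chip = 8
print(chips * cores_per_chip)  # 131072 cores in total
```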