If it’s the Texas Advanced Computing Center’s (TACC) Ranger supercomputer, it continues making an impact in the world. If the system could talk, it might proclaim, “There is life after retirement!”
“Ranger was the first supercomputer in open science to approach the petascale mark,” said Happy Sithole (pronounced ‘see-toll-yah’), director of the Center for High Performance Computing (CHPC) in Cape Town, South Africa. “Now, it is starting projects that are important in building high performance computing in Africa.”
In 2013, after five years as one of the National Science Foundation’s (NSF) flagship production systems, something wonderful happened to Ranger. Instead of being retired or sold for parts, Ranger’s advocates at TACC broke its massive processing power down into individual racks and shipped them from Austin to South Africa, Tanzania and Botswana to give root to a young and growing supercomputing community there. Closer to home in Texas, racks were sent to Texas A&M, the Baylor College of Medicine, and the Applied Research Laboratory at The University of Texas at Austin.
“It’s a beacon and an instigator of success to come,” Sithole continued. “I’m looking forward to the repurposing of Ranger in establishing high performance computing activities in Africa and the legacy this supercomputer will leave. It’s a very good thing for us to have chosen a system like Ranger in this space.”
Big Idea, Big Award, Big System
Back in 2006, everything about Ranger was big – the idea, the award, the system and the desire to do bigger and better science.
The NSF announced it would fund the deployment of the Ranger supercomputer at TACC at The University of Texas at Austin. The award covered maintenance, operations and user support for a lifespan of four years, later extended to five. At $59 million, it was the largest single NSF grant ever received by The University of Texas at Austin.
Ranger packed 62,976 cores into 15,744 quad-core AMD Opteron microprocessors, all networked by an innovative InfiniBand interconnect switch named Magnum, designed by Sun Microsystems. Ranger debuted as the fifth most powerful computer in the world on the June 2008 Top500 list, and it was hailed by the NSF as the most powerful supercomputing system in the world for open science research.
Ranger’s deployment marked the beginning of the Petascale Era in high performance computing (HPC), where systems would approach a thousand trillion floating point operations per second and manage a thousand trillion bytes of data. Ranger would also serve as the largest HPC resource on the NSF TeraGrid, a nationwide network of expert computer scientists and academic HPC centers that provided scientists access to large-scale computing power and resources from 2004 through 2011.
Today, that network is called the Extreme Science and Engineering Discovery Environment (XSEDE).
“TACC had the strongest culture of highly competent service to the research community,” remarked Dan Atkins, professor of Community Information at the University of Michigan, Ann Arbor. Atkins represented the NSF as the inaugural director of the Office of Cyberinfrastructure at the dedication ceremony for Ranger.
The scientists and expert staff at TACC proved instrumental to the success of Ranger through support of the academic users. “The leadership was excellent and the passion for discovery was palpable,” Atkins added.
The Essence of Ranger
A supercomputer today is defined not by what it’s made of, but what it can do.
The technology and hardware that goes into a supercomputer is cutting-edge, impressive and expensive, but more importantly, supercomputers help solve the grand challenge problems facing society ― problems such as engineering better medicines, making solar energy economical, providing access to clean water, and advancing health informatics, to name a few.
Andreas (“Andy”) Bechtolsheim, co-founder of Sun Microsystems, helped design the Sun Constellation Linux Cluster that was to be Ranger.
“The essence of Constellation was a completely new type of switch we called Magnum, which could truly scale to a petascale-type system,” Bechtolsheim said.
“At the time, it was the biggest switch ever built. It had a bandwidth in excess of a hundred terabytes per second, which was a hundred times greater than a conventional switch. Our engineering team felt that the launch of this switch with Ranger was an historic moment in petaflop computing.”
In addition to the Magnum switch, Ranger was a first in many other ways, according to Tommy Minyard, co-principal investigator and director of Advanced Computing Systems at TACC.
“The most important impact Ranger had on the supercomputing community was the fact that we were able to build a system of that scale using commodity components, commodity processors and InfiniBand technology. Nobody had ever built an InfiniBand network with that many nodes on it, at that scale, with that many processors,” Minyard said.
Supercomputing Expands in Africa
Meanwhile, in South Africa, supercomputing was just getting started. On May 22, 2007, the CHPC opened at the University of Cape Town, and the center was determined to make supercomputing a reality for South Africa.
“We are funded by the Department of Science and Technology, South Africa,” Sithole said. “The main purpose is to provide high performance computing facilities to researchers all over the country.”
In 2007, South African computer scientists assembled a cluster of 640 processors capable of 2.5 teraflops, or 2.5 trillion mathematical operations per second. But they didn’t stop there. Just two years later, the CHPC launched the Tsessebe Sun Constellation System. It reached a peak performance of 31 teraflops in 2009 and made history as the first supercomputer in Africa to rank among the Top500 list of the fastest systems in the world.
“At the same time, there are collaborative projects like the Square Kilometre Array (SKA) radio telescope that require growing computational power,” Sithole said. He expects that by the launch of the SKA in 2024, daily raw data in excess of an exabyte will require supercomputers many times more powerful than the fastest in the world today.
“Even though South Africa is the main host of the SKA, there are eight other African countries involved in this project,” Sithole said. “And for them to be able to contribute to this project, they need to have some processing capabilities.”
Ranger: Science Impact and R&D Contributions
Back in Texas, Ranger went into full production on February 4, 2008. The supercomputer became a flagship of the U.S. academic community and put The University of Texas at Austin squarely on the world map as a leader in supercomputing. It ran for five years and was decommissioned on February 4, 2013. More than 4,000 scientists used Ranger to crunch numbers on 2,244 research projects, completing more than three million simulation experiments.
One of the most important research projects involved the development of a system that allowed the NOAA National Hurricane Center to try something new with hurricane tracking.
“We worked with the weather researchers to implement and run an ensemble forecasting technique, where instead of running one model with one set of input conditions to get a path of what the hurricane was going to do, they would run 20 models,” Minyard said. Each model had slightly different weather conditions to cover uncertainties from weather station data. And each one of these group forecasts used more than 1,500 cores, totaling 30,000 cores at a time to run 20 different forecasts.
Hurricane paths were calculated and overlaid on top of each other. “And they would see that 17 or so of the paths would all lie in this one little region. So, they had pretty high confidence that the hurricane was probably going to follow that path,” Minyard said.
The group forecasts were put to the test with Hurricane Ike in 2008. The result: NOAA improved five-day path predictions for the Gulf coast near Houston by an order of magnitude. “That’s where the biggest challenge is,” Minyard said. “Five days before the hurricane, where do you want to have your resources available for your emergency response?”
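The ensemble idea Minyard describes can be sketched in a few lines of code. The toy model below is purely illustrative and is not NOAA’s actual forecasting code: a simple drifting random walk stands in for a full numerical weather model, and the starting position is perturbed for each ensemble member to reflect uncertainty in the observations, just as the article describes. Clustered endpoints suggest a confident forecast; a wide spread suggests an uncertain one.

```python
import random

def forecast_path(start_lat, start_lon, steps, drift, noise):
    """Toy hurricane track model: a drifting walk with random perturbations.
    A stand-in for a real numerical weather model."""
    lat, lon = start_lat, start_lon
    path = [(lat, lon)]
    for _ in range(steps):
        lat += drift[0] + random.gauss(0, noise)
        lon += drift[1] + random.gauss(0, noise)
        path.append((lat, lon))
    return path

def ensemble_forecast(n_members=20, steps=5):
    """Run n_members forecasts, each from slightly perturbed initial
    conditions -- the core of the ensemble technique."""
    members = []
    for _ in range(n_members):
        # Perturb the observed starting position to model observation uncertainty
        lat0 = 25.0 + random.gauss(0, 0.2)
        lon0 = -90.0 + random.gauss(0, 0.2)
        members.append(forecast_path(lat0, lon0, steps,
                                     drift=(0.8, -0.5), noise=0.3))
    return members

def endpoint_spread(members):
    """Agreement proxy: how far day-5 endpoints scatter around their mean."""
    lats = [m[-1][0] for m in members]
    lons = [m[-1][1] for m in members]
    mean = (sum(lats) / len(lats), sum(lons) / len(lons))
    spread = max(abs(la - mean[0]) + abs(lo - mean[1])
                 for la, lo in zip(lats, lons))
    return mean, spread

members = ensemble_forecast()
mean, spread = endpoint_spread(members)
print(f"ensemble mean endpoint: {mean}, max deviation: {spread:.2f} deg")
```

On Ranger, each of the 20 members was itself a full model run on more than 1,500 cores; the structure, however, is the same: many perturbed runs, then a look at how tightly their predicted paths cluster.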
With regard to research and development, one of the biggest innovations made with Ranger was a set of improvements to Lustre, an open source parallel file system. Lustre lets multiple computers read and write to the same file system, and it spreads data across multiple storage servers to scale bandwidth beyond what a single server could provide.
“When we first deployed Ranger, Lustre was not widespread,” Minyard said. “It was not as robust and stable as it is now. And a lot of those things we were able to work out on Ranger. Now, Lustre has become a lot more widespread, a lot more mainstream, and adopted by industry.”
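The way Lustre spreads a single file across servers can be illustrated with a small sketch. This is a simplified model, not Lustre’s actual implementation: a file is cut into fixed-size stripes that are assigned round-robin to object storage targets (OSTs), so reads and writes to one large file hit many servers in parallel.

```python
def stripe_layout(file_size, stripe_size, stripe_count):
    """Map each stripe-size chunk of a file to an object storage target
    (OST) in round-robin order -- the basic layout a striped parallel
    file system uses to aggregate bandwidth across servers."""
    layout = []
    offset = 0
    index = 0
    while offset < file_size:
        length = min(stripe_size, file_size - offset)  # last chunk may be short
        layout.append({"offset": offset,
                       "length": length,
                       "ost": index % stripe_count})
        offset += length
        index += 1
    return layout

# A 10 MiB file striped in 1 MiB chunks across 4 OSTs: successive
# chunks land on OSTs 0, 1, 2, 3, 0, 1, ... so a sequential read
# pulls from all four servers at once.
for chunk in stripe_layout(10 * 2**20, 2**20, 4)[:5]:
    print(chunk)
```

With this layout, peak bandwidth for one file scales roughly with the number of OSTs rather than being capped by a single server, which is why stabilizing Lustre at Ranger’s scale mattered so much.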
Up and Running in Africa
As the date to decommission Ranger drew closer, conversations began about what to do with the still very powerful machine.
Chris Jordan, manager of TACC’s Data Management and Collections group, made the journey to Cape Town. He was invited to give presentations at a workshop on high performance computing, and there, the idea to send Ranger to Africa took hold.
Because Ranger was mostly a commodity cluster, any individual rack could be a standalone machine. TACC shipped 20 racks to several universities within South Africa. “They’ll be able to teach parallel computing and do local science on campuses where they had no infrastructure at all,” Jordan said.
A collaboration has formed between the researchers at the universities who received Ranger racks in late 2013 and early 2014.
“One of the key things we’re looking at in this collaboration,” Sithole said, “is to help students work through the configuration of the system. We value the contribution that TACC has made in providing access to this system and the time that TACC has spent in helping our technical people understand the configuration.”
“The other thing we want to do is develop curriculum within the universities to start introducing high performance computing.”
Once a base for supercomputing is established, people will be ready to start purchasing new technologies, Sithole added.
The thousands of scientists in the U.S. who used Ranger most likely didn’t notice when its lights turned off. Operational funding had been extended for one year so that Ranger could continue supporting world-class science until Stampede was deployed as part of XSEDE. In 2013, their computational models and data migrated seamlessly to TACC’s newer system, which today is about 20 times as powerful as Ranger.
Great things start small, according to TACC Executive Director Dan Stanzione. “Often, I look back at my career, and we all started building clusters with the pieces that we could get our hands on,” he said. “Now, we build the top systems in the world. We couldn’t have done that if at one point we didn’t start building clusters at a smaller scale.”
Ranger’s new life is giving budding computer scientists a chance to learn high performance computing. “It’s a huge win and the right thing to do,” Stanzione said. “It’s an opportunity for one of the world’s top supercomputers to continue to have a big impact on a lot of people who still need it.”
The Texas Advanced Computing Center (TACC) at The University of Texas at Austin is a center of computational excellence in the United States. The center’s mission is to enable discoveries that advance science and society through the application of advanced computing technologies. To fulfill this mission, TACC identifies, evaluates, deploys and supports powerful computing, visualization and storage systems and software. TACC’s staff experts help researchers and educators use these technologies effectively, and conduct research and development to make these technologies more powerful, more reliable and easier to use. TACC staff also help encourage, educate and train the next generation of researchers, empowering them to make discoveries that change the world.
- Ranger’s Greatest Hits
- Upgrading the Hurricane Forecast
- Inside the Swine Flu Virus
- Putting Quarks on a Virtual Scale
- Reducing Jet Noise by Controlling Turbulence
- Biologically Inspired Energy
- Center for High Performance Computing, South Africa
- Billionaire Thinks in Trillions for His Computer Designs
- Petascale Science at TACC
Originally published as “Ranger Supercomputer Begins New Life, Makes Global Journey to Africa” on www.scientificcomputing.com, July 30, 2014.