SC2002 - From Terabytes to Insights

For the past 14 years, SC has brought together scientists, engineers, systems administrators, programmers, and managers to share ideas and glimpse the future of high performance networking and computing, data analysis and management, visualization, and computational modeling. In Baltimore, participants from around the globe will once again gather to showcase new developments in information architectures, scalable computing, data analysis, applications, and collaborative technologies. But beyond the cool exhibits, the faster-than-lightning computer systems, and the eye-popping simulations, SC2002 will provide a vision of the changes that will impact our lives and our world in the years to come. Thanks to high performance computing and networking, terabytes of data are being transformed into knowledge about the origins of our universe, the condition of our planet and its ecosystems, and our own genetic makeup. SC has grown far beyond "big iron." It's about making the most of the information age and using a new cyberinfrastructure to generate insight and understanding and to change the way we live.

Now is the time to consider participating in SC2002. The conference will feature a first-class technical program that includes tutorials, technical paper presentations, poster exhibits, an exhibitor forum, and "birds of a feather" sessions. Through a unique combination of industry and research displays, the exhibition will offer a glimpse of the cutting-edge science and technology that will shape our world in the coming years.

Each year, participants come to the conference to see the exhibits. Last year, attendees spent an average of 10.6 hours over three days visiting the exhibits. Moreover, SC attendees are the people your business needs to reach: 83 percent have a say in their organization's purchasing decisions, and 55 percent said they plan to buy one or more of the products and developments they learned about at the conference. Of those attendees who gave a dollar figure, 76 percent planned purchases of more than $100,000.

Can you catch the same audience at another trade show? Seventy percent reported they have not attended another show in the past year and 60 percent said they are "very likely" to attend SC next year. Attendees include R&D personnel, systems administrators, engineers, and top managers. Last year, they came from 26 states, the District of Columbia, Europe, and Asia.

The Maui Consortium is a collection of like-minded organizations pooling resources, talent, and experience to advance the state of the art in cluster scheduling, grid scheduling, and resource management. The consortium works together to find common solutions to needs that span its member sites, pooling personnel and financial resources to solve these problems in a timely and cost-effective manner.

SC2002 Caltech-SLAC Experiments

The FAST kernel was first demonstrated publicly in experiments conducted during the SC2002 supercomputing conference, November 16-22, 2002, in Baltimore, by a Caltech/SLAC research team working in partnership with CERN, DataTAG, StarLight, Cisco, and Level(3).

Highlights (with Gigabit Ethernet cards):

Standard MTU (1,460 B of application data per packet)
All statistics are averages over > 1 hr
Peak window size = 14,255 packets
925 Mbps (95% utilization) for a single flow, averaged over 1 hr
21 TB in 6 hrs with 10 flows (8.6 Gbps, 88% utilization; see the arithmetic check below)
11.5 Gbps with 14 flows during the SCinet Bandwidth Challenge
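
As a quick consistency check of the 21 TB figure (a sketch; reading "terabyte" in its binary sense of 2^40 bytes is an assumption, not something stated above):

\[
\frac{21 \times 2^{40}\,\text{bytes} \times 8\,\text{bits/byte}}{6 \times 3600\,\text{s}}
\approx \frac{1.85 \times 10^{14}\,\text{bits}}{2.16 \times 10^{4}\,\text{s}}
\approx 8.6\,\text{Gbps},
\]

which matches the quoted average rate for the 10-flow run.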

It is the combination of high speed and large distance that is challenging. This is measured by the product of aggregate throughput and transfer distance, in bit-meters per second (bmps).
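
As a worked example (a sketch; the 10,000 km path length is an assumed round figure for illustration, not a distance reported above), a single 925 Mbps flow over such a path achieves

\[
\text{bmps} = \text{throughput} \times \text{distance}
= (9.25 \times 10^{8}\,\text{bit/s}) \times (10^{7}\,\text{m})
= 9.25 \times 10^{15}\ \text{bit-meters per second}.
\]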

IBM has won a $290 million government contract to build what are expected to be the world's two fastest supercomputers at Lawrence Livermore National Laboratory, the company plans to announce Tuesday.

One machine, ASCI Purple for nuclear weapons research, will be three times faster than the world's current top-ranked supercomputer, NEC's Earth Simulator, which has been clocked at 35 trillion calculations per second, or "teraflops." The other machine, the Linux-powered Blue Gene/L for civilian research, will be 10 times faster than Earth Simulator, with a speed of 360 teraflops, according to IBM.

Also included in the $290 million government contract is a third, smaller computer--a comparatively ordinary cluster of 944 x335 servers and 32 x345 machines.

The deal, scheduled for announcement at the SC2002 supercomputer show in Baltimore, reflects the progress IBM has made in the supercomputer market, beyond its stronghold of mainframes and other business-oriented computers that handle tasks such as logging inventory and sales transactions.

In 1993, IBM got its first systems onto the Top500 list of the world's fastest supercomputers. Today, the list includes 134 IBM machines with a combined computing power larger than that of any other company in the rankings.

The design details of Blue Gene/L still haven't been settled beyond a plan for it to have 65,536 computing nodes, said Peter Ungaro, IBM vice president of high-performance computing. The design for ASCI Purple, though, is better established, and brute force figures prominently in it.

ASCI Purple, due to be running by the end of 2004, is expected to have 196 interconnected 64-processor servers, for a total of 12,544 Power5 chips. It will come with 50 terabytes of memory--about 20,000 times as much as a PC. The supercomputer also will have IBM disk storage arrays holding 2 petabytes--2 quadrillion bytes--of data, about 50,000 times the capacity of a PC.

As for physical size, ASCI Purple will weigh about 197 tons, be linked to 119 miles of optical cable and 28 miles of copper cable, and occupy 8,900 square feet of floor space--or about two basketball courts. It will consume 4.7 megawatts of power, enough electricity for 4,000 homes, according to IBM.

Big Iron business
Supercomputers don't sell in as large volumes as mainstream business systems, but the market is important for other reasons. First, supercomputer research and development can be plowed back into mainstream computer products. In addition, government-funded initiatives help subsidize that development work. For example, the U.S. Energy Department's Advanced Simulation and Computing program, which grew out of the earlier Accelerated Strategic Computing Initiative, is underwriting ASCI Purple.

Blue Gene/L is one step in IBM's ongoing project to build a machine by 2007 that can perform a quadrillion calculations per second--a "petaflop." The task of the ultimate Blue Gene computer will be to predict the folding of proteins, the large biological molecules that are assembled from genetic information encoded in DNA.

The 360-teraflop performance of Blue Gene/L is expected to be more than the collective 293-teraflop ability of today's entire Top500 supercomputer list.

For enormous systems with thousands of processors, a major challenge will be simply keeping all the components up and running and circumventing problem areas when they occur. IBM is working on autonomic computing technology, or machines that can diagnose and repair problems themselves, "so we can make systems of this size more self-maintaining," IBM's Ungaro said. "If there are failures, they can be routed around so the machine is still available to users."

Nuclear know-how
In the mid-1990s, the Energy Department launched what was then the Accelerated Strategic Computing Initiative, a plan to spur the development of supercomputers fast enough to simulate nuclear weapons explosions in detail. The program, with a budget in the billions of dollars, was embraced by the nation's three nuclear weapons laboratories--Sandia National Laboratories, Los Alamos National Laboratory and Lawrence Livermore National Laboratory--as a way to assure that nuclear weapons would work as designed, without having to rely on actual tests.

The result has been a succession of ever-more-powerful supercomputers. The first contract was awarded in 1995 for work at the Sandia labs in Albuquerque, N.M., on Intel's ASCI Red system. The supercomputer was designed to perform 1 trillion calculations per second, or 1 teraflop.

Next came the three-teraflop machines, Blue Mountain, built by SGI for Los Alamos National Laboratory in New Mexico, and Blue Pacific, built by IBM at the Livermore lab in California.

The third generation was ASCI White, the second IBM machine at the Livermore lab. It was designed to run at 10 teraflops but ultimately delivered 12.3 teraflops. The fourth generation, ASCI Q at Los Alamos, is designed ultimately to reach 30 teraflops. However, it is still under construction and so far exists as two 7.7-teraflop parts.

New generation
ASCI Purple--named after the color resulting from a mixture of red, white and blue--was to be the pinnacle of the program, with a target of 100 teraflops. It was to be the system that could handle the ultimate task: a "full physics" simulation in three dimensions of a nuclear blast, both of the "primary" fission explosion that begins the process and the resulting "secondary" fusion reaction that provides most of the energy in the nuclear detonation.

But IBM believes there will be successors to ASCI Purple.

"ASCI was originally laid out through 100 teraflops. But clearly they have a lot more science that needs to be done within the program. I believe they have further aspirations," Ungaro said.

Lab researchers are looking forward to more-sophisticated modeling abilities from future supercomputers. "We've done the primary and secondary of a simplified theoretical weapon," said Lawrence Livermore National Laboratory spokesman David Schwoegler. That simulation took about two months, he said--but ASCI Purple will allow such simulations to run in far less time.

sc2002.org