Profs Keren Bergman, Alex Gaeta, and Michal Lipson win funding for phase 2 of their project, Photonic Integrated Networked Energy-efficient Datacenters
With the explosive growth of data analytics applications, high-performance computing (HPC) systems and datacenters are today’s critical information infrastructure. Increasing compute performance is essential to meeting future needs, but scaling these systems up is highly energy-inefficient. The performance of these systems’ parallel architectures is determined by how data moves among their numerous compute and memory resources, and is increasingly limited by the energy consumed by this massive data movement.
Professors Keren Bergman, Alex Gaeta, and Michal Lipson are working to fundamentally address these data movement challenges with new optical computing architectures. Their team, which includes collaborators from MIT, Lawrence Berkeley National Laboratory, SUNY Polytechnic Institute’s College of Nanoscale Science and Engineering, Quintessent Inc., NVIDIA, and Cisco Systems, recently won a two-year, $6 million grant from ARPA-E (the Advanced Research Projects Agency–Energy of the U.S. Department of Energy) to support phase 2 of their project, “Photonic Integrated Networked Energy-efficient datacenters” (PINE). PINE’s goal is to leverage the unique properties of photonics to steer bandwidth to where it is needed rather than over-provisioning it, the practice that currently dominates energy consumption.
“Our PINE architecture unleashes the truly revolutionary impact of photonics to create a new paradigm for future ultra-energy-efficient datacenters and HPC systems,” said project PI Bergman, Charles Batchelor Professor of Electrical Engineering. “In essence, we are using optical interconnection networks to reduce the system-wide energy consumption of datacenters and HPC systems and make them ‘green.’”
The PINE architecture is designed to support diverse emerging data-intensive workloads while optimizing energy efficiency. It seamlessly integrates low-power silicon photonic links with large numbers of embedded photonic switches to connect photonic multi-chip modules. PINE’s low cost and deep integration will allow every link to be optical, making each server’s resources available to all other servers.
Bergman explained, “The sharing of resources presents an abstract concept of the datacenter as a single, unified machine that enables fine-grained allocation of resources and prevents applications from being bottlenecked on a particular resource type. This ‘deeply disaggregated’ approach gives us much more flexibility.”
Phase 2 builds on the successes of phase 1, a two-year, $4.4 million project that demonstrated the first energy-optimized, high-bandwidth-density silicon photonic links. In phase 2, the team will perform system-level integration of photonically interconnected multi-chip modules with switching flexibility to demonstrate the PINE architecture under realistic workloads. On their Columbia testbed in Bergman’s Lightwave Research Lab, the team plans to demonstrate speed-ups of machine learning and data analytics applications executed with substantially reduced energy consumption.
PINE’s flexible interconnectivity lets it assign datacenter and HPC resources to workloads precisely in both time and quantity, so that only the required amounts of compute power, memory capacity, and interconnect bandwidth are made available, and only for as long as they are needed. This efficient use of resources reduces the vast amount of energy wasted by current datacenters while simultaneously accelerating time to completion of HPC applications.
Working with industry-leading partners, including NVIDIA, the leader in GPUs and GPU-accelerated datacenter analytics; Cisco Systems, a principal developer and supplier of datacenter networking equipment; and the startup Quintessent, the PINE team is also focused on accelerating technology transfer to practical datacenter and HPC system deployments.
“Our team encompasses the complete stack of leading expertise necessary to drive a full solution with transformational impact on the market deployment of ultra-energy-efficient scalable datacenters,” Bergman said. “We’re very excited to move this forward.”
Source: Columbia University