In modern server architectures, the processor socket and the memory system are implemented as separate modules. Data exchange between these modules is expensive: it is slow, it consumes a large amount of energy, and narrow data links impose long wait times. Emerging big-data workloads will require especially large amounts of data movement between the processor and memory. To reduce the cost of data movement for these workloads, the project will design new server architectures that leverage 3D stacking technology. The proposed approach, referred to as Near Data Computing (NDC), reduces the distance between a subset of computational units and a subset of memory, and can yield high efficiency for workloads that exhibit locality. The project will also develop new big-data algorithms and runtime systems that can exploit the properties of the proposed NDC architecture.
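The locality argument above can be made concrete with a back-of-the-envelope model: if a fraction of memory traffic is served by compute units inside the 3D stack rather than over the off-chip link, total data-movement energy drops in proportion. The sketch below is illustrative only; the per-bit energy figures are assumed placeholder values, not measurements from the project.

```python
# Illustrative model of data-movement energy in a conventional vs. NDC design.
# Both per-bit energy figures are assumed placeholders, not measured values.

OFF_CHIP_PJ_PER_BIT = 20.0   # assumed cost of crossing the processor-memory link
NEAR_PJ_PER_BIT = 4.0        # assumed cost of an access within a 3D stack

def movement_energy_joules(bits_moved, near_fraction=0.0):
    """Energy to move `bits_moved` bits when `near_fraction` of them are
    served by near-data compute units instead of the off-chip link."""
    near = bits_moved * near_fraction * NEAR_PJ_PER_BIT
    far = bits_moved * (1.0 - near_fraction) * OFF_CHIP_PJ_PER_BIT
    return (near + far) * 1e-12  # picojoules -> joules

bits = 8 * 2**40  # a 1 TiB scan, expressed in bits
baseline = movement_energy_joules(bits)                  # all traffic off-chip
ndc = movement_energy_joules(bits, near_fraction=0.9)    # 90% served near memory
print(f"baseline: {baseline:.1f} J, NDC: {ndc:.1f} J")
```

Under these assumed numbers, serving 90% of accesses near memory cuts data-movement energy by roughly 3.5x; workloads with less locality see proportionally smaller gains.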
The project will lead to technologies that can boost performance and reduce the energy demands of big-data workloads. Several reports have cited the importance of these workloads to national, industrial, and scientific computing infrastructures. The project outcomes will be integrated into University of Utah curricula and will play a significant role in a new degree program on datacenter design and operation. The PIs will broaden their impact by publicly distributing parts of their software infrastructure and by engaging in outreach programs that involve minorities and K-12 students.