Deep artificial neural network technology has been widely used to solve many challenging tasks in computer vision, natural language processing, speech recognition, and more. Most of today's deep learning algorithms are designed for high-performance servers and run in the cloud. As edge devices (e.g., mobile phones and smart watches) become more capable and the advantages of on-device artificial intelligence (AI) become more evident (e.g., protecting privacy, working without a network, and processing data locally in real time), bringing AI to the edge is inevitable. However, the limited resources (e.g., computation, memory, and battery) of edge devices bring a whole new level of challenges: (1) On-device AI must keep the model size small without sacrificing accuracy; (2) On-device AI must keep power usage low; (3) Future on-device AI should enable efficient processing and analysis of multi-modal data (e.g., video, audio, and text); and (4) On-device AI should be interpretable and reproducible. This project aims to address these challenges by (1) exploring innovative machine learning algorithms (e.g., multi-task learning) for multi-modal data analysis; (2) exploring multi-modal pruning algorithms (which reduce neural network size without compromising accuracy) that can be applied on edge devices; (3) investigating and explaining how pruning works and using the derived theory to guide further pruning optimization; and (4) improving the energy efficiency of on-device AI algorithms and developing energy-aware scheduling algorithms for on-device AI apps.
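
As a concrete illustration of the pruning idea mentioned above, the sketch below zeroes out the smallest-magnitude weights of a layer until a target sparsity is reached. It is only a minimal example of unstructured magnitude pruning under assumed names and settings; it is not the project's multi-modal pruning algorithm.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries of `weights` until roughly
    `sparsity` (a fraction in [0, 1]) of them are removed."""
    k = int(weights.size * sparsity)               # number of weights to remove
    if k == 0:
        return weights.copy()
    flat = np.abs(weights).ravel()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold             # keep only larger weights
    return weights * mask

# Illustrative usage: prune 50% of a random 256 x 128 weight matrix.
w = np.random.randn(256, 128)
w_pruned = magnitude_prune(w, 0.5)
print(f"achieved sparsity: {1.0 - np.count_nonzero(w_pruned) / w.size:.2f}")
```

In practice, a pruned model is typically fine-tuned afterwards to recover any accuracy lost to the removed weights.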

Excessive energy consumption is a major constraint in designing and deploying the next generation of supercomputers. Minimizing the energy consumption of high performance computing requires novel energy-conscious technologies at multiple layers, from architecture and system support to applications. One obstacle that hinders the exploration of these new technologies is the lack of tools and systems that can provide accurate, fine-grained, and real-time power and energy measurement for technology evaluation and verification.

This project bridges the gap by building Marcher, a heterogeneous high performance computing infrastructure equipped with cutting-edge power-efficient accelerators including Intel Many Integrated Cores and Nvidia Graphics Processing Units, power-aware memory systems, hybrid storage with hard disk drives and solid state disks, and high performance interconnects. The Marcher system supports the development of two complementary component-level power measurement tools for major computer components: (i) pluggable Power Data Acquisition Card (PODAC) for direct and decomposed power measurement and (ii) Software Power Meter (SoftMeter) that indirectly estimates the power consumption of systems where direct measurement is not feasible or too costly.
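
To make the indirect-estimation idea behind SoftMeter concrete, the sketch below uses a simple linear power model: estimated power is an idle baseline plus coefficients applied to runtime activity metrics, with the coefficients calibrated offline against direct measurements (e.g., readings from PODAC). The metric names and coefficient values here are hypothetical placeholders, not SoftMeter's actual model.

```python
# Hypothetical per-component linear power model in the spirit of SoftMeter.
# Coefficients would be calibrated offline against direct measurements
# (e.g., PODAC readings); the numbers below are illustrative placeholders.
CPU_MODEL = {
    "idle_watts": 20.0,          # baseline package power when idle
    "per_ipc": 8.5,              # watts added per unit of instructions-per-cycle
    "per_llc_miss_rate": 0.004,  # watts added per (LLC misses / microsecond)
}

def estimate_cpu_power(ipc: float, llc_misses_per_us: float) -> float:
    """Estimate CPU package power (watts) from two runtime activity metrics."""
    return (CPU_MODEL["idle_watts"]
            + CPU_MODEL["per_ipc"] * ipc
            + CPU_MODEL["per_llc_miss_rate"] * llc_misses_per_us)

# Illustrative reading: IPC of 1.8 and 1200 last-level-cache misses per microsecond.
print(f"estimated CPU power: {estimate_cpu_power(1.8, 1200.0):.1f} W")
```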

This project has been successfully completed, and Marcher is available to the broader community of researchers and educators to conduct research and education in green computing. Contact us if you want to post your research blogs at GreenSoft. Try GreenCode, the cloud-based IDE that compiles and runs programs written in more than 20 languages and reports the performance and energy consumption of your code.

This project addresses optimizing energy efficiency in the execution of parallel algorithms. High energy cost is a salient constraint when running large-scale parallel applications on the next generation of supercomputers, which contain heterogeneous multicore processors and interconnections, motivating a rethinking of conventional approaches to modeling, designing, and scheduling parallel tasks with energy efficiency taken into consideration. In this project, we collaborate with Marquette University and UC Riverside to explore energy-efficient parallel task design and scheduling, and to develop a power profiling tool that can measure the decomposed runtime power consumption of different computing components (e.g., processors, memory, networks, and disks), as sketched below.
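
A decomposed power profile is typically turned into energy figures by integrating each component's power samples over time. The sketch below shows this bookkeeping under the assumption of a fixed sampling interval; the component names, sample values, and interval are illustrative only.

```python
from typing import Dict, List

def decomposed_energy(samples: Dict[str, List[float]],
                      interval_s: float) -> Dict[str, float]:
    """Convert per-component power samples (watts), taken at a fixed sampling
    interval (seconds), into per-component energy (joules), plus a total."""
    energy = {comp: sum(p) * interval_s for comp, p in samples.items()}
    energy["total"] = sum(energy.values())
    return energy

# Illustrative trace: four samples per component, taken every 0.5 seconds.
trace = {
    "cpu":  [85.0, 120.0, 118.0, 90.0],
    "dram": [12.0, 18.0, 17.5, 13.0],
}
print(decomposed_energy(trace, interval_s=0.5))
```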