Circuits and Systems Society Artificial Intelligence for Industry Forum
The Artificial Intelligence for Industry Forum, sponsored by the Circuits and Systems Society, was held on September 21st, 2018 at the Intel Auditorium in Santa Clara, CA. The event was organized by Dr. Yen-Kuang Chen and Dr. Tong Zhang of Intel Corporation and co-sponsored by the Santa Clara Valley chapters of CAS, SSCS, ComSoc, SPS, and CIS. The speakers invited to the forum were Dr. Debbie Marr (Intel), Dr. Vivienne Sze (MIT), Dr. Mark Sandler (Google), and Dr. Jongsoo Park (Facebook). The event attracted 334 attendees, including IC designers, systems and software engineers, academics, and students from local industry and schools.
Photo on left: CASS AI Forum speakers, organizers, sponsors, and co-sponsoring society officers. From left to right: Dr. Yen-Kuang Chen (CAS Board of Governors / Intel), Dr. Tong Zhang (Intel), Robert S. Ogg (CAS-SCV Chair), Dr. Yong Lian (CAS President), Dr. Jongsoo Park (Facebook), Dr. Mark Sandler (Google), Dr. Vivienne Sze (MIT), Dr. Debbie Marr (Intel Labs), Eduard Alarcón (CAS Vice President - Technical Activities), Mojtaba Sharifzadeh (SSCS-SCV Chair), Dr. Mehran Nekuii (ComSoc-SCV Vice-Chair)
The first lecture, titled “Architecture for Machine Learning,” was given by Dr. Debbie Marr, a Sr. Principal Engineer and Director at Intel Labs. Dr. Marr opened with “Makimoto’s Wave” and how it has shaped the evolution of processors to balance differentiation, operational efficiency, innovation, and standardization. She then gave an overview of the research activities at Intel’s Accelerator Architecture Lab. The second part of the lecture covered software and hardware co-optimization techniques for efficient computation in AI/ML applications. The last part discussed how FPGAs are used in deep learning (DL) applications; the speaker highlighted that Microsoft, Baidu, Alibaba, and Amazon have deployed FPGAs in their clouds. This was followed by a summary of various DL studies at Intel Labs on Recurrent Neural Networks, Binarized Neural Networks, Sparse Ternary Networks, and TensorTile.
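The Binarized Neural Networks mentioned among the Intel Labs studies constrain weights and activations to ±1, which lets a dot product collapse into an XNOR followed by a popcount. A minimal Python sketch of that identity (the bit encoding and function name are illustrative, not from the talk):

```python
# Binarized dot product via XNOR + popcount (illustrative sketch).
# Encode +1 as bit 1 and -1 as bit 0; for two n-element ±1 vectors
# packed into n-bit integers, dot(a, b) = 2 * popcount(xnor(a, b)) - n.

def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two ±1 vectors packed into n-bit integers."""
    mask = (1 << n) - 1
    matches = bin(~(a_bits ^ b_bits) & mask).count("1")  # popcount of XNOR
    return 2 * matches - n

# Example: a = [+1, -1, +1, +1] -> 0b1011, b = [+1, +1, -1, +1] -> 0b1101
# Reference dot product: (+1)(+1) + (-1)(+1) + (+1)(-1) + (+1)(+1) = 0
print(binary_dot(0b1011, 0b1101, 4))  # 0
```

Replacing multiplies with single-cycle bitwise operations is what makes such networks attractive for the FPGA deployments discussed in the lecture.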
Photos: Dr. Marr’s lecture (upper left); Dr. Sze’s lecture (upper right); Dr. Sandler’s lecture (lower left); Dr. Park’s lecture (lower right).
The next lecture, titled “Energy-Efficient Edge Computing for AI-driven Applications,” was given by Dr. Vivienne Sze of the Research Laboratory of Electronics, Massachusetts Institute of Technology. Dr. Sze began by outlining the factors motivating processing at the “edge” instead of in the “cloud”: privacy and latency. She emphasized the need for energy-efficient pixel processing, because most data traffic today is video. Next, Dr. Sze described energy-efficient hardware for DNNs, its limitations, and future work in this area. One key point from this discussion was the energy cost associated with data movement. An energy-consumption pie chart of GoogLeNet was also shown to emphasize that all data types determine the energy profile of a system. The lecture then expanded on common benchmarks used to evaluate DNN hardware, such as accuracy, latency, energy, and cost. The future-work segment described a super-resolution approach in which data is streamed at low resolution from the source, easing bandwidth requirements, and then enhanced using CNNs. The last portion of the lecture covered the Navion chip from MIT, an example of an energy-efficient processor for localization and mapping in autonomous navigation applications.
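The point about data movement can be made concrete with commonly cited per-operation energy estimates for 45 nm CMOS (the figures popularized by Horowitz and used in Dr. Sze’s tutorials; the numbers below are rough illustrations, not taken from this lecture):

```python
# Rough per-operation energy in picojoules (45 nm CMOS estimates;
# illustrative values, commonly cited from Horowitz, ISSCC 2014).
ENERGY_PJ = {
    "32b int add": 0.1,
    "32b float mult": 3.7,
    "32b SRAM read (8KB)": 5.0,
    "32b DRAM read": 640.0,
}

# One multiply-accumulate (MAC) vs. fetching one operand from DRAM.
mac = ENERGY_PJ["32b float mult"] + ENERGY_PJ["32b int add"]
dram = ENERGY_PJ["32b DRAM read"]
print(f"One DRAM read costs ~{dram / mac:.0f}x a 32-bit MAC")
```

The two-orders-of-magnitude gap is why efficient DNN accelerators focus on data reuse and local memory hierarchies rather than on the arithmetic alone.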
The third lecture of the forum, titled “Deep Learning Inference in Facebook Data Centers: Characterization, Performance Optimizations, and Hardware Implications,” was given by Dr. Jongsoo Park, a research scientist on the Facebook AI System SW/HW Co-design Team. Dr. Park gave a brief introduction to his team, their expertise, and their research scope, followed by a deep dive into DL application domains such as ranking and recommendation, computer vision, and language (e.g., translation).
The last lecture of the forum, titled “Designing Efficient Architectures for Mobile Computer Vision,” was given by Dr. Mark Sandler, a research scientist at Google Inc. Dr. Sandler began by describing smart “viewfinder,” or image recognition, applications such as Google Lens. Running such an application on a portable electronic device such as a cellular phone requires the processor to be optimized for latency, power, memory, and accuracy. Dr. Sandler outlined strategies for optimizing the resolution, width multiplier, and quantization of a network, followed by a case study of three recent architectures: MobileNet V1, MobileNet V2, and ShuffleNet V2. The last portion of the lecture focused on “Automatic Architecture Search,” in which a default network is trained into an adapted network through an iterative process; this basic idea is used in NetAdapt. In addition, other networks such as MnasNet and related block structures were covered. The talk concluded by comparing the accuracy of all the aforementioned networks.
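The MobileNet architectures in the case study rest on depthwise separable convolutions, which replace one K×K standard convolution with a K×K per-channel (depthwise) convolution plus a 1×1 (pointwise) convolution. A small sketch of the resulting multiply-accumulate counts (layer dimensions chosen for illustration, not from the talk):

```python
def standard_conv_macs(h, w, c_in, c_out, k):
    """MACs for a K x K standard convolution (stride 1, same padding)."""
    return h * w * c_in * c_out * k * k

def depthwise_separable_macs(h, w, c_in, c_out, k):
    """MACs for depthwise (K x K per channel) + pointwise (1 x 1)."""
    depthwise = h * w * c_in * k * k
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise

# Illustrative layer: 56x56 feature map, 128 -> 256 channels, 3x3 kernel.
std = standard_conv_macs(56, 56, 128, 256, 3)
sep = depthwise_separable_macs(56, 56, 128, 256, 3)
print(f"Reduction: {std / sep:.1f}x")  # roughly 1 / (1/c_out + 1/k**2)
```

For a 3×3 kernel and wide layers, the reduction approaches 9×, which is the main source of MobileNet’s fit with the latency, power, and memory budgets described above.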
Overall, the event was a great success for the CAS Society and the local sponsoring chapters. We hope to organize similar events in the future. The slide presentations from the lectures are available at http://sites.ieee.org/scv-cas/ under “Recent Events” and the event name.
Imran Bashir, IEEE CAS Santa Clara Valley Chapter Executive Committee