Dejan S. Milojicic is an established researcher with a full career in industry and a long-term IEEE Computer Society volunteer. As a Computer Society Board of Governors member, he participated in developing the Computer Society 2011 Strategic Plan. He is the founding editor in chief of Computing Now and an IEEE Internet Computing editorial board member. He was appointed the first Special Technical Communities chair, served as chair of the Computer Society Technical Committee on Operating Systems, and has served on many program committees (ICDCS, CLOUD, and EDOC, among others). An IEEE-CS, ACM, and USENIX member for more than 20 years, Milojicic is an IEEE Fellow, a Computer Society Golden Core Member, and an ACM Distinguished Engineer. He has received a Computer Society Outstanding Contribution award.
At HP Labs since 1998, Milojicic is a senior researcher and managing director of the Open Cirrus Cloud Computing Testbed. Previously, he was with the OSF Research Institute in Cambridge, Massachusetts, and the Institute Mihajlo Pupin in Belgrade, Serbia. He teaches cloud management at San Jose State University, California.
Milojicic received a PhD from Kaiserslautern University, Germany. He has served on six thesis committees and guided 40 interns. He has published two books and more than 120 papers, holds 11 patents, and has filed 22 patent applications.
Hewlett Packard Labs
dejan.milojicic@hpe.com
DVP term expires December 2022
Presentations
Generalize or Die: System Software for Memristor-based Accelerators for Deep Learning
The deceleration of transistor feature-size scaling has motivated growing adoption of specialized accelerators implemented as GPUs, FPGAs, and ASICs, and more recently of new types of computing such as neuromorphic, bio-inspired, ultra-low-energy, reversible, stochastic, optical, and quantum computing, combinations thereof, and others yet unforeseen. There is a tension between specialization and generalization, with the current state trending toward master-slave models in which accelerators (slaves) are instructed by a general-purpose system (master) running an operating system (OS). This talk revisits system software functionality for memristor-based accelerators. We explore one accelerator implementation, the Dot Product Engine (DPE), for a select set of machine learning applications. We demonstrate that making an accelerator such as the DPE more general will result in broader adoption and better utilization.
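To make the specialization concrete, the following is a minimal sketch, in software, of the kind of offload a memristor crossbar enables: the host ("master") programs a weight matrix once and then issues matrix-vector multiplies to the accelerator ("slave"), accepting limited analog precision in exchange for speed. The class and function names, quantization levels, and sizes are invented for illustration and are not the actual DPE interface.

```python
import numpy as np

# Hypothetical illustration: a memristor crossbar computes a matrix-vector
# product "in one analog step", but weights are stored at limited precision.
# All names and parameters here are assumptions, not the real DPE API.

class CrossbarDPE:
    def __init__(self, weights, levels=256):
        # Program weights into a fixed number of discrete conductance levels.
        w_min, w_max = weights.min(), weights.max()
        step = (w_max - w_min) / (levels - 1)
        self.weights = np.round((weights - w_min) / step) * step + w_min

    def mvm(self, x):
        # One "analog" matrix-vector multiply per call.
        return self.weights @ x

# Host ("master") side: offload the dense layer, keep general control flow.
rng = np.random.default_rng(0)
W = rng.standard_normal((128, 256))
x = rng.standard_normal(256)

dpe = CrossbarDPE(W)      # accelerator ("slave") programmed once
y_accel = dpe.mvm(x)      # fast, approximate crossbar result
y_exact = W @ x           # general-purpose CPU reference

print("max abs error from weight quantization:", np.abs(y_accel - y_exact).max())
```

The sketch shows why generalization matters: anything that is not a dense matrix-vector multiply still has to run on the general-purpose host, so broadening what the accelerator can express directly improves its utilization.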
Memory-Driven Computing
With the end of Moore's Law and the accelerating generation of data from IoT and edge sensors, a new era of computing is arriving. It is marked by increased use of accelerators and by the need for large amounts of memory to store all the collected data. New types of non-volatile memory (NVM) coming to market (3DXP, ReRAM, PCM, etc.) offer great promise but also raise new problems in how to use NVM, creating the need for new programming models and new applications. This approach is called memory-driven computing (MDC).
In this talk we revisit the hardware architecture for MDC, encompassing NVM, photonic interconnects, new interconnect standards such as Gen-Z, and accelerators. We then describe new applications and the speedups offered by progressively deeper adoption of MDC. Finally, we offer examples of new programming models for MDC. With this comprehensive treatment of architecture, programming models, and applications, we can finally address the demands of the new era of computing.
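As a flavor of what an MDC programming model can look like, here is a minimal sketch that treats a memory-mapped file as a stand-in for byte-addressable NVM: data structures live directly in the "persistent" pool via loads and stores and survive restarts. The file name, pool size, and record layout are assumptions for illustration; a real MDC stack would map fabric-attached non-volatile memory rather than a local file.

```python
import mmap, os, struct

# Minimal sketch: a memory-mapped file stands in for byte-addressable NVM.
# PM_PATH and POOL_SIZE are hypothetical values chosen for illustration.
PM_PATH = "pmem_pool.bin"
POOL_SIZE = 4096

def open_pool(path, size):
    fd = os.open(path, os.O_CREAT | os.O_RDWR)
    os.ftruncate(fd, size)
    return mmap.mmap(fd, size)   # load/store access, no read()/write() calls

pool = open_pool(PM_PATH, POOL_SIZE)

# An "in-memory" data structure that persists across runs: a counter at offset 0.
(count,) = struct.unpack_from("<Q", pool, 0)
count += 1
struct.pack_into("<Q", pool, 0, count)
pool.flush()                     # analogous to flushing CPU caches to NVM

print(f"this program has run {count} time(s) against {PM_PATH}")
```

The design point the sketch illustrates is that, with large persistent memory pools, data no longer has to be serialized to storage and re-loaded; programs operate on it in place.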
Applying AI/ML to Cybersecurity
Cybersecurity is one of the key risks for any business today. The growing attack surface ranges from amateur threats, such as phishing, to sophisticated distributed denial-of-service attacks. Prevention is nearly impossible: given enough time, attackers will get in, because the cost of an attack is low and automated probing will find a weakness. Advanced persistent threats show that attackers are patient. Defense depends on security analysts, who are scarce, often undertrained, and subject to high turnover. AI/ML can help detect threat signatures across the corporation and advise security analysts. It can drive response times down from (hundreds of) hours to seconds and scale analyst effectiveness from one or two incidents to thousands. With an adequate knowledge base, it can preserve corporate knowledge and transfer responses learned by the AI to new analysts.

To advance the adoption of AI/ML applied to cybersecurity, HPE is partnering with IEEE, governments, and universities around the world. We are defining an annual grand challenge in applying AI/ML to cybersecurity, comprising: 1) an arena/rodeo to operate the challenge; 2) standards for operational metrics (preventing attacks and ensuring the AI performs as expected), training data, workflows integrating AI into current processes, and a human-machine teaming model; and 3) a knowledge base for network traffic capture, policy for AI/ML configuration, AI strategies, the execution environment, and captured security analyst knowledge. Over the years, the grand challenge will build a repository of training data, shared data logs, and attack details. As a result, it will increase the adoption of AI/ML in cybersecurity and reduce attacks.
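To illustrate the kind of detection assistance described above, here is a minimal sketch using scikit-learn's IsolationForest to flag anomalous connections for analyst review. The synthetic features (bytes sent, session duration, failed logins) and thresholds are invented for illustration and are not part of HPE's tooling or the grand challenge itself.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Minimal sketch of ML-assisted detection: learn what "normal" connections
# look like, then surface outliers to a security analyst in seconds.
# The three features and their distributions are assumptions for illustration.
rng = np.random.default_rng(42)
normal = np.column_stack([
    rng.normal(5e4, 1e4, 1000),   # bytes sent per session
    rng.normal(30, 10, 1000),     # session duration (seconds)
    rng.poisson(0.1, 1000),       # failed login attempts
])
suspicious = np.array([
    [5e6, 2, 40],                 # huge transfer, short session, many failures
    [8e5, 1, 25],
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
scores = model.decision_function(suspicious)   # lower = more anomalous
for features, score in zip(suspicious, scores):
    verdict = "ALERT for analyst" if score < 0 else "ok"
    print(features, f"score={score:.3f}", verdict)
```

In practice the value comes less from any single model than from the surrounding workflow: curated training data, integration into analyst processes, and a knowledge base that captures how alerts were resolved, which is exactly what the grand challenge aims to standardize.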
Technology Predictions: Art, Science, and Fashion
Predicting the future is never easy; it always entails a degree of uncertainty, if not luck. Predicting technology trends is even harder, as it requires both technical and business acumen: will the technology be developed, productized, and ultimately adopted by the market? It is almost an art to distinguish a fashion from a true scientific trend. At the same time, the public likes to read predictions, and many individuals and organizations, such as Gartner, MIT, and Forbes, regularly produce them.

The IEEE Computer Society started its technology predictions informally in early 2010 and formally, via annual press releases, in 2014, followed by their respective scorecards starting in 2016. We realized that our audience appreciates self-evaluation, hence we introduced scorecards at the end of each prediction period. Our predictions have reached a substantial audience; in 2018, for example, they were picked up by 300 media outlets (an audience of 84.6 million), a reach entirely different from classical publishing. We consider predictions a new type of publication: lightweight and short (approximately a paragraph per prediction). These predictions also triggered other media outreach, such as blogs, interviews, panel sessions, and a special issue of IEEE Computer magazine. Over the years we became better at announcing the report through press releases and social media, to the extent that it became visible to the IEEE Board of Directors and found its way into the report of the IEEE Executive Director. One notable side product that grew out of our predictions was the 2022 report, which comprehensively predicted 23 technologies seven years ahead. It was accompanied by a sister report, written by the Industrial Technology Research Institute (ITRI) in Taiwan, on technology predictions specific to Asia. These technology predictions surpassed all our expectations in terms of impact, and we plan to continue them for as long as the audience has interest.