Rajiv Gupta

2024-2026 Distinguished Visitor


Bio:

Rajiv Gupta is a Distinguished Professor and the Amrik Singh Poonian Professor of Computer Science at UC Riverside, where he is a member of the RIPLE research group. His research interests include programming, compiler, runtime, and architectural support for parallel and distributed heterogeneous systems, as well as software tools for monitoring and managing runtime behavior. He has coauthored 319 papers and is a coinventor of 9 US patents. His h-index is 66, with over 15,500 citations. Rajiv has supervised the PhD dissertations of 38 students, including two winners of the ACM SIGPLAN Outstanding Doctoral Dissertation Award in Programming Languages: Xiangyu Zhang (2009), now at Purdue University, and Rastislav Bodik (2001), now at the University of Washington. Five of his advisees are recipients of the NSF CAREER Award. Papers coauthored by Rajiv and his students were selected for inclusion in 20 Years of PLDI (1979-1999), the SIGSOFT Distinguished Paper Award at ICSE 2003, the best paper awards at PACT 2010 and HiPC 2020, the best student paper award at LCPC 2015, the most original paper award at ICPP 2003, and the outstanding paper award at ICECCS 1996. He received the UCR Doctoral Dissertation Advisor/Mentor Award (2012).

Rajiv is a Fellow of the ACM (2009), the IEEE (2008), and the AAAS (2011). He is a recipient of the NSF Presidential Young Investigator Award (1991). Rajiv served on the Technical Advisory Group on Networking and Information Technology created by the US President’s Council of Advisors on Science and Technology (PCAST) during its review of the Federal NITRD Program (2006-2007). He served as the Conference Chair for FCRC 2015; General Chair for the PPoPP’20, ASPLOS’11, and PLDI’08 conferences; and Co-General Chair for the ASPLOS’24 and CGO’05 conferences. He also served as the Program Chair for the PLDI’03, HPCA’03, LCTES’05, and CC’10 conferences; Program Co-Chair for the CC’21 and HiPEAC’08 conferences; and Program Vice-Chair for the HiPC’03 conference. Rajiv has served on the program committees of major conferences in programming languages/compilers and computer architecture, including PLDI, POPL, PPoPP, OOPSLA, CGO, ISCA, ASPLOS, MICRO, HPCA, ICS, ICDCS, PACT, and HiPEAC. He served as an Associate Editor for ACM TACO and IEEE TC and currently serves on the editorial boards of the Parallel Computing and Computer Languages journals.

Abstracts:

Parallel Graph Processing on Clusters, Multicores, and GPUs

The importance of iterative graph algorithms has grown due to their widespread use in graph analytics. Although computations on graphs with millions of nodes and edges contain vast amounts of data-level parallelism, exploiting this parallelism is challenging due to the highly irregular nature of real-world graphs. In this talk I will present an overview of research by the GRASP group (http://grasp.cs.ucr.edu/index.html) that greatly improves the communication efficiency, I/O efficiency, and SIMD efficiency of graph processing on a cluster, on a single multicore machine, and on GPUs. I will describe in greater detail how asynchronous graph processing can tolerate communication latency and efficiently support fault tolerance on a cluster via new Relaxed Consistency and Confined Recovery protocols. Finally, I will briefly outline techniques for optimizing the evaluation of multiple graph queries by evaluating them synergistically.
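
As a rough, purely illustrative aside (this is not code from the GRASP systems), the short Python sketch below shows the kind of iterative, worklist-driven graph computation these frameworks target: a single-source shortest-paths query whose vertex updates are monotonic, the same property that lets asynchronous engines tolerate stale values and out-of-order work.

# Minimal sketch of an iterative, worklist-driven graph computation:
# single-source shortest paths. Illustrative only; real frameworks
# partition the graph and run many such relaxations in parallel.
from collections import defaultdict

def sssp(edges, source):
    """edges: iterable of (u, v, weight); returns a dict vertex -> distance."""
    graph = defaultdict(list)
    for u, v, w in edges:
        graph[u].append((v, w))

    dist = defaultdict(lambda: float("inf"))
    dist[source] = 0.0
    worklist = [source]                      # only active vertices are processed
    while worklist:
        u = worklist.pop()
        for v, w in graph[u]:
            candidate = dist[u] + w
            if candidate < dist[v]:          # monotonic: distances only shrink
                dist[v] = candidate
                worklist.append(v)           # re-activate the affected neighbor
    return dict(dist)

if __name__ == "__main__":
    g = [("a", "b", 1.0), ("b", "c", 2.0), ("a", "c", 5.0)]
    print(sssp(g, "a"))                      # {'a': 0.0, 'b': 1.0, 'c': 3.0}

Because each update only lowers a distance, the final result is the same regardless of the order in which active vertices are processed, which is what asynchronous, latency-tolerant execution relies on.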

Efficient Big Graph Analytics Via Redundancy Reduction

A great deal of research in graph analytics has focused on building frameworks that exploit the parallelism available on various hardware platforms, ranging from a single GPU or a multicore server to a cluster of servers and/or GPUs. In this talk I will describe recent work by the GRASP group (http://grasp.cs.ucr.edu/index.html) that combines parallelism with a complementary approach that scales performance by comprehensively reducing redundancy. Redundancy can be found and removed not only in the computation and propagation of values, but also in graph traversal and in graph data transfer across the memory hierarchy. Our work has applied redundancy reduction to the two main graph analytics scenarios, involving static (fixed) graphs and evolving (changing) graphs, and obtained substantial performance improvements.

To remove redundancy from the evaluation of a query over a static graph, we combine the use of a small proxy graph and the large original graph in a two-phase query evaluation. The first phase evaluates the query on the proxy graph, incurring low overheads and producing mostly precise results. The second phase uses these mostly precise results to bootstrap query evaluation on the larger original graph, producing fully precise results. We have developed a new form of proxy graph named the Core Graph (CG) that is not only small but also produces highly precise results [EuroSys 2024]. A CG is a subgraph of the larger input graph that contains all vertices but, on average, only 10.7% of the edges, and yet it produces precise results for 94.5-99.9% of the vertices in the graph for different kinds of queries.
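
To make the two-phase idea concrete, here is a minimal sketch that reuses the sssp() routine from the earlier sketch; the hypothetical, already-chosen proxy_edges subset merely stands in for a Core Graph, and how the CG itself is selected is the subject of the EuroSys 2024 paper.

# Two-phase evaluation with a proxy subgraph (sketch, not the actual CG system).
# Phase 1 runs on the small proxy; phase 2 bootstraps the full graph with those
# results. Because the proxy is a subgraph, its distances are valid upper bounds,
# so the monotonic relaxation still converges to exact answers.
from collections import defaultdict

def two_phase_sssp(all_edges, proxy_edges, source):
    warm_start = sssp(proxy_edges, source)   # phase 1: cheap, mostly precise

    dist = defaultdict(lambda: float("inf"), warm_start)
    graph = defaultdict(list)
    for u, v, w in all_edges:
        graph[u].append((v, w))

    worklist = list(dist)                    # phase 2: start from bootstrapped vertices
    while worklist:
        u = worklist.pop()
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                worklist.append(v)
    return dict(dist)

In practice the savings come from the proxy results already being precise for most vertices, so the second phase has far less work to do than a from-scratch evaluation.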

To remove redundancy from the evaluation of a query over an evolving graph (i.e., evaluation of a query on multiple snapshots of a changing graph), we propose Common Graph [ASPLOS 2023]. We first observe that edge deletion operations are significantly more expensive than edge addition operations for many graph queries. Common Graph converts all deletions to additions by finding a common graph that exists across all snapshots. After computing the query on this graph, reaching any snapshot simply requires adding the missing edges and incrementally updating the query results. Common Graph also allows common additions to be shared among the snapshots that require them, and it breaks the sequential dependency inherent in the traditional streaming approach, where snapshots are processed in sequence, enabling additional opportunities for parallelism. Common Graph achieves a 1.38x-8.17x performance improvement over streaming-based KickStarter across multiple benchmarks.
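
The deletion-to-addition conversion can likewise be sketched in a few lines, again reusing the monotonic sssp() relaxation from above; the real system additionally shares additions across snapshots and processes them in parallel, which this sequential sketch omits.

# Sketch of the common-graph idea for evolving-graph queries (illustrative only).
# The query is computed once on the edges shared by every snapshot; each snapshot
# is then reached by *adding* its remaining edges and incrementally relaxing,
# so no expensive deletion handling is ever needed.
from collections import defaultdict

def evaluate_snapshots(snapshots, source):
    """snapshots: non-empty list of edge sets, each a set of (u, v, weight)."""
    common = set.intersection(*snapshots)    # edges present in every snapshot
    base = sssp(common, source)              # computed once, shared by all snapshots

    results = []
    for snap in snapshots:
        dist = defaultdict(lambda: float("inf"), base)
        graph = defaultdict(list)
        for u, v, w in snap:
            graph[u].append((v, w))
        worklist = [u for u, _, _ in snap - common]   # seed with sources of added edges
        while worklist:
            u = worklist.pop()
            for v, w in graph[u]:
                if dist[u] + w < dist[v]:    # additions can only lower distances
                    dist[v] = dist[u] + w
                    worklist.append(v)
        results.append(dict(dist))
    return results

Starting every snapshot from the shared base result is what removes the snapshot-after-snapshot sequential dependency of the streaming approach.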

Advances in Artificial Intelligence for Genomic Medical Diagnosis

Genomics is a new and very active application area of computer science. Over the past ten years there has been an explosion of genomics data: the entire DNA sequences of several organisms, including humans, are now available. These are long strings of base pairs (A, C, G, T) containing all the information necessary for an organism’s development and life. Computer science plays a central role in genomics, from sequencing and assembling DNA sequences to analyzing genomes in order to locate genes, repeat families, similarities between sequences of different organisms, and more. The area of computational genomics includes both the application of existing methods and the development of novel algorithms for the analysis of genomic sequences. Artificial intelligence (AI) and machine learning have significantly influenced many facets of the healthcare sector. Advances in technology have paved the way for the analysis of big datasets in a cost- and time-effective manner. Efforts to reduce mortality rates require early diagnosis for effective therapeutic interventions. However, metastatic and recurrent cancers evolve and acquire drug resistance. It is imperative to detect novel biomarkers that induce drug resistance and to identify therapeutic targets that enhance treatment regimens. The introduction of next-generation sequencing (NGS) platforms to address these demands has revolutionized the future of precision oncology.
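
As a small, self-contained illustration (not drawn from the talk) of the kind of string computation that sequence-similarity analysis builds on, the sketch below computes the Levenshtein edit distance between two DNA sequences over the alphabet {A, C, G, T}.

# Edit distance between two DNA strings via dynamic programming (illustrative).
def edit_distance(s: str, t: str) -> int:
    """Minimum number of single-character insertions, deletions, and substitutions turning s into t."""
    prev = list(range(len(t) + 1))           # distances from "" to each prefix of t
    for i, a in enumerate(s, start=1):
        curr = [i]                           # distance from s[:i] to ""
        for j, b in enumerate(t, start=1):
            cost = 0 if a == b else 1
            curr.append(min(prev[j] + 1,          # delete a
                            curr[j - 1] + 1,      # insert b
                            prev[j - 1] + cost))  # match or substitute
        prev = curr
    return prev[-1]

if __name__ == "__main__":
    print(edit_distance("ACGTACGT", "ACGGACGA"))  # 2

Real genomic pipelines use far more scalable alignment algorithms and heuristics, but they share this dynamic-programming structure.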


Links:

Website

LinkedIn
