Illuminating the Way: A Conversation with Debendra Das Sharma, Technical Achievement Award Recipient

IEEE Computer Society Team
Published 03/22/2024

A luminary in computer architecture and interconnect technologies, Dr. Debendra Das Sharma has had a major impact on the field. A Ph.D. graduate in Computer Engineering from the University of Massachusetts, Amherst, Sharma is currently a Senior Fellow at Intel and co-General Manager of Memory and I/O Technologies within the Data Platforms and Artificial Intelligence Group. His achievements extend well beyond academia: he holds an impressive portfolio of more than 190 US patents and over 500 patents worldwide. Additionally, he consistently shares his ideas and insights as a sought-after keynote speaker, plenary speaker, and distinguished lecturer at conferences such as the IEEE International Test Conference, IEEE Hot Interconnects, IEEE Cool Chips, and many more.

Dr. Das Sharma has made valuable contributions to critical interconnect technologies, including PCIe, CXL, and UCIe, which have shaped the landscape of modern computing. He serves on the Board of Directors and as treasurer of the PCI Special Interest Group (PCI-SIG), where he continues to lead the evolution of the PCIe specifications today.

In honor of his many achievements, he has received the 2024 IEEE Computer Society Edward J. McCluskey Technical Achievement Award “for architectural innovations driving open composable systems at package, node, rack, and pod levels with PCI-Express, CXL, and UCIe standards.”

 

Congratulations on receiving this year’s IEEE Computer Society Edward J. McCluskey Technical Achievement Award! How does receiving recognition such as this impact your perspective on your work, and how do you see it influencing the next phase of your career?


Thank you! I’d like to thank the IEEE Computer Society for the recognition. I feel this recognition belongs to the entire industry for three very successful standards: PCI-Express (PCIe), Compute Express Link (CXL), and Universal Chiplet Interconnect Express (UCIe). PCI-Express has been the backbone interconnect of all computer systems for the past two decades and will continue to be so for decades to come. CXL is a key interconnect in data centers with its support for heterogeneous computing, memory scalability, resource pooling, and fine-grained data sharing in large distributed systems. UCIe is the interconnect between chiplets on-package to design a system-in-package.

Most of my career has been spent developing these three industry-standard interconnects right from their inception. It gives me great pleasure to see them deployed in real products. Going forward, the computing landscape will continue to see a lot of innovation. Interconnects are one of the key pillars in this landscape. I plan to continue to influence the direction the industry takes with these interconnect standards as we continue our journey to deliver exponential growth in compute capability under ever-increasing power constraints.




Can you share a pivotal moment in your career that significantly contributed to advancements in technologies such as Peripheral Component Interconnect Express (PCIe), Compute Express Link (CXL), and Universal Chiplet Interconnect Express (UCIe)? How did these advancements impact the industry?


Numerous innovations over the years have made PCIe, CXL, and UCIe successful. Let me give you one pivotal moment that led to the creation and direction of CXL. As you will see here, successful industry standards require the right technical approach, a long-term vision, conviction, and most importantly, trust earned by champions from industry peers, including competitors.

At Intel, and at HP before that, I was involved with I/O, memory, and cache-coherency protocols. Naturally, we always asked why we needed three different interconnects. We had been exploring overlaying memory and coherency protocols on top of PCIe since 2006 for applications like accelerators and memory pooling. Our analysis showed that PCIe speeds needed to be above a certain range for applications like pooling to be viable. Intel Accelerator Link (IAL) was conceived when PCIe was embarking on its fifth generation at 32 GT/s. Some of the key learnings from enabling our proprietary cache-coherency protocol came in handy here. It boiled down to three things. First, use the PCIe infrastructure as-is, with some simple enhancements to overlay coherency and memory semantics. We resisted the temptation to reinvent the wheel. It is easy to underestimate the heavy lift it takes to enable an ecosystem: everything from the IP and validation infrastructure to silicon, software, and post-silicon tools. We overlaid coherency and memory semantics on PCIe with about a dozen commands each. Second, low latency is critical. We needed to ensure the ecosystem could deliver latency close to that of our proprietary cache-coherent link. Third is simplicity, which means an asymmetric protocol. We carefully abstracted the caching-agent functionality to work with any company’s implementation of cache coherency (the home-agent part). Eventually, Intel donated the IAL 1.0 specification, which became the CXL 1.0 specification when we formed the CXL consortium. These three technical aspects, combined with backward compatibility, have contributed immensely to the runaway success of CXL, in my opinion.
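To make the asymmetric split more concrete, here is a minimal, purely illustrative sketch (my own simplification, not from the interview or the CXL specification; the class and method names are hypothetical placeholders, not real CXL agents or opcodes). The point it shows is that the device implements only the simpler caching-agent side and always defers coherency resolution to the host's home agent, whatever that host implementation happens to be.

```python
# Illustrative-only sketch of an asymmetric coherency protocol:
# the device side issues requests; the host side owns coherency resolution.
from dataclasses import dataclass, field

@dataclass
class HostHomeAgent:
    """Host-side home agent (implementation-specific; snooping omitted here)."""
    memory: dict = field(default_factory=dict)

    def handle_read(self, addr: int) -> int:
        return self.memory.get(addr, 0)

    def handle_write(self, addr: int, data: int) -> None:
        self.memory[addr] = data

@dataclass
class DeviceCachingAgent:
    """Device-side caching agent: caches data but never resolves coherency itself."""
    home: HostHomeAgent
    cache: dict = field(default_factory=dict)

    def read(self, addr: int) -> int:
        if addr not in self.cache:              # miss: ask the home agent
            self.cache[addr] = self.home.handle_read(addr)
        return self.cache[addr]

    def write(self, addr: int, data: int) -> None:
        self.cache[addr] = data
        self.home.handle_write(addr, data)      # write-through for simplicity

host = HostHomeAgent()
device = DeviceCachingAgent(home=host)
device.write(0x1000, 42)
print(device.read(0x1000))  # -> 42
```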

After deciding on the technical directions above, as we were developing the IAL specification, the question was, “Do we open this up or keep it Intel-proprietary?” It was a business decision! Intel had usages and devices that could justify a proprietary specification. We had our networking solution, FPGA, and memory expansion on the IAL interconnect. Back then there were several competing standards: CCIX, OpenCAPI, and Gen-Z, each with multi-generational products, but none with real industry-wide traction. So, Intel opening a fourth standard risked not getting traction as a latecomer while still having to support those who would be interested in it. The safe thing seemed to be to keep IAL proprietary, license it based on need, and open it up a few years down the road if the market was amenable. However, that seemingly safe approach carried the risk that customers would become more entrenched over time in multiple competing solutions and might not easily move to one common standard. As a result, each silicon vendor might be forced to deploy multiple standards for multiple customers. Some of us advocated internally to open up IAL as soon as the specification was complete, so that the industry would feel welcome to innovate at the same time Intel was innovating with its products. That was the direction we took. I spent a lot of time and energy, along with my colleagues, to make this a success. I am very happy that the industry embraced CXL enthusiastically, even though it meant some silicon providers had to change the direction of their products mid-stream to intercept CXL away from the competing standard they were implementing. Within three years, all the competing standards folded and donated their assets to CXL. In addition to consolidating the industry behind CXL, the decision also had several positive effects that I had not anticipated back then. PCIe found a solid application in CXL memory, which demands faster transitions in its data rate. And thanks to the goodwill we had built with CXL and PCIe, when I led the launch of UCIe for die-to-die interconnects in 2022, the industry immediately embraced it.

 

As a member of the Board of Directors and treasurer for the PCI Special Interest Group (PCI-SIG), you’ve played a crucial role in the development of PCIe specifications. How do you see the future of PCIe evolving?


I expect PCIe will continue to be the de facto interconnect across the entire compute continuum for the foreseeable future. Right now, we are developing the seventh generation of PCIe. We double the per-pin bandwidth every generation while being fully backward compatible (that is, interoperable) with all prior generations of PCIe. When we started developing the first generation of PCIe, I thought one decade and three generations of backward compatibility would be a huge success story. We have far exceeded those metrics with more than two decades and seven generations. We can probably do the eighth generation of PCIe in a backward-compatible manner. We are also evolving PCIe in an optical-friendly manner, so we will have a migration path to optics in the future, both with off-package and co-packaged optics. We have also embarked on a journey to enable multiple paths between any two PCIe devices while keeping the existing architectural ordering mechanisms intact. This will ease migration to higher speeds and enable people to deploy multiple independent links connecting to a device. The key thing to note here is that the innovation capability of close to a thousand member companies does, and will continue to do, wonders in evolving the technology and keeping it relevant for decades to come.
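To put the per-pin doubling in concrete terms, here is a small illustrative sketch (my own addition, based on the publicly announced per-lane signaling rates rather than anything stated in the interview; note that the jump from 5 GT/s to 8 GT/s still roughly doubled bandwidth because of the 128b/130b encoding change discussed later in the interview):

```python
# Publicly announced PCIe per-lane signaling rates (GT/s) by generation.
# The Gen 2 -> Gen 3 step is 5 -> 8 GT/s rather than 10 GT/s; bandwidth still
# roughly doubles thanks to the switch from 8b/10b to 128b/130b encoding.
rates_gt_s = {1: 2.5, 2: 5, 3: 8, 4: 16, 5: 32, 6: 64, 7: 128}

for gen, rate in rates_gt_s.items():
    print(f"PCIe {gen}.0: {rate} GT/s per lane, per direction")
```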

 

With over 190 US patents and 500+ patents worldwide, your contributions to the field are extensive. Can you share an instance when one of your patents played a pivotal role in advancing a particular technology? Why does this specific instance stand out the most?


About 65% of my total patents, and 90% of my most recent 100 patents, are deployed in the three industry standards I am associated with. If I must pick one, it would be the 128b/130b encoding, which has been in use since the PCI-Express 3.0 specification. PCIe 2.0 had a data rate of 5 GT/s. The encoding thus far had been 8b/10b. So, to double the data rate, PCIe 3.0 should have been 10 GT/s. However, our analysis showed that servers would have had to pay a huge power and cost penalty at 10 GT/s, which we could circumvent by going to 8 GT/s instead. So, we decided to keep the frequency at 8 GT/s and go with a new 128b/130b encoding scheme to gain back the 25% overhead of 8b/10b encoding and still double the bandwidth. This mechanism was used in two subsequent generations (PCIe 4.0 at 16 GT/s and PCIe 5.0 at 32 GT/s). All systems shipping with PCIe 3.0 or higher since 2011 use this encoding. It will be used for the rest of the life of PCI-Express at these speeds, for as long as PCIe continues to be backward compatible.
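As a rough back-of-the-envelope illustration of that trade-off (my own sketch, not from the interview; it ignores framing, protocol, and other overheads):

```python
# Per-lane, per-direction payload bandwidth after encoding overhead.
def effective_gbps(line_rate_gt_s: float, payload_bits: int, total_bits: int) -> float:
    """Line rate (GT/s) scaled by the encoding's payload efficiency."""
    return line_rate_gt_s * payload_bits / total_bits

pcie2 = effective_gbps(5.0, 8, 10)      # PCIe 2.0: 5 GT/s with 8b/10b     -> 4.00 Gb/s
naive = effective_gbps(10.0, 8, 10)     # hypothetical 10 GT/s with 8b/10b -> 8.00 Gb/s
pcie3 = effective_gbps(8.0, 128, 130)   # PCIe 3.0: 8 GT/s with 128b/130b  -> ~7.88 Gb/s

print(f"PCIe 2.0: {pcie2:.2f} Gb/s | 10 GT/s w/ 8b/10b: {naive:.2f} Gb/s | PCIe 3.0: {pcie3:.2f} Gb/s")
```

Running at 8 GT/s with 128b/130b delivers essentially the same payload bandwidth that 10 GT/s with 8b/10b would have, which is roughly double the PCIe 2.0 rate.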

This invention stands out due to its impact and the length of time it has been deployed across all systems. It also helped the industry transition quickly to subsequent generations of PCIe at 16 GT/s, 32 GT/s, and 64 GT/s (vs. 20/40/80 GT/s), considering the availability of high-volume platform ingredients at the time of these technology transitions.

You’ve been actively involved in IEEE Computer Society conferences and events such as the IEEE International Test Conference, IEEE Hot Interconnects, IEEE Cool Chips, and more. How has your participation in these gatherings contributed to your professional network? What connections have had the most significant impact on your work and achievements?


It is always great to participate in IEEE conferences and network. I get inspired by the energy and the quality of work people are doing. It takes my mind off the tyranny of the urgent and lets me look at other things. One of the immediate advantages for me is understanding the pain points of different segments of the market and exploring the opportunities. We do not want to be a walled garden with our technologies. If you look at PCIe, CXL, or UCIe, we are happy to leverage what has been done by others and innovate only where essential. For example, the 8b/10b encoding we used in the first two generations of PCIe was used by other standards before us. Similarly, when we adopted PAM-4 signaling for PCIe 6.0, we studied and gathered important learnings from the multiple test chips other standards had done ahead of us so that we could execute with agility and quality. Now, we do not copy other standards blindly. For example, we did not go with the very low bit error rate that other standards have with PAM-4; it would not work for us given our latency sensitivity. So, we chose a bit error rate better suited to our requirements and innovated on multiple fronts to meet them. In my opinion, this is how things should be done: leverage what makes sense and invent when you must. It comes from decades of experience. Learning and leveraging is not a one-way street. We also need to be very open in sharing our experiences, both good and bad, so that others can learn and do better than us later. All of us are technologists trying our best to deliver compelling technologies for the greater good. Conferences are a great venue for the dissemination of ideas.

Additionally, your participation as a panelist at several conferences suggests a deep involvement in shaping industry discussions. How do these panel experiences contribute to the advancement of technology, and what insights have you gained from engaging with your peers in these forums?


A panel is a more formal engagement, in addition to the sidebar discussions at conferences. It also tends to be more focused and specialized. I get visibility into the implementation challenges people are encountering, which augments what I naturally learn as the chief I/O architect for Intel through our industry-enabling efforts. I also gain insights into new usage models that people are thinking about. For example, optical-friendly PCIe and some of the fabric-related enhancements in CXL 3.0 came from discussions with my peers in these forums.

More About Debendra Das Sharma


Dr. Debendra Das Sharma is an Intel Senior Fellow and co-GM of Memory and I/O Technologies, Data Platforms and Artificial Intelligence Group, at Intel Corporation. He is a leading expert on I/O subsystem and interface architecture. He delivers Intel-wide critical interconnect technologies in Peripheral Component Interconnect Express (PCIe), Compute Express Link (CXL), Universal Chiplet Interconnect Express (UCIe), and Intel’s coherency interconnect, as well as their implementations.

Dr. Das Sharma is a member of the Board of Directors and treasurer for the PCI Special Interest Group (PCI-SIG). He has been a lead contributor to PCIe specifications since its inception. He is the co-inventor of CXL and a founding member of the CXL consortium. He co-leads the CXL Board Technical Task Force, and is a leading contributor to CXL specifications. He co-invented the chiplet interconnect standard UCIe and is the chair of the UCIe consortium.

Dr. Das Sharma has a Bachelor of Technology (with honors) degree in Computer Science and Engineering from the Indian Institute of Technology, Kharagpur, and a Ph.D. in Computer Engineering from the University of Massachusetts, Amherst. He holds 190+ US patents and 500+ patents worldwide. He is a frequent keynote/plenary speaker, distinguished lecturer, invited speaker, and panelist at the IEEE International Test Conference, IEEE Hot Interconnects, IEEE Cool Chips, IEEE 3DIC, SNIA SDC, PCI-SIG Developers Conference, CXL consortium, Open Server Summit, Open Fabrics Alliance, Flash Memory Summit, Intel Innovation, and universities (CMU, Texas A&M, Georgia Tech, UIUC, UC Irvine). He received the Distinguished Alumnus Award from the Indian Institute of Technology, Kharagpur in 2019, the IEEE Region 6 Outstanding Engineer Award in 2021, the first PCI-SIG Lifetime Contribution Award in 2022, the IEEE Circuits and Systems Industrial Pioneer Award in 2022, and the IEEE Computer Society Edward J. McCluskey Technical Achievement Award in 2024.