
The National Institutes of Health's (NIH) modernized high-performance computing (HPC) capabilities enable significant innovations in scientific research. NIH's supercomputer Biowulf is the first and largest supercomputer dedicated entirely to advancing biomedical research, and it produced the first complete, gapless sequence of a human genome. Looking to the future, HPC's emerging scientific requirements include support for petabyte-scale data, increased CPU capacity, additional storage, and greater flexibility. To meet these challenges, HPC is exploring transformative technologies such as generative AI (Gen AI) and quantum computing to accelerate scientific discovery. This presentation will discuss the current state of HPC at NIH, its limitations, and future exploration of Gen AI and quantum computing.

HPC
AI/ML Compute
Systems Infrastructure/Architecture

Author:

Xavier Soosai

Chief Information Officer
Center for Information Technology, National Institutes of Health

As the Director of the Office of Information Technology Services of the Center for Information Technology (CIT), Soosai oversees ten service areas and the delivery of scientific research and business operations across the institutes and centers (ICs) at NIH. This includes maintaining the high-performance computing environment used by NIH intramural scientists; maintaining NIH's secure, high-speed network; and ensuring the viability and availability of collaboration services, compute hosting and storage services, identity and access management services, service desk support, and more for the NIH community.

Soosai works with CIT leadership and internal service area managers and collaborates with NIH ICs to define scope and provide technical expertise, strategic planning, and leadership for local and enterprise IT projects that drive efficiency and innovation across NIH. Additionally, Soosai is responsible for directing the evaluation and adoption of rapidly evolving technology and forecasting future technology needs.

 


Compute Express Link (CXL) has emerged as a promising interconnect technology that enables seamless high-speed, low-latency communication between host processors and various peripheral devices, making it attractive for memory-intensive applications. In this talk, we'll evaluate the performance characteristics of CXL memory in real-world environments using ASIC-based CXL memory. We then present our learnings from microbenchmarks, explore potential use cases, and evaluate the benefits they gain from CXL memory. Based on our comprehensive evaluations, we share our insights into how CXL memory can be better utilized to optimize the performance and cost of data center applications, as well as our vision for a next-generation memory architecture leveraging new capabilities from CXL 2.0/3.x.
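
CXL expander memory is typically exposed to the operating system as a CPU-less NUMA node, so the kind of microbenchmark described above can be approximated with ordinary user-space code. The sketch below is a minimal example under that assumption; the node number, buffer size, and use of Python are illustrative choices, not the benchmark suite used in the talk.

```python
# Minimal sketch of a memory-bandwidth microbenchmark. Assumptions: the
# ASIC-based CXL expander appears as a CPU-less NUMA node (say node 1) and
# the script is launched as `numactl --membind=1 python bench.py` so the
# buffer below is allocated in CXL-attached memory.
import time
import numpy as np

BUF_BYTES = 1 * 1024**3                      # 1 GiB working set, well beyond the LLC
buf = np.ones(BUF_BYTES // 8, dtype=np.float64)

def read_bandwidth(arr: np.ndarray, iters: int = 5) -> float:
    """Stream through the array and report best-case sequential read GiB/s."""
    best = 0.0
    for _ in range(iters):
        t0 = time.perf_counter()
        _ = float(arr.sum())                 # forces a full pass over the buffer
        dt = time.perf_counter() - t0
        best = max(best, arr.nbytes / dt / 1024**3)
    return best

if __name__ == "__main__":
    print(f"sequential read bandwidth: {read_bandwidth(buf):.1f} GiB/s")
```

Running the same script bound first to local DRAM and then to the CXL node yields the kind of latency/bandwidth deltas such microbenchmarks are meant to expose.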

Emerging Memory Innovations
HBM/CXL
Systems Infrastructure/Architecture

Author:

Ping Zhou

Researcher/Architect
ByteDance Ltd.

Ping Zhou is a Senior Researcher/Architect with ByteDance, focusing on next-gen infrastructure innovations with hardware/software co-design. Prior to joining ByteDance, Ping worked with Google, Alibaba, and Intel on products including Google Assistant, Optane SSD, and Open Channel SSD. Ping earned his PhD in Computer Engineering at the University of Pittsburgh, specializing in emerging memory and storage technologies.

Online commercial app marketplaces serve millions of apps to billions of users in an efficient manner. Bandit optimization algorithms are used to ensure that recommendations are relevant and converge to the best-performing content over time. However, directly applying bandits to real-world systems, where the catalog of items is dynamic and continuously refreshed, is not straightforward. One challenge is the non-trivial computation cost for large-scale systems, which is further aggravated by user-privacy constraints on server-side computation. To address this problem, we introduce an efficient two-layer bandit approach that is contextualized to user cohorts of similar taste. We mitigate cannibalization at runtime within a single multi-intent content surfacing platform by formalizing relevant offline evaluation metrics and by incorporating cross-component interactions into the bandit rewards. The framework allows flexibility for tradeoffs between compute, storage, and model accuracy. User engagement in our proposed system has more than doubled, as measured by online A/B tests.
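
The abstract does not spell out the underlying algorithm, but the general shape of a cohort-contextualized, two-layer bandit can be sketched as follows. This is a minimal illustration assuming Bernoulli (click/no-click) rewards and Thompson sampling with Beta posteriors; cohort assignment, the reward shaping that accounts for cross-component interactions, and the compute/storage tradeoffs discussed in the talk all sit on top of this skeleton.

```python
# Illustrative sketch only -- not the production system described in the talk.
# Layer 1: users are mapped to cohorts of similar taste (assumed given here).
# Layer 2: each cohort runs its own Beta-Bernoulli Thompson-sampling bandit.
import numpy as np
from collections import defaultdict

class CohortBandit:
    def __init__(self, seed: int = 0):
        self.rng = np.random.default_rng(seed)
        # cohort -> item -> [alpha, beta] posterior parameters
        self.posteriors = defaultdict(lambda: defaultdict(lambda: [1.0, 1.0]))

    def select(self, cohort: str, candidates: list[str]) -> str:
        """Sample a plausible reward rate per candidate and pick the argmax."""
        post = self.posteriors[cohort]
        samples = {item: self.rng.beta(*post[item]) for item in candidates}
        return max(samples, key=samples.get)

    def update(self, cohort: str, item: str, clicked: bool) -> None:
        """Fold one observed click / no-click back into the item's posterior."""
        a, b = self.posteriors[cohort][item]
        self.posteriors[cohort][item] = [a + clicked, b + (not clicked)]

# Example: a refreshed catalog is handled naturally because unseen items
# simply start from the uniform Beta(1, 1) prior.
bandit = CohortBandit()
choice = bandit.select("casual_gamers", ["app_a", "app_b", "app_c"])
bandit.update("casual_gamers", choice, clicked=True)
```

The per-cohort posterior tables are compact, which hints at how such a design can trade compute and storage against accuracy, as the abstract describes.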

AI/ML Compute
Enterprise Workloads
Data Movement/Demands
Systems Infrastructure/Architecture

Author:

Puja Das

Senior Director, Personalization
Warner Bros. Entertainment

Dr. Puja Das leads the Personalization team at Warner Bros. Discovery (WBD), whose offerings include Max, HBO, Discovery+ and many more.

Prior to WBD, she led a team of applied ML researchers at Apple focused on building large-scale recommendation systems to serve personalized content on the App Store, Arcade, and Apple Books. Her areas of expertise include user modeling, content modeling, recommendation systems, multi-task learning, sequential learning, and online convex optimization. She also led the Ads prediction team at Twitter (now X), where she focused on relevance modeling to improve App Ads personalization and monetization across all Twitter surfaces.

She obtained her Ph.D. in Machine Learning from the University of Minnesota, where her dissertation focused on online learning algorithms that operate on streaming data. Her dissertation received the prestigious IBM Ph.D. Fellowship Award.

She is active in the research community and serves on program committees at ML and recommendation-system conferences. She has mentored several undergraduate and graduate students and has participated in various round-table discussions through the Grace Hopper Conference, the Women in Machine Learning program co-located with NeurIPS, AAAI, and the Computing Research Association's Women's chapter.

The presentation examines the evolution, current state, and prospective developments of data-driven machine learning. In an era where data has become a pivotal resource, it emphasizes data's indispensable role in shaping the machine learning landscape and shows how these changes have significantly influenced systems infrastructure.
Looking at the past, it traces the historical origins of data-driven modeling, charting its progression from rudimentary concepts to the intricate algorithms that underpin modern machine learning, and highlights early techniques such as perceptrons and decision trees and their enduring impact on the field.
Turning to the present, it discusses the transformative influence of big data and deep learning, illustrating real-world applications while highlighting the associated challenges and opportunities that have driven profound changes in systems infrastructure.
Looking towards the future, it offers insights into emerging trends and technologies, such as quantum computing and edge AI, that are poised to redefine machine learning and further revolutionize systems infrastructure.
By combining theoretical insights, empirical observations, and forward-looking perspectives, the presentation offers a comprehensive overview of past achievements, current dynamics, and potential future scenarios in data-driven machine learning, shedding light on how these changes have reshaped systems infrastructure.

Enterprise Workloads
AI/ML Compute
Emerging Memory Innovations

Author:

Rahul Gupta

AI Research Scientist
US Army Research Laboratory

Dr. Rahul Gupta has been working at the Army Research Lab (ARL) for more than a decade. In his current position, he conducts research and development using deep learning with artificial neural networks and convolutional neural networks. He joined ARL as a Distinguished Research Scholar and has led several successful programs. He became a Fellow of the American Society of Mechanical Engineers in 2014. He is passionate about mentoring and team building, with the goal of providing the Army with the best possible technology to dominate today's complex Multi-Domain Environment (MDE).

Systems Infrastructure/Architecture
AI/ML Compute
Market Analysis
Moderator

Author:

Mike Howard

Vice President of DRAM and Memory Markets
TechInsights

Mike has over 15 years of experience tracking the DRAM and memory markets. Prior to TechInsights, he built the DRAM research service at Yole. Before Yole, Mike spent time at IHS covering DRAM and at Micron Technology, where he held roles in engineering, marketing, and corporate development. Mike holds an MBA from The Ohio State University, and a BS in Chemical Engineering and a BA in Finance from the University of Washington.

 

Speakers

Author:

Murali Emani

Computer Scientist
Argonne National Lab

Murali Emani is a Computer Scientist in the Data Science group at the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. At ALCF, he co-leads the AI Testbed, which explores the performance and efficiency of novel AI accelerators for scientific machine learning applications. He also co-chairs the MLPerf HPC group at MLCommons to benchmark large-scale ML on HPC systems. His research interests are in scalable machine learning, AI accelerators, AI for science, and emerging HPC architectures. His current work includes:

- Developing performance models to identify and address bottlenecks when scaling machine learning and deep learning frameworks on emerging supercomputers for scientific applications.

- Co-design of emerging hardware architectures to scale up machine learning workloads.

- Efforts on benchmarking ML/DL frameworks and methods on HPC systems.

 

Author:

Nuwan Jayasena

Fellow
AMD

Nuwan Jayasena is a Fellow at AMD Research, and leads a team exploring hardware support, software enablement, and application adaptation for processing in memory. His broader interests include memory system architecture, accelerator-based computing, and machine learning. Nuwan holds an M.S. and a Ph.D. in Electrical Engineering from Stanford University and a B.S. from the University of Southern California. He is an inventor of over 70 US patents, an author of over 30 peer-reviewed publications, and a Senior Member of the IEEE. Prior to AMD, Nuwan was a processor architect at Nvidia Corp. and at Stream Processors, Inc.

Author:

Simone Bertolazzi

Principal Analyst, Memory
Yole Group

Simone Bertolazzi, PhD, is a Senior Technology & Market Analyst, Memory, at Yole Intelligence, part of Yole Group, working with the Semiconductor, Memory & Computing division. As a member of Yole's memory team, he contributes on a day-to-day basis to the analysis of memory markets and technologies, their related materials, device architectures, and fabrication processes. Simone obtained a PhD in physics in 2015 from École Polytechnique Fédérale de Lausanne (Switzerland) and a double M.A.Sc. degree from Polytechnique de Montréal (Canada) and Politecnico di Milano (Italy), graduating cum laude.

Oracle AI Vector Search enables enterprises to leverage their own business data to build cutting-edge generative AI solutions. AI vectors are data structures that encode the key features, or essence, of unstructured entities such as images or documents. The more similar two entities are, the shorter the mathematical distance between their corresponding AI vectors. With AI Vector Search, Oracle Database is introducing a new vector datatype, new vector indexes (in-memory neighbor graph indexes and neighbor partitioned indexes), and new vector SQL operators for highly efficient and powerful similarity search queries. Oracle AI Vector Search enables applications to combine their business data with large language models (LLMs) using a technique called Retrieval-Augmented Generation (RAG) to deliver amazingly accurate responses to natural language questions. With AI Vector Search in Oracle Database, users can easily build AI applications that combine relational searches with similarity search, without requiring data movement to a separate vector database and without any loss of security, data integrity, consistency, or performance. At the heart of this are the hardware and system requirements needed to facilitate scale-up and scale-out AI vector search.
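
For readers unfamiliar with the retrieval step that RAG places in front of an LLM, the sketch below shows the core idea in generic Python: embed the documents and the question, rank by cosine similarity, and assemble a grounded prompt. It is deliberately not Oracle's SQL interface; the `embed` and `generate` helpers are hypothetical placeholders for a real embedding model and LLM.

```python
# Generic illustration of the RAG pattern -- not Oracle AI Vector Search syntax.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for an embedding model: a deterministic unit vector."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

documents = [
    "Quarterly revenue grew 12% driven by cloud services.",
    "The new benefits policy takes effect in January.",
    "Vector indexes speed up approximate nearest-neighbor search.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents whose embeddings are closest (cosine) to the query."""
    q = embed(question)
    scores = doc_vectors @ q                 # cosine similarity: all vectors are unit-norm
    return [documents[i] for i in np.argsort(-scores)[:k]]

question = "How do vector indexes help search?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# response = generate(prompt)               # hand the grounded prompt to an LLM
```

In the database-resident approach the abstract describes, this retrieval step runs inside the engine against the vector datatype and indexes, alongside the relational data, rather than in application code as sketched here.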

Emerging Memory Innovations
Systems Infrastructure/Architecture

Author:

Tirthankar Lahiri

SVP, Data & In-Memory Technologies
Oracle

Tirthankar Lahiri is Vice President of the Data and In-Memory Technologies group for Oracle Database and is responsible for the Oracle Database Engine (including Database In-Memory, Data and Indexes, Space Management, Transactions, and the Database File System), the Oracle TimesTen In-Memory Database, and Oracle NoSQL Database. Tirthankar has 22 years of experience in the database industry and has worked extensively in a variety of areas including manageability, performance, scalability, high availability, caching, distributed concurrency control, in-memory data management, and NoSQL architectures. He has 27 issued patents and several pending patents in these areas. Tirthankar has a B.Tech in Computer Science from the Indian Institute of Technology (Kharagpur) and an MS in Electrical Engineering from Stanford University.

Data Movement/Demands
Systems Infrastructure/Architecture
AI/ML Compute
Moderator

Author:

Mahesh Wagh

Senior Fellow & Server System Architect
AMD

Mahesh Wagh is an AMD Senior Fellow and Server System Architect in the AMD Datacenter System Architecture and Engineering team, developing world-class products and solutions around EPYC processors.

Prior to joining AMD, Mahesh was a Senior Principal Engineer at Intel Corporation, focusing on IO and SoC architecture and related technology developments. He has broad experience in chipset and IO architecture, design, and validation on both server and client platforms.

Some of Mahesh's significant achievements include enhancements to the PCI Express architecture and specification, leading CPU IO domain architecture and IO IP architecture and interfaces, and leading AMD's Compute Express Link (CXL) efforts.

Speakers

Author:

Paul Crumley

Senior Technical Staff Member
IBM Research

Paul G Crumley, a Senior Technical Staff Member at IBM Research, enjoys creating systems to solve problems beyond the reach of current technology.

 

Paul’s current project integrates secure, compliant AI capabilities with enterprise Hybrid Cloud allowing clients to extract new business value from their data.

 

Paul’s previous work includes the design and construction of distributed and high-performance computing systems at CMU, Transarc, and IBM Research. Projects include the Andrew Project at CMU, ASCI White, IBM Global Storage Architecture, Blue Gene supercomputers, IBM Cloud, and IBM Cognitive Systems. Paul has managed data centers and brings first-hand knowledge of these environments, combined with experience in automation and robustness, to the design of AI for Hybrid Cloud infrastructure.

Author:

Debendra Das Sharma

TTF Co-Chair: CXL Consortium & Senior Fellow: Intel
CXL Consortium

Debendra Das Sharma (Senior Member, IEEE) was born in Odisha, India, in 1967. He received the B.Tech. degree (Hons.) in computer science and engineering from IIT Kharagpur, Kharagpur, India, in 1989, and the Ph.D. degree in computer systems engineering from the University of Massachusetts, Amherst, MA, USA, in 1995. He joined Hewlett-Packard, Roseville, CA, USA, in 1994, and Intel, Santa Clara, CA, USA, in 2001. He is currently a Senior Fellow with Intel, responsible for delivering Intel-wide critical interconnect technologies in Peripheral Component Interconnect Express (PCI Express), Compute Express Link (CXL), Universal Chiplet Interconnect Express (UCIe), coherency interconnect, multi-chip package interconnect, and rack-scale architecture. He has been leading the development of PCI Express, CXL, and UCIe inside Intel as well as across the industry since their inception, and he holds 160+ U.S. patents and more than 400 patents worldwide. Dr. Das Sharma was awarded the Distinguished Alumnus Award by IIT in 2019, the 2021 IEEE Region 6 Engineer of the Year Award, the PCI-SIG Lifetime Contribution Award in 2022, and the 2022 IEEE CAS Industrial Pioneer Award. He is currently the Chair of the UCIe Board, a Director of the PCI-SIG Board, and the Chair of the CXL Board.

Author:

Manoj Wadekar

AI Systems Technologist
Meta

Compute performance demand has been growing exponentially in recent years, and with the advent of generative AI, it is growing even faster. The end of Moore's Law and the memory wall (bandwidth and capacity) are the main performance bottlenecks. The chiplet system-in-package (SiP) is the industry's solution to these bottlenecks. Silicon interposers are the industry's main technology for connecting chiplets in SiPs, but they introduce several new bottlenecks. The largest interposer going to production is 2,700 mm², roughly one quarter the size of the largest standard package substrate. A SiP built on a silicon interposer is therefore limited in how many compute and memory chiplets it can hold, and thus in performance.
This presentation introduces the Universal Memory Interface (UMI), a high-bandwidth, efficient die-to-die (D2D) connectivity technology between compute and memory chiplets. A UMI PHY on standard packaging provides bandwidth and power comparable to D2D PHYs on silicon interposers, enabling the creation of the large, powerful SiPs required to address Gen AI applications.
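
To make the packaging-area argument concrete, the back-of-the-envelope sketch below compares how many chiplets fit within the two budgets mentioned above. The 2,700 mm² interposer figure and the ~4x substrate ratio come from the abstract; the individual die areas, and the simplification that silicon area is the only constraint (no D2D routing or keep-out margins), are hypothetical.

```python
# Back-of-the-envelope area budget, using the figures quoted in the abstract
# plus hypothetical die sizes. Real floorplans also budget for D2D routing,
# keep-out zones, and power delivery, so these counts are upper bounds.
INTERPOSER_MM2 = 2700                    # largest interposer in production (per abstract)
SUBSTRATE_MM2 = 4 * INTERPOSER_MM2       # largest standard package substrate (~4x)

COMPUTE_DIE_MM2 = 800                    # assumed near-reticle-limit compute chiplet
MEMORY_DIE_MM2 = 110                     # assumed HBM-class memory stack footprint

def memory_budget(total_mm2: float, compute_dies: int) -> int:
    """Memory chiplets that fit alongside `compute_dies` within `total_mm2`."""
    leftover = total_mm2 - compute_dies * COMPUTE_DIE_MM2
    return max(0, int(leftover // MEMORY_DIE_MM2))

for name, area in [("silicon interposer", INTERPOSER_MM2),
                   ("standard substrate", SUBSTRATE_MM2)]:
    print(f"{name}: 2 compute dies + up to {memory_budget(area, 2)} memory stacks")
```

The exact counts are not the point; what matters is that the substrate budget is several times larger, and that headroom is what a D2D PHY on standard packaging, such as UMI, aims to exploit.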

Emerging Memory Innovations
Interconnects

Author:

Ramin Farjadrad

Co-Founder & CEO
Eliyan

Ramin Farjadrad is the inventor of over 130 granted and pending patents in communications and networking. He has a successful track record of creating differentiating connectivity technologies adopted by the industry as international standards (two Ethernet standards at IEEE, one chiplet connectivity standard at OCP). Ramin co-founded Velio Communications, which led to a Rambus/LSI Logic acquisition, and Aquantia, which went public and was acquired by Marvell Technologies. Ramin earned his Ph.D. in EE from Stanford.

Shell Upstream has been processing large subsurface datasets for multiple decades, driving significant business value. Many of the state-of-the-art algorithms for this work have been developed using deep domain knowledge and have benefited from hardware technology improvements over the years. However, as datasets get bigger and algorithms become even more complex, the demand for more efficient processing is ever-growing. This talk will focus on the memory and data management challenges of a variety of traditional HPC workflows in the energy industry. It will also cover the unique challenges of accelerating modern AI-based workflows, which require new innovations.

AI/ML Compute
Enterprise Workloads
HPC

Author:

Dr. Vibhor Aggarwal

Manager: Digital & Scientific HPC
Shell

Vibhor is an R&D leader with 14 years of experience in HPC software, scientific visualization, cloud computing, and AI technologies. He and his team at Shell currently work on optimizing HPC software for simulations, large-scale and generative AI, and combinations of physics and AI models, and on developing platforms and products for HPC-AI solutions as well as emerging HPC areas for the energy transition, at the forefront of digital innovation. He has two patents and several research publications. Vibhor has a BEng in Computer Engineering from the University of Delhi and a PhD in Engineering from the University of Warwick.
