See Previous Events for presentations from past events.
September 2020
Kazuhm: Expanding an HPC compute ecosystem by activating and unifying “secondary use” compute and storage from campus IT assets
PRESENTATION download: HERE
We demonstrate a simple technology that enables on-demand compute and storage for research and educational purposes, in support of:
- Edge/decentralized compute environments
- Data preparation or analytics that do not require a full HPC solution
This technology was enabled by Kazuhm's advances in container orchestration, which allow cross-OS/device-type container orchestration and the ability to partition resources on connected nodes. We will discuss use cases and applications of an on-demand compute/storage model using campus assets,…
Find out more »
NEC Workshop: Using AI/ML with NEC SX-Aurora TSUBASA
Accelerate AI/ML algorithms based on statistical machine learning, NumPy, and scikit-learn using the NEC Vector Engine processor. This hands-on, instructor-led workshop will focus on executing code that uses the scikit-learn and NumPy libraries on the NEC vector processor. Participants can experience coding AI/ML algorithms on NEC's pure vector processor using a variety of programs. The workshop will cover a wide variety of AI/ML algorithms, such as Decision Tree, Logistic Regression, SVD, K-Means, and LDA. The target applications will showcase performance acceleration…
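As an illustration of the style of code the workshop targets (this is a minimal sketch on synthetic data, not the workshop's actual material), here is a standard scikit-learn K-Means run; on the SX-Aurora, NEC's Frovedis library exposes a similar scikit-learn-like interface:

```python
# Illustrative sketch, not the workshop's material: the style of
# scikit-learn code the workshop accelerates on the Vector Engine.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic data: two well-separated 2-D blobs, 100 points each
data = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(100, 2)),
    rng.normal(loc=5.0, scale=0.5, size=(100, 2)),
])

# Cluster into two groups; each blob should become one cluster
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(sorted(np.bincount(model.labels_).tolist()))  # → [100, 100]
```

Because the API is unchanged, the same code pattern carries over when the backing library is swapped for a vector-accelerated implementation.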
Find out more »
October 2020
Technology Forum with Bright Computing: Modernizing High Performance Computing Systems with Auto-Scaler Technology to Improve Performance and Throughput
PRESENTATION download: HERE
https://www.youtube.com/watch?v=ZLrrWWq6M-Y&t=1s
Join us to learn how Bright Computing has developed a new "auto-scaling" technology for HPC clusters that continuously optimizes the cluster configuration to match the workload. The auto-scaler works with mixed batch-scheduled and Kubernetes clusters and resizes them by dynamically reassigning servers between the two, based on workload and configured policies. The talk will explain how this technology brings a new level of utility and functionality to HPC systems. Bright's technology is being deployed…
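The reassignment idea the talk describes can be sketched abstractly. The toy policy below (hypothetical names, in no way Bright's actual API) moves one node toward whichever pool has more pending work per node:

```python
# Toy sketch of a batch/Kubernetes rebalancing policy.
# All names here are hypothetical illustrations, not Bright's API.
def rebalance(batch_pending, k8s_pending, batch_nodes, k8s_nodes, min_nodes=1):
    """Return new (batch_nodes, k8s_nodes) pool sizes, shifting one
    node toward the pool with more pending jobs per node."""
    batch_load = batch_pending / max(batch_nodes, 1)
    k8s_load = k8s_pending / max(k8s_nodes, 1)
    if batch_load > k8s_load and k8s_nodes > min_nodes:
        return batch_nodes + 1, k8s_nodes - 1
    if k8s_load > batch_load and batch_nodes > min_nodes:
        return batch_nodes - 1, k8s_nodes + 1
    return batch_nodes, k8s_nodes

# Batch queue is backed up, Kubernetes pool is idle:
print(rebalance(batch_pending=40, k8s_pending=5, batch_nodes=4, k8s_nodes=6))
# → (5, 5): one node moves from the Kubernetes pool to the batch pool
```

A production auto-scaler would of course add hysteresis, drain running jobs before reassigning a node, and honor per-pool policies, but the core loop is this kind of load comparison.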
Find out more »
November 2020
Technology Forum with NVIDIA
NVIDIA Ampere GPU Architecture – A Giant Leap Forward for AI and HPC
Marc Hamilton, NVIDIA
PRESENTATION download: HERE
https://www.youtube.com/watch?v=p_DMLXCwWzo&t=64s
Around the world, a new breed of AI supercomputer is being deployed outside of traditional supercomputer centers. Two of the three fastest new supercomputers on the Top500 this year were installed at industrial companies, not supercomputer centers. The power of AI is increasingly being applied alongside traditional scientific simulation by scientists, researchers, and engineers working to solve the world's most…
Find out more »
April 2021
Technology Forum: Expanse Supercomputer for Industry
The Expanse Supercomputer, a Resource for University-Industry Collaborations
PRESENTATION download: HERE
https://www.youtube.com/watch?v=MiPTCwGyB7E&t=39s
The San Diego Supercomputer Center's (SDSC) newest supercomputer, Expanse, supports SDSC's vision of "Computing without Boundaries" by increasing the capacity and performance for thousands of users of batch-oriented and science gateway computing. Expanse provides new capabilities that will enable research increasingly dependent on heterogeneous and distributed resources composed into integrated and highly usable cyberinfrastructure. It also implements new technical capabilities such as Direct Liquid Cooling. SDSC has acquired…
Find out more »
May 2021
Heterogeneous Computing and Composable Architectures with Next-Gen Interconnects with GigaIO
Heterogeneous Computing and Composable Architectures with Next-Gen Interconnects
Speaker: Alan Benjamin, CEO, GigaIO
PRESENTATION download: HERE
https://www.youtube.com/watch?v=z3kdgBL0RUg&t=157s
The next step in taking full advantage of heterogeneous computing is the ability to use larger numbers and multiple types of accelerators, each optimized for a particular step in the workflow. In most accelerated workflows, data must often move to and from storage with each processing step. The result is slower-than-necessary application performance and overburdened networks. The answer is not just…
Find out more »
June 2021
Technology Forum: Increasing the Impact of High Resolution Topography Data with OpenTopography
Presentation found on SDSC's YouTube channel: TF with OpenTopography
High-resolution topography is a powerful tool for studying the Earth's surface, vegetation, and urban landscapes, with broad scientific, engineering, and educational applications. Over the past decade, there has been dramatic growth in the acquisition of these data for scientific, environmental, engineering, and planning purposes. In the US, the U.S. Geological Survey is undertaking the 3D Elevation Program (3DEP) to map the entire lower 48 states with lidar by 2023. The richness of…
Find out more »
September 2021
Technology Forum: SDSC Voyager – An Innovative Resource for AI & Machine Learning
Presentation found on SDSC's YouTube channel: TF with Voyager
Join us to learn about SDSC's most recent supercomputer award, the Voyager system. With an innovative system architecture uniquely optimized for deep learning (DL) operations and AI workloads, Voyager will provide an opportunity for researchers to explore and implement new deep learning techniques.
Amit Majumdar, Director for Data Enabled Scientific Computing, SDSC
Amit Majumdar is the Director of SDSC's Data Enabled Scientific Computing division and is an Associate Professor in the…
Find out more »
October 2021
Technology Forum with Janssen: Leveraging High-Performance Computing and Cloud Environments for the Analysis of Biobank-Scale Datasets
Presentation found on SDSC's YouTube channel: TF with Janssen
Some of the most exciting biological datasets in recent years have originated from large-scale biobanking efforts. These high-dimensional datasets include electronic medical records, imaging, and genomic profiles from hundreds of thousands of individuals, significantly increasing the power to understand the risk factors and genetic basis of disease. However, working with petabyte-scale datasets in an efficient and scalable manner is non-trivial and requires careful planning to generate, store, and analyze. In this…
Find out more »
February 2022
Technology Forum with Graphcore: Exploiting Parallelism in Large Scale Deep Learning Model Training: From Chips to Systems to Algorithms
REGISTER
We live in a world where hyperscale systems for machine intelligence are increasingly used to solve complex problems ranging from natural language processing and computer vision to molecular modeling, drug discovery, and recommendation systems. A convergence of breakthrough research in machine learning models and algorithms, increased access to cloud-scale hardware systems for research, and thriving software ecosystems is paving the way for an exponential increase in model sizes. Effective parallel processing and model decomposition techniques and…
Find out more »