insideBIGDATA AI News Briefs – 11/22/2023

Welcome to insideBIGDATA AI News Briefs, our timely feature bringing you the latest industry insights and perspectives on the field of AI, including deep learning, large language models, generative AI, and transformers. We’re working tirelessly to dig up the most timely and curious tidbits underlying the day’s most popular technologies. We know this field is advancing rapidly, and we want to bring you a regular resource to keep you informed and up to date.

Video Highlights: PyTorch 2.0 on the ROCm Platform

From the recent PyTorch Conference, we present a Lightning Talk: PyTorch 2.0 on the ROCm Platform by Douglas Lehr, Principal Engineer at AMD. Douglas discusses the current state of PyTorch on the ROCm platform, including efforts to achieve day-0 support for Triton on PyTorch 2.0, as well as performance improvements, work with Hugging Face, and other areas.

insideBIGDATA AI News Briefs – 9/28/2023

insideBIGDATA AI News Briefs – 9/22/2023

insideBIGDATA AI News Briefs – 9/13/2023

KDD 2023 – August 6-10 – Long Beach, CA

Sponsored by the ACM, the 29th SIGKDD Conference on Knowledge Discovery and Data Mining is coming to Long Beach, CA on August 6-10. The annual conference is the premier international forum for data mining researchers and practitioners from academia, industry, and government to share their ideas, research results, and experiences. The KDD conferences feature keynote presentations, oral paper presentations, poster sessions, workshops, tutorials, panels, exhibits, demonstrations, and the KDD Cup competition.

insideBIGDATA AI News Briefs – 7/27/2023

Video Highlights: Generative AI with Large Language Models

Large language models (LLMs) like GPT-4 are transforming the world in general, and the field of data science in particular, at an unprecedented pace. This two-hour training video presentation by Jon Krohn, Co-Founder and Chief Data Scientist at the machine learning company Nebula, introduces deep learning transformer architectures, including LLMs.

Transfer Learning in Computer Vision

In this contributed article, Ihar Rubanau, Senior Software Developer at Sigma Software Group, discusses how transfer learning has become a popular technique in computer vision, allowing deep neural networks to be trained with limited data by leveraging pre-trained models. The article reviews recent advances in transfer learning for computer vision tasks, including image classification, object detection, and semantic segmentation. It discusses the main approaches to transfer learning, such as fine-tuning, feature extraction, and domain adaptation, and highlights the challenges and limitations of each. It also provides an overview of the popular pre-trained models and datasets used for transfer learning and discusses future directions and opportunities for research in this area.
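The feature-extraction approach mentioned above can be sketched in a few lines: a pre-trained backbone is frozen, and only a small new head is trained on its outputs. The pure-Python sketch below is a minimal illustration under stated assumptions — the `backbone` function and the toy data are hypothetical stand-ins for a real pre-trained network and dataset, not the article's implementation:

```python
import math
import random

# Hypothetical stand-in for a frozen, pre-trained backbone: it maps raw
# inputs to a fixed feature vector and is never updated during training.
def backbone(x):
    return [x[0] + x[1], x[0] - x[1]]  # pretend these are learned features

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy binary-classification data: label is 1 when the coordinate sum is positive.
random.seed(0)
data = []
for _ in range(200):
    x = (random.uniform(-1, 1), random.uniform(-1, 1))
    data.append((x, 1 if x[0] + x[1] > 0 else 0))

# Only the new head's parameters are trained (feature extraction).
w = [0.0, 0.0]
b = 0.0
lr = 0.5

for _ in range(100):            # plain SGD on the head only
    for x, y in data:
        f = backbone(x)         # frozen features; no gradient flows here
        p = sigmoid(w[0] * f[0] + w[1] * f[1] + b)
        g = p - y               # dLoss/dLogit for log loss
        w[0] -= lr * g * f[0]
        w[1] -= lr * g * f[1]
        b -= lr * g

def predict(x):
    f = backbone(x)
    return sigmoid(w[0] * f[0] + w[1] * f[1] + b) > 0.5

accuracy = sum(predict(x) == (y == 1) for x, y in data) / len(data)
```

In a real setting the backbone would be a pre-trained network with its weights frozen, the head would be a new classification layer, and fine-tuning would differ only in also updating (some of) the backbone's weights at a lower learning rate.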

Video Highlights: Ultimate Guide To Scaling ML Models – Megatron-LM | ZeRO | DeepSpeed | Mixed Precision

In this video presentation, Aleksa Gordić explains what it takes to scale ML models up to trillions of parameters! He covers the fundamental ideas behind recent large ML models such as Meta’s OPT-175B, BigScience’s BLOOM-176B, EleutherAI’s GPT-NeoX-20B and GPT-J, OpenAI’s GPT-3, Google’s PaLM, and DeepMind’s Chinchilla and Gopher models.
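One of the techniques covered in the video, mixed-precision training, typically relies on dynamic loss scaling: the loss is multiplied by a scale factor before backpropagation so small fp16 gradients don't underflow, and the scale backs off whenever gradients overflow. The pure-Python class below is an illustrative sketch of that update rule, not a real training utility — the default constants mirror common defaults (e.g., in PyTorch's GradScaler), but the class itself is an assumption for demonstration:

```python
import math

class DynamicLossScaler:
    """Toy sketch of the dynamic loss-scaling logic used in mixed-precision
    training. Defaults mirror common GradScaler-style settings."""

    def __init__(self, init_scale=2.0 ** 16, growth_factor=2.0,
                 backoff_factor=0.5, growth_interval=2000):
        self.scale = init_scale
        self.growth_factor = growth_factor
        self.backoff_factor = backoff_factor
        self.growth_interval = growth_interval
        self._good_steps = 0

    def scale_loss(self, loss):
        # Multiply the loss so small gradients survive fp16's limited range.
        return loss * self.scale

    def unscale(self, grads):
        # Divide gradients back down before the optimizer step.
        return [g / self.scale for g in grads]

    def update(self, grads):
        # On overflow (inf/nan gradients): skip the step and shrink the scale.
        if any(math.isinf(g) or math.isnan(g) for g in grads):
            self.scale *= self.backoff_factor
            self._good_steps = 0
            return False  # caller should skip the optimizer step
        # After enough consecutive good steps, try a larger scale.
        self._good_steps += 1
        if self._good_steps >= self.growth_interval:
            self.scale *= self.growth_factor
            self._good_steps = 0
        return True

# Tiny usage example with illustrative, non-default constants.
scaler = DynamicLossScaler(init_scale=8.0, growth_interval=2)
ok = scaler.update([float("inf")])  # overflow detected: scale backs off to 4.0
```

The other systems mentioned (Megatron-LM, ZeRO, DeepSpeed) address a different axis of scaling — partitioning parameters, gradients, and optimizer state across devices — and are usually combined with mixed precision rather than replacing it.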