
AI Trends 2024: Machine Learning & Deep Learning with Thomas Dietterich – 666



Today we continue our AI Trends 2024 series with a conversation with Thomas Dietterich, Distinguished Professor Emeritus at Oregon State University. As you might expect, large language models figured prominently in our conversation, and we covered a wide array of papers and use cases exploring current research into topics such as monolithic vs. modular architectures, hallucinations, uncertainty quantification (UQ), and the use of RAG as a kind of memory module for LLMs. Finally, don't miss Tom's predictions for the year ahead, as well as his words of encouragement for those new to the field.
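
For readers new to the UQ techniques discussed in the episode, here is a minimal sketch of split conformal prediction, the distribution-free method introduced in the tutorial linked in the resources below. It is illustrative only: the regressor `model`, the calibration set `(X_cal, y_cal)`, and the function name are assumptions for the example, not anything from the episode or the papers.

```python
import numpy as np

# Minimal sketch of split conformal prediction for regression.
# Assumes a fitted regressor `model` exposing .predict(X); all names
# here are illustrative placeholders.

def conformal_interval(model, X_cal, y_cal, X_new, alpha=0.1):
    """Return prediction intervals with >= 1 - alpha marginal coverage."""
    # Nonconformity scores on a held-out calibration set: absolute residuals.
    scores = np.abs(y_cal - model.predict(X_cal))
    n = len(scores)
    # Conformal quantile with the finite-sample (n + 1) correction.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    preds = model.predict(X_new)
    return preds - q, preds + q  # symmetric interval around each prediction
```

The appeal of this approach, as discussed in the episode, is that the coverage guarantee holds without distributional assumptions on the model's errors, requiring only exchangeability of the calibration and test data.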

🔔 Subscribe to our channel for more great content just like this:

🗣️ CONNECT WITH US!
===============================
Subscribe to the TWIML AI Podcast:
Join our Slack Community:
Subscribe to our newsletter:
Want to get in touch? Send us a message:

📖 CHAPTERS
===============================
00:00 – Introduction
02:46 – Sparks of artificial general intelligence
04:51 – Embers of autoregression
10:03 – Influence of LLMs in the field
12:05 – Dissociating language and thought in large language models
16:04 – Future of LLMs: monolithic vs. modular architecture
18:56 – Uncertainty quantification
21:57 – Sources of uncertainty in machine learning – A statisticians’ view
25:29 – A gentle introduction to conformal prediction and distribution-free uncertainty quantification
27:37 – What uncertainties do we need in Bayesian deep learning for computer vision?
30:46 – DEUP: Direct Epistemic Uncertainty Prediction
34:21 – Uncertainty quantification as one solution in tackling hallucinations
37:57 – Survey of hallucination in natural language generation
38:36 – Cognitive mirage
39:26 – A stitch in time saves nine
41:00 – How to quantify uncertainty in LLMs
42:05 – Language models (mostly) know what they know
44:17 – SelfCheckGPT: zero-resource black-box hallucination detection
45:30 – BARTScore: evaluating generated text as text generation
46:11 – LM-Polygraph: uncertainty estimation for language models
48:56 – A stitch in time saves nine
51:29 – The internal state of an LLM knows when it’s lying
53:02 – Open-Set Recognition: a Good Closed-Set Classifier is All You Need?
54:01 – Familiarity hypothesis
55:21 – How soon will the research shape the use of LLMs?
59:42 – Debate on whether emergent properties are real
01:02:28 – Exciting opportunities in ML and DL
01:04:01 – RAG as memory modules to LLMs
01:07:05 – Predictions

🔗 LINKS & RESOURCES
===============================
Sparks of Artificial General Intelligence: Early experiments with GPT-4 –
Embers of Autoregression: Understanding Large Language Models Through the Problem They are Trained to Solve –
Dissociating language and thought in large language models –
A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification –
What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? –
DEUP: Direct Epistemic Uncertainty Prediction –
Cognitive Mirage: A Review of Hallucinations in Large Language Models –
A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation –
Language Models (Mostly) Know What They Know –
SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models –
LM-Polygraph: Uncertainty Estimation for Language Models –
The Internal State of an LLM Knows When It’s Lying –
Open-Set Recognition: a Good Closed-Set Classifier is All You Need? –
What Does it Mean for a Machine to “Understand”? with Thomas G. Dietterich – 315

📸 Camera:
🎙️Microphone:
🚦Lights:
🎛️ Audio Interface:
🎚️ Stream Deck:
