Last year, especially over the holidays, I interacted with a lot more people who asked me, "Oh, you work with AI? What do you think about X?", where X is usually some popular AI topic. People have most frequently asked me about ChatGPT, GPT-3/GPT-whatever, diffusion models (though they often don't know they're called that), and Quantum AI.
It's kind of funny, because I consider myself an RL researcher (and none of those things I listed are RL), but most of my publications have been on discrete optimization/neural-guided search/symbolic regression related things, which are often applied to a control context. I guess that could be considered kind of RL? Anyway, nobody ever asks about RL anymore! I guess it's just not as cool as generative AI right now, even though RLHF (Reinforcement Learning from Human Feedback) is making a strong appearance in tuning things like ChatGPT to match human preferences.
Anyway (and I know this is a problem for all of science), the general public seems quite disconnected from the reality of these advances in AI. First of all, they're not "AI" in the way most people think of "AI." There are still many, many people out there who hear the term "AI" and immediately think of something like the Terminator. There are also those who think that Boston Dynamics (see video below) uses AI. (The latest I've heard from the rumor mill is that they are gradually incorporating more ML/AI techniques, but for years now they have primarily used classical robotics techniques.)
It's important to note that these models (while technically AI, since they're built using machine learning and neural networks, which are considered subsets of AI) are exceptionally narrow. What I mean by this is that you can't ask ChatGPT to drive a car for you, or even to generate a video. Similarly, you can't ask a self-driving car to generate a piece of music in the style of Bach for you.
It's been my experience that people don't understand this, and I think it's a pretty important fact. I'm not blaming anyone for this, as it's nobody's fault, but perhaps it would be helpful if every news article about AI included a blurb describing this fact.
I'm a bit frustrated with people who hear terms like "Quantum AI" and immediately assume it must be the next big thing. No disrespect to Quantum AI researchers, but from the papers I've seen (maybe I've missed some critical result), Quantum AI is barely keeping pace with non-quantum AI. This will definitely change as quantum computers and the tooling around them improve, but still. Whatever hype articles are being written about quantum AI need to stop, because they're misleading people into thinking that quantum AI is just around the corner and that it's going to turn the world upside down. Come at me, quantum AI horde.
People also have a responsibility to investigate a topic on their own when it seems potentially important. I'm not saying people have to become experts on the topics they're reading about, but if you're reading about quantum AI, for example, and you think it sounds important but don't know what a quantum computer is, maybe go read up quickly on what it is. That's all I'm saying.
Diffusion models and GPT and transformers are legit awesome. It’s incredible and often mind-blowing what has been accomplished with these. The artwork generated by diffusion models is amazing, and all versions of GPT have been extremely impressive in their ability to generate novel, human-sounding text.
Case in point, here’s an excerpt from me chatting (ha, ha) with ChatGPT:
Me: Can you write me a story about a quantum unicorn?
ChatGPT: Sure! Here is a short story about a quantum unicorn:
Once upon a time, in a far-off magical land, there lived a rare and majestic creature known as a quantum unicorn. This unicorn was unlike any other, for it had the ability to exist in multiple places at the same time. It could be found prancing through the enchanted forests, soaring through the clouds, and even swimming through the oceans, all at the same time.
The quantum unicorn was a being of great power and wisdom, and it was said that anyone who was fortunate enough to catch a glimpse of it would be blessed with good fortune and prosperity. Many brave knights and sorcerers tried to capture the quantum unicorn, hoping to harness its incredible abilities for their own gain.
But the quantum unicorn was not easily caught. It was a master of quantum mechanics, and could easily manipulate the fabric of reality to evade its pursuers. It would pop in and out of existence, appearing and disappearing at will, always one step ahead of those who sought to capture it.
Despite its evasive nature, the quantum unicorn was a kind and benevolent creature, and it used its powers to help those in need. It would heal the sick and injured, and bring hope and joy to those who were feeling down.
The legend of the quantum unicorn spread far and wide, and it became the stuff of folklore and myth. But to those who were lucky enough to see it with their own eyes, the quantum unicorn was a very real and wondrous being, one that would be remembered for generations to come.
Mind. Blown.
ChatGPT was even able to remember the math problem I had asked it earlier in the conversation, although it got the answer wrong. I’d asked for the square root of 289, which is 17, and it told me the answer was 17.14415. Regardless, still impressive.
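(For anyone who wants to double-check the arithmetic rather than trust a language model, here's a tiny Python sanity check; this is just plain math, nothing to do with ChatGPT itself.)

```python
import math

# 289 is a perfect square, so its square root is exactly 17.
# math.isqrt gives the integer square root, avoiding floating-point noise.
root = math.isqrt(289)
print(root)  # 17
assert root * root == 289
```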
Even GPT, diffusion models, and other generative AI (although everyone agrees they're very impressive) somehow manage to be a divisive topic in conversation. There are those who are convinced that the essence of human creativity has been successfully captured in an algorithm, and that we are now seeing the results. Conversely, some people think that human creativity will never be captured by any algorithm, and that anything generated by a machine will always be inferior to what a person creates.
Personally, I don't think we've yet captured all of human creativity in an algorithm, but I believe we will one day. Why wouldn't we?
Despite what it may sound like here, I am really grateful for the interest people show in my work and I welcome the opportunity to help deepen a layperson’s knowledge of some of these most popular AI systems and techniques. Educating people on how these things work, while challenging, is a worthwhile endeavor. Plus, sometimes it is just fun to have a philosophical discussion with someone about AI, whether human creativity has been captured, if true general AI will come, etc. etc.
This post is really a ramble. Originally I was going to focus on my experiences with people asking about AI, but somewhere along the line it turned into me dumping on quantum AI (I still think everything I said needed to be said) and complaining about people.
Subscribe for more pointless rambling.
Maybe I’ll surprise us all and write something good one day. ✌️