Practical AI – Episode #247

The OpenAI debacle (a retrospective)

get Fully-Connected with Chris & Daniel


Daniel & Chris conduct a retrospective analysis of the recent OpenAI debacle in which CEO Sam Altman was sacked by the OpenAI board, only to return days later with a new supportive board. The events and people involved are discussed from start to finish along with the potential impact of these events on the AI industry.

Featuring

Sponsors

Traceroute – Listen and follow Season 3 of Traceroute starting November 2 on Apple, Spotify, or wherever you get your podcasts!

Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com

Fly.io – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.

Notes & Links


Chapters

1 00:08 Welcome to Practical AI
2 00:35 Sponsor: Traceroute
3 02:23 AI soap opera
4 04:18 Parallel journeys
5 05:51 OpenAI's origins
6 08:31 Microsoft's relationship
7 08:57 2 OpenAIs
8 12:35 Capped profit model
9 15:29 OpenAI's huge influence
10 16:56 AI supercomputers
11 17:59 Early exploration
12 19:39 Actually open AI / GPT-2
13 20:38 GPT-3 / Start-up vs non-profit
14 22:20 Public awareness of AI
15 23:55 Regulation vs innovation
16 25:41 Sam gets fired
17 27:13 Microsoft's stakes
18 28:29 Microsoft's offer
19 30:12 A CEO a day / The CEO list
20 30:52 Internal issues
21 31:39 What else is out there?
22 34:15 New board, new goals?
23 35:24 Employees' voices
24 35:57 Scary new AI
25 37:39 Q* hysteria
26 40:18 Rise of AI risk management
27 42:30 Wake up call to regulators
28 43:35 Notion around AGI
29 44:57 Advent of Gen AI
30 46:22 Outro

Transcript



Play the audio to listen along while you enjoy the transcript. 🎧

Well, welcome to another Fully Connected episode of the Practical AI podcast. My name is Daniel Whitenack, I am the founder and CEO at Prediction Guard, and I’m joined as always by my co-host, Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris?

Doing very well today, Daniel. It has been quite a past week or so.

It has been quite a week. We’ve had a couple – well, in the US people usually take off a couple of holiday days for Thanksgiving, but even leading up to that and during that there was all of the craziness of what I think will be remembered as a very unique Thanksgiving season, in the AI world at least…

Yes, it’s been a soap opera to say the least.

Yeah. I was trying to think up a good pun from a soap opera title, Days of Our Lives or something, but I couldn’t think up one for OpenAI. Days of our Artificial Lives, or I don’t know what it would be…

I don’t know, but it’s definitely – I mean, not even day by day, but at some point it’s hours by hours, radical changes along the way. And of course, we’re talking about the saga of OpenAI and all that happened, which is what we’re going to talk about today.

Yeah, yeah. I mean, I think it’s what we have to talk about. And I think rather than just us giving a few hot takes, which I hope that we will have, I think it would be good to kind of step back and kind of look at the history of OpenAI, how it came about, and the progression of OpenAI as an entity, and their offerings… Which kind of frames up some of, I think, the drama that we’ve seen over the past week, the week of Thanksgiving, November 2023.

It’s interesting, Chris - I was looking back, we had Wojciech Zaremba from OpenAI…

One of the founders.

…early on. Yeah, yeah, one of the founders. That was episode 14 of this podcast. We’re now on about 250, I think…

Somewhere in there.

Yeah, it seems like from those early stages till now we’ve been kind of going on our own journey in parallel with OpenAI, as they’ve had this amazing journey as a company or as an organization… And way back then when we were talking to Wojciech, we were talking about really reinforcement learning and robots. I don’t know if you remember, at that time they were doing these experiments with robotic arms, and they would have stuffed giraffes, and they would hit the robotic arms with the stuffed giraffes, which is kind of comical, but it was like a perturbation… They were trying to create robust reinforcement learning to control robotics. And that was kind of – I need to look back at the episode, I think that was basically what we had talked about back then. That was kind of the focus.

I remember being very interested in that episode. He’s such a smart person, holy cow… Because I had been leading up the first AI team at Honeywell at the time, and we were also doing robotic arm work within Honeywell’s business, using convolutional nets to see, and… It was early days compared to what we’re doing these days, but holy cow, this guy, he was so smart… And that was one of those episodes that’s really hung with me over the years.

[00:05:50.22] Yeah. And kind of stepping back and taking a wider view of OpenAI as a whole, I’m even looking at the transcript of that episode now, and this would have been, I guess, four or more years ago, Wojciech said “The goal of OpenAI is quite ambitious. It’s to figure out a way to build artificial general intelligence, or artificial intelligence, or to be more exact, how to build it in a way that it’s safe, or that we can control it.” And he says, “Let’s say figure out from a political perspective how to deploy it in a way that’s beneficial to humanity as a whole.” So that’s kind of one little clip from that episode. And if you look at OpenAI’s founding, if we just take a look at how it was founded and how it progressed to this point of the chaos of last week, it was really focused uniquely on this problem of creating an organization that would steward us towards Artificial General Intelligence in a way that was beneficial to humanity. And there were various people involved in that, various funding groups… We mentioned Wojciech, but of course Sam Altman, who has been in the news a lot… Elon Musk, Greg Brockman and others that were part of that original group when the organization was founded in December of 2015.

I want to explicitly point out, it was set up as a nonprofit at that point in time.

That’s correct.

Which somewhat changed, as we’ll talk about.

Yeah, which maybe was part of the tension that led to last week.

Indeed.

Yeah. So it kind of had this initial board… Sam Altman was part of that; of course, he was in the news a lot this last week… Prior to OpenAI, Sam was the president of Y Combinator from 2014 until 2019, and I know that there were some things in the news kind of trying to imply certain things about why he was let go, or fired, or left Y Combinator in 2019, and trying to tie those forward… I don’t know that that was totally coherent for me in terms of what it was… But anyway, that’s kind of his past in this startup venture-backed world; I think that would be the key thing to highlight there, is he’s coming from this sort of venture-backed startup world, IPO, raised a bunch of rounds of funding, and kind of big tech mindset, I guess.

Yeah. I agree with you. I think that was the very beginning, in my sense of it, at least, of kind of that tension… It’s a very different culture from a nonprofit effort, in terms of the way you run your business, and such as that.

One kind of key other commercial player in this would be Microsoft, who has invested at this point over $10 billion in OpenAI Global LLC. So that will maybe be an important distinction here in a minute… But Microsoft is a big player in this, which is why you might have seen Microsoft’s CEO making statements during the past week etc.

Correct. For whatever reason, kind of going to that point, because that is a structural concern of OpenAI, that is by almost any account a little bit bizarre. And even this podcast is almost as old as OpenAI. We’re not quite there… But in 2019 we were operating, when all this stuff was happening, and I remember it. And they’ve sort of created - they have the parent company, which is nonprofit, and the short of it is they have this LLC, which is a for-profit entity, which is a subsidiary of the nonprofit. But because of the way they’re operating now, all of the funding, the investment, everything has gone into the LLC, and yet you have this group of disconnected investors from the daily operations operating at the nonprofit level, above. So it’s two entities that were never really intended to operate together in a direct legal manner, or at least from an intent standpoint, and somehow the attorneys have made this all work so that it is legal.

[00:10:01.12] Yeah. When I was looking at key takeaways in terms of what we’re doing here, which is sort of a retrospective on this crazy week of events, one of the things - which is not really related to OpenAI, but if you’re in business, it was a key takeaway - is don’t create convoluted corporate structures. It’s not going to help anybody. So there you go, that’s a non-AI tip. If you’re in the process of structuring some complicated corporate thing, try to simplify it.

But yeah, you’re right, the start was nonprofit, and NPR even quoted this as kind of like - it’s almost like an anti big tech company. That’s how they, quote-unquote, referred to this nonprofit version of OpenAI when it started; that it would prioritize principles over profit. Again, the idea with the founding of this was it would be a way for the best AI researchers in the world to help steward this really disruptive and potentially harmful technology, in some respects, into the future in a way that it would benefit humanity. That was kind of the framing.

I think that really plays into what ended up happening, which developed over the years, is that they were doing two things. They were serving that mission, as they founded from 2015 on, but as we have all learned and talked about quite a lot over the years, it’s very, very, very expensive to create these large models. And so you get the sense of they were constantly fundraising, and they hit this point where they needed a serious infusion of cash to push forward where they were wanting to go… And I think Microsoft comes along and says “Well, we’ll give you an initial billion…” But that also happened at the same point where this new corporate structure evolved. And I think that was all tied together, as well as I can tell from reading many, many articles on the topic, to get that investment going.

And so they did that, they got the new thing, they got the initial infusion of a billion, and then the year after that they got 10 billion more from Microsoft. But by doing that, you had the driving forward on how fast can we get there, and be the leader, and do this, and that entire startup kind of culture contending with the “Our mission says we’re going to do this for humanity. And we’re going to do this safely”, and you can see they’re crashing together, and have been the last week or so.

Yeah, yeah. Specifically, what you’re referring to, Chris, is this transition from a nonprofit organization to a quote-unquote capped for-profit, which I had no idea even existed until –

The 100-time cap.

Yes, until we started talking about this a couple of years ago. But yes, a capped for-profit where the profit would be capped at 100 times any investment, which according to the numbers that you just told me about the investment, would be a significant profit regardless… But a capped for-profit. And according to OpenAI at the time, and I think what they’ve said, this really had to do with attracting talent and attracting investment at the levels that they would need to achieve this progress towards artificial general intelligence that would benefit humanity.
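To make that arithmetic concrete, here is a rough back-of-the-envelope sketch of how a 100x return cap works. The dollar amounts and the simple "overflow goes to the nonprofit" split are hypothetical illustrations, not the actual terms of OpenAI's investor agreements, which vary by round and aren't fully public.

```python
# Rough, hypothetical illustration of a capped-profit structure: an investor's
# returns are capped at a multiple of their investment, and anything beyond the
# cap flows back to the nonprofit parent. Numbers are made up for the example.

def split_returns(investment: float, gross_return: float, cap_multiple: float = 100.0):
    """Split a gross return between the investor and the nonprofit under a return cap."""
    cap = investment * cap_multiple               # the most the investor can ever receive
    to_investor = min(gross_return, cap)          # investor keeps returns up to the cap
    to_nonprofit = max(gross_return - cap, 0.0)   # any overflow goes to the nonprofit
    return to_investor, to_nonprofit

# Example: a $1B investment with a 100x cap tops out at $100B for the investor.
investor_share, nonprofit_share = split_returns(1e9, 250e9)
print(f"Investor receives ${investor_share / 1e9:.0f}B; nonprofit receives ${nonprofit_share / 1e9:.0f}B")
```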

Now, there’s two kind of key pieces here. The OpenAI Inc, which is the nonprofit - so it’s very confusing, and maybe slightly annoying to have to refer to these differently… But OpenAI Inc, the nonprofit, and we mentioned a second ago, Microsoft investing in OpenAI Global LLC, which is the capped for-profit…

[00:13:57.19] And what’s interesting is that, essentially, this created a scenario where the board, which had full control of the capped for-profit company, as a nonprofit, couldn’t have a board member that would have some sort of financial stake in the for-profit company… Which - all of that seems kind of convoluted, but let me say it again. So the board that’s controlling the OpenAI entities could not have members on it that would have financial stakes in the capped for-profit OpenAI Global LLC… Which means that Microsoft, for example, as a huge investor in OpenAI Global LLC, did not hold a board seat on OpenAI Inc. Which will play into kind of the timeline that we’ll talk about here in a second, of what’s happened over the past couple of weeks.

And one other thing to throw in on that is, though I am not an attorney, I’m fairly sure that the 100-times cap was one of the mechanisms by which they could make the for-profit fit into the nonprofit… Because pure for-profits, in theory, have essentially unlimited ability to generate profit. Nonprofits are not allowed to have profit - you have to put that money back into expenses. And the 100-times cap I think was something of a bridge. So I say that from a “I did not go to law school to learn this”, but I’m pretty sure it’s tied in there somehow.

Well, Chris, in this saga of what’s happened, we have this nonprofit originally, OpenAI Inc, that has spun out this for-profit OpenAI Global LLC, which has received a huge amount of funding from Microsoft… And as we all know over the past couple of years has really become the dominant force in the AI industry, with releases of all sorts of amazing technology and tools, and of course, most recently, ChatGPT.

So it might be worth just kind of giving people some context around this, of what happened over those years with OpenAI that made it such a driving force, which eventually led to – I mean, you could have a company that has a convoluted corporate structure, and it blows up, and the CEO gets fired, and no one hears about it in the news and our lives pretty much go on… Although it’s probably unfortunate for their lives. But here, this had an impact kind of on the whole AI industry, and I think will have an impact on the AI market moving forward, because OpenAI was such a dominant force in the industry.

So maybe we could go back and visit some of those milestones in the history that came along from those early days when we were talking about robots, and stuffed giraffes, to now, where we’ve got GPTs, I guess. So back in 2016 - so this would have been almost when we were starting the podcast - NVIDIA gifted the first DGX-1 supercomputer to OpenAI…

It’s quite a gift, for the day…

Yeah. So they were kind of early on into this wave of really powerful, cutting edge, GPU-powered supercomputers that could train larger and larger models… Which is one of the things that’s of course led to these very powerful foundation models that we’ve seen in recent years. I don’t know if you’ve got – they haven’t gifted you one and you have it in your garage or something, Chris?

No, my first DGX-1 - which was when that was all there was on the DGX line, the original one - I got at Honeywell, and we had it under a desk for a while, because we got it and then we had to figure out how to get it in the data center, and stuff… But yeah, that was – I remember thinking “Wow, I have an AI supercomputer under our desk here. This is amazing.” Anyway, a little side story…

[00:18:00.21] And in those kind of years after that, you see what maybe I would consider – and I know it’s hard to make generalizations like this, because there’s a lot going on at OpenAI even now… But in those early years I think you really saw this kind of exploration phase of setting up the right tooling, setting up the right compute, letting researchers explore various things, like the robotic stuff, like RL and computer vision things… And you saw things like the Universe software platform, OpenAI Gym, which I know I’ve used in workshops for reinforcement learning type of things… So you really saw this kind of wide range of exploration in those early years leading up to February 2019, when GPT-2 was announced. And this was the first kind of – from OpenAI at least, the first major language model, or foundation model, that gained a lot of attention for its ability to output really human-like coherent text. Of course, it was the first in the line leading up to the latest GPT models that we see now, like GPT-4. But that was in February of 2019… Still something that I think was basically just being looked at by people like us…

Yeah, episode 32 of our own podcast was covering that.

Oh, there you go. Episode 32. GPT-2, episode 32… It’s interesting now, Chris, when I’m helping people learn how to fine-tune a language model or something like that, I’ll often use GPT-2… And it’s interesting to see, because now this would be considered a very small model. And not only that, but it’s open, right? So OpenAI was, by its very nature and stated aims, going to be very open with its research, and models, and IP, and all that… So GPT-2 is on Hugging Face, and it’s a great model to use even now to kind of figure out how to fine-tune language models. But it’s just interesting to compare that to, for example, GPT-3.5, or GPT-4. I’ll never see those models in terms of downloading the weights and all of that.
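As a side note for readers following along: the openness Daniel mentions is easy to see in practice. Here is a minimal sketch of pulling the open GPT-2 weights from Hugging Face with the transformers library; it assumes you have transformers and torch installed, and uses the small base "gpt2" checkpoint.

```python
# Minimal sketch: load the open GPT-2 weights from Hugging Face and generate text.
# Requires: pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")        # the small ~124M-parameter checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("OpenAI was founded in December 2015 as", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Because the weights are local, this same checkpoint can be fine-tuned on your own
# data - something you can't do with GPT-3.5 or GPT-4, which are only reachable
# through OpenAI's hosted API.
```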

It became a different org.

Yes. And with their own reasons for doing that, and that’s their decision… But yeah, in 2020 then, so a year later, OpenAI announced GPT-3, which was a language model that was more extensively trained on this sort of huge corpus of content from the public Internet. A lot of scraped data and all that… And I think this is really where you started to see some pretty crazy outputs from these language models; what people might have thought not possible, they started to see this with GPT-3. This is also where you kind of see the shift within OpenAI from releasing GPT-2 as a model to releasing GPT-3 as an API, with a more gated release, a slower release… I was even reading in kind of the lead-up and debriefing from the chaos of last week that you could interpret some of this slower release of GPT-3 as an API and a product as really reflecting this tension that you’re seeing between a startup kind of mentality wanting to fail fast, release things fast, learn from things in public, and this kind of more shielded nonprofit, good-of-humanity side, that is wanting to do things maybe in a more slow way, that ensures safety, and releases things without harmful outputs, that sort of thing. So you see this kind of starting to really collide and come together, I think, around GPT-3.

[00:22:19.04] GPT-2 and GPT-3 were language models that caused a big stir in the AI world, but again, still not something the general public knew much about – unlike ChatGPT later, they weren’t something a lot of people outside the industry had heard of. I think even when they released DALL-E in 2021 - this was a kind of text-to-image model, where you could put in your prompt the astronaut riding the horse on the moon, and you could get that… And I think the public saw that as still kind of like a novelty, and that sort of thing. And that led up all the way to December 2022, of course, when ChatGPT was released, at least in free preview…

Which changed the world.

Yes. When we were talking, even going back to DALL-E, people were aware that there was this thing, and they saw all these crazy AI pictures… But if you had asked someone who wasn’t following this industry the way we do, most of them out there would have said “Oh, I’ve seen some of those photos, but I couldn’t tell you what the organization was at the time.” They didn’t know. That really changed with ChatGPT. The whole world woke up to this stuff.

Yeah. And we’ve been talking about OpenAI on this podcast pretty much non-stop since then, and everyone else has as well. Of course, ChatGPT isn’t the only thing going on in the AI world, and hopefully we’re representing that on this podcast… But it is certainly a driving force, and we’ve mentioned it a lot, and that’s why the events of last week caused so much stir.

It’s worth noting, still, things happened after ChatGPT, right? We had GPT-4, we had, I think, what you saw as a shift in public discourse from Sam Altman and Greg Brockman at OpenAI, really related to “Hey, we need to really provide recommendations for governance of super-intelligence, governance of artificial intelligence”, and also at the same time rapidly releasing new products as well. So you see this - again, this tension, like “Hey, we want to talk publicly about governance of super-intelligence and regulations around this, and at the same time we’re going to have our OpenAI Dev Days, and release four new offerings, which are going to blow your mind and immediately permeate all industries.” So you’ve got GPT Vision, and the GPTs, plural, which are kind of easier ways to create customized models and workflows, along with other things like their assistants, playground and API.

So again, you just see these two things kind of coexisting, where - I’ll use the words of NPR; they say “Two competing tribes within OpenAI: adherents to the serve-humanity-and-not-stakeholders credo, and those who subscribe to the more traditional Silicon Valley MO of using investor money to release consumer products into the world as rapidly as possible.” So you see this even in this past year, I think, in public discourse and in the release of products. And that leads us all the way to November 16th… So I don’t know, where were you on November 16th, Chris?

You know, we’re talking like this great historical moment… As we’re recording this, this is Sunday afternoon; this was a week and a half ago that we’re talking, on a Thursday. And so the whole thing happened in that week before Thanksgiving. And it was done in one week, basically from a Thursday to a Thursday, for all practical purposes; or Thursday to a Wednesday. And I was working, and I happened to look at the news, and the first thing I saw was OpenAI had fired Sam. And I was like –

[00:26:18.00] Which was crazy, because it was like right after his keynote…

I know. It was a day later. It was a day after his keynote.

Yeah, yeah. It was just mind-boggling. Yeah. So apparently, he got a text on Thursday night from one of the co-founders, asking him to join a Google Meet on Friday… And they had already kind of tapped the CTO, Mira Murati, as the next CEO… And on Friday, literally, Microsoft learns of Altman’s firing, “a minute” before the world does… Again–

So it was on Friday that we heard it, actually, not Thursday. But yes.

Yeah, exactly. So it happened – well it got into motion on Thursday, and then some of us heard about it on Friday… And apparently also Microsoft, which was just – so I knew about the kind of convoluted corporate structure and all of that, but it was just mind-boggling to me that Microsoft was not informed about this, given their investment in the for-profit entity…

Which I think they have 49% stake in, if I recall correctly.

Correct. Yeah, it’s something like that. I don’t remember the exact figure. But certainly they’ve invested over $10 billion, with plans to invest 10+ more billion dollars. And of course, they’ve integrated OpenAI, ChatGPT etc. into Azure, into Bing, into Microsoft Office 365… And so yeah, it just sort of boggles the mind that this happened in the way that it did. So then on the 17th OpenAI releases a statement that Sam Altman was ousted… And later that evening, Greg Brockman announces he’s quitting.

They got rid of him as the chair… So he was fired as chair, but then he turned around – he was still president, and he quit as president to show Sam Altman that he stood with him.

Okay. Well, that brings us to the point, Chris - Sam and Greg are out, and on that next Saturday, kind of going day by day, Microsoft releases a statement saying that they… And I don’t know if you saw the video, the Microsoft video, but essentially, they immediately offered to hire Sam Altman into Microsoft.

And Greg, and anybody that left OpenAI, as it would unroll.

Yes. And I think I could tell in those videos just the sort of lingering shock and disbelief in a lot of people… So Microsoft releases these statements… People, of course, start wondering, “Why did this happen? This had to be something really bad that Sam did. Why would this ever happen?” And there was some reporting and some statements that basically were kind of confusing and murky. They were saying it wasn’t like any violation of security or privacy practices, or kind of malfeasance on Sam’s part… But you kind of got this vague hand-waving of “Oh, he wasn’t completely forthright with the board”, and his communication didn’t allow them to properly make decisions… And so you get that kind of mix of stuff going on until November 19th, when Altman announces that he has been hired into this new research unit of Microsoft, and posts on Twitter/X “The mission continues.”

[00:30:10.27] And in the meantime, Mira was the first designated CEO… They had another CEO for another day on Sunday, and then a third one… They went through CEOs, one a day, for a while there.

Yes. I was on my Zoom calls. When I was hopping on I was asking people if they had been asked by OpenAI to be the next CEO yet… [laughter] Because it seemed like they were working their way down a list. I don’t know how far down the list I was. Kind of sad I wasn’t in that top 10… But they were definitely working their way down some type of list.

And I forget the name of the one that came after Mira. It’s escaping me right now while we’re talking. But the OpenAI employees, on their internal Slack, were using - I’ll gently say an FU, showing the middle finger, on their Slack. Apparently, the employees had had enough. By the time we got to mid-weekend, going from Saturday into Sunday, the employees started finding their voice, as we’ll hear about next.

And there were reports basically that up to 95% of the employees within OpenAI were going to depart OpenAI if some deal wasn’t struck to have Sam Altman return as CEO… Which obviously would be the end of OpenAI, at least as we know it. So I don’t know, I can definitely tell you that - and this is one thing I’ll highlight later on - all of these people who have relied on OpenAI as the sort of bulwark of the industry, and integrated this across their products, were really in a state of panic… Because it’s like, hey, OpenAI is still saying they’re gonna provide great support for all this stuff, but also you’re saying potentially 95% of your employees are leaving… So where does that leave all of these – so it was really kind of a time of reckoning, where it’s like “Hey, we’ve built a whole strategy around OpenAI’s products and models… So now what?” And the lack of resilience in relying on this kind of single family of models I think was really showing in that moment.

If only there were services that could help people find a variety of different models, and not be entirely dependent on a single family… *coughs* Prediction Guard – I didn’t say that. Sorry.

This is a point we’ll make later, but even a few weeks ago we had some comments, with the executive order, about a firming up of the market around these kinds of foundation models, because of the regulatory burdens that were starting to be put on them. And I think that this kind of opens up the field a little bit more, regardless of what has played out with OpenAI. I think people are really wrestling with the question of “Hey, what else is out there?” And of course, you’ve got amazing players in that space, from Mistral, to MosaicML, to Meta and what they’re doing with LLaMA 2… And people are really finding – of course, we’re at Prediction Guard helping people build with these models, but a lot of people are finding a lot of success with these models. And I got a number of messages during this chaos about “Hey, we weren’t thinking about trying open models or models outside of the GPT family prior to all of this craziness going on, but now we’re wrestling with that.” Not necessarily over the fence yet in terms of a strategy around that, but it’s definitely caused people to think a little bit more about this kind of market.

Yeah. Having backup through diversity, if you will, in terms of model selection is now going to be in the corporate consideration for any significant organization going forward. We’ve seen the chaos, and that will change the marketplace in general.

[00:34:12.23] Correct, correct. That brings us all the way to November 21st of 2023. So this would be Tuesday, I believe, when OpenAI releases a statement that they’ve come to a “deal in principle” with Sam Altman to return as CEO, with a completely new board, which is chaired by former Salesforce co-CEO, Bret Taylor. And so one way you could take this is that the sort of Silicon Valley, venture-backed world won. Like, the board is now chaired by a leader in that space, and so a big question mark kind of hangs over this… “What about that original vision/mission of the OpenAI nonprofit? Is that completely dead, or does it still exist?” I think that’s a real question to ask.

Yeah. If I was channeling the OpenAI marketing team, I would expect that they would say “Absolutely, that still exists. That’s an overriding mission.” But I would also suggest that the marketplace is probably not convinced of that at this point in time.

Yeah, for sure.

Before we go on, I just want to point out one thing… Another big, big retro consideration that will not go away quickly is the power of the employees’ voice and a unified stand in such a company – this was the biggest AI company in the world in the sense of mindshare, and what they’re doing in models… And the employees made this happen, when they said “You’re not going to have an organization if you continue down this path.” That made a big difference, and I think that’s another thing that we will see play out in organizations going forward.

Well, there’s one more piece of the mystery puzzle here, Chris, which is now a meme in and of itself, I would say…

It is…

So they struck the deal with Sam Altman, and then Wednesday, November 22nd, if I’m getting the date right, there was reporting that ahead of Sam Altman’s departure some researchers wrote a letter to the board of directors of OpenAI, telling them about this discovery or new model that was being worked on, called Q*, that they basically said was a threat to humanity. So this brought up new kinds of questions around whether Sam Altman was not taking this seriously, and that’s why he was fired, or did it play into this at all? And then of course, immediately, in addition to all of those questions, people started speculating “Well, what is Q*? And is it a threat to humanity?” and there the memes started across the internet.

Our extended family got together for Thanksgiving, and I make a point of not bringing up AI. I mean, we have a broad family, lots of interests, not in AI in general, or even technology in general… And this immediately came up as the first big topic. Everyone is scared about whether Q* is this threat to humanity. And it is not you and I and the rest of us AI folk talking about this, this is the general population. I was really quite startled to sit in the family gathering and have that become the primary conversation. I was not expecting it.

Of course, everybody’s going wild about this, just like when they were going wild about anticipating what is GPT-5, or GPT-4.5, or whatever the next thing from OpenAI is, there’s always going to be this level of hype around it… I think what we can kind of practically know is maybe based on what they’ve said, which is that – the report basically said that this was a model that can maybe solve math problems better than previous models, which has always been a sticking point for these generative models…

[00:38:13.19] And the Q in the name kind of hints at this having some sort of reinforcement learning aspect to it, because one of the main kinds of mechanisms within deep reinforcement learning is called Q-learning, which is kind of a mechanism by which an AI model or a policy model tries to predict the long-term return of making an action. So essentially planning, which I think you also forwarded me a LinkedIn post about…
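For readers who want to see what that mechanism looks like concretely, here is a tiny sketch of tabular Q-learning on a toy chain environment. Everything in it - the environment, the hyperparameters, the names - is illustrative only; nobody outside OpenAI knows what Q* actually is, and this is just the textbook update Daniel is describing.

```python
# Tiny tabular Q-learning sketch on a toy 5-state chain (purely illustrative -
# nothing here reflects whatever Q* actually is).
# Q[s, a] estimates the long-term return of taking action a in state s.
import numpy as np

n_states, n_actions = 5, 2              # states 0..4; actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.3   # learning rate, discount factor, exploration rate
rng = np.random.default_rng(0)

def step(state, action):
    """Move left/right along the chain; reward 1 only when the last state is reached."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward, next_state == n_states - 1

for _ in range(500):                     # episodes
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore at random
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge Q[s, a] toward reward + discounted best future value
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state

print(np.round(Q, 2))                    # "go right" (column 1) should dominate in every state
```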

Yeah. Yann LeCun.

I’ll read it. He weighed in, because… Keeping in mind, prior to him - this came out a couple of days ago; he put it out, at least on LinkedIn. He probably put it out on Twitter, too… But it was exactly to your point right there. He had been watching several days of this kind of hysteria about Q*, and people panicking… And I will note that there were some pretty crazy news articles about it along the way. What Dr. LeCun said was “Please ignore the deluge of complete nonsense about Q*. One of the main challenges to improve LLM reliability is to replace autoregressive token prediction with planning”, to your point, Daniel. “Pretty much every top lab”, and he names a few, “is working on that, and some have already published ideas and results. It’s likely that Q* is OpenAI’s attempt at planning… They pretty much hired Noam Brown to work on this.” And I think he was trying to get back to Practical AI, from his perspective…

Yeah. And I think this just represents a sort of wider rift in the AI research community as well, where you kind of have a race to dominate the market with new models kind of at odds with this really zealous promotion of like trying to prevent AI from advancing beyond our control… And so you see this playing out not just at OpenAI, but elsewhere. And I think that led into some of the Q* craziness.

Well, in terms of retrospective, as we kind of – you know, we’ve told the whole story, we’ve stepped back and looked at OpenAI… There’s definitely, from my perspective, some takeaways. Of course, I’m actively leading a company that is providing hopefully trustworthy LLM APIs in enterprise environments, that can be self-hosted… So my big takeaway, and I think what I was busy doing all week was answering people’s questions about “Hey, if I don’t have OpenAI, what is there?” And it turns out that there’s a lot. So it doesn’t necessarily have to be our APIs, but there’s a lot of options around using Enterprise models in a safe, secure environment, that can be deployed under your control… And yes, there’s kind of this balance to take around like “Hey, OpenAI’s APIs are really good, because they’re a managed service”, but they also have downsides. And I think you’ve seen this over time with other managed services, right? Like with running your own database. There’s a trade-off between relying on a – especially if it’s a one-off unique hosted service, that is a single point of failure, versus kind of hosting your own, or maybe having your data spread out with different databases across your infrastructure… You see some of that kind of infrastructure concern, and the concern around the resiliencies of these systems really playing out… And I think that we’ll see other companies rise to that as well. There’s going to be a new wave of these companies that play off of what has happened over the past week, to provide enterprise solutions.
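As a concrete illustration of that single-point-of-failure concern, here is a purely hypothetical sketch of a fallback wrapper across multiple model backends. The provider functions are stand-ins invented for the example - they aren't Prediction Guard's API, OpenAI's SDK, or any real service.

```python
# Purely hypothetical sketch of "backup through diversity": try a primary model
# backend and fall back to alternates if it fails. The provider functions below
# are invented stand-ins, not any real SDK or service.
from typing import Callable, List

def generate_with_fallback(prompt: str, providers: List[Callable[[str], str]]) -> str:
    """Try each provider in order and return the first successful completion."""
    errors = []
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:                  # a real system would catch narrower error types
            errors.append(f"{provider.__name__}: {exc}")
    raise RuntimeError("All providers failed: " + "; ".join(errors))

# Hypothetical stand-ins for a hosted API and a self-hosted open model.
def hosted_api_model(prompt: str) -> str:
    raise ConnectionError("hosted API unavailable")       # simulate an outage

def self_hosted_open_model(prompt: str) -> str:
    return f"(completion from a self-hosted model for: {prompt})"

print(generate_with_fallback("Summarize the OpenAI board saga.",
                             [hosted_api_model, self_hosted_open_model]))
```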

[00:42:03.27] Yeah. AI risk management as an industry field has been born. That’s what’s happened here.

AI isn’t just the data science team that is going and either using APIs, or in some cases creating their own models; you now have risk management as a formal corporate concept that everyone will be adopting, and will go through all the ranks of every organization. So an entire new industry has been born out of this.

Yes. Another point that I saw being made is - this is kind of a wake-up call to regulators as well. You have companies testifying before Congress to tell them how maybe they should be regulated… But this is kind of a wake-up call that hey, maybe these companies aren’t so good at regulating themselves after all… So maybe there needs to be a different way that we approach regulation of this technology. And you see other evidence of that, coincidentally around the same time. It seems that Meta has disbanded their responsible AI team… Now, I’m not making a comment on – I’m sure that they have other thought processes going on around that, but it is an indication that maybe some of what were good-faith efforts at this were, in practice, at odds with commercial pressures… So yeah, it’ll be interesting to see how that plays out on the regulatory side as well.

I agree. There’s another cultural - kind of writ-large cultural issue, as we’re doing a retrospective… If we go back just a few years on this podcast - and not very far, if you really think about it - when the subject of artificial general intelligence, AGI, would come up, we didn’t take it too seriously. It was one of those things that wasn’t – it wasn’t really practical AI for us at the time. It was one of those “Someday, maybe, we’ll kind of get there” things. And I think that what we’ve seen in a couple of different waves - we saw it with ChatGPT coming into being, and the capabilities, and the fact that it hit the public’s consciousness so intensely… And then the concern – no matter what Q* ends up being, regardless of what the outcome is, Thanksgiving dinners were actually filled with people with genuine concern…

Whether it’s general or not, it’s general in a certain way.

There you go. And the power of it, and what that would mean for their own lives, even if they had nothing to do with the AI industry. And so what we’ve seen change in the large is that the notion of artificial general intelligence is entirely legitimate, to ponder, to be concerned about, to either fear or be excited about, or maybe a bit of all of the above. But that’s a very different thing if we were to go back a couple of years; we had a very, very different perspective.

Well, with that, Chris, I think that’s a good perspective to kind of bring us to a close here. It’s been an interesting Thanksgiving week. And on these episodes, we normally also try to provide some learning opportunities for people. There’s one that I would like to highlight quickly, which I think will be a lot of fun… Some of you might have heard of Advent of Code, which is something that a lot of people do to learn new coding skills and try new things each holiday season… And I’m helping run an advent of generative AI hackathon with Intel. That’s happening December 5th through the 11th, so I would encourage you, if you haven’t been hands-on with these technologies and with these models, this is a great way to kind of get into that without spending a bunch of money, and learn from a lot of experts in the field, with access to really people that are working in this day to day. So check that out at adventofgenai.com. We’ll link that in the show notes, along with all of the many, many articles that have been written about OpenAI over the past week.

Well, interesting week, Chris… Who knows what we’ll be talking about next week, but I’m excited to do it.

You never know these days, I’ll tell you what. We’ll find out. Talk to you then.
