More than three years ago, this editor sat down with Sam Altman for a small event in San Francisco, shortly after he left his post as president of Y Combinator to become CEO of OpenAI, the artificial intelligence company he co-founded in 2015 with Elon Musk and others.
At the time, Altman described OpenAI's potential in language that struck some as odd. He said, for example, that the opportunity with artificial general intelligence — AI that can solve problems as well as a human — is so great that if OpenAI were able to crack it, the outfit could "maybe capture the light cone of all future value in the universe." He said the company "was going to have to not publish research" because it was so powerful. Asked whether OpenAI was guilty of fearmongering — Musk has repeatedly called for all organizations developing AI to be regulated — Altman spoke of the dangers of not thinking through the "societal consequences" when "you build something on an exponential curve."
The audience laughed at various points in the conversation, unsure how seriously to take Altman. No one is laughing now. While machines aren't yet as intelligent as people, the technology OpenAI has since released has taken many by surprise (Musk included), with some critics fearing it could be our undoing, especially with more sophisticated versions expected to arrive soon.
Indeed, although heavy users insist that it's not so smart, the ChatGPT model that OpenAI made available to the general public last week is so capable of answering questions like a person that professionals across a range of industries are trying to grapple with the implications. Educators, for example, wonder how they will distinguish original writing from the algorithm-generated essays they are bound to receive — essays that can also elude anti-plagiarism software.
Paul Kedrosky is not an educator per se. He's an economist, venture capitalist, and MIT fellow who calls himself a "frustrated normal with a penchant for thinking about risk and unintended consequences in complex systems." But he is among those suddenly worried about our collective future, tweeting yesterday: "[S]hame on OpenAI for launching this pocket nuke without restrictions into an unprepared society." Kedrosky wrote, "Obviously I think ChatGPT (and its ilk) should be withdrawn immediately. And, if ever reintroduced, only with strict restrictions."
We spoke with him yesterday about some of his concerns, and why he thinks OpenAI is driving what he considers the "most disruptive change the U.S. economy has seen in 100 years" — and not in a good way.
Our chat has been edited for length and clarity.
TC: ChatGPT was released last Wednesday. What triggered your reaction on Twitter?
PK: I’ve played around with these conversational UIs and AI services in the past, and this is obviously a huge step forward. What particularly disturbed me here is the casual brutality of it, with massive consequences for a host of different activities. It’s not just the obvious ones, like high school essay writing, but across pretty much any domain where there’s a grammar — [meaning] an organized way of expressing yourself. That could be software engineering, high school term papers, legal documents. All of them are easily eaten by this voracious beast and spat back out again with no compensation to whatever was used for training it.
I heard from a colleague at UCLA who told me they have no idea how to grade essays at the end of the current term, where they’re receiving hundreds per course and thousands per department, because they no longer have any idea what’s fraudulent and what’s not. So to do this so casually — as someone said to me earlier today — is reminiscent of the proverbial [ethical] white-hat hacker who finds a bug in a widely used product, then informs the developer before the broader public knows so that the developer can patch their product and we don’t have mass devastation and power grid outages. This is the opposite, where a virus has been released into the wild with no concern for the consequences.
I feel like it could eat the world.
Some might say, “Well, did you feel the same way when automation arrived in auto plants and autoworkers were put out of work?” Because this is kind of a broader phenomenon. But this is very different. These specific learning technologies are self-catalyzing; they’re learning from the requests. So the robots in a manufacturing plant, while disruptive and creating incredible economic consequences for the people working there, didn’t then turn around and start absorbing everything going on inside the factory, moving across sector by sector, whereas that’s not only what we can expect but what we should expect.
Musk left OpenAI in part over disagreements about the company’s direction, he said in 2019, and he has long talked about AI as an existential threat. But people scoffed that he didn’t know what he was talking about. Now we’re confronting this powerful technology and it’s not clear who steps in to address it.
I think it’s going to unfold in a bunch of places at once, most of which will look really clumsy, and people will [then] sneer because that’s what technologists do. But too bad, because we’ve walked into this by creating something with such consequence. So in the same way that the FTC required people who run blogs years ago [to make clear] they have affiliate links and make money from them, I think at a trivial level people are going to be forced to disclose that ‘We didn’t write any of this. It’s all machine-generated.’ [Editor’s note: OpenAI says it’s working on a way to “watermark” AI-generated content, along with other “provenance techniques.”]
I also think we’re going to see new energy around the ongoing lawsuit against Microsoft and OpenAI over copyright infringement in the context of training machine learning algorithms. I think there’s going to be a broader DMCA issue here with this service.
And I think there’s the potential for a [massive] lawsuit and possibly a settlement eventually with respect to the consequences of these services, which, you know, will probably take too long and not help enough people, but I don’t see how we don’t end up [in this place] with respect to these technologies.
What is the thinking at MIT?
Andy McAfee and his group there are more sanguine and hold the more orthodox view that whenever we see disruption, other opportunities get created, people are mobile, they move from place to place and from occupation to occupation, and we shouldn’t be so hidebound that we think this particular evolution of technology is the one around which we can’t mutate and migrate. And I think that’s broadly true.
But the lesson of the last five years in particular has been that these changes can take a long time. Free trade, for example, is one of these incredibly disruptive, economy-wide experiences, and we all told ourselves as economists that the economy would adapt and people in general would benefit from lower prices. What no one anticipated was that someone would organize all the angry people and elect Donald Trump. So there’s this idea that we can anticipate and predict what the consequences will be, but [we can’t].
You talked about high school and college essay writing. One of our kids has already asked — theoretically! — if it would be plagiarism to use ChatGPT to write a paper.
The point of writing an essay is to prove that you can think, so this short-circuits the process and defeats the purpose. Again, in terms of consequences and externalities, if we can’t let people have homework because we no longer know whether they’re cheating or not, that means everything has to happen in the classroom and must be supervised. Nothing can be taken home. More things must be done orally, and what does that mean? It means school has just become much more expensive, much more artisanal, much smaller — and at the exact moment that we’re trying to do the opposite. The consequences for higher education are devastating in terms of actually delivering a service.
What do you think of the idea of universal basic income, or allowing everyone to participate in the gains of AI?
I’m a much weaker proponent than I was before COVID. The reason is that COVID, in a sense, was an experiment with a universal basic income. We paid people to stay home, and we got QAnon. So I’m really worried about what happens when people don’t have to hop in a car, drive somewhere, do a job they hate and come home again, because the devil finds work for idle hands, and there will be a lot of idle hands and a lot of devilry.