Roundup: The reality of ChatGPT, Woz feels "robbed" by Musk, and Microsoft’s merger woes

Stay tuned to the end for a fun (not AI-generated) illustration

Welcome to this week’s Disconnect Roundup, where I give you my thoughts on some important stories and suggest some great things to read based on what happened this past week. Everyone’s getting the roundups this month, but starting in March they’ll only be for paid subscribers, so if you want to keep getting them, make sure to sign up!

With that said, this week we’re digging further into generative AI with three stories that I think do a great job of breaking down what it is and what its impacts could be. Then, we take a look at a surprisingly critical interview Steve Wozniak gave to CNBC and review a new report that throws another wrench into Microsoft’s planned Activision Blizzard acquisition. Finally, we end off with a bunch of great reading recommendations, followed by an illustration that’s been on my mind lately.

Enjoy!

What is ChatGPT, Really?

Obviously, ChatGPT and generative AI more broadly are going to be a big topic this year. Microsoft has already plowed $10 billion into OpenAI, the company behind ChatGPT, to try to take on Google and give its subscription products a leg up on competitors, and VCs are making a ton of AI deals in the hope they’ll be able to cash in despite the tech slump. But what are these technologies really doing, and what promise do they really hold? We can’t get distracted by the hype here, as we too often do with tech products. To that end, I wanted to point out three articles I read this week that I feel give us a clearer understanding of these technologies.

First is a piece by Ted Chiang in The New Yorker that compares ChatGPT to a blurry JPEG file. When you resave a JPEG over and over, it keeps getting blurrier because of the compression that happens with each save; ChatGPT is similar. It’s ingesting a lot of information from the web, but it can’t retain it all. What it spits out is a lossy recombination of what it’s taken in, but that doesn’t mean it understands the content in question. Yet, as Chiang explains, it looks impressive to us.

Imagine what it would look like if ChatGPT were a lossless algorithm. If that were the case, it would always answer questions by providing a verbatim quote from a relevant Web page. We would probably regard the software as only a slight improvement over a conventional search engine, and be less impressed by it. The fact that ChatGPT rephrases material from the Web instead of quoting it word for word makes it seem like a student expressing ideas in her own words, rather than simply regurgitating what she’s read; it creates the illusion that ChatGPT understands the material. In human students, rote memorization isn’t an indicator of genuine learning, so ChatGPT’s inability to produce exact quotes from Web pages is precisely what makes us think that it has learned something. When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression.
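
The generational loss Chiang describes is easy to recreate for yourself. Here’s a minimal Python sketch, assuming the Pillow imaging library and a stand-in file called photo.jpg (both my placeholders, nothing from his piece):

```python
# A toy recreation of Chiang's analogy: re-save a JPEG over and over and
# watch fine detail disappear. "photo.jpg" and quality=50 are placeholders,
# not anything from the article itself.
from PIL import Image

image = Image.open("photo.jpg").convert("RGB")
for generation in range(100):
    # Each save runs lossy compression again; quantization and rounding
    # errors accumulate, most visibly over the first several generations.
    image.save("recompressed.jpg", format="JPEG", quality=50)
    image = Image.open("recompressed.jpg").convert("RGB")

# The final file still resembles the original but has shed detail it can
# never get back: a lossy reconstruction rather than a copy.
```

Compare the first and last files and you get the point: the output is recognizably the image, but it isn’t the image.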

Chiang’s article demystifies large language models, and by doing so it also makes clear some of the potential consequences of getting drunk on the hype and being unable to see through the PR. The repackaging that tools like ChatGPT engage in “is what makes it harder for us to find what we’re looking for online right now,” he writes, and only further encourages the content mill approach to online writing. But it also shows why relying on such tools to replace human writing is going to have deeper consequences.

Having students write essays isn’t merely a way to test their grasp of the material; it gives them experience in articulating their thoughts. If students never have to write essays that we have all read before, they will never gain the skills needed to write something that we have never read.

And it’s not the case that, once you have ceased to be a student, you can safely use the template that a large language model provides. The struggle to express your thoughts doesn’t disappear once you graduate—it can take place every time you start drafting a new piece. Sometimes it’s only in the process of writing that you discover your original ideas.

He concludes with a provocative question: “What use is there in having something that rephrases the Web?”

Following on from Chiang’s piece is one by Dan McQuillan, who not only continues this demystification, but also looks at the politics of large language models. McQuillan positions ChatGPT as a harmful “bullshit generator.”

The language model has no idea what it's talking about because it has no idea about anything at all. It's more of a bullshitter than the most egregious egoist you'll ever meet, producing baseless assertions with unfailing confidence because that's what it's designed to do. It's a bonus for the parent corporation when journalists and academics respond by generating acres of breathless coverage.

The potential harm of these tools only escalates as their capabilities improve because their “inevitable hallucinations” become harder to spot. But on top of that is the politics of these tools and how their owners seek to roll them out.

OpenAI is acquiring billions of dollars of investment on the back of the ChatGPT hype. The point here is not only the pocketing of a pyramid-scale payoff but the reasons why institutions and governments are prepared to invest so much in these technologies. For these players, the seductive vision isn't real AI (whatever that is) but technologies that are good enough to replace human workers or, more importantly, to precaritise them and undermine them. ChatGPT isn't really new but simply an iteration of the class war that's been waged since the start of the industrial revolution.

At this point, I don’t think it’s controversial to say that one of the key innovations of Silicon Valley’s tech industry has been to disempower labor, whether it’s through shifting more workers from employee to contractor status both at corporate HQs and in things like the gig economy; rolling out new systems to surveil workers and speed up the pace of work, like Amazon’s algorithmic management tools; or the new wave of layoffs being used to discipline white-collar workers who felt empowered over the last decade. It’s important we consider how generative AI fits into this broader effort, because you can be sure these capitalists aren’t embracing AI to better humanity.

Finally, an assessment not of ChatGPT, but of OpenAI’s Whisper speech-recognition model by Papa Reo, a group that works to protect and revive the Māori language in Aotearoa New Zealand. I’ve been interested in their Indigenous perspective on AI technologies since reading a profile on Te Hiku Media in Wired in 2021 — I’d love to chat with them in the future — and I think it’s one we should be conscious of as there’s a push to expand the use of AI and allow these companies to ingest whatever data they want to train their models.

Given the attempted eradication of Indigenous languages by colonizers, Indigenous groups are concerned about tech companies’ attempts to capture their language data without permission and the potential impact of the tools they create with it.

The main questions we ask when we see papers like FLEURS and Whisper are: where did they get their indigenous data from, who gave them access to it, and who gave them the right to create a derived work from that data and then open source the derivation? The history of colonisation and how actions like this do more harm than good is clear. […]

For our organisation, the way in which Whisper was created goes against everything we stand for. It's an unethical approach to data extraction and it disregards the harm that can be done by open sourcing multilingual models like these. […] If we release a synthetic voice that mispronounces te reo Māori, we further damage the language already negatively influenced by English vowels and intonation.

I won’t outline everything they argue, because I think it’s better for you to go read the assessment, but they also explain alternative ways to do this work that are more in line with an anti-colonial approach and which actually give Indigenous peoples like the Māori sovereignty over their data.

As far as we are aware there were no Māori or Hawaiians involved in making this model, and indigenous data were scraped from the web and used to create this model. We assume that OpenAI and the researchers had no right to use indigenous data in this way. If this assumption is incorrect, then who gave them that right and did they have the authority to give that right? […]

Ultimately, it is up to Māori to decide whether Siri should speak Māori. It is up to Hawaiians to decide whether ʻōlelo Hawaiʻi should be on Duolingo. The communities from where the data was collected should decide whether their data should be used and for what. It's called self determination. It is not up to foreign governments or corporations to make key decisions that will affect our communities.

Woz Isn’t Buying Tech’s Bullshit

Forgive me for this one. It’s not as newsy as the other topics, but I still found it worth pointing out. Earlier this week, Steve Wozniak showed up on CNBC’s Squawk Box to give his takes on the tech industry and had some pretty critical things to say. Wozniak, of course, is known for co-founding Apple alongside Steve Jobs, and I don’t think it’s unfair to say Jobs the marketer took advantage of Wozniak the kind nerd. (Don’t get me wrong, I’m sure Woz is set for life.)

It’s pretty clear the hosts — Joe Kernen, Becky Quick, and Andrew Ross Sorkin — thought they were getting a very different interview, often appearing surprised at how much Woz had given up believing in Silicon Valley’s bullshit. At one point, they’re talking about AI and Woz explains he’s just not buying the notion of artificial “intelligence.” He says he read Ray Kurzweil’s singularity work and was “convinced by a certain date we’d have computers that had emotions and feelings.” But then he changed his mind and realized computers aren’t going to be able to think for themselves. “They’re not going to have that intuition,” he says. Kernen interrupts a couple of times to say he believes Kurzweil’s writing and hopes he’ll live to see it. Woz replies, “I don’t buy that anymore,” and explains that his son convinced him it was bullshit.

In another part of the interview, Sorkin asks him whether Elon Musk is comparable to Steve Jobs, saying he’s often compared them. Woz initially agrees, then clarifies the comparison isn’t about them being “great entrepreneurs,” but about “having the ability to communicate and wanting to be seen as the important person, and being like a cult leader.” He proceeds to go in on Elon Musk and Tesla, saying “A lot of honesty disappears when you look at Elon Musk and Tesla,” and that Musk “robbed my family … of so much money” by constantly deceiving people about what Tesla would deliver on autonomous driving.

He also takes the time to offer a measured opinion on ChatGPT — it’s impressive, but we said the same thing when a computer could first beat us at chess, and we didn’t look at the technology’s drawbacks — to throw cold water on the Metaverse, and even to say he doesn’t get new phones anymore because they’re always the same thing trying to convince us they’re something new.

The interview stood out to me not just because of the criticisms of Elon Musk, but also because here is someone who has lived through this long cycle of the tech industry — from personal computers to the internet to today. He clearly believed in a lot of the hype at one point, but like many of us, now sees the industry through clear eyes and is trying not to get drawn in by the marketing anymore. It’s refreshing, if nothing else.

UK Could Stop Microsoft’s Activision Acquisition

Microsoft was dealt another setback in its attempt to buy Activision Blizzard for $68.7 billion. After challenges from the US Federal Trade Commission and the European Commission, the UK’s Competition and Markets Authority (CMA) released its provisional report on the deal this week, and it’s arguably even more damning than the other two agencies’ reviews. The CMA writes,

Given we have provisionally found that Microsoft already has a strong position in this market through its ownership of Xbox, a global cloud computing service, and the leading PC operating system (OS), we are concerned that even a moderate increment to its strength may be expected to substantially reduce competition in this developing market to the detriment of current and future cloud gaming users.

The CMA believes Microsoft would be incentivized to make popular Activision Blizzard properties exclusive, which would lead to “higher prices, reduced range, lower quality, worse service and/or reduced innovation” in the games industry. One particularly strong part of the report specifically points to what the acquisition could mean for Microsoft’s dominance of streaming and subscriptions through Xbox Game Pass, which any clear-eyed observer can see is the point of the acquisition: to get access to a large library of games, including some very popular franchises, to fill out its subscription offering and entice more people to subscribe.

To that end, the CMA found that 24% of Call of Duty players on PlayStation would abandon the platform if the franchise became Xbox exclusive. But there’s a bigger point here, and I think it’s disappointing how few gamers can see it because they’ve fallen for Microsoft’s trick of using traditional gaming rivalries (Xbox vs PlayStation) and the goodwill felt toward Xbox head Phil Spencer (who, at the end of the day, is just another corporate executive) to rally support. If this deal goes through, it will have wide-ranging effects in an already consolidating industry — further accelerating those pressures and reducing the incentive to experiment. Ultimately, it would be bad for gamers, even if it would be great for Microsoft.

I’d just say that if you’re interested in this topic and the potential impacts of this consolidation, I had a great conversation with Waypoint editor Rob Zacny last year that gets into how the industry has been changing for the worse.

Some other things to read

  • Brian Merchant wrote about tech’s relationship to Los Angeles, and how interesting it is to assess from that vantage point: “not just to take a window seat to the innovations happening in entertainment and virtual reality, but to explore the toll of all tech in its La La Land phase.”
  • A new MIT study dug into the potential climate impact of self-driving cars and all the computing power they’d require. The researchers didn’t have great news: “widespread global adoption of self-driving cars would generate an additional 0.14 gigatons of greenhouse gas emissions per year—as much as the nation of Argentina.”
  • Some more unofficial Twitter Files (not from Elon and his hand-picked scribes): “The Trump administration and its allied Republicans in Congress routinely asked Twitter to take down posts they objected to — the exact behavior that they’re claiming makes President Biden, the Democrats, and Twitter complicit in an anti-free speech conspiracy to muzzle conservatives online.”
  • Billy Perrigo at TIME reports that a Kenyan court “rejected Meta’s attempt to have its name struck from a lawsuit alleging widespread failings in its safety operations in Africa.” Amnesty International says it means Meta will be “significantly subjected to a court of law in the global south” for the first time in its history. Billy’s been reporting on Meta’s Kenyan content moderation since last year.
  • Jon Christian at Futurism has been tracking the media companies getting AI to write articles. The publisher behind Sports Illustrated and Men’s Journal said they’d use the tech responsibly. Their first AI story “contained persistent factual mistakes and mischaracterizations of medical science that provide readers with a profoundly warped understanding of health issues.”
  • People aren’t buying into Elon Musk’s Twitter Blue: The Information reports the subscription service has just 180,000 US subscribers and 290,000 global subscribers. That’s nowhere near replacing all that lost ad revenue.
  • Adam Neumann is back to explain what his a16z-backed housing company Flow is all about. As you might expect, it makes absolutely no sense. People don’t want a “sense of ownership,” Adam. They want a stable and affordable place to live!
  • Kate Wagner digs into the unethical embrace of Saudi Arabia’s NEOM project by starchitects who want you to believe they’re building a better future.
  • Elon Musk is angry that engagement on his tweets is dropping. When an engineer told him it might be because people are tiring of his antics, Musk fired the engineer on the spot.
  • Americans have been spared so far, but Netflix is rolling out its ill-considered password-sharing crackdown in Canada, New Zealand, Spain and Portugal. Let’s just say: 🏴‍☠️
  • Who is @catturd2, the account dominating MAGA Twitter? Miles Klee decided to find out.
  • Finally, not a read, but I had an in-depth conversation with Edward Niedermeyer this week about what’s been going on at Tesla and the many challenges it faces moving forward with probes of Autopilot, a more competitive EV market, and a CEO intent on turning off a lot of potential customers.

An Illustration to Drill Into Your Brain

With more bad news out of Neuralink, this piece of art by the fantastic Eli Valley has been on my mind again the past few days.