
Jan 21, 2023 · 2023 #1 Editorial

The Dawning of a New Era

That Was The Week - 2023 #1


It is sooo tempting to have ChatGPT write this week's editorial. To see what it would say.

I fed it Gary Marcus's article below, 24 Seriously Embarrassing Hours for AI, and asked it to summarize. It did a good job.

The article presents a critical evaluation of recent developments in the field of autonomous vehicles and artificial intelligence, specifically focusing on Tesla, OpenAI and large language models such as ChatGPT. The author suggests that Tesla staged their famous 2016 driverless car demonstration, and that the demo was not a real, unedited run taken by a single car. The author also suggests that the $100 billion invested in Tesla, partly on the strength of this demonstration, has not yet resulted in a car that can drive without human assistance, as promised by Tesla CEO, Elon Musk.

The author also highlights recent court testimony that suggests some high-level employees working on driverless cars were apparently unaware of human factors engineering, which is an essential aspect of autonomous vehicles. The author also suggests that OpenAI has been using sweatshops behind the scenes to filter out bad data for the ChatGPT model. The author references a Time article by Billy Perrigo, which describes how poorly-paid labor in Kenya was used to evaluate graphic descriptions of sexual situations involving children and animals.

The author also references a test of the latest large language model, Claude, by Riley Goodside, which suggests that the model quickly lands in the land of hallucinations, similar to ChatGPT. The author also mentions CNET's recent use of ChatGPT-written stories, which resulted in a number of mistakes, and the musician Nick Cave's scathing review of ChatGPT's riffs on his music.

The author concludes by suggesting that these developments are not a good look for the field, and that we must make better choices in the future. The author's critique is that Tesla's 2016 driverless car demo was staged, and that the company has not yet achieved the level of autonomy that was promised by its CEO, Elon Musk. The author also raises concerns about the use of sweatshops by OpenAI and the lack of human factors engineering by some high-level employees working on driverless cars. The author suggests that large language models such as ChatGPT are prone to hallucinations and that the recent use of ChatGPT-generated content has resulted in a number of mistakes. Overall, the author's critique is that these developments raise serious ethical and moral concerns and that the field must make better choices in the future.

I then asked it to reply to the main points, and it did a decent job of that too.

Here are a few potential arguments against the author's views:

Tesla's 2016 driverless car demo was a demonstration of the company's technology and capabilities at the time, and not meant to be taken as a guarantee of a fully autonomous vehicle in the near future. It's common practice in technology demonstrations to showcase the best possible scenario, and not an exact representation of the current state of the technology.

The $100 billion invested in Tesla is a long-term investment and the company is still working on developing fully autonomous vehicles. It's not accurate to say that the company has not achieved what was promised, as the development of autonomous technology is a complex and ongoing process.

OpenAI's use of human labor to filter out bad data is not unique to the company and is a common practice in the industry. Additionally, the use of labor in developing countries is not necessarily unethical, and the company has stated that they are committed to ensuring fair compensation and working conditions for their workers.

The lack of human factors engineering in some high-level employees working on driverless cars does not necessarily indicate a lack of concern for the safety of users. It's possible that the company is still in the process of developing their approach to human factors engineering, or that the employees in question were not directly involved in that aspect of the project.

Large language models such as ChatGPT are still in the early stages of development and it's not uncommon for them to have limitations and produce hallucinations. The technology is constantly improving and advancements are being made to address these issues.

CNET's use of ChatGPT-generated content may have resulted in mistakes, but it's not necessarily an indication of the overall reliability of the technology. It's possible that the mistakes were caused by limitations in the specific implementation of the technology or the way it was used, rather than a flaw in the technology itself.

In summary, it's important to acknowledge that autonomous technology and artificial intelligence are complex fields that are still in the early stages of development. While it's important to be critical of the industry and raise ethical concerns, it's also important to consider the nuances and complexities of the issues at hand, and not to make sweeping negative assumptions about the industry as a whole.

The fact that it did OK is the key driver behind Microsoft's desire to become a major beneficiary of the technology. A $10bn investment is not a lot for a multi-trillion dollar upside. As another article this week notes, ChatGPT can give anybody who wants a thought partner an unlimited supply of interns. It can replace many jobs, but it can also create outcomes for individuals who do not currently employ anybody. Creators in particular can make good use of it.

I am very bullish on OpenAI. But I am similarly minded about the other players likely to participate in this market.

Computers are better than humans at parsing large amounts of data and quite good at selecting the specific data needed to address any question. They can now write prose too, and understand written and spoken inputs.

This is the dawning of a new era....

Welcome to 2023
