#31: 22-Year-Old Creates ChatGPT Detector, Google Gears Up for AI Arms Race, and the Dark Side of AI Training
This week in AI news, we talk about education, an AI arms race, and a very dark side of AI training.

First up, a 22-year-old has created an app that claims to detect text generated by ChatGPT. The tool, called GPTZero, was built by Edward Tian, a senior at Princeton University, to combat the misuse of AI technology. Tian believes AI is at an inflection point and has the potential to be "incredible" but also "terrifying." The app scores a text on two variables: “perplexity” and “burstiness.” First, it measures how familiar the text is, given what the underlying model has seen during training; the less familiar the text, the higher its perplexity, and the more likely it is to be human-written, according to Tian. It then measures burstiness by scanning the text to see how much it varies from sentence to sentence; if it varies a lot, it is likely to be human-written. Tian's app aims to incentivize originality in human writing and prevent the "Hallmarkization of everything," where all written communication becomes formulaic and loses its wit. Paul and Mike discuss what this means, the ethical issues, and the opportunities and challenges for this tool.
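GPTZero's exact model and thresholds are not public, but the two signals Tian describes map onto standard language-modeling ideas. The sketch below, assuming GPT-2 (via the Hugging Face transformers library) as a stand-in reference model, shows how one might score a text on both: perplexity as the exponentiated cross-entropy of the text under the model, and burstiness as the spread of per-sentence perplexities. The sentence splitting and the burstiness proxy are illustrative assumptions, not GPTZero's actual method.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Perplexity under the reference LM: how "surprised" the model is by the
    # text. Lower values mean the text looks more like the model's training
    # data, which in Tian's framing suggests machine generation.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids makes the model return the mean token-level
        # cross-entropy; exponentiating that loss gives perplexity.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

def burstiness(text: str) -> float:
    # A simple burstiness proxy (our assumption, not GPTZero's published
    # method): the spread of per-sentence perplexities. Human writing tends
    # to mix plain and surprising sentences, so the spread is larger.
    sentences = [s.strip() for s in text.split(".") if len(s.strip().split()) > 2]
    scores = torch.tensor([perplexity(s) for s in sentences])
    return scores.std().item() if len(scores) > 1 else 0.0

sample = ("The quick brown fox jumps over the lazy dog. "
          "Meanwhile, quantum chromodynamics resists any tidy summary.")
print(f"perplexity={perplexity(sample):.1f} burstiness={burstiness(sample):.1f}")
```

In this framing, text the reference model predicts easily (low perplexity) with little sentence-to-sentence variation (low burstiness) would be flagged as likely machine-generated, which matches the description above.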
Next up, Google staked its position in the AI arms race this week by announcing its commitment to dozens of new advancements in 2023. The New York Times reported that Google’s founders, Larry Page and Sergey Brin, were called in to refine the company’s AI strategy in response to threats like ChatGPT and major players like Microsoft, which just formally announced its multi-billion-dollar partnership with OpenAI. According to the Times, Google now intends to launch more than 20 new AI-powered products and demonstrate a version of its search engine with chatbot features this year.

And finally, a new investigative report reveals the dark side of training AI models. A recent investigation by Time found that OpenAI used outsourced Kenyan laborers earning less than $2 per hour to make ChatGPT less toxic. The work included reviewing and labeling large amounts of disturbing text, including violent, sexist, and racist remarks, to teach the platform what constituted unsafe outputs. Some workers reported serious mental trauma from the work, which OpenAI and Sama, the outsourcing company involved, eventually suspended due to the damage to workers and the negative press. As Paul put it in a recent LinkedIn post, this raises larger questions about how AI is trained: “There are people, often in faraway places, whose livelihoods depend on them exploring the darkest sides of humanity every day. Their jobs are to read, review and watch content no one should have to see.”