#44: Inside ChatGPT’s Revolutionary Potential, Major Google AI Announcements, and Big Problems with AI Training Are Discovered
54 minutes
2 years ago
New announcements, fast training with repercussions, and more are discussed in this week's Marketing AI Show with Paul Roetzer and Mike Kaput. Read more, then tune in!

Stunning results from ChatGPT plugins

The way we all work is about to change in major ways thanks to ChatGPT, and few are ready for how fast this is about to happen. In a new TED Talk, OpenAI co-founder and president Greg Brockman shows off the power and potential of the all-new ChatGPT plugins, and the results are stunning. Thanks to plugins, ChatGPT can now browse the internet and interact with third-party services and applications, resulting in AI agents that can take actions in the real world to help us with our work. In the talk, Brockman shows how knowledge workers will soon work hand-in-hand with machines, and how this is going to start changing things months (or even weeks) from now, not years. Paul and Mike talk about the capabilities that caught their eye and what this means for the future of work.

Google just announced some huge AI updates

However, some within the company say Google is committing ethical lapses in its rush to compete with OpenAI and others. There were three significant updates. First, Google announced that its AI research team, Brain, would merge with DeepMind, creating Google DeepMind. Second, it was revealed that Google is working on a project titled "Magi," which involves reinventing its core search engine from the ground up as an AI-first product, as well as adding more AI features to search in the short term. Details are light at the moment, but the New York Times has confirmed that some AI features will roll out in the US this year and that ads will remain part of AI-powered search results. Finally, Google announced that Bard had been updated with new features to help you code: Bard can now generate code and help you debug it. As these updates rolled out, reporting from Bloomberg revealed that some Google employees believe the company is committing ethical lapses by rushing the development of AI tools, particularly around Bard and the accuracy of its responses.

What problems arise when training AI tools?

AI companies like OpenAI are coming under fire for how their AI tools are trained, and social media platforms are pushing back. Reddit, which is often scraped to train language models, just announced it would charge for API access in order to stop AI companies from training models on Reddit data without compensation. Twitter recently made a similar move, and Elon Musk publicly threatened to sue Microsoft for, he says, "illegally using Twitter data" to train models. Other companies are sure to follow suit. An investigative report by the Washington Post recently found that large language models from Google and Meta trained on data from major websites like Wikipedia, The New York Times, and Kickstarter. The report raises concerns that models may be using data from certain sites improperly: in one example, the Post found models had trained on data from an ebook piracy site, likely without permission. Not to mention, the copyright symbol appeared more than 200 million times in the data set the Post studied. And if that wasn't enough, StableLM and AI Drake were discussed!