AI-driven automation is quickly becoming a fundamental part of businesses’ tech stacks, but there are also potential dangers associated with this technology.
Paul pointed out that AI-driven automation is quickly becoming a fundamental part of businesses’ tech stacks. "I feel like the large language model is going to be as fundamental to the tech stack as a CRM has been for the last ten to 15 years," he said.
Mike added that businesses should look for models they can customize with their own data and integrate into their own applications. "You want to tune these models on that data, whether it’s for internal or external use cases, and you want to be highly confident in the privacy and security of that data and how these models work within your organization," he said.
AI technology is rapidly advancing and is capable of performing complex tasks autonomously with minimal human intervention.
Paul discussed AI technology and the need for safety and alignment when building these applications. "We're not going back, we're not going to just stop trying to build these action transformers," he said.
"But I really hope that the people that are building these things understand the potential ramifications of what they're building and do everything in their power internally and with their peers who are working on similar technology, to do everything possible, to do it in a responsible way, and to do it with safety, first and foremost."
Mike then discussed the impact of AI on labor, noting that recent advances have accelerated AI's ability to perform knowledge work, including strategic and creative work.
AI is advancing quickly and is likely to significantly reduce the time it takes to complete knowledge tasks such as writing, design, coding, and planning.
Paul noted that AI-driven disruption of knowledge work is a very real possibility and that organizations and leaders should plan for significant job loss. He also pointed to a survey of almost 800 people, which showed that lack of education and training was the top obstacle to adoption.
"It’s coming; it’s going to intelligently automate large portions of your work," Roetzer said.
"Based on my own experiences in research as well as the context of dozens of conversations, it is reasonable to assume the time to complete most knowledge tasks such as writing, design, coding, planning, et cetera, can be reduced on average 20% to 30% at minimum with current generative AI technology. And the tech is getting faster and smarter at a compounding rate, so these percentages are only going to rise what it's capable of doing."
AI technology has the potential to create a wide range of new jobs and career paths.
Paul and Mike discussed the potential impacts of AI technology on the job market and the economy. Mike noted that "this is not just wild speculation," and Paul agreed, saying "I do believe it's going to create lots of new jobs and career paths we can't imagine."
He went on to explain that "the flaws and limitations of generative AI are greater than are being discussed in the media and will prevent mass disruption in the near term."
Paul also highlighted the importance of humans in the AI process, noting that "the dependence of the machine on the human to make sure it's accurate and safe and aligned with human values" is essential. He suggested creating "generative AI policies that explicitly say how you're using language tools, image generation tools, video generation tools, etc."
This episode is brought to you by BrandOps, built to optimize your marketing strategy, delivering the most complete view of marketing performance, allowing you to compare results to competitors and benchmarks.
One step forward, two steps back…or at least steps made with caution. Meta announces their Segment Anything Model, and in the same breath, we’re talking about ChatGPT and safety, as well as the limits of detecting ChatGPT usage. Paul and Mike break it down:
Meta AI announces their Segment Anything Model
An article from Meta introduces their Segment Anything project, aiming to democratize image segmentation in computer vision. This project includes the Segment Anything Model (SAM) and the Segment Anything 1-Billion mask dataset (SA-1B), the largest segmentation dataset ever.
This has wide-ranging applications across different industries. Meta notes that it could, for example, be incorporated into augmented reality glasses to instantly identify the objects you’re looking at and prompt you with related reminders and instructions.
In marketing and business specifically, Gizmodo calls the demo of SAM a Photoshop Magic Wand tool on steroids, and one of its reporters used it to do sophisticated image editing on the fly, simply pointing and clicking to remove and adjust elements of an image.
Right now, the model is available only for non-commercial testing, but given the use cases, it could find its way into Meta’s platforms as a creative aid.
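For developers curious about how simple that point-prompt workflow is, here is a minimal sketch using the segment-anything package from Meta's research repo; the image path, click coordinates, and checkpoint file name are illustrative assumptions, not anything from the episode.

```python
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM checkpoint (file name assumed from Meta's public downloads)
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Embed the image once, then prompt it with "clicks"
image = cv2.cvtColor(cv2.imread("product_photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A single foreground click at pixel (x=500, y=375) returns candidate masks
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),  # 1 = foreground click, 0 = background
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]  # highest-scoring segmentation mask
```

That mask can then be handed to an editing pipeline to cut out, recolor, or replace the selected object, which is essentially what the point-and-click demo does.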
Paul and Mike discuss the opportunities for marketers and the business world at large.
Does ChatGPT have a safety problem?
Is OpenAI's April 5 statement on their website a response to calls for increased AI safety, like the open letter signed by Elon Musk and others, and Italy’s full ban on ChatGPT?
A new article from WIRED breaks down why and how Italy’s ban could spur wider regulatory action across the European Union—and call into question the overall legality of AI tools. When banning ChatGPT, Italy’s data regulator cited several major problems with the tool. But, fundamentally, their reasoning for the ban hinged on GDPR, the European Union’s wide-ranging General Data Protection Regulation privacy law.
Experts cited by WIRED said there are just two ways that OpenAI could have gotten that data legally under EU law. The first would be if they had gotten consent from each user affected, which they did not. The second would be arguing they have “legitimate interests” to use each user’s data in training their models. The experts cited say that the second one will be extremely difficult for OpenAI to prove to EU regulators. Italy’s data regulator has already been quoted by WIRED as saying this defense is “inadequate.”
This matters outside Italy because all EU countries are bound by GDPR. And data regulators in France, Germany, and Ireland have already contacted Italy’s regulator to get more info on their findings and actions.
This also isn’t just an OpenAI problem. Plenty of other major AI companies likely have trained their models in ways that violate GDPR. This is an interesting conversation and topic to keep our eyes on. Will other countries follow suit?
Can we really detect the use of ChatGPT?
OpenAI, the maker of ChatGPT, just published what it’s calling “Our approach to AI safety,” an article outlining specific steps the company takes to make its AI systems safer, more aligned, and developed responsibly.
Some of the steps listed include delaying the general release of systems like GPT-4 to make sure they’re as safe and aligned as possible before they become accessible to the public, and protecting children by requiring users to be 18 or older (or 13 or older with parental approval) to use its AI tools. The company is also looking into options to verify users, and it cites that GPT-4 is 82% less likely to respond to requests for disallowed content.
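The 82% figure describes GPT-4's own refusal behavior, but it's worth noting that OpenAI also offers a separate moderation endpoint developers can call to screen content in their own applications. A minimal sketch with the openai Python client is below; the API key and sample text are placeholders.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

# Screen a piece of user-generated text against OpenAI's content policy
result = openai.Moderation.create(input="Some user-submitted text to screen")
flagged = result["results"][0]["flagged"]
categories = result["results"][0]["categories"]

if flagged:
    print("Blocked categories:", [name for name, hit in categories.items() if hit])
```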
Listen for more. Why now? Are we confident they’re developing AI responsibly?
AI leaders say slow down, Italy blocks AI, and the United Nations calls for a global framework. But other leaders keep finding ways to integrate ChatGPT, and new companies are launched. This dichotomy makes for an interesting episode. Paul and Mike break it all down.
“The Letter” heard round the world made waves - but what does it really mean?
In an open letter published by the nonprofit Future of Life Institute, a number of well-known AI researchers and tech figures, including Elon Musk and Steve Wozniak, have called on all AI labs to pause the development of large-scale AI systems for at least 6 months due to fears over the profound risks to society and humanity that they pose.
The letter notes that AI labs are currently locked in an “out-of-control race” to develop and deploy machine learning systems that no one can understand, predict, or reliably control.
The signatories call for a public and verifiable pause and for the development of shared safety protocols for advanced AI design and development.
What does it mean, will other countries follow suit, is it a PR play, and at this point, does it even matter? Are we thinking about misinformation and job loss the right way?
At the same time, moves are being made internationally: UNESCO (United Nations Educational, Scientific and Cultural Organization) is calling for the immediate implementation of its Recommendation on the Ethics of Artificial Intelligence, a global framework for the ethical use of AI.
And, in a bold move, Italy has become the first Western country to block OpenAI's chatbot ChatGPT, citing privacy concerns. The Italian data protection authority said it would ban and investigate OpenAI with immediate effect, following a data breach involving user conversations and payment information. Will other countries follow suit?
Prompt engineering - a job, a function, or a skill?
Paul recently wrote about one possible future he’s seeing for prompt engineering on LinkedIn, saying: “How soon until we have a Prompt Copilot that helps users write far more effective and optimized generative AI prompts? Think of it as a prompting assistant that improves and expands your prompts as you type them.”
He also talked about how the quality of human user prompts is crucial for the effectiveness and value of generative AI software—and that companies are motivated to reduce the friction in their products and speed up time to value for all users.
The development of a prompting assistant that helps users write more effective and optimized prompts using AI seems like an obvious and achievable innovation to solve this problem and could render prompting as a career path or human skill less important beyond 2023.
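No such product exists yet, but to make the idea concrete, here is a hedged sketch of what a prompt copilot could look like: a thin wrapper that asks a model to rewrite and expand a rough prompt before it is actually run. The system instructions and function name are purely illustrative, not something Paul described.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def improve_prompt(rough_prompt: str) -> str:
    """Hypothetical 'prompt copilot': expands a rough prompt into a more
    specific, optimized prompt before it is sent to a generative model."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the user's prompt so it is specific and well-structured, "
                    "adding audience, format, tone, and length. Return only the "
                    "improved prompt."
                ),
            },
            {"role": "user", "content": rough_prompt},
        ],
    )
    return response.choices[0].message["content"]

print(improve_prompt("write a blog post about our new product"))
```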
Will it become a must-know in any career path?
BloombergGPT is announced
Bloomberg has announced the development of a new large-scale generative AI model specifically trained on a wide range of financial data to support natural language processing tasks within the financial industry.
The model, called BloombergGPT, represents the first step in the development of a domain-specific model to tackle the complexity and unique terminology of the financial domain.
The new model will enable Bloomberg to improve existing financial NLP tasks such as sentiment analysis, named entity recognition, news classification, and question answering while bringing the full potential of AI to the financial domain.
On top of this, Seth Godin and David Sacks are using ChatGPT. What’s next?
Rapid-fire topics include the All-In podcast, a Redditor loses his love of his career because of AI, Replit teams up with Google Cloud, Sam Altman chats with Lex Fridman, Sam Altman launches Worldcoin, and more.
Listen to this week’s episode on your favorite podcast player, and be sure to explore the links below for more thoughts and perspectives on these important topics.
GPT-4 is changing the game. Access is easier, outputs are better, and technologies connecting to it are increasing exponentially with the help of a new plugin system. What will the rest of this week bring us?
OpenAI launches a plugin system for ChatGPT
OpenAI just announced a plugin system for ChatGPT, enabling it to interact with the wider world through the internet. The plugins, developed by companies like Expedia, Instacart, and Slack, will allow users to perform a variety of tasks using these sites from right within ChatGPT.
It’s not just companies wanting to embed AI into their sites. OpenAI itself is hosting three of the plugins: one that gives ChatGPT access to up-to-date information on the internet, a Python code interpreter, and a retrieval plugin that allows users to ask questions of documents, files, notes, emails, and public documentation.
Of particular note, one of the plugins available integrates with Zapier, which itself integrates with thousands of other tools. Right now, there’s a waitlist to access the plugins for developers and ChatGPT Plus users.
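Under the hood, a ChatGPT plugin is mostly a small manifest plus an OpenAPI spec describing what your API can do. The sketch below generates such a manifest based on OpenAI's published plugin format; the plugin name, URLs, and contact email are hypothetical placeholders.

```python
import json
import os

# Minimal ai-plugin.json manifest, per OpenAI's plugin documentation.
# Every name, URL, and email below is a made-up placeholder.
manifest = {
    "schema_version": "v1",
    "name_for_human": "Acme Todo",
    "name_for_model": "acme_todo",
    "description_for_human": "Manage your Acme to-do list from ChatGPT.",
    "description_for_model": "Plugin for creating, listing, and deleting to-do items.",
    "auth": {"type": "none"},
    "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"},
    "logo_url": "https://example.com/logo.png",
    "contact_email": "support@example.com",
    "legal_info_url": "https://example.com/legal",
}

# ChatGPT looks for the manifest at /.well-known/ai-plugin.json on your domain
os.makedirs(".well-known", exist_ok=True)
with open(".well-known/ai-plugin.json", "w") as f:
    json.dump(manifest, f, indent=2)
```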
Did we just open a whole new world of AI use cases?
Artificial General Intelligence…one step closer
"OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.” We read this in episode 36 of The Marketing AI Show, just over a month ago.
Now, OpenAI is saying, "Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity." A team of Microsoft AI scientists claims that GPT-4, the latest iteration of OpenAI's Large Language Model, exhibits "sparks" of human-level intelligence, or artificial general intelligence (AGI).
The researchers argue that GPT-4's impressive performance in a wide range of tasks, such as mathematics, coding, and even legal exams, indicates its potential as an early version of an AGI system.
While some argue that AGI is a pipe dream, others believe that it could usher in a new era for humanity, and this research indicates GPT-4 might just be leading the way.
Are these thoughts and findings legit? How seriously should we take it?
It only took 30 minutes to market a product launch
Imagine leveraging the power of AI to complete a massive business project in just 30 minutes, accomplishing tasks that would take humans hours or even days.
In a remarkable experiment from Wharton professor Ethan Mollick, a combination of AI tools was used to market the launch of an educational game, conduct market research, create an email campaign, design a website, and craft a social media campaign, among other tasks—in just 30 minutes.
The results demonstrated the unprecedented potential of AI as a multiplier of human effort, with vast implications for the future of work, productivity, and creativity. Over the course of half an hour, Mollick used no more than 20 inputs, actions, or prompts to generate 9,200 words of content, a working HTML/CSS file, 12 images, a voice file, and a movie file across a marketing positioning document, email campaign, website, logo, script and video, and social campaigns.
As he put it, “AI would do all the work, I would just offer directions.”
Is this the new normal for marketers?
Listen to this week’s episode on your favorite podcast player, and be sure to explore the links below for more thoughts and perspectives on these important topics.
In what may have been the biggest week in marketing AI (to date), we have a lot to review in this week’s podcast, so let’s jump right in.
GPT-4 is released to the public
The week started with the much-anticipated release of GPT-4, the latest, most powerful version of OpenAI’s large language model. And it’s now being integrated into existing products via API, as well as into ChatGPT.
According to OpenAI, “GPT-4 can solve difficult problems with greater accuracy, thanks to its broader general knowledge and problem-solving abilities.” OpenAI says that GPT-4 surpasses ChatGPT in its reasoning capabilities. In fact, the new model was tested on the Uniform Bar Exam, where it scored in the 90th percentile compared to ChatGPT’s 10th percentile score.
GPT-4 will also be able to accept images as input. OpenAI demoed one jaw-dropping example of the model generating code for a webpage based on a hand-drawn sketch of what the webpage should look like.
Has OpenAI veered too far away from its non-profit roots? Is a company with “open” in the name being as forthcoming as they should be? Paul and Mike discuss.
Microsoft and Google embed AI in their core products
Google just announced that developers will now have access to its PaLM API, which gives them the ability to build on top of Google’s language models in Google Cloud. The company also announced generative AI features coming to Google Workspace, the firm’s productivity suite. That means you’ll soon see generative AI features in Gmail and Docs that draft copy on any given topic for you.
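For developers coming off the waitlist, a PaLM API call looks roughly like the sketch below, assuming Google's google-generativeai Python client; the API key, model name, and prompt are illustrative assumptions.

```python
import google.generativeai as palm

palm.configure(api_key="YOUR_API_KEY")  # placeholder

# Generate draft copy with a PaLM text model (model name assumed)
completion = palm.generate_text(
    model="models/text-bison-001",
    prompt="Draft a three-sentence welcome email for new newsletter subscribers.",
    temperature=0.7,
    max_output_tokens=256,
)
print(completion.result)
```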
At the same time, Microsoft announced Microsoft 365 Copilot, an AI tool that is “your copilot for work.” According to the company, Copilot combines the power of large language models with your data in the Microsoft Graph and Microsoft 365 apps to increase productivity.
That happens in two ways, says Microsoft. First, Copilot works alongside you in popular apps like PowerPoint, Word, and Outlook to increase productivity. For instance, in Word, it can now generate drafts for you. In PowerPoint, you can use natural language prompts to create presentations. Copilot also enables a new feature called “Business Chat” that surfaces insights from data across your company and makes performing tasks easier. Our team watched the launch demo, and some instant applications came to mind.
The U.S. Copyright Office is getting involved…so make sure you’re not breaking any laws.
Warning: If you’re using generative AI tools to create content—articles, blog posts, books, images, software, songs, videos, etc—you do not own that content, according to the U.S. Copyright Office. That means anyone can reproduce it without your permission, create derivative works from it, display it, perform it, and sell it.
On Mar 16, 2023, the Copyright Office launched an initiative to examine the copyright law and policy issues raised by AI, including the scope of copyright in works generated using AI tools and the use of copyrighted materials in AI training.
“This initiative is in direct response to the recent striking advances in generative AI technologies and their rapidly growing use by individuals and businesses. The Copyright Office has received requests from Congress and members of the public, including creators and AI users, to examine the issues raised for copyright, and it is already receiving applications for registration of works including AI-generated content.”
The Office also issued updated registration guidance that has an immediate effect on your ability to protect your original works.
Paul and Mike discuss how to use AI-powered technologies in ways that stay within the law, and what tech companies need to address with their tools.
Listen to this week’s episode on your favorite podcast player, and be sure to explore the links below for more thoughts and perspectives on these important topics.
The Marketing AI Show is back! The smart CRM market is evolving…and so are marketers and businesses with the help of AI.
First comes ChatSpot, then comes Salesforce Einstein GPT
Coming on the heels of HubSpot’s ChatSpot announcement, Salesforce just announced Einstein GPT, a generative AI tool for its market-leading CRM. The tool, which is currently in closed pilot, creates content across marketing, sales, and service use cases.
Salesforce’s communications say, “Einstein GPT will infuse Salesforce’s proprietary AI models with generative AI technology from an ecosystem of partners and real-time data from the Salesforce Data Cloud, which ingests, harmonizes, and unifies all of a company’s customer data.”
They say Einstein GPT can generate personalized emails, generate specific responses for customer service teams, generate targeted content, and auto-generate code for developers. In the same breath, the company also announced a $250 million Generative AI fund through its venture arm.
The value (or lack thereof) gained from AI depends on three factors.
Paul recently published a post framing his idea of “the law of uneven AI distribution.” In it, he wrote: “The Law: The value you gain from AI, and how quickly and consistently that value is realized, is directly proportional to your understanding of, access to, and acceptance of the technology.”
This uneven distribution will create dramatic differences in people’s experiences with and perceptions of AI. And it’s all dependent on three factors: how well you understand AI, the level of access you have to AI, and how much you accept the radical changes that AI will bring about in business and society.
Do we need to fill the time saved by AI with more…work?
When we talk about AI, we often hear that the wondrous productivity gains produced by AI technology will give us back more time, in turn making our lives less busy and more fulfilling.
And these productivity gains are valuable. Venture fund ARK Invest predicts that we could boost the productivity of the average knowledge worker by 140% with AI, which would create $56 trillion in value globally. But a new article from the Centre for International Governance Innovation challenges the idea that AI will liberate our time and goes so far as to call the AI productivity narrative “a lie.”
However, history has shown that efficiencies often heighten expectations and standards. How can we, as marketers, business leaders, and humans, ensure we aren’t exacerbating Parkinson’s law, the idea that “work expands so as to fill the time available for its completion”? How can we invest that time in things we want to do? This is a topic we all need to listen to!
This week’s episode of The Marketing AI Show touches on generative AI, and you guessed it, ChatGPT. But it’s not more of the same. APIs and HubSpot take ChatGPT to the next level. Tune in!
ChatSpot…the latest in ChatGPT
The week is starting off with a big development. Just yesterday, Monday, March 6, HubSpot co-founder and CTO Dharmesh Shah released ChatSpot, an AI tool that combines the power of ChatGPT, image generation AI, and HubSpot’s CRM. The tool lets you ask questions of your HubSpot portal and provide instructions in natural language through a chat interface. For example, you can use ChatSpot to give you a summary of data in your portal, create a report of companies added last quarter summarized by country, or generate an image of an orange rocket ship. Mike and Paul break down this latest development and what it means for HubSpot customers and agencies.
The biggest winners in the generative AI tech stack…so far
Legendary venture capital firm Andreessen Horowitz published a deep dive into the generative AI market: “Who Owns the Generative AI Platform?” To create it, the firm met with dozens of startup founders and operators who deal directly with generative AI to better understand where the value in this market will accrue. Andreessen breaks down the generative AI tech stack into three main categories: applications, models, and infrastructure.
Andreessen observed that infrastructure vendors are likely the biggest winners in this market so far, capturing the majority of dollars flowing through the stack. Application companies are growing topline revenues very quickly but often struggle with retention, product differentiation, and gross margins. And most model providers, though responsible for the very existence of this market, haven’t yet achieved a large commercial scale.
Bottom line: the companies creating the most value — i.e. training generative AI models and applying them in new apps — haven’t captured most of it.
APIs are available for ChatGPT and Whisper
We knew it would happen soon: developers can now integrate ChatGPT and Whisper, OpenAI’s human-level speech recognition system, into apps and products through the company’s API. Since December, OpenAI says it has reduced the cost of ChatGPT by 90%—savings that API users will now receive when they use it, making it much easier and cheaper for companies to incorporate the capabilities of ChatGPT and Whisper into their businesses.
However, this doesn’t just mean every business can have its own instance of ChatGPT. It means they can use these capabilities to build innovative new products.
And tech and e-commerce companies are here for it. Already, Snap, the creator of Snapchat, introduced My AI, a customizable on-platform chatbot built on the ChatGPT API. Instacart is using the ChatGPT API to pair ChatGPT with its own data so that customers can ask open-ended natural language questions. And Speak, an AI language learning app and the fastest-growing English app in South Korea, is using the Whisper API to power an AI speaking companion product. It’s impressive to see the API in action.
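To give a sense of what these integrations look like in practice, here is a minimal sketch using OpenAI's Python client; the API key, audio file name, and prompts are placeholders.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Chat completion with the gpt-3.5-turbo model that powers ChatGPT
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful marketing assistant."},
        {"role": "user", "content": "Draft a subject line for our spring sale email."},
    ],
)
print(chat.choices[0].message["content"])

# Speech-to-text with the Whisper API
with open("podcast_clip.mp3", "rb") as audio_file:
    transcript = openai.Audio.transcribe("whisper-1", audio_file)
print(transcript["text"])
```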
These advancements and developments—happening at lightning speed—have an immediate impact on the marketing world. Paul and Mike help us uncover new opportunities and possibilities.
Listen to this week’s episode on your favorite podcast player, and be sure to explore the links below for more thoughts and perspectives on these important topics.
This week’s episode of The Marketing AI Show brings out some strong opinions from Paul and Mike. The common thread in the three stories covered? Humans.
OpenAI drops a big announcement planning for AGI.
OpenAI, the creator of ChatGPT, just published a bombshell article titled “Planning for AGI and Beyond,” laying out how the company plans to approach AGI: AI systems that are smarter than humans at many different tasks. OpenAI says that AGI “has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity.” But it also notes the serious risks of misusing such a hyper-intelligent system. Because of this, OpenAI outlines short- and long-term principles to “carefully steward AGI into existence.”
AI-generated content will lead to more human content.
Paul recently posted on LinkedIn about the “rise of more human content,” and it’s gotten some attention. In the post, he outlines one possible future for content in the age of AI-generated content, saying “As AI-generated content floods the web, I believe we will see authentic human content take on far greater meaning and value for individuals and brands.”
Readers had some things to say, including Alvaro Melendez, who said, “I totally agree I think we will see a rise in relevance and appreciation of artisan content. Human-crafted stories will gain in value.” Paul and Mike discuss their thoughts and observations. See the show notes below for a link to Paul’s post.
Get-rich-quick schemes are on the rise as ChatGPT takes center stage.
Internet scammers are now selling get-rich-quick advice on how to use ChatGPT to churn out content that makes money.
In one noted example, the editors of Clarkesworld, a popular science fiction and fantasy magazine that accepts short story submissions, recently estimated that 500 out of 1,200 submissions received in February were AI-generated by tools like ChatGPT. The problem got so bad, the magazine had to suspend submissions. And Clarkesworld is not alone.
This trend is impacting far more than fiction. Similar advice on how to make a quick buck generating content across book publishing, e-commerce, and YouTube is prevalent. In fact, there are already 200+ books on Amazon that now list ChatGPT as an author or co-author.
Paul and Mike have a lot to say on this topic!
Plus, stick around for the rapid-fire questions at the end, covering Bain x OpenAI, and Meta AI’s LLaMA release.
Paul and Mike are back together for a new episode of The Marketing AI Show. As companies fast-track their rollouts, it’s becoming clear that it might be time to slow down, and that includes OpenAI doing a better job of explaining how ChatGPT works. Then the guys discuss “World of Bits” and what it means for marketers and the business world.
Microsoft’s Bing chatbot is not ready for primetime.
A recent interaction between New York Times technology reporter Kevin Roose and a chatbot developed by Microsoft for its Bing search engine went a bit awry.
Suffice it to say, it turned into a bizarre and unsettling human/machine interaction. During a two-hour conversation, the chatbot told Roose it could hack into computer systems and even suggested he leave his wife.
Roose concluded that the AI wasn’t ready for primetime, and Microsoft is now doing damage control.
ChatGPT vows to better educate the public.
Marketers who have taken the time to understand ChatGPT have likely seen some degree of value in the tool. Many average consumers, though, are confused or generally scared by the idea of what AI could do. Because of this, among myriad other reasons, OpenAI recently published a blog post that addresses some of the known issues with ChatGPT’s behavior. It also provides some education on how ChatGPT is pre-trained and how it is continuously fine-tuned by humans.
OpenAI is working hard to improve ChatGPT’s default behavior by better addressing biases in the tool’s responses, defining the AI’s values within broad bounds, and making an effort to incorporate more public input on how the system’s rules work.
“World of Bits” has transformative implications on marketing and business.
Paul wrote a blog post over the weekend about a powerful concept called “World of Bits,” saying that it could transform marketing and business. In the post, Paul said, “Based on a collection of public AI research papers related to a concept called World of Bits (WoB), and in light of recent events and milestones in the AI industry, including legendary AI researcher Andrej Karpathy announcing his return to OpenAI, it appears that the capabilities for AI systems to use a keyboard and mouse are being developed in major AI research labs right now.”
The outcomes? If AI develops these abilities at scale, the UX of every SaaS company will have to be re-imagined, and it will have profound impacts on productivity, the economy, and human labor. It’s a great and thought-provoking way to end this week’s podcast.
Listen to this week’s episode on your favorite podcast player, and be sure to explore the links below for more thoughts and perspectives on these important topics.
This week on The Marketing AI Show, Paul takes the show on the road—to San Francisco for Jasper’s GenAI Conference—while Mike is here in Cleveland. The big news is Bard, Bing, and a $6 Billion valuation. Suddenly, it’s ChatGPT against the world.
Google responds to ChatGPT with its conversational AI tool, Bard.
Google just announced an experimental conversational AI tool named Bard. Bard uses Google’s LaMDA language model to provide natural language answers to search queries. Think of it like ChatGPT, but backed by all the knowledge and information that Google’s search engine has cataloged over the last couple of decades.
The announcement of Bard—a response to OpenAI and ChatGPT—prompted some critics to say the rollout was rushed, while others said they moved too slowly after ChatGPT took center stage in December and January.
If you missed it, the demo didn’t quite go as planned.
OpenAI gives Bing a new lease on life.
Microsoft’s Bing is getting more attention now than its previous 14 years combined. The latest version of the search engine is powered by OpenAI, complete with ChatGPT-like conversational capabilities. Bing can now respond to searches and queries in natural language, like ChatGPT, and use up-to-date information, like Google’s Bard release.
Kevin Roose, technology writer at the New York Times, took the new capabilities for a test drive and was impressed.
Will Bing and OpenAI make Edge, Microsoft’s browser, interesting for customers?
Cohere answers the call for ChatGPT for the enterprise.
Major AI startup Cohere is in talks to raise money at a $6 billion valuation and bring ChatGPT-like capabilities to businesses. Established in 2019 by former researchers at Alphabet/Google, Cohere is a big player in the world of AI. Its foundational language AI technology allows businesses to incorporate large language models into their work.
The group is now in talks to raise hundreds of millions at a $6 billion valuation, reports Reuters, as the AI arms race heats up. Cohere is no stranger to the VC world, having already raised $170 million from venture capital funds and AI leaders like Geoff Hinton and Fei-Fei Li.
The appeal is the company’s focus on building for the enterprise, with an emphasis on real-world applications for their technology.
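As a rough illustration of what "large language models for the enterprise" looks like at the code level, here is a hedged sketch using Cohere's Python SDK; the API key, model name, and prompt are assumptions for the example.

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder

# Generate marketing copy with a Cohere generation model (model name assumed)
response = co.generate(
    model="command",
    prompt="Write a two-sentence product description for an AI-powered CRM.",
    max_tokens=80,
    temperature=0.7,
)
print(response.generations[0].text)
```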
Listen to this week’s episode on your favorite podcast player, and be sure to explore the links below for more thoughts and perspectives on these important topics.