
How ChatGPT And Billions In Investment Helped AI Go Mainstream In 2023

The hottest topic of 2023 was AI, as billions of dollars in VC funding flowed into the sector and the industry grappled with critical questions about the technology’s place in society.

2023 was the year of AI. After ChatGPT launched in November 2022, it became one of the fastest-growing apps ever, gaining 100 million monthly users within two months.

With AI becoming the hottest topic of the year (just as Bill Gates predicted in January), a string of startups exploded into the market with AI tools that could generate everything from synthetic voices to videos. Evidently, AI has come a long way since the start of the year, when people questioned whether ChatGPT would replace Google search. “I’m much more interested in thinking about what comes way beyond search… What do we do that is totally different and way cooler?” OpenAI CEO Sam Altman told Forbes in January.

Rapid advancements in the technology caught the attention of venture capitalists as billions of dollars flowed into the sector. Leading the way was Microsoft’s $10 billion investment into AI MVP OpenAI, which is now reportedly raising at an $80 billion valuation. In June, high-profile AI startup Inflection released its AI chatbot Pi and raised $1.3 billion at a $4 billion valuation. A month later, Hugging Face, which hosts thousands of open source AI models, reached a $4 billion valuation. In September, Amazon announced that it plans to invest $4 billion into OpenAI challenger Anthropic, which rolled out its own conversational chatbot Claude 2.0 in July and is today valued at $25 billion.

But not all AI founders have had a straightforward path to fundraising. Stability AI raised funding at a $1 billion valuation in September 2022 for its popular text-to-image AI model Stable Diffusion but has struggled to raise since.

Its CEO Emad Mostaque spun misleading claims about his own credentials and the company’s strategic partnerships to investors, a Forbes investigation found in June. In December, a Stanford study found that the dataset used to train Stable Diffusion contains illegal child sexual abuse material. The AI gold rush minted several other unicorns, like Adept, which is building AI assistants that can browse the internet and run software programs for you, and Character AI, which is used by 20 million people to create and chat with AI chatbot characters like Taylor Swift and Elon Musk.

Enterprise-focused generative AI startups such as Typeface, Writer and Jasper, which help companies automate tasks like email writing and summarizing long documents, have also seen an influx of funding. But amid the race to build and launch AI tools, Google found itself flat-footed and playing catch-up. The tech giant launched its conversational AI chatbot Bard and its own AI model Gemini at the end of the year.

In the past year, AI has penetrated virtually every facet of life. Teachers worried that students would use ChatGPT to cheat on assignments, and the tool was banned in some of the largest school districts in the U.S.

Doctors and hospitals began using generative AI tools not only for notetaking and grunt work but also to diagnose patients. While some political candidates started deploying AI in their campaigns to interact with potential voters, others used generative AI tools to create deepfakes of political opponents. AI-generated content flooded the internet, kindling concerns about the exploitation of widely available AI tools to create toxic content.

For instance, fake news stories produced using generative AI software went viral on TikTok and YouTube, and nonconsensual AI-generated porn proliferated on Reddit and Etsy. As low-quality AI-generated content populated the web, ChatGPT created havoc in the world of freelancing, as many feared they would lose their gigs to the buzzy new AI software, which can spin out content faster and cheaper than humans. Companies also used AI chatbots to screen, interview and recruit employees, raising flags about biases and risks baked into the technology.

Cybercriminals found ChatGPT useful for writing code for malware, and others used it as a social media surveillance tool. To combat some of these problems, tech giants like Microsoft and Google hired red teams to jailbreak their own AI models and make them safer. “There are still a lot of unsolved questions,” said Regina Barzilay, professor of electrical engineering and computer science at MIT CSAIL.

“We need to have tools that can discover what kind of issues and biases are in these datasets, and have meta AI technologies that can regulate AI and help us be in a much safer position than where we are today with AI.”

In 2023, leading AI startups like OpenAI, Stability AI and Anthropic were hit with a tide of copyright infringement lawsuits by artists, writers and coders, who claimed that these tools are built on vast datasets that used their copyrighted content without consent or pay. Legal expert Edward Klaris predicts these class action lawsuits will create room for new, nuanced rules around fair use of AI from the U.S. Copyright Office in 2024.

“In the legal world there’s a huge number of AI transactions that are going on. Some people are upset that their work was scraped to create training data, and so they want to be able to license their content to the AI companies and get paid for the use of their stuff,” said Klaris, CEO and managing partner at IP rights law firm KlarisIP.

After the European Union moved to regulate the technology through its EU AI Act, the Biden administration issued an executive order of its own, requiring startups developing large AI models that could pose national security risks to disclose them to the government. While tech companies largely supported the executive order, startups were concerned that it could stifle the pace of innovation.

“If you’re looking at the executive order, it formulated principles, which is good to articulate, but it doesn’t really translate to how do we take these principles and translate them into some technology or guardrail that helps us to ensure that the tool that we’re using is really safe,” Barzilay said.

2023 also saw a fracturing among AI leaders, who are divided over whether AI technology should be developed openly or behind closed doors by powerful companies like Google, OpenAI and Anthropic. Some have spoken about the safety issues associated with open sourcing AI models, since anyone could ostensibly misuse them.

Others, like Meta AI chief scientist Yann LeCun, who oversaw the development of Meta’s open source model Llama 2, are proponents of stress testing open source AI in an open and transparent way. “Open source large language models will reach the level of closed source large language models in 2024,” Hugging Face CEO Clement Delangue said in a press briefing.

An internal divide became public in late November, when OpenAI CEO Sam Altman was ousted from the company by its board of directors, which said he had not been “candid” in his representations to the board.

A few days later, he was reinstated as CEO after employees threatened to leave the company if Altman did not return. The company also brought new directors onto its board, including Bret Taylor and Larry Summers.

The key questions that remain to be answered in 2024 are around the economics of AI, Delangue said, specifically how these AI startups will manage to achieve profit margins and make their investors money.

Reliant on GPUs from semiconductor giants like Nvidia and AMD, most AI models are increasingly cost-intensive and have a high carbon footprint, as they need to be trained on vast amounts of data. “In 2024, most companies will realize that smaller, cheaper, more specialized models make more sense for 99% of AI use cases,” Delangue said.

This article was first published on forbes.com and all figures are in USD.



