Hi, folks, welcome to TechCrunch's regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.
The AI news cycle hasn't slowed down much this holiday season. Between OpenAI's 12 days of "shipmas" and DeepSeek's major model release for Christmas, blink and you'll miss some new development.
And it's not slowing down now. On Sunday, OpenAI CEO Sam Altman said in a post on his personal blog that he thinks OpenAI knows how to build artificial general intelligence (AGI), and that the company has begun turning its aim to superintelligence.
AGI is a nebulous term, but OpenAI has its own definition: "highly autonomous systems that outperform humans at most economically valuable work." As for superintelligence, which Altman sees as a step beyond AGI, he said in the blog post that it could "massively accelerate" innovation beyond what humans are capable of achieving on their own.
"[OpenAI continues] believing that iteratively putting great tools in the hands of people leads to great, broadly distributed outcomes," Altman wrote.
Altman — like Dario Amodei, CEO of OpenAI rival Anthropic — is optimistic that AGI and superintelligence will usher in wealth and prosperity for all. But assuming AGI and superintelligence are even achievable without new technical breakthroughs, how can we be sure they'll benefit everyone?
One recent data point is a study flagged by Wharton professor Ethan Mollick on X earlier this month. Researchers from the National University of Singapore, the University of Rochester, and Tsinghua University investigated the impact of OpenAI's AI-powered chatbot, ChatGPT, on freelancers across various labor markets.
The study identified an economic "AI inflection point" for different kinds of jobs. Before the inflection point, AI boosted freelancers' earnings — web developers, for example, saw their incomes rise by around 65%. But after the inflection point, AI began replacing workers: freelance translators saw an estimated 30% drop in income.
The study suggests that once AI begins replacing a job, the trend doesn't reverse. And that should concern us all if more capable AI is indeed on the horizon.
Altman wrote in his post that he is "pretty confident" that "everyone" will see the importance of "broad benefit and empowerment" in the age of AGI — and superintelligence. But what if he's wrong? What if AGI and superintelligence arrive, and only corporations have anything to show for it?
The result won't be a better world, but more of the same inequality. And if that's AI's legacy, it will be a profound disappointment.
News
Silicon Valley stifles AI doom talk: Technologists have been ringing alarm bells for years about AI's potential for catastrophic harm. But in 2024, those warnings were drowned out.
OpenAI is losing money: OpenAI CEO Sam Altman says the company is currently losing money on its $200-per-month ChatGPT Pro plan because people are using it more than the company expected.
Record generative AI funding: Investment in generative AI, which encompasses a range of AI-powered apps, tools, and services for generating text, images, video, speech, music, and more, reached new heights last year.
Microsoft increases data center spending: Microsoft has earmarked $80 billion in fiscal 2025 to build data centers designed to handle AI workloads.
Grok 3 MIA: xAI's next-generation AI model, Grok 3, didn't arrive on time, adding to a trend of flagship models missing their promised launch windows.
Research paper of the week
AI can make a lot of mistakes. But it can also supercharge experts in their work.
At least, that's the finding of a team of researchers hailing from the University of Chicago and MIT. In a new study, they suggest that investors who use OpenAI's GPT-4o to summarize earnings calls realize higher returns than those who don't.
In the study, the researchers recruited investors and had GPT-4o generate AI summaries matched to each investor's level of expertise. Sophisticated investors got more technical AI-generated notes, while novices got simpler ones.
More experienced investors saw a 9.6% improvement in their one-year returns after using GPT-4o, while less experienced investors saw a 1.7% boost. That's not too shabby for human-AI collaboration, I'd say.
Model of the week
Prime Intellect, a startup building infrastructure for decentralized AI systems, has released an AI model that it claims can help detect pathogens.
The model, called METAGENE-1, was trained on a dataset of 1.5 trillion DNA and RNA base pairs sequenced from human wastewater samples. Developed in partnership with the University of Southern California and SecureBio's Nucleic Acid Observatory, METAGENE-1 can be used for a range of metagenomic applications, Prime Intellect said, such as studying organisms.
"METAGENE-1 achieves state-of-the-art performance across multiple genomic benchmarks and new evaluations focused on human-pathogen detection," Prime Intellect wrote in a series of posts on X. "After pre-training, this model is designed to support tasks in biosurveillance, epidemic monitoring, and pathogen detection."
Grab bag
In response to legal action from major music publishers, Anthropic has agreed to maintain guardrails preventing its AI-powered chatbot, Claude, from sharing copyrighted song lyrics.
The publishers, including Universal Music Group, Concord Music Group, and ABKCO, sued Anthropic in 2023, accusing the startup of copyright infringement for training its AI systems on lyrics from at least 500 songs. The lawsuit hasn't been resolved, but for the time being, Anthropic has agreed to stop Claude from supplying copyrighted song lyrics and from composing new lyrics based on copyrighted material.
"We look forward to showing that, consistent with existing copyright law, using potentially copyrighted material in the training of generative AI models is a quintessential fair use," Anthropic said in a statement.