econtwitter.net is one of the many independent Mastodon servers you can use to participate in the fediverse.
A Mastodon instance for Economists.


#openai

36 posts · 35 participants · 2 posts today

Short seller Andrew Left’s Citron Research argues Palantir Technologies remains significantly overvalued even if shares drop to $40, citing OpenAI’s valuation as a benchmark and highlighting Palantir’s recent stock decline despite strong earnings.
#YonhapInfomax #PalantirTechnologies #OpenAI #CitronResearch #SharePrice #ValuationMultiple #Economics #FinancialMarkets #Banking #Securities #Bonds #StockMarket
en.infomaxai.com/news/articleV

Yonhap Infomax · Short Seller Says Palantir Remains Overvalued Even Compared to OpenAI

#OpenAI CFO Sarah Friar stated the company hit its first $1 #billion #revenue month in July and is expected to triple revenue to $12.7 billion this year. Despite this growth, OpenAI faces ongoing pressure from #AIcompute demands, necessitating partnerships with Oracle, CoreWeave, and Microsoft. cnbc.com/2025/08/20/openai-com #tech #media #news

CNBC · OpenAI logged its first $1 billion month but computing power demand is 'voracious,' CFO says. OpenAI has ballooned in size since the launch of ChatGPT in late 2022 and is expected to triple in revenue to $12.7 billion this year.

US tech stocks hit by concerns over future of AI boom

> Warning from #OpenAI’s #SamAltman and #MITpaper puncture Wall Street’s enthusiasm

> US #tech #stocks sold off on Tuesday as warnings that the hype surrounding #artificialintelligence could be overdone hit some of the year’s best-performing shares.

ft.com/content/33914f25-093c-4
#Nvidia #Palantir #Arm #chips #semiconductors

Financial Times · US tech stocks hit by concerns over future of AI boom · By Aiden Reiter

"What if generative AI isn’t God in the machine or vaporware? What if it’s just good enough, useful to many without being revolutionary? Right now, the models don’t think—they predict and arrange tokens of language to provide plausible responses to queries. There is little compelling evidence that they will evolve without some kind of quantum research leap. What if they never stop hallucinating and never develop the kind of creative ingenuity that powers actual human intelligence?

The models being good enough doesn’t mean that the industry collapses overnight or that the technology is useless (though it could). The technology may still do an excellent job of making our educational system irrelevant, leaving a generation reliant on getting answers from a chatbot instead of thinking for themselves, without the promised advantage of a sentient bot that invents cancer cures.

Good enough has been keeping me up at night. Because good enough would likely mean that not enough people recognize what’s really being built—and what’s being sacrificed—until it’s too late. What if the real doomer scenario is that we pollute the internet and the planet, reorient our economy and leverage ourselves, outsource big chunks of our minds, realign our geopolitics and culture, and fight endlessly over a technology that never comes close to delivering on its grandest promises? What if we spend so much time waiting and arguing that we fail to marshal our energy toward addressing the problems that exist here and now? That would be a tragedy—the product of a mass delusion. What scares me the most about this scenario is that it’s the only one that doesn’t sound all that insane."

theatlantic.com/technology/arc

The Atlantic · AI Is a Mass-Delusion Event · By Charlie Warzel

Someone who spends a given amount of time as a serial killer will, as a matter of sheer quantity, not only kill fewer people but spend less time killing than someone who spends exactly the same amount of time employed as an assassin. That's an extreme example, and deliberately so; feel free to dial it back from there. The point is that people who do things generally regarded as atrocious for personal reasons do so, on average, quite infrequently compared to those who do functionally identical things only because they want the money. Curiously, and however ironically, the latter also have a higher tendency to get away with it.

So, in effect:
Is the most prolific hacker of all time some for-profit (first red flag) company whose operation is entirely legal, which distributes ostensibly legitimate software for free even though its development is inherently expensive (second red flag), and/or exposes ostensibly legitimate services to the public for free even though their uptime is inherently expensive (third red flag), none of which technically qualifies as a violation of the Computer Fraud and Abuse Act, by definition (fourth red flag)?

Is that a rhetorical question (and here's your fifth, pun intended)?

#Google #Meta #ByteDance #OpenAI #Microsoft #Apple #Amazon

Replied in thread

@festal This is possibly the most insightful summary of where we're at that I've read in a mainstream publication:

"What if generative AI isn’t God in the machine or vaporware? What if it’s just good enough, useful to many without being revolutionary? Right now, the models don’t think—they predict and arrange tokens of language to provide plausible responses to queries. There is little compelling evidence that they will evolve without some kind of quantum research leap. What if they never stop hallucinating and never develop the kind of creative ingenuity that powers actual human intelligence?

"The models being good enough doesn’t mean that the industry collapses overnight or that the technology is useless (though it could). The technology may still do an excellent job of making our educational system irrelevant, leaving a generation reliant on getting answers from a chatbot instead of thinking for themselves, without the promised advantage of a sentient bot that invents cancer cures.
...
"Good enough has been keeping me up at night. Because good enough would likely mean that not enough people recognize what’s really being built—and what’s being sacrificed—until it’s too late. What if the real doomer scenario is that we pollute the internet and the planet, reorient our economy and leverage ourselves, outsource big chunks of our minds, realign our geopolitics and culture, and fight endlessly over a technology that never comes close to delivering on its grandest promises? What if we spend so much time waiting and arguing that we fail to marshal our energy toward addressing the problems that exist here and now? That would be a tragedy—the product of a mass delusion. What scares me the most about this scenario is that it’s the only one that doesn’t sound all that insane."

https://www.theatlantic.com/technology/archive/2025/08/ai-mass-delusion-event/683909/

#AI #LLM #artificialintelligence #largelanguagemodel #ChatGPT #OpenAI

The Atlantic · AI Is a Mass-Delusion Event · By Charlie Warzel

I watched this video on #VibeCoding: youtube.com/watch?v=iLCDSY2XX7E
And at some point I was like, hey, maybe it could actually be useful for me to finally do frontend stuff, which I've never gotten the hang of. As long as people don't use it for critical stuff, like OS programming or...
And then at 4:33 my jaw dropped and I hope I'm misinterpreting this: Did they vibe code a health app for diabetes where the decisions come from the #OpenAI API? Which is fed by random internet data?

"Stargate Project": Foxconn und Softbank bauen zusammen KI-Server

In a former electric-car factory in the US state of Ohio, Foxconn and Softbank will jointly manufacture data-center equipment for the Stargate project.

heise.de/news/Stargate-Project

heise online · "Stargate Project": Foxconn and Softbank to jointly build AI servers · By Andreas Knobloch

From 0 to GPT-5 in 10 years: A quick OpenAI timeline for the record:

2015: Founded as a nonprofit to build "safe AGI".
2016: OpenAI Gym for reinforcement learning.
2018: OpenAI Five beats humans at Dota 2.
2019: $1B Microsoft investment.
2020: GPT-3 (175B params).
2021: DALL·E 1.
2022: DALL·E 2 & ChatGPT go viral.
2023: GPT-4.
2024: GPT-4o unifies text, image, and audio.
2025: GPT-5.

Is the new #ChatGPT model really an expert in everything? According to #OpenAI, GPT-5 is a major update to the #KI (AI), but experts doubt the official tests. Christian J. Meier on the hard-to-measure progress of AI. riffreporter.de/de/technik/ki-

Someone has written formulas on their hand to pass a math test: a symbolic image for the fact that AI has, during its training, already partly seen the tests used to measure its intelligence.
RiffReporter · Artificial intelligence at a dead end: The hard-to-measure progress of GPT-5 · By Christian J. Meier

"[F]ar from marking a break with the widely hated platform giants that precede it, the A.I. of this most recent hype cycle is a “normal technology” in the strong sense that its development as both a product and a business is more a story of continuity than of change. “Instead of measuring success by time spent or clicks,” a recent OpenAI announcement reads, “we care more about whether you leave the product having done what you came for”--a pointed rebuke of the Meta, Inc. business model. But as Kelly Hayes has written recently, “fostering dependence” is the core underlying practice of both OpenAI and Meta, regardless of whether the ultimate aim is to increase “time spent” for the purpose of selling captured and surveilled users to advertisers, or to increase emotional-intellectual enervation for the purpose of selling sexy know-it-all chat program subscriptions to the lonely, vulnerable, and exploitable:"

maxread.substack.com/p/ai-as-n

Read Max · A.I. as normal technology (derogatory) · By Max Read

"Some AI researchers say that the overwhelming focus on scaling large language models and transformers — the architecture underpinning the technology which was created by Google in 2016 — has itself had a limiting effect, coming at the expense of other approaches.

“We are entering a phase of diminishing returns with pure LLMs trained with text,” says Yann LeCun, Meta’s chief AI scientist, who is considered one of the “godfathers” of modern AI. “But we are definitely not hitting a ceiling with deep-learning-based AI systems trained to understand the real world through video and other modalities.”

These so-called world models are trained on elements of the physical world beyond language, and are able to plan, reason and have persistent memory. The new architecture could yet drive forward progress in self-driving cars, robotics or even sophisticated AI assistants.

“There are huge areas for improvement . . . but we need new strategies to get [there],” says Joelle Pineau, the former Meta AI research lead, now chief AI officer at start-up Cohere. “Simply continuing to add compute and targeting theoretical AGI won’t be enough.”"

ft.com/content/d01290c9-cc92-4

Financial Times · Is AI hitting a wall? · By Tim Bradshaw