The Anthropic logo appears on a smartphone in this photographic illustration in Brussels, …
There are many conversations about what is happening at Anthropic, where Claude 3.7 Sonnet is drawing all kinds of new user activity and represents the latest iteration from this leading technology company. Last week I wrote about Ethan Mollick’s response to the model on his blog, and other coverage.
But there is also a source of direct information: Dario Amodei went on Hard Fork this week to talk with hosts Kevin Roose and Casey Newton about where things stand, and about the context of new developments with Claude.
“These reasoning models have been out there for several months, and we wanted to make one of our own, but we wanted to focus on being a little different,” Amodei explained.
“In particular, many of the other reasoning models have been trained mainly on math and competitive coding, which are objective tasks where you can measure performance. We trained 3.7 to focus more on real-world tasks.”
He also addressed the development of Claude 3.7 Sonnet as a hybrid model.
“The prevailing setup was that there is a regular model and then there is a reasoning model,” he said. “It would be as if a person had two brains, and you talk to brain number one to ask a quick question like your name, and you talk to brain number two if you are trying to prove a mathematical theorem.”
Amodei also talked about future models being able to check their own conclusions or put limits on their own thinking. Web search, he said, is coming soon. When pressed for specifics on timing, there was a comical moment on the podcast where Amodei mentioned a “small number of time units,” to the amusement of the hosts.
The future risks of AI
Amodei spoke a little about the safety and context of new technologies.
“I feel like there is this constant confusion of current dangers with future risks,” he said. “It’s not that there are no risks present (but) I am more concerned about the dangers we will see as the models become more powerful.”
When he testified before the Senate, he said, he was thinking of things such as biological or chemical warfare and the dangers of misuse.
Evaluations, he said, can help, since they test models for weaknesses.
“It means there is a new danger in the world,” he said about the arrival of AI. “A new threat vector exists in the world.”
Paris, France – May 22: Co-founder and CEO of Anthropic, Dario Amodei, an artificial intelligence …
Calling for additional safety measures and careful deployment, Amodei noted that the stakes are high.
Assistant
In terms of services, Amodei spoke about the personal and business use of these new systems.
“The best helper for me may not be the best helper for any other person,” he said. “I think one area where models will be good is if you are trying to use this as a replacement for Google search, or for quick information retrieval.”
DeepSeek
Turning to current events, Amodei addressed the recent announcement from DeepSeek, which seemed to so worry American companies.
“I worry less about DeepSeek from a commercial competition perspective,” he said. “I worry more about them from a national competition and national security perspective.”
He does not want autocracies to have an advantage in AI over representative democracies.
“I want to make sure that liberal democracies have enough lever and sufficient advantage in technology that they can prevent certain abuses from occurring, and prevent opponents from putting us in a bad position with respect to the rest of the world,” he said.
Opportunities with AI
“I am a fan of seizing opportunities,” Amodei said, citing Machines of Loving Grace, his prominent essay on the new technology. “For someone who worries about the dangers, I feel like I have a better vision of the benefits than many people who spend all their time talking about the benefits. In the background, as I said, as the models have become more powerful, the amazing and wonderful things we can do with them have grown, but the dangers have also increased.”
He pointed to a zeitgeist in which AI looms large.
“You talk to people living in San Francisco, and there is this deep feeling in the bones that within a year or two, we will simply live in a world that has been transformed by AI,” he said.
There is much more in the podcast; these are some of the points I felt were most important for our analysis of these systems, as 2025 continues to move fast. It is always worthwhile to check in with the business leaders closest to the process to get a sense of what is coming next.