CrowdStrike CEO George Kurtz said in a LinkedIn post Thursday that 97% of the Windows sensors that went offline during last week's global IT outage are now back up and running.
Why it matters: Roughly a week after CrowdStrike shipped out a faulty update that bricked millions of Windows computers, the worst appears to be over.
Two AI systems from Google DeepMind together solved four of the six problems in this year's International Mathematical Olympiad — on par with silver medalists in the annual world math championship for high school students.
Why it matters: The ability to solve a range of math problems in step-by-step proofs is considered a "grand challenge" in machine learning and has been beyond the reach of current state-of-the-art AI systems.
OpenAI announced Thursday it would start testing a prototype of a new search-based AI tool called SearchGPT, kicking off the next phase of the search industry's rapid AI remodel.
Why it matters: Tech leaders believe that traditional search engines will give way to ChatGPT-style conversational interfaces as the dominant mode of information gathering online.
Leading AI company Anthropic does not support California's AI regulation bill, SB 1047, but is suggesting changes that could lead to a shift, per a letter shared exclusively with Axios Thursday.
State of play: SB 1047 from California State Sen. Scott Wiener passed the California Senate in May and could get a vote in the California Assembly next month.
What they're saying: "Ensuring the safe development of AI technology is a worthy goal, but the current version of SB 1047 has substantial drawbacks that harm its safety aspects and could blunt America's competitive edge in AI development," an Anthropic spokesperson told Axios.
"[Our letter proposes] to refocus the bill on frontier AI safety and away from approaches that aren't adaptable enough for a rapidly evolving technology."
What's inside: Anthropic's letter to Buffy Wicks, chair of the California Assembly Appropriations Committee, suggests the bill shift to "outcome-based deterrence" from "pre-harm enforcement," letting AI companies develop and deploy safety protocols and be held liable for any catastrophes they cause.
Amid a long list of proposed amendments, the company also suggests more narrowly tailoring regulations to focus on frontier AI safety to avoid some duplication of existing federal requirements.
Anthropic also wants to scrap the bill's creation of a new state agency to regulate frontier models, giving that authority to the existing Government Operations Agency instead.
The bottom line: "We are optimistic that if the proposed amendments are adopted it will catalyze an era of innovation and experimentation in risk reduction practices, where companies have skin in the game and are thus incentivized to adopt the practices most likely to actually prevent catastrophic risks," Anthropic state and local policy lead Hank Dempsey wrote in the letter.
Google expects its Gemini AI assistant to be "maximally helpful" while avoiding responses that "could cause real world harm or offense," the company says in policy documents shared first with Axios and being released publicly Thursday.
Why it matters: The explanations follow a string of well-publicized incidents in which the company's AI summaries advised people to eat rocks, put glue on pizza and take other bizarre actions in response to what Google says were queries that were either very rare or malicious.
Tesla CEO Elon Musk says the automaker's self-driving technology will be good enough by next year to launch a long-awaited robotaxi service, but there are serious questions about the technology's readiness and how such a business would operate.
Why it matters: With Tesla's electric vehicle profits shrinking, Musk is now betting the company on an autonomous future involving both vehicles and humanoid robots.
Sam Altman, OpenAI co-founder and CEO, is calling for a "U.S.-led global coalition" to ensure a democratic vision for AI prevails over an authoritarian one — and says both Washington and state governments must act with more urgency.
"The future continues to come at us fast," Altman told Axios in a phone interview Wednesday. "I'm grateful that some stuff is happening [at the White House and on Capitol Hill]. But I don't think we're seeing the level of seriousness that this warrants."
Why it matters: In the face of China's determination to become a dominant AI player, Altman wants to goad governments at all levels into a more strategic, urgent AI approach.
At a Paris building that helped inspire the first Air sneakers 37 years ago, Nike is using the Olympics here to show a future where generative AI is helping bring athletes the shoe of their dreams.
Why it matters: Much of the discussion around AI and design focuses on replacing human labor, while Nike's effort demonstrates that the technology can also be used to explore and expand creative possibilities.
One in 4 Fortune 500 companies experienced a service disruption due to Friday's global IT outages and likely lost a combined $5.4 billion, according to a new report from cyber insurer Parametrix.
Why it matters: The report provides some of the first estimates of how damaging the recent CrowdStrike outage was to the global economy.
CrowdStrike is adding more steps to its internal review process for software updates after shipping a faulty content update last Friday that crashed millions of Windows devices worldwide, the company said in a blog post Wednesday.
Why it matters: Microsoft estimates that 8.5 million Windows devices went down on Friday after CrowdStrike pushed a faulty software update to its popular endpoint detection tools.
A bipartisan group of lawmakers on Wednesday sent Meta CEO Mark Zuckerberg a letter asking the company to delay shutting down CrowdTangle for six months.
Why it matters: The lawmakers argue Meta has a responsibility to be transparent about the content being shared on its platform ahead of the 2024 election.
Heavy security restrictions ahead of the Olympics Opening Ceremony meant that only those with a pass or official accreditation were allowed along the Seine. I had a lovely walk along the river on Tuesday, with only the occasional worker putting on the finishing touches, along with police on land and in boats.
In a quiet industrial park in Charlotte, a small company is building drones and robots to revolutionize the American blue-collar workforce. And soon, those robots will be artificially intelligent.
Why it matters: Lucid Bots aims to become a leader in embedding AI into physical devices for a real purpose — creating robots that can take on dangerous work and freeing humans to pursue other meaningful tasks.
A secretive drone designed to reap intelligence from faraway targets flew for at least three days straight in recent testing, a feat its maker shared first with Axios.
Why it matters: The marathon flight — possibly twice as long as other drones' — could shake up how the U.S. military approaches overhead surveillance, for which there is an insatiable appetite.
Meta, the parent company of Facebook and Instagram, has removed roughly 63,000 accounts and is banning all future content from a notorious online cybercriminal ring that's targeted U.S. adults in financial sextortion scams.
Why it matters: This is the toughest enforcement action a social media company has taken against financial sextortion to date.
A growing number of women are seeking connection and comfort in relationships with chatbots — and finding the bots' approximation of empathy more dependable than many human partners' support.
Why it matters: These female AI users, flipping the stereotype of under-socialized men chatting with AI girlfriends in their parents' basement, are challenging assumptions about the nature of human intimacy.