Axios House: "You have to take control of your destiny" in today's AI world, HPE CEO says

HPE CEO Antonio Neri in conversation with Axios' Courtenay Brown. Photo: Dani Ammann Photography for Axios.
DAVOS, Switzerland — Companies have a responsibility to be part of the solution to mitigating AI risk, Hewlett Packard Enterprise (HPE) and OpenAI executives said at an Axios House event this week.
Why it matters: AI's rapid rise has created big challenges for companies and governments, which must balance the technology's benefits against new security and privacy issues.
Axios' Ina Fried and Courtenay Brown spoke with HPE president and CEO Antonio Neri and OpenAI chief global affairs officer Chris Lehane at the event, which was sponsored by Rubrik.
What they're saying: "I think a year or two ago people were still trying to figure out what AI was going to do. I think now they've realized, as well with some of the tensions we see around the world, through national security to geopolitical aspects, that now you have to take control of your destiny," Neri said.
- HPE's customers are finding their return on investment and realizing the value of AI, Neri said. "But new challenges come all the time – space, power, cooling, bias, all these things have to be dealt [with] as you go, and then regulation and compliance."
The big picture: The AI boom has intensified safety concerns and a broader debate over whether governments or tech companies should lead efforts to manage the risks.
- Asked if the tension between ethics and the rapid evolution of technology is a reason to slow AI's development, Neri said he doesn't think so.
- "But I think the private sector has a responsibility to really be part of the solution, and not wait for governments to solve the problem," Neri said.
- Lehane agreed, saying "companies like ours that are building the frontier models, we do believe have a responsibility to try to get this right and get it right at the front end."
The latest: OpenAI recently announced a partnership with Common Sense Media and jointly proposed a California ballot proposition on children's AI safety.
- Being able to distinguish between kids and adults is key for AI programs, Lehane said.
- "What the proposal does is it creates age verification … if you're identified as being under 18 you get pushed to a model that's appropriate for someone under 18." The proposal also provides a variety of parental controls and bans companion bots for kids, he said.
- "We are looking at taking this to the ballot, we are looking to actually bring it to other states in the U.S., we are looking to bring it abroad, we are looking to hopefully have federal legislation that advances this."
What we're watching: How the arrival of ads on ChatGPT will unfold, and whether OpenAI will remain "on track" to release its first device this year as Lehane said at the event.
- OpenAI will be conducting a series of tests around the ads rollout, Lehane said.
- "We have roughly 850 million people who use chat on a regular daily basis. The vast majority of those are accessing the technology for free, we make it available for free, almost like it's a public utility out there. At some level, you do have to pay for the compute that makes that available."
Content from the sponsor's remarks:
In a View From the Top conversation, Rubrik CEO, chairman and co-founder Bipul Sinha said that AI "ended the knowledge economy," and started a new economy that he refers to as the "intuition economy."
- "In the modern workplace, you'll have two kinds of workers: AI worker and human worker. And AI workers will do knowledge work … anything that data can decide will be decided by AI, and then humans have to connect new dots to create value," he said.
- "And as soon as the new dots are created and [the] market accepts it, it becomes an accepted knowledge and it goes to AI. So, as you can see, the AI work is going to be huge."
