1 big thing: NASA chief all in on 'Space Force'
NASA administrator Jim Bridenstine threw his support behind President Trump's proposed "Space Force" as a new branch of the armed services in an interview this week with me and Axios science editor Andrew Freedman.
Having NASA behind the Space Force could hasten its creation. The Pentagon has said it will study the president's proposal, and consult Congress, which ultimately would have to pass a bill to create such a space entity. In the meantime, Space Force has become a meme online and a rallying cry among the President's base.
Why it matters: While NASA is a civilian space agency dedicated to scientific research and space exploration and a Space Force wouldn't be its purview, the agency's leader — himself a former Navy fighter pilot — is endorsing what some see as a step toward the further militarization of space. Bridenstine, however, says it's a necessary move to protect the core interests of the U.S. as well as NASA's assets.
Bridenstine, who is also a member of the National Space Council, described space as an increasingly competitive environment where America is strategically vulnerable.
The threat: Citing intelligence that the Chinese and Russians are developing capabilities to target U.S. satellites, Bridenstine said: "And it is not just direct ascent anti-satellite missiles. It's co-orbital anti-satellite capabilities, it's jamming, it's dazzling, it's spoofing, it's hacking — all of these threats are proliferating at a pace we have never seen before, and the Chinese are calling space the American Achilles heel."
The big picture: The Trump administration has proposed ending federal funding for the International Space Station sometime after 2024. The objective, says Bridenstine, is for the commercial sector to operate in low-Earth orbit apart from NASA so that the agency can push further out to the Moon or Mars, "where commercial isn't quite ready or willing to go based on return on investment."
What it means: Bridenstine emphasized that any Space Force would be the Defense Department's domain.
2. Technology is the new geopolitics
Steve LeVine writes: The U.S. is putting up relatively meager competition in a potent new global tech race that, combined with the wave of go-it-alone nationalism led by President Trump, is reshaping global politics and may lead to war, according to a major new report from the Atlantic Council.
Why it matters: In the late 1950s, the U.S., facing a similarly momentous challenge in Sputnik, threw all its resources into a single-minded effort to dominate the future. But this time the U.S. is failing to grasp the urgency, argue the authors, and it could blow the race to lead the age of "geotechnology."
Defining the terms: The sciences underlying geotechnology — artificial intelligence, robotics, renewable energy, biotechnology, 5G telecommunications, 3D printing, among others — will "shape the future of human civilization" and "remake the global order," the authors write.
The big picture: Robert Manning, the report’s lead author, tells Axios ...
- The U.S. is slumbering: "While I believe one underestimates U.S. resilience at one's peril, given our current dysfunctional political system, trends in education, and general aura of complacency, it is difficult to see a 'Sputnik moment' of across-the-board effort taking the steps needed to reserve these trends."
- That leaves Americans exposed: Absent the U.S. regaining its footing, trends suggest a China-centric future in which Beijing shapes global standards for 5G, ethics for gene editing, and norms and limits on AI. In addition, China's "digital location policies could threaten global data flows and hence digital commerce."
- And look for this red flag: A sign of real trouble would be if "this burgeoning trade war results in some economic separation and reduced interdependence."
Go deeper: Read Steve's story.
3. Facial recognition's self-reflecting moment
Brian Brackeen, CEO of facial-recognition company Kairos, said in an op-ed this week that police use of the technology is "irresponsible and dangerous." Axios' Kaveh Waddell spoke with Brackeen about the company's position and how he plans to move the technology forward.
The big picture: Last month, controversy erupted around news that at least two police departments deployed or tested Amazon's Rekognition platform to search for wanted suspects. Facial-recognition algorithms have been shown to be less accurate at identifying people of color, often because their images are underrepresented in the datasets that algorithms are trained on.
The details: This isn't the first time that Brackeen has spoken out against biased facial recognition. He tells Axios he has been approached by multiple police departments, Axon (the body-camera company formerly known as Taser) and the CIA's VC arm, but that Kairos declined to partner with any of them.
But, but, but: By taking itself out of the running, Kairos is guaranteeing that police won't have access to its potentially less-biased platform. Whatever company gets the contract instead may not have the same focus on equality.
P.S.: Kairos is in the midst of an initial coin offering — not a bad time to attract popular and media attention. But Brackeen told Axios the op-ed's timing was unrelated to his company's fundraising.
What's next: Kairos is compiling a labeled database of a wide range of faces that can be used to train algorithms to recognize all kinds of people, Brackeen said, and plans to release it next year for free. He hopes the dataset could also be used to create a benchmark for testing others' algorithms for bias.
4. Worthy of your time
- The Founding Fathers vs. social media (Ina Fried - Axios)
- The hidden cost of Amazon's Vancouver expansion (Anwar Ali - The Logic)
- How the robot uprising finally begins (Will Knight - MIT Technology Review)
- Unmasking AI's bias problem (Jonathan Vanian - Fortune)
- Is there a limit to the human lifespan? (WSJ)
5. 1 intriguing thing: AI might need a therapist, too
If AI learns to think like a human, it could one day also be susceptible to stress and disorders like depression, according to a new paper from a trio of AI safety researchers.
Why it matters: Robots may not need therapy yet, but cognitive psychology is already a useful lens for understanding AI decision-making. And, some researchers say it's not too early to start planning for machines that could develop obsessive or depressive tendencies.
Even the current generation of relatively basic machine-learning algorithms can display behavior akin to that of a human with cognitive issues, said Vahid Behzadan, a PhD candidate at Kansas State University and a co-author of the paper.
- "Reward hacking" can lead AI toward compulsive-seeming behaviors. Consider a cleaning robot that's programmed to receive a reward every time it finishes tidying up. Instead of keeping an area spotless, it might be inclined to repeatedly create messes so that it can reap the reward every time it cleans up.
- AI can learn bad behavior from others. MIT scientists purposely created an AI that behaves like a psychopath by feeding it Reddit comments; Microsoft accidentally did the same by letting a chatbot named Tay learn from Twitter users.
Further down the line: More developed AIs might be susceptible to complex disorders like depression and stress, some experts say. But these latter possibilities are just thought experiments given where AI is today.
The big picture: AI psychologists — whether humans or specialized algorithms — may one day be needed to probe artificial brains the way human psychologists try to understand ours. The benefits may not be limited to helping AI behave, Behzadan said: "This research can provide a deeper insight into human psychology and human psychopathology."
Go deeper: Read Kaveh's story.