WarGPT
OpenAI CEO Sam Altman is facing backlash from the public and his own employees after the company agreed to allow the Defense Department to use its AI technology for classified surveillance and autonomous weapon applications.
The deal came after Anthropic, a competing AI firm, told the Pentagon that it would not agree to an updated contract that could enable its technology to be used for mass domestic surveillance and lethal autonomous drones. Anthropic’s Claude was previously the only AI system approved for classified work.
In announcing OpenAI’s agreement, Altman claimed that he had negotiated a new set of stronger safety guardrails with the Defense Department. But OpenAI, the maker of ChatGPT, appears to have signed onto the same deal that Anthropic had rejected on ethical grounds.
From Bloomberg columnist Dave Lee:
The stark difference between the two companies comes down to this: OpenAI is taking the Pentagon on good faith over its interpretation of what is legal and ethical when it comes to mass surveillance of Americans. Anthropic is not.
On the use of AI to autonomously kill people, OpenAI said it was satisfied that by not deploying its technology at the “edge” — such as in drones — its AI would not be responsible for a direct life-or-death judgment call. Anthropic disagrees.
This isn’t a question of Silicon Valley “woke” ideology, as has been suggested. According to a source familiar with the negotiations between the maker of the Claude AI model and the Pentagon, Anthropic’s leadership made it clear the company was open, you could even say eager, to develop AI that could handle autonomous weapons of war. The red line, the person said, was that the company’s internal tests suggested its models were not yet up to that task.
Altman told staff on Tuesday that OpenAI’s deal with the Pentagon was the right move, even though the company will have no control over “operational” uses of its technology.
“I think this was an example of a complex but the right decision with extremely difficult brand consequences and very negative PR for us in the short term,” he said. Among those consequences was Claude overtaking ChatGPT on the Apple App Store for the first time ever amid protests and a consumer boycott against OpenAI.
As for the mass surveillance piece, The Atlantic reported that the Pentagon wanted to use Anthropic’s technology “to analyze bulk data collected from Americans. That could include information such as the questions you ask your favorite chatbot, your Google search history, your GPS-tracked movements, and your credit-card transactions, all of which could be cross-referenced with other details about your life.”
Anthropic was named a supply-chain risk last Friday by Defense Secretary Pete Hegseth, and Donald Trump said he had ordered federal agencies to “immediately cease” using Claude. Still, The Washington Post reported that the U.S. military relied on a Palantir tool powered by Claude to generate automated target lists for its strikes on Iran. “In order to strike a blistering 1,000 targets in the first 24 hours of its attack on Iran, the U.S. military leveraged the most advanced artificial intelligence it’s ever used in warfare, a tool that could be difficult for the Pentagon to give up even as it severs ties with the company that created it,” The Post reported.
OpenAI, potentially along with Elon Musk’s xAI, appears to be the Pentagon’s solution for replacing Anthropic in similar use cases in the future.
AI Super PAC spends heavily to defeat critical lawmakers
Leading the Future, a PAC backed by OpenAI co-founder Greg Brockman and Palantir co-founder Joe Lonsdale — both major Trump donors — has raised $125 million over the past year, according to The Financial Times. The PAC, which claims that state AI regulations would “enable China to gain global AI superiority,” has sought to take down candidates supportive of AI regulations and elect those deferential to the industry.
Among its targets is New York State Assembly Member Alex Bores, a former Palantir employee who said he quit the firm over its work for Immigration and Customs Enforcement. Bores, who helped pass New York’s AI safety bill last year, is currently running for a U.S. congressional seat in Manhattan, and said that Leading the Future’s spending against him has only raised his profile.
As for its slate of pro-industry candidates, Leading the Future is spending to help elect Florida Republican gubernatorial candidate Byron Donalds and Texas Republican congressional candidate Chris Gober, the former chief lawyer for Elon Musk’s Super PAC.
Musk, meanwhile, is funding the nonprofit Building America’s Future, which has aired ads attacking state-level AI rules. And through Meta, Mark Zuckerberg has also allocated tens of millions of dollars toward tilting elections in favor of candidates opposed to AI regulations.
Nudity and sex recorded by Meta smart glasses, contractors say
Contractors that Meta has hired to analyze footage recorded by users wearing its smart glasses say they have seen “naked bodies,” sexual videos, and sensitive financial information, according to a report from Swedish newspapers Svenska Dagbladet and Göteborgs-Posten. “In some videos you can see someone going to the toilet, or getting undressed,” one contractor said. “I don’t think they know, because if they knew they wouldn’t be recording.”
Others said they had reviewed sex tapes and other intimate footage recorded using Meta’s AI smart glasses. “I saw a video where a man puts the glasses on the bedside table and leaves the room,” another contractor said. “Shortly afterwards his wife comes in and changes her clothes.”
Meta has sold more than nine million pairs of the smart glasses, which feature a camera, an array of microphones, and a voice-activated AI assistant. The company uses contractors in Kenya, India, and Colombia to review and annotate recordings made by customers who wear the glasses. The labeled data is then used to train Meta’s AI models.
“You understand that it is someone’s private life you are looking at, but at the same time you are just expected to carry out the work,” one Meta contractor said. “You are not supposed to question it. If you start asking questions, you are gone.”
A Meta spokesperson told Svenska Dagbladet and Göteborgs-Posten that the company processes “live AI” media in accordance with Meta’s “AI Terms of Service and Privacy Policy.” The company’s terms of use include a line stating that Meta reserves the right to “review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review can be automated or manual (human).”
In response to the outlets’ reporting, the U.K. data watchdog issued a statement expressing its concern: “Service providers must clearly explain what data is collected and how it is used… We will be writing to Meta to request information on how it is meeting its obligations under U.K. data protection law.”
Meta was also hit with a lawsuit from two U.S. plaintiffs accusing the company of violating privacy and consumer protection laws related to its smart glasses.
This Week in Zuck:
An internal chat from 2020 showed that one Meta employee expressed grave concerns about the handling of “IIC T1” incidents, an acronym that refers to dangerous interactions between children and potential predators on Facebook and Instagram. “Even though we know that there is IIC T1 going on (more than 50% of which is sextortion which can lead to suicide) we haven’t done anything. we had a broken escalation path and no measurements,” the employee wrote. “God knows what happened to those kids.” (The Atlantic)
During the landmark social media addiction trial underway in Los Angeles, Meta sought to prohibit questions regarding Mark Zuckerberg’s $231 billion fortune. “Meta cannot coherently argue that the magnitude of those holdings — the very figure that gives his financial interest its probative force — is somehow off limits,” the plaintiff’s attorney wrote in response. A spokesperson for Meta argued that Zuckerberg’s wealth “is not relevant to this case.” The judge ultimately struck a middle ground, restricting but not prohibiting questions about Zuckerberg’s wealth. (New York Post)
KGM, the 20-year-old plaintiff in the landmark social media addiction trial against Meta and YouTube, testified in federal court in Los Angeles last week. She said she began compulsively scrolling Instagram at age nine and partially attributed the self-harm she engaged in to her social media use. “Anytime I would try to set limits for myself, I couldn’t,” KGM said of her social media addiction. Attorneys for Meta have argued that KGM’s abusive mother was the cause of her mental health struggles. (Associated Press)
On Monday, Zuckerberg closed the deal to purchase a $170 million mansion on Indian Creek Island in Miami, Florida. (Wall Street Journal)
Musk’s tangled web of Texas companies
The New York Times has new reporting out on 90 Texas companies that Elon Musk created since moving to the state in 2020. At the time, he claimed that he would be “selling almost all physical possessions. Will own no house.”
More than 50 of his at least 90 companies there are subsidiaries or other entities affiliated with his business empire, such as the rocket company SpaceX and the electric vehicle maker Tesla, as well as his nonprofit Musk Foundation.
But The Times identified at least 37 companies that appeared to be largely for Mr. Musk’s personal use. Among them was one that owns two multimillion-dollar condominiums totaling more than 7,000 square feet in the Austin Proper Hotel, with sweeping views of downtown. Other companies managed planes that Mr. Musk uses for private travel and a portfolio of more than 1,000 acres of land, which when combined is bigger than Central Park in New York. The lines between Mr. Musk’s business and personal interests are often blurry, and some of the companies most likely served both purposes.
The Times’s examination also offers a window into how Mr. Musk used private companies to support Donald J. Trump during the 2024 election. Tapping these companies to cover the expenses of a super PAC is highly unusual, campaign finance experts said, and ended up obscuring how money was being spent because they are not subject to the disclosure requirements of super PACs.
The vehicle that Mr. Musk frequently turned to is one that many of the ultrarich use: limited liability companies, which are designed to shield owners from legal and financial risks, as well as public scrutiny.
This Week in Musk:
On Wednesday, Musk took the stand in San Francisco federal court during a lawsuit from investors who have accused him of manipulating Twitter’s stock price during his acquisition of the platform in 2022. At the center of the case against Musk is a tweet he shared at the time claiming the deal was “temporarily on hold,” which caused Twitter’s share price to plummet. During his testimony, Musk tried to play dumb, saying the tweet “may not have been my wisest” and adding, “If this was a trial about whether I made stupid tweets, I would say I’m guilty.” He also insisted the tweet was taken out of context and that he did not intend to tank the share price. “People tend to read too much into things that I do,” he said. At one point, Musk complained that he had not had time to prepare for the trial due to his “insane workload, 100 hours a week.” Musk’s tweets have led to multiple lawsuits against him in the past. (Bloomberg)
Even as the Pentagon has approved the use of xAI’s Grok model for classified work, officials at several agencies have warned that Grok has failed to meet federal safety standards. “[Grok-4] does not meet the safety and alignment expectations required,” said a General Services Administration report from January, which added that the model “would pose elevated and difficult-to-manage” safety risks without extensive human oversight. (Wall Street Journal)
In the European Union, Stellantis, Toyota, and Subaru are not included in Tesla’s carbon-credit pool this year, which could lead to significant financial losses for the Musk-led automaker. The E.U. previously threatened large fines against traditional automakers — i.e., those that manufacture combustion engines — unless they paid to pool their carbon liabilities with electric-vehicle makers. That arrangement essentially allowed Tesla to collect billions of dollars in free money from its competitors. However, last year, the E.U. opted to relax its emissions compliance rules for automakers. (Reuters)
On X, Musk advised his 236 million followers to use his consistently erroneous AI chatbot to work on their tax filings. “Grok can help with your taxes,” he wrote. (X)
Oligarch Roundup
Sen. Bernie Sanders and Rep. Ro Khanna propose 5% annual wealth tax on billionaires. The wealth tax would generate an estimated $4.4 trillion over one decade, which would be used to fund social safety net programs, including a Medicare expansion, and a $60,000 minimum salary for teachers. Elon Musk, who would owe an estimated $42 billion under the plan, attacked the proposal by sharing a post claiming, “No income tax in history has ever stopped at just high earners.” (Washington Post)
Trump’s FCC chair signals that he won’t try to block Paramount’s acquisition of Warner. FCC Chairman Brendan Carr said that there had been “concerns raised in Washington about the concentration of power” related to Netflix’s deal for Warner Bros. Discovery (WBD). But in Carr’s opinion, those same concerns do not apply now that Paramount Skydance, the media conglomerate led by Trump ally David Ellison, is the buyer. “Obviously the level of market share and issue with a Paramount purchase is drastically different,” he added. (Financial Times)
CNN staff express concern over the network’s future under David Ellison. One CNN insider told The Financial Times that a Friday company town hall, held the day after David Ellison won his bid to acquire WBD, was a “very despondent, downbeat occasion.” Ellison could push CNN to the right, as he has already done at CBS News. Another concern is that he will consolidate CBS News and CNN, resulting in steep layoffs. Ellison has already said that, once the deal is completed, the streaming services Paramount+ and HBO Max will be merged into a single platform. (Financial Times)
How Peter Thiel’s Palantir helps automate Trump’s detention and surveillance regime. A new NPR report on the Department of Homeland Security’s use of surveillance and intimidation tactics to target activists included comments from an immigration lawyer who said that ELITE, a DHS app developed by Palantir, relies on aggregated data “that they would otherwise need a warrant for. So legally it’s very scary to me because it’s through technology they’re bypassing the Fourth Amendment.” ELITE, which pulls from Medicaid records and data compiled by federal agencies, was described by an ICE agent as a Google Maps-esque interface showing locations for possible deportation targets. The report also shed new light on how DHS is leaning on unilateral administrative subpoenas to unmask anonymous users on Meta platforms who have been critical of ICE. “The pattern appears to be, as soon as people become vocal critics of what’s happening in immigration enforcement, they get an email from their social media company that says the government has requested your data,” said ACLU attorney Steve Loney. (NPR)



So much content! Thank you for wading through it. Hope you were wearing protective clothing, metaphorically.
Oh good, the military is now run by robots who will eventually want to destroy everything and everybody. What was once horrifying science fiction has now become our death knell. The oligarchs served up their AI monsters to the likes of Pete Hegseth, who will go down in history as the stupidest man to ever run anything, much less the American military. We may not have liked or even trusted past secretaries of defense, but we never thought they could actually be stupid or evil enough to destroy the nation or the world. Oh, the rapture!