WASHINGTON (AP) — The Justice Department has filed a complaint to legally take ownership of a sanctioned tanker and nearly 2 million barrels of petroleum seized off the coast of Venezuela in December, another step by President Donald Trump's administration to assert power over the country's oil sector after capturing leader Nicolás Maduro.
It's the first complaint filed by the U.S. to start the legal process to formally take control of one of at least 10 oil tankers intercepted by American authorities since late last year. The U.S. has accused Venezuela of using a shadow fleet of falsely flagged vessels to smuggle illicit crude into global supply chains.
“Under President Trump’s leadership, the era of secretly bankrolling regimes that pose clear threats to the United States is over,” Attorney General Pam Bondi said in an emailed statement. “This Department of Justice will deploy every legal authority at our disposal to completely dismantle and permanently shutter any operation that defies our laws and fuels chaos across the globe.”
The seizure of the vessel, named the Skipper, in December was the Republican administration's first in a series of similar actions and marked a dramatic escalation in Trump’s campaign to pressure Maduro by cutting off access to oil revenue that has long been the lifeblood of Venezuela’s economy.
Maduro, who called the tanker seizure an “act of international piracy,” was arrested in a U.S. raid last month and was taken to New York to face drug trafficking charges. He has pleaded not guilty, protesting his capture and declaring himself “the president of my country.” Following his ouster, several vessels fled the coast of Venezuela in spite of Trump's quarantine on sanctioned oil tankers, and U.S. forces have tracked and interdicted some of them as far away as the Indian Ocean.
The Trump administration has set out to control the production, refining and global distribution of Venezuela’s oil and oversee where the revenue flows. The U.S. has begun lifting broad sanctions to allow foreign companies to operate in Venezuela in a bid to revitalize the ailing oil industry.
A judge in Washington's federal court must sign off on the U.S. government's bid to permanently take ownership of the Skipper and its cargo so the oil can potentially be sold.
The Justice Department alleges the tanker moved oil from Iran and Venezuela throughout the world, flying false flags to hide its illegal activities while providing revenue for Iran’s paramilitary Revolutionary Guard, which the U.S. has deemed a foreign terrorist organization.
“Because of the coordinated efforts of our prosecutors and law enforcement partners, a ghost tanker that for years secretly moved illicit oil from Iran and Venezuela around the globe has been taken off the seas,” Assistant Attorney General A. Tysen Duva, who leads the Justice Department's Criminal Division, said in a statement.
“Today’s actions are an important step in making America and the world safer by disrupting the flow of millions of dollars to foreign terrorist organizations," he said.
Like so many sectors of the economy, the news industry is hurtling toward a future where artificial intelligence plays a major role — grappling with questions about how much the technology is used, what consumers should be told about it, and whether anything can be done for the journalists who will be left behind.
These issues were on the minds of reporters for the independent outlet ProPublica as they walked picket lines earlier this month. They're inching toward a potential strike, in what is believed to be the first such job action in the news business where how to deal with AI is the chief sticking point.
Few expect this dispute will be the last.
AI has undeniably helped journalists, simplifying complex tasks and saving time, particularly with data-focused stories. News organizations are using it to help sift through the Epstein files. AI suggests headlines, summarizes stories. Transcription technology has largely eliminated the need for a human to type up interviews. These days, even a simple Google search frequently involves AI.
Yet rushing to see how AI can help a financially troubled industry has resulted in several cases of publications owning up to errors.
Within the past year, Bloomberg issued several corrections for mistakes in AI-generated news summaries. Business Insider and Wired were forced to remove articles by a fake author named Margaux Blanchard. The Los Angeles Times had trouble with AI and opinion pieces. Ars Technica said AI fabricated quotes, and the publication, which has frequently reported on the risks of overreliance on AI tools, embarrassed itself further by failing to follow its own policy of telling readers when the tool is used.
The ProPublica dispute is noteworthy for how it touches on issues that are frequently cause for debate. The union representing ProPublica's journalists, negotiating its first contract with the outlet known for investigative reporting, says it wants commitments that mirror those sought elsewhere in the industry about disclosure and the role of humans in the use of AI.
Along with holding informational pickets, union members pledged overwhelmingly that they would be willing to strike without a satisfactory agreement, said Jen Sheehan, spokeswoman for the New York Guild, the union that represents many journalists in the city.
“It feels to me pretty monumental when we think about the trajectory of AI and journalism,” said Alex Mahadevan, an expert on the topic at the Poynter Institute journalism think tank.
ProPublica has rejected its requests, the union said. Insight into why can be found in an essay, “Something Big is Happening,” that circulated widely this month. Author and investor Matt Shumer, who said he's spent six years building an AI startup, wrote that the technology is advancing so quickly that “if you haven't tried AI in the last few months, what exists today would be unrecognizable to you.”
Small wonder, then, that news executives are reluctant to put guarantees in writing that could quickly become outdated.
Rather than make promises that can't be kept, ProPublica is exploring how technology can create more space for investigative reporting, company spokesman Tyson Evans said. In the “unlikely event” of AI-related layoffs, ProPublica is proposing expanded severance packages for those affected, he said.
“We're approaching AI with both curiosity and skepticism,” Evans said. “It would be a mistake to freeze editorial decisions in a contract that will last years.”
Fifty-seven of 283 contracts at U.S. news organizations negotiated by the NewsGuild-USA contain language related to artificial intelligence, said Jon Schleuss, president of the union, which represents more journalists than any other in the country. The first such deals happened in 2023, and The Associated Press was one pioneer. He wants provisions in more contracts.
It won't be easy, judging by the reluctance of many outlets to be tied down. The organization Trusting News, which encourages news organizations to develop and make public their policies on AI use, estimates that less than half of U.S. outlets have done so.
“I think it is becoming harder,” Schleuss said, “because too many newsrooms are being run by the greedy side of the organization and not by the journalism side of the organization.”
The guild is pushing for contracts that guarantee AI won't eliminate jobs. That's no surprise; unions exist to protect jobs. Schleuss characterized a proposal that ensures an actual journalist is involved when AI is used as a way to prevent errors and help an outlet build trust with its readers.
“Humans are actually so much better at going out, finding the story, interviewing sources, bringing back the relevant pieces, asking the hard follow-up questions and putting that in a way that people can understand and see, whether it's a news story or a video,” he said. “Humans are way better at doing that than AI ever will be.”
Apparently, not everyone in journalism agrees. Chris Quinn, editor of The Plain Dealer in Cleveland, Ohio, wrote this month of his disgust with a recent college graduate who turned down a job offer because the person had been taught that AI was bad for journalism.
Quinn's newspaper has been sending some of its journalists out to cover stories by interviewing people and collecting quotes and information, then feeding it all to a computer to write. While a human will edit what the computer spits out, an integral part of the process — a reporter using his or her judgment about how to tell a story — has been taken out of the reporter's hands. Quinn defended it as the best use of limited resources.
Research shows that a vast majority of American consumers believe that it's very important that newsrooms tell the public when AI is used to write stories or edit photographs, said Benjamin Toff, director of the Minnesota Journalism Center at the University of Minnesota. But here's the rub: Such disclosure makes them trust the outlet's stories less, not more.
A significant minority — 30% in a study Toff conducted last year — doesn't want AI used in journalism at all.
Telling a reader that AI was used is not as simple as it sounds. “There are just so many, many uses of AI in journalism, from the very beginning of the reporting process to when you hit publish, that just broadly declaring that when AI is used in the newsgathering process that you have to disclose it, just seems like it is actually a disservice to the reader in some cases,” Poynter's Mahadevan said.
Two lawmakers in New York state — the nation's publishing capital — introduced legislation this month requiring clear disclaimers when artificial intelligence is used in published content. There's no immediate word on its chances for passage, but both sponsors are Democrats in a legislature controlled by that party.
Mahadevan believes it's fair to have policies that require human involvement — editing to prevent slip-ups, for example. But even these declarations are open to interpretation, he said. If an outlet uses chatbots to answer reader questions, are they being edited by a human being?
“Speaking realistically, the newsroom of the future is going to look completely different than it does today,” he said. “Which means people will lose jobs. There will be new jobs. So I think it's important that we are having these conversations right now because audiences do not want a newsroom completely taken over by AI.”
David Bauder writes about the intersection of media and entertainment for the AP. Follow him at http://x.com/dbauder and https://bsky.app/profile/dbauder.bsky.social.