LAGOS, Nigeria (AP) — A spike in attacks by militants across northern Nigeria is “driving hunger to levels never seen before” and is expected to result in the worst levels of food insecurity in Africa next year, according to a World Food Program report released Tuesday.
The food agency of the United Nations projected 35 million people are likely to experience severe hunger in Nigeria by 2026, the highest number on the continent and the largest the agency has recorded since it began collecting data in Nigeria.
The WFP also predicted that at least 15,000 people in Borno state, the epicenter of Nigeria’s security crisis, will experience catastrophic hunger including famine-like conditions next year and will be classified as Phase 5. That's the highest classification of food insecurity, similar to what has been seen in some parts of Gaza and Sudan.
“Northern Nigeria is experiencing the most severe hunger crisis in a decade, with rural farming communities the hardest hit,” the WFP said in a statement.
Widespread attacks by various armed groups have deterred farmers from using their land, officials said.
In October, al-Qaida-affiliate Jama’at Nusrat al-Islam wal-Muslimin took responsibility for its first attack in Nigeria, making the group the latest entrant in a pool of armed groups launching attacks in the country.
More than 300 students and 12 teachers were kidnapped from a school in Niger state on Nov. 21, only four days after 25 schoolchildren were abducted 170 kilometers (106 miles) away in neighboring Kebbi state.
Nigeria also has been hard hit by a massive scaling down of U.N. food assistance following U.S. President Donald Trump's decision to gut the United States Agency for International Development.
The USAID cut ceased funding to the WFP, which said it will run out of resources for emergency food and nutrition assistance in December. Nigeria is one of several countries in the region where the cut has deepened the food crisis. In July, the agency suspended food assistance across West and Central Africa.
“Without confirmed funding, millions will be left without support in 2026, fueling instability and deepening a crisis that the world cannot afford to ignore,” the agency said.
This story was first published on Nov. 25, 2025. It was updated on Nov. 26, 2025 to correct that the United Nations food agency predicted that only 15,000 people in Borno state, the epicenter of the country’s security crisis, will be classified under the highest classification of food insecurity, which is Phase 5.
FILE - People wait to receive food donations from the United Nations World Food Program in Damasak, northeastern Nigeria, Oct 6, 2024. (AP Photo/Chinedu Asadu, File)
Like so many sectors of the economy, the news industry is hurtling toward a future where artificial intelligence plays a major role — grappling with questions about how much the technology is used, what consumers should be told about it, and whether anything can be done for the journalists who will be left behind.
These issues were on the minds of reporters for the independent outlet ProPublica as they walked picket lines earlier this month. They're inching toward a potential strike, in what is believed to be the first such job action in the news business in which how to deal with AI is the chief sticking point.
Few expect this dispute will be the last.
AI has undeniably helped journalists, simplifying complex tasks and saving time, particularly with data-focused stories. News organizations are using it to help sift through the Epstein files. AI suggests headlines and summarizes stories. Transcription technology has largely eliminated the need for a human to type up interviews. These days, even a simple Google search frequently involves AI.
Yet rushing to see how AI can help a financially troubled industry has resulted in several cases of publications owning up to errors.
Within the past year, Bloomberg issued several corrections for mistakes in AI-generated news summaries. Business Insider and Wired were forced to remove articles by a fake author named Margaux Blanchard. The Los Angeles Times had trouble with AI and opinion pieces. Ars Technica said AI fabricated quotes; the publication, which has frequently reported on the risks of overreliance on AI tools, embarrassed itself further by failing to follow its policy of telling readers when the tool is used.
The ProPublica dispute is noteworthy for how it touches on issues that are frequently a cause for debate. The union representing ProPublica's journalists, negotiating its first contract with the outlet known for investigative reporting, says it wants commitments that mirror those sought elsewhere in the industry about disclosure and the role of humans in the use of AI.
Along with holding informational pickets, union members pledged overwhelmingly that they would be willing to strike without a satisfactory agreement, said Jen Sheehan, spokeswoman for the New York Guild, the union that represents many journalists in the city.
“It feels to me pretty monumental when we think about the trajectory of AI and journalism,” said Alex Mahadevan, an expert on the topic at the Poynter Institute journalism think tank.
ProPublica has rejected its requests, the union said. Insight into why can be found in an essay, “Something Big is Happening,” that circulated widely this month. Author and investor Matt Shumer, who said he's spent six years building an AI startup, wrote that the technology is advancing so quickly that “if you haven't tried AI in the last few months, what exists today would be unrecognizable to you.”
Small wonder, then, that news executives are reluctant to put guarantees in writing that could quickly become outdated.
Rather than make promises that can't be kept, ProPublica is exploring how technology can create more space for investigative reporting, company spokesman Tyson Evans said. In the “unlikely event” of AI-related layoffs, ProPublica is proposing expanded severance packages for those affected, he said.
“We're approaching AI with both curiosity and skepticism,” Evans said. “It would be a mistake to freeze editorial decisions in a contract that will last years.”
Fifty-seven of 283 contracts at U.S. news organizations negotiated by the NewsGuild-USA contain language related to artificial intelligence, said Jon Schleuss, president of the union that represents more journalists than any in the country. The first such deals happened in 2023, and The Associated Press was one pioneer. He wants provisions in more contracts.
It won't be easy, judging by the reluctance of many outlets to be tied down. The organization Trusting News, which encourages news organizations to develop and make public their policies on AI use, estimates that less than half of U.S. outlets have done so.
“I think it is becoming harder,” Schleuss said, “because too many newsrooms are being run by the greedy side of the organization and not by the journalism side of the organization.”
The guild is pushing for contracts that guarantee AI won't eliminate jobs. That's no surprise; unions exist to protect jobs. Schleuss characterized a proposal that ensures an actual journalist is involved when AI is used as a way to prevent errors and help an outlet build trust with its readers.
“Humans are actually so much better at going out, finding the story, interviewing sources, bringing back the relevant pieces, asking the hard follow-up questions and putting that in a way that people can understand and see, whether it's a news story or a video,” he said. “Humans are way better at doing that than AI ever will be.”
Apparently, not everyone in journalism agrees. Chris Quinn, editor of The Plain Dealer in Cleveland, Ohio, wrote this month of his disgust with a recent college graduate who turned down a job offer because the person had been taught that AI was bad for journalism.
Quinn's newspaper has been sending some of its journalists out to cover stories by interviewing people, collecting quotes and information, then feeding it to a computer to write. While a human will edit what the computer spits out, an integral part of the process — a reporter using his or her judgment about how to tell a story — has been taken out of the reporter's hands. Quinn defended it as the best use of limited resources.
Research shows that a vast majority of American consumers believe that it's very important that newsrooms tell the public when AI is used to write stories or edit photographs, said Benjamin Toff, director of the Minnesota Journalism Center at the University of Minnesota. But here's the rub: Such disclosure makes them trust the outlet's stories less, not more.
A significant minority — 30% in a study Toff conducted last year — doesn't want AI used in journalism at all.
Telling a reader that AI was used is not as simple as it sounds. “There are just so many, many uses of AI in journalism, from the very beginning of the reporting process to when you hit publish, that just broadly declaring that when AI is used in the newsgathering process that you have to disclose it, just seems like it is actually a disservice to the reader in some cases,” Poynter's Mahadevan said.
Two lawmakers in New York state — the nation's publishing capital — introduced legislation this month requiring clear disclaimers when artificial intelligence is used in published content. There's no immediate word on its chances for passage, but both sponsors are Democrats in a legislature controlled by that party.
Mahadevan believes it's fair to have policies that require human involvement — editing to prevent slip-ups, for example. But even these declarations are open to interpretation, he said. If an outlet uses chatbots to answer reader questions, are they being edited by a human being?
“Speaking realistically, the newsroom of the future is going to look completely different than it does today,” he said. “Which means people will lose jobs. There will be new jobs. So I think it's important that we are having these conversations right now because audiences do not want a newsroom completely taken over by AI.”
David Bauder writes about the intersection of media and entertainment for the AP. Follow him at http://x.com/dbauder and https://bsky.app/profile/dbauder.bsky.social.
FILE - The OpenAI logo is displayed on a cellphone with an image on a computer monitor generated by ChatGPT's Dall-E text-to-image model, Dec. 8, 2023, in Boston. (AP Photo/Michael Dwyer, File)