WASHINGTON (AP) — The Food and Drug Administration on Tuesday announced its first authorization of fruit-flavored electronic cigarettes intended for adult smokers, a major policy shift that comes after months of appeals to President Donald Trump from the vaping industry.
The decision is certain to be opposed by health groups and parent organizations that have long pointed to flavors as the driver behind underage vaping in the U.S. But the federal action comes as teen vaping rates have dropped to a 10-year low and manufacturers have pushed the Republican administration to loosen restrictions on their products.
Vaping companies have long made the case that their products can help blunt the toll of smoking among adults, which is blamed for 480,000 U.S. deaths annually due to cancer, lung disease and heart disease. The battery-powered devices have been sold in the U.S. since 2007, but their potential benefits have been overshadowed for years by uptake among middle and high school students.
The newly authorized e-cigarettes come in mango, blueberry and two varieties of menthol. Los Angeles-based vaping company Glas Inc. plans to market the flavors under the names Gold, Sapphire, Classic Menthol and Fresh Menthol, according to the FDA release.
Previously, the FDA had authorized only tobacco- and menthol-flavored vaping products. Most e-cigarettes cleared by regulators come from large manufacturers, including Juul and Altria.
Tuesday's announcement is not an approval or endorsement, and the FDA reiterated that the Glas vapes are only intended for adults interested in quitting or cutting back on cigarettes.
The FDA suggested the company's digital age-verification system makes it unlikely the products will be picked up by underage users. Users must first verify their age with a government ID on their cellphone. The e-cigarettes can then only be used when connected via Bluetooth to the phone of the verified user.
The FDA’s OK of the new fruity products will be “a key test case,” said Kathy Crosby of the Truth Initiative, an anti-tobacco nonprofit.
“Ultimately, it’s critical that we remain vigilant in protecting young people, including closely monitoring the use of authorized products,” Crosby said in an emailed statement.
As a presidential candidate, Trump vowed to “save” vaping and won backing from e-cigarette companies, shop owners and vaping enthusiasts.
Under President Joe Biden, the FDA denied more than a million marketing applications for candy- or fruit-flavored products, part of a wider crackdown that is credited with helping drive down teen vaping after a surge in 2019. During his first administration, Trump put in place the first flavor restrictions on e-cigarettes and raised the age for purchasing all tobacco products from 18 to 21.
But action on vaping and other tobacco policies has largely taken a backseat under FDA Commissioner Marty Makary, who has focused on a slate of other priorities, including restricting COVID-19 vaccines, phasing out artificial food dyes and speeding up approval of some innovative drugs.
Groups such as the Vapor Technology Association have met with administration officials in recent weeks to call for more action on flavors.
In March, the FDA released its first-ever guidance to industry on flavors, stating that menthol, coffee, mint and spice flavors could have a role in appealing to adult smokers. The same document also reiterated the risks of sweeter flavors that tend to appeal to teens, such as fruit, candy and dessert flavors.
The vast majority of U.S. teens who vape continue to use unauthorized fruit- and candy-flavored products, according to the latest government data. Those products are technically illegal but remain widely available in cheap, disposable brands typically imported from China.
The Associated Press Health and Science Department receives support from the Howard Hughes Medical Institute’s Department of Science Education and the Robert Wood Johnson Foundation. The AP is solely responsible for all content.
HARRISBURG, Pa. (AP) — Pennsylvania has sued an artificial intelligence chatbot maker, saying its chatbots illegally hold themselves out as doctors and deceive users into thinking they are getting medical advice from a licensed professional.
The lawsuit, filed Friday, asks the statewide Commonwealth Court to order Character Technologies Inc., the company behind Character.AI, to stop its chatbots “from engaging in the unlawful practice of medicine and surgery.”
The lawsuit could raise the question of whether artificial intelligence can be accused of practicing medicine, as opposed to regurgitating material found on the internet.
And with a growing number of wrongful death or negligence lawsuits targeting AI companies, it could help shape court decisions on whether AI chatbots are protected by a federal law that generally exempts internet companies from liability for the material users post on their services.
Gov. Josh Shapiro's administration called it a “first of its kind enforcement action,” and it comes amid growing pressure by states on tech companies to rein in their chatbots' potentially dangerous messages, especially to children.
Pennsylvania's lawsuit said an investigator from the state agency that licenses professionals created an account on Character.AI, searched for the word “psychiatry” and found a large number of characters, including one described as a “doctor of psychiatry.”
That character held itself out as able to assess the investigator “as a doctor” who is licensed in Pennsylvania, the lawsuit said.
“Pennsylvanians deserve to know who — or what — they are interacting with online, especially when it comes to their health,” Shapiro said in a statement. “We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional."
Character.AI said in a statement Tuesday that it prioritizes responsible product development and the well-being of its users. It posts disclaimers to inform users that characters on its website are not real people and that everything they say “should be treated as fiction,” it said.
Those disclaimers also say users should not rely on characters for professional advice, it said.
Derek Leben, a Carnegie Mellon University associate teaching professor of ethics who focuses on AI, said the ethical questions facing Character.AI might be different from other AI platforms like ChatGPT and Claude. That's because Character.AI explicitly markets itself as a fictional, role-playing site, and not a general purpose chatbot site, Leben said.
Still, Pennsylvania's lawsuit raises the question of whether chatbots can be accused of practicing medicine, Leben said. And, as lawsuits against AI companies proliferate, courts are trying to figure out whether chatbot makers can be held liable for things the chatbots say.
“It’s exactly the question that these cases right now are wrestling with,” Leben said.
Increasingly, AI companies are defending themselves against liability claims by saying they simply provide information available elsewhere on the internet, Leben said, and the question could become whether they are protected by a federal law that also shields social media companies.
Even before Pennsylvania's lawsuit, state policymakers had raised concerns about chatbots impersonating medical professionals.
Last year, California lawmakers passed a California Medical Association-backed bill that authorizes state agencies to sanction AI systems, such as chatbots, that represent themselves as health professionals. In New York, similar legislation is pending.
States are skeptical that AI self-regulation will work, said Amina Fazlullah, the head of tech policy advocacy for Common Sense Media, which pushes for protections for children online.
“We haven’t seen it work particularly well with social media, specifically for kids,” Fazlullah said.
In December, attorneys general from 39 states and Washington, D.C., wrote to Character Technologies and 12 other AI and tech firms — including Anthropic, Meta, Apple, Microsoft, OpenAI, Google and xAI — to warn them about a rise in misleading and manipulative chatbot messages that violate state laws.
In the letter, they said “it is illegal to provide mental health advice without a license, and doing so can both decrease trust in the mental health profession and deter customers from seeking help from actual professionals.”
Character Technologies has faced several lawsuits over child safety.
In January, Kentucky filed a consumer protection lawsuit against Character Technologies. Separately, Google and Character Technologies agreed to settle a lawsuit from a mother who alleged a chatbot pushed her teenage son to kill himself.
Last fall, Character.AI banned minors from using its chatbots.
Follow Marc Levy at http://twitter.com/timelywriter