AI Companies Need to Take Concerns Seriously
Elon Musk, owner of X (formerly known as Twitter) and Grok, in September 2025. X users have been asking Grok to create softcore pornographic deepfakes, and Grok’s developers have been slow to respond to complaints. (Shutterstock/bella1105)
Public skepticism toward AI is significant, and companies must address misuse now or face harsher regulation later.
A few years ago, artificial intelligence (AI) was seen as something borderline whimsical: capable of producing a cartoonish video of Will Smith eating spaghetti, or of spitting out jokes on command. Now, AI is advertised as something you can genuinely trust to help with your day-to-day job, taxes, shopping, driving, and more. It is something that can create books, art, poetry, and films—including lifelike videos of Smith eating his pasta.
But those changes have brought about a dramatic shift in how the public perceives AI. Polling shows that Americans are skeptical of AI, and that skepticism has risen; majorities believe it will worsen creative thinking and problem solving, and are “more concerned than excited” about the new technology. In 2025, only 10 percent registered as being “more excited than concerned.”
Part of this is simply the result of AI being a new technology; a sudden, paradigm-shifting phenomenon is bound to provoke skepticism, particularly when it appears in everything, everywhere, all at once. But there is skepticism, and then there is a 10 percent approval rating. If a political figure’s rating sinks that low, it is a DEFCON 1 situation—because once you get that low, views become baked in and incredibly hard to reverse.
Yet AI companies seem mostly to dismiss these concerns. An OpenAI executive was fired—supposedly for sex discrimination—shortly after she raised concerns over a new AI erotica feature. The same company also recently disbanded a team whose goal was to promote the idea that AI is beneficial to humanity. And after a recent Grok photo scandal, X took days to address a problem that could have been fixed in hours.
This is a serious problem, because AI genuinely has the capability to improve everyday life: menial tasks could become a thing of the past, as could diseases such as cancer and Alzheimer’s. But if these legitimate concerns are not addressed, it may never get to that point.
AI Is Still Unknown
Within the tech world, different AI models are constantly compared: Google’s Gemini, Anthropic’s Claude, OpenAI’s ChatGPT, and more are all discussed, dissected, and inspected.
The issue here is that average people do not do the same. Anthropic discovered this the hard way with its Super Bowl ad. On paper, it was a pretty ingenious advertisement: a guy trying to learn a new workout asks a trainer (a stand-in for ChatGPT) for a workout plan. The trainer speaks like a machine and, at the end of the ad, suggests that the user purchase a product, the implication being that ChatGPT will promote paid advertisements.
This was not libelous; OpenAI has indeed announced that ChatGPT will try to get users to buy certain products. But most people cannot really tell the difference between the various models, and reactions to the ad proved it. Viewers placed the ad in the bottom 3 percent of all Super Bowl ads over the past five years, and reactions were confused. For many people, ChatGPT is shorthand for AI, just as “Google” is for “search.” “Anthropic” sounds like an athletic-wear company.
This should be extremely alarming for those who support AI and oppose increased regulation. In the tech world, the perception of AI is overwhelmingly positive. But because AI is still so unknown to average individuals, their perceptions remain malleable, meaning people can—and will—be turned against it if they become convinced it is dangerous.
AI: Becoming Dangerous
The original Will Smith video, first posted on Reddit in 2023, was silly: his eyes were googly, and he looked like a cartoon character. Newer versions could be mistaken for deleted scenes in a film; they are not perfect, but they are getting close.
The problem is that videos of Will Smith are not the only thing AI can create (not that such videos are good for Mr. Smith, but as a rich celebrity, he has various ways of forcing people to take them down or of making clear whether something is truly him). AI can now recreate anyone, and telling whether something is AI-generated has become a guessing game.
Eventually, this led people to realize it could be used to create softcore porn (or worse). “@Grok, put her in a bikini” became repeated ad nauseam on X, resulting in thousands of women finding themselves unwillingly replicated in an overtly sexual fashion.
And Grok’s developers were very slow to respond. Once the phenomenon became apparent, it took a series of embarrassing headlines to get X to change its policy. Even now, when asked to sexualize photos, Grok still, on occasion, provides the requested content.
While unasked-for sexualization is a serious problem, it is not a national security issue. Convincing fake videos of presidents could be. All it would take is a hacked official account—say, the White House X account—posting an AI-generated video of President Donald Trump to trigger a market crash or worse.
AI: Becoming a Nuisance
Supporters of AI have placed one argument at the center of their case: “AI will make life easier.” It will write your spreadsheets. It will, as Google advertised during the Super Bowl, help you plan your future garden.
The problem is that it will also enable a surge of spam calls. Experts have already warned that a sea of highly realistic spam is coming: calls that seem totally real, with voices that sound exactly like your mother, father, children, spouse, or friends. Voices that will sound scared and ask for money—perhaps claiming an emergency or a car crash—and urge you to help quickly. This may seem like a moderate problem; can something as mundane as spam calling really affect something as complex and society-shifting as AI?
Yes—easily. It is exactly the kind of thing that will annoy enough people to push them from ambivalent about AI to actively negative. Most people do not use X regularly; in 2025, one study found that roughly one in ten Americans used the app every day. That is by no means an insignificant number, but it is still relatively small. The overwhelming majority of people will never find themselves sexualized by AI.
But almost every American uses a smartphone: over 90 percent. And all of them can receive spam texts and calls. And if people start to associate those messages with AI—and they will—they will demand action.
How AI Companies Can Fix the Problem
AI companies can get ahead of all of this by taking action themselves. They should make it impossible to use their services to create images of living individuals without permission; exceptions can be made for historical figures who are no longer living. The same should be extended to individual voices. This action alone will go a long way toward calming nerves and eroding skepticism.
They should also support congressional legislation to codify the above and to ensure that bad actors who will not restrain themselves are punished. Bills such as the No Fakes Act are already before Congress; that bill would make it illegal to create reproductions of individuals by giving people the right to their own likeness.
Libertarian groups, such as the Foundation for Individual Rights and Expression (FIRE), have sought to portray the bill as a major violation of civil liberties and a restriction on free speech. One example they cite is a teacher being punished for creating an AI video of Ronald Reagan explaining his Cold War policy. Even if this is an issue, it is a relatively simple fix (just do not extend the right to likeness to the dead—though one wonders why the teacher could not use an actual video of Reagan instead of a fake one).
Their other complaint—that the bill would make it harder to conduct academic research—is likewise thin gruel. Academic research today already requires permission to view letters, notes, and other historical artifacts; if a researcher truly needs to recreate the voice of some historical figure (for whatever reason), they can get permission the legal way. The alternative is to let public frustration build and build and build, until Congress passes something far stricter than the No Fakes Act.
Artificial intelligence is a new technology that will change humanity. Those who are creating it are stewards of that revolution. They have a duty to steward it well.
About the Author: Anthony J. Constantini
Anthony J. Constantini is a policy analyst at the Bull Moose Project and the foreign affairs editor at Upward News. His work has appeared in a variety of domestic and international publications.