You can’t recall AI like a defective drug

At a recent AI summit in New Delhi, Sam Altman warned that early versions of superintelligence could arrive by 2028, that AI could be weaponized to create novel pathogens, and that democratic societies need to act before they are overtaken by the technology they have built. These concerns are widely shared across the industry. Geoffrey Hinton, the Nobel laureate known as “the godfather of AI,” has warned that creating digital beings more intelligent than ourselves poses a genuine existential threat. Mustafa Suleyman, CEO of Microsoft AI, devoted much of his book The Coming Wave to the argument that AI’s fusion with synthetic biology could put the tools to engineer a deadly pandemic within reach of a single individual. These are not warnings about a distant future. Last week, a clash over who controls AI, and on what terms, led to the complete collapse of one leading AI company’s relationship with the Pentagon.

When politicians and business leaders try to make sense of issues like these, they are often tempted to look to the pharmaceutical industry for a regulatory model. Senator Richard Blumenthal—one of the few legislators actively pushing for meaningful AI regulation—has proposed that the way the U.S. government regulates the pharmaceutical industry can serve as a model for AI oversight. The analogy makes intuitive sense. The pharma model shows that strict licensing and oversight of potentially dangerous emerging technologies can limit threats without placing undue restrictions on innovation.

The instinctive attraction of this approach isn’t confined to legislators. Many companies are applying the same logic internally—whether consciously or not—managing AI risk through stage-gate reviews, pre-deployment testing, and post-launch monitoring. The pharma model, in other words, is already the de facto governance framework for much of the industry. The problem is that it’s the wrong framework—and the differences are not just technical but existential.

Three disanalogies that matter

Pharmaceutical regulation works because the barriers to entry are high, the product is physical and controllable, and the development cycle is slow enough for oversight to keep pace. None of these conditions hold for AI.

First, barriers to entry are very different. Bringing a new drug to market costs an average of $1.1 billion, according to a 2020 study published in the Journal of the American Medical Association. The infrastructure alone—laboratories, clinical trial networks, manufacturing facilities—limits production to a relatively small number of identifiable companies that regulators can monitor. AI has no equivalent friction. Capable models can be built for a fraction of that cost, fine-tuned on consumer hardware, and deployed globally from a laptop. The universe of actors a regulator would need to track is not a handful of identifiable companies—it is potentially anyone, anywhere.

Second, a pharmaceutical product is physical. Manufacturing it requires raw materials, specialized equipment, and distribution logistics. All of this creates friction that regulators can exploit by imposing oversight checkpoints. But code has no such friction. Once released, an AI model’s weights can be copied bit for bit and shared across borders far more quickly than any physical weapon or industrial system. Its marginal cost of replication is effectively zero. And you cannot recall software the way you recall a contaminated drug. Once it is in the wild, it stays in the wild.

Even capabilities delivered purely through cloud access are vulnerable to replication, and thus to the circumvention of corporate or regulatory guardrails. In just the last month, Anthropic disclosed that three Chinese AI labs—DeepSeek, Moonshot, and MiniMax—had used 24,000 accounts to generate over 16 million exchanges with Claude, extracting its most advanced capabilities through a technique called distillation. The Chinese labs did not need to infiltrate a supply chain or build expensive factories. They only needed API access and carefully crafted prompts, routed through proxy networks designed to evade detection. There is no pharmaceutical equivalent of this replicability.

The final crucial disanalogy is speed. The pharma approval pipeline assumes that a product will go through years of controlled testing before it reaches the public. But AI models evolve on software timelines. Capabilities improve not only through hardware gains but through software updates, new training methods, and frequent model releases that can produce meaningful jumps in weeks rather than years. Anthropic, for instance, shipped two major Claude releases within ten weeks. The iteration cycle is so fast that by the time any pharma-style approval process could hope to evaluate a model, that model would already be obsolete, replaced by something far more powerful for which the evaluation process had not even begun.

Why “test, deploy, monitor” doesn’t work

The problem isn’t confined to government. The same pharma-shaped thinking that distorts regulatory frameworks has taken root inside organizations—and it leaves them exposed for the same reasons.

Pharma-type risks are familiar: a product might have harmful side effects, so you test it before deployment, monitor it afterward, and pull it back if something goes wrong. Even without an external regulator, many companies are applying this logic to AI internally, managing risk through stage-gate reviews, pre-deployment testing, and post-launch monitoring. It feels responsible. It feels sufficient.

This is precisely the danger.

Of course, stage-gate reviews and pre-deployment testing are not worthless. They catch real errors, enforce discipline, and create a paper trail that demonstrates due diligence to boards and regulators. Any organization that has implemented them is better off than one that has done nothing. But these frameworks create a false sense of coverage. The risk they manage is the risk they were designed for—product defects, adverse effects, quality-control failures. AI’s risk profile has a different shape entirely. It is defined by the potential for irreversibility, rapid proliferation, and misuse. Not every AI-driven outcome will trigger these risks. But unlike a defective product, you cannot issue a recall once the damage is done.

This combination of potential threats means that the familiar toolkit of managed risk simply doesn’t fit—and organizations that believe it does are accepting exposures they haven’t mapped. It is precisely to meet these challenges that we developed the OPEN and CARE frameworks for managing AI innovation and risk. The CARE framework, in particular, provides a structured methodology for governing AI risk and is the foundation for the recommendations that follow.

Build governance for AI risk

The CARE framework works through four stages: Catastrophize, identifying what could go wrong; Assess, prioritizing those risks; Regulate, implementing controls; and Exit, planning for when those controls fail. Applied to your organization’s AI exposure, the framework points toward five immediate actions.
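The four stages map naturally onto a risk register. The sketch below is purely illustrative: the class, field names, and scoring rule are assumptions for demonstration, not part of the published CARE framework.

```python
from dataclasses import dataclass, field

@dataclass
class AIRisk:
    """One entry in a CARE-style risk register (illustrative field names)."""
    scenario: str        # Catastrophize: what could go wrong
    severity: int        # Assess: 1 (minor) .. 5 (irreversible)
    likelihood: int      # Assess: 1 (rare) .. 5 (expected)
    controls: list[str] = field(default_factory=list)  # Regulate: mitigations
    exit_plan: str = ""  # Exit: what happens when the controls fail

    def priority(self) -> int:
        # Assess: a simple severity-times-likelihood score for triage
        return self.severity * self.likelihood

register = [
    AIRisk("Proprietary data pasted into a public chatbot", 4, 4,
           ["tiered data policy", "DLP scanning"], "rotate exposed credentials"),
    AIRisk("AI-generated code shipped without review", 5, 3,
           ["mandatory human review gate"], "rollback and incident review"),
]

# Triage: handle the highest-priority scenarios first
for risk in sorted(register, key=AIRisk.priority, reverse=True):
    print(risk.priority(), risk.scenario)
```

Note that the Exit field is populated up front, before anything has gone wrong; the framework treats control failure as an expected case to plan for, not an exception.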

1. Surface your shadow AI exposure. Ask your direct reports one question: what AI tools are you using that weren’t provided by the company? The answers will tell you how large the gap is between the AI your organization officially uses and the AI your people are actually relying on.

2. Map your irreversibility points—and your fallbacks. Identify the AI-dependent processes where a failure would be irreversible or highly damaging, such as automated customer communications, AI-assisted code pushed to production, or algorithmic hiring screens. Ask whether your current safeguards assume you can catch and correct errors before they reach the outside world. If they do, redesign them—and build explicit fallback procedures for when they fail anyway.

3. Lock down your data exposure. Every AI tool your organization touches is a data pipeline running in both directions. Classify your data into tiers—public, internal, confidential, restricted—and map which AI tools are authorized for each tier. Audit your vendor agreements for training-data clauses. The moment proprietary data enters a third-party system, your ability to recall it is gone.
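The tier-to-tool mapping above can be enforced mechanically. This is a minimal sketch of such a check; the tier names follow the article, but the tool entries and the default-deny rule for unknown tools are assumptions, not a standard.

```python
# Tiers in ascending order of sensitivity (from the article's classification)
TIERS = ["public", "internal", "confidential", "restricted"]

# Highest tier each tool is cleared to handle (hypothetical entries)
TOOL_CLEARANCE = {
    "approved-internal-copilot": "confidential",
    "public-chatbot": "public",
}

def is_authorized(tool: str, data_tier: str) -> bool:
    """True if the tool is cleared for data at this tier of sensitivity."""
    # Unknown or unvetted tools default to the lowest clearance
    cleared = TOOL_CLEARANCE.get(tool, "public")
    return TIERS.index(data_tier) <= TIERS.index(cleared)

print(is_authorized("approved-internal-copilot", "internal"))  # True
print(is_authorized("public-chatbot", "confidential"))         # False
```

The default-deny behavior for unlisted tools is the important design choice: it converts the shadow-AI problem from an invisible exposure into a visible authorization failure.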

4. Red team for misuse, not just malfunction. Red teaming for malfunction asks “What if this breaks?” Red teaming for misuse asks “What if this works exactly as intended and someone uses it for the wrong purpose?” As the CARE framework’s Catastrophize phase emphasizes, you need both.

5. Assign clear executive ownership. None of the above matters if accountability is diffused across committees. Designate a single executive who owns AI risk the way your CFO owns financial risk. That person needs authority, budget, and a direct line to the board.

The real stakes

For decades, pharma-style regulation has been one of regulation’s great successes: a framework that protects the public without strangling the industry. But the model is insufficient for AI.

At the governmental level, serious people are reaching for serious solutions. Sam Altman’s call at the New Delhi summit for an international regulatory body modeled on the International Atomic Energy Agency reflects a clearer-eyed view of what kind of technology this is—one that demands oversight frameworks commensurate with its actual risk profile, not models borrowed from industries that don’t share its characteristics.

Business leaders should follow the same path. The category of problem that governments are grappling with at the international level is the same category of problem you are grappling with inside your organization. Design your governance accordingly—for the technology you actually have, not the one you wish you were dealing with.
