Thousands of AI ‘Vibe Coding’ Apps May Expose Sensitive Medical, Business Data
The promise of building apps with a few text prompts is turning into a growing cybersecurity headache: Researchers warn that the same AI tools that help people create software in minutes are also exposing sensitive company and personal information to the public internet.
A new investigation by Israeli cybersecurity firm Red Access found thousands of AI-generated web apps leaking data ranging from medical records to internal business documents. The findings add to mounting concerns about vibe coding, a fast-growing trend in which users rely heavily on AI tools to generate and deploy software with little or no traditional coding experience.
In total, the researchers identified roughly 380,000 publicly accessible assets created with AI-powered coding tools such as Lovable, Replit, Netlify, and Base44. About 5,000 of those apps exposed potentially sensitive information.
The findings, reported by Axios, suggest many users are publishing internal tools online without realizing they are publicly accessible. Dor Zvi, CEO of Red Access, said the company uncovered the apps while researching “shadow AI,” where employees use AI tools without formal approval from their organizations.
“The concept of people just creating something that simply, and using it in production … on behalf of their company without getting any permission — there is no limit,” Zvi told Axios.
He also warned that many non-technical users may not even think about security settings before launching apps online. “I don’t think it’s feasible to educate the whole world around security,” Zvi said in comments published by Axios. “My mother is [vibe coding] with Lovable, and no offense, but I don’t think she will think about role-based access.”
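The role-based access Zvi refers to is a deliberate authorization check between a visitor and the data a route serves. A minimal sketch of the idea, in generic Python (the `User` class, `require_role` helper, and route function here are hypothetical illustrations, not the API of any platform mentioned in this article):

```python
# Generic illustration of role-based access control (RBAC), the kind of
# check a hastily published vibe-coded app often omits. Every name below
# (User, require_role, view_medical_records) is hypothetical.

from dataclasses import dataclass


@dataclass
class User:
    name: str
    role: str  # e.g. "clinician", "admin", "anonymous"


def require_role(user: User, allowed_roles: set) -> bool:
    """Return True only if the user's role is in the allowed set.

    Without a gate like this, any visitor on the public internet,
    authenticated or not, can reach the data the route serves.
    """
    return user.role in allowed_roles


def view_medical_records(user: User) -> str:
    # Deny by default: only explicitly allowed roles may proceed.
    if not require_role(user, {"admin", "clinician"}):
        return "403 Forbidden"
    return "200 OK: records for authorized staff"
```

The key design choice is deny-by-default: access is refused unless the caller's role is explicitly on the allow list, which is the opposite of a freshly deployed app whose routes are public until someone remembers to lock them down.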
Medical, financial, and corporate data reportedly exposed
The exposed information reportedly included medical records, financial documents, chatbot conversations, schedules, and internal business materials.
Axios said it independently verified several exposed applications, including a shipping company app displaying vessel schedules, a healthcare platform detailing clinical trials in the UK, and customer support conversations from a cabinet supplier.
WIRED reported that some exposed apps appeared to contain hospital work assignments, sales records, marketing strategy presentations, financial information, and chatbot logs with customer names and contact details.
Researchers also claimed they found apps leaking patient conversations, school lesson recordings, and internal staff schedules. According to WIRED, around 40% of the exposed apps appeared to contain sensitive data.
“The end result is that organizations are actually leaking private data through vibe-coding applications,” Zvi told WIRED. “This is one of the biggest events ever where people are exposing corporate or other sensitive information to anyone in the world.”
Platforms push back on claims
The companies behind the AI coding tools disputed parts of the researchers’ findings, arguing that the visibility of public apps online does not automatically mean there was a security breach. Replit CEO Amjad Masad said users can decide whether apps are public or private.
“Replit allows users to choose whether apps are public or private,” Masad wrote in a statement cited by WIRED. “Public apps being accessible on the internet is expected behavior. Privacy settings can be changed at any time with a single click.”
A spokesperson for Lovable said the company was investigating the claims and emphasized that developers are responsible for how their apps are configured.
“Lovable takes reports of exposed data and phishing sites seriously, and we’re actively working to obtain what we need to investigate,” the company said in a statement published by WIRED. Base44 also defended its platform, saying users are given tools to configure security settings themselves.
The scale of this issue is expected to grow. Industry forecasts suggest that 60% of all new code will be AI-generated by the end of this year. While these tools democratize creation, they also bypass the traditional security checks used by professional engineering teams.
Related reading: As AI coding tools reshape how software gets built, OpenAI is also expanding Codex into a broader developer “super app” aimed at streamlining more of the app-building process.
The post Thousands of AI ‘Vibe Coding’ Apps May Expose Sensitive Medical, Business Data appeared first on eWEEK.