All of the 400 exposed AI systems found by UpGuard have one thing in common: They use the open source AI framework called llama.cpp. This software makes it relatively easy for people to deploy open source AI models on their own systems or servers. However, if it is not set up properly, it can inadvertently expose the prompts users send. As companies and organizations of all sizes deploy AI, properly configuring the systems and infrastructure they run on is crucial to preventing leaks.
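As a rough illustration of what "set up properly" can mean here, the sketch below shows one way to launch llama.cpp's bundled server so it only listens on the local machine and requires an API key. The `--host`, `--port`, and `--api-key` flags are real llama.cpp server options; the model path and the key value are placeholders, and this is a minimal hardening sketch, not a complete security configuration.

```shell
# Sketch: launch llama.cpp's server without exposing it publicly.
# Binding to 127.0.0.1 (loopback) instead of 0.0.0.0 keeps the
# server unreachable from the open internet, and --api-key makes
# the server reject requests that lack the key.
# Model path and LLAMA_API_KEY are placeholders for illustration.
./llama-server \
  --model ./models/model.gguf \
  --host 127.0.0.1 \
  --port 8080 \
  --api-key "$LLAMA_API_KEY"
```

If the server does need to be reachable beyond the local machine, the common pattern is to keep it bound to loopback and put it behind a reverse proxy or VPN that handles authentication, rather than binding it directly to a public interface.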
Rapid improvements to generative AI over the past three years have led to an explosion in AI companions and systems that appear more “human.” For instance, Meta has experimented with AI characters that people can chat with on WhatsApp, Instagram, and Messenger. Generally, companion websites…