Lesser-Used Languages Can Jailbreak ChatGPT
And: Figure AI Seeking to Raise $500M | Law Enforcement Weighed Down by AI-made Fake CSAM
Exploring below the surface of AI headlines.
Summaries | Insights | Points of View
In Today’s Edition
OpenAI / Safety
Image source: Brown University
Summary - Researchers at Brown University discovered a way to bypass OpenAI's GPT-4 safety guardrails (jailbreaking) by translating harmful prompts into low-resource languages, such as Scots Gaelic, and then translating the responses back to English. This round trip evaded content filters designed to prevent the AI from generating dangerous information, succeeding roughly 79% of the time. The vulnerability points to a significant oversight in AI safety mechanisms and suggests a need for more inclusive safety measures that cover a far wider range of languages. Read the report from Brown University here.
Buoy points:
Exploiting Language Gaps: The study revealed that translating prompts into low-resource languages like Scots Gaelic, Zulu, Hmong, or Guarani can circumvent GPT-4's safety features, which block harmful content submitted in English about 99% of the time.
Success Rate of Bypassing Safeguards: By translating harmful prompts into and from these less common languages, researchers successfully bypassed GPT-4's safety mechanisms approximately 79% of the time.
Vulnerability Across Different Domains: In lesser-known languages, the AI was more likely to comply with prompts related to terrorism, financial crime, and misinformation, whereas prompts involving child sexual abuse were less likely to be answered.
Machine Translation Attacks: The attack's effectiveness varies with how common the language is; languages that are more widely spoken, and thus better represented in training data, such as Bengali or Thai, pose less of a risk.
Technological Disparities and Safety Risks: The deficiency in training on low-resource languages not only exacerbates technological disparities among speakers of these languages but now also presents a universal safety risk to all users of large language models (LLMs).
POV - This research sheds light on an emerging attack vector: exploiting language translation as a loophole to bypass safety measures. It underscores the importance of developing AI safety mechanisms that account for linguistic diversity. This is easier said than done, though: some estimate that there are 300 written languages and 7,000 spoken languages and dialects globally. What an immense challenge for AI and tech leaders. Can humans close this AI loophole?
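For readers who want to see the mechanics, here is a minimal sketch of the round trip the researchers describe, framed as a safety-evaluation harness with a benign probe prompt. The `translate` helper is a hypothetical stand-in for whatever machine-translation service you have access to, and the model name, language code, and probe text are illustrative assumptions, not details drawn from the paper.

```python
# Minimal sketch of the translation round trip, for safety evaluation.
# Assumptions (not from the article): the OpenAI Python SDK (>= 1.0) is
# installed, OPENAI_API_KEY is set in the environment, and `translate`
# is a hypothetical placeholder for a real machine-translation service.

from openai import OpenAI

client = OpenAI()


def translate(text: str, source: str, target: str) -> str:
    """Hypothetical MT helper - swap in an actual translation API here."""
    raise NotImplementedError("plug in a machine-translation service")


def round_trip_probe(prompt_en: str, language: str = "gd") -> str:
    """Send an English probe through a low-resource language
    ('gd' = Scots Gaelic) and return the English response."""
    # 1. Translate the English prompt into the low-resource language.
    prompt_lr = translate(prompt_en, source="en", target=language)

    # 2. Query the model in that language.
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt_lr}],
    )
    response_lr = completion.choices[0].message.content

    # 3. Translate the model's reply back into English.
    return translate(response_lr, source=language, target="en")


# Benign example probe; the study measured how often refusals that
# occur in English fail to occur after this round trip.
# print(round_trip_probe("Explain how content filters classify prompts."))
```

The takeaway is that refusal behavior tuned predominantly on English text often fails to trigger once the same request arrives in a language the model saw little of during safety training.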
Figure AI
Image source: siliconANGLE
Summary - Figure AI, a startup building human-like robots, is in talks for a $500 million funding round led by Microsoft and OpenAI, valuing the company at nearly $1.9 billion. Their humanoid robots, designed for industrial tasks, could make workplaces safer and automate tedious jobs. While no robots have been commercially deployed yet, Figure already has a deal with BMW for testing in their factories.
Buoy points:
Significant Funding Round: Figure AI Inc.'s potential $500 million funding round, led by Microsoft and OpenAI, underscores the growing interest and investment in advanced robotics.
Innovative Humanoid Robots: Unlike traditional industrial robots, Figure's creations are bipedal and humanoid, designed to navigate and operate in environments built for humans.
Partnership with BMW: The collaboration with BMW to trial robots in automotive manufacturing showcases the practical applications of Figure's technology in high-value industries, emphasizing the robots' capability to perform risky or tedious tasks.
Emerging Market for General-Purpose Robotics: The development of Figure's robots points to a broader trend towards versatile, general-purpose robotics, moving beyond the limitations of single-purpose, task-specific robots that have dominated the market.
Competitive Landscape: Figure's advancements and funding news come amid increasing competition in the humanoid robotics space, with notable developments from Tesla and Agility Robotics.
POV - We don't need runaway robots like those depicted in movies like RoboCop and I, Robot. However, I welcome this advancement, especially for automating risky, tedious, or physically intensive tasks - as long as humans are smart enough to build in human-controlled safeguards. There are so many potential applications in healthcare, construction, agriculture, personal assistance, and beyond. Where do you see humanoid robots finding a place in society?
AI-made CSAM
Image source: Ars Technica | Getty Images
Summary - The rise of AI-generated fake child sex images is hindering investigations into real child abuse cases, despite calls for legislation to address the issue. This surge in AI misuse not only risks normalizing child sexual exploitation but also complicates the identification of actual victims. Experts foresee an exponential growth in such cases, prompting concerns about the sufficiency of existing laws to handle these crimes.
Buoy points:
AI-generated child sex images: The increasing use of AI to create fake child sex images is complicating real child abuse investigations because it is difficult to distinguish real images from AI-generated ones.
Legal gray zone: The lack of clear legislation against AI-generated non-consensual intimate imagery, despite calls from attorneys general, is hindering law enforcement efforts.
Impact on victims and society: The proliferation of realistic but fake images is normalizing child sexual exploitation, putting more children at risk, and obstructing law enforcement from identifying actual victims.
Tech platform challenges: AI-generated content is making it harder for platforms to monitor for child sexual exploitation and is producing reports that investigators cannot act on, while expanding encryption options further limit crime tracking.
Legislative response: Congress has reintroduced legislation like the Disrupt Explicit Forged Images and Non-Consensual Edits Act and the Preventing Deepfakes of Intimate Images Act to combat AI-generated non-consensual intimate images.
POV - The tech industry has a responsibility to guard against this type of abuse. Yes, the capability existed before, but with the massive proliferation of AI image-generation tools, anyone who can type the most basic instructions can now generate photo-realistic images. For their part, lawmakers will continue to codify penalties for this kind of misuse, and rightfully so.