OpenAI wants to control your computer

And: AI faces major election test in Indonesia | Tech titans join White House AI consortium

Exploring below the surface of AI headlines.

Summaries | Insights | Points of View

In Today’s Edition

OpenAI

Summary - Imagine a world where your computer is no longer just a tool but a buddy that knows you so well it does your tasks for you while you kick back and relax. That world might be coming faster than we thought. OpenAI is on a mission to make your PC your personal genie. According to The Information, OpenAI is developing agent software so advanced it could turn ChatGPT into a "supersmart personal assistant" – think Siri on steroids.

Buoy points:

  • What are agents? Basically, software that controls your device, performing tasks across multiple apps by simulating clicks, cursor movements, and keystrokes. In other words, the agent would "use your computer for you, instead of you using your computer." (For a feel of what those simulated actions look like, see the sketch after this list.)

  • The broader vision: To transform ChatGPT into an ultra-intelligent personal assistant, capable of tasks Siri could only dream of, including turning documents into spreadsheets and analyzing data.

  • Done-for-you browsing: OpenAI is reportedly developing a web browsing AI that can plan your flights and conduct research on companies, tasks currently beyond ChatGPT's grasp.

  • The pivot to agent software: With the GPT Store already out and people building agent-like software that connects ChatGPT to thousands of functions, the pivot to a more deeply embedded agent experience seems natural.

  • New age in computing: This is likely the dawn of a new age not just in software but in hardware as well. AI-dedicated operating systems and computer hardware might be next.
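
For readers curious what "simulating clicks, cursor movements, and keystrokes" looks like at the lowest level, here is a minimal sketch using the open-source pyautogui library. This illustrates the general UI-automation technique only, not OpenAI's implementation; the macOS Spotlight shortcut, screen coordinates, and flight-search scenario are made-up assumptions for the example.

```python
# Minimal sketch of UI automation, the low-level mechanic behind "agent" software.
# NOT OpenAI's code: the macOS Spotlight shortcut, screen coordinates, and the
# flight-search scenario are illustrative assumptions.
import time
import pyautogui

def open_app_via_spotlight(app_name: str) -> None:
    """Open an application by typing its name into macOS Spotlight (assumes macOS)."""
    pyautogui.hotkey("command", "space")       # open Spotlight
    time.sleep(0.5)
    pyautogui.write(app_name, interval=0.05)   # type the app name like a user would
    pyautogui.press("enter")
    time.sleep(2)                              # give the app time to launch

def fill_search_box(query: str, box_x: int, box_y: int) -> None:
    """Click a (hypothetical) search box at the given coordinates and type a query."""
    pyautogui.moveTo(box_x, box_y, duration=0.4)  # move the cursor visibly
    pyautogui.click()                             # click the search box
    pyautogui.write(query, interval=0.03)         # simulate keystrokes
    pyautogui.press("enter")

if __name__ == "__main__":
    # A real agent would choose these steps itself (e.g., a model reasoning over
    # screenshots); they are hard-coded here purely to show the primitive actions.
    open_app_via_spotlight("Safari")
    fill_search_box("flights Jakarta to Singapore next Friday", box_x=640, box_y=120)
```

The hard part, of course, isn't issuing the clicks but deciding which clicks to issue – which is where the "supersmart" model comes in.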

POV - In a nutshell, this development by OpenAI could either be the dawn of a new digital era or the beginning of our reliance on AI for tasks we didn't even think we needed help with. Either way, it's a fascinating time to be alive. What are your thoughts? Is this exciting or scary? Both?

AI election warm-up

Summary - In the bustling streets of Jakarta, a quirky, AI-generated, chubby-cheeked cartoon of General Prabowo Subianto is wooing the hearts of Gen Z voters with Korean-style finger hearts and cuddles with his cat, Bobby. The Indonesian presidential election is witnessing a digital transformation like no other, with generative AI propelling a once-feared commander into a "gemoy" (cute and cuddly) internet sensation. This unprecedented leap into the future of political campaigning might just be the most exciting thing since sliced bread was introduced to campaign breakfasts. It’s a tale of technology meeting tradition, where digital avatars are becoming as influential as their real-world counterparts.

Buoy points:

  • Generative AI Rebranding: Prabowo Subianto, Indonesia's defense minister and a presidential candidate, has harnessed generative AI to soften his image, transforming from a feared nationalist into a lovable, cuddly figure, likely a factor in his lead in the polls.

  • Youth Engagement: About half of Indonesia's 205 million voters are under 40, which explains the strategic use of AI to reach younger demographics on social media platforms like TikTok, where Prabowo's AI avatar has racked up 19 billion views.

  • AI in Political Campaigning: The Indonesian elections are a testing ground for the use of AI in political campaigning, with technologies being used to create campaign art, track social media sentiment, and build interactive chatbots.

  • OpenAI and Political Use: Despite OpenAI's guidelines against the use of its technology for political campaigning, its tools have been widely utilized in Indonesia's elections, highlighting challenges in enforcing these policies.

  • Hyper-Local Campaign Strategies: The Pemilu.AI app, built on OpenAI's GPT-4 and GPT-3.5, helps candidates craft speeches and social media content tailored to local constituencies, emphasizing characteristics like humility and religiosity (a rough sketch of this kind of prompt-driven localization follows this list).

  • The Warm-up Election: Indonesia is the world's third-largest democracy, and this contest can be viewed as an AI-infused warm-up for the U.S. election in November. But India goes next, with its election in May.
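
To make the Pemilu.AI bullet concrete, here is a rough sketch of what prompt-driven localization of campaign copy could look like with OpenAI's Python SDK. The prompt wording, constituency details, traits, and model choice are assumptions for illustration; this is not Pemilu.AI's actual code.

```python
# Rough sketch of prompt-driven, constituency-tailored campaign copy.
# The prompt, traits, and constituency are illustrative assumptions, not Pemilu.AI's code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_local_post(candidate: str, constituency: str, traits: list[str]) -> str:
    """Ask the model for a short social-media post tuned to a local audience."""
    prompt = (
        f"Write a short, upbeat social-media post for {candidate}, "
        f"a legislative candidate in {constituency}. "
        f"Emphasize these qualities: {', '.join(traits)}. "
        "Keep it under 60 words."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # the article notes Pemilu.AI uses GPT-4 and GPT-3.5
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_local_post("a hypothetical candidate", "a rural West Java district",
                           ["humility", "religiosity"]))
```

As the article points out, OpenAI's own guidelines restrict political campaigning, which is exactly why tooling like this sits at the center of the enforcement debate.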

POV - As AI technology becomes increasingly sophisticated and accessible, future elections in countries like the United States and India may see even more innovative uses of AI, potentially transforming political campaigning and voter engagement in unprecedented ways. But this year we should also see an acceleration of rules and regulations, with AI titans enacting their own policies and guardrails. I love the creativity of this Indonesian example, but we’ll see how much of it remains permitted going forward. As long as it’s used for good, it’s great - but who gets to decide?

US AI Safety Consortium

Summary - The White House announced the establishment of the U.S. AI Safety Institute Consortium (AISIC), with the aim to “develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.” The list includes Big Tech and AI firms that are now household names, like Google, Microsoft, Apple, Amazon, Meta, OpenAI, Anthropic, and others, alongside leaders in industry, research, and academia.

Buoy points:

The AISIC established working groups in November 2023 in which consortium members are invited to participate. These are:

  • Risk Management Group: Enhance the AI Risk Management Framework specifically for generative AI, creating guidance for federal agencies and making the framework operational.

  • Synthetic Content Group: Work on identifying and developing standards and techniques for verifying, labeling, detecting, and preventing harmful synthetic content. This includes protecting against misuse in creating inappropriate material and ensuring tools used for these purposes are tested and audited.

  • Capability Evaluations Group: Develop guidance for assessing AI's potential risks, particularly in sensitive areas like cybersecurity and physical system control, and support the creation of environments for safe AI testing.

  • Red-Teaming Group: Formulate guidelines to facilitate red-teaming exercises for AI developers, focusing on dual-use technologies to ensure the deployment of secure and reliable AI systems.

  • Safety & Security Group: Develop safety and security guidelines for dual-use AI technologies, coordinating efforts to manage risks associated with these models.

POV - According to the article, NIST, under which the AISIC is formed, is underfunded, and there is a fog over how much this consortium can accomplish. How much hope do you hold out for positive outcomes, especially considering the dizzying pace of AI advancement?

Don’t forget to check out other AI headlines in The Ocean.

The Ocean is designed to help you stay informed across the wide array of AI headlines, giving you just the snippet of news you need when you’re short on time, with one-click ease if you want to dive deeper.