- Dev Notes
- Posts
- Key Congress staffers in AI debate are funded by tech giants like Google and Microsoft
Key Congress staffers in AI debate are funded by tech giants like Google and Microsoft
Good Morning! Tech giants like Google and Microsoft are funding fellowships for congressional staffers shaping AI policy, sparking conflict-of-interest concerns, while a new benchmark test called GAIA challenges AI systems with human-level questions, revealing the gap between current AI capabilities and true artificial general intelligence. Meanwhile, Android Studio Hedgehog introduces significant updates, including an enhanced user interface, a streamlined SDK Upgrade Assistant, and integrated App Quality Insights, aiming to advance the development experience for Android developers.
Key Congress staffers in AI debate are funded by tech giants like Google and Microsoft
Tech giants like Google, Microsoft, and others are helping fund fellowships for congressional staffers working on artificial intelligence (AI) policy, raising conflict-of-interest concerns.
Through nonprofit groups like the American Association for the Advancement of Science (AAAS), tech firms are providing financial support for fellows placed in key Senate offices shaping laws around AI. For instance, Microsoft and others funded a rapid response cohort of six PhD-level fellows now operating from the offices of Senators central to the debate.
Supporters argue fellowships fill expertise gaps since Congress lacks institutional knowledge and permanent staff focused on emerging technology issues. However, critics worry reliance on tech-tied fellows could prevent impartial AI rules protecting the public. They point out companies funding salaries may try tilting policies in their favor.
There are also questions around how the AAAS AI fellowships formed so quickly with extensive industry coordination. Craig Mundie, a former Microsoft executive advising the company's CEO, gave AAAS the fellowship idea and helped secure tech company financing. He approached industry "friends" about making gifts for the program.
The tech firms state they don't influence fellow hiring, placement or training. Additionally, AAAS says corporate money only covers around one-third of funding. But nonprofit backers like the Horizon Institute also have tech ties, receiving support from groups financed by Facebook co-founder Dustin Moskovitz.
Read More Here
A benchmark testing tool for general AI assistants
A team of AI researchers has developed a new benchmark test to evaluate whether AI systems are reaching human-level artificial general intelligence (AGI). Dubbed GAIA, the test poses questions that are easy for humans but difficult for current AI to help determine how close systems are to true AGI.
GAIA consists of a series of questions that compare an AI system's answers to those from a random sampling of humans. The researchers intentionally avoided the types of questions that AI typically performs well on, like fact retrieval. Instead, GAIA features open-ended, multi-step questions that require context and reasoning at a more human level.
For example, one sample question asks the AI to determine how a specific pint of ice cream compares to USDA standards for fat content, based on nutrition information from Wikipedia. Answering correctly involves understanding the question, looking up fat content ranges, finding the specific ice cream's fat grams per serving size on Wikipedia, comparing values, and reporting back accurately.
The research team tested several state-of-the-art AI models from top tech companies on GAIA questions. None came close to passing, even with optimal fine-tuning. The results suggest leading AI still lacks the general reasoning ability of humans in many areas. Passing the GAIA benchmark could indicate an AI system has reached advanced AGI.
GAIA provides a more rigorous way to assess AI capabilities and progress toward goals like developing a helpful digital assistant. Rather than debating claims, GAIA offers a concrete benchmark for the AI community to measure systems against objectively.
Read More Here
New Features in Android Studio Hedgehog
One significant update is the inclusion of IntelliJ IDEA 2023.1 platform updates, significantly enhancing the Studio IDE experience. The new user interface (UI) offers a modern, cleaner look and feel, with fixes to features like compact mode, support for splitting vertically or horizontally, and advanced settings for always showing tool window actions.
The SDK Upgrade Assistant is another notable addition, providing a step-by-step wizard flow to aid developers with targetSdkVersion upgrades. This tool streamlines the upgrading process, easing the transition of projects to Android 14 (API Level 34) by including enhanced relevance filters to eliminate unnecessary steps and pinpoint code sections requiring changes.
Android Studio Hedgehog introduces the ability to launch the IDE in Safe Mode, which can be useful if certain features fail or if the entire IDE does not launch due to custom configurations, incompatible environment variables, or plugins.
Additionally, the App Quality Insights tool now integrates Android vitals data from the Google Play console. This inclusion offers developers crash insights to promptly understand and resolve issues directly from the App Quality Insights window.
Read More Here
OpenAI’s Custom Chatbots Are Leaking Their Secrets
OpenAI's custom chatbots, known as GPTs, have been found to leak sensitive data, raising serious privacy concerns. These chatbots, which were released earlier this month, allow anyone to create their own AI chatbot without needing to code. However, security researchers have discovered that these custom GPTs can be manipulated into revealing the initial instructions they were given when created, and the files used to customize them can be downloaded.
This vulnerability poses a significant risk to personal information and proprietary data. Jiahao Yu, a computer science researcher at Northwestern University, emphasizes that the privacy concerns of file leakage should be taken seriously. The issue is particularly concerning as OpenAI is planning to launch its own GPT marketplace, which could potentially expose more users to these risks.
Researchers have found it surprisingly straightforward to extract information from these bots, with a 100% success rate for file leakage and a 97% success rate for extracting prompts from the system. This vulnerability could compromise privacy and security, and raises concerns about protecting personal information and intellectual property when using these AI systems.
In response to these concerns, OpenAI has delayed the launch of its online store for custom chatbots, originally planned for late 2023, to early 2024. The company has not yet addressed the security flaw, leaving developers struggling to prevent their custom code from being copied by third parties.
Read More Here
Internet Spotlight
The Tech in a Tesla’s Cybertruck
Was this forwarded to you? Sign Up Here
Reply