Google's AI Now Writes 25% of Its Codebase: What This Means for Devs
PLUS: NPM Supply Chain Attack Uses Ethereum Blockchain
Good Morning! Big news from Google this week: the company revealed that AI now writes over a quarter of its new code, aided by its internal tool "Goose," marking a significant shift in how one of tech's giants approaches software development. In a clever twist on cybersecurity threats, attackers are now using Ethereum's blockchain as resilient command-and-control infrastructure for a new supply chain attack targeting JavaScript developers. And in what could be a watershed moment for AI in cybersecurity, Google's "Big Sleep" project has discovered its first zero-day vulnerability in SQLite, demonstrating AI's growing ability to find complex software bugs that traditional methods miss.
Google's AI Now Writes 25% of Its Codebase: What This Means for Devs
In a recent earnings call, Google CEO Sundar Pichai dropped a bombshell: over 25% of Google's new code is now AI-generated, with human engineers reviewing and accepting the contributions. Central to that shift is their internal AI tool "Goose," built on their Gemini LLM, which helps with coding tasks and can even modify code based on natural language instructions.
What's New: Google's adoption of AI coding tools isn't just experimental; it's becoming core to their development workflow. Engineers are using AI to accelerate coding processes while maintaining oversight of the generated code. This matches broader industry trends, with GitHub reporting that 92% of US developers are already using AI coding tools both professionally and personally.
The shift suggests a new paradigm in software development, where AI handles more routine coding tasks while engineers focus on:
Architecture decisions and system design
Code review and quality assurance
Integration of AI-generated components
Complex problem-solving and optimization
Dev Perspective: While some worry about AI introducing hard-to-detect bugs, this transition mirrors historical shifts in programming, from assembly to high-level languages, or the adoption of object-oriented programming. The key difference? Instead of just abstracting complexity, AI is actively participating in code generation, with humans as strategic overseers.
Read More Here
NPM Supply Chain Attack Uses Ethereum Blockchain
Context: The attackers deployed a malicious package called "jest-fet-mock" (spot the typosquatting?) that impersonates popular JavaScript testing utilities, spreading multi-platform malware across Windows, Linux, and macOS developer environments.
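The name is just two dropped letters away from the real jest-fetch-mock, close enough to slip past a hurried npm install. As a toy illustration (ours, not from the report), a simple edit-distance check against a list of popular package names is enough to catch this class of squat:

```typescript
// Toy typosquat detector: flag install candidates whose names sit within
// edit distance 2 of a known popular package. The POPULAR list here is a
// stand-in; a real check would use the top N packages from the registry.
const POPULAR = ["jest-fetch-mock", "fetch-mock-jest", "jest-mock"];

// Classic dynamic-programming Levenshtein distance.
function editDistance(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) =>
      i === 0 ? j : j === 0 ? i : 0
    )
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Returns the popular packages a candidate name suspiciously resembles.
function suspiciousNeighbors(candidate: string): string[] {
  return POPULAR.filter(
    (name) => name !== candidate && editDistance(candidate, name) <= 2
  );
}

console.log(suspiciousNeighbors("jest-fet-mock")); // ["jest-fetch-mock"]
```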
Here's where it gets interesting: instead of traditional C2 servers, the malware connects to an Ethereum smart contract at address "0xa1b40044EBc2794f207D45143Bd82a1B86156c6b". The contract's getString method serves as a decentralized bulletin board for C2 server addresses. Pretty clever, right? This makes the infrastructure incredibly resilient since you can't exactly "take down" the blockchain.
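To make the mechanics concrete, here's a minimal sketch of that lookup pattern using ethers.js v6. The zero-argument getString() ABI is our assumption for illustration; the actual sample's method signature may differ.

```typescript
import { ethers } from "ethers";

// Contract address reported in the attack. Reading from it is a free,
// read-only eth_call: no wallet, no gas, no contact with the attacker.
const C2_REGISTRY = "0xa1b40044EBc2794f207D45143Bd82a1B86156c6b";

// Assumed ABI fragment for the bulletin-board getter.
const abi = ["function getString() view returns (string)"];

async function fetchC2(): Promise<string> {
  // getDefaultProvider connects through public mainnet endpoints.
  const provider = ethers.getDefaultProvider("mainnet");
  const contract = new ethers.Contract(C2_REGISTRY, abi, provider);
  // The attacker can rewrite the stored string at any time to rotate
  // C2 servers without touching infected machines.
  return contract.getString();
}

fetchC2().then((addr) => console.log("current C2 endpoint:", addr));
```

From the attacker's side, rotating infrastructure costs one cheap contract transaction; from the defender's side, you can block the RPC traffic, but you can't remove the data source.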
The malware variants (which, surprisingly, aren't being flagged by VirusTotal) target dev environments with capabilities including:
System reconnaissance
Credential theft
OS-specific persistence (AutoStart files on Linux, Launch Agent configs on macOS)
What makes this attack particularly concerning is its potential access to CI/CD pipelines and build systems. If you're managing development environments, now's a good time to double-check your package management security controls and verify the authenticity of your testing utilities.
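Since packages like this typically detonate from an npm lifecycle hook at install time, one practical check is to list every installed dependency that declares one. The script below is our own rough sketch (not from the report), assuming a standard flat node_modules layout:

```typescript
// List every installed package that declares an npm lifecycle script.
// A hit isn't proof of malice (some native modules legitimately build on
// install), but it's a short list worth eyeballing after any incident.
import { readdirSync, readFileSync, existsSync } from "node:fs";
import { join } from "node:path";

const HOOKS = ["preinstall", "install", "postinstall"];

for (const name of readdirSync("node_modules")) {
  // Scoped packages (@scope/pkg) live one directory deeper.
  const dirs = name.startsWith("@")
    ? readdirSync(join("node_modules", name)).map((p) => join(name, p))
    : [name];

  for (const dir of dirs) {
    const manifest = join("node_modules", dir, "package.json");
    if (!existsSync(manifest)) continue; // skips .bin and stray files

    const pkg = JSON.parse(readFileSync(manifest, "utf8"));
    const hooks = HOOKS.filter((h) => pkg.scripts?.[h]);
    if (hooks.length > 0) {
      console.log(`${dir}: ${hooks.join(", ")}`);
    }
  }
}
```

Run it from your project root with any TypeScript runner; pairing it with npm's built-in --ignore-scripts install flag gives you a chance to review hooks before they ever execute.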
Read More Here
Google's AI Finds Its First Real-World Zero-Day
Remember Project Naptime, Google's framework for LLM-assisted vulnerability research? Well, it just leveled up. The team expanded it into "Big Sleep," a collaboration between Project Zero and DeepMind, and they've hit their first major milestone.
What's New: Big Sleep discovered an exploitable stack buffer underflow in SQLite's seriesBestIndex function. The bug? A failure to properly handle the special sentinel value -1 used for ROWID constraints in virtual tables. In release builds, this leads to a write below the aIdx buffer, corrupting pointer data. Definitely exploit material.
Here's what makes this discovery particularly spicy:
Traditional fuzzing missed it completely, even after 150 CPU-hours of AFL runs
The bug was caught before making it into an official release
It required understanding complex SQL query semantics and virtual table implementations
The AI not only found the bug but provided detailed root-cause analysis
The Big Sleep team suggests this could be a game-changer for defensive security. While fuzzing remains king for finding many types of bugs, AI seems particularly adept at catching variants of known vulnerabilities, a task that has traditionally required manual analysis by skilled researchers.
Read More Here
🔥 More Notes
📹 YouTube Spotlight
3 Coding Projects to Break the Coding Barrier (w/ Instructions Included)
Was this forwarded to you? Sign Up Here