Mellum: JetBrains’ New LLM Built for Developers
PLUS: IBM Just Dropped Their New Open-Source LLMs
Good Morning! JetBrains has unveiled Mellum, a specialized LLM that claims to slash code completion times and boost accuracy across multiple programming languages. IBM is making waves in the open-source world with its new Granite 3.0 family of LLMs, featuring impressive multilingual capabilities and enterprise-focused optimizations. And in a fascinating breakthrough for mathematics and GPU computing, a distributed team just discovered the largest known prime number using cloud-based NVIDIA GPUs, shattering the previous record with a mind-boggling 41 million digits.
Mellum: JetBrains’ New LLM Built for Developers
JetBrains just dropped some exciting news in the AI development space with their new LLM called Mellum. Unlike the sea of general-purpose LLMs out there, this one's laser-focused on what developers actually need – faster and smarter code completion.
Mellum is now powering JetBrains AI Assistant, and the numbers are pretty impressive. The model has cut completion latency to one-third of its previous level, and developers are accepting about 40% of its suggestions – a solid benchmark in the code completion world. The cancel rate has also dropped significantly (3-4x lower), suggesting the suggestions are more relevant than ever.
Technical Details: Currently, Mellum supports code completion for Java, Kotlin, Python, Go, and PHP, with more languages on the way through their Early Access Program. What sets it apart is its deep integration with JetBrains IDEs, creating a tight feedback loop between the client-side IDE and server-side AI logic. This means the model can offer highly contextual suggestions based on your project's specific needs.
For the privacy-conscious developers out there (and who isn't these days?), JetBrains has trained Mellum exclusively on publicly available, permissively licensed code. You can take it for a spin right now through JetBrains AI Assistant – might be worth checking out if you're looking to speed up your coding workflow.
Read More Here
IBM Just Dropped Their New Open-Source LLMs
IBM has just released Granite 3.0, their third-gen family of LLMs, with a special focus on enterprise applications. What makes this interesting is that unlike many recent model releases, these are fully open-source under Apache 2.0 - no proprietary strings attached.
What’s New: The star of the show is Granite 3.0 8B Instruct, a dense decoder-only model trained on a massive 12 trillion tokens across 12 natural languages and 116 programming languages. Early benchmarks show it going toe-to-toe with similar-sized models from Meta and Mistral, but with some extra enterprise-focused capabilities baked in.
The release includes several variants optimized for different use cases. For those working with limited resources, there are new Mixture of Experts (MoE) models - the 3B-A800M and 1B-A400M - which use just a fraction of their total parameters during inference. They've also introduced a speculative decoder that achieves a 220% speedup in tokens per step, and Guardian models specifically designed for safety checks and RAG-specific validations.
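The draft-and-verify idea behind speculative decoding is worth a quick sketch. Below is a toy greedy version with stand-in "models" (plain Python functions, not Granite's actual decoder): a cheap draft model proposes several tokens at once, and the expensive target model only has to check them, so the target advances multiple tokens per step when the two agree.

```python
def speculative_decode(target, draft, prompt, k=4, max_new=8):
    """Toy greedy speculative decoding: the cheap draft model proposes k
    tokens, the target model verifies them, and we keep the longest
    agreeing prefix plus one token from the target itself."""
    seq = list(prompt)
    while len(seq) - len(prompt) < max_new:
        # 1. Draft step: propose k tokens autoregressively with the cheap model.
        ctx = list(seq)
        proposed = []
        for _ in range(k):
            t = draft(ctx)
            proposed.append(t)
            ctx.append(t)
        # 2. Verify step: the target accepts matching tokens; at the first
        #    mismatch it substitutes its own token and the round ends.
        ctx = list(seq)
        accepted = []
        for t in proposed:
            want = target(ctx)
            accepted.append(want)
            ctx.append(want)
            if want != t:
                break
        else:
            accepted.append(target(ctx))  # all k accepted: free bonus token
        seq.extend(accepted)
    return seq[len(prompt):][:max_new]

# Stand-in "models": predict the next number in a counting sequence.
target_model = lambda ctx: ctx[-1] + 1
good_draft = lambda ctx: ctx[-1] + 1   # always agrees with the target
bad_draft = lambda ctx: ctx[-1] + 2    # never agrees with the target
```

The key property: the output is identical to the target model's own greedy decode no matter how bad the draft is – a poor draft only costs speed, never correctness, which is why a 220% tokens-per-step speedup is possible without changing the output distribution.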
Coming up by year-end: context windows expanding to 128K tokens and multimodal capabilities. If you want to take these for a spin, you can grab them from Hugging Face or try them on IBM's watsonx platform.
Read More Here
📹 YouTube Spotlight
New largest prime number found! See all 41,024,320 digits.
Stand-up Maths
Context: After nearly three decades of CPU dominance in finding record-breaking Mersenne primes, GPUs have finally taken the crown. On October 21, 2024, GIMPS (Great Internet Mersenne Prime Search) announced the discovery of 2^136279841 − 1, a behemoth prime number with 41,024,320 digits.
Technical Details:
The new prime (M136279841) is only the 52nd known Mersenne prime ever discovered
Discovery process involved two key steps:
Initial identification by NVIDIA A100 GPU in Dublin
Confirmation via Lucas-Lehmer test on H100 GPU in San Antonio
Surpasses previous record by 16+ million digits
Verification required multiple hardware platforms and programs for mathematical certainty
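The Lucas-Lehmer test mentioned above is remarkably compact – GIMPS runs this same recurrence, just with FFT-based multiplication on numbers millions of digits long. A minimal Python sketch, plus the standard digit-count formula for a Mersenne number:

```python
import math

def lucas_lehmer(p):
    """Lucas-Lehmer primality test for the Mersenne number M_p = 2^p - 1
    (valid for odd prime exponents p): start from s = 4, square-and-subtract
    p - 2 times mod M_p; M_p is prime iff the final residue is zero."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

def mersenne_digits(p):
    """Decimal digit count of 2^p - 1."""
    return math.floor(p * math.log10(2)) + 1
```

For example, `lucas_lehmer(7)` confirms M_7 = 127 is prime while `lucas_lehmer(11)` correctly rejects M_11 = 2047 = 23 × 89, and `mersenne_digits(136279841)` reproduces the record's 41,024,320 digits. The record-sized computation is of course infeasible in plain Python – each of the 136 million squarings involves a 41-million-digit number, which is where the GPUs come in.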
Former NVIDIA employee Luke Durant orchestrated this breakthrough by leveraging cloud-based GPUs across 17 countries. Using Mihai Preda's GPU-optimized software, Durant created a distributed "cloud supercomputer" that took advantage of discounted GPU resources during low-demand periods. This creative approach to accessing massive computational power marks a significant shift from traditional hardware-dependent prime hunting methods.
Read More Here
🔥 More Notes
Was this forwarded to you? Sign Up Here