VMware Exodus: Cloud Provider Ditches Platform After 1000% Price Hike, Reports Major Efficiency Gains

PLUS: New AWS re:Invent 2024 announcements

Good Morning! VMware loses major cloud provider Beeks Group to OpenNebula after a 1000% price hike, though Beeks is seeing surprising efficiency gains from the switch. AWS announced their next-gen Trainium3 AI chip at re:Invent 2024, promising 4x better performance when it launches in late 2025. Meanwhile, Clarifai released a new compute orchestration layer that lets companies run AI workloads anywhere while cutting costs by up to 90%.

VMware Exodus: Cloud Provider Ditches Platform After 1000% Price Hike, Reports Major Efficiency Gains

Context: Broadcom's acquisition of VMware last year brought significant changes to licensing and pricing structures, raising concerns across the VMware customer base.

UK-based cloud provider Beeks Group has made a dramatic shift from VMware to OpenNebula, migrating most of their 20,000+ virtual machines to the open-source platform. The catalyst? A whopping 1,000% price hike in software licensing fees under Broadcom's new regime.

The migration has yielded some surprising benefits for Beeks. Their shift to OpenNebula has resulted in:

  • 200% increase in VM efficiency, with more VMs running per server (a quick way to script a density check is sketched after this list)

  • Reduced management overhead for their 3,000 bare metal server fleet

  • Greater resource allocation for client workloads rather than VM management
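For teams eyeing a similar move, OpenNebula's XML-RPC API has official Python bindings (pyone) that make density checks like Beeks' easy to script. Here's a minimal sketch; the endpoint and credentials are placeholders, and the field names follow the standard host-pool schema.

```python
# Minimal sketch: a VM-density check against an OpenNebula frontend using
# pyone, OpenNebula's official Python bindings. The endpoint and
# credentials below are placeholders.
import pyone

one = pyone.OneServer(
    "http://frontend.example.com:2633/RPC2",  # placeholder XML-RPC endpoint
    session="oneadmin:change-me",             # placeholder credentials
)

hosts = one.hostpool.info()
for host in hosts.HOST:
    # RUNNING_VMS comes straight from each host's share info
    print(f"{host.NAME}: {host.HOST_SHARE.RUNNING_VMS} running VMs")
```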

Industry Implications: This high-profile migration signals growing discontent with Broadcom's enterprise-focused strategy. While Broadcom recently introduced an SMB-friendly subscription tier, the damage might already be done. Other major players like AT&T have reported similar dramatic price increases, with AT&T citing a potential 1,050% cost bump that led to legal action (since settled).

The migration trend is gaining momentum, with OpenNebula's CEO confirming that "several relevant organizations" are making similar moves. For tech leaders watching this space, it's worth noting that this isn't just about cost - customers are also citing concerns about declining support services and innovation under Broadcom's leadership.

Read More Here

New AWS re:Invent 2024 announcements

AWS re:Invent 2024 kicked off in Las Vegas with a bang, dropping major updates across their AI and compute infrastructure. The star of the show? Their next-gen AI chip, Trainium3, built on a 3-nanometer process node – a first for AWS.

What's New: Trainium3-powered instances are promising 4x better performance than their predecessors, but you'll have to wait until late 2025 to get your hands on them. In the meantime, AWS is rolling out Trainium2 instances for immediate use, offering 30-40% better price performance compared to GPU-based EC2 instances.

Each Trn2 instance packs:

  • 16 Trainium2 chips with NeuronLink interconnect

  • 20.8 peak petaflops of compute power

  • Ultra-fast chip-to-chip communication

  • Support for billion-parameter model training
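If you want to kick the tires once capacity shows up in your region, requesting a Trn2 instance is a standard EC2 call. Below is a minimal boto3 sketch; the trn2.48xlarge instance type matches AWS's announcement, but the AMI ID is a placeholder (substitute a Neuron-compatible image) and availability varies by region.

```python
# Minimal sketch: requesting a Trn2 instance through the standard EC2 API.
# The AMI ID is a placeholder; use a Neuron-compatible image for your
# region, and confirm trn2 availability there first.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="trn2.48xlarge",     # 16 Trainium2 chips per instance
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])
```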

Infrastructure at Scale: AWS isn't just stopping at chips. They're partnering with Anthropic on Project Rainier, building what they claim will be the world's largest AI compute cluster. We're talking hundreds of thousands of Trainium2 chips interconnected with third-gen petabit-scale EFA networking – that's more than five times the compute power Anthropic used for their current models.

AWS is making a serious play in the AI hardware space, focusing on both immediate availability with Trn2 instances and future-proofing with Trainium3. If you're working on large language models or other AI workloads, these developments could significantly impact your training pipelines and infrastructure costs.

Read More Here

Clarifai Drops New Compute Orchestration Layer - Say Goodbye to Vendor Lock-in

Clarifai just launched a vendor-agnostic compute orchestration layer that lets you run AI workloads literally anywhere - cloud, on-prem, or air-gapped environments. The cool part? It's hardware-agnostic too, so you can finally maximize those existing compute investments.

Here's what makes this release interesting for engineers and ML practitioners:

  • Resource Optimization: Uses model packing and smart dependency management to cut compute requirements by a factor of 3.7

  • Performance: Handles 1.6M+ inputs/second with 99.9997% reliability

  • Security-First Design: Deploys to your VPC or on-prem K8s without requiring inbound ports or VPC peering

  • Automated Scaling: Includes scale-to-zero for both model replicas and compute nodes (sketched after this list)

  • Cost Impact: Shows 60-90% cost reduction depending on configuration
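The scale-to-zero piece deserves a closer look, since that's where idle-cost savings come from: replicas are torn down when traffic stops and recreated on the next request. Here's a toy illustration of that control loop - a sketch of the general pattern, not Clarifai's actual implementation.

```python
import time

IDLE_TIMEOUT_S = 300  # arbitrary: scale to zero after 5 idle minutes


class ScaleToZeroPool:
    """Toy model of scale-to-zero: replicas are torn down when idle and
    recreated lazily on the next request (the cold start)."""

    def __init__(self):
        self.replicas = 0
        self.last_request = None

    def handle_request(self, payload):
        if self.replicas == 0:
            self.replicas = 1  # cold start: bring a replica back up
        self.last_request = time.monotonic()
        return f"served {payload!r} on {self.replicas} replica(s)"

    def reap_if_idle(self):
        # Run periodically by the orchestrator's control loop.
        idle = (self.last_request is not None
                and time.monotonic() - self.last_request > IDLE_TIMEOUT_S)
        if self.replicas > 0 and idle:
            self.replicas = 0  # scale to zero: stop paying for idle compute


pool = ScaleToZeroPool()
print(pool.handle_request("ping"))  # cold start: 0 -> 1 replica
```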

The platform handles all the heavy lifting - containerization, model packing, time slicing, and performance optimization. For teams tired of managing AI infrastructure or looking to break free from vendor lock-in, this could be a game-changer in simplifying AI deployment while keeping costs in check.
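The "model packing" mentioned above is essentially a bin-packing problem: fit as many models as possible onto each accelerator without blowing its memory budget. A first-fit-decreasing sketch of the general idea (with made-up model names and sizes; Clarifai hasn't published its actual packer) looks like this:

```python
# Toy first-fit-decreasing packer: assign models to accelerators by memory
# footprint. Model names and sizes are made up for illustration.
GPU_MEMORY_GB = 80  # e.g. one 80 GB accelerator per "bin"

models = {"llm-13b": 26, "llm-7b": 14, "vision": 8, "reranker": 4, "embedder": 2}

bins = []  # each bin: {"free": remaining GB, "models": [names]}
for name, size in sorted(models.items(), key=lambda kv: -kv[1]):
    for b in bins:
        if b["free"] >= size:  # first bin with room wins
            b["free"] -= size
            b["models"].append(name)
            break
    else:
        bins.append({"free": GPU_MEMORY_GB - size, "models": [name]})

for i, b in enumerate(bins):
    print(f"GPU {i}: {b['models']} ({GPU_MEMORY_GB - b['free']} GB used)")
```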

Read More Here

🔥 More Notes

📹 Youtube Spotlight

I Made The Ultimate Cheating Device
