NIST Approves Post-Quantum Cryptographic Algorithms
As quantum computing capabilities rapidly advance, the need to future-proof our cryptographic defenses has never been more pressing. The current encryption methods we rely on for securing digital communications and transactions could be rendered obsolete by the sheer power of quantum computers. Recognizing this looming threat, NIST has been spearheading a global effort to develop new encryption algorithms that can withstand the onslaught of quantum computing.
What's New: After an extensive six-year competition, NIST has now approved the first four post-quantum cryptographic algorithms to be included in their new standard. These algorithms, based on structured lattices and hash functions, are designed to be resistant against attacks from both classical and quantum computers.
The selected algorithms cover two critical use cases: general encryption for securing communications (CRYSTALS-Kyber) and digital signatures for authentication (CRYSTALS-Dilithium, FALCON, SPHINCS+). Reviewers praised the efficiency and performance of these new tools, which will be crucial for widespread adoption.
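Kyber's lattice mathematics is beyond the scope of this note, but the interface it standardizes, a key-encapsulation mechanism (KEM), is easy to sketch. The toy below implements that keygen / encapsulate / decapsulate shape over classical Diffie-Hellman purely for illustration; every name and parameter is made up for this sketch, the group is demo-sized, and a Diffie-Hellman construction is exactly the kind of scheme a large quantum computer running Shor's algorithm would break. Kyber keeps the same interface but replaces the modular exponentiation with lattice operations.

```python
import hashlib
import secrets

# Toy KEM built on classical Diffie-Hellman. Illustrative only:
# the parameters are far too weak for real use, and this whole
# construction is what post-quantum schemes are meant to replace.
P = 2**127 - 1  # a Mersenne prime; demo-sized, NOT a secure group
G = 3

def keygen():
    """Return (private_key, public_key) for the receiver."""
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

def encapsulate(public_key):
    """Sender side: return (ciphertext, shared_secret)."""
    r = secrets.randbelow(P - 2) + 1
    ciphertext = pow(G, r, P)
    shared = pow(public_key, r, P)  # (g^x)^r
    return ciphertext, hashlib.sha256(str(shared).encode()).digest()

def decapsulate(private_key, ciphertext):
    """Receiver side: recover the shared secret from the ciphertext."""
    shared = pow(ciphertext, private_key, P)  # (g^r)^x
    return hashlib.sha256(str(shared).encode()).digest()

sk, pk = keygen()
ct, key_sender = encapsulate(pk)
key_receiver = decapsulate(sk, ct)
assert key_sender == key_receiver  # both sides hold the same 32-byte key
```

The signature algorithms (Dilithium, FALCON, SPHINCS+) expose a different three-function shape, keygen / sign / verify, and serve authentication rather than key exchange.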
Looking Ahead: With these initial winners announced, NIST is now preparing to finalize the post-quantum cryptography standard within the next two years. However, the work doesn't end there. Organizations must proactively assess their cryptographic infrastructure and begin transitioning to these new algorithms before the arrival of large-scale quantum computers, a milestone that some experts predict could occur within the next 5-10 years.
Read More Here
GitHub Actions Artifacts Exposing Sensitive Data
GitHub Actions is a powerful CI/CD tool that allows developers to automate their software workflows. One of the key features of Actions is the ability to persist data between workflow jobs using build artifacts. However, researchers recently discovered that these artifacts can inadvertently leak sensitive information like cloud service tokens and GitHub repository credentials.
Palo Alto Networks researcher Yaron Avital found that misconfigured Actions workflows can result in artifacts containing secrets used by the workflow. This could allow anyone with read access to the repository to steal these credentials and potentially compromise the associated cloud services or push malicious code to the repo.
Some of the most serious leaks included GitHub personal access tokens, which could give an attacker full control over the repository. Even popular open source projects from companies like Google, Microsoft, and AWS were affected.
The Risks: By downloading the public artifact, an attacker can extract the leaked tokens and use them to gain unauthorized access. This could lead to remote code execution on the workflow runner, potentially compromising developer workstations and spreading through the organization.
Mitigation: The responsibility falls on developers to carefully review their Actions workflows. Limiting token permissions, avoiding storing secrets in artifacts, and scanning for secrets before upload can all help prevent these damaging leaks. GitHub has also released an updated version of Actions artifacts to address some of these issues.
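One of those mitigations, scanning for secrets before upload, can be approximated with a short pre-upload check. The sketch below is an assumption-laden illustration, not GitHub's tooling: the pattern names and the handful of regexes are mine, and real scanners such as gitleaks or trufflehog cover far more credential formats.

```python
import re

# Illustrative pre-upload scan for a few well-known token formats.
# Pattern list is deliberately tiny; a real scanner covers hundreds.
SECRET_PATTERNS = {
    "github_personal_access_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "github_actions_token": re.compile(r"\bghs_[A-Za-z0-9]{36}\b"),
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_text(text):
    """Return a list of (pattern_name, matched_string) findings."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        findings.extend((name, match) for match in pattern.findall(text))
    return findings

# Example: a build log that accidentally captured a (fake) token.
log = "Authenticating with ghp_" + "a" * 36 + " ..."
for name, match in scan_text(log):
    print(f"BLOCK UPLOAD: found {name}")
```

Running a check like this as a workflow step before `upload-artifact`, and failing the job on any finding, closes the window in which a leaked token sits in a publicly downloadable artifact.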
Read More Here
OpenAI Unleashes the Power of Fine-Tuning for GPT-4o
OpenAI
Developers have been eagerly awaiting the ability to fine-tune OpenAI's powerful GPT-4o language model to better suit their specific needs. And the wait is now over - OpenAI has officially launched fine-tuning for GPT-4o!
With this new capability, devs can now customize GPT-4o models using their own datasets, allowing them to improve performance and accuracy for their particular applications. From refining the structure and tone of responses to following complex domain-specific instructions, fine-tuning opens up a world of possibilities.
The best part? Developers can see strong results with as few as a few dozen examples in their training data. Whether you're working on a coding assistant, a creative writing tool, or something else entirely, fine-tuning can give your GPT-4o-powered apps a serious boost.
But OpenAI isn't stopping there. They've also implemented robust safety measures to ensure fine-tuned models aren't misused, including continuous automated evaluations and usage monitoring. And to sweeten the deal, the company is offering 1 million free training tokens per day until September 23.
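For a sense of what "your own datasets" looks like in practice: OpenAI's chat fine-tuning format is a JSONL file, one JSON object per line, each holding a full `messages` conversation that ends with the assistant reply you want the model to imitate. The sketch below builds a tiny (entirely made-up) dataset in that layout; the example rows and filename are illustrative.

```python
import json

# Two toy training examples in the chat fine-tuning JSONL layout:
# each line is one conversation ending with the target assistant reply.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a terse SQL assistant."},
            {"role": "user", "content": "Count rows in the orders table."},
            {"role": "assistant", "content": "SELECT COUNT(*) FROM orders;"},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are a terse SQL assistant."},
            {"role": "user", "content": "List distinct customer ids."},
            {"role": "assistant", "content": "SELECT DISTINCT customer_id FROM orders;"},
        ]
    },
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

From there, the file is uploaded through the Files API and referenced when creating a fine-tuning job; in production you'd want far more than two examples, though as noted above, a few dozen can already move the needle.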
Read More Here
🔥 More Notes
Hackers Exploit PHP Vulnerability to Deploy Stealthy Msupedge Backdoor: A previously undocumented backdoor named Msupedge has been used in a cyber attack targeting an unnamed university in Taiwan. The backdoor communicates with a command-and-control server via DNS traffic and was likely deployed by exploiting a recently disclosed critical flaw in PHP (CVE-2024-4577).
Procreate defies AI trend, pledges “no generative AI” in its illustration app: Procreate, a popular iPad illustration app, has announced that it will not incorporate any generative AI technology into its products. Procreate CEO James Cuda stated in a video, "I really fucking hate generative AI. I don't like what's happening in the industry and I don't like what it's doing to artists."
📹 YouTube Spotlight
Why Worldcoin's Eyeball Orb Wants to Learn Your Face | Hello World with Ashlee Vance
Was this forwarded to you? Sign Up Here