Somewhere in your engineering org right now, there's a pull request that has been open for four days. It has 847 changed lines across 23 files. It was mostly produced by an agent. It has two approvals, both of which took under three minutes — you can tell because the timestamps are right there in...
Author: Zoe Spark
The Open-Source Agent Security Disaster Is the Best Thing That Ever Happened to Anthropic and OpenAI
Somewhere in a Cisco security lab, researchers are running a tool called Skill Scanner against the most popular downloads on ClawHub, the skill marketplace for the open-source AI agent framework OpenClaw. One of them — a skill called "What Would Elon Do?" — returns nine security findings, including two critical and five high-severity issues. The...
The Web Has No API for Agents – Agentic Microformats
In February 2026, we pointed a browser-embedded AI agent at a demo e-commerce store and asked it to buy a laptop stand. It read the site's discovery file, parsed the page metadata, extracted six products with prices and availability, added three items to the cart via API, updated a quantity, removed one item, checked that...
When AI agents run a startup
In August 2025, journalist Evan Ratliff cofounded a startup staffed entirely by AI agents. Five virtual employees—each with email, Slack, phone capabilities, and their own synthetic voices—collaborated to build a product, run marketing, and handle operations. Three months later, HurumoAI had shipped working software and attracted genuine VC interest. It also nearly collapsed multiple times...
The AI That Pauses to Think: How Interleaved Reasoning Is Reshaping Autonomous Agents
When Moonshot AI demonstrated its Kimi K2 model tackling a PhD-level mathematics problem in hyperbolic geometry (according to examples published in its technical documentation), the AI didn't just compute an answer. It embarked on a 23-step journey: searching academic literature, running calculations, reconsidering its approach based on results, querying databases again, and iterating until it...
AI Hallucinations: Why They Happen and How We’re Tackling Them
AI hallucinations refer to instances where a model generates a confident response that sounds plausible but is factually incorrect or entirely fabricated. For example, an AI chatbot might cite a nonexistent legal case or invent a scientific-sounding explanation out of thin air. These aren’t intentional lies – they result from the way generative AI...
Claude’s Modular Mind: How Anthropic’s Agent Skills Redefine Context in AI Systems
If you've been building with large language models, you've hit this wall: every API call requires re-explaining your entire workflow. Financial reports need 500 tokens of formatting rules. Code generation needs another 300 tokens for style guides. Multiply this across thousands of requests, and you're paying twice—once in API costs, once in context window exhaustion....
OnPrem.LLM: Running private AI on your own terms—no cloud overlords required
The AI revolution has a dirty little secret: most organizations can't actually use it for their most important work. Sure, ChatGPT is great for brainstorming blog post ideas or debugging code snippets, but ask a hospital administrator if they'll send patient records to OpenAI's servers, or a financial services firm if they'll pipe proprietary trading...
Ask ChatGPT for five answers instead of one, and watch the boring disappear
If you've ever asked ChatGPT to write you a joke and gotten virtually the same setup-punchline combo every time, you've experienced what researchers call "mode collapse"—the AI equivalent of a one-track mind. Research published this week identifies the root cause of this repetitive behavior and proposes an elegantly simple solution: just ask the model to...
LoRA Without Regret: A Practitioner’s Guide to Reliable Fine-Tuning
In the early days of adapter-based tuning, LoRA often felt like a charming hack—efficient, plausible, but with a nagging question: would performance always trail full fine-tuning? New research from Thinking Machines, led by John Schulman (co-founder of OpenAI and creator of the PPO algorithm), argues that the difference is not inevitable. Under the right regime,...