The Agentic Digest

Vercel open sources deepsec AI security harness

6 min read · security · agents · infrastructure · cloud · devtools

For engineers, designers & product people. Stay up to date with our free daily digest.

TLDR: Vercel just shipped an AI security harness for your code, AWS is quietly building the agent ops layer, and we get a real-world look at agents shipping on Vercel.

Vercel open sources deepsec, an AI security harness for code

Vercel has open sourced deepsec (announced 2026-05-05), a security harness powered by coding agents that scans large codebases for vulnerabilities and runs entirely on your own infrastructure. You can run deepsec on a laptop without granting any cloud service privileged access to your source, and wire it to existing Claude or Codex subscriptions for inference.

The pitch is automated, agentic security review for big repos where traditional static analysis or manual review misses subtle issues. For AI engineers and platform teams, the key draw is a local, auditable workflow that fits existing security postures instead of pushing more code into third party SaaS. Scans on a single machine can take days, but deepsec can fan out research jobs in parallel when you have more hardware.

If you are already on Vercel, this is worth watching as a potential building block for continuous agent driven code review and security gates in CI, especially for fast moving agent stacks that change prompts and tools weekly.
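The fan out behavior described above is easy to sketch in the abstract. The snippet below is not deepsec's actual interface; `scan_directory` is a hypothetical stand-in for a per-shard agentic scan, and the point is only the shard-and-parallelize shape:

```python
# Sketch of fanning out per-shard security scans in parallel. The
# scan function is a hypothetical placeholder, not deepsec's API.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def scan_directory(path: str) -> list[str]:
    # Placeholder: a real harness would run an agentic review of this
    # shard of the repo and return its findings.
    return [f"reviewed {path}"]

def fan_out_scan(repo_root: str, max_workers: int = 4) -> list[str]:
    # Shard the repo by top-level directory and scan shards concurrently;
    # with more hardware, each shard could instead go to its own machine.
    shards = sorted(str(p) for p in Path(repo_root).iterdir() if p.is_dir())
    findings: list[str] = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        for result in pool.map(scan_directory, shards):
            findings.extend(result)
    return findings
```

On a single box this just bounds concurrency; the same sharding is what lets a harness hand shards to separate machines when scans would otherwise take days.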

Read more →


General Intelligence scales its Cofounder agent platform on Vercel

General Intelligence describes how it used agents to build and run its Cofounder agent platform on Vercel, with an 8 person team shipping around 10 pull requests and 70+ commits per engineer per day. The company runs Cofounder as a multi tenant app on Vercel, with more than 4,000 preview branches and roughly 100 parallel app versions live at any time, and claims that about 90% of site reliability engineering work is automated via Vercel and its own agent.

For anyone building production agents, this is a concrete case study of running a high change rate agent platform on a serverless stack instead of a bespoke Kubernetes cluster. The managed Vercel account that Cofounder provisions for every customer is doing a lot of heavy lifting around isolation, previews, and deployments. The tradeoff is tight coupling to one hosting provider and some opacity around deeper reliability numbers.

The interesting part for your roadmap is how they use agents not only as the product, but as infra co pilots that manage operations at scale. That pattern is likely to spread as tool use and trace based optimization keep improving.

Read more →


AWS previews AgentCore Optimization for ongoing agent quality

Amazon SageMaker AI has introduced AgentCore Optimization in preview (announced 2026-05-05), framing an “agent quality loop” that turns production traces into recommendations, validates them with batch evaluation and A/B tests, and rolls out updates. The idea is that AI agents that launch strong still drift as models, prompts, and user behavior change, so you need a systematic feedback and deployment pipeline.

For teams already on AWS, this is effectively an observability and ops layer tuned for agents rather than generic ML models. You get a more opinionated stack for taking logs, turning them into candidate prompt or policy changes, and testing these changes before they touch real users. The details and guardrails matter, and there are not yet independent benchmarks or customer case studies.

If you are rolling your own evaluation pipelines for retrieval augmented generation (RAG) or tool using agents, it is worth evaluating whether this can simplify the plumbing, or at least inform your own internal “quality loop” architecture.
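AWS has not published the API surface here, but the loop it describes, traces in, candidate changes out, gated by evaluation, can be sketched abstractly. Everything below (`Candidate`, `batch_evaluate`, the promotion margin) is an illustrative stand-in, not AgentCore Optimization's interface:

```python
# Generic "agent quality loop": score a candidate prompt against the
# current one on a batch of recorded traces, promote only if it wins.
# All names here are illustrative, not AgentCore Optimization's API.
from dataclasses import dataclass

@dataclass
class Candidate:
    prompt: str
    score: float = 0.0

def batch_evaluate(prompt: str, traces: list[dict]) -> float:
    # Placeholder scorer: fraction of traces whose expected answer
    # shows up when replayed. A real loop would call a judge model.
    hits = sum(1 for t in traces if t["expected"] in t["replay"](prompt))
    return hits / len(traces)

def quality_loop(current: Candidate, proposal: Candidate,
                 traces: list[dict], margin: float = 0.02) -> Candidate:
    # Roll out the proposal only if it beats current by a margin,
    # mirroring the batch-eval-then-A/B gate described above.
    current.score = batch_evaluate(current.prompt, traces)
    proposal.score = batch_evaluate(proposal.prompt, traces)
    return proposal if proposal.score > current.score + margin else current
```

Even if you adopt the managed service, this shape is the thing to preserve: no candidate change reaches users without beating the incumbent on replayed production traces.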

Read more →


Quick Hits

  • AI Outperforms Doctors in Emergency Diagnosis Study (Let's Data Science). A peer reviewed Science paper reports a large language model beating physicians on some emergency department diagnostic tasks. For healthcare builders, the real work now is randomized trials, workflow integration, and regulatory clearance.

  • Capacity-aware inference: Automatic instance fallback for SageMaker AI endpoints. Amazon SageMaker AI now lets you define prioritized instance pools so endpoints can fall back automatically when a preferred GPU type is unavailable. This should reduce provisioning failures and manual reconfiguration for latency sensitive agents.

  • Introducing Dataset Q&A for Amazon QuickSight. AWS has added Dataset Q&A so business users can run multi dataset, natural language queries over QuickSight data. Useful if your agents need to surface BI answers without you building custom SQL orchestration.

  • Citi introduces platform for AI agent rollout. Citi is launching its Arc platform to let internal developers build narrowly scoped agents first, then eventually expose access more broadly. This is another signal that big banks are standardizing on internal agent platforms rather than one off pilots.

  • How OpenAI delivers low-latency voice AI at scale. OpenAI explains how it rebuilt its WebRTC stack to support real time, turn taking voice AI globally. If you are building voice agents, the architecture notes around congestion control and streaming codecs are worth a read.

  • OpenAI and PwC collaborate to reimagine the office of the CFO. OpenAI and PwC are packaging agent workflows for forecasting, controls, and finance operations. Expect more off the shelf “CFO agent” style offerings that you may need to integrate with or compete against.

  • I used AI to code a personal trainer app in one weekend. A reporter uses Manus, a general purpose AI agent that can code, to ship a fitness app over a weekend. It is anecdotal, but it is a good proxy for how non engineers will increasingly prototype products.

  • Amazon rolls out Claude Code and Codex internally. Business Insider reports that Amazon is deploying Claude Code and Codex across employees after internal pushback. Large scale internal agent adoption at hyperscalers is a strong validation signal for coding copilots.

  • Redis Array Playground. Salvatore Sanfilippo has proposed a native array type for Redis with a long list of AR* commands. For agent infra, this could simplify storing and iterating over structured sequences in hot paths.

  • Granite 4.1 3B SVG Pelican Gallery. IBM’s Apache 2.0 Granite 4.1 models get a 3B fine tune that generates SVG pelican art. Fun demo plus a reminder that small, open models remain attractive for cost sensitive agent workloads.

  • Bun is being ported from Zig to Rust. The Bun JavaScript runtime is moving its implementation language from Zig to Rust. For agent backends that lean on Bun, expect a long tail of performance and ecosystem changes as the port matures.

More from the Digest


© 2026 The Agentic Digest