The Agentic Digest

OpenAI president claims AI now writes 80% of your code

· 5 min read · agentic-ai · software-engineering · llms · cloud-infra

For engineers, designers & product people. Stay up to date with our free daily digest.

TLDR: OpenAI says agents now write most of your code, AWS ships a serious LLM migration playbook, and Sun Finance posts eye-popping identity verification (IDV) numbers on Bedrock.

OpenAI president: agentic tools now write “80% of your code”

In a Sequoia Capital talk, OpenAI president Greg Brockman said agentic coding tools jumped from writing 20% to 80% of software engineers’ code between December and now (as of 2026-05-01). He framed the shift as AI moving from a “sideshow” to the main act in software development, with models now orchestrating multi-step coding tasks rather than just providing autocomplete.

If those numbers hold up in real workflows, it is a signal that agent-based IDE copilots and CLI agents are close to becoming the default for greenfield code. For engineering leaders, that means revisiting hiring profiles, coding guidelines, and review practices to handle machine-generated code at scale. The big missing piece is detail: no repos, benchmarks, or methodology yet, so treat the 80% as directional, not a hard metric.

Read more →


AWS publishes framework for LLM migration in production

Amazon Web Services released the “AWS Generative AI Model Agility Solution” (as of 2026-05-01), a framework for migrating and upgrading large language models (LLMs) in production environments. The guide walks through prompt conversion, evaluation, rollout, and tooling patterns so you can swap models without breaking downstream apps.

This is aimed squarely at teams stuck on “v1 forever” because model changes are scary to deploy. It covers prompt normalization, test harnesses, and operational playbooks so you can move between Amazon Bedrock, self-hosted models, or third-party APIs with less risk. The post is light on hard benchmarks, but rich in architecture diagrams and concrete mechanisms.

If you maintain multi-tenant generative AI services or internal platforms, this is worth a careful read; it is essentially AWS’s opinionated spec for model lifecycle management in enterprises.
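The prompt-normalization idea at the heart of this kind of framework can be sketched in a few lines: callers build one canonical prompt, and per-model adapters render it into each provider’s wire format. Everything below (`PromptSpec`, the adapter names, the registry keys) is illustrative, not AWS’s actual API; it just shows the pattern under assumed names.

```python
# Sketch of a model-agnostic prompt layer: swap models by adding an
# adapter instead of rewriting prompts scattered across the codebase.
from dataclasses import dataclass
from typing import Callable

@dataclass
class PromptSpec:
    """Canonical, model-neutral prompt: callers only ever build this."""
    system: str
    user: str

def to_messages_format(spec: PromptSpec) -> dict:
    # Chat-style models take a system field plus a messages list.
    return {
        "system": spec.system,
        "messages": [{"role": "user", "content": spec.user}],
    }

def to_plain_instruct(spec: PromptSpec) -> dict:
    # Many instruct-tuned models take one flat prompt string.
    return {"prompt": f"{spec.system}\n\n{spec.user}"}

# Hypothetical registry keyed by model family; real Bedrock model IDs
# would map to whichever request shape that model expects.
ADAPTERS: dict[str, Callable[[PromptSpec], dict]] = {
    "chat-messages": to_messages_format,
    "generic-instruct": to_plain_instruct,
}

def render(model_family: str, spec: PromptSpec) -> dict:
    """Render the canonical prompt for one model family."""
    return ADAPTERS[model_family](spec)
```

With this in place, a migration test harness can render the same `PromptSpec` for the old and new model, run both, and diff the outputs before rollout.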

Read more →


Sun Finance shares real numbers on ID verification with Bedrock

Sun Finance detailed how it built an AI-powered identity verification and fraud detection pipeline on Amazon Web Services using Amazon Bedrock, Amazon Textract, and Amazon Rekognition. The company reports (as of 2026-05-01) extraction accuracy improving from 79.7% to 90.8%, per-document costs dropping 91%, and latency falling from up to 20 hours to under 5 seconds.

The architecture uses Textract for optical character recognition (OCR), Rekognition for image checks, and a large language model from Amazon Bedrock to structure and validate the extracted data. That hybrid pattern is notable: specialized vision plus a general LLM outperformed either alone. For anyone building know-your-customer (KYC), fintech onboarding, or document-heavy workflows, this is a solid reference design with concrete metrics instead of aspirational claims.

The system is also fully serverless, which matters if you want to scale up and down with spiky verification traffic without running a dedicated cluster.
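To make the hybrid pattern concrete, here is a sketch of the validation step that would sit after Textract/Rekognition extraction and LLM structuring. The field names, confidence threshold, and decision labels are assumptions for the example, not details from the case study; in the real pipeline, AWS service calls would feed this function.

```python
# Illustrative post-extraction validator: decide whether a structured
# ID document can be auto-approved or must go to manual review.
from datetime import date, datetime

REQUIRED_FIELDS = {"document_number", "full_name", "date_of_birth", "expiry_date"}
MIN_CONFIDENCE = 0.85  # assumed per-field OCR confidence floor

def validate_extraction(fields: dict[str, str],
                        confidences: dict[str, float],
                        today: date) -> dict:
    """Return {'decision': 'pass' | 'review', 'issues': [...]}."""
    issues = []
    for name in REQUIRED_FIELDS:
        if not fields.get(name):
            issues.append(f"missing:{name}")
        elif confidences.get(name, 0.0) < MIN_CONFIDENCE:
            issues.append(f"low_confidence:{name}")
    # Structural check the LLM-structured output should satisfy.
    try:
        expiry = datetime.strptime(
            fields.get("expiry_date", ""), "%Y-%m-%d").date()
        if expiry < today:
            issues.append("expired_document")
    except ValueError:
        issues.append("bad_date:expiry_date")
    return {"decision": "pass" if not issues else "review", "issues": issues}
```

Keeping this step as a pure function is what makes a spiky, serverless pipeline easy to test: the OCR, face-check, and LLM calls happen upstream, and the decision logic stays deterministic.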

Read more →

