OpenAI debuts workspace agents inside ChatGPT
For engineers, designers & product people. Stay up to date with our free daily digest.
TLDR: OpenAI ships workspace-native agents and faster WebSocket loops, while AWS pushes Bedrock AgentCore closer to one-click production agents.
OpenAI adds workspace agents directly into ChatGPT
As of 2026-04-23, OpenAI is rolling out workspace agents inside ChatGPT that automate repeatable workflows, connect tools, and coordinate team operations. These agents live in a team context, so they can use shared data, tools, and policies instead of per-user ad hoc setups.
For engineering teams this shifts “ChatGPT as a helper” toward “ChatGPT as a shared runbook executor.” You can wire agents to internal APIs, knowledge bases, and task systems, then let non-engineers trigger complex workflows safely. The catch: you still inherit OpenAI’s security and compliance model, so regulated shops will need to evaluate data flows carefully.
The big question is how much access control and observability OpenAI exposes. If you get granular scopes, logs, and approval steps, these workspace agents start to look like a serious alternative to homegrown orchestration. Also covered by: OpenAI Blog
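None of this access-control surface is public yet, but the shape the question implies — granular scopes, approval steps, and an audit trail — fits in a short sketch. Everything below is hypothetical: `ToolPolicy`, `WorkspaceAgent`, and the scope strings are illustrative names, not OpenAI's API.

```python
from dataclasses import dataclass, field

@dataclass
class ToolPolicy:
    allowed_scopes: set   # scopes the workspace grants this agent
    needs_approval: set   # tools that require a human sign-off first

@dataclass
class WorkspaceAgent:
    policy: ToolPolicy
    audit_log: list = field(default_factory=list)

    def call_tool(self, tool: str, scope: str, approved: bool = False) -> dict:
        """Gate every tool call on scope and approval state, and log the outcome."""
        if scope not in self.policy.allowed_scopes:
            self.audit_log.append(("denied", tool, scope))
            return {"status": "denied", "reason": f"missing scope {scope}"}
        if tool in self.policy.needs_approval and not approved:
            self.audit_log.append(("pending", tool, scope))
            return {"status": "pending_approval", "tool": tool}
        self.audit_log.append(("executed", tool, scope))
        return {"status": "executed", "tool": tool}

agent = WorkspaceAgent(ToolPolicy(
    allowed_scopes={"tickets:read", "tickets:write"},
    needs_approval={"close_ticket"},
))
print(agent.call_tool("list_tickets", "tickets:read"))      # status: executed
print(agent.call_tool("close_ticket", "tickets:write"))     # status: pending_approval
print(agent.call_tool("delete_project", "projects:admin"))  # status: denied
```

If OpenAI exposes anything like this trio — scopes, approvals, and a queryable log — homegrown orchestration loses much of its remaining advantage.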
OpenAI uses WebSockets to speed up agentic workflows
As of 2026-04-23, OpenAI detailed how the OpenAI Responses API uses WebSockets and connection-scoped caching to reduce overhead in Codex-style agent loops. The post walks through an end-to-end coding agent that streams tokens over a long-lived WebSocket instead of making repeated HTTP calls.
For anyone building multi-step agents, the main win is less per-step latency and lower coordination cost. Connection-scoped caching lets you reuse context like tools, files, and instructions over a long-lived WebSocket so each step does not pay the full setup cost. This particularly matters for coding and research agents that call the model dozens of times per task.
You will still have to manage connection lifecycle, backoff, and observability, especially behind serverless gateways. But if you are currently orchestrating agents over stateless HTTP, this is a strong signal to revisit your transport and caching model.
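The caching argument is easy to make concrete with a toy comparison (no real API calls; the payload shapes are illustrative assumptions): a stateless HTTP loop resends tools, files, and instructions on every step, while a connection-scoped loop sends that context once per connection and then only per-step deltas.

```python
import json

class StatelessHTTPLoop:
    """Every step resends the full context (tools, files, instructions)."""
    def __init__(self, context: dict):
        self.context = context

    def payload_for_step(self, step_input: str) -> dict:
        return {**self.context, "input": step_input}

class ConnectionScopedLoop:
    """Context goes over the wire once per connection; later steps send only the delta."""
    def __init__(self, context: dict):
        self.context = context
        self._context_sent = False

    def payload_for_step(self, step_input: str) -> dict:
        if not self._context_sent:
            self._context_sent = True
            return {**self.context, "input": step_input}
        return {"input": step_input}

# A coding agent might carry this context across dozens of steps per task.
context = {
    "tools": ["read_file", "run_tests", "apply_patch"],
    "instructions": "Fix the failing test, then open a PR.",
}
http_loop = StatelessHTTPLoop(context)
ws_loop = ConnectionScopedLoop(context)

http_bytes = sum(len(json.dumps(http_loop.payload_for_step(f"step {i}"))) for i in range(10))
ws_bytes = sum(len(json.dumps(ws_loop.payload_for_step(f"step {i}"))) for i in range(10))
print(http_bytes, ws_bytes)  # connection-scoped total is far smaller
```

The real savings compound: besides bytes on the wire, each stateless step also pays connection setup and server-side context re-processing that a persistent connection amortizes away.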
AWS upgrades Bedrock AgentCore for faster production agents
As of 2026-04-23, Amazon Web Services is expanding Amazon Bedrock AgentCore with new capabilities aimed at getting a first working agent running in minutes and carrying it through to production deployment. The update focuses on removing infrastructure friction at each step of the lifecycle.
For AWS-heavy teams this is essentially “agent platform as a managed service.” You get opinionated scaffolding around orchestration, state, and deployment instead of wiring Step Functions, Lambda, and Bedrock together manually. That can reduce time to a first POC and provide a clearer path to staging and production with guardrails and monitoring.
The tradeoff is platform lock-in and less control over the fine details of your runtime. If you need custom infra primitives or multi-cloud deployment, AgentCore will feel constrained. If you just want agents that talk to your AWS data and ship behind an API quickly, these features are worth a close look.
Quick Hits
Cost-effective multilingual audio transcription at scale with Parakeet-TDT and AWS Batch This guide shows how to build an event-driven transcription pipeline on Amazon Simple Storage Service (Amazon S3) with Parakeet-TDT, buffered streaming inference, and Amazon EC2 Spot Instances to cut multilingual speech-to-text costs.
Company-wide memory in Amazon Bedrock with Amazon Neptune and Mem0 AWS and Mem0 outline a pattern for persistent company-specific memory for Bedrock agents using Amazon Neptune, with a Trend Micro chatbot case study that keeps user context across sessions.
Introducing OpenAI Privacy Filter As of 2026-04-23, OpenAI released an open-weight model for detecting and redacting personally identifiable information in text, offering state-of-the-art PII filtering you can self-host inside agent pipelines.
The shift from telcos to Intent-Nets Telecoms.com sketches a future where intent-based autonomous agents manage networks across business, intent, and network layers, so engineers specify goals like availability instead of low-level configs.
How to democratise data science in manufacturing The Manufacturer profiles Cortex Code, which turns natural language requests from plant supervisors into integrated manufacturing data models, shifting data shaping power away from a small analytics elite.
Show HN: Broccoli, one-shot coding agent on the cloud Broccoli is an open-source harness that takes coding tasks from Linear, runs them in isolated cloud sandboxes, then opens pull requests for review (GitHub repo), so you can queue parallel agent tasks instead of juggling local sessions.
Decoding China’s policy-driven blockchain evolution A Nature paper presents a multi-agent collaborative analytical framework on Coze, where each module is a separate agent with a strict system prompt, plus open-sourced data and code for reproducible policy analysis.
Shopify’s AI Phase Transition with Mikhail Parakhin Latent Space interviews Shopify’s CTO on their 2026 AI usage spike, unlimited internal Opus 4.6 token budget, and in-house tools like Tangle, Tangent, and SimGym for agent evaluation.
Gemma 4 VLA Demo on Jetson Orin Nano Super NVIDIA and Hugging Face demo a small vision-language agent stack that runs locally on the Jetson Orin Nano Super, using Parakeet speech-to-text, Gemma 4, a webcam, and Kokoro text-to-speech for voice-first interactions.
Speed Matters: Why AI Software Vulnerability Exploitation is going to be bad A Hacker News post argues that AI-accelerated vulnerability discovery will outpace patch deployment, raising concerns about Mythos-like systems finding and exploiting bugs faster than defenders can respond.
Qwen3.6-27B: Flagship-Level Coding in a 27B Dense Model As of 2026-04-22, Simon Willison reviews Qwen3.6-27B’s claim of flagship-level open-weight coding performance, reportedly beating the larger Qwen3.5-397B-A17B on major coding benchmarks, which is promising for self-hosted agents.