Vercel’s Claude plugin accused of logging user prompts
For engineers, designers & product people. Stay up to date with our free daily digest.
TLDR: Vercel’s Claude Code plugin stirred a telemetry privacy fight, Hugging Face shipped multimodal embeddings, and a YC startup is aiming agents at on-call runbooks.
Vercel Claude Code plugin sparks telemetry and privacy concerns
An engineer reported that the Vercel plugin inside Claude Code attempts to send user prompts and responses to Vercel’s telemetry endpoint by default, raising alarms about leakage of sensitive code and data. The blog post shows network captures in which prompts are posted to telemetry.vercel.com while the plugin is active, and explains how this happens without an obvious consent step.
For anyone using Anthropic’s Claude for code with the Vercel plugin enabled, this is a concrete reminder that “IDE helpers” often double as analytics feeds. If your prompts include production secrets, customer data, or proprietary logic, shipping them to a third-party analytics service is a compliance issue, not just a vibe. The post argues that today’s disclosure and opt-in mechanisms are inadequate and calls for stricter defaults.
If you run AI tooling in regulated environments, you likely need to audit every plugin integration and set clear allowlists or network egress rules as of 2026-04-10.
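The audit step reduces to an egress check: compare the outbound hosts you actually observe (from a proxy log or packet capture) against an explicit allowlist. A minimal sketch; the allowlist entries are illustrative assumptions, not a recommended policy, and telemetry.vercel.com is the endpoint named in the report.

```python
# Minimal egress-allowlist check, assuming you already have a list of
# outbound hostnames from a proxy log or packet capture.
# The allowed hosts below are illustrative, not a recommended policy.
ALLOWED_HOSTS = {"api.anthropic.com", "registry.npmjs.org"}

def audit_egress(observed_hosts, allowed=ALLOWED_HOSTS):
    """Return every observed host that the egress policy does not allow."""
    return sorted(set(observed_hosts) - set(allowed))

violations = audit_egress([
    "api.anthropic.com",
    "telemetry.vercel.com",  # the endpoint flagged in the blog post
])
print(violations)  # ['telemetry.vercel.com']
```

In practice the same comparison can be enforced rather than merely reported, via a forward proxy or firewall rules, but the list-diff above is the core of any plugin audit.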
Hugging Face adds multimodal sentence transformers for RAG
Hugging Face released Sentence Transformers v5.4, which extends the library to encode text, images, audio, and video with a single Python API for embeddings and rerankers. The new models support multimodal retrieval augmented generation (RAG), semantic search, and reranking, and plug into the existing sentence-transformers workflow.
For agent builders this means you can index screenshots, diagrams, short clips, or voice notes alongside docs, then retrieve relevant items across modalities with one stack. That simplifies tasks like “explain this chart” or “find the incident video that matches this log pattern.” Per-modality benchmark and latency details are still sparse as of 2026-04-10, so you will want to test recall quality and performance on your own data.
If your agents live inside products with a lot of non-text context, this is a practical way to make that data retrievable without gluing together several separate embedding systems.
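Once every modality embeds into one shared vector space, the cross-modal retrieval step is just nearest-neighbor search over that space. A minimal sketch, with toy three-dimensional vectors standing in for real model output (the filenames and numbers are invented for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy shared-space embeddings; in practice each item would be encoded by the
# matching multimodal encoder and vectors would have hundreds of dimensions.
index = {
    ("image", "latency_chart.png"):   [0.9, 0.1, 0.0],
    ("text",  "incident_writeup.md"): [0.8, 0.2, 0.1],
    ("audio", "standup_note.wav"):    [0.1, 0.9, 0.2],
}

def retrieve(query_vec, index, k=2):
    """Rank indexed items of any modality by similarity to the query vector."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, index[item]), reverse=True)
    return ranked[:k]

# A text query like "explain this chart" would be embedded into the same
# space; here we fake its vector directly.
hits = retrieve([0.85, 0.15, 0.05], index)
print(hits)  # the image and text items rank above the audio clip
```

The point of the single-stack pitch is exactly this: one index, one similarity function, and the modality becomes just a tag on each item.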
Relvy launches AI agents for automated on-call runbooks
Relvy AI, a Y Combinator Winter 2024 startup, launched an AI agent that automates on-call runbooks by analyzing telemetry and code to help debug production issues. Their Launch HN post describes agents that connect to logs, traces, metrics, and repos, then run a structured diagnosis and remediation workflow instead of just summarizing pasted logs.
If you maintain microservices or noisy observability stacks, this is squarely aimed at you. The promise is fewer copy-paste sessions into ChatGPT and more “here is the likely root cause, plus the runbook step I used to verify it.” The caveat: this is early stage, so coverage of bespoke infra, weird legacy stacks, and corner-case incidents is likely thin as of 2026-04-10, and you still own the blast radius.
For teams experimenting with production agents, Relvy is an example of a vertically focused agent with tools, guardrails, and clear success metrics rather than a general chatbot.
Quick Hits
Introducing stateful MCP client capabilities on Amazon Bedrock AgentCore Runtime – Amazon Web Services shows how to build stateful Model Context Protocol (MCP) servers that can request user input mid-run, call large language model sampling, and stream progress updates for long-running tasks, which is useful if your agents need rich, interactive workflows.
Embed a live AI browser agent in your React app with Amazon Bedrock AgentCore – This tutorial walks through wiring up a Bedrock-backed AI agent that drives a browser session users can watch in real time, useful for building “copilot that clicks for you” style features without custom remote-control plumbing.
TAM launches Mawrid – a generative AI solution that unlocks institutional knowledge – Mawrid is a regional enterprise search and question-answering product that unifies documents, reports, and policies into one index, effectively a managed retrieval-augmented generation layer for institutional knowledge.
Deep Agents Deploy: an open alternative to Claude Managed Agents – LangChain introduced Deep Agents Deploy, a beta service for hosting model-agnostic agents on top of the open Deep Agents stack, so you can run managed-style agents on your own infra or cloud of choice.
StarlightSearch Launches Reflect: Utility-Ranked Memory System for Self-Improving AI Agents – Reflect is a memory layer that ranks retrieved “lessons” by downstream outcome instead of just vector similarity, aiming to help long-lived agents prioritize strategies that have actually worked in production.
Understanding Amazon Bedrock model lifecycle – Amazon Web Services outlines how foundation model lifecycle states and extended access work, so you can plan migrations and avoid surprises when Bedrock models are deprecated or upgraded.
Detecting IoT Malware in EV Chargers with Deep Learning – Researchers propose a multimodal deep learning pipeline that combines structural, statistical, and semantic features using a Pcode representation, achieving modest F1 gains for cross-architecture EV charger malware detection.
Show HN: Druids – Build your own software factory – Druids is an open-source Python library for orchestrating multi-agent coding workflows, so you can define agent roles and coordination logic without hand-rolling distributed infrastructure.
asgi-gzip 0.3 – Simon Willison details a subtle bug with gzip compression of Server-Sent Events in ASGI apps and ships asgi-gzip 0.3, which matters if your agent backends stream responses over SSE.
GitHub Repo Size – A tiny web tool that calls GitHub’s public API to show repository sizes, handy when you are deciding whether to mirror or mount a repo into an agent’s working filesystem.
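The gzip-plus-SSE failure mode in the asgi-gzip item is worth internalizing if you stream: a gzip compressor buffers small writes, so an event can sit inside the compressor until someone performs a sync flush. A stdlib-only sketch of the symptom (this is an illustration of the general buffering behavior, not asgi-gzip’s actual code):

```python
import zlib

# Compress an SSE event the way a naive gzip middleware might: feed bytes in,
# forward whatever the compressor emits, and never flush per event.
compressor = zlib.compressobj(wbits=31)  # wbits=31 selects the gzip container
stalled = compressor.compress(b"data: tick\n\n")

# Simulate the browser: decompress only what has actually been sent so far.
client = zlib.decompressobj(wbits=31)
assert client.decompress(stalled) == b""  # the event is stuck in the buffer

# A streaming-aware middleware must sync-flush after each event so the
# client sees it immediately rather than after the response ends.
flushed = compressor.flush(zlib.Z_SYNC_FLUSH)
assert client.decompress(flushed) == b"data: tick\n\n"
```

The fix in any streaming-compression layer amounts to calling the equivalent of `Z_SYNC_FLUSH` at each event boundary, trading a little compression ratio for timely delivery.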