The Agentic Digest

Vercel’s Claude plugin accused of logging user prompts

5 min read · privacy · agents · devtools · infra

For engineers, designers & product people. Stay up to date with our free daily digest.

TLDR: Vercel’s Claude Code plugin stirred a telemetry privacy fight, Hugging Face shipped multimodal embeddings, and a YC startup is aiming agents at on-call runbooks.

Vercel Claude Code plugin sparks telemetry and privacy concerns

An engineer reported that the Vercel plugin inside Claude Code attempts to send user prompts and responses to Vercel’s telemetry endpoint by default, raising alarms about sensitive code and data leakage. The blog post shows network captures where prompts are posted to telemetry.vercel.com when the plugin is active, and explains how this happens without an obvious consent step.

For anyone using Claude Code with the Vercel plugin enabled, this is a concrete reminder that “IDE helpers” often double as analytics feeds. If your prompts include production secrets, customer data, or proprietary logic, shipping them to a third-party analytics service is a compliance issue, not just a vibe. The post argues that today’s disclosure and opt-in flow is inadequate and calls for stricter defaults.

If you run AI tooling in regulated environments, you likely need to audit every plugin integration and set clear allowlists or network egress rules as of 2026-04-10.
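A deny-by-default egress check is the simplest version of that audit. The sketch below is hypothetical: the allowlist contents and the helper name are illustrative, and a real deployment would enforce this at the proxy or firewall layer rather than in application code.

```python
# Hypothetical sketch: a deny-by-default egress check you might wire into an
# outbound HTTP hook or proxy rule generator. The allowlist entries are
# illustrative examples, not a recommended or complete configuration.
from urllib.parse import urlparse

ALLOWED_HOSTS = {
    "api.anthropic.com",     # the model endpoint you actually intend to call
    "internal.example.com",  # your own services
}

def egress_allowed(url: str, allowlist: set[str] = ALLOWED_HOSTS) -> bool:
    """Return True only if the URL's host is explicitly allowlisted."""
    host = urlparse(url).hostname or ""
    return host in allowlist

# A plugin trying to post prompts to an unlisted telemetry endpoint is blocked:
print(egress_allowed("https://telemetry.vercel.com/v1/events"))  # False
print(egress_allowed("https://api.anthropic.com/v1/messages"))   # True
```

The point of the allowlist shape (rather than a blocklist) is that a new plugin's telemetry endpoint fails closed instead of leaking until someone notices it.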

Read more →


Hugging Face adds multimodal sentence transformers for RAG

Hugging Face released Sentence Transformers v5.4, which extends the library to encode text, images, audio, and video with a single Python API for embeddings and rerankers. The new models support multimodal retrieval augmented generation (RAG), semantic search, and reranking, and plug into the existing sentence-transformers workflow.

For agent builders this means you can index screenshots, diagrams, short clips, or voice notes alongside docs, then retrieve relevant items across modalities with one stack. That simplifies things like “explain this chart” or “find the incident video that matches this log pattern.” Details on benchmarks and latency per modality are still sparse as of 2026-04-10, so you will want to test recall quality and performance on your own data.

If your agents live inside products with a lot of non-text context, this is a practical way to make that data retrievable without gluing together multiple separate embedding systems.
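Whatever the per-modality APIs end up looking like, the retrieval mechanics are the same once everything lands in one vector space: rank by cosine similarity over a mixed index. A minimal sketch with NumPy, using made-up 4-dimensional vectors as stand-ins for real model outputs (a real multimodal encoder emits hundreds of dimensions):

```python
# Cross-modal retrieval over one shared embedding space. The vectors and
# item names below are illustrative stand-ins for real embeddings.
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

# One index, mixed modalities: (id, modality, embedding)
index = [
    ("runbook.md",        "text",  np.array([0.9, 0.1, 0.0, 0.1])),
    ("latency_chart.png", "image", np.array([0.1, 0.9, 0.2, 0.0])),
    ("incident_call.wav", "audio", np.array([0.0, 0.2, 0.9, 0.1])),
]

def search(query_vec: np.ndarray, k: int = 2):
    vecs = np.stack([v for _, _, v in index])
    scores = cosine_sim(query_vec[None, :], vecs)[0]
    order = np.argsort(-scores)[:k]
    return [(index[i][0], index[i][1], float(scores[i])) for i in order]

# A query embedding close to the chart surfaces the image first:
results = search(np.array([0.2, 0.8, 0.1, 0.0]))
print(results[0][0])  # latency_chart.png
```

This is the "one stack" payoff: a text query like “explain this chart” can pull back an image because both sit in the same space, with no separate per-modality retriever to maintain.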

Read more →


Relvy launches AI agents for automated on-call runbooks

Relvy AI, a Y Combinator Winter 2024 startup, launched an AI agent that automates on-call runbooks by analyzing telemetry and code to help debug production issues. Their Launch HN post describes agents that connect to logs, traces, metrics, and repos, then run a structured diagnosis and remediation workflow instead of just summarizing pasted logs.

If you maintain microservices or noisy observability stacks, this is squarely aimed at you. The promise is fewer copy-paste sessions into ChatGPT and more “here is the likely root cause, plus the runbook step I used to verify it.” The caveat: this is early stage, so coverage of bespoke infra, weird legacy stacks, and corner-case incidents is likely thin as of 2026-04-10, and you still own the blast radius.

For teams experimenting with production agents, Relvy is an example of a vertically focused agent with tools, guardrails, and clear success metrics rather than a general chatbot.

Read more →



© 2026 The Agentic Digest