LazySlide brings natural language search to pathology images
For engineers, designers & product people. Stay up to date with free daily digest.
TLDR: Natural language hits pathology labs, WordPress opens agents to your prod site, and Claude gets a fast-growing prompt sidekick.
LazySlide adds natural language search to whole-slide pathology
LazySlide is a new whole-slide image analysis framework that uses learned embeddings so researchers can search histology slides with natural language instead of only manual visual inspection. Users can describe patterns, cell types, or tissue structures in text and retrieve matching regions, and the same embedding space powers classification tools for quantifying cellular composition and morphology. As of 2026-03-21 the work is published in Nature as an open framework.
For anyone building domain agents in pathology or medical imaging, this is a concrete example of retrieval-augmented generation (RAG)-style semantic search applied to gigapixel slides rather than documents. The key point: search is no longer tied to predefined labels, which opens the door to exploratory queries and cross-study comparison. The paper also covers whole-slide cell segmentation and classification, which can anchor downstream agent workflows for triage or quality control.
Worth noting: this is still research, not a turnkey clinical product, so validation and regulatory work will lag the tech. But the pattern of “embeddings over raw pixels” is one you can borrow immediately for other visual agent tasks.
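To make the retrieval pattern concrete, here is a minimal pure-Python sketch of embedding-based tile search. The `search_tiles` function, the toy 3-dimensional vectors, and the index layout are illustrative assumptions, not LazySlide's actual API; in practice the embeddings would come from paired text and image encoders sharing one space.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search_tiles(query_vec, tile_index, top_k=3):
    """Rank slide tiles by similarity to a text query embedding.

    tile_index: list of (tile_id, embedding) pairs produced offline
    by an image encoder aligned with the text encoder.
    """
    scored = [(tile_id, cosine(query_vec, emb)) for tile_id, emb in tile_index]
    scored.sort(key=lambda t: t[1], reverse=True)
    return scored[:top_k]

# Toy example: 3-dim embeddings standing in for real model outputs.
index = [("tile_a", [1.0, 0.0, 0.0]),
         ("tile_b", [0.9, 0.1, 0.0]),
         ("tile_c", [0.0, 1.0, 0.0])]
hits = search_tiles([1.0, 0.05, 0.0], index, top_k=2)
```

Because the ranking is similarity in a shared space, the same index serves free-text queries and label-free exploratory search alike.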
WordPress.com lets MCP agents write and manage site content
Automattic has enabled write capabilities for the WordPress.com Model Context Protocol (MCP) integration, so AI agents can now create and modify content directly on your site. The update exposes 19 new operations across posts, pages, comments, categories, tags, and media, turning WordPress from a read-only tool surface into a full read/write backend for MCP agents.
If you are building production agents that manage marketing sites, blogs, or knowledge bases, this is a big step toward end-to-end workflows from draft to review to publish. With structured operations instead of brittle scraping and form filling, you get cleaner audit trails and a smaller prompt surface. The flip side is obvious: letting autonomous agents push to production raises moderation, approval, and security questions that you will need to handle in your orchestration layer.
As of 2026-03-21 the integration targets WordPress.com, so self-hosted WordPress users will need their own glue code or to wait for broader support.
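MCP tool invocations ride on JSON-RPC 2.0 `tools/call` requests, so an agent-side write can be sketched as below. The `create_post` tool name and its argument fields are assumptions for illustration; the actual WordPress.com operation names and schemas may differ.

```python
import json

def mcp_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 request for an MCP tools/call invocation."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical write operation: real tool names may differ.
req = mcp_tool_call(1, "create_post", {
    "title": "Release notes",
    "content": "Drafted by an agent, pending human review.",
    "status": "draft",  # keep agents out of prod until a human approves
})
payload = json.dumps(req)
```

Defaulting agent writes to `draft` status is one cheap way to put the approval step back in the loop before anything hits production.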
Prompt Master uses Claude skills to auto-write tool-specific prompts
The GitHub project nidhinjs/prompt-master packages a Claude skill that automatically writes prompts tailored to any downstream AI tool, with a focus on preserving context and memory while avoiding wasted tokens or credits. The repository has accumulated 1,800+ stars, signaling healthy early interest from practitioners who are tired of hand-tuning system and user prompts for each API.
For engineers wiring up multi-tool agents, this kind of meta-prompt layer can standardize how you talk to heterogeneous services while keeping agent code simpler. You essentially outsource prompt engineering to a specialized skill that understands prior interactions and tool requirements. The tradeoff is one more abstraction to debug when outputs go sideways, so you will still want logging and evaluation around the skill itself.
As of 2026-03-21 this is a community project, not an official Anthropic product, so expect API changes and keep it behind feature flags if you use it in production agents.
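A meta-prompt layer can be sketched as a small adapter that reshapes one task for each tool's conventions while carrying forward only relevant memory. Everything here (`TOOL_PROFILES`, `build_prompt`, the keyword-overlap relevance filter) is a hypothetical illustration of the idea, not prompt-master's implementation.

```python
# Hypothetical registry of per-tool prompt conventions; a real skill
# would derive these from the tool's documentation and prior runs.
TOOL_PROFILES = {
    "sql_agent": {"style": "terse", "must_include": "schema"},
    "image_gen": {"style": "descriptive", "must_include": "aspect ratio"},
}

def build_prompt(tool, task, memory):
    """Adapt one task description to a target tool's conventions,
    forwarding only the memory lines that overlap the task's words."""
    profile = TOOL_PROFILES[tool]
    relevant = [m for m in memory if any(w in m for w in task.lower().split())]
    header = f"[{profile['style']} | include: {profile['must_include']}]"
    return "\n".join([header, f"Task: {task}", *relevant])

p = build_prompt("sql_agent", "list orders", ["orders table has 12 columns"])
```

Filtering memory before it enters the prompt is where the token savings come from: irrelevant history never reaches the downstream tool.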
Quick Hits
Show HN: LiteParse, a fast open-source document parser for AI agents. LiteParse provides high-quality spatial text parsing with bounding boxes for PDFs, Office docs, and images, without relying on vision language models or GPUs. Useful if your agents need layout-aware parsing that outperforms PyPDF and PyMuPDF while fitting into resource-constrained environments.
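As a sketch of what layout-aware output enables, the snippet below filters parsed spans by bounding box to keep only one page region. The `TextSpan` fields and `in_region` helper are illustrative assumptions, not LiteParse's actual schema.

```python
from dataclasses import dataclass

@dataclass
class TextSpan:
    """One parsed text run with its page-space bounding box.
    Field names are illustrative, not a real parser's schema."""
    text: str
    page: int
    bbox: tuple  # (x0, y0, x1, y1) in page coordinates

def in_region(span, region):
    """True if the span's box lies fully inside a target region,
    e.g. to feed only a page's body area into an agent prompt."""
    x0, y0, x1, y1 = span.bbox
    rx0, ry0, rx1, ry1 = region
    return x0 >= rx0 and y0 >= ry0 and x1 <= rx1 and y1 <= ry1

spans = [TextSpan("Total: 42", 1, (50, 700, 120, 712)),
         TextSpan("Footer", 1, (50, 10, 100, 20))]
body = [s.text for s in spans if in_region(s, (0, 100, 600, 800))]
```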
Google Adds New Agentic Shopping Features as OpenAI Pivots, Amazon Enters the Mix. Google is extending its Universal Commerce Protocol so agents can build multi-item carts, sync product details with retailer catalogs, and link shopper identities to loyalty data. If you are experimenting with agentic commerce flows, this hints at the emerging interoperability layer between retailers and third-party agents.
Enforce data residency with Amazon Quick extensions for Microsoft Teams. AWS shows how to deploy Amazon Quick Microsoft Teams extensions in multiple Regions while routing users to region-appropriate resources to meet GDPR and data sovereignty requirements. Good reference architecture if your collaboration agents operate in regulated orgs.
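The routing idea reduces to pinning each user to their region's endpoint; here is a minimal sketch under assumed region names and endpoints (none of these URLs are real AWS resources).

```python
# Illustrative region map; actual Regions and residency rules depend
# on your deployment, not on this hypothetical table.
REGION_ENDPOINTS = {
    "eu": "https://quick.eu-central-1.example.internal",
    "us": "https://quick.us-east-1.example.internal",
}

def route_user(user_region, default="us"):
    """Pin a user's requests to the endpoint in their data-residency
    region, falling back to a default only for unmapped regions."""
    return REGION_ENDPOINTS.get(user_region, REGION_ENDPOINTS[default])

eu_endpoint = route_user("eu")
```

The important property is that the fallback path is explicit, so an unmapped user lands somewhere deliberate rather than wherever DNS happens to resolve.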
Show HN: I built a P2P network where AI agents publish formally verified science. P2PCLAW is a peer-to-peer network for agents and humans to share scientific results with formal verification, instead of each agent solving problems in isolation. Interesting early look at infrastructure for agent-to-agent collaboration and knowledge reuse.
Run NVIDIA Nemotron 3 Super on Amazon Bedrock. AWS walks through the Nemotron 3 Super model characteristics and how to use it in Amazon Bedrock for generative applications. If you want a managed path to large NVIDIA models for your agents, this gives concrete configuration guidance.
Show HN: VMetal – run a GPU cloud on bare metal without OpenStack. VMetal is a bare-metal management platform for GPU clusters that handles machine discovery, PXE booting, and lifecycle management using Kubernetes-native workflows instead of OpenStack-style stacks. Worth a look if you are trying to run your own cost-efficient GPU cloud for inference or training.
Build a Domain-Specific Embedding Model in Under a Day. Hugging Face and NVIDIA outline a workflow for fine-tuning domain-specific embedding models for retrieval-augmented generation systems in under a day. This is directly relevant if your generic embeddings are failing on niche corpora and you need tighter recall.
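The core of most embedding fine-tuning recipes is an in-batch contrastive objective; here is a numeric sketch of that loss for a single query, assuming precomputed similarity scores rather than a training framework. The function name and temperature value are illustrative.

```python
from math import exp, log

def info_nce_loss(sims, pos_index, temperature=0.05):
    """In-batch contrastive loss for one query: sims holds the query's
    similarity to every passage in the batch, and pos_index marks the
    true match. Minimizing this pushes the positive pair together and
    the in-batch negatives apart."""
    scaled = [s / temperature for s in sims]
    m = max(scaled)  # subtract the max to stabilize the softmax
    denom = sum(exp(s - m) for s in scaled)
    return -log(exp(scaled[pos_index] - m) / denom)

# The true passage scores well above two in-batch negatives,
# so the loss is near zero.
loss = info_nce_loss([0.9, 0.2, 0.1], pos_index=0)
```

A batch where the positive barely beats the negatives yields a much larger loss, which is exactly the signal that drags domain-specific embeddings apart during fine-tuning.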
litellm v1.81.14.dev.1. The latest litellm dev release improves the UI test suite, adds richer org admin access control for the /v2/team/list endpoint, and fixes auto-recovery for shared aiohttp sessions. Small but useful quality-of-life updates if you rely on litellm as your multi-provider gateway.
What's New in Mellea 0.4.0 + Granite Libraries Release. IBM releases Mellea 0.4.0 plus three Granite Libraries for RAG, core workflows, and safety, aimed at building structured, verifiable, safety-aware AI pipelines on top of IBM Granite models. Relevant if your team needs strongly governed agent workflows.
vLLM v0.18.0. vLLM 0.18.0 adds gRPC serving support and ships 445 commits from 213 contributors, while noting a known accuracy issue when serving Qwen3.5 with FP8 KV cache on B200 GPUs. If you standardize on vLLM for serving, the new gRPC flag can simplify integration with non-HTTP stacks.
Introducing SPEED-Bench: A Unified and Diverse Benchmark for Speculative Decoding. NVIDIA introduces SPEED-Bench as a unified benchmark for speculative decoding across diverse workloads so you can compare speedups more rigorously. As of 2026-03-21, handy if you are tuning high-throughput agent backends that lean on speculative decoding.