            AI Research Engineer @ Intel Labs
        📬 I built an MCP server that lets LLMs search my email from the terminal

The server connects Claude to email search via the mu CLI tool. Now I just ask it things like: “Find emails with PDF attachments from last April” ⚡

🛠 No custom frontend. No heavy framework. Just a CLI tool made smarter.

💡 I learned that MCP servers are basically API translators — they take complex developer SDKs and flatten them into simple function calls that LLMs can actually use.
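To make that concrete, here is a minimal sketch of what such a translator can look like: one MCP tool that shells out to mu find. This is an illustrative sketch built on the official Python MCP SDK, not the exact code linked below.

# Minimal sketch: expose `mu find` as a single MCP tool.
# Assumes the official Python MCP SDK (pip install mcp) and an existing local mu index.
import subprocess
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("mu-email-search")

@mcp.tool()
def search_email(query: str, max_results: int = 20) -> str:
    """Run a mu query expression (e.g. 'mime:application/pdf date:2024-04-01..2024-04-30')
    against the local mail index and return the matching headers."""
    result = subprocess.run(
        ["mu", "find", query, "--maxnum", str(max_results), "--fields", "d f s"],
        capture_output=True, text=True, check=False,
    )
    return result.stdout or result.stderr

if __name__ == "__main__":
    mcp.run()  # stdio transport, so an MCP client like Claude can launch it locally

The LLM never sees mu's flags; it just calls search_email with a query string, and the natural-language request gets mapped onto that one function.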

🎯 The bigger picture: This pattern can breathe new life into existing CLI tools and services. Complex APIs → Simple, declarative functions → Natural language queries.

This isn’t a product — just an experiment in stitching new capabilities into existing workflows. Code here: https://lnkd.in/eT2fJBSv

mu email indexer and searcher: https://github.com/djcb/mu

#MCP #LLM #EmailSearch #OpenSource #AI

What existing tools would you want to make LLM-friendly? 🤔

            Cloud, DevOps, and Full-stack professional
        Tired of manually writing function-calling definitions for every single API endpoint you want your LLM to use?

I built Conduit to solve this. It’s an open-source tool that automatically exposes any existing GraphQL API to an LLM as a ready-to-use toolset.

Conduit leverages GraphQL’s strong typing and introspection capabilities to dynamically generate a tool manifest via the Model Context Protocol (MCP). An AI agent can connect to Conduit, understand the available tools (your GraphQL queries/mutations), and execute them.
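The core idea is simple enough to sketch. Here is a rough Python illustration of the introspection-to-tools step; Conduit itself is written in TypeScript and does much more, and the endpoint URL and tool-spec shape below are assumptions for illustration only.

# Rough sketch: turn a GraphQL schema, fetched via standard introspection,
# into LLM-style tool definitions. Requires the `requests` package.
import requests

INTROSPECTION_QUERY = """
{
  __schema {
    queryType {
      fields {
        name
        description
        args { name description }
      }
    }
  }
}
"""

def graphql_queries_as_tools(endpoint: str) -> list[dict]:
    """Return one tool definition per top-level GraphQL query field."""
    resp = requests.post(endpoint, json={"query": INTROSPECTION_QUERY}, timeout=30)
    resp.raise_for_status()
    fields = resp.json()["data"]["__schema"]["queryType"]["fields"]
    return [
        {
            "name": f["name"],
            "description": f.get("description") or "",
            "parameters": [a["name"] for a in f["args"]],
        }
        for f in fields
    ]

if __name__ == "__main__":
    # Hypothetical endpoint, for illustration only.
    for tool in graphql_queries_as_tools("https://example.com/graphql"):
        print(tool["name"], tool["parameters"])

Because the schema is strongly typed, everything an agent needs to call a query safely is already there; the MCP layer just re-exposes it in a form the model understands.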

This automates the most tedious part of giving AI agents new capabilities, allowing you to focus on the agent’s logic instead of the boilerplate. The entire project is written in TypeScript.

If you’re building with LLMs and need to connect to GraphQL data sources, this might save you a lot of time.

Contributions and feedback are welcome! GitHub: https://lnkd.in/gr38VWv3 #LLM #FunctionCalling #AIAgents #MachineLearning #GraphQL #API #TypeScript #NodeJS #MLOps #AIengineering

            Founder of Yearbook  | Flutter Application developer | Content Creator
        Are you tired of structuring AI prompts?

Microsoft has your back with the newly launched POML (Prompt Orchestration Markup Language). It looks like HTML, is based on XML, and helps you structure your prompts easily so you can get the desired output for the tokens you spend.

Here’s how to use it:
1) Install POML: pip install poml (Python) or npm install pomljs (Node.js)
2) Install the VS Code extension by searching for ‘poml’
3) Create a POML file and write your prompt
4) Test it with your preferred AI model
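For a rough idea of what step 3 looks like, a minimal POML file is just nested tags around your prompt. The tag names below follow Microsoft's published examples; treat the exact structure as illustrative rather than a reference.

<poml>
  <role>You are a patient tutor for junior developers.</role>
  <task>Explain what this error message means and how to fix it.</task>
  <output-format>Answer in three short bullet points.</output-format>
</poml>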

That’s it for today. Follow me on LinkedIn for business and tech insights.

#poml #html #ai #prompt #promptengineering #future #microsoft

            SDE at Anonimo | EX - HeyCoach, GFG | 3⭐@Codechef(max 1615) | Pupil@codeforces(max 1242) | Web Development, React| Node Js| KLU'25
        After MCP, A2A, and AG-UI, there’s another agent protocol: ACP (Agent Communication Protocol). The best part is that it’s fully open source.

But before we dive into ACP, let’s quickly revisit the earlier 3 👇

  1. MCP (Model Context Protocol)
  Problem before MCP: every new tool = custom integration code.
  With MCP: one universal adapter for APIs, tools, and files.
  -> Think of it as the USB-C for agents — plug anything in, it just works.

  2. A2A (Agent-to-Agent)
  Problem before A2A: agents couldn’t hand off tasks cleanly, so they worked in silos.
  With A2A: agents collaborate like a team with walkie-talkies.
  -> Funny part: Google made the A2A protocol largely to show they’re keeping up with the market, as if to say: yes, after MCP, we’ve launched something new too.

  3. AG-UI (Agent → UI)
  Problem before AG-UI: each agent gave random outputs, so frontend devs had to write messy glue code.
  With AG-UI: agents return standard UI components (forms, tables, charts).
  -> Basically, one design system for AI outputs.

  4. ACP (Agent Communication Protocol), by IBM
  ACP is like a traffic control system for coordinating agents at scale. It’s open source, RESTful, and even ships with a Python SDK!

Key ACP Features (solution-architect friendly):

  1. Supports both stateful and stateless agents
  2. Uses JSON-RPC for structured, reliable comms (see the generic envelope sketched after this list)
  3. Handles natural language interfaces (so humans ↔ agents is smooth)
  4. Offers real-time streaming (great for live dashboards, chat)
  5. Flexible deployment + legacy integration (you can fit it into old infra too)

-> In short: ACP = the dispatcher + rulebook for multiple agents working together.
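For point 2, this is roughly what a JSON-RPC 2.0 exchange looks like on the wire. The envelope below is the generic JSON-RPC spec with a made-up method name; it is not ACP's actual schema.

# Generic JSON-RPC 2.0 request/response shapes (illustration only; "agent.run"
# and the params/result fields are hypothetical, not ACP's real method names).
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "agent.run",
    "params": {"input": "Summarize today's support tickets"},
}
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"status": "completed", "output": "3 tickets summarized"},
}

print(json.dumps(request, indent=2))
print(json.dumps(response, indent=2))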

#AI #ArtificialIntelligence #AIAgents #AgenticAI #MultiAgentSystems #AICommunity #OpenSourceAI #AIStandards

            I help labs reduce costs and improve the quality of diagnostics by the means of AI
        New release of AgentCraft (it's like Cursor, but for n8n). This Chrome extension integrates directly into the n8n interface and helps you build, configure, and debug workflows up to 10x faster with AI assistance.

Key features:
🔎 Search & insert ready-to-use workflows
💬 MCP-powered chat with n8n documentation access
⚡ One-click workflow generation from natural language
🔧 AI-powered node configuration
🌐 Generate and import cURL into the HTTP node
🛠️ AI Fixer for errors in workflows
⚡ Generate JavaScript code
📑 JSON auto-fixing
🤖 Prompt generator for AI assistants
🎛️ Trigger emulation with test data
🗒️ Auto-generated sticky notes
💾 Autosave & backup history

Link - https://lnkd.in/ej-JkZKs

            On Prem GenAI Digital Transformation for BFSI Sector
        We tested Microsoft’s new POML (Prompt Orchestration Markup Language) at Ekkel AI on a few internal workflows to see how it performs in practice.

Where it worked well:
- Multi-step workflows: breaking prompts into POML’s structured tags made our multi-agent setups easier to manage.
- Reusable templates: we could reuse the same prompt structure across different projects just by swapping in variables and data sources.
- Context embedding: adding documents and tables directly into the prompt through POML tags improved accuracy for context-heavy tasks.
- Version control: the structured format made it easier to track prompt changes in Git, which is harder with plain-text prompts.

Where it could improve:
- Tooling maturity: the VS Code extension works well, but more integration with prompt-testing platforms would help.
- Language support: official .NET SDK support would be valuable for certain enterprise stacks.

Overall, POML is promising, especially for large-scale, repeatable prompt workflows. For smaller or ad-hoc tasks, plain prompts are still quicker.

            Senior Data scientist, 5+ years, CV, NLP, Classic ML, Gen AI
        MCP is getting serious: first benchmark for real tool‑use

The Model Context Protocol (MCP) launched about nine months ago (Nov 2024) and has been spreading fast across major platforms. It’s the “plug” that lets models talk to real tools and data.

Now there’s a dedicated benchmark: MCP‑Universe (Salesforce AI Research). It tests models against real MCP servers across 6 domains - Location navigation, Repository management, Financial analysis, 3D design, Browser automation, Web searching - using execution‑based checks on 231 tasks.

A detail I like: tasks are multi‑step. Runs typically take ~6-8 tool calls, which puts pressure on long‑context handling and planning. In early results, GPT‑5 tops the overall leaderboard and leads in most domains (with Grok‑4 slightly ahead in browser automation).

Bottom line: MCP is maturing fast — and now it’s measurable. What gets measured gets improved, so expect rapid progress in tool use, planning, and context management next.

Link: https://lnkd.in/g4KcpJFW

            Babysitting AI agents
        Claude Code is a general purpose agent disguised as a coding agent.

It searches the web. It integrates with tools through MCP. It executes multi-step workflows. All the basics you’d expect.

But it’s more than a general-purpose agent. It’s a code-first agent: not just an agent that helps you code, but one that writes code to expand what it can do.

Having bash access, Claude Code controls your operating system directly. It can install packages, script CLI tools together, and automate anything you can do in a terminal. Combined with file system access, it can search, read, and modify files across your computer. Your entire system becomes its workspace, if you allow it.

More importantly, it writes code. When other agents hit a wall because no tool exists for a task, Claude Code can build one. It can write parsers for unusual file formats. It can reverse-engineer undocumented APIs and build clients. It can create data processing pipelines, automation scripts, whatever the task demands. The tool doesn’t exist until it needs to exist.

It’s also composable, a capability unique to CLI-based agents. Claude Code can spawn specialized versions of itself as subagents, each with full tool access. Give it web search and file creation, and it can replicate features like “Deep Research”. What others hard-code, it builds from primitives.

Code-first agents don’t just use tools. They script them together, build new ones, and orchestrate multiple instances of themselves to create capabilities that didn’t exist before. Other agents work within their constraints. Claude Code can read its own source code, analyze how it works, and modify its configuration files to change its behavior.

#anthropic #agent #claudecode #llm #ai

        Here we are with a post about request validation, and validation in general. Since the CreateReservation tool query and the API request for creating a reservation now need to be validated, I decided to use as many Laravel features as possible. Validation is well covered in the Laravel v12 documentation 📜, and the guidelines are pretty good, I can tell you.

The only thing I hadn’t spotted before AI actually introduced me to it is the make:rule artisan command, which is just a lifesaver, if you know what I mean. The code looks much cleaner when all the additional rules live outside the StoreRequest. Even with AI, humans are still needed to build up the rules and the validation, and to think ahead. So, for the custom validation within the policies of #EasyBookr, it was necessary to add several important rules:

🕣 15-minute rule: I strongly believe it makes sense to limit reservation times to 15-minute increments, meaning you won’t be able to create a reservation shorter than 15 minutes; anything shorter just doesn’t make sense. It also keeps the system cleaner, keeps customers less confused, and makes it easier for everyone to track time correctly.

📆 Date validation: a fixed rate type cannot have an end date, only a start date, because it is fixed; a single date is enough to book the time correctly.

📅 Employee schedule rule: not only the date but also the time has to be picked correctly, since we all have our own working hours. That has to be taken into account as well.

🤔 Have I taken all the rules into account? Let’s see; it’s a good place to start. I won’t introduce all the rules here, of course. Some basic ones aren’t included in the post, but they are included in the project, to prevent security breaches.

            Magento Developer at Scandesign Media A/S
        Validation rules: are they widely used? I only got to know them when I reached the API flow. Maybe they are more commonly used without a package, such as Filament? I don’t know. I only learned about them because of AI tools and the API scope, not on the frontend.

            Leading Developer Advocacy @ CrewAI • Automating business workflows with AI
        I’ve wired up a bunch of MCP servers, but not for semantic code search.

I decided to try out the new Claude Context MCP plugin by Zilliz using Claude Code, and I found it quite interesting. I think you will as well.

So what is it? It’s a powerful semantic code search tool that gives your AI coding assistants a deep understanding of your entire codebase. Instead of grep or basic keyword search, you can ask natural-language questions about your codebase and get the actually relevant code back.

The setup is interesting (mermaid diagram in comments). It takes your code and:
• parses it into an AST (Abstract Syntax Tree)[1] so it can chunk by structure (a toy sketch of this step follows below)
• chunks it intelligently
• embeds the code using OpenAI/Voyage/Ollama/Gemini
• stores everything in Milvus, the vector database created by Zilliz, which is well known for being a beast at handling large-scale vector search across billions of embeddings
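As a toy illustration of the “chunk by structure” step, here is what AST-based chunking can look like for a single Python file, using only the standard library. Claude Context’s real chunker is multi-language and far more sophisticated; this just shows the idea.

# Toy illustration: split a Python file into chunks along top-level
# functions and classes, via the stdlib `ast` module (Python 3.8+).
import ast
from pathlib import Path

def chunk_by_structure(path: str) -> list[dict]:
    source = Path(path).read_text()
    tree = ast.parse(source)
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            chunks.append({
                "name": node.name,
                "start_line": node.lineno,
                "end_line": node.end_lineno,
                "text": ast.get_source_segment(source, node),
            })
    return chunks

if __name__ == "__main__":
    # Chunk this very file as a quick demo.
    for chunk in chunk_by_structure(__file__):
        print(chunk["name"], chunk["start_line"], chunk["end_line"])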

To top it all off, it uses one of my favorite data structures, Merkle trees. This lets it incrementally index only new changes rather than the entire codebase.
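The incremental part boils down to comparing content hashes between runs. Below is a simplified flat hash-diff version of the idea; a real Merkle tree arranges these hashes into a tree so whole unchanged directories can be skipped, and this is not Claude Context’s actual code. The state file name is a made-up placeholder.

# Simplified sketch of incremental re-indexing: hash every source file,
# compare against the hashes stored from the last run, and re-embed only
# what changed.
import hashlib
import json
from pathlib import Path

STATE_FILE = Path(".index_state.json")  # hypothetical state file

def file_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def changed_files(root: str, pattern: str = "**/*.py") -> list[Path]:
    old = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    new = {str(p): file_hash(p) for p in Path(root).glob(pattern)}
    STATE_FILE.write_text(json.dumps(new))
    return [Path(p) for p, h in new.items() if old.get(p) != h]

if __name__ == "__main__":
    for path in changed_files("."):
        print("re-index:", path)  # only changed files would be re-chunked and re-embedded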

Since it works through MCP, it integrates with Claude Code, Cursor, VS Code, and other MCP clients. I’ve added the links to the repo + setup instructions in the chat.

Have you tried using this server yet?

[1] An Abstract Syntax Tree (AST) is a hierarchical representation of your code’s structure.
