📬 I built an MCP server that lets LLMs search my email from the terminal

The server connects Claude to email search via the mu CLI tool. Now I just ask it things like: "Find emails with PDF attachments from last April" ⚡

🛠 No custom frontend. No heavy framework. Just a CLI tool made smarter.

💡 I learned that MCP servers are basically API translators: they take complex developer SDKs and flatten them into simple function calls that LLMs can actually use.

🎯 The bigger picture: this pattern can breathe new life into existing CLI tools and services. Complex APIs → simple, declarative functions → natural language queries.
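Not the actual server code, but the flattening idea can be sketched in a few lines: one flat function wrapping `mu find` that an MCP server would expose as a tool. The `--maxnum` flag and query syntax belong to mu itself (see `mu find --help`); the function names here are illustrative.

```python
import shutil
import subprocess

def build_mu_command(query: str, max_results: int = 20) -> list[str]:
    # mu's query language does the heavy lifting, e.g.
    # "mime:application/pdf date:20240401..20240430" for last April's PDFs.
    return ["mu", "find", "--maxnum", str(max_results), *query.split()]

def search_email(query: str, max_results: int = 20) -> list[str]:
    # The single, flat function an MCP server could expose to the LLM.
    if shutil.which("mu") is None:
        raise RuntimeError("mu is not installed")
    out = subprocess.run(build_mu_command(query, max_results),
                         capture_output=True, text=True)
    return out.stdout.splitlines()

print(build_mu_command("mime:application/pdf", 5))
```

The LLM never sees the CLI flags or the index format: it just fills in a query string and gets lines of results back.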

This isn't a product - just an experiment in stitching new capabilities into existing workflows. Code here: https://lnkd.in/eT2fJBSv

mu email indexer and searcher: https://github.com/djcb/mu

#MCP #LLM #EmailSearch #OpenSource #AI

What existing tools would you want to make LLM-friendly? 🤔

Whenever I am building complex Agentic Systems, I mostly end up adding a lot of latency to the system. Personally, these two techniques have always helped me reduce it:

1. Parallelization: if you are running five independent processes inside your agent's logic, and each of them takes 3 seconds: without parallelization, 3 x 5 = 15s (bad UX); with parallelization, just 3s (good UX).
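That arithmetic is easy to demonstrate with Python's asyncio; the five 3-second agent steps are simulated here with sleeps:

```python
import asyncio
import time

async def step(name: str) -> str:
    # Stand-in for one 3-second agent process (an LLM call, a tool call...).
    await asyncio.sleep(3)
    return name

async def main() -> list[str]:
    # Launch all five steps concurrently: total wall time ~3s, not ~15s.
    return await asyncio.gather(*(step(f"step-{i}") for i in range(5)))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(results, f"{elapsed:.1f}s")
```

This only works when the steps are genuinely independent; if one step needs another's output, it has to stay sequential.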

2. Streaming: streaming keeps the user engaged. If you are a web developer, you know the importance of a loader while a process runs or an API response is pending. If you have used Cursor or another coding agent, you have seen how it streams its progress and output instead of making you wait for the final answer.

๐‘Šโ„Ž๐‘Ž๐‘ก ๐‘‘๐‘–๐‘“๐‘“๐‘’๐‘Ÿ๐‘’๐‘›๐‘ก ๐‘ก๐‘’๐‘โ„Ž๐‘›๐‘–๐‘ž๐‘ข๐‘’๐‘  ๐‘‘๐‘œ ๐‘ฆ๐‘œ๐‘ข ๐‘๐‘’๐‘Ÿ๐‘ ๐‘œ๐‘›๐‘Ž๐‘™๐‘™๐‘ฆ ๐‘ข๐‘ ๐‘’ ๐‘ก๐‘œ โ„Ž๐‘’๐‘™๐‘ ๐‘ค๐‘–๐‘กโ„Ž ๐‘กโ„Ž๐‘’ ๐‘™๐‘Ž๐‘ก๐‘’๐‘›๐‘๐‘ฆ ๐‘–๐‘› ๐‘๐‘œ๐‘š๐‘๐‘™๐‘’๐‘ฅ ๐ด๐‘”๐‘’๐‘›๐‘ก๐‘–๐‘ ๐‘ ๐‘ฆ๐‘ ๐‘ก๐‘’๐‘š๐‘ ?

You've built your FastAPI application. Tests pass. It works locally. Now you're reading deployment guides and drowning in advice about connection pool tuning, PostgreSQL JIT compilation, and async event loop optimization.

Here's the problem: you're optimizing blind. You don't have production traffic to measure. You don't know where your bottlenecks are.

Meanwhile, the stuff that will actually break on day one gets buried in the noise. I've seen developers spend days tuning connection pools for traffic they don't have yet, while missing the fact that their authentication breaks in production or their database credentials aren't set correctly.
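For illustration, one of those day-one failure modes - unset credentials - can be caught with a fail-fast check at startup instead of on the first failing request. The variable names here are hypothetical; adapt them to your app:

```python
import os

# Hypothetical names - list whatever your app cannot run without.
REQUIRED = ["DATABASE_URL", "SECRET_KEY"]

def missing_env(required: list[str]) -> list[str]:
    # Return every required variable that is unset or empty.
    return [name for name in required if not os.environ.get(name)]

# At startup: refuse to boot if anything critical is absent.
problems = missing_env(REQUIRED)
if problems:
    print(f"refusing to start, missing: {', '.join(problems)}")
```

A few lines like this cost nothing and turn a confusing runtime error into an explicit startup message.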

The truth is, before your first deployment, only three things actually matter:

Everything else - performance tuning, advanced monitoring, sophisticated caching - can wait until you have real data.

I just published a practical checklist covering exactly what matters in the hour before you go live. No overwhelming theory. Just the non-negotiables explained with real examples of what happens when you skip them.

Link in the comments!

#python #fastapi #webdev #deployment


Want to skip deployment configuration entirely? Check out FastroAI at https://fastro.ai - a production-ready FastAPI template with security, monitoring, and deployment already configured correctly.

Tried my hand at web scraping and AI-powered document processing recently.

I built a pipeline that crawls configured websites, filters PDFs by exam type and year, and downloads them in a structured way. Both the exam name and years are configurable through the config file.
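The filtering step might look roughly like this - the config shape and filename patterns are assumptions for illustration, not the repo's actual code:

```python
import re

# Hypothetical config - in the real pipeline this comes from the config file.
config = {"exam": "gate", "years": [2023, 2024]}

def matches(pdf_name: str, cfg: dict) -> bool:
    # Keep a crawled PDF only if its name mentions the configured
    # exam and one of the configured years.
    name = pdf_name.lower()
    if cfg["exam"] not in name:
        return False
    found = re.findall(r"(?:19|20)\d{2}", name)
    return any(int(y) in cfg["years"] for y in found)

print(matches("GATE_2024_paper.pdf", config))   # True
print(matches("GATE_2019_paper.pdf", config))   # False
```

Keeping exam name and years in config means re-targeting the crawler is a one-line change, no code edits.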

Instead of using traditional parsing methods, I integrated Claude (Sonnet 4) to directly read PDFs, extract questions and options, and tag them with subject, topic, difficulty level, and many more attributes - all in one step. The processed data exports to Google Sheets for easy analysis and organization.

The project includes three CLI commands for crawling, tagging with Claude, and exporting to Sheets, keeping the workflow modular and composable.

Here's a demo dataset from one of the runs showcasing the structured output. This setup uses exam papers from two years: https://surl.li/rhsnmg

Tech stack: Node.js, TypeScript, Claude API, Google Sheets API

GitHub Repo: https://lnkd.in/g4kdAvFE

#AI #WebScraping #Automation


          Actions, Not Just Chat 

React Component GPT:

We need a GPT that understands our React components, knows our CSS variables, and can spit out code that's ready to use. This isn't about general knowledge; it's about our knowledge. The standard GPT knowledge upload is fine for broad docs, but for precise component generation, we need control. That's where Actions come in.

Our design system lives in zeroheight. Our CSS variables are in a .css file. Our React components are in .jsx files. These are all discrete sources of truth. A generic LLM has no idea how they connect. If someone asks for a "primary button," it might give generic HTML, not our Button component with --color-brand-primary. Unacceptable.

We build an API. This API becomes our "knowledge retrieval service." The GPT uses Actions to call this API when it needs specific, localized data.

Extract Data (The ETL of our Design System): zeroheight Content: Use the zeroheight API to pull down all component documentation. Store it, parse it, clean it. We're i

https://lnkd.in/gufWti_X
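The "knowledge retrieval service" idea above can be sketched as a single lookup that joins the three sources of truth behind one call the GPT would reach via an Action. All data and names here are invented for illustration:

```python
# Toy retrieval service joining docs, CSS variables, and component usage.
DOCS = {"button": "Primary action trigger. Documented in zeroheight."}
CSS_VARS = {"button": ["--color-brand-primary", "--radius-md"]}
COMPONENTS = {"button": '<Button variant="primary">{label}</Button>'}

def retrieve(component: str) -> dict:
    # One endpoint's worth of logic: the GPT asks for a component by
    # name and gets back everything it needs to generate correct code.
    key = component.lower()
    return {
        "docs": DOCS.get(key, ""),
        "css_variables": CSS_VARS.get(key, []),
        "usage": COMPONENTS.get(key, ""),
    }

print(retrieve("Button")["css_variables"])
```

In the real system this function would sit behind an HTTP endpoint described by an OpenAPI schema, which is what a GPT Action consumes.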

API-Mocker Hits 5.38K Downloads: The Open Source API Development Platform That's Changing How Developers Mock APIs

The Problem Every Developer Faces

Building modern applications means integrating with countless APIs. But what happens when those APIs are down, rate-limited, or simply don't exist yet? Most developers resort to basic mocking tools that barely scratch the surface of real-world API complexity.

API-Mocker isn't just another mocking tool. It's a comprehensive API development platform that has already been downloaded over 5,380 times by developers worldwide. Here's what makes it different:

• FastAPI-based server with advanced routing and regex pattern matching
• AI-powered mock generation using OpenAI GPT models with intelligent fallback
• Scenario-based mocking for testing different API states and edge cases
• Smart response matching that analyzes request data for intelligent response selection
• GraphQL support with schema introspection and subscription handling
• WebSocket mocking for real-time communication testing
• Advanced authentication with OAuth2, JWT, API keys, and MFA support
• Database inte

https://lnkd.in/gYKbM7Ku

WKassebaum's fork of zen-mcp-server seems to be better maintained than the official version, with support for more LLMs from different providers. For those unfamiliar:

zen-mcp-server is a "Model Context Protocol server that supercharges tools like Claude Code, Codex CLI, and IDE clients such as Cursor or the Claude Dev VS Code extension. Zen MCP connects your favorite AI tool to multiple AI models for enhanced code analysis, problem-solving, and collaborative development".

https://lnkd.in/efRqQ7PH

The Cloudflare Code Mode approach to MCP tool calls (https://lnkd.in/erdnK7EH) sounds like a really significant improvement on the MCP experience. It's one of those rare breakthroughs that is both elegant and obvious in hindsight.

At a high level, the idea is to translate "raw MCP" into TypeScript interfaces, and ask the LLM to code against the TypeScript interface. It's a form of language arbitrage, you might say: the agent exchanges a low-resource language (raw MCP) for a high-resource language (TypeScript), so the LLM performs much better. Then something cool happens: the LLM can also write code to chain tool calls together, or otherwise process the tool call responses in interesting ways. The agent is left holding a bunch of LLM-generated code, so it needs a sandbox to run that code, and of course Cloudflare offers a solution for that.
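The core translation is mechanical: an MCP tool's JSON Schema becomes a TypeScript signature the LLM can code against. A minimal sketch of that step, with an invented tool schema (Code Mode's actual generator is more sophisticated):

```python
# Map JSON Schema primitive types to TypeScript types.
TS_TYPES = {"string": "string", "number": "number",
            "integer": "number", "boolean": "boolean"}

def to_ts(tool: dict) -> str:
    # Turn one MCP tool description into a TypeScript declaration.
    props = tool["inputSchema"]["properties"]
    params = ", ".join(f"{name}: {TS_TYPES.get(spec['type'], 'unknown')}"
                       for name, spec in props.items())
    return f"declare function {tool['name']}({params}): Promise<unknown>;"

# Invented example tool, shaped like an MCP tool listing entry.
weather_tool = {
    "name": "getWeather",
    "inputSchema": {"type": "object",
                    "properties": {"city": {"type": "string"},
                                   "days": {"type": "integer"}}},
}
print(to_ts(weather_tool))
# declare function getWeather(city: string, days: number): Promise<unknown>;
```

Once every tool is a typed function, "call tool A, filter, feed into tool B" becomes ordinary code the LLM is already good at writing.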

We'll see if this approach takes hold; it seems to have a lot of traction already. If it does, then it's worth asking whether the MCP protocol itself needs a revision - for example, by making the MCP server provide the TypeScript interface natively. That then raises another round of questions around the best way for MCP servers to "speak" to LLMs: can we do better than TypeScript?

Certainly it's a cool idea, and I think it's a great step forward for MCP usage.

h/t to Kushagra Kumar for sending this post my way!

💡 Never lose track of important information again.

Just released MCP Memory Service v8.6.0 with Document Ingestion - your personal AI-powered knowledge base.

๐—ง๐—ต๐—ฒ ๐—ฃ๐—ฟ๐—ผ๐—ฏ๐—น๐—ฒ๐—บ: You have PDFs, documentation, notes scattered everywhere. Finding the right information takes forever. Context is lost between AI conversations.

๐—ง๐—ต๐—ฒ ๐—ฆ๐—ผ๐—น๐˜‚๐˜๐—ถ๐—ผ๐—ป: Upload your documents once. Search them semantically. Let AI remember everything for you.

๐—›๐—ผ๐˜„ ๐—ถ๐˜ ๐—ช๐—ผ๐—ฟ๐—ธ๐˜€:

1๏ธโƒฃ ๐—จ๐—ฝ๐—น๐—ผ๐—ฎ๐—ฑ - Drag PDFs, docs, or notes to the web interface 2๏ธโƒฃ ๐—ฃ๐—ฟ๐—ผ๐—ฐ๐—ฒ๐˜€๐˜€ - Intelligent chunking preserves context 3๏ธโƒฃ ๐—ฆ๐—ฒ๐—ฎ๐—ฟ๐—ฐ๐—ต - Ask in natural language: โ€œauthentication flow from the security docsโ€ 4๏ธโƒฃ ๐—ฅ๐—ฒ๐—บ๐—ฒ๐—บ๐—ฏ๐—ฒ๐—ฟ - AI assistants access this knowledge automatically

๐—ช๐—ต๐—ฎ๐˜ ๐— ๐—ฎ๐—ธ๐—ฒ๐˜€ ๐—ง๐—ต๐—ถ๐˜€ ๐—ฆ๐—ฝ๐—ฒ๐—ฐ๐—ถ๐—ฎ๐—น:

🌟 Semantic Search - finds relevant content, not just keywords
🌟 Privacy-First - runs locally on your machine (or your team's server)
🌟 Universal - works with Claude, VS Code, Cursor, and 13+ AI applications
🌟 Production Ready - 2000+ memories in active deployments, <500ms search times

๐—•๐˜‚๐—ถ๐—น๐˜ ๐—ณ๐—ผ๐—ฟ ๐—ง๐—ฒ๐—ฎ๐—บ๐˜€: โ€ข OAuth 2.1 collaboration โ€ข Hybrid sync (local + cloud) โ€ข Zero-configuration setup โ€ข Enterprise security

From solo developers to entire teams - one source of truth for AI-powered work.

๐—ข๐—ฝ๐—ฒ๐—ป ๐—ฆ๐—ผ๐˜‚๐—ฟ๐—ฐ๐—ฒ & ๐—™๐—ฟ๐—ฒ๐—ฒ: ๐Ÿ‘‰ https://lnkd.in/ePYekaAF

#ArtificialIntelligence #Productivity #KnowledgeManagement #DeveloperTools #OpenSource #Claude #AI
