Has it been suspended just for the day or for the next few days?
April 7, 2025 at 4:35 AM
I looked at how a client is created since it is the one interfacing with the model: github.com/modelcontext...
What's still unclear is how it extends to models other than Claude. I guess each vendor's MCP client implementation would adhere to its own LLM's tool-calling behavior.
Link: quickstart-resources/mcp-client-python/client.py at main · modelcontextprotocol/quickstart-resources — A repository of servers and clients from the Model Context Protocol tutorials (github.com)
April 4, 2025 at 2:06 PM
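On the extensibility question: since MCP tool descriptors are plain JSON Schema, a client for a non-Claude model would mainly need to translate them into that model's function-calling format. A minimal sketch, assuming the MCP `tools/list` result shape and an OpenAI-style `tools` parameter (the tool itself is made up for illustration):

```python
# Hypothetical sketch: translate MCP tool descriptors (as returned by a
# server's tools/list call) into OpenAI-style function specs. The dict
# shapes mirror the two formats; the example tool is illustrative only.

def mcp_tools_to_openai(mcp_tools):
    """Translate MCP tool descriptors into OpenAI-style function specs."""
    return [
        {
            "type": "function",
            "function": {
                "name": t["name"],
                "description": t.get("description", ""),
                # MCP calls the schema `inputSchema`; OpenAI-style APIs
                # call the same JSON Schema object `parameters`.
                "parameters": t["inputSchema"],
            },
        }
        for t in mcp_tools
    ]

# Example MCP tool descriptor (hypothetical tool).
weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

specs = mcp_tools_to_openai([weather_tool])
print(specs[0]["function"]["name"])  # get_weather
```

The point being that the protocol side (tool discovery, invocation, results) stays the same; only this thin translation layer is model-specific.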
Looked into it and it's a lot simpler than I imagined. MCP is built on top of the function-calling API: they standardized the tool specification and usage, and created a closed-loop system to continue the conversation with the LLM. This standardization also lets anyone share their tools.
April 4, 2025 at 2:03 PM
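The closed loop described above can be sketched roughly as below. `call_model` stands in for `self.anthropic.messages.create` and is stubbed here so the sketch runs standalone; the `stop_reason`/`tool_result` message shapes follow the Anthropic Messages API, but the tool and the model's replies are hypothetical.

```python
# Rough sketch of the closed tool-use loop: the model pauses generation
# with stop_reason == "tool_use", the client executes the tool, appends a
# tool_result message, and calls the model again until it stops with text.

def get_weather(city):                      # stub tool implementation
    return f"Sunny in {city}"

def call_model(messages):
    # Stub standing in for self.anthropic.messages.create: the first turn
    # requests a tool call, and once a tool_result block is present in the
    # history, the second turn answers in plain text.
    saw_tool_result = any(
        m["role"] == "user" and isinstance(m["content"], list)
        for m in messages
    )
    if not saw_tool_result:
        return {"stop_reason": "tool_use",
                "content": [{"type": "tool_use", "id": "toolu_1",
                             "name": "get_weather",
                             "input": {"city": "Paris"}}]}
    return {"stop_reason": "end_turn",
            "content": [{"type": "text", "text": "It's sunny in Paris."}]}

def run(query):
    messages = [{"role": "user", "content": query}]
    while True:
        reply = call_model(messages)
        if reply["stop_reason"] != "tool_use":
            # Model finished generating text; the loop ends here.
            return reply["content"][0]["text"]
        # Model paused generation to request a tool call. The client runs
        # the tool, appends the result, and resumes the conversation.
        messages.append({"role": "assistant", "content": reply["content"]})
        for block in reply["content"]:
            if block["type"] == "tool_use":
                result = get_weather(**block["input"])
                messages.append({"role": "user", "content": [
                    {"type": "tool_result",
                     "tool_use_id": block["id"],
                     "content": result}]})

print(run("What's the weather in Paris?"))  # It's sunny in Paris.
```

So "stopped/resumed" is just: the model ends a turn with `stop_reason == "tool_use"`, and the client starts a new turn that includes the tool's output.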
Is there any info available on how the MCP client interfaces with the ML model itself? As in, how/when is token prediction stopped/resumed to use an available tool, how are the query and generation contexts updated, etc. (the under-the-hood workings of `self.anthropic.messages.create`, basically)?
March 30, 2025 at 2:08 PM