Saturday, April 12, 2025

Hugging Face, Claude, and MCP (Model Context Protocol)

Hugging Face, Claude, and MCP (Model Context Protocol) serve different purposes in the AI ecosystem, but they share some similarities in their focus on enhancing AI capabilities. Here's a breakdown:

Hugging Face: It's a platform and library that provides tools for working with large language models (LLMs) like GPT, BERT, and others. It simplifies the use of these models through its Transformers library and Model Hub, making it easier for developers to integrate and fine-tune LLMs for various applications.

Claude: Developed by Anthropic, Claude is an LLM designed for conversational AI and other tasks. It's a specific model, unlike Hugging Face, which is a platform hosting multiple models. Claude focuses on safety, interpretability, and user-friendly interactions.

MCP (Model Context Protocol): Introduced by Anthropic, MCP is not a model but a protocol. It acts as a "universal adapter" for AI systems, enabling seamless integration of LLMs with external tools and data sources. MCP standardizes interactions, making it easier to connect AI models like Claude or others to real-world applications.

In essence, Hugging Face is a platform for working with LLMs, Claude is an LLM itself, and MCP is a protocol that facilitates the integration of LLMs with external systems. They complement each other in the broader AI landscape.
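To make the MCP "universal adapter" idea concrete: MCP frames client-server traffic as JSON-RPC 2.0 messages. The sketch below builds such a request envelope with Python's standard library; the tool name and arguments are illustrative, and the exact method names should be checked against the MCP specification.

```python
import json

def make_mcp_request(request_id, tool_name, arguments):
    """Build an MCP-style JSON-RPC 2.0 request envelope.

    MCP frames client/server traffic as JSON-RPC 2.0 messages;
    "tools/call" is the method used to invoke a server-side tool.
    The tool name and arguments here are illustrative placeholders.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

request = make_mcp_request(1, "get_weather", {"city": "Paris"})
print(request)
```

Because the envelope is plain JSON, any model or host that speaks the protocol can route it, which is exactly what makes MCP an "adapter" rather than a model.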

Sunday, April 6, 2025

MCP Host, Client, Server with LLM - Sample Python Code

import socket
import threading
import time

import openai

# Initialize the LLM API (e.g., OpenAI)
openai.api_key = "your_openai_api_key"

# MCP Server
def mcp_server(host='127.0.0.1', port=65432):
    server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_socket.bind((host, port))
    server_socket.listen()
    print(f"Server started on {host}:{port}")

    while True:
        client_socket, client_address = server_socket.accept()
        print(f"Connection from {client_address}")
        data = client_socket.recv(1024).decode()

        if data:
            print(f"Received: {data}")
            # Process input with the LLM
            response = query_llm(data)
            client_socket.send(response.encode())
        client_socket.close()

# MCP Client
def mcp_client(message, host='127.0.0.1', port=65432):
    client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client_socket.connect((host, port))
    client_socket.send(message.encode())
    response = client_socket.recv(1024).decode()
    print(f"Response from server: {response}")
    client_socket.close()

# MCP Host (logic tying client-server communication to the LLM)
def query_llm(prompt):
    # Use the LLM to generate a response
    # (the chat API replaces the retired Completion/text-davinci-003 endpoint)
    try:
        print(f"Querying LLM: {prompt}")
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
            max_tokens=100
        )
        return response['choices'][0]['message']['content'].strip()
    except Exception as e:
        return f"Error querying LLM: {e}"

# To run
if __name__ == "__main__":
    # Example: launch the server in a background thread
    server_thread = threading.Thread(target=mcp_server, daemon=True)
    server_thread.start()

    # Example: run the client
    time.sleep(1)  # Give the server time to start
    mcp_client("Hello, LLM! How are you?")


Explanation:

MCP Server:
This Python function acts as the server in the MCP setup. It listens for incoming connections from clients, processes their data, and sends back responses. The server uses the query_llm function to integrate with an LLM API.

MCP Client:
The client connects to the server and sends a message (e.g., a user query), then waits for the server's response and prints it.

LLM Integration:
The server processes input from the client through an external LLM API (OpenAI's GPT in this example) via query_llm. Replace "your_openai_api_key" with your actual API key to make it functional.

Host Logic:
The host in this context is represented by the query_llm function, which mediates between the MCP system and the LLM and ensures the server uses the LLM to generate meaningful responses.

Concurrency:
The server runs in a separate thread so client and server can operate concurrently.

How It Works:
Start the MCP server by running mcp_server in one thread.
The client (mcp_client) sends a message to the server.
The server passes the message to the LLM using the query_llm function.
The LLM processes the input and generates a response, which is sent back to the client.
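To try the socket round trip without an API key, the LLM call can be stubbed out. The sketch below is a self-contained variant of the flow above; fake_llm is a placeholder standing in for query_llm, and the port is chosen by the OS to avoid conflicts.

```python
import socket
import threading

def fake_llm(prompt):
    # Stand-in for query_llm so the flow can be tested offline
    return f"Echo: {prompt}"

def run_once():
    # Server side: bind to a free port and handle one connection
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))  # let the OS pick a free port
    port = server.getsockname()[1]
    server.listen()

    def handle():
        conn, _ = server.accept()
        data = conn.recv(1024).decode()
        conn.send(fake_llm(data).encode())
        conn.close()
        server.close()

    threading.Thread(target=handle).start()

    # Client side: send one message and read the reply
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", port))
    client.send("Hello".encode())
    reply = client.recv(1024).decode()
    client.close()
    return reply

print(run_once())  # Echo: Hello
```

Swapping fake_llm back for a real query_llm restores the full pipeline without changing the socket logic.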

Sample AI Agent to Get a Stock Quote - Python

import csv
import openai

# Define the AI agent class
class AI_Agent:
    def __init__(self, model):
        self.model = model

    def get_stock_quote(self, stock_symbol):
        # Query the LLM for stock information
        query = f"Provide the current stock quote for {stock_symbol}."
        response = self.model.process_query(query)
        return response

# Define the class for the LLM (OpenAI in this case)
class OpenAI_Model:
    def __init__(self, api_key):
        openai.api_key = api_key

    def process_query(self, query):
        try:
            # Call OpenAI's GPT model to process the query
            response = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",  # Use your desired model
                messages=[
                    {"role": "system", "content": "You are a stock market assistant."},
                    {"role": "user", "content": query}
                ]
            )
            return response['choices'][0]['message']['content']
        except Exception as e:
            return f"Error: {e}"

def read_csv_file(file_path):
    # Read stock symbols from a CSV file
    stock_symbols = []
    try:
        with open(file_path, newline='') as csvfile:
            reader = csv.reader(csvfile)
            for row in reader:
                stock_symbols.append(row[0])  # Assuming stock symbols are in the first column
        return stock_symbols
    except Exception as e:
        print(f"Error reading CSV file: {e}")
        return []

def main():
    # Provide your OpenAI API key
    api_key = "your_openai_api_key_here"
    openai_model = OpenAI_Model(api_key)
    ai_agent = AI_Agent(openai_model)

    # Path to your CSV file containing stock symbols
    csv_file_path = "stock_symbols.csv"
    stock_symbols = read_csv_file(csv_file_path)

    if stock_symbols:
        print("Fetching stock quotes...\n")
        for symbol in stock_symbols:
            print(f"Stock: {symbol}")
            response = ai_agent.get_stock_quote(symbol)
            print(f"Quote: {response}\n")
    else:
        print("No stock symbols found in the CSV file.")

if __name__ == "__main__":
    main()

CSV Reader:
The read_csv_file() function reads stock symbols from a CSV file. Each row's first column is treated as a stock symbol.

AI Agent:
The AI_Agent class queries the LLM for stock quotes, sending a specific query for each stock symbol.

LLM Integration:
The OpenAI_Model class interacts with OpenAI's GPT model. Make sure you replace "your_openai_api_key_here" with your actual OpenAI API key.

Main Function:
The main() function ties everything together: it reads the CSV file, queries stock quotes, and displays the results.

Requirements:
Install the OpenAI Python library using pip install openai.
Prepare a CSV file (stock_symbols.csv) with stock symbols listed one per row.
Replace the placeholder API key with your own key.
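A file in the expected layout can be produced with the standard csv module. The file name matches the example above; the symbols themselves are only illustrative.

```python
import csv

# Write one stock symbol per row, matching what read_csv_file expects
symbols = ["AAPL", "MSFT", "GOOG"]
with open("stock_symbols.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for sym in symbols:
        writer.writerow([sym])

# Read it back the same way the agent does
with open("stock_symbols.csv", newline="") as f:
    loaded = [row[0] for row in csv.reader(f)]
print(loaded)  # ['AAPL', 'MSFT', 'GOOG']
```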

How to get API Key

Getting an API key is a straightforward process, but it depends on the service or platform you're working with. Here's a general guide:

Choose the Service: Decide which API you want to use (e.g., OpenAI, Google Maps, Twitter, etc.).

Create an Account: Sign up on the platform's website if you don't already have an account.

Access the Developer Portal: Most platforms have a developer or API section where you can manage your API keys.

Create a New API Key:

Look for an option like "Create API Key" or "Generate Key."

Follow the instructions, which may include naming the key and setting permissions or restrictions.

Secure Your Key: Once generated, store it securely. Avoid sharing it publicly or committing it to source code.
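One common way to keep a key out of source code is an environment variable. The sketch below uses OPENAI_API_KEY, which is the conventional variable name for OpenAI's library, though any name works.

```python
import os

def load_api_key(env_var="OPENAI_API_KEY"):
    # Read the key from the environment instead of hardcoding it
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; export it before running")
    return key

# Usage: export OPENAI_API_KEY=sk-... in your shell, then:
# openai.api_key = load_api_key()
```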

AI Agent Using an LLM (OpenAI) Model - Python

A customer wants to check the status of their food delivery order.

### pip install openai
import openai

class AI_Agent:
    def __init__(self, model):
        self.model = model

    def collect_input(self, user_input):
        print(f"Customer: {user_input}")
        return user_input

    def send_query_to_model(self, user_input):
        query = f"The customer wants to know: {user_input}"
        response = self.model.process_query(query)
        return response

    def execute_action(self, model_response):
        print(f"AI Response: {model_response}")

class OpenAI_Model:
    def __init__(self, api_key):
        openai.api_key = api_key

    def process_query(self, query):
        try:
            # Using OpenAI's GPT model to process the query
            response = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",  # Use the specific model you want
                messages=[
                    {"role": "system", "content": "You are a helpful assistant."},
                    {"role": "user", "content": query}
                ]
            )
            return response['choices'][0]['message']['content']
        except Exception as e:
            return f"Error: {e}"

# Replace 'your_openai_api_key' with your actual OpenAI API key
api_key = "your_openai_api_key"
openai_model = OpenAI_Model(api_key)
ai_agent = AI_Agent(openai_model)

# Simulate customer interaction
user_input = "Where's my order?"
input_collected = ai_agent.collect_input(user_input)
response = ai_agent.send_query_to_model(input_collected)
ai_agent.execute_action(response)

Explanation:
OpenAI's GPT Model: The OpenAI_Model class interacts with OpenAI's API to process queries.

Agent Query: The agent formulates the user's query and sends it to the GPT model via OpenAI's API.

API Key: Replace "your_openai_api_key" with your actual API key to run the code.

Communication between AI agent & LLM

AI agents and LLMs communicate through structured frameworks that enable them to collaborate effectively. Here's how it typically works:

Requests and Inputs: The AI agent gathers inputs from the environment or users (e.g., commands, data, or queries) and formulates a request for the LLM.

LLM Processing: The LLM receives the request and processes it using its language understanding and reasoning capabilities. It generates responses, solutions, or insights based on its training and the input provided.

Outputs and Actions: The AI agent takes the LLM's output and uses it to perform specific tasks or actions, such as updating a database, interacting with external APIs, or providing a response to the user.

Feedback Loop: Communication can also involve a feedback loop, where the AI agent assesses the results and provides additional context or clarification for the LLM to refine its response.

This interaction is usually facilitated by APIs, SDKs, or protocols like MCP (Model Context Protocol), ensuring smooth, secure, and efficient communication between components.
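The request, processing, action, and feedback steps above can be sketched as a simple loop with the LLM call stubbed out. The respond function is a placeholder, not a real model; in practice it would be an API call like the ones in the earlier examples.

```python
def respond(request):
    # Placeholder for a real LLM call (e.g., an API request)
    return f"Answer to: {request}"

def agent_step(user_input, context=None):
    # 1. Requests and inputs: formulate a request for the LLM
    request = f"{context + ' | ' if context else ''}{user_input}"
    # 2. LLM processing: generate a response
    output = respond(request)
    # 3. Outputs and actions: here, simply return the response
    # 4. Feedback loop: the caller can pass this output back as context
    return output

first = agent_step("Where is my order?")
refined = agent_step("It was placed yesterday.", context=first)
print(refined)
```

Passing the previous output back in as context is the feedback loop: each turn gives the model more to refine its next response with.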

Relationship between an AI agent and LLM

AI agents and LLMs (Large Language Models) complement each other, working in tandem to achieve specific tasks and goals. Here's how they relate:

LLMs as the Brain: LLMs provide intelligence, language processing, and reasoning capabilities. They generate responses, understand context, and perform complex analyses.

AI Agents as the Action Takers: AI agents use LLMs as their "brain" to process information, make decisions, and execute actions autonomously. AI agents can interact with external systems, handle workflows, and carry out tasks based on instructions.

Collaboration: Together, an AI agent powered by an LLM can perceive its environment, decide on a course of action, and implement solutions. For example, an AI agent might use an LLM to generate a detailed report based on raw data.

In short, the LLM forms the "thinking" part, while the AI agent is responsible for "doing."
