The Agentic Future: Building Autonomous AI Agents That Run 24/7 on RakSmart Bare Metal and VPS

Introduction: From Workflows to Agents

The first two posts in this series covered running local AI models and building no-code automation pipelines. These are powerful technologies, but they share a common limitation: they are reactive. A trigger happens, a workflow runs, and the process ends.

The next evolution in this technology trend is the autonomous AI agent — a software system that:

  • Has a goal (e.g., “research five competitors and summarize their pricing”)
  • Breaks that goal into sub-tasks
  • Executes those tasks using available tools (web browsing, API calls, code execution)
  • Adapts when things go wrong
  • Continues working until the goal is achieved or a human intervenes

These agents do not need a specific trigger. They run continuously, making decisions and taking actions in pursuit of their objectives.

In this final post, we will build autonomous AI agents on RakSmart bare metal and high-performance VPS infrastructure. You will learn how to create agents that browse the web, write code, interact with your WordPress site, and even manage other agents.


The Architecture of an Autonomous Agent

Before we build, let us understand the components.

Component 1: The Large Language Model (LLM)

The LLM is the agent’s “brain.” It receives observations (what has happened so far) and decides what action to take next. For autonomous agents, you need an LLM with strong reasoning capabilities. Models like Llama 3 70B, Mixtral 8x7B, or GPT-4 (if you use cloud APIs) work best.

On RakSmart bare metal, you can run Llama 3 70B if you have at least 128GB of RAM. For most users, Llama 3 8B or Mistral 7B provides a good balance of capability and resource requirements.

Component 2: The Agent Loop

The agent runs in a loop:

  1. Observe: Receive the current state (previous actions, results, environment data)
  2. Think: The LLM generates a reasoning step (“I need to search for competitor A’s pricing page”)
  3. Act: Execute an action (call an API, run a Python script, browse a website)
  4. Remember: Store the action and its result in memory
  5. Repeat: Go back to step 1 until the goal is achieved
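Stripped of framework details, the five steps above can be sketched as a plain Python loop. Here `think` and `act` are placeholders for the LLM call and the tool dispatcher; frameworks like LangChain fill those in for you:

```python
def run_agent(goal, think, act, max_steps=10):
    """Minimal observe-think-act loop.

    `think(goal, memory)` returns an action dict (the LLM's decision);
    `act(action)` executes it and returns an observation string.
    """
    memory = []                           # step 4: remembered (action, result) pairs
    for _ in range(max_steps):            # step 5: repeat, with a safety cap
        action = think(goal, memory)      # steps 1-2: observe + think
        if action["type"] == "finish":    # the goal has been achieved
            return action["answer"]
        result = act(action)              # step 3: act
        memory.append((action, result))   # step 4: remember
    return "Stopped: max steps reached"
```

The `max_steps` cap matters: without it, a confused agent can loop forever (more on that in the limitations section).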

Component 3: Tools

Tools are functions the agent can call. Examples:

  • search_web(query) — Performs a web search and returns results
  • fetch_url(url) — Downloads and parses a webpage
  • execute_python(code) — Runs arbitrary Python code
  • send_email(to, subject, body) — Sends an email
  • create_wordpress_post(title, content) — Publishes to your WordPress site

Component 4: Memory

Agents need memory to avoid repeating the same actions. Two types:

  • Short-term memory: The current conversation or task context (limited to the LLM’s context window, typically 8,000 to 128,000 tokens)
  • Long-term memory: A vector database (like Chroma or Qdrant) that stores past actions and results for retrieval when similar situations arise
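Because short-term memory is bounded by the context window, long-running agents have to drop or summarize old context. A minimal sketch of the trimming side (the token counter here is a rough characters-per-token heuristic, not a real tokenizer):

```python
def trim_history(messages, max_tokens=8000, tokens=lambda m: len(m) // 4):
    """Keep the most recent messages that fit the context budget.

    `tokens` estimates the token cost of a message; the default is a
    crude ~4-characters-per-token heuristic standing in for a tokenizer.
    """
    kept, used = [], 0
    for msg in reversed(messages):   # walk newest-first
        cost = tokens(msg)
        if used + cost > max_tokens:
            break                    # older messages no longer fit
        kept.append(msg)
        used += cost
    return list(reversed(kept))      # restore chronological order
```

Anything trimmed this way is gone unless it was also written to long-term memory, which is why the two layers are used together.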

Building a Simple Agent with LangChain on RakSmart

LangChain is one of the most popular frameworks for building LLM-powered agents. We will install it on a RakSmart VPS and build a working agent.

Step 1: Set Up the Environment

bash

# Update and install Python
apt update && apt upgrade -y
apt install python3-pip python3-venv -y

# Create a virtual environment
python3 -m venv agent-env
source agent-env/bin/activate

# Install LangChain and dependencies
pip install langchain langchain-community langchain-core
pip install ollama chromadb requests beautifulsoup4

Step 2: Connect to Your Local LLM

Assuming Ollama is running on the same server (see Blog #1):

python

from langchain_community.llms import Ollama

# Connect to your local LLM
llm = Ollama(model="llama3.2:3b", base_url="http://localhost:11434")

Step 3: Define Tools

Let us create three simple tools.

python

from langchain.tools import tool
import requests
from bs4 import BeautifulSoup

@tool
def search_web(query: str) -> str:
    """Search the web for a query. Use this to find information."""
    # For simplicity, we scrape DuckDuckGo's HTML endpoint
    # In production, consider using SerpAPI or a self-hosted Searx instance
    from urllib.parse import quote_plus
    url = f"https://html.duckduckgo.com/html/?q={quote_plus(query)}"
    # A browser-like User-Agent reduces the chance of being blocked as a bot
    headers = {"User-Agent": "Mozilla/5.0"}
    response = requests.get(url, headers=headers, timeout=10)
    soup = BeautifulSoup(response.text, 'html.parser')
    results = soup.select('.result__a')
    return "\n".join([r.text for r in results[:5]])

@tool
def fetch_url(url: str) -> str:
    """Fetch and extract text from a URL."""
    response = requests.get(url, timeout=10)
    soup = BeautifulSoup(response.text, 'html.parser')
    # Remove script and style elements
    for script in soup(["script", "style"]):
        script.decompose()
    text = soup.get_text()
    # Limit to 5000 characters to avoid context overflow
    return text[:5000]

@tool
def execute_python(code: str) -> str:
    """Execute Python code and return the output."""
    # Caution: this runs arbitrary code; keep the agent in a container
    import subprocess
    try:
        result = subprocess.run(["python3", "-c", code],
                                capture_output=True, text=True, timeout=30)
    except subprocess.TimeoutExpired:
        return "Error: code execution timed out after 30 seconds"
    if result.stderr:
        return f"Error: {result.stderr}"
    return result.stdout

Step 4: Create the Agent

python

from langchain.agents import initialize_agent, AgentType

tools = [search_web, fetch_url, execute_python]

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    max_iterations=10,
    handle_parsing_errors=True
)

Step 5: Run the Agent with a Goal

python

result = agent.run("""
Your goal: Research RakSmart's current VPS pricing and compare it with DigitalOcean's basic VPS plan.

Steps:
1. Search for 'RakSmart VPS pricing 2025'
2. Fetch the pricing page
3. Extract the price for the cheapest VPS plan
4. Search for 'DigitalOcean basic VPS price'
5. Extract their price
6. Compare and report which is cheaper
""")

print(result)

The agent will now:

  1. Think about what to do first
  2. Call search_web with an appropriate query
  3. Call fetch_url on the search results
  4. Extract the pricing information
  5. Repeat for DigitalOcean
  6. Generate a final answer

All of this happens automatically on your RakSmart server.


Building a Persistent Agent with Memory

The simple agent above forgets everything between runs. For long-running autonomous agents, we need memory.

Adding Long-Term Memory with Chroma

python

from langchain_community.vectorstores import Chroma
from langchain_community.embeddings import OllamaEmbeddings

# Create embeddings using your local LLM
embeddings = OllamaEmbeddings(model="llama3.2:3b")

# Create a vector database for long-term memory
vectorstore = Chroma(
    collection_name="agent_memory",
    embedding_function=embeddings,
    persist_directory="./agent_memory_db"
)

# Create a retriever
retriever = vectorstore.as_retriever(search_kwargs={"k": 3})

# Memory that combines conversation history and retrieved documents
from langchain.memory import VectorStoreRetrieverMemory

memory = VectorStoreRetrieverMemory(
    retriever=retriever,
    memory_key="history",
    input_key="input"
)

# Add this memory to your agent. Note that initialize_agent() returned an
# AgentExecutor, so we reuse its underlying agent when building a new executor
from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(
    agent=agent.agent,
    tools=tools,
    memory=memory,
    verbose=True,
    max_iterations=15
)

Now your agent remembers past actions. If you ask it “What did you learn about RakSmart yesterday?”, it will search the vector database and retrieve relevant information.
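To make the retrieval step concrete, here is a dependency-free sketch of what the vector memory is doing conceptually: store past observations, embed them, and return the most similar ones for a query. The bag-of-words "embedding" and cosine similarity below are toy stand-ins for what Chroma and a real embedding model do; the stored sentences are invented examples.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class ToyVectorMemory:
    """Stores past observations; retrieves the most similar ones for a query."""
    def __init__(self):
        self.records = []  # (embedding, text) pairs

    def save(self, text):
        self.records.append((embed(text), text))

    def retrieve(self, query, k=3):
        q = embed(query)
        ranked = sorted(self.records, key=lambda r: cosine(q, r[0]), reverse=True)
        return [text for _, text in ranked[:k]]

toy_memory = ToyVectorMemory()
toy_memory.save("Checked RakSmart pricing page; noted the cheapest VPS plan")
toy_memory.save("Server OS is Ubuntu 22.04")
print(toy_memory.retrieve("What did you learn about RakSmart pricing?", k=1))
# → ['Checked RakSmart pricing page; noted the cheapest VPS plan']
```

Real embeddings capture meaning rather than exact word overlap, so Chroma will also match paraphrases ("How much does their server cost?"), which this toy cannot.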


Use Case: Autonomous WordPress Content Agent

Let us build a practical, useful agent that runs continuously on a RakSmart VPS and manages a WordPress site.

The Agent’s Goal

“Maintain a WordPress blog about AI and automation. Each day, find one trending topic in the AI space, research it thoroughly, write a 500-word article, and publish it as a draft for human review.”

Tools for the WordPress Agent

python

@tool
def get_trending_ai_topics() -> str:
    """Get current trending topics in AI from multiple sources."""
    # Fetch from Hacker News, Reddit r/MachineLearning, and Twitter
    # Return as a formatted list
    pass

@tool
def research_topic(topic: str) -> str:
    """Research a topic by searching the web and summarizing findings."""
    # Use search_web and fetch_url to gather information
    # Then use the LLM to summarize into a research brief
    pass

@tool
def write_wordpress_post(title: str, content: str) -> str:
    """Write a post to WordPress as a draft."""
    import requests
    # WordPress REST API endpoint
    wp_url = "https://yourwordpress.com/wp-json/wp/v2/posts"
    # Authentication (use application password)
    auth = ("your_username", "your_application_password")
    data = {
        "title": title,
        "content": content,
        "status": "draft"
    }
    response = requests.post(wp_url, json=data, auth=auth, timeout=30)
    return f"Post created. ID: {response.json().get('id')}"

The Agent Loop for WordPress

python

def wordpress_agent_loop():
    # Step 1: Find a trending topic. Note: functions wrapped with @tool
    # are Tool objects, so we call them via .invoke() rather than directly
    trending = get_trending_ai_topics.invoke({})
    topic = extract_best_topic(trending)  # Use the LLM to pick the most relevant

    # Step 2: Research
    research = research_topic.invoke({"topic": topic})

    # Step 3: Write the article
    prompt = f"""
    Using this research: {research}
    Write a 500-word blog post about {topic}.
    Include an introduction, 3 key points, and a conclusion.
    Write in a professional but accessible style.
    """
    article = llm.invoke(prompt)

    # Step 4: Publish as a draft
    result = write_wordpress_post.invoke({"title": f"Trending: {topic}", "content": article})

    return result

# Run the agent on a schedule (via cron or n8n)
# This could run daily, or you could have the agent decide when to publish

This agent runs autonomously. It decides what to write about, researches the topic, writes the content, and saves it as a draft — all without human input. Your role is only to review and approve before posts go live.


Use Case: Autonomous Data Center Monitoring Agent

If you run multiple RakSmart servers, you need to monitor them. Build an agent that monitors your data center infrastructure.

The Agent’s Goal

“Monitor all RakSmart servers in my account. Check CPU, RAM, disk usage, and network latency every 15 minutes. If any metric exceeds thresholds, investigate the cause and suggest a fix. If a server goes offline, attempt to restart it automatically.”

Tools for the Monitoring Agent

python

@tool
def get_server_metrics(server_ip: str) -> dict:
    """Get CPU, RAM, disk, and network metrics for a server via SSH."""
    # Use paramiko or subprocess to run `top`, `free`, `df` remotely
    pass

@tool
def check_latency(target_ip: str) -> float:
    """Ping a server and return latency in ms."""
    import re
    import subprocess
    result = subprocess.run(["ping", "-c", "1", target_ip], capture_output=True, text=True)
    # Parse "time=12.3 ms" from the ping output; -1.0 signals an unreachable host
    match = re.search(r"time=([\d.]+)", result.stdout)
    return float(match.group(1)) if match else -1.0

@tool
def restart_service(server_ip: str, service_name: str) -> str:
    """Restart a service on a remote server."""
    # SSH and run `systemctl restart service_name`
    pass

@tool
def diagnose_issue(metrics: dict, thresholds: dict) -> str:
    """Use LLM to diagnose what might be wrong."""
    prompt = f"Metrics: {metrics}. Thresholds: {thresholds}. What is likely causing the high usage?"
    return llm.invoke(prompt)

The Agent Loop

The agent runs continuously, making decisions about when to check metrics and how to respond.
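A minimal sketch of that loop, assuming the tools above are implemented. The server IPs and threshold values are placeholders; `get_metrics` and `diagnose` are callables wrapping the `get_server_metrics` and `diagnose_issue` tools:

```python
import time

# Hypothetical fleet and thresholds -- substitute your own servers and limits
SERVERS = ["203.0.113.10", "203.0.113.11"]
THRESHOLDS = {"cpu_pct": 90, "ram_pct": 85, "disk_pct": 80}

def check_server(metrics, thresholds):
    """Return the metric names that exceed their thresholds."""
    return [name for name, limit in thresholds.items()
            if metrics.get(name, 0) > limit]

def monitoring_loop(get_metrics, diagnose, interval_s=900):
    """Poll every 15 minutes; escalate to the LLM only on a breach."""
    while True:
        for ip in SERVERS:
            metrics = get_metrics(ip)                    # SSH metrics collection
            breaches = check_server(metrics, THRESHOLDS)
            if breaches:
                print(f"{ip}: over threshold on {breaches}")
                print(diagnose(metrics, THRESHOLDS))     # LLM-based diagnosis
        time.sleep(interval_s)
```

Keeping the threshold check in plain code and reserving the LLM for diagnosis is deliberate: the cheap, deterministic part runs every cycle, and the expensive reasoning step only fires when something is actually wrong.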


Open Claw Meets Autonomous Agents

Combine Open Claw (automated data extraction) with autonomous agents for even more power.

Use Case: Competitive Intelligence Agent

An agent that runs indefinitely with this goal:

“Monitor the top 5 competitors in my industry. Every day, visit their pricing pages, blog posts, and product changelogs. Identify any changes. If a change is significant (price drop, new feature, major content shift), generate a report and send it to my email. Also, suggest how my business should respond.”

The agent uses Open Claw scripts to scrape competitor sites. It then uses its LLM brain to analyze the changes and generate strategic recommendations. All of this happens autonomously, day after day, on a RakSmart bare metal server.


Challenges and Limitations of Autonomous Agents

Building autonomous agents is exciting, but you should understand the current limitations.

Limitation 1: Cost of Mistakes. An agent can make decisions that have real consequences. If an agent has permission to delete files, restart servers, or send emails, a hallucination could cause problems. Always start with read-only permissions and add write permissions gradually.

Limitation 2: Infinite Loops. Agents can get stuck in loops, repeating the same action over and over. Set a maximum number of iterations (e.g., 20 steps) and implement a timeout.
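Both guards can live in one small wrapper. This is a sketch; `step` is a hypothetical zero-argument function that runs one agent iteration and returns a final answer (or `None` to continue):

```python
import time

def run_with_guards(step, max_iterations=20, timeout_s=300):
    """Run `step()` repeatedly; stop on a final answer, the iteration
    cap, or the wall-clock timeout -- whichever comes first."""
    deadline = time.monotonic() + timeout_s
    for i in range(max_iterations):
        if time.monotonic() > deadline:
            return f"Aborted: timeout after {i} iterations"
        result = step()
        if result is not None:   # the agent produced a final answer
            return result
    return "Aborted: iteration limit reached"
```

LangChain's `max_iterations` and `max_execution_time` arguments provide the same two guards inside the framework; a wrapper like this is still useful around custom loops.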

Limitation 3: Context Window Limits. LLMs can only remember so much at once. For long-running tasks, you need external memory (vector databases) and strategies to summarize or discard old information.

Limitation 4: Tool Reliability. If your tools break (e.g., a website changes its HTML structure), the agent will fail. Build error handling into each tool.

Limitation 5: Resource Consumption. Autonomous agents running 24/7 consume CPU, RAM, and network bandwidth. Monitor your RakSmart server’s usage and upgrade if needed.


Security Best Practices for Autonomous Agents

When running agents that can take actions, follow these security practices.

1. Run Agents in Isolated Containers

Use Docker to containerize your agent:

bash

docker run -d --name my-agent --restart unless-stopped -v /agent-data:/data my-agent-image

2. Use Read-Only Tools Initially

Start with tools that can only read data (search, fetch, analyze). Add write tools (email, database, WordPress) only after extensive testing.

3. Implement Human-in-the-Loop for Critical Actions

For high-stakes actions (deleting data, making purchases, publishing content), require human approval. n8n workflows can pause and wait for a webhook callback.
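The simplest possible version of this gate is a console prompt; a production setup would pause an n8n workflow and wait for a webhook callback instead. The `ask` parameter is injectable so the gate can be tested (and later swapped for a webhook check):

```python
def require_approval(action_desc, perform, ask=input):
    """Gate a high-stakes action behind an explicit human 'y'.

    `perform` is a zero-argument callable that executes the real action;
    `ask` defaults to console input but can be any prompt function.
    """
    answer = ask(f"Agent wants to: {action_desc}. Approve? [y/N] ")
    if answer.strip().lower() == "y":
        return perform()
    return "Action rejected by human reviewer"
```

Wrap only the dangerous tools this way; read-only tools can keep running unattended.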

4. Log Everything

Every action the agent takes should be logged to a file or database. Review these logs regularly.
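One lightweight way to do this is a decorator that appends every tool call as a JSON line to a log file. The log path below is an assumption; point it wherever your log rotation expects:

```python
import functools
import json
import time

def logged(tool_fn, log_path="agent_actions.log"):
    """Wrap a tool so every call (args, kwargs, result) is appended
    to `log_path` as one JSON line per action."""
    @functools.wraps(tool_fn)
    def wrapper(*args, **kwargs):
        result = tool_fn(*args, **kwargs)
        entry = {
            "ts": time.time(),
            "tool": tool_fn.__name__,
            "args": repr(args),
            "kwargs": repr(kwargs),
            "result": repr(result)[:500],  # truncate large outputs
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return result
    return wrapper
```

Apply it once when assembling the tool list (e.g. wrap the underlying function before the `@tool` decorator), and the JSON-lines file becomes your audit trail.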

5. Set Spending Limits

If your agent uses paid APIs (e.g., Google Search API, OpenAI), set hard spending limits at the API level.


The Future: Multi-Agent Systems on RakSmart

The most advanced trend in AI agents is multi-agent systems — multiple specialized agents that communicate and collaborate.

On a powerful RakSmart bare metal server, you could run:

  • Researcher Agent: Finds and reads information
  • Writer Agent: Creates content based on research
  • Critic Agent: Reviews the writer’s output and suggests improvements
  • Publisher Agent: Handles WordPress integration
  • Monitor Agent: Watches all other agents and restarts them if they fail

These agents pass messages to each other, creating a miniature AI workforce that operates autonomously.
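The message passing can be as simple as named mailboxes. In this sketch the agents are stub functions; in a real system each would wrap an LLM plus tools, and the bus might be Redis or a queue service instead of an in-process dict:

```python
from collections import deque

class MessageBus:
    """Minimal mailbox system: each agent name gets a FIFO message queue."""
    def __init__(self):
        self.queues = {}

    def send(self, to, message):
        self.queues.setdefault(to, deque()).append(message)

    def receive(self, agent):
        q = self.queues.get(agent)
        return q.popleft() if q else None

bus = MessageBus()

# Stub agents -- in practice each would be an LLM-backed agent with tools
def researcher():
    bus.send("writer", {"topic": "AI agents", "notes": "key findings"})

def writer():
    msg = bus.receive("writer")
    if msg:
        draft = f"Draft about {msg['topic']} based on: {msg['notes']}"
        bus.send("critic", {"draft": draft})

researcher()
writer()
msg = bus.receive("critic")
print(msg["draft"])  # → Draft about AI agents based on: key findings
```

The monitor agent from the list above fits the same pattern: it periodically checks each queue and restarts any agent whose mailbox has stopped draining.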


Conclusion: Your Autonomous Future Starts Today

Autonomous AI agents are not science fiction. They are running today on bare metal and VPS servers just like those offered by RakSmart. With a local LLM, the LangChain framework, and a few Python scripts, you can create agents that work for you 24/7.

The three posts in this series have taken you from:

  • Blog #1: Running local AI models on RakSmart bare metal
  • Blog #2: Building automation pipelines with n8n and Open Claw
  • Blog #3: Creating autonomous agents that pursue goals independently

You now have the knowledge to transform your RakSmart infrastructure into an intelligent, automated system that operates without constant human supervision.

Start small. Build a simple agent that monitors a single metric. Once it runs reliably for a week, add another capability. Within months, you will have a digital workforce running on your RakSmart servers, handling tasks that previously consumed hours of your time.

The data center you rent from RakSmart is not just a place to store files. It is the physical home of your future AI agents. Give them life. Give them goals. And watch them work.

