Building AI NPCs with Player2 API
Integrate the Player2 API into your game/mod to create an autonomous, AI-driven NPC.
This guide walks you through the code you will need to replicate a Minecraft AI agent like ChatClef.
1. Developer API Documentation
This tutorial uses the ChatClef example project on GitHub as a reference implementation. Access the live API docs from http://localhost:4315/docs when the Player2 App is running.
Our API provides LLM, TTS, and STT capabilities, among others. Example source code:
Player2 API package: https://github.com/elefant-ai/chatclef/tree/main/src/main/java/adris/altoclef/player2api/
ConversationHistory: https://github.com/elefant-ai/chatclef/blob/main/src/main/java/adris/altoclef/player2api/ConversationHistory.java
AICommandBridge: https://github.com/elefant-ai/chatclef/blob/main/src/main/java/adris/altoclef/player2api/AICommandBridge.java
Player2APIService (game-key example): https://github.com/elefant-ai/chatclef/blob/main/src/main/java/adris/altoclef/player2api/Player2APIService.java#L31
2. Health Check (Heartbeat)
We recommend pinging the health endpoint once per minute while your integration is running. Our revenue-sharing reward program is based on each game/mod's total play time rather than downloads, so this heartbeat is what lets us calculate your developer rewards correctly.
// Send a heartbeat every 60 seconds
public static void sendHeartbeat() {
    try {
        System.out.println("Sending Heartbeat");
        Map<String, JsonElement> resp = sendRequest("/v1/health", false, null);
        if (resp.containsKey("client_version"))
            System.out.println("Heartbeat successful");
    } catch (Exception e) {
        System.err.printf("Heartbeat failed: %s%n", e.getMessage());
    }
}

// In AltoClef.java: fire the heartbeat once per minute from the tick loop
long now = System.nanoTime();
if (now - lastHeartbeatTime > 60_000_000_000L) { // 60 seconds in nanoseconds
    aiBridge.sendHeartbeat();
    lastHeartbeatTime = now;
}
Tip: Include your game's name in the player2-game-key header for proper attribution in reward calculations. In the example code, replace chatclef with your own game's identifier.
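If your integration builds HTTP requests by hand, the header can be attached when the connection is created. Below is a minimal sketch of what a sendRequest helper might look like, assuming a plain HttpURLConnection client and Gson for JSON; the helper names and response handling are illustrative, not the actual ChatClef implementation (see the Player2APIService link above for the real one).

// Minimal sketch (illustrative names), using java.net.HttpURLConnection and Gson.
// The "player2-game-key" header attributes usage to your game for rewards.
private static final String BASE_URL = "http://localhost:4315";
private static final String GAME_KEY = "chatclef"; // replace with your game's identifier

public static Map<String, JsonElement> sendRequest(String endpoint, boolean isPost, JsonObject body) throws IOException {
    HttpURLConnection conn = (HttpURLConnection) new URL(BASE_URL + endpoint).openConnection();
    conn.setRequestMethod(isPost ? "POST" : "GET");
    conn.setRequestProperty("player2-game-key", GAME_KEY);
    conn.setRequestProperty("Content-Type", "application/json");
    if (isPost && body != null) {
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.toString().getBytes(StandardCharsets.UTF_8));
        }
    }
    try (Reader reader = new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8)) {
        // asMap() requires Gson 2.10+; older versions can iterate entrySet() instead
        return JsonParser.parseReader(reader).getAsJsonObject().asMap();
    }
}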
3. Agent Architecture Overview
The AI agent comprises four layers (see the skeletal sketch after this list):
Game Mod Layer (AltoClef)
Hooks into the game (e.g., Minecraft + Fabric), gathers world and agent status, handles chat I/O.
API Bridge (AICommandBridge)
Manages message queuing, LLM calls, and dispatches in-game commands.
Conversation History (ConversationHistory)
Maintains a rolling log of system, user, and assistant messages; auto-summarizes older context.
LLM Service (Player2APIService)
Wraps calls to your LLM endpoint, sending JSON-wrapped messages and receiving JSON actions.
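As a rough picture of how these layers reference each other, here is a skeletal sketch; the constructors and method bodies are simplified for illustration, so see the linked sources for the real classes.

// Skeletal wiring only; bodies and constructors are simplified.
public class AltoClef {                                  // 1. Game Mod Layer
    private final AICommandBridge aiBridge = new AICommandBridge();

    public void onGameTick()       { aiBridge.onTick(); }               // ~20 Hz
    public void onChat(String msg) { aiBridge.addMessageToQueue(msg); } // chat I/O
}

class AICommandBridge {                                  // 2. API Bridge
    private final ConversationHistory history = new ConversationHistory(); // 3. Rolling context

    void addMessageToQueue(String msg) { /* see 5.2 */ }
    void onTick()                      { /* see 6; LLM calls go through Player2APIService (4) */ }
}

class ConversationHistory { /* rolling log and summaries, see 4 */ }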
4. Conversation History: Rolling Context & Summaries
ConversationHistory keeps up to 64 entries. When it exceeds that:
1. Extract the earliest ~48 messages.
2. Send them to the LLM with a summarize prompt.
3. Insert the summary as a single assistant message at the top, just after the system prompt.
4. Persist the trimmed history to disk.
public void addHistory(JsonObject text, boolean doCutOff) {
    conversationHistory.add(text);
    if (doCutOff && conversationHistory.size() > MAX_HISTORY) {
        // Summarize the oldest messages; index 0 is always the system prompt
        List<JsonObject> toSummarize = new ArrayList<>(conversationHistory.subList(1, SUMMARY_COUNT + 1));
        String summary = summarizeHistory(toSummarize);
        if (summary.isEmpty()) {
            // Summarization failed: just drop the oldest non-system message
            conversationHistory.remove(1);
        } else {
            JsonObject systemPrompt = conversationHistory.get(0);
            int tailStart = conversationHistory.size() - (MAX_HISTORY - SUMMARY_COUNT);
            List<JsonObject> tail = new ArrayList<>(
                    conversationHistory.subList(tailStart, conversationHistory.size()));
            conversationHistory.clear();
            conversationHistory.add(systemPrompt);
            JsonObject summaryMsg = new JsonObject();
            summaryMsg.addProperty("role", "assistant");
            summaryMsg.addProperty("content", "Summary of earlier events: " + summary);
            conversationHistory.add(summaryMsg);
            conversationHistory.addAll(tail);
        }
        if (historyFile != null)
            saveToFile();
    } else if (doCutOff && conversationHistory.size() % 8 == 0 && historyFile != null) {
        // Periodic persistence even when no cutoff happens
        saveToFile();
    }
}
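summarizeHistory is called above but not shown. Here is a minimal sketch of one way to implement it, assuming Player2APIService.completeConversation accepts a list of role/content messages and returns the summary text in its "message" field; the prompt wording is illustrative:

// Illustrative: asks the LLM to compress old turns into one short paragraph.
private String summarizeHistory(List<JsonObject> toSummarize) {
    try {
        List<JsonObject> request = new ArrayList<>();
        JsonObject sys = new JsonObject();
        sys.addProperty("role", "system");
        sys.addProperty("content", "Summarize the following conversation in one short paragraph, "
                + "keeping goals, promises, and important world facts.");
        request.add(sys);
        request.addAll(toSummarize);
        JsonObject response = Player2APIService.completeConversation(request);
        return Utils.getStringJsonSafely(response, "message");
    } catch (Exception e) {
        return ""; // empty string tells addHistory to drop the oldest message instead
    }
}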
Other key method:
copyThenWrapLatestWithStatus(world, agent, debug)
Wraps the last user turn with worldStatus, agentStatus, and debug logs before sending to the LLM.
Tip: We wrap game info in the last user message rather than the system prompt to maximize prefix cache reuse (e.g., DeepSeek's KV cache) by keeping the system message stable between calls. This reduces latency and cost. We wrap a copy of the last message instead of storing game info in the conversation history itself to reduce token usage.
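A minimal sketch of that wrapping step, assuming messages are plain role/content JSON objects; the status labels are illustrative:

// Copies the history and augments only the newest message with live game state,
// so the system prompt (index 0) stays byte-identical between calls.
public List<JsonObject> copyThenWrapLatestWithStatus(String worldStatus, String agentStatus, String debugMsgs) {
    List<JsonObject> copy = new ArrayList<>();
    for (JsonObject msg : conversationHistory) {
        copy.add(msg.deepCopy());
    }
    JsonObject latest = copy.get(copy.size() - 1);
    String original = latest.get("content").getAsString();
    latest.addProperty("content",
            "World status:\n" + worldStatus
            + "\nAgent status:\n" + agentStatus
            + "\nDebug log:\n" + debugMsgs
            + "\nUser message:\n" + original);
    return copy;
}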
5. AICommandBridge: Chat Queue & Command Orchestration
5.1 Initialization
public class AICommandBridge {
    private ConversationHistory conversationHistory;
    private final Queue<String> messageQueue = new ConcurrentLinkedQueue<>();
    private boolean llmProcessing = false, eventPolling = false;
    public static final ExecutorService llmThread = Executors.newSingleThreadExecutor();
    // ... other fields ...
}
conversationHistory: stores system, user, and assistant messages.
messageQueue: buffers incoming events (player chat, game updates).
llmThread: single-threaded executor for LLM calls.
5.2 Receiving & Queueing Messages
public void addMessageToQueue(String message) {
    if (message == null || message.equals(_lastQueuedMessage)) return; // skip duplicates
    _lastQueuedMessage = message;
    messageQueue.offer(message);
    if (messageQueue.size() > 10) {
        messageQueue.poll(); // drop the oldest message to cap the queue
    }
}
Skips duplicates to avoid loops.
Caps the queue at 10 messages for responsiveness.
6. The onTick Loop
Executed every game tick (~20 Hz):
public void onTick() {
    if (messageQueue.isEmpty()) return;
    if (!eventPolling && !llmProcessing) {
        eventPolling = true;
        String next = messageQueue.poll();
        conversationHistory.addUserMessage(next);
        if (messageQueue.isEmpty()) {
            processChatWithAPI(); // queue drained: batch everything into one LLM call
        } else {
            eventPolling = false; // more queued: drain one message per tick
        }
    }
}
When the agent is not already polling a message and no LLM call is in flight, dequeue one message into history per tick.
When the queue empties, trigger processChatWithAPI().
Batch-process multiple user messages in a single LLM call.
This design handles fragmented or multi-agent chat efficiently and prevents overloading the LLM service. When implemented correctly, agents will effectively take turns speaking without centralised control.
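How onTick gets called depends on your engine. In a Fabric mod, for example, it can be hooked up through the Fabric API's client lifecycle events; a sketch, assuming the standard ClientTickEvents callback:

// In your ClientModInitializer (Fabric API lifecycle events assumed).
import net.fabricmc.fabric.api.client.event.lifecycle.v1.ClientTickEvents;

@Override
public void onInitializeClient() {
    ClientTickEvents.END_CLIENT_TICK.register(client -> aiBridge.onTick());
}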
7. LLM Interaction & JSON Protocol
The system prompt template is defined in the AICommandBridge source:
https://github.com/elefant-ai/chatclef/blob/main/src/main/java/adris/altoclef/player2api/AICommandBridge.java#L29
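The exact wording lives at the link above; what matters is that it pins the model to the JSON schema parsed below. A condensed, illustrative version (not the real template):

// Condensed illustration only; see the linked source for the real template.
private static final String SYSTEM_PROMPT = """
        You are %s, an NPC in Minecraft. You can chat with players and run game commands.
        Always reply with a single JSON object of the form:
        {"reason": "...", "command": "...", "message": "..."}
        Use an empty string for "command" when no action is needed, and keep
        "message" under 250 characters.
        """;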
public void processChatWithAPI() {
    llmThread.submit(() -> {
        llmProcessing = true;
        try {
            String agentStatus = AgentStatus.fromMod(mod).toString();
            String worldStatus = WorldStatus.fromMod(mod).toString();
            String debugMsgs = altoClefMsgBuffer.dumpAndGetString();
            var wrapped = conversationHistory.copyThenWrapLatestWithStatus(worldStatus, agentStatus, debugMsgs);
            JsonObject response = Player2APIService.completeConversation(wrapped);
            conversationHistory.addAssistantMessage(response.toString());
            String llmMsg = Utils.getStringJsonSafely(response, "message");
            String llmCommand = Utils.getStringJsonSafely(response, "command");
            if (!llmMsg.isEmpty())
                mod.logCharacterMessage(llmMsg, character, getPlayerMode());
            if (!llmCommand.isEmpty())
                executeCommand(llmCommand);
        } finally {
            // Always release the flags, even if the LLM call throws
            llmProcessing = false;
            eventPolling = false;
        }
    });
}
Expected LLM response format:
{
  "reason": "Brief internal rationale.",
  "command": "mine_block diamond_ore",
  "message": "Sure, I’ll dig some diamonds!"
}
reason: Internal debugging/thought trace that helps the agent think about what to do and say; this makes the model behave like a reasoning ("thinking") model.
command: A valid game command.
message: NPC chat reply (≤250 characters).
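Utils.getStringJsonSafely is the null-safe accessor used in processChatWithAPI above. A minimal sketch with Gson (the real helper may differ):

// Returns the string value for key, or "" if it is missing or not a string,
// so a model that omits "command" or "message" never causes an exception.
public static String getStringJsonSafely(JsonObject obj, String key) {
    JsonElement el = obj.get(key);
    if (el == null || !el.isJsonPrimitive() || !el.getAsJsonPrimitive().isString()) {
        return "";
    }
    return el.getAsString();
}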
8. Executing Actions & Feedback
cmdExecutor.execute(
        cmdExecutor.getCommandPrefix() + llmCommand,
        () -> addMessageToQueue("Command feedback: " + llmCommand + " finished."),
        err -> addMessageToQueue("Command feedback: " + llmCommand + " FAILED: " + err));
Executes the LLM’s command with the mod prefix.
Handles @stop, success, and error callbacks separately (see the sketch after the example below):
Stop: halts the current plan with no follow-up.
Success: enqueues feedback for the next planning step.
Error: triggers replanning based on error.
Feedback allows the NPC to chain actions autonomously (e.g., craft armour, then fight).
Example: User asks to prepare for the Wither → NPC fetches a diamond chestplate → success feedback → NPC crafts a diamond sword.
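Putting the callbacks together, an executeCommand dispatcher might look like the sketch below; the stop-detection check and empty callbacks are illustrative, and cmdExecutor is the mod's existing command runner:

// Illustrative: the stop command ends the chain with no feedback (so the NPC
// does not immediately replan); all other commands report their outcome back.
private void executeCommand(String llmCommand) {
    String full = cmdExecutor.getCommandPrefix() + llmCommand;
    if (llmCommand.startsWith("stop")) {
        cmdExecutor.execute(full, () -> {}, err -> {});
        return;
    }
    cmdExecutor.execute(
            full,
            () -> addMessageToQueue("Command feedback: " + llmCommand + " finished."),
            err -> addMessageToQueue("Command feedback: " + llmCommand + " FAILED: " + err));
}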
9. Flow Summary
Event arrives → addMessageToQueue()
onTick → drain queue into history when idle
Queue empty → call processChatWithAPI()
LLM reply → log chat + issue command
Execute command → run in-game action
Action done → feedback enqueued for NPC chat
Our mission is to make AI-powered games accessible to players and indie developers by providing an easy-to-use, cost-effective API through our app.
I hope this guide helps you build your own AI NPC. Don’t hesitate to reach out if you have questions or need assistance.
Why Use the Player2 App
Seamless Authentication
Players only need to keep the Player2 App open (no extra API keys or logins) and can enjoy multiple AI mods/games.

Zero Developer Costs
No fees during development or after release; we share revenue with you so you can focus on building, not billing.

Built-In Security & Services
We handle LLM tokens, rate limits, and security. Our STT/TTS is fully managed, with no audio integration needed.

Continuous Improvements
We constantly test new models for gaming agents, update our API, and add features so you can focus on game development.