The Real Problem: Three Platforms, Three Separate Codebases
Six months ago, my team was tasked with deploying an AI chatbot for an enterprise client. They used Slack internally, their B2C customers used Telegram, and the dev team was on Discord. Three platforms, three different audiences — and I made the worst possible call: writing three separate bots.
The consequences were obvious within two weeks: three unsynchronized codebases, fixing a bug in Telegram and forgetting to update Slack, AI response logic diverging in each place. Every time the client wanted to tweak a prompt or add a feature, we had to touch three separate places. A genuine maintenance nightmare.
This isn’t unique to my team — the pattern repeats across countless chatbot projects. The “one project per platform” approach feels fast at first, but two or three months in, the accumulated technical debt is enough to drag the whole team down.
Why Multi-Platform Integration Is Hard
The core problem is that each platform has:
- Its own authentication: Telegram uses a Bot Token, Slack uses OAuth + Signing Secret, Discord uses a Bot Token + Application ID
- Different event models: Telegram uses polling or webhooks, Slack uses an Event API with HTTP endpoints, Discord uses a WebSocket gateway
- Incompatible message formats: Telegram’s Markdown differs from Slack Block Kit differs from Discord Embeds
- Different rate limits: Telegram limits to 30 messages/second/bot, Slack has Tier 1–4 API limits, Discord has per-guild rate limits
Trying to abstract everything into a common interface without fully understanding each platform first? The result is usually over-engineering from the start. Or worse: a leaky abstraction that bleeds everywhere and falls apart in practice.
Three Approaches — and the Problem with Each
Option 1: Off-the-shelf bot frameworks (Botpress, Rasa, Microsoft Bot Framework)
I tried Botpress and dropped it after three days. When you need to inject a custom LLM — like GPT-4 with a specific system prompt, or a local model via Ollama — these frameworks add too many abstraction layers. Debugging AI response errors is painful because you’re tracing through 5–6 layers. Rule-based dialog trees? Fine. AI chatbots where you need control at the prompt level? Avoid.
Option 2: Shared library — a common Python package
Better than option 1, but still structurally flawed: three separate entry points, three independently running servers/processes. Updating the AI model or changing conversation logic still requires three deploys. For a small team, that overhead compounds faster than you’d expect.
Option 3: Unified adapter pattern — one core, multiple adapters
After a full refactor, this is the approach I settled on. The core idea is simple: completely separate AI logic from platform integration. One process, one codebase, multiple platforms.
The Real Architecture: Unified Adapter Pattern
The project structure looks like this:
ai-chatbot/
├── core/
│ ├── ai_handler.py # All AI logic (OpenAI/Gemini/Ollama)
│ ├── conversation.py # Conversation history management
│ └── message.py # Unified message model
├── adapters/
│ ├── telegram_adapter.py
│ ├── slack_adapter.py
│ └── discord_adapter.py
├── main.py # Entry point: runs all adapters
└── requirements.txt
The most critical piece is the Unified Message Model — a standard dataclass that every adapter must convert its input into:
# core/message.py
from dataclasses import dataclass
from typing import Optional
@dataclass
class UnifiedMessage:
    platform: str    # "telegram", "slack", "discord"
    user_id: str     # Platform-specific user ID
    chat_id: str     # Channel/chat to reply to
    text: str        # Message text
    username: Optional[str] = None
    reply_to_message_id: Optional[str] = None
The AI handler receives a UnifiedMessage and returns plain text. Each adapter is responsible for formatting that text according to its platform’s conventions:
# core/ai_handler.py
from openai import AsyncOpenAI
from .conversation import ConversationManager
from .message import UnifiedMessage
client = AsyncOpenAI()
conversation_mgr = ConversationManager()
async def process_message(msg: UnifiedMessage) -> str:
    # Fetch conversation history keyed by user_id (cross-platform)
    history = conversation_mgr.get_history(msg.user_id)
    history.append({"role": "user", "content": msg.text})

    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a helpful AI assistant."},
            *history
        ],
        max_tokens=1000
    )

    reply = response.choices[0].message.content
    conversation_mgr.add_reply(msg.user_id, reply)
    return reply
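The `conversation.py` module isn't shown in the article; a minimal in-memory sketch that matches the `get_history`/`add_reply` calls above (the message cap and its default value are my own assumptions) might look like:

```python
# core/conversation.py — minimal in-memory sketch
from collections import defaultdict


class ConversationManager:
    """Keeps a capped message history per user key."""

    def __init__(self, max_messages: int = 20):
        self.max_messages = max_messages
        self._histories: dict[str, list[dict]] = defaultdict(list)

    def get_history(self, user_id: str) -> list[dict]:
        # Return the mutable list so the caller can append the user turn
        return self._histories[user_id]

    def add_reply(self, user_id: str, reply: str) -> None:
        history = self._histories[user_id]
        history.append({"role": "assistant", "content": reply})
        # Drop the oldest turns once we exceed the cap
        del history[:-self.max_messages]
```

In production you'd back this with Redis or a database so history survives restarts, but the interface stays the same.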
Telegram Adapter
# adapters/telegram_adapter.py
from telegram import Update
from telegram.ext import Application, MessageHandler, filters
from core.message import UnifiedMessage
from core.ai_handler import process_message
async def handle_message(update: Update, context):
    msg = UnifiedMessage(
        platform="telegram",
        user_id=str(update.effective_user.id),
        chat_id=str(update.effective_chat.id),
        text=update.message.text,
        username=update.effective_user.username
    )
    reply = await process_message(msg)
    await update.message.reply_text(reply)

def create_telegram_app(token: str):
    app = Application.builder().token(token).build()
    app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, handle_message))
    return app
Discord Adapter
# adapters/discord_adapter.py
import discord
from core.message import UnifiedMessage
from core.ai_handler import process_message
intents = discord.Intents.default()
intents.message_content = True  # required since discord.py 2.0 to read message text
client = discord.Client(intents=intents)

@client.event
async def on_message(message: discord.Message):
    if message.author == client.user:
        return
    if not client.user.mentioned_in(message):
        return
    # Strip both mention forms: <@id> and the nickname variant <@!id>
    text = (message.content
            .replace(f'<@{client.user.id}>', '')
            .replace(f'<@!{client.user.id}>', '')
            .strip())
    msg = UnifiedMessage(
        platform="discord",
        user_id=str(message.author.id),
        chat_id=str(message.channel.id),
        text=text,
        username=message.author.name
    )
    async with message.channel.typing():
        reply = await process_message(msg)
        await message.reply(reply)
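As for the message-format differences called out earlier, each adapter can run the AI's reply through a small helper before sending; a deliberately naive sketch (`format_for_platform` is a hypothetical name, and real Slack/Telegram output needs proper escaping of special characters):

```python
def format_for_platform(platform: str, text: str) -> str:
    """Adapt the model's Markdown-ish output to each platform's syntax."""
    if platform in ("slack", "telegram"):
        # Slack mrkdwn and Telegram's legacy Markdown both use *bold*,
        # while models tend to emit **bold**
        return text.replace("**", "*")
    # Discord renders standard Markdown as-is
    return text
```

The point is that formatting lives at the adapter boundary, so the AI handler never needs to know which platform it is talking to.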
Running Everything in One Process
# main.py
import asyncio
import os

from adapters.telegram_adapter import create_telegram_app
from adapters.discord_adapter import client as discord_client

async def main():
    telegram_app = create_telegram_app(os.getenv("TELEGRAM_BOT_TOKEN"))

    # run_polling() manages its own event loop and can't be awaited,
    # so drive the Application lifecycle manually and run both
    # adapters concurrently on the one loop we already have
    async with telegram_app:
        await telegram_app.start()
        await telegram_app.updater.start_polling()
        # start() runs until the Discord client disconnects
        await discord_client.start(os.getenv("DISCORD_BOT_TOKEN"))
        await telegram_app.updater.stop()
        await telegram_app.stop()

if __name__ == "__main__":
    asyncio.run(main())
Issues That Only Surface in Production
After 6 months in production, this architecture holds up well — but a few problems only emerge once you actually deploy:
Cross-platform conversation history: The same person using both Telegram and Discord has two different user IDs, and two separate histories. No need to build a complex user-linking system: using platform:user_id as the key is sufficient.
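Concretely, the composite key is a one-line change; a sketch of the helper (`history_key` is a name I'm introducing) that `process_message` would use instead of passing `msg.user_id` directly:

```python
def history_key(msg_platform: str, msg_user_id: str) -> str:
    # "telegram:12345" and "discord:12345" stay separate histories,
    # and numeric IDs from different platforms can no longer collide
    return f"{msg_platform}:{msg_user_id}"
```

Then `conversation_mgr.get_history(history_key(msg.platform, msg.user_id))` replaces the original call.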
Discord and asyncio conflicts: discord.py's blocking client.run() spins up its own event loop, which clashes with a loop already running under asyncio.gather. Stick to the awaitable client.start(), do one-time setup in setup_hook, or run Discord in a separate thread and talk to it with asyncio.run_coroutine_threadsafe.
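If you take the separate-thread route, the core of it is giving the client its own event loop; a stdlib-only sketch (`run_in_dedicated_loop` is a name I'm introducing; you'd pass something like `lambda: discord_client.start(token)` as the factory):

```python
import asyncio
import threading
from typing import Awaitable, Callable


def run_in_dedicated_loop(coro_factory: Callable[[], Awaitable]) -> threading.Thread:
    """Run a coroutine on its own event loop in a daemon thread."""
    def _runner():
        # asyncio.run creates (and later closes) a fresh loop for this
        # thread, so it can't clash with the main loop driving the
        # other adapters
        asyncio.run(coro_factory())

    thread = threading.Thread(target=_runner, daemon=True)
    thread.start()
    return thread
```

Anything the main loop needs to push into that thread's loop then goes through asyncio.run_coroutine_threadsafe.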
AI API rate limiting: During peak hours, all three platforms flood messages simultaneously — and the OpenAI API throttles immediately. Adding a semaphore to ai_handler is the fastest way to keep things under control:
# In ai_handler.py
import asyncio

_semaphore = asyncio.Semaphore(5)  # at most 5 concurrent AI calls

async def process_message(msg: UnifiedMessage) -> str:
    async with _semaphore:
        ...  # OpenAI call goes here
Results After 6 Months
Now when a client asks to update the system prompt or swap out the model, I change one place in ai_handler.py and deploy once. Bug fixes are the same — no more “fixed it in Telegram but forgot Slack.”
One practical note on Slack: that adapter is more complex than the other two because Slack requires event verification handling and uses slack-bolt with its own HTTP server. Slack works best on a dedicated port (3000 is a common convention) behind an nginx reverse proxy — don’t try to squeeze it into the same event loop as Telegram and Discord.
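For reference, here is roughly what that adapter's shape looks like with slack-bolt's AsyncApp; a sketch under the assumption of the Events API over HTTP (field names follow Slack's message event payload, and this is not production-ready):

```python
# adapters/slack_adapter.py — sketch
import os

from slack_bolt.async_app import AsyncApp

from core.message import UnifiedMessage
from core.ai_handler import process_message

app = AsyncApp(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],  # request verification
)

@app.event("message")
async def handle_slack_message(event, say):
    # Skip bot messages, edits, etc., which arrive with a subtype
    if event.get("subtype"):
        return
    msg = UnifiedMessage(
        platform="slack",
        user_id=event["user"],
        chat_id=event["channel"],
        text=event["text"],
    )
    reply = await process_message(msg)
    await say(reply)

if __name__ == "__main__":
    app.start(port=3000)  # bolt's own HTTP server, behind nginx
```

That final app.start() call is exactly the separate HTTP server mentioned above, which is why Slack lives on its own port rather than inside the shared event loop.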
400 lines of Python for three platforms — less than the total code for a single platform if you’d built them separately. If your project needs to scale across multiple channels, investing in the right architecture upfront is far cheaper than refactoring once the codebase has grown.

