In this guide, we’ll show you how to create a Telegram bot powered by a free large language model (LLM). No prior experience is required, and we’ll use free tools like Telegram, Hugging Face, and Python.
Let’s get started!

What you’ll need:
• A Telegram account
• A Hugging Face account (free)
• Basic Python setup (Python installed, and ideally a virtual environment set up)
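If you don’t have a virtual environment yet, a minimal setup looks like this (the folder name llm-telegram-bot is just an example — use your own):

```shell
# Create a project folder (the name is an example) and an isolated environment
mkdir -p llm-telegram-bot && cd llm-telegram-bot
python3 -m venv .venv
# Activate it (on Windows: .venv\Scripts\activate)
. .venv/bin/activate
python -m pip install --upgrade pip
```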
Step 1: Create your bot with BotFather
1. Open Telegram on your device.
2. Search for BotFather and click on the ☑️ verified account.
3. Type /newbot and follow the instructions:
• Give your bot a name
• Create a username that ends with bot
4. BotFather will send you a bot token. Copy this token — you’ll need it soon.
Step 2: Get a Hugging Face API token and pick a model
1. Go to huggingface.co and sign up (free).
2. Click your profile icon → Access Tokens.
3. Click New Token, give it a name (e.g., "LLMBotToken"), and create it. A token with read access is enough for inference.
4. Copy the API token.
5. Choose a free model from the Hugging Face Model Hub. Free models have an open license (e.g., MIT or Apache 2.0). In this guide we will use DeepSeek-R1-Distill-Qwen-1.5B.
• Keep in mind that this is a reasoning model, so it's quite verbose. You can try it out directly on the Hugging Face website, or choose another one.
6. Click Deploy → Inference Providers. Here you can choose an Inference Provider to deploy the selected model, and access sample code snippets for Python, JavaScript, and cURL.
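Because DeepSeek-R1 distills print their chain of thought between <think> and </think> tags, you may want to show users only the final answer. A small helper like the following can strip the reasoning (the function name is my own):

```python
import re

def strip_reasoning(text: str) -> str:
    """Remove the <think>...</think> block that DeepSeek-R1-style models emit."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

print(strip_reasoning("<think>Let me work this out...</think>The answer is 4."))
# → The answer is 4.
```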
Step 3: Set up your project
1. Create a new folder for your project and open it in your terminal.
2. Install the libraries used to talk to the Telegram Bot API and the Hugging Face Inference API:
pip install python-telegram-bot huggingface_hub
Step 4: Write the bot code
1. Create a file named bot.py.
2. Paste this code inside the file:
from telegram import Update
from telegram.ext import ApplicationBuilder, CommandHandler, MessageHandler, ContextTypes, filters
from huggingface_hub import InferenceClient

TELEGRAM_TOKEN = 'YOUR_TELEGRAM_BOT_TOKEN'
HF_API_KEY = 'YOUR_HF_API_TOKEN'

# Client for the Hugging Face Inference Providers API
client = InferenceClient(
    provider="hf-inference",
    api_key=HF_API_KEY,
)

# Per-user chat history, keyed by Telegram chat ID
user_conversations = {}

async def start(update: Update, context: ContextTypes.DEFAULT_TYPE):
    await update.message.reply_text('Hello! Send me a message and I will reply using AI!')

async def handle_message(update: Update, context: ContextTypes.DEFAULT_TYPE):
    user_id = update.message.chat_id
    user_message = update.message.text
    await update.message.reply_text('Thinking... 🤔')
    if user_id not in user_conversations:
        user_conversations[user_id] = []
    user_conversations[user_id].append({"role": "user", "content": user_message})
    try:
        completion = client.chat.completions.create(
            model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
            messages=user_conversations[user_id],
            max_tokens=500,
        )
        response_text = completion.choices[0].message.content
        # Store the assistant's reply so the model keeps conversational context
        user_conversations[user_id].append({"role": "assistant", "content": response_text})
        await update.message.reply_text(response_text)
    except Exception as e:
        print(f"Error: {e}")
        await update.message.reply_text('Something went wrong! 😥')

def main():
    app = ApplicationBuilder().token(TELEGRAM_TOKEN).build()
    app.add_handler(CommandHandler("start", start))
    # Reply to any text message that is not a command
    app.add_handler(MessageHandler(filters.TEXT & (~filters.COMMAND), handle_message))
    print("Bot is running...")
    app.run_polling()

if __name__ == '__main__':
    main()
3. Replace YOUR_TELEGRAM_BOT_TOKEN and YOUR_HF_API_TOKEN with your tokens.
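One caveat about the code above: user_conversations grows without bound, and a very long history will eventually exceed the model's context window. A simple fix is to send only the most recent messages — a minimal sketch, where the helper name and the limit of 20 are my own choices:

```python
def recent_history(history, max_messages=20):
    """Return only the last max_messages entries so requests stay within the context window."""
    return history[-max_messages:]

# In handle_message, you would then pass:
# messages=recent_history(user_conversations[user_id])
```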
Step 5: Run your bot
1. In your terminal, run:
python bot.py
2. You should see this in the terminal:
Bot is running...
3. Now go to Telegram, find your bot (by username), and start chatting! It should reply based on the LLM’s output.
Note: Popular Hugging Face models can experience high demand. If the model is heavily used, it might take longer to respond or may be temporarily unavailable. Please be patient and try again later.
Troubleshooting
If the bot is not responding, check the following:
• Ensure the Telegram token and Hugging Face API key are correct.
• Verify that the Hugging Face model is accessible and functional (you can check the Hugging Face documentation for more details).
• Make sure your bot has been added to a chat and you are interacting with it.
March 2025