These days, creating your own AI bot seems like an evening project. LLMs, ready-made frameworks, and no-code tools promise the same thing: “Plug it in and you’re done.” Bots for customer support, content generation, or education all look equally easy to build. But there’s a catch.
If you want people to trust your chatbot — especially when it answers questions on complex topics — you have to go way beyond the basics.
We’ve built educational bots used by thousands of people, including the “Ask Jung” chatbot. Along the way, we’ve learned a few important lessons. Here are five of them:
1/ The knowledge base is everything
Even GPT-4 won’t help if your sources are vague or superficial. You need high-quality, structured material behind the bot; otherwise, it just guesses. Keep in mind that an LLM can’t read a whole book at once: its context window is limited, so you have to help it by retrieving the right passages and showing it only those.
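What “retrieving the right passages” looks like in practice is a small retrieval step: split the material into chunks, score them against the question, and pass only the winners to the model. Here’s a minimal sketch; keyword overlap stands in for the embedding search a real system would use, and the file name is hypothetical.

```python
# Minimal retrieval sketch: chunk the book, rank passages against the
# question, keep only the best few for the prompt. Keyword overlap is a
# stand-in for real semantic (embedding) search.

def chunk(text: str, max_words: int = 200) -> list[str]:
    """Split source material into passages small enough to fit a prompt."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def top_passages(question: str, passages: list[str], k: int = 3) -> list[str]:
    """Rank passages by word overlap with the question; keep the top k."""
    q_words = set(question.lower().split())
    ranked = sorted(passages, key=lambda p: len(q_words & set(p.lower().split())), reverse=True)
    return ranked[:k]

book = open("jung_collected_works.txt", encoding="utf-8").read()  # hypothetical file
context = "\n\n".join(top_passages("What is the shadow?", chunk(book)))
# `context` is what actually goes into the prompt, not the whole book.
```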
2/ Don’t expect the bot to teach itself
If you want accurate, context-aware answers, you need to guide the model: highlight what matters, link key ideas, and write prompts with intent. And just as important, have someone who truly understands the subject review its answers. Without expert oversight, even a well-prompted bot can mislead your users.
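In practice, “writing prompts with intent” mostly means a deliberate system prompt: tell the model what to emphasize, how to connect ideas, and where its boundaries are. Below is a hedged sketch; the wording and the `{context}` slot are illustrative, not the exact prompt behind “Ask Jung”.

```python
# A sketch of an intent-driven system prompt, plus the role/content payload
# most chat-completion APIs accept. Wording is illustrative only.

SYSTEM_PROMPT = """You are an assistant answering questions about Jungian psychology.
Ground every answer in the PASSAGES below. State the key concept first, then
connect it to related ideas (e.g. shadow, persona, individuation).
If the passages do not cover the question, say so instead of improvising.

PASSAGES:
{context}
"""

def build_messages(context: str, question: str) -> list[dict]:
    """Assemble the message list used by most chat-completion APIs."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT.format(context=context)},
        {"role": "user", "content": question},
    ]
```

Even with a prompt like this, the expert review loop still applies: someone who knows the subject should keep sampling real answers.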
3/ Respect copyright
Not every book or course can legally be fed into your bot. If your content includes copyrighted material, get permission first or leave it out.
4/ Don’t let it make things up
By default, LLMs don’t admit uncertainty. They try to answer, even when they don’t know. You need to set clear limits. It’s better for the bot to say “I’m not sure” than to confidently give a wrong answer.
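One simple way to set those limits is to decline before the model ever answers: if retrieval finds nothing relevant, return a fixed fallback instead of calling the LLM. A sketch under that assumption; the scoring function and threshold are illustrative placeholders.

```python
# Decline up front when the knowledge base has nothing relevant, instead of
# letting the model improvise. min_score is an illustrative threshold.

FALLBACK = "I'm not sure; that question is outside the material this bot covers."

def overlap_score(question: str, passage: str) -> int:
    """Crude relevance signal: words shared between question and passage."""
    return len(set(question.lower().split()) & set(passage.lower().split()))

def guard(question: str, passages: list[str], min_score: int = 2) -> str | None:
    """Return the fallback reply if nothing relevant was retrieved, else None."""
    best = max((overlap_score(question, p) for p in passages), default=0)
    return FALLBACK if best < min_score else None
```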
5/ Tiny UX details matter
How long it takes to answer, what tone it uses, whether it shows sources — these things shape how credible your bot feels. Don’t treat them as afterthoughts.
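Showing sources, for instance, can be as small as tagging each passage with where it came from and appending that to every answer. A sketch; the `Passage` structure and field names are made up for illustration.

```python
# "Show sources" as a tiny formatting step: each passage carries its origin,
# and the answer ends with a deduplicated source line users can verify.

from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    source: str  # e.g. "Aion, ch. 2"

def format_answer(answer: str, used: list[Passage]) -> str:
    """Append a sorted, deduplicated source list to the bot's reply."""
    sources = ", ".join(sorted({p.source for p in used}))
    return f"{answer}\n\nSources: {sources}"
```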
In short: it’s easy to build a chatbot — but hard to build one people actually trust. And trust is what makes it useful.