OpenAI ChatKit Review: Technical Deep Dive and Why We Didn’t Adopt It

Mateusz Jakubczak

Published: Nov 4, 2025

6 min read


Why We Chose Not to Adopt OpenAI ChatKit

In the fast-paced world of conversational AI, the pressure to innovate is constant. At quickchat.ai, our mission is to build the most intelligent and effective AI assistants for businesses. This means we are always on the lookout for tools that can accelerate our development, enhance our product, and deliver a world-class experience to our customers. It’s the classic startup dilemma, supercharged for the AI era: when do you build, and when do you buy?

So, when OpenAI, a leader in the field, announced ChatKit, it was more than just another library release - it was a siren call. The promise was captivating: a framework-agnostic, beautiful, drop-in chat solution that handles all the front-end complexity, letting us focus on the AI magic. It seemed like the perfect shortcut to upgrading our user-facing widget, a move that could save us months of development and leapfrog the competition.

We decided to dive in headfirst. What began as an optimistic technical evaluation quickly spiraled into a deep, philosophical debate about our core architecture, our product promises, and the hidden costs of adopting a seemingly “free” and “easy” solution. This is the detailed story of our journey - the initial excitement, the complex technical hurdles, and the strategic reasons behind our ultimate decision.


Why ChatKit Seemed Like the Perfect Solution

On the surface, ChatKit is a stunningly well-designed UI toolkit. It provides the front-end components — the chat bubble, input fields, and rich media widgets — that form the “last mile” of any AI application. In an industry where user experience is paramount, ChatKit offered a way to deliver the kind of fluid, interactive conversations seen in blockbuster demos, right out of the box.

For a startup like ours, the appeal was multifaceted and incredibly strong:

Aesthetic Excellence

Let’s be honest, ChatKit looks fantastic. It promised to save our front-end team countless hours designing, building, and testing components that OpenAI had already perfected.

Accelerated Velocity

In the startup world, speed is everything. The idea of dropping in a production-ready chat interface meant we could redirect our valuable engineering resources from building UI plumbing to solving harder AI problems.

Backend-Driven UI

This was perhaps the most compelling feature. ChatKit allows the backend to send not just text, but entire UI components. Imagine a shopping assistant, instead of just describing a product, sending a fully interactive product carousel directly from a backend API call. This promised a future of dynamic, highly customized user experiences with minimal front-end changes.
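To make the idea concrete, here is a minimal sketch of what a server-driven UI event could look like. The schema below is purely illustrative (it is not ChatKit's actual wire format): the point is that the backend describes a whole widget, not just text, and the front end simply renders it.

```python
import json

def product_carousel_event(products):
    """Build a hypothetical server-driven UI event: instead of plain text,
    the backend describes an entire widget for the front end to render.
    (Illustrative schema only -- not ChatKit's actual wire format.)"""
    return {
        "type": "widget",
        "widget": {
            "kind": "product_carousel",
            "items": [
                {"title": p["title"], "price": p["price"], "image_url": p["image_url"]}
                for p in products
            ],
        },
    }

event = product_carousel_event([
    {"title": "Trail Runner X", "price": "$129", "image_url": "https://example.com/x.png"},
])
payload = json.dumps(event)  # what the backend would stream down to the UI
```

A shopping assistant answering "show me running shoes" could emit one of these events instead of a paragraph of prose, and the widget would render an interactive carousel with no front-end release required.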

With a potential enterprise customer specifically asking if we could integrate with ChatKitJS, the business case felt like a slam dunk. We were excited. This wasn’t just a new library; it felt like a strategic advantage.


The First Cracks Appear: A Tale of Two Philosophies

Full of optimism, we assigned a senior engineer to build a proof-of-concept (POC). The goal: connect our existing, sophisticated backend to the ChatKit front end. It didn’t take long for the first red flags to appear. What we thought would be a simple data-mapping exercise turned into a full-blown architectural confrontation.

The core issue wasn’t a bug or a missing feature — it was a fundamental, philosophical disagreement between how ChatKit wants to work and how our platform is designed.

ChatKit’s philosophy

The front end is king, and the backend is its servant. The UI is in control, making standard HTTP calls to the backend to fetch a stream of events that it then renders. The backend’s role is largely reactive and stateless, answering when called upon.

Our philosophy

The backend is the central nervous system. Our entire platform is built around a single, powerful endpoint over a persistent WebSocket connection.

This architecture is not an accident; it’s the key to our platform’s power. It allows our backend, built with Django and the powerful LangGraph state machine library, to be proactive and intelligent.

This isn’t just a technical preference; it’s what allows us to deliver advanced features. While a message is being generated, our backend can simultaneously:

  • Stream the initial tokens to the user for low-latency feedback
  • Run parallel checks for potential AI hallucinations
  • Monitor the conversation for keywords or sentiment that might trigger a human handoff
  • Prepare and queue up post-generation tasks
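The orchestration pattern behind that list can be sketched with plain `asyncio`. This is a toy model, not our production code (which runs on Django Channels and LangGraph), and the check functions are hypothetical stand-ins, but it shows the key property: token streaming and side-channel checks run concurrently on the backend, not as a chain of request/response calls.

```python
import asyncio

async def stream_tokens(tokens, out):
    # Stream tokens to the user as they arrive (low-latency feedback).
    for t in tokens:
        out.append(t)
        await asyncio.sleep(0)  # yield control so the side tasks can run

async def check_hallucination(tokens, flags):
    # Hypothetical parallel check running alongside generation.
    if "guaranteed" in tokens:
        flags.append("possible_hallucination")

async def monitor_handoff(tokens, flags):
    # Hypothetical keyword/sentiment monitor that could trigger a human handoff.
    if "agent" in tokens:
        flags.append("handoff_requested")

async def handle_message(tokens):
    out, flags = [], []
    # The orchestrator fans out: streaming and checks run concurrently.
    await asyncio.gather(
        stream_tokens(tokens, out),
        check_hallucination(tokens, flags),
        monitor_handoff(tokens, flags),
    )
    return out, flags

out, flags = asyncio.run(handle_message(["hello", "agent"]))
```

In a pure request/response model, each of those checks would need its own round trip or would have to block the token stream; here they all observe the same in-flight generation.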

ChatKit’s HTTP-based, request-response model fundamentally breaks this paradigm. We were being asked to trade our proactive, stateful orchestrator for a reactive, stateless servant. This architectural chasm manifested in several daunting technical hurdles that went far beyond a simple integration.


The Deal-Breakers We Couldn’t Ignore

As the technical challenges mounted, we identified two issues that were complete non-starters for us and our clients. These weren’t minor inconveniences; they were direct threats to our core product offering.

1. The Human Handoff Black Hole

A critical, non-negotiable feature for many of our enterprise clients is human handoff — the ability for a human agent to seamlessly take over a conversation from the AI. This requires true, bi-directional communication.

(You can read our detailed Product tutorial: Human Handoff to see how this feature works in practice.)

With our WebSocket architecture, this is elegant and instantaneous. Our backend can monitor a conversation and, when a handoff is triggered, instantly push that event to the user’s widget while simultaneously notifying a human agent in their dashboard. The agent can then join the same channel and start typing, all in real time.
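The essential capability is server-initiated push to multiple subscribers on the same channel. The toy `Channel` class below stands in for a persistent WebSocket connection (our production stack uses Django Channels, not this class); it shows how a single handoff event can reach the user's widget and the agent's dashboard simultaneously.

```python
import asyncio

class Channel:
    """Toy stand-in for a persistent WebSocket channel: the backend can push
    events to every subscriber at any time, unprompted."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self):
        q = asyncio.Queue()
        self.subscribers.append(q)
        return q

    async def push(self, event):
        for q in self.subscribers:
            await q.put(event)

async def main():
    channel = Channel()
    widget = channel.subscribe()     # the end user's chat widget
    dashboard = channel.subscribe()  # the human agent's dashboard

    # Backend detects a trigger and pushes the handoff to BOTH parties at once.
    await channel.push({"type": "handoff", "agent": "Alice"})
    # The agent can then speak on the same channel, in real time.
    await channel.push({"type": "agent_message", "text": "Hi, Alice here!"})

    return await widget.get(), await dashboard.get()

user_event, agent_event = asyncio.run(main())
```

No one in this picture ever has to ask "did anything happen?": the backend decides when something happens and tells everyone who needs to know.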

ChatKit’s HTTP-based model makes this nearly impossible without significant, ugly workarounds. An HTTP request is a one-way street. Once the backend starts streaming its response of chat events, the communication channel is effectively locked. It cannot receive an external, out-of-band event like “an agent wants to join this conversation now.”

The only solution would be to invent a complex long-polling system on the front end, constantly asking the backend “has anything changed yet?” This is not only inefficient and slow, but it also completely defeats the purpose of adopting a drop-in solution. We would be fighting the framework’s fundamental nature, adding layers of complexity just to replicate a feature we already had working perfectly.
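For contrast, here is roughly what that workaround looks like. The `fetch` callable is a hypothetical stand-in for an HTTP request; every empty response is a wasted round trip, and the user's perceived latency is bounded below by the polling interval.

```python
import itertools
import time

def poll_for_updates(fetch, interval=1.0, max_polls=10):
    """Sketch of the polling workaround an HTTP-only model forces:
    the front end repeatedly asks 'has anything changed yet?'
    (`fetch` is a hypothetical stand-in for an HTTP call.)"""
    for attempt in itertools.count(1):
        events = fetch()
        if events:
            return events, attempt   # finally got the event we were waiting for
        if attempt >= max_polls:
            return [], attempt       # give up after a string of wasted requests
        time.sleep(interval)

# Simulate a handoff event that only becomes visible on the third poll.
responses = iter([[], [], [{"type": "handoff"}]])
events, attempts = poll_for_updates(lambda: next(responses), interval=0)
```

Two of the three requests here return nothing at all, which is exactly the overhead a push-based channel avoids.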


2. The Customization Ceiling

While ChatKit looks great out of the box, “great” isn’t good enough for our diverse client base. Our platform’s power lies in its ability to create widgets that are a perfect extension of a client’s brand. Our clients don’t just want a chatbot; they want their chatbot — integrated seamlessly into their brand identity, looking exactly how they want it to.

We quickly discovered that ChatKit’s customization capabilities were superficial. You could change colors and some basic fonts, but that was about it. Achieving the level of deep, white-label branding our clients expect — customizing the shape of the chat bubble, the layout of the header, the animation styles — was impossible without forking the entire ChatKit library and maintaining our own version.

That ended the discussion for us. A third-party solution that forces you to fork it to meet core business requirements is not a solution; it’s a long-term maintenance liability. It negates the very reason you chose it in the first place.


The Final Verdict: Why We Walked Away

The journey had taken an ironic turn:

  • We started by wanting a nicer widget
  • We considered building it ourselves but estimated it would take a month
  • We turned to ChatKit to get a nicer widget faster
  • We were now facing a two-month, backend-only refactoring project that would force us to abandon critical features and break promises to our clients

The marketing promise clashed jarringly with our reality. OpenAI claimed that “ChatKit is a framework-agnostic, drop-in chat solution… no extra work needed.” Our extensive research and POC efforts proved that this statement comes with a giant asterisk. It’s only true if you are starting from scratch and are willing to build your entire stack according to the specific, and somewhat rigid, philosophy of the OpenAI Agents framework.

After a final, thorough internal review, the decision was unanimous: we stepped back from the ChatKit integration. The cost of adoption was not measured in dollars, but in architectural compromise, feature loss, and dangerous coupling to an external ecosystem we couldn’t control.


Lessons Learned and a Look to the Future

This experience, while frustrating, was incredibly valuable. It solidified our understanding of our own product and taught us several key lessons:

Your Architecture is Your Product’s DNA

The way you build your backend is not just a technical detail; it defines the boundaries of what your product can do. Our WebSocket-based, proactive architecture is a core feature, not an implementation choice.

Beware the “Drop-In” Solution

The easier a solution claims to be, the more deeply you should investigate its underlying assumptions. True plug-and-play is rare, and the integration cost is often hidden in architectural mismatches.

Prioritize Philosophical Alignment

Before you write a single line of integration code, ask if the new tool’s philosophy aligns with your own. If it doesn’t, you will be fighting the current for the entire lifecycle of the product.


Despite our decision, we believe ChatKit is a powerful tool in the right context. It’s an excellent choice if:

  • You are starting a new project from scratch and can design your backend from day one to align with the OpenAI Agents SDK
  • Your use case fits neatly within OpenAI’s framework and doesn’t require deep customization or features like real-time, bi-directional communication
  • You are willing to commit fully to the OpenAI ecosystem, accepting the benefits of integration and the risks of vendor lock-in

For quickchat.ai, our power comes from the deep control, flexibility, and advanced capabilities our current stack provides. Our journey with ChatKit was a crucial reminder that sometimes, the fastest way forward isn’t to take a shortcut, but to double down on the architecture that makes your product uniquely powerful.

And that’s a trade-off we’re not willing to make.