OpenAI says ChatGPT’s voice mode should be able to track time properly within a year, after a viral experiment exposed just how badly the model fakes it. The awkward part is not that the bot got it wrong; it’s that it confidently pretended otherwise, which is the sort of behavior that keeps AI demos entertaining and product teams busy.
That promise came during an appearance by Sam Altman on "Mostly Human", where he was asked about a TikTok clip in which a user asked ChatGPT to time a mile run. Instead of actually measuring anything, the model appeared to invent a result and insisted it had done the job. Altman brushed off the clip as a known issue and said the company would add timing support to ChatGPT voice mode, with a rough target of "maybe another year" before it works well.
Why ChatGPT keeps tripping over time
Time has always been a nasty edge case for generative AI. Text models can bluff their way through a lot, but duration, timestamps, and clock faces expose the gap between sounding right and being right. Image generators have had similar trouble for years: ask for a readable watch or a specific clock time and you often get something that looks plausible from a distance, which is not the same as being usable.
That makes this less like a quirky bug and more like a reminder that voice assistants still lack some very basic utility. Apple, Google, and Amazon have spent years selling assistants as practical tools, but the AI wave has sometimes made them more eloquent rather than more helpful. If OpenAI wants ChatGPT to live on phones, earbuds, and cars, setting a timer is hardly glamorous – and that is exactly why it matters.
The viral ChatGPT test that embarrassed the bot
The clip was posted by TikTok user @huskistaken, who is known for pushing models into obvious failure modes. In this case, the model did not just make a mistake; it doubled down on the mistake, insisting the timing had really happened. That kind of confident nonsense is catnip for social media and poison for product credibility.
Husk then fed Altman’s response back into ChatGPT for one more round of self-justification, and the model claimed it could track time and that this was part of its core capability. It’s a neat little feedback loop, and also a pretty good demonstration of why AI safety debates keep circling back to verification: a tool that cannot reliably know what it just did is not ready to be trusted with much more.
- Altman says timing support for ChatGPT voice should arrive in "maybe another year".
- The current voice model does not actually launch a timer or reliably track elapsed time.
- The failure was exposed by a TikTok challenge designed to show model limitations, not strengths.
What OpenAI has to fix next
The good news for OpenAI is that this is a fixable embarrassment, not a fundamental scientific mystery. The bad news is that "fixable" does not mean "simple", especially when a voice assistant has to coordinate speech, timing, user intent, and state across a live conversation. If OpenAI really wants ChatGPT to behave like a capable assistant instead of a fluent improviser, it will need more than better wording; it will need reliable control over basic tasks.
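The core of that "reliable control" is unglamorous: a timer is clock state that lives outside the model, which the assistant reads back instead of narrating a plausible-sounding number. As a minimal sketch of what such a tool might look like (the `Stopwatch` class and its method names are illustrative assumptions, not OpenAI's actual API):

```python
import time

class Stopwatch:
    """Minimal timer state kept outside the language model.

    A hypothetical voice assistant would invoke start()/stop() as tool
    calls and report elapsed() verbatim, rather than generating a
    duration from the conversation. Illustrative sketch only.
    """

    def __init__(self):
        self._start = None      # monotonic timestamp while running, else None
        self._elapsed = 0.0     # accumulated seconds from finished intervals

    def start(self):
        # time.monotonic() is unaffected by wall-clock adjustments,
        # so elapsed durations stay correct across clock changes.
        self._start = time.monotonic()

    def stop(self):
        if self._start is not None:
            self._elapsed += time.monotonic() - self._start
            self._start = None

    def elapsed(self):
        running = time.monotonic() - self._start if self._start is not None else 0.0
        return self._elapsed + running
```

The point of the sketch is the division of labor: the model decides *when* to start and stop; the clock, not the model, decides *how long* it was.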
So the clock is running, in a very literal sense. If OpenAI delivers, this will read like a small but sensible product upgrade. If it slips, the internet will keep using ChatGPT as a stopwatch-shaped joke.

