03/28/2026



Everyone is aware that ChatGPT is not infallible. It can make errors and struggles with various queries. Nonetheless, large language models (LLMs) like ChatGPT don’t always falter in the same areas. Regular users might recall instances where ChatGPT provided an incorrect response or generated something that seemed unrealistic.

LLMs are deliberately non-deterministic, so a degree of randomness in their answers is part of the design. Relying on ChatGPT for critical matters like personal relationships, finances, or health advice is risky not because the model is always wrong, but because even occasional misinformation can have severe repercussions when the topic matters, such as interpreting medical test results.

Finding specific tasks that ChatGPT performs poorly at is surprisingly challenging. After considerable testing, we discovered that ChatGPT handles many tasks quite well, including altering conversational tones, extracting text from images, and setting up alerts, which is likely a comfort to CEO Sam Altman. Nonetheless, there are vast domains where ChatGPT still falls short. This may be attributed to premature product launches, the overhyping of AI’s capabilities, or simply that the chatbot was never designed with certain functions in mind.

Storing Your Work

When using ChatGPT—whether on a free or Plus plan—you’ll notice a log of your previous interactions on the left side of the interface. This allows you to revisit older chats by clicking on them or searching for specific keywords. However, this may lead you to mistakenly believe that ChatGPT is a reliable repository for your important conversations. Unfortunately, that’s not the case. While past exchanges may be visible for up to six months, they can be erased without warning.

A case in point is a University of Cologne professor who lost two years’ worth of valuable documents, such as grant applications and teaching materials, due to a misconfiguration in his data settings. Upon reaching out to ChatGPT’s support, he learned that there was no hope of recovery for his lost work.

This incident wasn’t a bug; it’s how the system is intended to operate. Under OpenAI’s privacy and security policies, user conversations are not backed up on its servers: once deleted, they cannot be recovered. Therefore, if there are any drafts or ideas you would regret losing from your ChatGPT history, back them up somewhere else, such as OneDrive, which offers options to recover deleted work.
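One practical way to keep a local copy is ChatGPT’s built-in data export (Settings → Data controls → Export data), which produces an archive containing a `conversations.json` file. Below is a minimal sketch for turning that file into plain-text backups. The field names used here (`mapping`, `create_time`, `content.parts`) are assumptions based on the export format as it has commonly appeared, and OpenAI may change them at any time, so treat this as a starting point rather than a guaranteed parser:

```python
import json
from pathlib import Path

def extract_text(conversation):
    """Collect the messages of one exported conversation as readable text.

    The export stores messages in a 'mapping' graph; sorting by each
    message's create_time approximates the original chat order.
    """
    mapping = conversation.get("mapping", {})
    nodes = [n for n in mapping.values() if n.get("message")]
    nodes.sort(key=lambda n: n["message"].get("create_time") or 0)
    lines = []
    for node in nodes:
        msg = node["message"]
        role = msg.get("author", {}).get("role", "unknown")
        parts = msg.get("content", {}).get("parts", [])
        text = "\n".join(p for p in parts if isinstance(p, str)).strip()
        if text:
            lines.append(f"{role}: {text}")
    return "\n\n".join(lines)

def backup_export(export_path, out_dir):
    """Write each conversation in conversations.json to its own .txt file."""
    conversations = json.loads(Path(export_path).read_text(encoding="utf-8"))
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i, convo in enumerate(conversations):
        title = convo.get("title") or f"conversation-{i}"
        # Sanitise the title so it is safe to use as a filename.
        safe = "".join(c if c.isalnum() or c in " -_" else "_" for c in title)
        (out / f"{safe}.txt").write_text(extract_text(convo), encoding="utf-8")
```

Run `backup_export("conversations.json", "chatgpt-backup")` on the extracted archive, then sync the output folder to whatever storage you already trust.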

Engaging with Other Applications

OpenAI provides several methods for integration with external tools through Apps and Agent, the latter of which was formerly referred to as Operator and merged with OpenAI’s Deep Research. However, trialing the Apps functionality has proven to be frustrating for users, with many reviewers noting limitations and obstacles when trying to get Agent to perform designated tasks. The common descriptor for this experience has been “half-baked,” a term at odds with OpenAI’s description of it as “learning in public.” Regardless, many features seem to launch without full functionality.

Integrations with third-party apps frequently misunderstand user commands, making real tasks unnecessarily complicated and time-consuming compared to using each app directly. For example, ChatGPT may declare it has updated a Canva design when it has not, or relay incorrect property details from Booking.com. While companies are eager for their applications to be part of this ecosystem, the advantages to users appear to be minimal.

Agent also faces significant challenges. Although it can handle simple tasks, it often gets stymied due to technical limitations and safety protocols. Reviews from The Verge have described it as sluggish and fraught with issues, while Wired’s coverage noted that Agent often mishandled commands to the point of being called a mere proof of concept rather than a polished product ready for widespread use.

Time Efficiency

Following the announcement of ChatGPT-5.2, OpenAI claimed that its enterprise version saves workers an estimated three to ten hours each week. Naturally, any organization invested in AI’s development will tout its potential for enhancing productivity. Yet recent findings suggest a different narrative. Research conducted by the Harvard Business Review (HBR) indicated that the proliferation of subpar AI-generated content has forced businesses to spend extra hours rectifying its shortcomings.

The term “workslop” was coined by HBR to describe this type of low-quality AI output that disguises itself as viable work without significantly advancing the task at hand. This superficial and vague output necessitates further revisions from recipients, consuming valuable time in sorting through content before they can proceed with their work.

According to HBR’s ongoing research, individuals are spending nearly two hours per occurrence addressing workslop, translating to an average cost of $186 monthly for each person dealing with the fallout of poor-quality AI work. Issues in coding also persist; a study by Model Evaluation & Threat Research (METR) revealed that AI usage can prolong task completion by 19%. Interestingly, users often believe they are benefiting from AI in terms of time saved, even when the evidence suggests otherwise.

Revenue Generation

This raises an essential inquiry—how beneficial is AI currently? If its efficacy is questionable, why are corporations and entire nations investing heavily in its development, especially considering the environmental impact of data centers? According to MIT’s Project NANDA, an astonishing 95% of companies utilizing AI have yet to experience any discernible returns on their investments, which total around $35 billion. The lack of significant outcomes has led many experts to caution that we might be on the brink of an AI bubble burst reminiscent of the dotcom crash of the late 1990s.

During a session at the World Economic Forum in Davos, Microsoft CEO Satya Nadella remarked that AI firms “must reach a point where we are using this technology for practical purposes.” He cautioned that failing to do so could jeopardize their “social permission” to continue leveraging the planet’s dwindling resources for AI operations.

Despite the novelty of AI capturing public interest, consumers remain hesitant to pay for it. OpenAI is still struggling to turn a profit, even with a broader user base than any other AI competitor. To monetize its services, OpenAI is now contemplating the introduction of advertisements in its chatbots, a move that could add yet another “dreadful” feature to ChatGPT’s growing list.

