A lot of people have written a lot of words about AI recently. There are plenty of good and interesting takes, but I rarely see real-world examples of people actually trying to use it, with its limitations as well as its successes. This is a very short set of examples from friends who are using it in a professional capacity, plus some experiences of my own.
“The Good”
Copywriting: numerous friends with small ventures of their own have leveraged free LLM tools to write copy for them. They do this for a few reasons:
It helps build out a web presence with lots of easy-to-produce blog posts.
They are bootstrapping and can’t afford to pay a copywriter. These tools are mostly free, so why not use them?
Ideation: at some large companies I’ve worked at, people use these tools to generate ideas and then build their own work off the suggestions.
Contextualizing information: you can take the same basic copy and have an AI immediately rewrite it for another technology or in a different tone.
Basically, they are generating slop that might help their company by building out a web presence.
Image generation: I know people using simple AI images for similar reasons as above. It’s copy and slop, but it has a place in building out an ecosystem, fleshing out a website, or adding flavor to a Substack.
“The Meh”
AI chat bots: These aren’t really new with the current crop of AI tools, but an AI chat bot can be trained on existing help-center documentation for a product and produce a passable, low-cost automated answer service.
The reason this is ‘meh’ is that it has been around for a while, and I think most of the impact has already been felt by the market. Customer support teams have already gone through some of this with more basic automated chat bots. The LLM-based AIs here are chasing very slight incremental gains.
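The core trick behind these bots, pulling up the closest help-center article for a given question, can be sketched in a few lines. This is a toy illustration with made-up article names and text; a production bot would use embeddings or an LLM on top, but plain word overlap shows the idea:

```python
# Toy help-center retriever (all article names and text are invented).
from collections import Counter

ARTICLES = {
    "reset-password": "To reset your password, open Settings, choose "
                      "Reset Password, and follow the emailed link.",
    "billing": "Invoices are issued monthly. Update your card under "
               "Billing in your account settings.",
    "export-data": "You can export your data as CSV from the Reports page.",
}

def tokenize(text):
    # Lowercase and strip basic punctuation so words compare cleanly.
    return [w.strip(".,?!").lower() for w in text.split()]

def best_article(question):
    """Return the article whose word overlap with the question is largest."""
    q = Counter(tokenize(question))
    def overlap(key):
        return sum((q & Counter(tokenize(ARTICLES[key]))).values())
    return max(ARTICLES, key=overlap)

print(best_article("How do I reset my password?"))  # reset-password
```

The meh-ness is visible even here: the matching is shallow, and the LLM layer on top mostly just makes the retrieved answer read more smoothly.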
A better search: This is how most people now experience AI: the generated answers Google serves up. The thing is, if Google pushes this too hard it will kill off its own advertising business, which makes money off clicks. If people click away from Google to a website, Google can make money off ads; if people visit fewer websites and just stay on the search answers page, the earning opportunities are more limited. This is a meh for businesses like Google, but also a threat to independent websites still surviving off ad revenue. We also see multiple bad hallucinations appear in these answers.
Project planning tools: A lot of these are less LLM-based but seem to be seeing some success. One important point here is to recognize that AI is a hype term now and may be used to describe older forms of automation or machine learning that are genuinely different from the expensive generative LLM systems.
“The Bad”
Disclaimer: a few of these could arguably go in “Meh,” but my bias has me put them here.
The problems that appear here in the bad are all rooted in how these LLMs work, which is why I’m bearish on this technology. Using more resources to cut down on errors takes more time and costs more compute. At some point those costs become real, and even then we see numerous examples of AI hallucinating despite safeguards.
Data analysis: Take a spreadsheet with categorized data and ask ChatGPT to analyze it. It’s shockingly hit and miss. You have to spend time coaching it, and even then it can make basic mistakes. This is what I’ve personally encountered with a paid version of the product. It would spit out impossible results, off not by small margins but by huge ones. In theory this would have been useful; the work was about categorizing content, but the system struggled, and it didn’t get any better. You could make a request, get decent results, and a day later get awful ones.
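One cheap defense here is to mechanically cross-check whatever totals the assistant reports against the actual rows, since a category count larger than the spreadsheet itself is impossible by definition. A minimal sketch, with invented rows and an invented “reported” answer:

```python
# Sanity-checking an assistant's claimed category counts against the real
# rows. All data and the "reported" numbers below are invented examples.
from collections import Counter

rows = [
    {"id": 1, "category": "howto"},
    {"id": 2, "category": "news"},
    {"id": 3, "category": "howto"},
    {"id": 4, "category": "opinion"},
]

# Pretend the assistant reported these totals; "news": 5 is impossible
# because the whole sheet only has 4 rows.
reported = {"howto": 2, "news": 5, "opinion": 1}

actual = Counter(r["category"] for r in rows)

def bad_claims(reported, actual, total_rows):
    """Return {category: (claimed, actual)} for every claim that is wrong."""
    return {
        cat: (n, actual.get(cat, 0))
        for cat, n in reported.items()
        if n != actual.get(cat, 0) or n > total_rows
    }

print(bad_claims(reported, actual, len(rows)))  # {'news': (5, 1)}
```

Of course, if you have to write this check yourself anyway, the time the AI was supposed to save starts evaporating, which is rather the point.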
Financial and legal: A friend of mine was looking for some financial examples and tried using Gemini. The AI straight up hallucinated very believable cases of financial fraud as examples. They were not real, though, and could not be referenced. Total nonsense and a waste of time, because said friend had to spend extra time trying to verify each claim before eventually giving up. Most of these models also don’t have access to certain SEC information, even when it is publicly available. In both finance and law there is rote reporting that, again in theory, these AI tools could take over, BUT if they make errors they are worthless. Huge costs for missed information and the propensity of these LLMs to hallucinate even with safeguards could simply be unacceptable.
Predictions?
Predicting things is a fool’s game. I’ll be a fool. I see the AI hype continuing for some time but without major innovation. The paradigm shifts being promised seem unlikely to materialize, in part because of the nature of how these technologies work. When things don’t matter, like high-school or college papers on Oscar Wilde, these tools work just fine, but as soon as you go into the real world it gets murky.
The modern world is actually enabled by precision. Everything relies on it, from our computers to jet planes to navigation. This technology is anything but precise; it is inconsistently imprecise. The models coming out are not solving this, and people find flaws and hallucinations in every new one. The developers have no solution other than spending more tokens to try to check things. Malicious fake data also exists in the ecosystem and will further confuse things. Things will get really bad if this technology ends up in engineering and construction, where a hallucinated decimal point could be the difference in a tolerance that sees an engine explode or a bridge collapse.
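To make the decimal-point worry concrete, here is a toy tolerance check with invented numbers: slipping one decimal place in the tolerance spec turns a correctly rejected part into an accepted one.

```python
# Toy illustration (invented numbers): a 25 mm nominal shaft measured at
# 24.9 mm. The correct spec (+/-0.05 mm) rejects it; a hallucinated decimal
# point (+/-0.5 mm) happily accepts an out-of-spec part.
def within_tolerance(measured_mm, nominal_mm, tol_mm):
    """True if the measured dimension falls within nominal +/- tolerance."""
    return abs(measured_mm - nominal_mm) <= tol_mm

print(within_tolerance(24.9, 25.0, 0.05))  # False: correctly rejected
print(within_tolerance(24.9, 25.0, 0.5))   # True: slipped decimal accepts it
```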
If anything, I think people will TRY to make these programs work, and people could lose out and suffer white-collar job loss. Then perhaps things get generally worse, and companies quietly shelve some of these tools and rehire people. The promises will not quite pan out, but the flashy stuff will remain a hook for slop production of imagery and business-jargon copy.
Where things are less clear is whether people develop something in a different way and move away from the LLM model of simulated intelligence and question answering. If that happens, things might change, but the constraints of the current models, and the entire basis of the technology, seem trapped in a hallucinated world.
Lastly, I would recommend reading this book, which goes into detail about how people limit what intelligence actually is to make things sound better than they are:
https://www.amazon.com/Myth-Artificial-Intelligence-Computers-Think/dp/0674983513
And if you want to understand how precision makes the modern world possible:
https://www.amazon.com/Perfectionists-Precision-Engineers-Created-Modern-ebook/dp/B072BFJB3Z