When AI is not good enough

Human beings are, generally speaking, a lazy species. Cooking a healthy, nutritious dinner for the kids takes time and effort, so we simply order a pizza and call it done. Shopping for nice clothes made of fine fibers is expensive and time-consuming; ordering a flashy polyester shirt from an online vendor is easy and good enough. The problem is that when good-enough tools are versatile, replicable, and cheap, they end up in practically everything and drag down the quality of every product. We have witnessed this process in material goods, but Artificial Intelligence (deep-learning models, more commonly known as AI) brings it to intellectual services as well.


Think about plastic, for example. Before its mass production, we had to rely on sturdy wood, metal, rubber, and stone. Then plastic introduced a cheap, good-enough alternative, and we got used to being surrounded by low-quality plastic products (not to mention an environmental plastic-waste crisis). With AI, the thing we are trading away is not leather boots or woolen sweaters but our senses. AI-generated music, for example, is effortless, free, and good enough. A restaurant might choose it for background music instead of paying a music-licensing company. Not only would this hurt artists' businesses, which are already dying; it would also promote the perception of music as mundane rather than personal and emotional.


Therapist AI models, based on Large Language Models (LLMs), are a critical example of the popular misuse of AI [1]. Good therapists draw on established psychological research and emotional skills to treat each patient differently, based on their mentality and personal circumstances. AI models are fitting machines: they use previously learned data to fit input to output. For a therapist AI to succeed, it must match a patient's troubles with existing data. Although the datasets used to train AI models are enormous, they could not possibly contain the life scenario of every person; otherwise, another "you" would have existed at some point, with all your intimate relationships and idiosyncrasies. At best, therapist AI models can provide good-enough advice, which might be a bad idea in a society already plagued by depression [2].


Cheap, widespread AI models can also hurt the very industry they came from. Currently, STEM scientists are investigating whether AI models can replace common, trusted algorithms, trading accuracy for speed. I researched such a case in my maths master's thesis. Surely, though, science will avoid misusing AI, right? Well, STEM industry and academia are no longer the meritocracy they used to be. They have grown into hierarchical, competitive structures as more and more people join. In hierarchies, products and methods naturally trickle from the top to the bottom, often losing their original reasoning along the way. Imagine a construction stress-analysis AI developed at the top of the industry, equipped with regulations and safety measures. How long until a contractor in a developing country uses this AI unaware of those regulations, and a bridge collapses?


There are numerous good applications for AI; data structuring, teaching aids, text-to-audio, and image classification come to mind. However, the accessibility of cheap, good-enough AI models makes it natural for them to be incorporated everywhere. We might find ourselves with cheap AI models in every nook and cranny, evolving and misdirecting our society. If we are not careful, plasticky AI will exacerbate already harmful industries, like pornography, integrating itself into them in grotesque shapes and forms, just as plastic once did.

The real danger to our society may not be a doomsday AI singularity, but rather a Cambrian explosion of good-enough AI.


References

[1] https://www.bbc.co.uk/news/technology-67872693
[2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4100461/pdf/nihms588335.pdf