I do not trust AI/LLMs beyond a certain point

This video from YouTube (thanks to fuchsbrom for sharing it on Mastodon) is, in my perception, a good example of how AI and LLMs “create” things. The model recognises patterns in the data we feed it, turns those patterns into statistics, and calculates “new” patterns that look similar to what we as humans expect. The results are often uncanny.
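To show that point in plain code: even a toy next-word generator (a simple bigram chain, far cruder than a real LLM) produces plausible-sounding text purely from word statistics, with no grasp of meaning. The corpus and function names below are invented for this sketch.

```python
import random
from collections import defaultdict

# Tiny made-up corpus, just for illustration.
corpus = (
    "the model reads the text and the model repeats the patterns "
    "the patterns look like understanding but the model has none"
).split()

# Record which word follows which: pure statistics, no meaning.
followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def generate(start: str, length: int = 12) -> str:
    """Produce 'new' text by replaying the observed word statistics."""
    word, output = start, [start]
    for _ in range(length):
        options = followers.get(word)
        if not options:  # dead end: this word was never followed by anything
            break
        word = random.choice(options)
        output.append(word)
    return " ".join(output)

# Prints a grammatical-looking but meaningless recombination of the corpus.
print(generate("the"))
```

A real LLM replaces the word counts with billions of learned parameters, but the principle the post describes is the same: recombine observed patterns into output that looks familiar.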

We need to be mindful and hold on to the fact that an AI/LLM has no “understanding” of what it says or of what the words mean. It also has no understanding of the pictures it produces or of what is happening in them. It has no ethical, moral, or respectful qualities in its communication. It just copies the behavioural cues it reads and plays them back to the audience, which has led to terrible results in the past. And this still happens, years after the first attempts. It has only become less obvious, since language is much easier to copy once grammar is in place.

This is why we get recipes with glue in them, or responses that tell us a human should eat a rock each day.

All the big AI companies emphasise that they will “train” their AI/LLM to make it better, more reliable, and able to provide better results. That does not change the earlier point: the AI/LLM does not understand what it is doing. So training is, in effect, just adding another if-clause to the settings so that the predicted output is less biased.
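As a caricature of that kind of fix (and it is only a caricature; actual training adjusts the model’s statistics rather than literal if-clauses), a rule-based guardrail bolted onto the output might look like this. The function name and blocked phrases are invented for illustration.

```python
# Deliberately simplistic sketch of a rule-based guardrail.
# It patches known symptoms without giving the system any
# understanding of why the advice is harmful.
BLOCKED_PHRASES = [
    "add glue",
    "eat a rock",
]

def guardrail(model_output: str) -> str:
    """Return the output unless it matches a known-bad pattern."""
    lowered = model_output.lower()
    for phrase in BLOCKED_PHRASES:
        if phrase in lowered:  # the "another if-clause" from the text
            return "I cannot help with that."
    return model_output

print(guardrail("To keep cheese on pizza, add glue to the sauce."))
# Blocked by string matching, not by any insight into food safety.
```

Anything not on the blocklist sails straight through, which is exactly the limitation of fixes that do not involve understanding.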

I see huge future benefits in using AI/LLMs for simple tasks like indexing complex documents and replaying their content to a specific audience, e.g. summarizing fictional books. But I am still very hesitant to trust any AI/LLM with creating recipes, with health topics, or with basically anything that can have an impact on somebody’s life.
