
✅ Why AI is failing at giving good advice


💡 News category: Programming
🔗 Source: dev.to

TLDR: ChatGPT generates responses based on the highest mathematical probabilities derived from existing texts on the internet. Popular advice is (for various reasons) seldom good, nor (by definition) uniquely applicable, nor (mostly) founded on actual experience. You are probably better off taking advice from a real person who can empathize and knows what they are talking about.

When you ask ChatGPT a question, something highly interesting happens:

ChatGPT, which has previously consumed half or more of the internet to build its language model, translates your question into a mathematical representation made of numbers (a vector).

I don't know in detail how they do it, and I am sure there are some layers in between and around that serve some specific purpose, but I understand that if you google the phrase "How are you?", you can statistically expect a certain range of words and sentences in the results around it. Most sentences following the question will probably sound like "I'm good, thanks" or "Doing great, how about you?". Whereas if you search the internet for all occurrences of "Integrated circuit", you will usually find a very distinct set of words and sentences nearby, like "silicon semiconductor," "MOS transistor," or "the voltage requirement is 0.6V".

With so much base data, you can assign a mathematical value (or direction) to every word, change it when it appears together with other words (context), and compute an entire, unique direction for a continuous piece of text.
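To make the "mathematical value (or direction)" idea concrete, here is a toy sketch. The three-dimensional vectors below are entirely made up for illustration; real models learn embeddings with hundreds or thousands of dimensions from data. The point is only that words appearing in similar contexts end up pointing in similar directions, which we can measure with cosine similarity:

```python
import math

# Hand-picked, made-up 3-dimensional "embeddings" (real models learn
# these from data; the numbers here are purely illustrative).
embeddings = {
    "circuit":    (0.9, 0.1, 0.0),
    "transistor": (0.8, 0.2, 0.1),
    "thanks":     (0.0, 0.9, 0.3),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Words from similar contexts point in similar directions...
print(cosine_similarity(embeddings["circuit"], embeddings["transistor"]))
# ...while unrelated words do not.
print(cosine_similarity(embeddings["circuit"], embeddings["thanks"]))
```

Running this, "circuit" and "transistor" score close to 1, while "circuit" and "thanks" score close to 0, mirroring the search-result intuition above.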

Realize the following (put bluntly): anyone who has ever had success writing articles (or anything else) on the internet simply pressed a particular combination of buttons on their keyboard, and then more human beings ended up reading the outcome than read other produced texts.

In a strange but very scientific way, when you ask ChatGPT a question, it tries to compute the exact combination of letters, words, and sentences based on their previously computed values that it thinks you are looking for. Astonishingly enough, that is often a highly useful response in the real world.

But this approach has problems, especially when you try to give someone good, specific advice:

The outcome is, by definition, mathematical. It's probability, applied to man-made text. The most propagated (related) text on the internet will likely be repurposed, in its own words, to answer anything that you ask. Essentially, that means it might give you a mashed-together answer like the one you would get from the first X results on Google, but it will fill in contextual gaps from other places and make it more applicable to your specific input.

If most internet texts said the sky was yellow, ChatGPT would say so, too. Similarly, suppose you ask ChatGPT the infamous question, "How can I make money online quickly?". In that case, you will get a shallow, unhelpful response (that will often stay unhelpful even if you drill down into specifics).

[Image: ChatGPT conversation]

This is not to say that everything is particularly "wrong" (although some points are, according to most people's experience); it is just paraphrasing those online bubbles of drop shippers, BuzzFeed listicles, and affiliate boards.

For example, almost everyone who has succeeded with YouTube or affiliate marketing will tell you neither is quick. It takes years of work, dedication, and a fair pinch of scientific user analysis.

Even if you ask it to walk you through making money step by step, it fails: in March 2023 (two days after the release of GPT-4), a tweet caught fire that documented using ChatGPT as a business owner, giving precise directions to make money (starting with $100):

[Embedded tweet, id 1636107218859745286]

It did make some money, but with millions and millions of views and even mainstream news covering the endeavor, I am hesitant to attribute the generated income to ChatGPT. The updates died out quickly, and two weeks later, the official confirmation was posted that the project (and apparently, the site) was sunsetted. It's not what a successful attempt to make money looks like in my world.

A Large Language Model can provide accurate answers if fed the correct base context (superseding the general knowledge base) and if you ask the right questions. But even then, you need to find that respective chatbot and the questions you must ask to get helpful answers (although the latter may apply to many human conversations, too).
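"Feeding the correct base context" can be sketched in a few lines: prepend trusted reference material to the prompt so the model answers from it rather than from its general training data. The function and prompt template below are hypothetical illustrations, not any particular chatbot's API:

```python
def build_prompt(context: str, question: str) -> str:
    """Assemble a prompt that grounds the model in supplied context.

    Both the instruction wording and the layout are illustrative; real
    systems tune these heavily and often retrieve `context` automatically.
    """
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
    )

prompt = build_prompt(
    context="Our refund window is 14 days from delivery.",
    question="How long do customers have to request a refund?",
)
print(prompt)
```

With the right context supplied, the highest-probability continuation is forced toward the specific facts you provided, which is why such "grounded" chatbots can out-advise the general-purpose model.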

At the current state of the internet, almost any educational information or piece of advice is already out there in some form, freely accessible to everybody, and more than anyone could ever act on in their lifetime. Today, the value of providing information is about more than just delivering it; it's about delivering the right information to the right people in the right way. And LLMs fail at the latter.

The bottom line is that AI is not yet capable of what a good teacher or mentor can do: giving actually good, uniquely applicable, empathizing advice. It's much better at explaining things.

PS.: This article was peer-reviewed and approved by ChatGPT. I ignored its suggestion to add examples where it gave helpful advice, because that's against my agenda. Statistically, with enough advice given, you will just randomly run into occasions where it gave good advice.

PPS.: A big discourse was recently sparked by Pieter Levels, who built a mental-therapist Telegram bot with AI. This article has been sitting in my drafts for almost a year now and has absolutely nothing to do with that (I feel that discussion is more ethics-, accountability-, and risk-based anyway, as opposed to this article's core message). The timing of this article's publication just after his tweet is coincidental.

PPPS.: If you enjoyed this article, please consider heading to my site and subscribing to my newsletter 🧜.
