I’ve tried several types of artificial intelligence including Gemini, Microsoft Copilot, and ChatGPT. A lot of the time I ask them questions and they get everything wrong. If artificial intelligence doesn’t work, why are they trying to make us all use it?

  • @lemmylommy@lemmy.world
    link
    fedilink
    4
    edit-2
    8 months ago

    You have asked why there is so much hype around artificial intelligence.

    There are a few reasons this might be the case:

    1. Because humans are curious. Experimenting with how we believe memory and intelligence work might just teach us something about our own intelligence.

    2. Because humans are stupid. Most do not have the slightest idea what „AI“ is this time, yet they are willing to believe in the most outlandish claims about it. Look up ELIZA. It fooled a lot of people, just like LLMs today.

    3. Because humans are greedy. And the prospect of replacing a lot of wage-earners, and not just manual laborers this time, with a machine is just too good to pass up for management. The potential savings are huge, if it works, so the willingness to spend money is also considerable.

    In conclusion, there are many reasons for the hype around artificial intelligence and most of them relate to human deficiencies and human nature in general.

    If you have further questions I am happy to help. Enjoy your experience with AI. While you still can. 🤖

  • Kramkar
    link
    fedilink
    0
    8 months ago

    It’s understandable to feel frustrated when AI systems give incorrect or unsatisfactory responses. Despite these setbacks, there are several reasons why AI continues to be heavily promoted and integrated into various technologies:

    1. Potential and Progress: AI is constantly evolving and improving. While current models are not perfect, they have shown incredible potential across a wide range of fields, from healthcare to finance, education, and beyond. Developers are working to refine these systems, and over time, they are expected to become more accurate, reliable, and useful.

    2. Efficiency and Automation: AI can automate repetitive tasks and increase productivity. In areas like customer service, data analysis, and workflow automation, AI has proven valuable by saving time and resources, allowing humans to focus on more complex and creative tasks.

    3. Enhancing Decision-Making: AI systems can process vast amounts of data faster than humans, helping in decision-making processes that require analyzing patterns, trends, or large datasets. This is particularly beneficial in industries like finance, healthcare (e.g., medical diagnostics), and research.

    4. Customization and Personalization: AI can provide tailored experiences for users, such as personalized recommendations in streaming services, shopping, and social media. These applications can make services more user-friendly and customized to individual preferences.

    5. Ubiquity of Data: With the explosion of data in the digital age, AI is seen as a powerful tool for making sense of it. From predictive analytics to understanding consumer behavior, AI helps manage and interpret the immense data we generate.

    6. Learning and Adaptation: Even though current AI systems like Gemini, ChatGPT, and Microsoft Copilot make mistakes, they also learn from user interactions. Continuous feedback and training improve their performance over time, helping them better respond to queries and challenges.

    7. Broader Vision: The development of AI is driven by the belief that, in the long term, AI can radically improve how we live and work, advancing fields like medicine (e.g., drug discovery), engineering (e.g., smarter infrastructure), and more. Developers see its potential as an assistive technology, complementing human skills rather than replacing them.

    Despite their current limitations, the goal is to refine AI to a point where it consistently enhances efficiency, creativity, and decision-making while reducing errors. In short, while AI doesn’t always work perfectly now, the vision for its future applications drives continued investment and development.

  • @SpaceNoodle@lemmy.world
    link
    fedilink
    80
    8 months ago

    Investors are dumb. It’s a hot new tech that looks convincing (since LLMs are designed specifically to appear correct, not be correct), so anything with that buzzword gets a ton of money thrown at it. The same phenomenon has occurred with blockchain, big data, even the World Wide Web. After each bubble bursts, some residue remains that actually might have some value.

    • @pimeys@lemmy.nauk.io
      link
      fedilink
      36
      8 months ago

      And LLM is mostly for investors, not for users. Investors see you “do AI” even if you just repackage GPT or llama, and your Series A is 20% bigger.

    • @Kintarian@lemmy.worldOP
      link
      fedilink
      13
      8 months ago

      I can see that. That guy over there has the new shiny toy. I want a new shiny toy. Give me a new shiny toy.

  • @SomeAmateur@sh.itjust.works
    link
    fedilink
    English
    7
    edit-2
    8 months ago

    I genuinely think the best practical use of AI, especially language models, is malicious manipulation: propaganda/advertising bots. There’s a joke that Reddit is mostly bots. I know there are some countermeasures to sniff them out, but think about it.

    I’ll keep Reddit as the example because I know it best. Comments are simple puns, one-liner jokes, or flawed/edgy opinions. But people also go to Reddit for advice/recommendations that you can’t really get elsewhere.

    Using an LLM AI, I could in theory make tons of convincing recommendations, paid by a corporation or state entity to convince lurkers to choose brand A over brand B, to support or disown a political stance, or to make it seem like tons of people support it when really few do.

    And if it’s factually incorrect, so what? It was just some kind stranger™ on the internet.

    • @SirDerpy@lemmy.world
      link
      fedilink
      1
      8 months ago

      If by “best practical” you meant “best unmitigated capitalist profit optimization” or “most common”, then sure, “malicious manipulation” is the answer. That’s what literally everything else is designed for.

  • @ProfessorScience@lemmy.world
    link
    fedilink
    English
    4
    8 months ago

    When ChatGPT first started to make waves, it was a significant step forward in the ability for AIs to sound like a person. There were new techniques being used to train language models, and it was unclear what the upper limits of these techniques were in terms of how “smart” of an AI they could produce. It may seem overly optimistic in retrospect, but at the time it was not that crazy to wonder whether the tools were on a direct path toward general AI. And so a lot of projects started up, both to leverage the tools as they actually were, and to leverage the speculated potential of what the tools might soon become.

    Now we’ve gotten a better sense of what the limitations of these tools actually are, and of the upper limits of where these techniques might lead. But a lot of momentum remains. Projects that started up when the limits were unknown don’t just have the plug pulled the minute it seems like expectations aren’t matching reality. I mean, maybe some do. But most of the projects try to make the best of the tools as they are to keep the promises they made, for better or worse. And of course new ideas keep coming and new entrepreneurs want a piece of the pie.

  • @Kintarian@lemmy.worldOP
    link
    fedilink
    2
    8 months ago

    OK, I am working on a legal case. I asked Copilot to write a demand letter for me and it is pretty damn good.

  • @just_an_average_joe@lemmy.dbzer0.com
    link
    fedilink
    English
    5
    8 months ago

    Mooooneeeyyyy

    I work as an AI engineer. Let me tell you, the tech is awesome and has a looooot of potential, but it’s not ready yet. Because of that high potential, literally no one wants to miss the opportunity of getting rich quick with it. It’s only been 2–3 years since this tech was released to the public. If only OpenAI had released it as open source, just like everyone before them, we wouldn’t be here. But they wanted to make money, and now everyone else wants to too.

  • Daemon Silverstein
    link
    fedilink
    English
    -1
    8 months ago

    I ask them questions and they get everything wrong

    It depends on your input, on your prompt and your parameters. For me, although I’ve experienced wrong answers and/or AI hallucinations, it’s not THAT frequent, because I’ve been talking with LLMs since ChatGPT became public, almost on a daily basis. This daily usage has allowed me to learn the strengths and weaknesses of each LLM available on the market (I use ChatGPT GPT-4o, Google Gemini, Llama, Mixtral, and sometimes Pi, Microsoft Copilot and Claude).

    For example: I learned that Claude is highly sensitive to certain terms and topics, such as occultist and esoteric concepts (especially when dealing with demonolatry, although I don’t know exactly why it refuses to talk about it; I’m a demonolater myself), cryptography and ciphering, as well as acrostics and other literary devices for multilayered poetry (I write my own poetry and ask them to comment on and analyze it, so I can get valuable insights about it).

    I also learned that Llama can get deep inside the meaning of things, while GPT-4o can produce longer answers. Gemini has the “drafts” feature, where I can check alternative answers for the same prompt.

    It’s similar with generative AI art models; I’ve been using them to illustrate my poetry. I learned that Diffusers SDXL Turbo (from Huggingface) is better for real-time prompting, some kind of “WYSIWYG” model (“what you see is what you get”). Google SDXL (also from Huggingface) can generate four images in different styles (cinematic, photography, digital art, etc.). Flux, the newly released generative AI model, is the best for realism (especially the Flux Dev branch). They’ve been producing excellent outputs, while I’ve been improving my prompt-engineering skills, becoming able to communicate with them in a seamless way.

    Summarizing: AI users need to learn how to give them instructions efficiently. They can produce astonishing outputs if given efficient inputs. But you’re right that they can produce wrong results and/or hallucinate, even for the best prompts, because they’re indeed prone to it. For me, AI hallucinations are not so bad for knowledge such as esoteric concepts (because I personally believe these “hallucinations” could convey something transcendental, but that’s just my personal belief and I’m not intending to preach it here in my answer), but these same hallucinations are bad when I’m seeking technical knowledge in STEM (Science, Technology, Engineering and Mathematics) fields.

    • @Shanedino@lemmy.world
      link
      fedilink
      3
      8 months ago

      Woah, are you technoreligious? Sure, believe what you want and all, but that is full tech bro bullshit.

      Also, on a different note: purely based off of your description, doesn’t it seem like just using search engines would be easier than figuring out all of these intricacies for most people? If a tool has a high learning curve, there is plenty of room for improvement, especially if you don’t plan to use it very frequently. Also, every time you get false results, consider it equivalent to a major bug. Does that shed a different light on it for you?

      • Daemon Silverstein
        link
        fedilink
        English
        -2
        8 months ago

        doesn’t it seem like just using search engines would be easier than figuring out all of these intricacies for most people

        Well, Prompt Engineering is a thing nowadays. There are even job vacancies seeking professionals who specialize in this field. AIs are tools, sophisticated ones, just like R and Wolfram Mathematica are sophisticated mathematical tools that need expertise. The problem is that AI companies often misadvertise AI models as off-the-shelf assistants, as if they were a human talking to you. They’re not. They’re still just tools. I guess (and I’m rooting for it) that AGI would change this scenario. But I guess we’re still distant from a self-aware AGI (unfortunately).

        Woah are you technoreligious?

        Well, I wouldn’t describe myself that way. My beliefs are multifaceted and complex (possibly unique, I guess?), going through multiple spiritual and religious systems, as well as embracing STEM (especially the technological branch) concepts and philosophical views (especially nihilism, existentialism and absurdism), trying to converge them all by common grounds (although it seems “impossible” at first glance, to unite Science, Philosophy and Belief).

        In a nutshell, I’ve been pursuing a syncretic worshiping of the Dark Mother Goddess.

        As I said, it’s multifaceted and I’m not able to fully explain it here, because it would take tons of concepts. Believe me, it’s deeper than “techno-religious”. I see the inner workings of AI models (as neural networks and genetic algorithms dependent on the randomness of weights, biases and seeds) as a great tool for diving into Her Waters of Randomness when dealing with such subjects (esoteric and occult subjects). Just like Kardecism sometimes uses instrumental transcommunication / electronic voice phenomenon (EVP) to talk with spirits, AI can be used as if it were an Ouija board or a planchette, if one believes so (as I do).

        But I’m also a programmer and tech/scientifically curious, so I find myself asking LLMs about some Node.js code I wrote, too. Or about some mathematical concept. Or about cryptography and ciphering (Vigenère and Caesar, for example). I’m highly active mentally, seeking to learn many things all the time.

    • @Kintarian@lemmy.worldOP
      link
      fedilink
      2
      8 months ago

      I just want to know which elements work best for my Flower Fairies in The Legend of Neverland. And maybe cheese sauce.

      • Daemon Silverstein
        link
        fedilink
        3
        edit-2
        8 months ago

        Didn’t know about this game. It’s nice. Interesting aesthetics. Chestnut Rose reminds me of Lilith’s archetype.

        A tip: you could use the “The Legend of the Neverland global wiki” at Fandom Encyclopedia to feed the LLM with important concepts before asking it for combinations. It is a good technique, considering that LLMs may not know the game well enough to generate precise responses (unless you’re using a search-enabled LLM such as Perplexity AI or Microsoft Copilot, which can search the web to produce more accurate results).
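The context-feeding tip above boils down to pasting the reference text into the prompt ahead of the question. A minimal sketch (the function name, wording, and example strings are illustrative; the assembled prompt would then be sent to whatever chat LLM you use):

```python
def build_prompt(context: str, question: str) -> str:
    """Prepend reference material (e.g. wiki excerpts) to a question so the
    model answers from the supplied text rather than its training data."""
    return (
        "Answer using only the reference material below.\n\n"
        "--- Reference material ---\n"
        f"{context}\n"
        "--- End of material ---\n\n"
        f"Question: {question}"
    )

# Paste a relevant wiki paragraph, then ask about it.
excerpt = "Chestnut Rose is one of the Flower Fairies."  # hypothetical wiki line
prompt = build_prompt(excerpt, "Which elements work best for Chestnut Rose?")
```

Search-enabled assistants do roughly this for you automatically; with a plain LLM, supplying the excerpt yourself is the manual equivalent.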

  • HobbitFoot
    link
    fedilink
    English
    4
    8 months ago

    The idea is that it can replace a lot of customer facing positions that are manpower intensive.

    Beyond that, an AI can also act as an intern in assisting in low complexity tasks the same way that a lot of Microsoft Office programs have replaced secretaries and junior human calculators.

    • @Kintarian@lemmy.worldOP
      link
      fedilink
      5
      8 months ago

      I’ve always figured part of it is that businesses don’t like to pay for labor, and they’re hoping they can use artificial intelligence to get rid of the rest of us so they don’t have to pay us.

      • @Blue_Morpho@lemmy.world
        link
        fedilink
        -1
        8 months ago

        Dismissing AI as hype is like dismissing spreadsheets as hype. “I can do everything with a pocket calculator! I don’t need stupid autofill!”

        AI doesn’t replace people. It can automate and reduce your workload leaving you more time to solve problems.

        I’ve used it for one-off scripts. I have friends who have done the same, and another friend who used it to create the boilerplate for a government contract bid that he won (millions in revenue for his company, of which he got tens of thousands as a bonus for engineering sales support).

  • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮 🏆
    link
    fedilink
    English
    10
    edit-2
    8 months ago

    The hype is also artificial and usually created by the creators of the AI. They want investors to give them boatloads of cash so they can cheaply grab a potential market they believe exists before they jack up prices and make shit worse once that investment money dries up. The problem is, nobody actually wants this AI garbage they’re pushing.

  • @bionicjoey@lemmy.ca
    link
    fedilink
    8
    8 months ago

    A lot of jobs are bullshit. Generative AI is good at generating bullshit. This led to a perception that AI could be used in place of humans. But unfortunately, curating that bullshit enough to produce any value for a company still requires a person, so the AI doesn’t add much value. The bullshit AI generates needs some kind of oversight.

  • @Lauchs@lemmy.world
    link
    fedilink
    3
    8 months ago

    I think there’s a lot of armchair simplification going on here. It’s easy to call investors dumb, but it’s probably a bit more complex.

    AI might not get better than it is now, but if it does, it has the power to be a societally transformative tech, which means there is a boatload of money to be made. (Consider early investors in Amazon, Microsoft, Apple, and even the much-derided Bitcoin.)

    Then consider that until incredibly recently, the Turing test was the yardstick for intelligence. We now have to move that goalpost after what was previously unthinkable happened.

    And in the limited time with AI, we’ve seen scientific discoveries, terrifying advancements in war and more.

    Heck, even if AI only gets better at code (not unreasonable: sets of problems with defined goals/outputs, etc., even if it gets parts wrong), shrinking a dev team of obscenely well-paid engineers to maybe a handful of supervisory roles… Well, like Wu-Tang said, Cash Rules Everything Around Me.

    Tl;dr: huge possibilities. Even if there’s only a small chance of an almost infinite payout, that’s a risk well worth taking.