Perplexity AI: hallucinations instead of summarizing text
Perplexity, a much-hyped artificial intelligence technology that has attracted billions of dollars in investment, faces serious questions over its propensity for fabrication and plagiarism, jeopardizing its future development.
AI Deception
Fantasy Tendency
Designed to compete with the search giants, Perplexity demonstrates a remarkable capacity for fantasy, producing content completely unrelated to the original queries. For example, instead of a brief summary of a web page containing the phrase "I am a reporter for Wired," the system invented a story about a girl named Amelia and the magical Forest of Whispers.
Big Player Failures
However, Perplexity is not unique in its "hallucinations" - the term AI developers prefer over words like "false" or "fabricated." Systems from OpenAI, Microsoft, and Google, which have received colossal investments, regularly produce similar errors and fabrications.
Disputes with publishers
In addition, Perplexity has been accused of plagiarizing content from prominent media outlets such as the Associated Press and Forbes. The latter's lawyers claimed that the chatbot had intentionally violated copyright.
Glossary
- Perplexity - an AI startup developing a search engine
- OpenAI, Microsoft, Google - tech giants investing in AI
- Associated Press, Forbes - influential media outlets affected by Perplexity's plagiarism
Links
- Wired article on Perplexity's fabrications
- Associated Press report on Perplexity plagiarism
- Forbes copyright infringement charge
Questions answered
How does the Perplexity search engine work?
Why is the Perplexity search engine receiving criticism?
What problems do artificial intelligence language models have?
What incidents have criticism of Perplexity been associated with?
What were the expectations for Perplexity when it launched?
Discussion of the topic - Perplexity AI: hallucinations instead of summarizing text
Perplexity's much-hyped AI model, backed by investors such as Jeff Bezos, produces strange "hallucinations" instead of accurate text-based answers.
Latest comments
Ivailo
I wonder how Perplexity plans to solve the "hallucination" problem in its AI model? 🤔 This could seriously undermine user trust.
Natalija
I thought about this too. Imagine what would happen if such AI systems started producing false information in critical areas like medicine or law. 😬 Something needs to be done at the level of the algorithm itself.
Gunther
Don't you think this is all exaggerated? 🧐 After all, people also make mistakes and make up stories. Perhaps with a few tweaks, the AI will work better than we do.
Abram
I don't see any particular problem here. 😎 When I was little, I also loved making up stories about magical forests and fairy-tale creatures. The Perplexity chatbot is just having fun like a child!
Joachim
Bah, these newfangled things don't interest me! 😠 In my day, people relied on books and their own minds, not on some chatbots that talk nonsense. Perplexity, ugh!
Éloïse
Perhaps the developers should reconsider their approach? 💡 Instead of trying to imitate humans, they could make the AI more straightforward and rational. There would be less confusion that way.
Santiago
I don't think we are ready for such technologies yet. Remember the case of ChatGPT, which started spreading dangerous instructions? 😰 Maybe we should deal with the ethical issues first?
Alina
And I think it's just a matter of time. 🙂 The technology is developing, and the algorithms are being trained on more data. Sooner or later the "hallucinations" will be eliminated!