Survey says most believe generative AI is conscious, which may prove it’s good at making us hallucinate, too


When you interact with ChatGPT and other conversational generative AI tools, your input is processed through algorithms that compose a response, one that can feel like it came from a fellow sentient being despite the reality of how large language models (LLMs) function. Nonetheless, two-thirds of respondents in a University of Waterloo study said they believe AI chatbots are conscious in some form, effectively passing a Turing Test by convincing them that an AI's consciousness is equivalent to a human's.

Generative AI, as embodied by OpenAI's work on ChatGPT, has progressed by leaps and bounds in recent years. The company and its rivals often describe a vision for artificial general intelligence (AGI) with human-like intelligence, and OpenAI has even introduced a scale to measure how close its models are to achieving AGI. But even the most optimistic experts don't suggest that AGI systems will be self-aware or capable of true emotions. Still, of the 300 people participating in the study, 67% said they believed ChatGPT could reason, feel, and be aware of its existence in some way.



