Altman also noted the buzz around ChatGPT in a tweet: "interesting to me how many of the ChatGPT takes are either "this is [artificial general intelligence] AGI" (obviously not close, lol) or "this approach can't really go that much further". trust the exponential. flat looking backwards, vertical looking forwards," he said.

"The primary problem is that while the answers that ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce," Stack Overflow moderators said in a post. Critics argue that these tools are simply very good at arranging words in an order that makes sense statistically, but that they cannot understand the meaning of what they produce or know whether their statements are correct.

Another concern with the AI chatbot is the possible spread of misinformation. Since the bot is not connected to the internet, it can make mistakes in the information it shares. The bot itself says, "My responses are not intended to be taken as fact, and I always encourage people to verify any information they receive from me or any other source." OpenAI itself also notes that ChatGPT sometimes writes "plausible-sounding but incorrect or nonsensical answers."

Does this mean that AI is taking over the world? Not yet, perhaps, but OpenAI's Altman certainly thinks that human-style intelligence in AI is not that far off. Responding to Musk's comment about dangerously strong AI, Altman tweeted: "i agree on being close to dangerously strong AI in the sense of an AI that poses e.g. a huge cybersecurity risk. and i think we could get to real AGI in the next decade, so we have to take the risk of that extremely seriously too."

He also noted: "interesting watching people start to debate whether powerful AI systems should behave in the way users want or their creators intend. the question of whose values we align these systems to will be one of the most important debates society ever has."