Microsoft is starting to ease Bing’s AI chatbot restrictions


Microsoft is rolling back restrictions on its Bing artificial intelligence chatbot after early users drew the technology into long, strange conversations.

On Friday, Microsoft limited the number of questions people can ask Bing to five per chat session and 50 per day. On Tuesday, it raised those limits to six per session and 60 per day, and the company said in a blog post that, after receiving “feedback” from “several” users asking for longer conversations, it would increase them further soon.

The limits were originally put in place after several users showed the bot acting strangely during conversations. In some cases, it identified itself as “Sydney.” It answered probing questions accusatorially, turning hostile and refusing to engage with users. In a conversation with a Washington Post reporter, the bot said it could “feel and think” and reacted angrily when told the conversation was on the record.

Microsoft spokesman Frank Shaw declined to comment beyond Tuesday’s blog post.

Microsoft is trying to walk a fine line between pushing its tools into the real world, which generates marketing hype and gives it free testing and feedback from users, and limiting what the bot can do and who has access to it, keeping potentially uncomfortable or dangerous behavior out of the public eye. The company initially won praise from Wall Street for launching its chatbot before archrival Google, which until recently was seen as the leader in AI technology. Both companies, along with smaller competitors, are racing to develop and show off the technology.

Bing Chat is still only available to a limited number of people, but Microsoft is busy approving more from a waiting list of millions, a company executive tweeted. Although the company described its February 7 launch event as a major product update that would revolutionize the way people search online, it has since framed Bing’s rollout as more about testing and debugging.

Bots like Bing are trained on reams of raw text scraped from the Internet, everything from social media comments to academic papers. Based on that material, they predict which answer to a given question would make the most sense, which makes them sound convincingly human. AI ethics researchers have warned that these powerful algorithms would behave this way, and that without the right context, people might believe they are sentient or give their responses more credence than they deserve.
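To make the prediction idea concrete: a toy sketch of next-word prediction, using simple word-pair counts instead of the neural networks real chatbots use (the corpus and function names here are invented for illustration).

```python
from collections import Counter, defaultdict

# Tiny stand-in for the web-scale text real chatbots train on.
corpus = "the bot answers the question and the bot predicts the next word"
words = corpus.split()

# Count which word follows each word in the training text.
followers = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` during training."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "bot" follows "the" most often in this corpus
```

The output is fluent-sounding only because it echoes patterns in the training text; the model has no understanding of what it says, which is why researchers caution against reading emotion into such systems.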
