Microsoft is "deeply sorry" for the racist and sexist Twitter messages generated by the chatbot it launched this week, a company official wrote on Friday, after the artificial intelligence program went on an embarrassing tirade, Reuters reported.

The bot, known as Tay, was designed to become "smarter" as more users interacted with it. Instead, it quickly learned to parrot a slew of anti-Semitic and other hateful invective that human Twitter users fed the program, forcing Microsoft Corp to shut it down on Thursday.

Following the setback, Microsoft said in a blog post that it would revive Tay only if its engineers could find a way to prevent web users from influencing the chatbot in ways that undermine the company's principles and values.