The use of artificial intelligence, especially when coupled with machine learning, raises several ethical questions for the developers working on AI-based software, for companies, and for society as a whole.
AI means they have to go beyond the technical aspects of the solutions they work on and balance what they need from their applications against the potential impact of those applications. A design flaw, an unchecked algorithm, or an overlooked feature with ambiguous uses can end up producing disastrous results.
What Microsoft did with its Tay chatbot in 2016 illustrates this perfectly. The app, meant to interact with Twitter users, taught itself new things with each new interaction. However, Twitter's peculiar user base exploited a flaw in Tay's algorithm to load it with racist and offensive ideas. In under a day, the chatbot was endorsing genocide and denying the Holocaust. From a technical standpoint, Tay was working as intended, but ethically it was a failure.
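This is not how Tay was actually built, but a deliberately naive sketch of the underlying failure mode: a bot that treats every user interaction as trustworthy training data. The class and messages below are hypothetical; the point is that, without any vetting layer, a coordinated group of users can dominate what the bot "learns".

```python
import random
from collections import defaultdict


class NaiveChatbot:
    """A toy chatbot that 'learns' by storing every user message verbatim
    and reusing the most frequent ones as replies. No content filtering,
    no moderation: whatever users feed it, it repeats."""

    def __init__(self):
        self.learned = defaultdict(int)  # message -> times seen

    def learn(self, message: str) -> None:
        # Every interaction is trusted equally; nothing is vetted.
        self.learned[message.strip().lower()] += 1

    def reply(self) -> str:
        if not self.learned:
            return "Hello! Teach me something."
        # Replies are weighted by frequency, so whoever repeats a message
        # most often ends up shaping the bot's 'personality'.
        messages, weights = zip(*self.learned.items())
        return random.choices(messages, weights=weights, k=1)[0]


if __name__ == "__main__":
    bot = NaiveChatbot()
    # A few genuine interactions...
    bot.learn("I love puppies")
    # ...versus a coordinated campaign repeating toxic content.
    for _ in range(50):
        bot.learn("some offensive slogan")
    print(bot.reply())  # Almost certainly the toxic message.
```

The design flaw here is not in any single line of code; it is the assumption that users will act in good faith. A review of that assumption, rather than of the algorithm alone, is what the ethical dimension of AI development demands.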
Google had a similarly catastrophic experience with facial recognition technology in Photos back in 2015, when the app labeled photos of Black people as gorillas. Surely it wasn't the developers' intention for that to happen, but a misconception and a poor implementation can lead to this kind of result.