Controversies

Artificial Intelligence controversies

  • Virtual Assistants spur the filter bubble: As with social media, Virtual Assistants’ algorithms are trained to show pertinent data and discard other data based on the consumer’s previous activities; the pertinent data is the data likely to interest or please the consumer. As a result, consumers become isolated from data that disagrees with their viewpoints, effectively confining them to their own intellectual bubble and reinforcing their opinions. This phenomenon is known to reinforce fake news and echo chambers (a minimal sketch of this kind of filtering is given after this list).
  • Virtual Assistants are also sometimes criticized for being overrated. In particular, A. Casilli points out that the AI behind Virtual Assistants is neither intelligent nor artificial, for two reasons:
  1. Not intelligent, because all they do is assist the human, and only with tasks that a human could easily do, within a very limited spectrum of actions: finding, sorting, and presenting information, offers, or documents. Moreover, Virtual Assistants can neither make decisions on their own nor anticipate anything.
  2. Not artificial, because they would be impossible without human labelling carried out through microwork.
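
To make the first criticism concrete, the following is a minimal, hypothetical sketch in Python of the kind of relevance filtering described above: items are scored against a profile built from the consumer’s previous activities, and anything that does not match those past interests is simply discarded. The profile format, scoring rule, and threshold are illustrative assumptions, not the algorithm of any particular assistant.

```python
from collections import Counter

def build_profile(past_clicks):
    """Build an interest profile from topics the consumer engaged with before."""
    return Counter(topic for item in past_clicks for topic in item["topics"])

def filter_feed(candidates, profile, threshold=1):
    """Keep only items whose topics already match the consumer's past interests.

    Items with unfamiliar topics score zero and are dropped, which is how an
    engagement-driven filter narrows the consumer's view over time.
    """
    kept = []
    for item in candidates:
        score = sum(profile[t] for t in item["topics"])
        if score >= threshold:
            kept.append((score, item))
    return [item for score, item in sorted(kept, key=lambda x: x[0], reverse=True)]

# Illustrative data: the consumer only ever clicked on sports and tech stories.
history = [{"topics": ["sports"]}, {"topics": ["tech", "sports"]}]
feed = [
    {"title": "Transfer rumours", "topics": ["sports"]},
    {"title": "New phone review", "topics": ["tech"]},
    {"title": "Climate policy debate", "topics": ["politics"]},  # never shown
]
print([item["title"] for item in filter_feed(feed, build_profile(history))])
```

In this toy run the politics story is never surfaced, not because it is irrelevant in general, but because it does not resemble anything the consumer has engaged with before.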

Ethics implications

In 2019, Antonio A. Casilli, a French sociologist, criticized artificial intelligence, and virtual assistants in particular, in the following way:

At a first level, the fact that the consumer provides free data for the training and improvement of the virtual assistant, often without knowing it, is ethically disturbing.

But at a second level, it might be even more ethically disturbing to know how these AIs are trained with this data.

This artificial intelligence is trained via neural networks, which require a huge amount of labelled data. That data must be labelled by humans, which explains the rise of microwork over the last decade: workers around the world are hired remotely to perform repetitive and very simple tasks for a few cents each, such as listening to Virtual Assistant speech data and transcribing what was said. Microwork has been criticized for the job insecurity it causes and for its total lack of regulation: the average wage was 1.38 US dollars per hour in 2010, and it provides neither healthcare, retirement benefits, sick pay, nor a minimum wage. Hence, Virtual Assistants and their designers are controversial for spurring job insecurity, and the AIs they offer are still human in the sense that they would be impossible without the microwork of millions of human workers.
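
As a rough illustration of why this labelling work matters, the sketch below models the kind of dataset a speech assistant is trained on: each audio clip only becomes usable supervised training data once a human microworker has attached a transcript to it, and the piece-rate payment implies an hourly wage of the order cited above. The data structures, file names, rates, and timings here are illustrative assumptions, not figures from any real platform.

```python
from dataclasses import dataclass

@dataclass
class LabelledClip:
    """One unit of training data: an audio recording plus its human-written transcript."""
    audio_path: str
    transcript: str          # written by a microworker who listened to the clip
    seconds_to_label: float  # time the worker spent on this single task
    payment_usd: float       # piece-rate payment for this single task

# Hypothetical batch of microwork transcription tasks (paths and rates are made up).
batch = [
    LabelledClip("clips/0001.wav", "turn off the kitchen lights", 40.0, 0.02),
    LabelledClip("clips/0002.wav", "what's the weather tomorrow", 35.0, 0.02),
    LabelledClip("clips/0003.wav", "set a timer for ten minutes", 30.0, 0.02),
]

# Only clips with a transcript can feed a supervised speech model.
training_pairs = [(c.audio_path, c.transcript) for c in batch if c.transcript]

# Effective hourly wage implied by the piece rate: total pay / total hours worked.
total_pay = sum(c.payment_usd for c in batch)
total_hours = sum(c.seconds_to_label for c in batch) / 3600
print(f"{len(training_pairs)} labelled pairs, implied wage of ${total_pay / total_hours:.2f}/hour")
```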

Privacy concerns are raised by the fact that voice commands are available to the providers of virtual assistants in unencrypted form, and can thus be shared with third parties and processed in unauthorized or unexpected ways. In addition to the linguistic content of recorded speech, a user’s manner of expression and voice characteristics can implicitly contain information about his or her biometric identity, personality traits, body shape, physical and mental health condition, sex, gender, moods and emotions, socioeconomic status, and geographical origin.
