

Controlling the Creators

June 20, 2017

The potential downsides of artificial intelligence

Many technological inventions and innovations throughout human history are ethically neutral and apolitical. They are created and developed by humans who may themselves have political tendencies or agendas, but the technologies themselves are not necessarily tailored to suit a specific agenda. This neutrality means that the inevitable political and ethical implications of these technologies depend on how their creators use them.

 

The same can be said of AI, but only to a certain extent. What makes this technology unique is its potential to operate independently of human control and act autonomously. Even so, these machines and algorithms still depend on human activity and input in order to perform the purposes for which they have been programmed.

 

What can certainly be said is that the rise of AI triggers unprecedented, and therefore difficult, political debates, legal and policy implications and ethical dilemmas. While their full effect is yet to be realised, a few impacts can already be identified today.

 

The first, which was alluded to in an earlier section, relates to the implications for privacy and surveillance. In early 2017, Evernote, the maker of the popular note-taking app recognisable by its famous elephant logo, highlighted the difficulties that surround the implementation of AI. In an effort to add machine-learning features to its digital services, it faced a fierce backlash from its users. The company had amended its privacy policy to pursue the change, making clear in a blog post that Evernote employees would have access to information uploaded to its servers in order to operate the AI-powered features. Despite trying to win users over to the new additions, and reassuring them that employees would be “subject to background checks and receive specific security and privacy training at least annually” and that the data would be anonymised, Evernote failed to impress. After the announcement, users rebelled and demanded that the company reverse its decision, citing an invasion of privacy. Evernote promptly changed its mind and apologised to users, promising to do better at protecting data and respecting user privacy.

 

Although this episode may seem to contradict the idea presented earlier, that users are willing to give up some privacy in return for the convenience of technological innovation, that attitude is still in its relatively early stages. For now, many people will have Edward Snowden’s revelations of the mass surveillance conducted by intelligence agencies in America and the UK fresh in their minds. Privacy concerns, therefore, remain alive even amid the wonderful potential of AI and machine learning (refer to the article titled ‘The Elephant in the Room’ in the complementary reading for more on this story).

 

Consequently, such concerns may pose a barrier to businesses trying to embrace AI and implement it in their products and services. For AI and machine learning to perform better, as much data as possible is needed. If users are apprehensive about handing their data to technology companies, however, the potential of AI could be limited.

 

Beyond the private sector, the prospect of the State using such technologies is perhaps an even greater worry to some. In Russia, for example, controversy arose over police use of a programme called FindFace to identify individuals in public places. The programme combines data collected from social networks with machine learning to connect faces to online profiles and thereby determine a person’s identity. Russian police have said they have used it to track down criminal suspects and witnesses. The app’s developers claim the technology was intended to identify people one might briefly encounter in a bar or on the street. Inevitably, privacy concerns have been raised, as the surveillance capacity of the Russian State is greatly enhanced by such technology. This shows how technologies developed by humans are themselves impartial to political tendencies and ethical implications; those depend on how humans put the technology to use.

 

Another example involves work being carried out by Google using AI. The company is developing an AI-powered programme, called Perspective, to filter out hate speech online. Google says it is an attempt to combat the vitriolic trolling taking place online, and that website owners can use it to monitor their commenting systems (a rough sketch of how that might look follows below). But as powerful and welcome as this technology may be, it carries ethical implications. Google admits the technology is still in its early stages, and the programme has not yet proven especially effective: it fails to identify many of the hateful comments that exist online, while innocuous terms like “garbage truck” are flagged as hateful. The ethical and policy consideration here lies in determining what should be classed as hate speech. Of course, there is wide consensus that racism, sexism and other kinds of discrimination are not to be tolerated. But how far should the programme go? Should a list of supposedly hateful phrases be decided on Google’s terms alone? What if the programme conflicts with First Amendment rights in America? And assuming the software will be available globally, as the internet would allow, how will it cope with the differing laws and rules on hate speech in other jurisdictions? Europe, with its ‘right-to-be-forgotten’ laws, takes a somewhat different approach to freedom of speech and expression than America does, for example.
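To make the idea of website owners monitoring their comments concrete, here is a minimal sketch, in Python, of how a site might score a single comment against a toxicity-classification endpoint. The endpoint URL, attribute name and response fields follow Perspective’s publicly documented API as generally understood, but they are assumptions for illustration rather than details drawn from this article; the API key and the 0.8 moderation threshold are placeholders.

```python
import requests

# Illustrative call to a toxicity-scoring endpoint in the style of Google's
# Perspective API. Endpoint, request shape and response fields are assumptions
# based on public documentation, not details taken from this article.
API_KEY = "YOUR_API_KEY"  # placeholder: a real key would come from Google Cloud
URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity_score(comment_text: str) -> float:
    """Return a toxicity score between 0 and 1 for a single comment."""
    body = {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, params={"key": API_KEY}, json=body, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    # A site owner might hold comments above some threshold for human review.
    score = toxicity_score("You are a garbage truck")
    print(f"Toxicity: {score:.2f}")
    if score > 0.8:  # hypothetical moderation threshold
        print("Comment held for moderation.")
```

The difficult questions raised in the paragraph above sit precisely in choices like that threshold and in whatever list of attributes the classifier is trained to detect, rather than in the plumbing itself.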

 

In 2015, Google had an earlier incident in which its Photos app categorised pictures of black people as “gorillas.” The software used AI to automatically group and label photos in the app. Richard Socher of MetaMind, an AI company now part of Salesforce, rightly pointed out that Google did not design the software to do this, but “if it trains on terrible data, it will make terrible predictions.”

 

Such incidents highlight an interesting reality: the immense role technology companies are likely to play in the future because of the innovations they create. The more their products and services intersect with the law and ethics, the more political tension they will generate. The days when these firms could leave such problems to market forces are beginning to vanish. If there is anything to be learnt from the internet, whose lawlessness and borderless nature have caused havoc for regulators, it is that these tech firms will be held more accountable for their activities than ever before. AI will certainly be no exception.

