Humans vs. Robots vs. Robots vs. Humans: Part 2 (Cybersecurity, AI, and Human Rights)

Last month’s edition of the Dataspace newsletter looked at the growing application of AI to cybersecurity, and explored how some of the major players in tech are incorporating AI-based security tools into the suites of products they offer business clients.

However, it is also important to note that hacking threatens not only an organization’s information, but personal privacy as well. The latter is a growing concern, as evidenced by statements from Apple CEO Tim Cook and by the content of many sessions at the recent International Conference on Computers, Privacy and Data Protection. (Of note, there were many panels addressing various aspects of AI use and regulation, as well as a session on the intersection of Rights and Cybersecurity policy for Europe.)

A point highlighted by many of these sessions, as well as by literature from panel participants, is that the realms of data protection, cybersecurity, and human rights protection are now intimately linked. Similarly, the line between “personal data” and “business data” has become blurry.

The boundary between “threat detection” and “privacy violation” is another gray area that will have to be navigated. Looking back at some of the examples we explored in April’s newsletter, we see Amazon’s use of behavior models and IBM’s AI risk assessments based on a user’s activity type. This threat detection tactic, User and Entity Behavior Analytics (UEBA), explicitly relies on machine learning algorithms to track and analyze all user activity, building a baseline of what constitutes “normal” behavior for any particular user and flagging deviations from it. The methodology poses an ethical quandary in and of itself: there is a responsibility to protect users’ data from external (and internal) threats, but improving the AI needed to mitigate those threats requires that users’ behavior be surveilled and monitored ever more closely by that same AI.
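To make the mechanics concrete, here is a minimal sketch in Python of the kind of per-user baselining UEBA tools rely on. It uses a simple z-score against each user’s own history; the user names, activity metric, and 3-sigma threshold are illustrative assumptions, not any vendor’s actual model:

```python
import numpy as np

# Hypothetical per-user daily activity counts (e.g., file downloads).
# In a real UEBA system these would be derived from audit logs.
history = {
    "alice": [12, 9, 14, 11, 10, 13, 12],
    "bob":   [3, 4, 2, 5, 3, 4, 3],
}

def anomaly_score(user: str, todays_count: int, min_std: float = 1.0) -> float:
    """Return how many standard deviations today's activity sits
    from this user's own historical baseline (a z-score)."""
    baseline = np.asarray(history[user], dtype=float)
    mean, std = baseline.mean(), max(baseline.std(), min_std)
    return (todays_count - mean) / std

# Flag activity that deviates sharply from the user's own "normal".
for user, today in [("alice", 13), ("bob", 40)]:
    score = anomaly_score(user, today)
    status = "ANOMALOUS" if abs(score) > 3.0 else "normal"
    print(f"{user}: today={today}, z={score:+.1f} -> {status}")
```

Even this toy version makes the tension visible: to score today’s behavior at all, the system must retain a running record of every user’s past activity, which is precisely the surveillance the protection is meant to guard against.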

Is privacy then lost all the same, in the name of protection?

When we reach the Fourth Wave of AI, what kind of ethical implications will there be in having “superintelligent” AI review personal information and user activity?

There will necessarily be an ongoing conversation around these and similar ethical questions as we continue to navigate the growing intersection of technology and human rights.
