Shaping Society: “Our Lives With Algorithms”

By Ben Nolan

I recently attended the Alexander von Humboldt Institute for Internet and Society’s event “Our Lives with Algorithms” with Louise Amoore. Professor Amoore presented excerpts from the research that went into her new book, Cloud Ethics: Algorithms & the Attributes of Ourselves & Others, and dove into some of the challenges algorithms present to society.

One of the central themes of the talk was ethics: more specifically, how some applications of machine learning algorithms are changing the representation of “good” as a concept, producing decisions based on opaque methodology and an effectively partial consideration of situational facts.

The human at the center of data

At Statice, we build software to safely anonymize sensitive personal information, so that companies can protect individual privacy without giving up the potential for innovation and value creation around data. We believe that the ethical and moral responsibility to define concepts such as “good” and “bad” should always lie with humans, not machines. Data-driven innovation can only do good if it is aimed at humanity and operated by humans, after all.

We firmly believe in the power of machine learning algorithms to “do good”. We use deep learning as part of our software to anonymize data and keep personal data safe, which, for example, enables healthcare companies to share data for medical research. At the same time, we agree that those who design and deploy algorithms need to remember that they are only tools, with their own implicit weaknesses.

These characteristics of opacity and partiality were used in the presentation to illustrate that responsible users of ML technologies should work to ensure that, where needed, a human is truly in the loop, which can be a difficult proposition in many ML applications. Professor Amoore built on this to suggest that we should instead develop our thinking around algorithmic decisions towards an “aperture of possibility” rather than a fixed outcome, and keep the human connection in mind.

Machine learning is a tool, not a solution

I thoroughly enjoyed the presentation overall, though I felt the roles of data and of operator action in algorithmic performance could have been discussed in more detail, and that the positive sides of ML could have been illustrated to present a more balanced perspective. ML algorithms are just a progression of a process that has been happening for thousands of years: the recognition and use of patterns in data. ML allows us to do this faster, and at much greater scale and granularity, than was possible until recently. But we need to remember that humans can be flawed, that these flaws can play out in the data we collect, and that they can be accelerated by algorithms.

That doesn’t mean we should throw the proverbial baby out with the bathwater, however. ML presents a huge opportunity to solve problems that are too complex or expensive to address with traditional methods. It enables better healthcare, safer driving, improved cybersecurity, and less waste. These are opportunities that we, as an ever-growing society, cannot afford to pass up.

It’s positive to see the discussion of the impact of technology and data moving more into the public sphere, and we look forward to attending more events organized by the Humboldt Institute. We’d be happy to hear your thoughts on algorithms and data privacy, so feel free to contact us.
