👋 Hello, again!
Welcome to this second episode of Mind The Gap, which is already a special one: it was birthday week for GDPR! 🎂
That meant we read tons of content on privacy from both technical and policy perspectives: our birthday boy (they/them) is still attracting mixed reactions, DuckDuckGo was caught in the act, and Google is rolling out symmetrical consent in the EU. And to keep your data close(d), we're launching a self-hosted option with the STRM Data Plane.
Hope you enjoy, we're keen to read your feedback - please share it with me and feel free to forward/recommend Mind The Gap or send your tips for episode #3!
-Pim at STRM
My first GDPaRty
My first GDPaRty was in 2018 (I partly stole that joke). It was some seven weeks before the GDPR would come into effect in Europe, and I was in a room full of legal folks and executives in my role as product manager of recommendations and data for an e-commerce marketplace. My domains would be severely impacted by GDPR and guidance was limited, so we had interesting discussions (lawyer: "you can't do recommendations anymore". Us: "not so fast, cowboy").
It’s in that room I discovered how large the gap was between the legal/policy view and the engineering and data perspectives. That experience and the following weeks sowed the seed for STRM, and although it's been 4 years I sometimes wonder if the world really understands what privacy regulations aim for.
Fact is, the digital economy (by design?) moves faster than regulators can keep up with, and industry has a thousand-fold the capacity in this arms race (which, I've argued before, drives rather than limits innovation!).
Read along for a few examples.
GDPR after 4 years: still attracting mixed reactions
A birthday is always a good moment to reflect, and so NOYB has a few notes on how it's received and understood, while Wired dives into the reasons it's apparently so hard to bring Big Tech to its knees with GDPR. I am confident it's definitely not well understood among tech folks with a large Twitter following.
Great timing by the EDPB to prove its value: in a letter to the European Commission, the data protection board finds that new Anti-Money Laundering regulations pose a big threat to citizens (to find fraud, you have to inspect data). But those same citizens also expect law enforcement and financial institutions to effectively battle criminal networks. Privacy is, once more, a balancing act.
It's fines, really
One of the interesting perspectives on this is how fines work out. They appear to be a financial risk, but in practice they also put a price on one side of the balancing act - like pricing the business case. Is it simply a profit-loss discussion to the extent of "are we willing to take this (financial) risk"? In reality, reputation risks are a much more tangible threat than fines: they are more likely, come unexpectedly, materialise much quicker and have a more direct impact on revenue (albeit perhaps briefly?).
If only there were real personal risk involved in misusing data for corporate gain (Zuckerberg is being sued over Cambridge Analytica).
A Few Good Ducks?
DuckDuckGo is both the weirdest name for a business I know and my favourite search app (I wanted to rebrand STRM as Private Parts but my co-founder Bart wouldn't let me).
But are they 100% pro-privacy? In what seems to be at the very least an expensive PR slip, suspicions were raised about how DuckDuckGo is monetising. The issue: a reference to an agreement that DuckDuckGo would not block Microsoft tracking, because of a contract they aren't allowed to talk about...
Founder Weinberg responded, and it mainly shows how hard it is to balance a mission with scaling out to a sustainable business. I can't assess from my position whether this is a deal with the devil (in privacy terms). I do know that a deal sharing only anonymous data with Microsoft would certainly look better than this one.
Google introduces symmetrical consent in the EU
On to that other Big Five giant (and contrary to my argument about the effectiveness of fines):
Europe is getting symmetrical cookie choices for Google services, following a bunch of fines from EU Data Protection Authorities.
Google is collecting a lot of them (fines) by now, but the relevant ones center around “asymmetrical” consent choices, where accepting took one click but rejecting required declining specific consents one by one. As a product manager I’d call that intended friction, and at scale it leads to many more default choices (accept) than declines. Which is beneficial if you sell ads based on data for a living...
This is a good move, and will probably help drive more “symmetrical” choices in consent across many consent options. But Google is, per their own words, all about privacy. And a choice to share or not share data is a privacy essential.
So why isn’t this being rolled out across the world, Google?
Back to my own department: release news!
This one is a bit more technical.
For the past months, we've been working on splitting our platform in such a way that all components that contain or touch sensitive (customer) data can run inside a customer's cloud instead of as a SaaS solution. This has important benefits for security and privacy, which we list in the post below.
If you see it in action, it's magic. We never sold STRM as being machine learning or data science, although its place in ML and DS stacks is very clear. I used to work in machine learning, but seeing the STRM Data Plane being pulled up automagically once an instruction chart is submitted is the closest thing to “artificial intelligence” I have personally been part of.
Head over to the blog for the full introduction: