Easter Egg and AI Patchwork
Welcome to the fourth issue of my monthly newsletter
I'll be sharing analysis and short stories on digital transformation, practical recommendations, and reading suggestions on this platform.
This month, to follow up on Easter, a short double issue, as a lot has been going on in the digital space over the last week.
First, a timely reminder that we probably take way too much of today’s digital comfort for granted and that cybersecurity in large part depends on the curiosity and commitment of a few people.
Second, my thoughts on the current shouting match that is AI governance. What is happening, what does it mean in a broader context, and where do we go?
Please enjoy!
Easter Egg
An easter egg usually refers to a hidden feature in software code, intended to amuse the curious user able to find it, like a game hidden in Microsoft Excel. Over this Easter, a much darker hidden “feature” was discovered by an engineer surprised by lagging performance in a widely used software component. What at first looked like a simple bug now appears to be a deliberately engineered vulnerability, introduced to be exploited by malicious actors down the line, an attack vector now thwarted by the swift action of the open-source community.
If you want to dive into the details of this incident and the background on why it was so serious, I can recommend the pieces in WIRED and POLITICO as well as the German blog DNIP.CH.
The story reminded me of a book I read decades ago: The Cuckoo’s Egg by Clifford Stoll. Published the year I was born, the book describes how a systems engineer – the author – was puzzled by a mainframe accounting charge that seemed out of place. As he investigates, the story quickly develops into a real-world spy thriller: the charge leads to a Germany-based hacker using university mainframes to steal US secrets and sell them to Soviet intelligence.
Both stories go to show just how brittle and complex the digital world is, in 1989 and even more so today. The vast majority of our digital infrastructure relies on software maintained by the open-source community. And as both stories show, finding vulnerabilities or suspicious behavior often comes down to the curiosity and commitment of individuals. While “the system worked” in both cases, we should ask ourselves whether this is a sustainable path forward or whether more could be done to systematically improve cybersecurity and our digital infrastructure.
Making sense of the patchwork
If you try to follow what is going on in AI governance, you’ll need a lot of time and attention. Despite the often-lamented slow pace of policymaking, a whole lot has happened on AI in recent months across sub-national, national, regional, and global governance efforts, not to mention the countless frameworks and manifestos proposed by the private sector.
Just take a look at the EU’s efforts alone to regulate the digital realm in this overview here. In addition to the EU AI Act, we have also seen the first agreement by the Council of Europe and even a “landmark resolution on artificial intelligence” by the UN General Assembly. Add to this national efforts such as the UK AI Safety Summit or the White House Executive Order on AI, as well as global governance efforts by the G7 with the Hiroshima Process and the ongoing work of the OECD. This flurry of activity is warranted, as countless researchers and activists have been calling for global governance of AI.
But if you are a local politician, entrepreneur, or citizen, it is difficult to understand what the impact of all these documents will be. Some thoughts and concepts from international affairs can help you navigate the patchwork.
A first step can be to think along the axis of hard law vs. soft law. Simply stated, soft law is not enforceable; it is more a set of norms that we can broadly agree on. At the other end of the spectrum sits hard law, which is enforceable and very clearly defined. Some of the current efforts on AI governance fall into soft law; others are clearly hard law. This distinction is important when judging the content of individual governance efforts. For example, soft law tends to be formulated in a much fuzzier way, often with a broader scope and applying to more actors, whereas hard law is usually very narrow and specific. Over time, soft law can also lay the foundation for hard law and facilitate the emergence of norms, i.e. “unwritten rules.” Interestingly, when it comes to AI, research has found that while there are countless soft law instruments, the norms contained in them are often similar.
A second helpful distinction is to look at the institution in charge of a governance effort. Is it purely multilateral, i.e. bringing together governments? Or is it a multistakeholder initiative that also includes the private sector and civil society as well as academia and other groups? In digital governance in particular, a lingering dispute between world powers, going back to the early days of internet governance, is visible again: while the US and its allies favor a multistakeholder approach, countries like China and Russia, along with significant voices from the Global South, would prefer to regulate AI and the digital space in existing multilateral fora such as the UN. What kind of institution is in charge again shapes what impact a governance effort might have. A resolution by the UN General Assembly carries political weight, but it will affect the everyday operations of a company quite differently than a new technology standard agreed upon in an industry association.
Those are just two ways concepts from international affairs can help you place the many governance efforts and understand their potential impact on you and your business. In any case, the dynamic field of AI governance will keep us busy for the foreseeable future.
Subscribe to this newsletter and follow me on social media for more publications and thoughts:
LinkedIn
Twitter
Bluesky
Mastodon