“Digital citizenship”

The internet has fundamentally transformed the environment in which governments operate, from a generally static one with low information velocity to a global, dynamic one with incredibly fast information exchange. With this comes a great increase in globalization; after all, the internet is not bounded by country (generally speaking, the Great Firewall and its kin notwithstanding), so commerce and governance disseminate across the world. From our discussion, two possible end-game outcomes of increased digitization emerged: a highly centralized, all-powerful government with full access to our information, and thereby our lives, or a completely decentralized world where nation-states hold very little power and even the concept of a country has little meaning. What’s crucial to understand is that the internet is at its core agenda-agnostic, happy to enable both scenarios through a little protocol suite called TCP/IP.

While discussing the concept of government by API, a possibility that allows for streamlined development, innovation, and lower costs, one issue that stood out to me was how we would handle identity and authentication in such a paradigm. If access to government services occurs mainly through the internet, the ability to uniquely and securely identify citizens becomes a core part of the national infrastructure; failing to do so erodes sovereignty, opens up pathways to fraud, and leaves the legitimately needy stranded. The US has made very little progress on this front, with our means of identification typically limited to Social Security numbers or driver’s licenses. Both of these mechanisms rest on shaky foundations. Social Security numbers were never designed as identity numbers, and breaches like Equifax’s have essentially made SSNs semi-public information. So an SSN works as a unique identifier, but not as an authenticator. Driver’s licenses, meanwhile, used as photo IDs at places like polling stations, come with their own challenges, namely that requiring them would exclude people who tend to be lower-income or minorities.
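The identifier-versus-authenticator distinction can be sketched in a few lines of code. This is a minimal, hypothetical example (the `register`/`authenticate` functions are mine, not any real system’s): a public number like an SSN can locate a record, but proving you are that person requires a secret that the service stores only as a salted hash.

```python
import hashlib
import hmac
import os

# Toy in-memory "registry": identifier -> (salt, hash of secret).
users = {}

def register(user_id: str, secret: str) -> None:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)
    users[user_id] = (salt, digest)  # the identifier is public; only a hash of the secret is kept

def authenticate(user_id: str, secret: str) -> bool:
    if user_id not in users:          # step 1: the identifier locates the record
        return False
    salt, digest = users[user_id]
    candidate = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # step 2: the secret proves identity

register("123-45-6789", "correct horse battery staple")
assert authenticate("123-45-6789", "correct horse battery staple")
assert not authenticate("123-45-6789", "guess")  # knowing the SSN alone is not enough
```

An SSN-based system collapses both steps into one: the "secret" is the identifier itself, which breaches like Equifax’s have already made public.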

While the US has stalled on that front, other countries like India, with its Aadhaar program, have made broad-based efforts to create a digital identity system for the 21st century. I’ve actually received an Aadhaar ID myself, while visiting for a couple of months. The process is painless: you simply show up with a couple of documents and get a biometric scan. Privacy is of course a concern in such a system, but by being very careful about what information is actually stored (the relevant hashes), the government can open the door to innovation atop the platform while minimizing losses should a leak occur. In the US, such broad governmental collection of data would likely be politically infeasible, so we talked about private solutions possibly emerging. For example, Google and Facebook accounts already authenticate us to a variety of services and comprise a trove of information that essentially represents our digital lives. At a certain inflection point, our Facebook IDs could become more relevant than our SSNs. The idea of private companies providing “public” services is increasingly common around the world. Take Sesame Credit, under the Alibaba family, which has become the de facto platform for credit ratings in China. This trend is not inherently good or bad, but we must carefully consider the incentives that arise once a firm becomes a natural monopoly, and ensure that a private solution truly is the optimal one.
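The “sign in with your platform account” pattern can be sketched as follows. This is a toy HMAC-signed token, not the real Google/Facebook flow (those use OAuth/OpenID Connect with public-key signatures), and all the names are illustrative: the key point is that the identity provider signs a claim about who you are, and the relying service verifies the signature instead of holding its own copy of your credentials.

```python
import base64
import hashlib
import hmac
import json
from typing import Optional

# Illustrative shared key; real providers sign with private keys instead.
PROVIDER_KEY = b"secret-held-by-the-identity-provider"

def issue_assertion(user_id: str) -> str:
    """The identity provider vouches for a user by signing a claim."""
    payload = base64.urlsafe_b64encode(json.dumps({"sub": user_id}).encode())
    sig = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_assertion(token: str) -> Optional[str]:
    """A relying service checks the signature before trusting the claim."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(PROVIDER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                      # forged or tampered token
    return json.loads(base64.urlsafe_b64decode(payload))["sub"]

token = issue_assertion("alice@example.com")
assert verify_assertion(token) == "alice@example.com"
assert verify_assertion("x" + token) is None  # any tampering breaks the signature
```

The relying service never sees a password at all, which is exactly why whoever controls the provider account ends up controlling the user’s entire digital life.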

One thing’s for sure: in a post-Arab Spring, post-“fake news” world, the internet has upended all prior assumptions about the relationship between the people and the state.


“AI, the Internet, and the Intelligence Singularity – Will the Machines Need Us?”

This past week was another crazy roller-coaster ride for Bitcoin, which reached an all-time high of $5,000 before almost immediately hitting the next milestones of $5,100, $5,200, … moon.

I was looking at possible models of Bitcoin’s valuation growth proposed about a year ago, and we’ve since surpassed their wildest predictions based on prior performance. In a sense, any such talk of exponential growth reminds me of much of the literature around the singularity, like what we just read. (Sidenote: the Bitcoin analogue of the AI singularity, that is, a point of no return, is “hyperbitcoinization”, if/when fiat currencies lose all value.) We might plausibly accept that we’ll hit the singularity sometime in the next 20–50 years, but humans find it difficult to grasp just how fast AI will advance after the singularity. This WaitButWhy post points out that at the singularity, AI will have reached the intelligence of the smartest person on the planet, but just a few minutes later, it could be orders of magnitude smarter than that person. Software can iterate on itself much faster than genetic evolution can. One could think of this concept through the lens of compound interest: once we have more capital than our principal alone produced, that new, higher value generates even more returns, creating the lucrative exponential growth we yearn for.
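The compound-interest analogy can be made concrete in a few lines (a toy arithmetic illustration, not a model of AI progress): each period’s gains are added back to the base, so the gains themselves start generating gains.

```python
def compound(principal: float, rate: float, periods: int) -> float:
    """Grow a value by `rate` per period, reinvesting every gain."""
    value = principal
    for _ in range(periods):
        value += value * rate  # returns are earned on prior returns, not just the principal
    return value

# At 5% per period, linear (simple-interest) growth of 100 over 40 periods
# would give 100 + 40 * 5 = 300; compounding gives roughly 704.
assert round(compound(100, 0.05, 1), 2) == 105.0
assert compound(100, 0.05, 40) > 700
```

The self-improvement argument is the same shape: once a system’s output feeds back into its own capability, progress stops looking linear.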

This is why Google’s AutoML excites me so much. We use machine learning in all sorts of domains to optimize processes in ways humans could never foresee, so why not recursively apply the same technique to machine learning itself? ML on ML on ML on ML. As Sundar Pichai elegantly stated, “We must go deeper.” Developing the actual techniques behind machine learning libraries is something very few people have the requisite expertise for, and even then, it’s an extremely difficult problem to tackle. But when a computer is given the task of optimizing its own machine learning algorithm, it can simply try all sorts of tweaks and observe how performance changes, seeking to minimize some loss function. Recently, there’s been a lot of excitement over AlphaGo again, which has improved by orders of magnitude simply by playing itself. The same idea is at play here.
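That “try tweaks, keep what lowers the loss” loop can be sketched with plain random search over a single knob. This is a deliberately tiny stand-in for the idea, nothing resembling Google’s actual system, and every name here is illustrative: the inner function is a learner, and the outer loop searches over the learner’s own setting.

```python
import random

def train_loss(lr: float, steps: int = 50) -> float:
    """Inner learner: minimize f(x) = x^2 by gradient descent with step size `lr`."""
    x = 10.0
    for _ in range(steps):
        x -= lr * 2 * x        # gradient of x^2 is 2x
    return x * x               # final loss

def random_search(trials: int = 30) -> float:
    """Outer loop: optimize the learner's own hyperparameter automatically."""
    random.seed(0)             # fixed seed so the sketch is reproducible
    best_lr, best_loss = 0.001, float("inf")
    for _ in range(trials):
        lr = 10 ** random.uniform(-3, 0)   # sample learning rates log-uniformly in [0.001, 1]
        loss = train_loss(lr)
        if loss < best_loss:               # keep whichever tweak worked best
            best_lr, best_loss = lr, loss
    return best_lr

best = random_search()
assert train_loss(best) < train_loss(0.001)  # the searched rate beats a naive guess
```

Real systems search far richer spaces (whole network architectures, not one scalar), but the recursion is the same: a learning algorithm whose configuration is itself chosen by an optimization loop.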

Recent advances in AI bring up the perennial question of whether superintelligent AI will be a boon to society or the death of all humankind. These questions devolve into sensationalized headlines all too often, and even in discussion, lead to endless philosophical debate loops that lose touch with reality. This debate blew into the public sphere this past year as a spat between fellow tech billionaires Elon Musk and Mark Zuckerberg. Musk has been warning about the dangers of uncontrolled AI for a while, and has recently adopted a focus on responsible AI research through OpenAI. Zuckerberg countered this darker assessment by pointing to the benefits that AI has already unleashed, ranging from better healthcare to safer roads. When it comes down to it, they’re debating AI at two different stages of its evolution: one closer to 2017, and one closer to the singularity.

Frankly, I can’t make any predictions about whether, when, or how the singularity will play out, but I think it’s critical that we adapt to the short-term consequences of AI. Much of this is economic. I did a year-long research project last year analyzing the policy implications of technological unemployment and inequality, with a focus on autonomous vehicles. In the US, truck driving is one of the largest professions, yet one of the most vulnerable to automation. We already have Level 3 autonomous vehicles on the street, and it’s only a matter of time, with some regulatory approval, before we reach Level 5, full autonomy. What happens when such a massive group of people lose their livelihoods to a computer? In the past, technology has supplanted the more mechanical tasks and enabled humans to move on to bigger and better things. But how much longer is that sustainable? Will there be a point at which AI can perform any task better than we can?

Technology has democratized access to information, jobs, and other resources, yet in many ways it has helped to boost inequality, simply due to the massive scales at which corporations can operate. With automation, a scenario emerges where fewer and fewer people own the machines and algorithms doing all the work, and they reap all the rewards. Without a system like universal basic income to redistribute the fruits of this newfound productivity, inequality is bound to keep growing. Eventually, this reaches a breaking point: either the masses protest, or there are simply no more consumers left to purchase the goods and services being produced.

Regardless, all hail our new AI overlords.