This past week was another crazy roller-coaster ride for Bitcoin, which hit an all-time high of $5000 before almost immediately blowing past the next milestones of $5100, $5200, … moon.
I was looking at possible models of Bitcoin’s valuation growth proposed about a year ago, and we’ve surpassed their wildest predictions based on prior performance. In a sense, any such talk of exponential growth reminds me of much of the literature around the singularity, like what we just read. (Sidenote: the Bitcoin analogue for the AI singularity, that is, a point of no return, is “hyperbitcoinization”, if/when fiat currencies lose all value.) We might plausibly accept that we’ll hit the singularity sometime in the next 20–50 years, but humans find it difficult to grasp just how fast AI will be advancing after the singularity. This WaitButWhy post points out that at the singularity, AI will have reached the intelligence of the smartest person on the planet, but just a few minutes later, it could be orders of magnitude smarter than that person. Software can iterate on itself much faster than genetic evolution can. One way to think about this is through the lens of compound interest: once the returns our principal produced start producing returns of their own, that ever-higher value generates even more, creating the lucrative exponential growth we yearn for.
This is why Google’s AutoML excites me so much. We use machine learning in all sorts of domains to optimize processes in ways humans could never foresee, so why not recursively apply the same technique to machine learning itself? ML on ML on ML on ML. As Sundar Pichai elegantly stated, “We must go deeper.” Developing the actual techniques behind machine learning libraries is something very few people have the requisite expertise for, and even then, it’s an extremely difficult problem to tackle. But when a computer is given the task of optimizing its own machine learning algorithm, it can simply try all sorts of tweaks and observe how success changes, seeking to minimize some sort of loss function. Recently, there’s been a lot of excitement over AlphaGo again, which has improved itself by orders of magnitude simply by playing itself. The same idea is at play here.
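The “try tweaks and keep what lowers the loss” loop can be sketched in a few lines. This is not AutoML’s actual method (which searches over whole architectures), just a toy random search over a single hyperparameter; the `loss` function and all the names here are illustrative stand-ins:

```python
import random

def loss(step_size: float) -> float:
    # Pretend validation loss: a stand-in curve whose minimum sits at 0.1.
    return (step_size - 0.1) ** 2

def random_search(trials: int = 200, seed: int = 0) -> float:
    """Try random tweaks to a hyperparameter, keep whichever scores best."""
    rng = random.Random(seed)
    best_param, best_loss = None, float("inf")
    for _ in range(trials):
        candidate = rng.uniform(0.0, 1.0)   # propose a tweak
        candidate_loss = loss(candidate)    # measure how well it does
        if candidate_loss < best_loss:      # keep it only if it improved
            best_param, best_loss = candidate, candidate_loss
    return best_param

print(random_search())  # lands near 0.1, the minimum of our toy loss
```

The outer loop never needs to understand *why* a setting works, only whether the metric improved, which is what makes the approach applicable to problems too hard for human intuition.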
Recent advances in AI bring up the perennial questions around whether superintelligent AI will be a boon to society or the death of all humankind. These questions devolve into sensationalized headlines all too often, and even in discussion, lead to endless philosophical debate loops that lose touch with reality. This debate blew into the public sphere this past year as a spat between fellow tech billionaires Elon Musk and Mark Zuckerberg. Musk has been warning about the dangers of uncontrolled AI for a while, and recently has focused on responsible AI research through OpenAI. Zuckerberg countered this darker assessment by pointing to the benefits that AI has already unleashed, ranging from better healthcare to safer roads. When it comes down to it, they’re debating AI at two different stages of evolution, one closer to 2017 and one closer to the singularity.
Frankly, I can’t make any predictions about if, when, or how the singularity will play out, but I think it’s critical that we adapt to the short-term consequences of AI. Much of this is economic. I did a year-long research project last year analyzing the policy implications of technological unemployment and inequality, with a focus on autonomous vehicles. In the US, truck driving is one of the largest professions, and yet one of the most vulnerable to automation. We already have Level 3 autonomous vehicles on the streets, and it’s only a matter of time, with some regulatory approval, before we reach Level 5, or full autonomy. What happens when such a massive group of people lose their livelihoods to a computer? In the past, technology has supplanted the more mechanical tasks and enabled humans to move on to bigger and better things. But how much longer is that sustainable? Will there be a point at which AI can perform any task better than us?
Technology has democratized access to information, jobs, and other resources, yet in many ways it has helped boost inequality, simply due to the massive scale at which corporations can operate. With automation, a scenario evolves where fewer and fewer people own the machines/algorithms doing all the work, and they reap all the rewards. Without a system like universal basic income to redistribute the fruits of this newfound productivity, inequality is bound only to continue to grow. Eventually, this reaches a breaking point, either when the masses protest or when there are simply no more consumers left to purchase the goods and services being produced.
Regardless, all hail our new AI overlords.