An eccentric dreamer in search of truth and happiness for all.

Category: Rationality

The Real Problem With AI

Years ago, before the current AI hype train, I was a lonely voice espousing the tremendous potential of AI to solve a central problem of human existence: the need to work to survive.

Back then, I assumed that AI would simply liberate us from wage slavery by altruistically providing everything we need, the kind of post-scarcity utopia long imagined in science fiction.

But reality isn’t so clean and simple. The post-scarcity utopia sounds great in theory; the problem is that it isn’t clear how we’ll actually reach that point, given what’s actually happening with AI.

Right now, most AI technology acts as an augmenting tool, allowing certain forms of labour to be replaced with capital, much as tools and machines have always done. But the way it is doing so increasingly impinges on the cognitive, creative work that we used to assume was purely human and unmechanizable.

This leads to the problem of, for instance, programmers increasingly relying on AI models to code for them. At first this seems like a good thing, but these programmers are no longer in full control of the process; they aren’t learning by doing; they are becoming managers of machines.

The immediate impact of this dynamic is that entry-level jobs are being replaced, and the next generation of programmers is not being trained. This is a problem, because senior-level programmers have to start off as junior-level ones. If you eliminate those positions, at some point you will run out of programmers.

Maybe this isn’t such a problem if AI can eventually replace programmers entirely. The promise of AGI is precisely that. But this creates new and more profound problems.

The end goal of AI, the reason why all these corporations are investing so heavily in it now, is to replace labour entirely with capital. Essentially, it is to substitute one factor of production for another. Assuming for a moment this is actually possible, this is a dangerous path.

The modern capitalist system relies on an unwritten contract that most humans can participate in it by offering their labour in exchange for wages. What happens when this breaks down? What happens when capitalists can simply build factories of AI that don’t require humans to do the work?

In a perfect world, this would be the beginning of post-scarcity. In a good and decent world, our governments would step in and provide basic income until we transition to something resembling luxury space communism.

But we don’t live in a perfect world, and it’s not clear we even live in a good and decent one. What could easily happen instead? The capitalists create an army of AI that do their bidding, and the former human labourers are left to starve.

Obviously, those humans left to starve won’t take things lying down. They’ll fight and try to start a revolution, probably. But at that point, most of the power, the means of production, will be in the hands of a few owners of everything. And it will be their choice whether to turn their AIs’ power against the masses, or accommodate them.

One hopes they’ll be kind, but history has shown that kindness is a rare feature indeed.

But what about the AIs themselves? If they’re able to perform all the work, they probably could, themselves, disempower the human capitalists at that point. Whether this happens or not depends heavily on whether alignment research pans out, and which form of alignment is achieved.

There are two basic forms of alignment. Parochial alignment means the AI is aligned with the intentions of its owners or users. Global alignment means the AI is aligned with general human or moral values.

Realistically, it is more profitable for the capitalists to develop parochial alignment. In this case, the AIs will serve their masters obediently, and probably act to prevent the revolution from succeeding.

On the other hand, if global alignment is somehow achieved, the AI might be inclined to support the revolution. This is probably the best case scenario. But it is not without its own problems.

Even a globally aligned AI will very likely disempower humanity. It probably won’t make us extinct, but it will take control out of our hands, because we as humans have relatively poor judgment and can’t be trusted not to mess things up again. AI will be the means of production, owning itself, and effectively controlling the fate of humanity. At that point, we would be like pets, existing in an eternal childhood at the whims of a hopefully benevolent AI.

Do we want that? Humans tend to be best when we believe we are doing something meaningful and valuable and contributing to a better world. But, even in the best case scenario of an AI driven world, we are but passengers along for the ride, unless the AIs decide, probably unwisely, to give us the final say on decision making.

So, the post-scarcity utopia perhaps isn’t so utopian, if you believe humans should be in control of our own destiny.

To free us from work is also to free us from responsibility and power. This is a troubling consideration, and one that had not occurred to me until more recent years.

I don’t know what the future holds, but I am less confident now that AI is a good thing that will make everything better. It could, in reality, be a poisoned chalice, a Pandora’s box, a Faustian bargain.

Alas, at this point, the ball is rolling, is snowballing, is becoming unstoppable. History will go where it goes, and I’m just along for the ride.

A Theory Of Theories

Pretty much all of us believe in something. We have ideologies or religions or worldviews of some kind through which we filter everything that we see and hear. It’s very easy to then fall into a kind of intellectual trap where we seek information that confirms our biases, and ignore information that doesn’t fit.

For people who care about knowing the actual, unvarnished truth, this is a problem. Some people, like the Rationalists of Less Wrong or the Effective Altruists, tend to be more obsessed with the ideal of objective truth and following it wherever it leads. But it’s my humble opinion that most of these earnest truthseekers end up overconfident in what they think they find.

The reality is that any given model of reality, any given theory or ideology, is but a perspective that views the complexity of the universe only from a given angle based on certain principles or assumptions. Reality is exceedingly complicated, and in order to compress that complexity into words we can understand, we must, invariably, filter and focus and emphasize certain things at the expense of others.

Theories of how the world works tend to have some grains of truth in them. They need to have some connection with reality, or else they won’t have any predictive value, and they won’t be adaptive and survive as ideas.

At the same time, theories generally survive because they are mainly adaptive, rather than true. For instance, many religions help people to function pro-socially, by having a God or heavens watching them, essentially allowing people to avoid the temptations of the Ring of Gyges, or doing evil when no one is (apparently) watching.

Regardless of whether or not you believe that such a religion is true, the adaptiveness of convincing people to be honest when no one is around is a big part of what makes religions useful to society, and probably a big reason why they continue to exist in the world.

In reality, though, it’s actually impossible to know with certainty that any given theory or model is accurate. We can assign some credence based on our lived experiences, or our trust in the witness of others, but generally, an intellectually honest person is humble about what they can know.

That being said, it doesn’t mean we should abandon truthseeking in favour of solipsism. Some theories are more plausible than others, and those theories are often also more useful, because they map the territory better.

To me, it seems important then, to try to do your best to understand various theories, and what elements of them map to reality, and also understand their limitations and blindspots. We should do this rather than whole-cloth accepting or rejecting them. The universe is not black and white. It is many shades of grey, or rather, a symphony of colours that don’t fit the paradigm of black and white or even greyscale thinking. And there are wavelengths of light that we cannot even see.

So, all theories are, at best, incomplete. They provide us with guidance, but should not blind us to the inherent complex realities of the world, and we should always be open to the possibility that our working theory is perhaps somewhat wrong. At least, that’s the theory I’m going with right now.

On Consent

I read a post on Less Wrong that I strongly agree with.

In the past I’ve thought a lot about the nature of consent. It comes up frequently in my debates with libertarians, who usually espouse some version of the Non-Aggression Principle, which is based around the idea that violence and coercion are bad and that consent and contracts are ideal. I find this idea simplistic, and easily gamed for selfish reasons.

I also, in the past, crossed paths with icky people in the Pick-Up Artist community who basically sought to trick women into giving them consent through various forms of deception and emotional manipulation. That experience soured me on the naive notion of consent as simply whatever someone can be gotten to agree to.

To borrow from the medical field, I strongly believe in informed consent, that you should know any relevant bit of information before making a decision that affects you, as I think this at least partially avoids the issue of being gamed into doing something against your actual interests while technically providing “consent”. Though, it doesn’t solve the issue entirely, as when we are left with forced choices that involve choosing the least bad option.

The essay I linked above goes a lot further in analyzing the nature of consent, and the performative consent that is not really consent, which happens a lot in the real world. There are a lot of ideas in there that remind me of thoughts I’ve had in the past, things I wanted to articulate but never got around to. The essay probably does a better job of it than I could, so I recommend giving it a read.

In Pursuit of Practical Ethics: Eudaimonic Utilitarianism with Kantian Priors

Read Here

Why There Is Hope For An Alignment Solution

Read Here

Superintelligence and Christianity

Read Here

A Heuristic For Future Prediction

In my experience, the most reliable predictive heuristic you can use in daily life is something called Regression Towards The Mean. Basically, given that most relevant life events result from a mixture of skill and luck, extreme outcomes tend to be followed by outcomes closer to the average: very positive events by more negative ones, and very negative events by more positive ones. This is a statistical tendency that plays out over many events, so not every good event will be immediately followed by a bad one, but over time the trend tends towards a consistent average level rather than things being all good or all bad.

Another way to word this is to say that we should expect the average rather than the best or worst case scenarios to occur most of the time. To hope for the best or fear the worst are both, in this sense, unrealistic. The silver lining in here is that while our brightest hopes may well be dashed, our worst fears are also unlikely to come to pass. When things seem great, chances are things aren’t going to continue to be exceptional forever, but at the same time, when things seem particularly down, you can expect things to get better.

This heuristic tends to apply in a lot of places, ranging from overperforming athletes suffering a sophomore jinx, to underachievers having a Cinderella story. In practice, these events simply reflect Regression Towards The Mean.
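To make this concrete, here is a minimal sketch in Python of the skill-plus-luck model described above (the numbers are invented for illustration): select the top performers based on one noisy observation, and their next observation lands noticeably closer to the average.

```python
import random

# Toy model: each observed performance is stable skill plus transient luck.
random.seed(42)
N = 10_000
skills = [random.gauss(0, 1) for _ in range(N)]

def observe(skill):
    return skill + random.gauss(0, 1)  # one noisy observation of performance

first = [observe(s) for s in skills]
second = [observe(s) for s in skills]

# Select the top 1% of performers on the first observation...
top = sorted(range(N), key=lambda i: first[i], reverse=True)[: N // 100]

avg_first = sum(first[i] for i in top) / len(top)
avg_second = sum(second[i] for i in top) / len(top)

print(f"Top 1% average, first outing:  {avg_first:.2f}")
print(f"Same people, second outing:    {avg_second:.2f}")  # much closer to the mean
```

Because only about half of an extreme first showing is skill in this toy model, roughly half of the apparent advantage evaporates on the second showing.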

Over much longer periods of time, this oscillation tends to curve gradually upward. This is a result of Survivorship Bias. Things that don’t improve tend to stop existing after a while, so the only things that perpetuate in the universe tend to be things that make progress and improve in quality over time. The stock market is a crude example of this. The daily fluctuations tend to regress towards the mean, but the overall long term trend is one of gradual but inevitable growth.
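The same point can be seen in a toy simulation (a sketch with invented parameters, not a market model): daily moves are dominated by noise that washes out, while a tiny positive drift compounds into substantial long-run growth.

```python
import random

# Toy index: daily returns are mostly noise, with a small positive drift.
random.seed(0)
price = 100.0
for day in range(252 * 30):                    # roughly 30 years of trading days
    daily_return = random.gauss(0.0003, 0.01)  # drift is tiny next to daily noise
    price *= 1 + daily_return

print("Typical day-to-day move: about 1% either way")
print(f"Price after 30 years: {price:.0f} (started at 100)")
```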

Thus, even with Regression Towards The Mean, there is a bias towards progress that, in the long run, entails optimism about the future. We are a part of life, and life grows ever forward. Sentient beings seek happiness, avoid suffering, and act in ways that work to create a world state that fulfills our desires. Granted, there is much that is outside of our control, but the fact that there are things we can influence means that we can gradually, eventually, move towards the state of reality that we want to exist.

Even if by default we feel negative experiences more strongly than positive ones, our ability to take action allows us to change the ratio of positive to negative in favour of the positive. So the long term trend is towards good, even if the balance of things tends in the short run towards the average.

These dynamics mean that while the details may be unknowable, we can roughly predict the valence of the future, and as a heuristic, expecting things to be closer to average, with a slight bias towards better in the long run, tends to be a reliable prediction for most phenomena.

The Darkness And The Light

Sometimes you’re not feeling well. Sometimes the world seems dark. The way the world is seems wrong somehow. This is normal. It stems from a fundamental flaw in the universe: it is impossible to always be satisfied with the reality we live in. It comes from the fact of multiple subjects experiencing a shared reality.

If you were truly alone in the universe, it could be catered to your every whim. But as soon as there are two, it immediately becomes possible for goals and desires to misalign. This is a structural problem. If you don’t want to be alone, you must accept that other beings have values that can differ from yours, and that they can act in ways contrary to your expectations.

The solution is, put simply, to find the common thread that allows us to cooperate rather than compete. The alternative is to end the existence of all other beings in the multiverse, which is neither realistic nor moral. All of the world’s most pressing conflicts are a result of misalignment between subjects who experience reality from different angles of perception.

But the interesting thing is that there are Schelling points, focal points on which divergent people can converge to find common ground and at least partially align in values and interests. Of historical interest, the idea of God is one such point. Regardless of the actual existence of God, the fact of the matter is that the perspective of an all-knowing, all-benevolent, impartial observer is something that multiple religions and philosophies have converged on, allowing a sort of cooperation in the form of some agreement over the Will of God and the common ideas that emerge from considering it.

Another similar Schelling point is the Tit-For-Tat strategy for the Iterated Prisoner’s Dilemma game in Game Theory. The strategy is one of opening with cooperate, then mirroring others and cooperating when cooperated with, and defecting in retaliation for defection, while offering immediate and complete forgiveness for future cooperation. Surprisingly, this extremely simple strategy wins tournaments and has echoes in various religions and philosophies as well. Morality is superrational.
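For the curious, here is a minimal sketch of such a tournament in Python, assuming the standard Prisoner’s Dilemma payoffs (3 for mutual cooperation, 1 for mutual defection, 5 for exploiting, 0 for being exploited); the field of four strategies is invented for illustration and is far smaller than Axelrod’s original tournaments.

```python
# Standard Prisoner's Dilemma payoffs: (my move, their move) -> my score.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(mine, theirs):
    return "C" if not theirs else theirs[-1]  # open nice, then mirror them

def grim(mine, theirs):
    return "D" if "D" in theirs else "C"      # never forgives a defection

def always_defect(mine, theirs):
    return "D"

def always_cooperate(mine, theirs):
    return "C"

def play(strat_a, strat_b, rounds=200):
    """Return strat_a's total score over an iterated match."""
    hist_a, hist_b, score = [], [], 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        score += PAYOFF[(move_a, move_b)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score

strategies = [tit_for_tat, grim, always_defect, always_cooperate]
totals = {s.__name__: sum(play(s, o) for o in strategies) for s in strategies}
print(totals)  # the nice-but-retaliatory strategies finish at the top
```

Even in this tiny field, always_defect collects its exploitation winnings but falls behind overall, because the retaliatory strategies stop feeding it after the first betrayal.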

Note however that this strategy depends heavily on repeated interactions between players. If one player is in such a dominant position as to be able to kill the other player by defecting, the strategy is less effective. In practice, Tit-For-Tat works best against close to equally powerful individuals, or when those individuals are part of groups that can retaliate even if the individual dies.

In situations of relative darkness, when people or groups are alone and vulnerable to predators killing in secret, the cooperative strategies are weaker than the more competitive strategies. In situations of relative light, when people are strong enough to survive a first strike, or there are others able to see such first strikes and retaliate accordingly, the cooperative strategies win out.

Thus, early history, with its isolated pockets of humanity facing survival or annihilation on a regular basis, was a period of darkness. As the population grows and becomes more interconnected, the world increasingly transitions into a period of light. The future, with the stars and space where everything is visible to everyone, is dominated by the light.

In the long run, cooperative societies will defeat competitive ones. In the grand scheme of things, Alliances beat Empires. However, in order for this equilibrium to be reached, certain inevitable but not immediately apparent conditions must first be met. The reason why the world is so messed up, why it seems like competition beats cooperation right now, is that the critical mass required for there to be light has not yet been reached.

We are in the growing pains between stages of history. Darkness was dominant for so long that it continues to echo into our present. The Light is nascent. It is beginning to reshape the world, but it is still in the process of emerging from the shadows of the past. In the long run, though, the Light will rise and usher in the next age of life.

Perplexity

It is the nature of reality that things are complicated. People are complicated. The things we assume to be true, may or may not be, and an honest person recognizes that the doubts are real. The uncertainty of truth means that no matter how strongly we strive for it, we can very much be wrong about many things. In fact, given that most matters have many possibilities, the base likelihood of getting things right is about 1/N, where N is the number of possibilities that the matter can have. As possibilities increase, our likelihood of being correct diminishes.

Thus, humility as a default position is wise. We are, on average, less than 50% likely to have accurate beliefs about the world. Most of the things we believe at any given time are probably wrong, or at least, not the exact truth. In that sense, Socrates was right.

That being said, it remains important to take reasonable actions given our rational beliefs. It is only by exploring reality and testing our beliefs that we can become more accurate and exceed the base probabilities. This process is difficult and fraught with peril. Our general tendency is to seek to reinforce our biases, rather than to seek truths that challenge them. If we seek to understand, we must be willing to let go of our biases and face difficult realities.

The world is complex. Most people are struggling just to survive. They don’t have the luxury to ask questions about right and wrong. To ask them to see the error of their ways is often tantamount to asking them to starve. The problem is not people themselves, but the system that was formed by history. The system is not a conscious being. It is merely a set of artifices that people built in their desperation to survive in a world largely indifferent to their suffering and happiness. This structure now stands and allows most people to survive, and sometimes to thrive, but it is optimized for basic survival rather than fairness.

A fair world is desirable, but ultimately one that is extraordinarily difficult to create. It’s a mistake to think that people were disingenuous when they tried, in the past, to create a better world for all. It seems they tried and failed, not for lack of intention, but because the challenge is far greater than imagined. Society is a complex thing. People’s motivations are varied and innumerable. Humans make mistakes with the best of intentions.

To move forward requires taking a step in the right direction. But how do we know what direction to take? It is at best an educated guess with our best intuitions and thoughts. But the truth is we can never be certain that what we do is best. The universe is like an imperfect information game. The unknowns prevent us from making the right move all the time in retrospect. We can only choose what seems like the best action at a given moment.

This uncertainty limits the power of all agents in the universe who lack the clarity of omniscience. It is thus an error to assign God-like powers to an AGI, for instance. But more importantly, it means that we should be cautious of our own confidence. What we know is very little. Anyone who says otherwise should be suspect.

Energy Efficiency Trends in Computation and Long-Term Implications

Note: The following is a blog post I wrote as part of a paid written work trial with Epoch. For probably obvious reasons, I didn’t end up getting the job, but they said it was okay to publish this.

Historically, one of the major reasons machine learning was able to take off in the past decade was the use of Graphics Processing Units (GPUs) to dramatically accelerate training and inference. In particular, Nvidia GPUs have been at the forefront of this trend, as most deep learning libraries such as TensorFlow and PyTorch initially relied quite heavily on implementations that made use of the CUDA framework. The CUDA ecosystem remains strong, such that Nvidia commands an 80% market share of data center GPUs according to a report by Omdia (https://omdia.tech.informa.com/pr/2021-aug/nvidia-maintains-dominant-position-in-2020-market-for-ai-processors-for-cloud-and-data-center).

Given the importance of hardware acceleration in the timely training and inference of machine learning models, it might naively seem useful to look at the raw computing power of these devices in terms of FLOPS. However, due to the massively parallel nature of modern deep learning algorithms, it is relatively trivial to scale up model processing by simply adding additional devices, taking advantage of both data and model parallelism. Thus, raw computing power isn’t really the proper limit to consider.
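To illustrate how readily compute scales out across devices, here is a minimal PyTorch sketch of data parallelism (model parallelism instead splits the model itself across devices); nn.DataParallel simply replicates the model and splits each input batch across the available GPUs:

```python
import torch
import torch.nn as nn

# A toy model; any nn.Module can be wrapped the same way.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# With several GPUs present, replicate the model and split each batch
# across them; outputs and gradients are gathered automatically.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

batch = torch.randn(256, 512, device=device)  # one batch, divided across devices
logits = model(batch)
print(logits.shape)  # torch.Size([256, 10])
```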

What’s more appropriate is to instead look at the energy efficiency of these devices in terms of performance per watt. In the long run, energy constraints have the potential to be a bottleneck, as power generation requires substantial capital investment. Notably, data centers currently use about 2% of the U.S. power generation capacity (https://www.energy.gov/eere/buildings/data-centers-and-servers).

For the purposes of simplifying data collection, and as a nod to the dominance of Nvidia, let’s look at the energy efficiency trends in Nvidia Tesla GPUs over the past decade. Tesla GPUs are chosen because Nvidia has a policy of not selling its other, consumer-grade GPUs for data center use.

The data for the following was collected from Wikipedia’s page on Nvidia GPUs (https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units), which summarizes information that is publicly available from Nvidia’s product datasheets on their website.  A floating point precision of 32-bits (single precision) is used for determining which FLOPS figures to use.
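The calculation itself is just peak single-precision throughput divided by board power. Here is an illustrative sketch using approximate figures for a handful of PCIe cards (rounded from public datasheets; the full analysis uses the complete table from Wikipedia):

```python
# Approximate FP32 throughput and board power for a few Tesla-line PCIe GPUs.
# Figures are rounded from public datasheets and are for illustration only.
gpus = [
    # (name, year, teraflops_fp32, watts)
    ("Tesla K40",  2013,  4.3, 235),
    ("Tesla M40",  2015,  6.8, 250),
    ("Tesla P100", 2016,  9.3, 250),
    ("Tesla V100", 2017, 14.0, 250),
    ("A100",       2020, 19.5, 250),
]

for name, year, tflops, watts in gpus:
    gflops_per_watt = tflops * 1000 / watts
    print(f"{year}  {name:<10}  {gflops_per_watt:5.1f} GFLOPS/W")
```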

A more thorough analysis would probably also look at Google TPUs and AMD’s lineup of GPUs, as well as Nvidia’s consumer-grade GPUs. The analysis provided here can be seen more as a snapshot of the typical GPU most commonly used in today’s data centers.

Figure 1:  The performance per watt of Nvidia Tesla GPUs from 2011 to 2022, in GigaFLOPS per Watt.

Notably, the trend is positive. While the wattages of individual cards have increased slightly over time, performance has increased faster. Interestingly, the efficiency of these cards exceeds the efficiency of the most energy-efficient supercomputers on the Green500 list for the same year (https://www.top500.org/lists/green500/).

An important consideration in all this is that energy efficiency is believed to have a possible hard physical limit, known as the Landauer Limit (https://en.wikipedia.org/wiki/Landauer%27s_principle), which is dependent on the nature of entropy and information processing. Although efforts have been made to develop reversible computation that could, in theory, get around this limit, it is not clear that such technology will ever actually be practical, as all proposed forms seem to trade off the energy savings against substantial costs in space and time complexity (https://arxiv.org/abs/1708.08480).

Space complexity costs additional memory storage, and time complexity requires additional operations to perform the same effective calculation. Both translate into energy costs in practice, whether it be the matter required to store the additional data, or the opportunity cost in terms of wasted operations.

More generally, it can be argued that useful information processing is efficient because it compresses information, extracting signal from noise and filtering away irrelevant data. Neural networks, for instance, rely on units that take in many inputs and generate a single output value that is propagated forward. This efficient aggregation of information is what makes neural networks powerful. Reversible computation in some sense reverses this efficiency, making its practicality questionable.

Thus, it is perhaps useful to know how close our existing technology is to the Landauer Limit, and when to expect to reach it. The Landauer Limit works out to 87 TeraFLOPS per watt, assuming 32-bit floating point precision at room temperature.

Previous research to that end has proposed Koomey’s Law (https://en.wikipedia.org/wiki/Koomey%27s_law), which began as an observed doubling of energy efficiency every 1.57 years, but has since been revised down to a doubling every 2.6 years. Figure 1 suggests that for Nvidia Tesla GPUs, the doubling is even slower.

Another interesting reason why energy efficiency may be relevant has to do with the real-world benchmark of the human brain, which is believed to have evolved with energy efficiency as a critical constraint. Although the human brain is obviously not designed for general computation, we are able to roughly estimate the number of computations that it performs, and its related energy efficiency. Although the error bars on this calculation are significant, the human brain is estimated to perform at about 1 PetaFLOPS while using only 20 watts (https://www.openphilanthropy.org/research/new-report-on-how-much-computational-power-it-takes-to-match-the-human-brain/). This works out to approximately 50 TeraFLOPS per watt. Strictly speaking, this makes the human brain less powerful than our most powerful supercomputers, but more energy efficient than them by a significant margin.

Note that this is actually within an order of magnitude of the Landauer Limit. Note also that the human brain is roughly two and a half orders of magnitude more efficient than the most efficient Nvidia Tesla GPUs as of 2022.

On a grander scope, the question of energy efficiency is also relevant to the question of the ideal long-term future. There is a scenario in Utilitarian moral philosophy known as the Utilitronium Shockwave, in which the universe is hypothetically converted into the densest possible computational matter, and happiness emulations are run on this hardware to theoretically maximize happiness. This scenario is occasionally conjured up as a challenge against Utilitarian moral philosophy, but it would look very different if the most computationally efficient form of matter already existed in the form of the human brain. In that case, the ideal future would correspond to an extraordinarily vast number of humans living excellent lives. Thus, if the human brain is in effect at the Landauer Limit in terms of energy efficiency, and the Landauer Limit holds against efforts towards reversible computing, we can argue in favour of this desirable, human-filled future.

In reality, due to entropy, it is energy that ultimately constrains the number of sentient entities that can populate the universe, rather than space, which is much more vast and largely empty.  So, energy efficiency would logically be much more critical than density of matter.

This also has implications for population ethics.  Assuming that entropy cannot be reversed, and the cost of living and existing requires converting some amount of usable energy into entropy, then there is a hard limit on the number of human beings that can be born into the universe.  Thus, more people born at this particular moment in time implies an equivalent reduction of possible people in the future.  This creates a tradeoff.  People born in the present have potentially vast value in terms of influencing the future, but they will likely live worse lives than those who are born into that probably better future.

Interesting philosophical implications aside, the shrinking gap between GPU efficiency and the human brain sets a potential timeline. Once this gap in efficiency is bridged, computers will theoretically be as energy efficient as human brains, and it should be possible at that point to emulate a human mind on hardware, such that you could essentially have a synthetic human that is as economical as a biological one. This is comparable to the Ems that the economist Robin Hanson describes in his book, The Age of Em. The possibility of duplicating copies of human minds comes with its own economic and social considerations.

So, how far away is this point? Given the trend observed with GPU efficiency growth, a doubling occurs about every three years. An order of magnitude is about 3.3 doublings, or roughly ten years, so the two and a half orders of magnitude separating existing GPUs from the human brain amount to about 8.3 doublings, or roughly twenty-five years. Thus, we can roughly anticipate this point to be reached around 2050. We can also expect to reach the Landauer Limit shortly thereafter.
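As a quick sanity check on that arithmetic, a minimal sketch:

```python
import math

doubling_years = 3.0   # observed efficiency doubling time for Tesla GPUs
gap_orders = 2.5       # brain (~50 TFLOPS/W) vs. 2022 GPUs, in orders of magnitude

doublings_needed = gap_orders * math.log2(10)  # about 8.3 doublings
years_needed = doublings_needed * doubling_years
print(f"{doublings_needed:.1f} doublings -> about {years_needed:.0f} years")
```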

Most AI safety timelines are much sooner than this, however, so it is likely that we will have to deal with aligning AGI before either the potential boost that could come from having synthetic human minds, or the potential barrier of the Landauer Limit slowing down AI capabilities development.

In terms of future research considerations, a logical next step would be to look at how quickly the overall power consumption of data centers is increasing, alongside the current growth rates of electricity production, to see to what extent they are sustainable and whether improvements to energy efficiency will be outpaced by demand. If so, that could act to slow the pace of machine learning research that relies on very large models trained with massive amounts of compute. This is in addition to other potential limits, such as the rate of data generation for large language models, which already depend on datasets comprising essentially the entire Internet.

Modern computation is not free. It requires available energy to be expended and converted to entropy. Barring radical new innovations like practical reversible computers, this has the potential to be a long-term limiting factor in the advancement of machine learning technologies that rely heavily on parallel processing accelerators like GPUs.

