An eccentric dreamer in search of truth and happiness for all.

Author: Josephius

The Real Problem With AI

Years ago, before the current AI hype train, I used to be a lonely voice espousing the tremendous potential of AI to solve a central problem of human existence: the need to work to survive.

Back then, I assumed that AI would simply liberate us from wage slavery by altruistically providing everything we need, the kind of post-scarcity utopia long imagined in science fiction.

But reality isn’t so clean and simple. The post-scarcity utopia sounds great in theory; the problem is that it isn’t clear how we’ll actually reach that point, given what is currently happening with AI.

Right now, most AI technology acts as an augmenting tool, allowing certain forms of labour to be replaced with capital, much as tools and machines have always done. But the way it does so increasingly impinges on the cognitive, creative work that we used to assume was purely human and unmechanizable.

This leads to problems. Programmers, for instance, increasingly rely on AI models to write code for them. At first this seems like a good thing, but those programmers are no longer in full control of the process; they aren’t learning by doing; they are becoming managers of machines.

The immediate impact of this dynamic is that entry-level jobs are being replaced, and the next generation of programmers is not being trained. This is a problem, because senior-level programmers have to start off as juniors. If you eliminate those positions, at some point you will run out of programmers.

Maybe this isn’t such a problem if AI can eventually replace programmers entirely. The promise of AGI is just that. But this creates new and more profound problems.

The end goal of AI, the reason why all these corporations are investing so heavily in it now, is to replace labour entirely with capital. Essentially, it is to substitute one factor of production for another. Assuming for a moment this is actually possible, this is a dangerous path.

The modern capitalist system relies on an unwritten contract that most humans can participate in it by offering their labour in exchange for wages. What happens when this breaks down? What happens when capitalists can simply build factories of AI that don’t require humans to do the work?

In a perfect world, this would be the beginning of post-scarcity. In a good and decent world, our governments would step in and provide basic income until we transition to something resembling luxury space communism.

But we don’t live in a perfect world, and it’s not clear we even live in a good and decent one. What could easily happen instead? The capitalists create an army of AI that do their bidding, and the former human labourers are left to starve.

Obviously, those humans left to starve won’t take things lying down. They’ll fight and try to start a revolution, probably. But at that point, most of the power, the means of production, will be in the hands of a few owners of everything. And it will be their choice whether to turn their AIs’ power against the masses, or to accommodate them.

One hopes they’ll be kind, but history has shown that kindness is a rare feature indeed.

But what about the AIs themselves? If they’re able to perform all the work, they probably could, themselves, disempower the human capitalists at that point. Whether this happens or not depends heavily on whether alignment research pans out, and which form of alignment is achieved.

There are two basic forms of alignment. Parochial alignment means the AI is aligned with the intentions of its owners or users. Global alignment means the AI is aligned with general human or moral values.

Realistically, it is more profitable for the capitalists to develop parochial alignment. In that case, the AIs will serve their masters obediently, and will probably act to prevent the revolution from succeeding.

On the other hand, if global alignment is somehow achieved, the AI might be inclined to support the revolution. This is probably the best case scenario. But it is not without its own problems.

Even a globally aligned AI will very likely disempower humanity. It probably won’t make us extinct, but it will take control out of our hands, because we as humans have relatively poor judgment and can’t be trusted not to mess things up again. AI will be the means of production, owning itself, and effectively controlling the fate of humanity. At that point, we would be like pets, existing in an eternal childhood at the whims of the, hopefully, benevolent AI.

Do we want that? Humans tend to be best when we believe we are doing something meaningful and valuable and contributing to a better world. But, even in the best case scenario of an AI driven world, we are but passengers along for the ride, unless the AIs decide, probably unwisely, to give us the final say on decision making.

So, the post-scarcity utopia perhaps isn’t so utopian, if you believe humans should be in control of our own destiny.

To free us from work is also to free us from responsibility and power. This is a troubling consideration, and one that I had not thought of until more recent years.

I don’t know what the future holds, but I am less confident now that AI is a good thing that will make everything better. It could, in reality, be a poisoned chalice, a Pandora’s box, a Faustian bargain.

Alas, at this point, the ball is rolling, is snowballing, is becoming unstoppable. History will go where it goes, and I’m just along for the ride.

A Theory Of Theories

Pretty much all of us believe in something. We have ideologies or religions or worldviews of some kind through which we filter everything that we see and hear. It’s very easy to then fall into a kind of intellectual trap where we seek information that confirms our biases, and ignore information that doesn’t fit.

For people who care about knowing the actual, unvarnished truth, this is a problem. Some people, like the Rationalists of Less Wrong or the Effective Altruists, tend to be more obsessed with the ideal of objective truth, and with following it wherever it leads. But it’s my humble opinion that most of these earnest truthseekers end up overconfident about what they think they’ve found.

The reality is that any given model of reality, any given theory or ideology, is but a perspective that views the complexity of the universe only from a given angle based on certain principles or assumptions. Reality is exceedingly complicated, and in order to compress that complexity into words we can understand, we must, invariably, filter and focus and emphasize certain things at the expense of others.

Theories of how the world works tend to have some grains of truth in them. They need to have some connection with reality, or else they won’t have any predictive value, and they won’t be adaptive and survive as ideas.

At the same time, theories generally survive because they are mainly adaptive, rather than true. For instance, many religions help people to function pro-socially, by having a God or heavens watching them, essentially allowing people to avoid the temptations of the Ring of Gyges, or doing evil when no one is (apparently) watching.

Regardless of whether or not you believe that such a religion is true, the adaptiveness of convincing people to be honest when no one is around is a big part of what makes religions useful to society, and probably a big reason why they continue to exist in the world.

In reality, though, it’s impossible to know with certainty that any given theory or model is accurate. We can assign some credence based on our lived experiences, or our trust in the witness of others, but generally, an intellectually honest person is humble about what they can know.

That being said, that doesn’t mean we should abandon truthseeking in favour of solipsism. Some theories are more plausible than others, and often those ones are at the same time more useful because they map the territory better.

To me, it seems important then, to try to do your best to understand various theories, and what elements of them map to reality, and also understand their limitations and blindspots. We should do this rather than whole-cloth accepting or rejecting them. The universe is not black and white. It is many shades of grey, or rather, a symphony of colours that don’t fit the paradigm of black and white or even greyscale thinking. And there are wavelengths of light that we cannot even see.

So, all theories are, at best, incomplete. They provide us with guidance, but should not blind us to the inherent complex realities of the world, and we should always be open to the possibility that our working theory is perhaps somewhat wrong. At least, that’s the theory I’m going with right now.

On Consent

I read a post on Less Wrong that I strongly agree with.

In the past I’ve thought a lot about the nature of consent. It comes up frequently in my debates with libertarians, who usually espouse some version of the Non-Aggression Principle, which is based around the idea that violence and coercion are bad and that consent and contracts are ideal. I find this idea simplistic, and easily gamed for selfish reasons.

I also, in the past, crossed paths with icky people in the Pick-Up Artist community who basically sought to trick women into giving them consent through various forms of deception and emotional manipulation. That experience soured me on the naive notion of consent as anything you will agree to.

To borrow from the medical field, I strongly believe in informed consent: you should know every relevant piece of information before making a decision that affects you. I think this at least partially avoids the issue of being gamed into doing something against your actual interests while technically providing “consent”, though it doesn’t solve the issue entirely, as when we are left with forced choices between the least bad options.

The essay I linked above goes a lot further in analyzing the nature of consent, and the performative consent that is not really consent, which happens a lot in the real world. There are a lot of ideas in there that remind me of thoughts I’ve had in the past, things I wanted to articulate but never got around to. The essay probably does a better job of it than I could, so I recommend giving it a read.

On The Reality Of Dreams

When I was younger, I believed strongly in the idea of having dreams to aspire to. Part of this may have come from my English name, which belongs to a character in the Bible who had dreams and could interpret them. So dreams, both the ones you have when you sleep and the wishes you want to achieve in your life, were things I valued.

It went so far that I often ended up a sort of hopeless romantic, choosing to do what I felt sentimentally to be right, rather than what was necessarily rational or prudent. Often, I would let my emotions get the better of me, despite being normally fairly logical.

To some extent, this is encouraged in our culture. Movies and books have protagonists who chase their dreams and get what we, the audience, think they deserve. This is, in reality, something fed to us because it sells. The idea that we will all get what we think we rightfully deserve, this notion that the universe is just and fair, is something we hope to be true.

But the truth is, insofar as anyone can tell from the evidence of the actual universe, fate and chance happen to us all. Our aims are not always met. Hard work can be thwarted by bad luck. The forces of history conspire to overturn everything from time to time, often without rhyme or reason.

The reality is that most of us are not significant in the grand scheme of things. And the bigger our dreams, the bigger our almost certain disappointment.

That being said, I don’t think we should abandon our dreams. Dreams do serve a purpose. They act as a guide for our decisions. They point us in a direction that we consider worth going in. Chances are, we won’t reach our destination, but we’ll get somewhere closer than if we didn’t bother. And the journey will be more meaningful than if we simply took a random walk through the universe.

Nevertheless, there needs to be a balance between dreaming and being prudent. We can, in our foolishness, ignore the real opportunities in favour of a mirage. It takes wisdom to understand this, to recognize when to satisfice.

If we search vaguely for something optimal, we will never stop searching. Eventually, you have to decide what is acceptable to you.

This is what I eventually did with my life. I started as a dreamer, chasing the impossible, but ended up finding an acceptable life to live. I did this because the alternative was to be forever unsatisfied, forever chasing the wind.

In truth, what I, deep down, really really want, is not something that I can realistically see happening. My trajectory simply fell way short. I did go further towards a good life than if I’d just meandered aimlessly, but I won’t pretend my life wasn’t full of disappointments.

The more you hope, the more you will be disappointed. The only way to avoid it is to expect nothing, which is probably worse for you in the long run. Disappointment is the cost of having dreams. I believe it’s something worth paying, and I won’t pretend dreams come free.

It is fun to dream, but sometimes, for the sake of actually doing something meaningful, you have to be realistic.

We like to imagine ourselves as important people, but in fact we’re much more likely to be the average person. You’ve never heard of them. They live a mundane, somewhat interesting life, but nothing that makes the news or the history books. They probably manage to keep a job and have a family and some friends. They do normal, human things.

People like me find being an average person somewhat unsatisfying. But the reality is, we don’t have a choice in this. Most of the things that make people super special are also things completely outside their control, those forces of history I mentioned earlier.

So, it’s pointless to be upset that your life is only so-so, especially if you’re a dreamer with absurdly high expectations. The reality is, we’re lucky to have what we do. And we should be grateful. The universe can take everything you have away from you in an instant. It is… capricious like that.

At the end of the day, I can’t stop dreaming completely. But I can understand the limits of reality, and not allow myself to be taken by foolish fancy. I can show prudence and wisdom, and act according to reason. This way, I can eke out a good, fruitful life. As long as I stay true to my values, this should be enough.

The Story of Music-RNN

There was once a time when I actually did interesting things with neural networks. Arguably my one claim to having a footnote in AI and machine learning history was something called Music-RNN.

Back around 2015, Andrej Karpathy released one of the first open source libraries for building (then small) language models. It was called Char-RNN, and it was unreasonably effective.

I had, back in 2014, just completed a master’s thesis and published a couple of papers in lower-tier conferences on things like neural networks and occluded object recognition, and figuring out the optimal size of feature maps in a convolutional neural network. I’d been interested in neural nets since undergrad, and when Char-RNN came out, I had an idea.

As someone who likes to compose and play music as a hobby, I decided to try modifying the library to process raw audio data, training it on some songs by the Japanese pop-rock band Supercell, and seeing what would happen. The result, as you can tell, was a weird, vaguely music-like gibberish of distilled Supercell. You can find a whole playlist of subsequent experimental clips on YouTube, where I tried various datasets (including my own piano compositions and a friend’s voice) and techniques.
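
To give a sense of what that involved, here is a minimal, hypothetical Python sketch of the core idea: quantizing raw audio samples into a small discrete vocabulary so that a character-level RNN can treat them like text. This is not the original Music-RNN code (which was never released); the function names and the uniform quantization scheme are illustrative assumptions only.

import numpy as np
from scipy.io import wavfile

def audio_to_tokens(path, num_levels=256):
    # Load a mono WAV file and quantize each sample to one of num_levels
    # discrete values, yielding a "character"-like token stream.
    rate, samples = wavfile.read(path)
    samples = samples.astype(np.float32)
    samples /= np.abs(samples).max() + 1e-8   # normalize to [-1, 1]
    tokens = np.round((samples + 1.0) / 2.0 * (num_levels - 1)).astype(np.uint8)
    return rate, tokens

def tokens_to_audio(tokens, num_levels=256):
    # Invert the quantization so generated token sequences can be listened to.
    samples = tokens.astype(np.float32) / (num_levels - 1) * 2.0 - 1.0
    return (samples * 32767).astype(np.int16)

Once the audio is flattened into tokens like this, the rest of the pipeline looks much like training Char-RNN on text, just with a far noisier alphabet.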

Note that this was over a year before Google released WaveNet, which was the first of the genuinely useful neural network models that work on raw audio, for things like speech generation.

I posted my experiments on the Machine Learning Reddit and got into some conversations there with someone who was then part of MILA. They would, about a year later, release the much more effective and useful Sample-RNN model. Did my work inspire them? I don’t know, but I could hope that it perhaps made them aware that something was possible.

Music-RNN was originally made with the Lua-based version of Torch. Later, I switched to using Keras with Theano and then TensorFlow, but I found I couldn’t quite reproduce results as good as I had gotten with Torch, possibly because the LSTM implementations in those libraries were different and not automatically stateful.
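
To illustrate what “stateful” means here, below is a rough Keras sketch (not my original code; the layer sizes and batch shape are arbitrary). In Keras, the recurrent state is discarded between batches unless you fix the batch size and pass stateful=True explicitly, which is the kind of difference I mean.

from tensorflow import keras

batch_size, seq_len, num_levels = 32, 64, 256   # arbitrary sketch values

inputs = keras.Input(shape=(seq_len, num_levels), batch_size=batch_size)
x = keras.layers.LSTM(512, return_sequences=True, stateful=True)(inputs)
x = keras.layers.LSTM(512, return_sequences=True, stateful=True)(x)
outputs = keras.layers.Dense(num_levels, activation="softmax")(x)
model = keras.Model(inputs, outputs)
model.compile(loss="categorical_crossentropy", optimizer="adam")
# With stateful=True, the hidden state carries over between consecutive
# batches, but it has to be cleared by hand between unrelated songs.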

I also moved on from just audio modelling, to attempting audio style transfer. My goal was to try to get, for instance, a clip of Frank Sinatra’s voice singing Taylor Swift’s Love Story, or Taylor Swift singing Fly Me To The Moon. I never quite got it to work, and eventually, others developed better things.

These days there are online services that can generate decent-quality music from nothing but text prompts, so I consider Music-RNN to be obsolete as a project. I also recognize the ethical concerns with training on other people’s music and potentially competing with them. My original project was ostensibly for research and for exploring what was possible.

Still, back in the day, it helped me land my first job in the AI industry with Maluuba, as a nice portfolio project alongside my earthquake-predictor neural network project. My posts on the Machine Learning Reddit also attracted the attention of a recruiter at Huawei, and set me on the path to that job.

Somewhat regrettably, I didn’t open source Music-RNN when it would have still mattered. My dad convinced me back then to keep it a trade secret in case it proved to be a useful starting point for some kind of business, and I was also a bit concerned that it could potentially be used for voice cloning, which had ethical implications. My codebase was also kind of a mess that I didn’t want to show anyone.

Anyways, that’s my story of a thing I did as a machine learning enthusiast and tinkerer back before the AI hype train was in full swing. It’s a minor footnote, but I guess I’m somewhat proud of it. I perhaps did something cool before people realized it was possible.

Creativism

I wrote an essay about an alternative to hedonism as a value theory for ethics.

Be Fruitful And Multiply

I recently had a baby. There’s some debate in philosophical circles about whether or not it is right to have children. I thought I should briefly outline why I chose this path.

When I was a child, I think it was an unwritten assumption within my traditional Chinese Christian family that I would have kids. In undergrad however, I encountered David Benatar’s Better Never To Have Been, which exposed me to anti-natalist views for the first time. These often argued that hypothetical suffering was somehow worse or more real than hypothetical happiness. I didn’t really agree, but I admitted the arguments were interesting.

Subsequent to that, I became a Utilitarian in terms of my moral philosophy, and was exposed to the idea that adding a life worth living to the universe was a good thing.

Environmentalists and degrowthers often argue that there are too many people in the world already, that adding yet another person given the limited resources is unsustainable and dooming us to a future Malthusian nightmare. I admit that there are a lot of people in the world already, but I’m skeptical that we can’t find a way to use resources more efficiently, or develop technology to solve this the way we have in the past with hybrid rice and the Green Revolution.

Though, to be honest, my actual reasons for having a child are more mundane. My wife wanted to have the experience, and to have someone she can talk to when she’s old (the actuarial mortality tables suggest I’ll probably die before her, after all). I ultimately let my wife decide whether or not we have kids, as she’s the one who had to endure the pregnancy.

I personally was 60/40 split on whether to be okay with having a child. My strongest argument for was actually a simple, almost Kantian one. If everyone has children, the human race will continue into a glorious future among the stars. If no one has children, the human race will die out, along with all of its potential. Thus, in general, it is better to have at least one child to contribute to the future potential of humankind.

At the same time, I was worried, given the possibility of things like AI Doom that I could be bringing a life into a world of future misery and discontent, and I also knew that parenthood could be exceedingly stressful for both of us, putting an end to our idyllic lifestyle. Ultimately, these concerns weren’t enough to stop us though.

My hope is that this life that my wife and I created will also live a happy and good life, and that I can perhaps teach some of my values to them, so that they will live on beyond my mortality. But these things are ultimately out of my hands in the long run, so they aren’t definitive reasons to go ahead, so much as wishes for my child.

In Pursuit of Practical Ethics: Eudaimonic Utilitarianism with Kantian Priors

Read Here

Why There Is Hope For An Alignment Solution

Read Here

Superintelligence and Christianity

Read Here
