
The Story of Music-RNN

There was once a time when I actually did interesting things with neural networks. Arguably my one claim to a footnote in AI and machine learning history was something called Music-RNN.

Back around 2015, Andrej Karpathy released one of the first open source libraries for building (then small) language models. It was called Char-RNN, and it was unreasonably effective.

I had, back in 2014, just completed a master’s thesis and published a couple of papers in lower-tier conferences on topics like neural networks for occluded object recognition, and figuring out the optimal size of feature maps in a convolutional neural network. I’d been interested in neural nets since undergrad, and when Char-RNN came out, I had an idea.

As someone who likes to compose and play music as a hobby, I decided to try modifying the library to process raw audio data, train it on some songs by the Japanese pop-rock band Supercell, and see what would happen. The result was a weird, vaguely music-like gibberish of distilled Supercell. You can find a whole playlist of subsequent experimental clips on YouTube, where I tried various datasets (including my own piano compositions and a friend’s voice) and techniques.
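The original code was never released (more on that below), but the core idea was simple: treat raw audio the way char-rnn treats text. Quantize each audio sample into one of 256 discrete levels so it becomes a “character,” then train an LSTM to predict the next sample. Here’s a minimal sketch of that idea in modern PyTorch; the mu-law encoding, layer sizes, and training details are my illustrative assumptions, not the original implementation.

```python
# Sketch: char-rnn-style next-sample prediction on raw audio.
# Assumptions (not the original code): mu-law quantization to 256 levels,
# a 2-layer LSTM, and toy data standing in for a real song.
import numpy as np
import torch
import torch.nn as nn

def mu_law_encode(audio, mu=255):
    """Map float audio in [-1, 1] to 256 discrete levels (the 'vocabulary')."""
    audio = np.clip(audio, -1.0, 1.0)
    compressed = np.sign(audio) * np.log1p(mu * np.abs(audio)) / np.log1p(mu)
    return ((compressed + 1) / 2 * mu).astype(np.int64)

class SampleLevelRNN(nn.Module):
    def __init__(self, vocab=256, embed=64, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed)
        self.lstm = nn.LSTM(embed, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state

model = SampleLevelRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy stand-in for a real waveform; shifting by one sample gives input/target pairs.
audio = np.sin(np.linspace(0, 800.0, 16000))
codes = torch.from_numpy(mu_law_encode(audio))
x, y = codes[:-1].unsqueeze(0), codes[1:].unsqueeze(0)

optimizer.zero_grad()
logits, _ = model(x)
loss = loss_fn(logits.reshape(-1, 256), y.reshape(-1))
loss.backward()
optimizer.step()
```

Generation then works like char-rnn sampling: feed the model a seed, sample from the softmax over the 256 levels, decode the mu-law value back to a waveform sample, and repeat.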

Note that this was over a year before Google released WaveNet, the first genuinely useful raw-audio neural net model for tasks like speech generation.

I posted my experiments on the Machine Learning Reddit and got into some conversations there with someone who was then part of MILA. They would, about a year later, release the much more effective and useful SampleRNN model. Did my work inspire them? I don’t know, but I’d like to hope it at least made them aware that something like this was possible.

Music-RNN was originally built with the Lua-based version of Torch. Later, I switched to Keras with Theano and then TensorFlow, but I found I couldn’t quite reproduce the results I’d had with Torch, possibly because the LSTM implementations in those libraries were different, and not stateful by default.
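For the unfamiliar: a stateful LSTM carries its hidden state across batches, so the model can learn dependencies longer than any single training chunk, which matters a lot for audio. In Torch this came more or less for free; in Keras 2-era code (what I would have been using), you had to opt in explicitly. A sketch of the difference, with illustrative sizes:

```python
# Keras 2-era sketch: by default an LSTM resets its state after every batch.
# stateful=True keeps it, but requires a fixed batch size and training chunks
# fed in sequence order. All sizes here are illustrative.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.LSTM(512, stateful=True, return_sequences=True,
                batch_input_shape=(32, 64, 256)),  # (batch, timesteps, features)
    layers.Dense(256, activation="softmax"),
])

# ...train on consecutive chunks of each long sequence...
model.reset_states()  # carried-over state persists until cleared manually
```

Forget to structure the batches this way (or to reset state between songs) and the model quietly trains on much shorter effective contexts, which could plausibly explain the gap I saw.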

I also moved on from plain audio modelling to attempting audio style transfer. My goal was to get, for instance, a clip of Frank Sinatra’s voice singing Taylor Swift’s Love Story, or Taylor Swift singing Fly Me To The Moon. I never quite got it to work, and eventually others developed better things.

These days there are online services that can generate decent-quality music from text prompts alone, so I consider Music-RNN obsolete as a project. I also recognize the ethical concerns with training on other people’s music, and potentially competing with them. My original project was ostensibly for research and exploring what was possible.

Still, back in the day it helped me land my first job in the AI industry, at Maluuba, serving as a nice portfolio project alongside my earthquake-predictor neural network. My posts on the Machine Learning Reddit also attracted the attention of a recruiter at Huawei, which set me on the path to that job.

Somewhat regrettably, I didn’t open source Music-RNN when it would have still mattered. My dad convinced me at the time to keep it a trade secret in case it proved a useful starting point for some kind of business, and I was also a bit concerned that it could be used for voice cloning, which had ethical implications. My codebase was also kind of a mess that I didn’t want to show anyone.

Anyways, that’s my story of a thing I did as a machine learning enthusiast and tinkerer back before the AI hype train was in full swing. It’s a minor footnote, but I guess I’m somewhat proud of it. I perhaps did something cool before people realized it was possible.

Be Fruitful And Multiply

I recently had a baby. There’s some debate in philosophical circles about whether or not it is right to have children, so I thought I should briefly outline why I chose this path.

When I was a child, I think it was an unwritten assumption within my traditional Chinese Christian family that I would have kids. In undergrad however, I encountered David Benatar’s Better Never To Have Been, which exposed me to anti-natalist views for the first time. These often argued that hypothetical suffering was somehow worse or more real than hypothetical happiness. I didn’t really agree, but I admitted the arguments were interesting.

After that, I became a Utilitarian in my moral philosophy, and was exposed to the idea that adding a life worth living to the universe is a good thing.

Environmentalists and degrowthers often argue that there are too many people in the world already, that adding yet another person given the limited resources is unsustainable and dooming us to a future Malthusian nightmare. I admit that there are a lot of people in the world already, but I’m skeptical that we can’t find a way to use resources more efficiently, or develop technology to solve this the way we have in the past with hybrid rice and the Green Revolution.

Though, to be honest, my actual reasons for having a child are more mundane. My wife wanted to have the experience, and to have someone she can talk to when she’s old (actuarial mortality tables suggest I’ll probably die before her, after all). I ultimately let my wife decide whether or not we’d have kids, as she’s the one who had to endure the pregnancy.

Personally, I was about 60/40 on having a child. My strongest argument in favour was actually a simple, almost Kantian one: if everyone has children, the human race will continue into a glorious future among the stars; if no one has children, the human race will die out, along with all of its potential. Thus, in general, it is better to have at least one child to contribute to the future potential of humankind.

At the same time, I was worried, given the possibility of things like AI Doom that I could be bringing a life into a world of future misery and discontent, and I also knew that parenthood could be exceedingly stressful for both of us, putting an end to our idyllic lifestyle. Ultimately, these concerns weren’t enough to stop us though.

My hope is that this new life my wife and I created will be a happy and good one, and that I can perhaps pass some of my values on to my child, so those values live on beyond my mortality. But these things are ultimately out of my hands in the long run, so they aren’t definitive reasons to go ahead so much as wishes for my child.

On Infatuation

Where to start. When I was younger, I had a tendency to become infatuated with one particular girl at any given time. Three such infatuations in my life basically (and I’m only slightly exaggerating here) destroyed me for years.

The problem with infatuations, particularly of the unrequited-love kind, is that they are fundamentally unfair to everyone involved. To you, the obsessed: you lose all sense of perspective and feel powerless against the pull of this girl around whom all your thoughts and feelings now orbit. To the beloved: well, your obsessive attention is just creepy if she finds out about it. Though perhaps you’re like me and managed to somehow be simultaneously a tsundere and a yandere; both are very unhealthy archetypes, and the combination is just bad. To other people: you are devoting absurd amounts of effort and attention to one girl, and your other platonic relationships suffer as a result.

Infatuations are fundamentally unhealthy. Even if she did reciprocate, the power dynamics in the relationship would be completely unbalanced. She would have all the power, and if she is a decent person, that’s not a comfortable position to be in. It takes emotional maturity to recognize that a good, healthy relationship respects boundaries and strives towards an equality of power.

Infatuations of this type tend to stem from admiring someone from afar without actually getting to know them well enough to recognize that their little foibles are actually serious flaws that they need to work on. They tend to create unrealistic impressions that put the girl on a pedestal and place her in an impossible position with expectations she cannot possibly meet in real life. This is seriously not the kind of pressure you should place on anybody, much less the girl you like.

Having said all that, I managed to become infatuated three times: once in high school, once in undergrad, and once in grad school. The first two lasted until the next, and the last one clung to me for more than a decade, even through actual relationships I had with other girls. In some sense they all left a residual impression on me. I still carry feelings that I can sometimes access when I reminisce about the past; useless emotions that I don’t know what to do with, so I lock them in a metaphorical box in the deepest recesses of my soul.

For the record, I’m married now and have a child. For all intents and purposes, these things should best be forgotten. And yet, I’m writing about it now. I guess this is yet another attempt at catharsis.

With hindsight, what I truly regret is that I allowed myself to sacrifice cherished friendships with girls I actually cared about on the altar of infatuation. It prevented me from seeing things clearly, from acting reasonably, from being normal and treating these people like regular human beings rather than idols, or objects of fear.

The pattern that emerged was basically this: I’d meet the girl; develop a crush that would explode into infatuation and unrequited love; alienate her with my chaotic and counterproductive behaviour (alternating between extreme, obvious avoidance and extreme, unwanted attention); and after she stopped talking to me, I’d get deeply depressed and probably suicidal at points. Rinse and repeat. Needless to say, my studies during these times suffered immensely. My other friendships and relationships suffered. I was useless and pathetic and generally insufferable.

My advice to you, dear reader, is to avoid infatuations like the plague. They kill the friendships you care most about. They feel great at first, but are a poisoned chalice. You are better off not allowing them to happen. I recognized this was a problem after the first time. And yet it happened again. And again. Each time I swore I’d do things differently, and to be honest, things did play out slightly differently each time. But at the end of the day, the overall result was about the same.

It took a certain realization that my whole hopeless romantic dreamer shtick was a big part of the problem. It took realizing that I was exceedingly unrealistic and foolish. It took recognizing that I was sacrificing actual potential relationships on this altar of my infatuation. It took telling a beautiful girl I was dating that I wasn’t in love with her because I still had feelings for someone else, and seeing her cry, to realize how messed up it all was.

It’s easier said than done, but fight the urge toward infatuation. If you’re the type to develop it, fight it with all your strength, for the sake of your would-be beloved. Recognize the opportunity cost of lavishing your devotion and loyalty on a girl who isn’t interested, while ignoring all the others who actually like you. Be willing instead to satisfice, and choose someone you can actually be happy with in a healthy, reasonable relationship.

Reflections on Working at Huawei

Huawei has recently been in the news with the release of the Mate 60 Pro and its 7nm chip. The western news media seems surprised that this was possible, but my experience working at Huawei was that the people there were exceptionally talented, competent, technically savvy experts with a chip on their shoulder and the resources to make things happen.

My story with Huawei starts with a coincidence. Before I worked there, I briefly worked for a startup called Maluuba, which was bought by Microsoft in 2017. I worked there for four months in 2016, and on the day of my on-site interview with Maluuba, a group from Huawei was visiting the company. That was about the first time I heard the name. I didn’t think much of it at the time. Just another Chinese company with an interest in the AI tech that Maluuba was working on.

Fast-forward a year to 2017. I was again unemployed and looking for work. Around this time I posted a bunch on the Machine Learning Reddit about my projects, like the Music-RNN, as well as offering advice to other ML practitioners. At some point these posts attracted the attention of a recruiter at Huawei, who emailed me through LinkedIn and asked if I’d be interested in interviewing.

My first interview was with the head of the self-driving car team at the Markham, Ontario research campus. Despite our shared background in cognitive science, I flunked the interview when I failed to explain what the gates of an LSTM were. Back then I had a spotty understanding of those kinds of details, something I would make up for later.
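For the record, the answer I couldn’t produce that day: a standard LSTM cell has three gates, each a sigmoid over the current input and the previous hidden state, controlling what the cell state forgets, what new information enters it, and what gets exposed as output. In the usual notation:

$$
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{(forget gate)} \\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{(input gate)} \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{(output gate)} \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
$$

where \(\sigma\) is the sigmoid and \(\odot\) is element-wise multiplication.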

I also asked the team leader, a former University of Toronto professor, why he was working at Huawei. He mentioned something about loyalty to his motherland. This was one of my first indications that Huawei wasn’t just any old tech company.

Later I was invited to a second interview with a different team. The team leader in this case was much more interested in my experience using GPUs to train models, as I had done at Maluuba. Surprisingly, there were no more tests or hoops to jump through; we had a cordial conversation and I was hired.

I was initially a research scientist on the NLP team of what was originally the Carrier Software team. I didn’t ask why a team that worked on AI stuff was named that, because at the time I was just really happy to have a job again. My first months at Huawei were on a contract with something called Quantum. Later, after proving myself, I was given a full-time permanent role.

Initially on the NLP team I did some cursory explorations, showing my boss things like how Char-RNN could be combined with FastText word vectors to train language models on Chinese novels like Romance of the Three Kingdoms, Dream of the Red Chamber, and The Three-Body Problem, generating text that resembled them. It was the equivalent of a machine learning parlor trick at the time, but it foreshadowed the later development of Large Language Models.

Later we started working on something more serious: a Question Answering system that connected a Natural Language Understanding component to a Knowledge Graph. It could answer questions like: “Does the iPhone 7 come in blue?” This project was probably the high point of my work at Huawei. It was right up my alley, having done similar things at Maluuba, and the people on my team were mostly capable PhDs who were easy to get along with.
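To give a flavour of the general shape of such a system (a toy illustration only, not Huawei’s actual code, with invented data and rules): the NLU side maps a question to an entity and a relation, and the Knowledge Graph side looks up the stored facts.

```python
# Toy sketch of NLU + Knowledge Graph question answering.
# The graph, the parsing rules, and the data are all invented for illustration.
KG = {
    ("iphone 7", "colors"): ["black", "silver", "gold", "rose gold", "red"],
    ("iphone 7", "storage"): ["32GB", "128GB", "256GB"],
}

def understand(question):
    """Stand-in for the NLU component: map a question to an (entity, relation) query."""
    q = question.lower()
    entity = "iphone 7" if "iphone 7" in q else None
    relation = "colors" if ("color" in q or "come in" in q) else None
    return entity, relation

def answer(question):
    entity, relation = understand(question)
    values = KG.get((entity, relation), [])
    if not values:
        return "I don't know."
    mentioned = [v for v in values if v in question.lower()]
    if mentioned:
        return f"Yes, it comes in {mentioned[0]}."
    return "No. Available options: " + ", ".join(values) + "."

print(answer("Does the iPhone 7 come in blue?"))  # No. Available options: black, ...
print(answer("Does the iPhone 7 come in red?"))   # Yes, it comes in red.
```

The real system, of course, used trained NLU models and a proper graph store rather than keyword matching and a dict, but the pipeline had roughly this shape.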

As an aside, at one point I remember being asked to listen in on a call between us and a team in Moscow consisting of a professor and his grad student. They were competing with us to come up with an effective Natural Language Understanding system, and they made the mistake of relying on synthetic data to train their model. The result was a model that achieved 100% accuracy on their synthetic test data but failed miserably on real-world data, which is something I had predicted might happen.

Anyways, we eventually put together the Question Answering system and sent it over to HQ in Shenzhen. After that I heard basically nothing about what they did, if anything, with it. An intern would later claim that my boss told her that they were using it, but I was not told this, and got no follow-up.

This brings me to the next odd thing about working at Huawei. As I learned at the orientation session when I transitioned to full-time permanent, there’s something roughly translated as “grayscale” in the operating practices of Huawei. In essence, you are only told what you need to know to do your work, and a lot of details are left ambiguous.

There’s also something called “horse-race culture”, which involves different teams within the company competing with one another to build the same thing. It was something I always found seemingly inefficient, although I suppose if you have the resources, it can make sense to use market-like forces to drive things.

Anyways, after a while my boss, who came from a Human-Computer Interaction (HCI) background, secured funding to add an HCI team to the department. This involved disbanding the NLP team and splitting its people between the new HCI team and the Computer Vision team, which had been the department’s other team from the start. I ended up on the CV team.

The department, by the way, had been renamed the Big Data Analysis Lab for a while, and then eventually became a part of Noah’s Ark Lab — Canada.

So, my initial work on the CV team involved Video Description, a kind of hybrid of NLP and CV work. That project was eventually shelved, and I worked on an audio classifier until I had a falling out with my team leader that I won’t go into in much detail here. Suffice it to say, my old boss, by then director of the department, protected me to an extent from the wrath of my team leader, and switched me to the HCI team for a while. By then, though, I felt disillusioned with working at Huawei, and so in late 2019 I quietly asked for a buyout package and left, like many others who disliked the team leader and his style of leadership.

In any case, that probably isn’t too relevant to the news about Huawei. The media seem surprised that Huawei was able to get where it is, but I can offer an example of the mindset of the people there. Once, when I was on lunch break, an older gentleman sat down across from me at the table and started chatting with me. We got onto the subject of HiSilicon and its chips. He told me that the first generation of chips was, to put it succinctly, crap. So was the second generation, and the third. But each generation got slightly better, and they kept at it until the latest generation was in state-of-the-art phones.

Working at Huawei in general requires a certain mindset. There’s controversy with this company, and even though they pay exceptionally well, you also have to be willing to look the other way about the whole situation, to be willing to work at a place with a mixed reputation. Surprisingly perhaps, most of the people working there took pride in it. They either saw themselves as fighting a good fight for an underdog against something like the American imperialist complex, or they were exceedingly grateful to be able to do such cool work on such cool things. I was the latter. It was one of my few chances to do cool things with AI, and I took it.

The other thing is that Chinese nationals are very proud of Huawei. When I mentioned working at Huawei to westerners, I was almost apologetic. When I mentioned working at Huawei to Chinese nationals, they were usually very impressed. To them, Huawei is a champion of industry that shows that China can compete on the world stage. They generally don’t believe that a lot of the more controversial concerns, like the Uyghur situation, are even happening, or at least that they’ve been exaggerated by western propaganda.

Now, I’ve hinted at some strange things at Huawei. I’ll admit that there were a few incidents that circumstantially made me wonder about connections between Huawei and the Chinese government or military. Probably the westerners in the audience are rolling their eyes at my naivety: of course Huawei is an arm of the People’s Republic, and I shouldn’t have worked at a company that apparently hacked and stole its way to success. But the reality is that in my entire time at the company, I never saw anything that suggested backdoors or other obvious smoking guns. Then again, a lowly research scientist wouldn’t have been given the chance to find out about such things even if they were true.

I do know that at one point my boss asked how feasible a project to use NLP to automatically censor questionable mentions of Taiwan in social media would be, ostensibly to replace the crude keyword filters then in use with something able to tell the difference between an innocuous mention and a more questionable argument. I was immediately opposed to the ethics of the idea, and he dropped it right away.

I also know that some people on the HCI team were working on a project where they had diagrams of the silhouettes of a fighter jet pasted on the wall. I got the impression at the time they were working on gesture recognition controls for aircraft, but I’m actually not sure what they were doing.

Other than that, my time at Huawei seemed like that of a fairly normal tech company, one that was on the leading edge of a number of technologies and made up of quite capable and talented researchers.

So, when I hear about Huawei in western news, I tend to be jarred by the adversarial tone. The people working at Huawei are not mysterious villains. They are normal people trying to make a living. They have families and stories and make compromises with reality to hold a decent job. The geopolitics of Huawei tend to ignore all that though.

In the end, I don’t regret working there. It is highly unlikely anything I worked on was used for evil (or good, for that matter). Most of my projects were exploratory and probably didn’t lead to notable products anyway. But I had a chance to do very cool research work, and so I look back on that time fondly still, albeit tinged with uncertainty about whether, as a loyal Canadian citizen, I should have been there at all given the geopolitics.

Ultimately, the grand games of world history are likely to be beyond the wits of the average worker. I can only know that I had no other job offers on the table when I took the Huawei one, and it seems like it was the high point of my career so far. Nevertheless, I have mixed feelings, and I guess that can’t be helped.

Welcome To The World

Welcome to the world little one.
Welcome to a universe of dreams.
Your life is just beginning.
And your future is the stars.

Hello, how are you today?
Are you happy?
Can you hear me?
What are you dreaming about?

You are the culmination of many things.
Of the wishes of ancestors who toiled in the past.
Of the love between two silly cats.
And of mysterious fates that made you unique.

Your name is a famous world leader from history.
The wise sage who led a bygone empire.
A philosopher king if there ever was one.
Someone we hope you’ll aspire to.

The world today is not kind.
But I’ll do my best to protect you from the darkness.
So that your light will awaken the stars.
And you can be all that you can.

Welcome to the world little one.
The world is dreams.
Let your stay be brightness to all.
And may you feel the love that I do.

The True Nature of Reality

It’s something we tend to grow up always assuming is real. This reality, this universe that we see and hear around us, is always with us, ever present. But sometimes there are doubts.

There’s a thing in philosophy called the Simulation Argument. It posits that, given that our descendants will likely develop the technology to simulate reality someday, the odds are quite high that our apparent world is one of these simulations, rather than the original world. It’s a probabilistic argument, based on estimated odds of there being many such simulations.

A long time ago, I had an interesting experience. Back then, as a Christian, I wrestled with my faith and was at times mad at God for the apparent evil in this world. At one point, in a moment of anger, I took a pocket knife and made a gash in a world map on the wall of my bedroom. I then went on a camping trip, during which I heard on the news that Russia had invaded Georgia. Upon returning, I found that the gash went straight through the border between Russia and Georgia. I’d made that gash exactly six days before the invasion.

Then there’s the memory I have of a “glitch in the Matrix”, so to speak. Many years ago, I was in a bad place mentally and emotionally, and I tried to open a second-floor window to get out of the house, which probably would have ended badly were it not for a momentary change: the window, which had a crank to open it, suddenly became a solid frame with no crank or any way to open it. It lasted only a split second, but just long enough for me to panic and throw my body against the frame, making such a racket as to attract the attention of someone who could stop me and calm me down.

I still remember this incident. At the time I thought it was some intervention by God or time travellers/aliens/simulators or some other benevolent higher power. Obviously I have nothing except my memory of this. There’s no real reason for you to believe my testimony. But it’s one reason among many why I believe the world is not as it seems.

Consider for a moment the case of the total solar eclipse. It’s a convenient thing to have occur, because it allowed Arthur Eddington’s 1919 expedition to confirm Einstein’s theory of general relativity by measuring the bending of starlight around the sun, an effect only observable during an eclipse. But total solar eclipses don’t have to exist. They only happen because the sun is approximately 400 times the diameter of the moon and also approximately 400 times as far from the Earth: exactly the right ratio of size and distance for the two discs to appear the same size in our sky. Furthermore, due to gradual changes in the moon’s orbit, this coincidence only holds for a cosmologically short window of a few hundred million years, which happens to coincide with the development of human civilization.
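The arithmetic is easy to check with the small-angle approximation, using mean figures (the moon’s apparent size actually varies by a few percent over its orbit, which is why annular eclipses also occur):

$$
\theta \approx \frac{D}{d}: \qquad
\theta_{\text{sun}} \approx \frac{1.39 \times 10^{6}\,\mathrm{km}}{1.496 \times 10^{8}\,\mathrm{km}} \approx 0.53^{\circ}, \qquad
\theta_{\text{moon}} \approx \frac{3.47 \times 10^{3}\,\mathrm{km}}{3.84 \times 10^{5}\,\mathrm{km}} \approx 0.52^{\circ}
$$

The two discs subtend almost exactly the same angle, so the moon can just barely cover the sun’s photosphere while leaving the corona visible.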

Note that this coincidence is immune to the Anthropic Principle because it is not essential to human existence. It is merely a useful coincidence.

Another fun coincidence is the names of the Arctic and Antarctic. The Arctic is named after the bear constellations Ursa Major and Ursa Minor, which mark the northern sky; “Antarctic” literally means “opposite of the Arctic”. Coincidentally, polar bears can be found in the Arctic, but no species of bear is found in the Antarctic.

There are probably many more interesting coincidences like this, little Easter eggs that have been left for us to notice.

The true nature of our reality is probably something beyond our comprehension. There are hints at it however, that make me wonder about the implications. So, I advise you to keep an open mind about the possible.

A Quick Note About Coffee

Because changes in the dosages of my medications can only happen once every month or so at most, a strategy for managing my mood and energy levels has been to supplement with the caffeine in coffee. My wife got us an espresso machine a while back, so I’m able to pull shots when needed.

Initially, my espresso dosing schedule was front-loaded: 2 to 3 shots in the morning, followed by single-shot top-ups at noon and in the late afternoon. This was based on the assumption that I wanted a consistent level of caffeine in the bloodstream, avoiding peaks and high variance, since the half-life of caffeine is about 5 hours. This sort of worked for a while, but I noticed that I still crashed pretty hard in the evenings.

Recently, after reading more, I learned that adenosine levels actually rise throughout the day, so holding caffeine at a fixed level will likely be too much early in the day and too little later on, assuming the goal is to offset rising sleepiness. A more practical dosing schedule is probably an even distribution: something like a double shot in the morning, followed by a double shot at noon. Initial experiments suggest this works better and keeps me from crashing as much in the evening, although it’s still early in my testing.
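For the curious, the comparison is easy to model with simple exponential decay. This is a back-of-the-envelope sketch, not medical advice; the 75 mg per shot is a rough assumption, since actual caffeine content varies a lot by bean and pull:

```python
# Compare blood caffeine over the day under the two schedules described above,
# modelling only first-order decay with a 5-hour half-life. The mg-per-shot
# figure is a rough assumption; absorption time is ignored for simplicity.
import math

HALF_LIFE_H = 5.0
K = math.log(2) / HALF_LIFE_H
MG_PER_SHOT = 75.0

def caffeine_mg(doses, t):
    """Caffeine (mg) remaining at hour t, given (hour, shots) doses taken so far."""
    return sum(shots * MG_PER_SHOT * math.exp(-K * (t - hour))
               for hour, shots in doses if t >= hour)

front_loaded = [(8, 3), (12, 1), (17, 1)]  # old: triple at 8am, singles at noon, 5pm
even_spread  = [(8, 2), (12, 2)]           # new: doubles at 8am and noon

for t in range(8, 24, 2):
    print(f"{t:02d}:00  front-loaded {caffeine_mg(front_loaded, t):5.1f} mg"
          f" | even {caffeine_mg(even_spread, t):5.1f} mg")
```

Under this model, the front-loaded curve peaks high in the morning and leaves a larger residue at bedtime, while the even schedule trades the morning peak for a steadier decline, which matches my experience so far.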

On Struggling With Mental Illness

One of the more challenging things I’ve experienced in my life has been dealing with the complexities of mental illness and the struggle to live a normal life despite it. Despite my best efforts, I find myself infuriatingly inconsistent due to a mood disorder that means I’m occasionally overly energetic, and other times fatigued. In either state, I find it difficult to focus on being productive, either because I’m distracted by a rush of thoughts, or alternatively, too tired to do anything. The midway state between these two extremes is a thin region where I can be productive and effective.

A lot of people don’t really get the extent to which our moods and behaviours can be shaped by something as simple as a little blue pill. For me, the cocktail of medications that allows me to function is an added cost of living, but it also comes with the danger that an adjustment can overcompensate and swing me into the opposite state from the one I was in before. It becomes rather infuriating how easily the balance can be broken, and how obviously I am not in control of my own mental condition.

It’s bothersome. I want to be effective, to be able to productively do the things that I want to do. But often, during periods of adjustment, I find myself struggling to do basic things. When things are working right, I can be quite productive, as my first two and a half years at Huawei showed. But then things can go wrong, and I can find myself stuck in the mud, worried that I may never be able to function well again.

I can blame the illness for a lot of things. Lost friends, lost time, lost hope, a sidelined career, and so on. But at the same time I hesitate to. I hesitate to admit to the public that I have this illness, because of the severe stigma that is attached to it. And I don’t want it to be an excuse for my mistakes. But at the same time, it is the reason why I sometimes wasn’t myself, why I can be maddeningly inconsistent.

It becomes a struggle because, in part, I want to hide these facts from people, so they don’t look down on me, so they don’t decide I’m too much of a risk to employ, things like that. My parents always tell me to keep it a secret: it’s not something other people understand, and it hurts my chances of getting or keeping a job. But at the same time, if I don’t explain why things are happening, how can I expect to keep the job anyway?

We’re expected to be our best, day in, and day out, but for me, that’s impossible. It’s impossible for me to be 100% all the time, and moreover, there are days when I’ll just be useless. How am I supposed to work with this? What do people think I should do?

It’s just bothersome. The world expects us to be striving and achieving all the time. But I literally cannot be that way. Do I belong in this world? Or am I just too messed up to survive?

These are some thoughts I sometimes have. The kind of thoughts that on better days the medications take away. But sometimes they come back. And sometimes I’m trapped by my own mind in a seemingly hopeless situation. At least, hopeless for someone who wants to be effective and productive and to contribute meaningfully to the world.

So, that’s a small glimpse of the struggle. There’s a lot more that I still don’t think is wise to explain. Because most people don’t particularly understand. But hopefully, if you care to, this post helps you to understand a bit of my experience, and why I am the way that I am. Thank you for your time.

On Altruism

One thing I’ve learned from observing people and society is that the vast majority of folks are egoistic, or selfish. They care about their own happiness and are at best indifferent to the happiness of others, unless they have some kind of relationship with a person, in which case they care about that person’s happiness insofar as keeping them happy affects their own. This is the natural, neutral state of affairs. It is unnatural to care about other people’s happiness for their own sake, as ends in themselves. We call such unnatural behaviour “altruism”, and tend to glorify it in narratives while avoiding actually being that way in reality.

In an ideal world, all people would be altruistic. They would equally value their own happiness and the happiness of each other person because we are all persons deserving happiness. Instead, reality is mostly a world of selfishness. To me, the root of all evil is this egoism, this lack of concern for the well-being of others that is the norm in our society.

I say this knowing that I am a hypocrite. I say this as someone who tries to be altruistic at times, but is very inconsistent with the application of the principles that it logically entails. If I were a saint, I would have sold everything I didn’t need and donated at least half my gross income to charities that help the global poor. I would be vegan. I would probably not live in a nice house and own a car (a hybrid at least) and be busy living a pleasant life with my family.

Instead, I donate a small fraction of my gross income to charity and call it a day. I occasionally make the effort to help my friends and family when they are in obvious need. I still eat meat and play computer games and own a grand piano that I don’t need.

The reality is that altruism is hard. Doing the right thing for the right reasons requires sacrificing our selfish desires. Most people don’t even begin to bother. In their world view, acts of kindness and altruism are seen with suspicion, as having ulterior motives of virtue signalling or guilt tripping or something else. In such a world, we are not rewarded for doing good, but punished. The incentives favour egoism. That’s why the world runs on capitalism after all.

And so, the world is the way it is. People largely don’t do the right thing, and often don’t even realize there is a right thing to do. Most of them don’t care. There are some eight billion people in this world right now, and most likely only a tiny handful care that you or I even exist, much less act consistently towards our well-being and happiness.

So, why am I bothering to explain this to you? Because I think we can do better. Not be perfect, but better. We can do more to try to care about others and make the effort to make the world a better place. I believe I do this with my modest donations to charity, and my acts of kindness towards friends and strangers alike. These are small victories for goodness and justice and should be celebrated, even if in the end we fall short of being saints.

In the end, the direction you go in is more important than the magnitude of the step you take. Many small steps in the right direction will get you to where you want to be eventually. Conversely, if your direction is wrong, then bigger steps aren’t always better.

On Artificial Intelligence

In the interest of further explaining my considerations in pursuing a career in AI, I figure it makes sense to lay out a few things.

When I was very young, I watched a black-and-white movie in which a mad scientist somehow replaced a human character with a robot. At the time I actually thought the human character had been transformed into the robot, which was terrifying to me. This created, in my childish mind, an irrational fear of robots that made me avoid playing with overtly robot-like toys, at least while I was a toddler.

Eventually I grew out of that fear. When I was older and studying computer science at Queen’s University, I became interested in the concept of neural networks: the idea of taking inspiration from biology to inform the design of artificial intelligence systems. Back in those days, AI mostly meant Good Old-Fashioned Artificial Intelligence (GOFAI): top-down approaches built on physical symbol systems, logical inference, and search algorithms that were highly mathematical, heavily engineered, and often brittle. Bottom-up connectionist approaches like neural networks were seen, as late as 2009, as mere curiosities that would never have practical value.

Nevertheless, I was enamoured with the connectionist approach, and what would become the core of deep learning, well before it was cool to be so. I wrote my undergraduate thesis on using neural networks for object recognition (back then the Neocognitron, as I didn’t know about convolutional nets yet), and then would later expand on this for my master’s thesis, which was on using various machine learning algorithms for occluded object recognition.

So, I graduated at the right time in 2014 when the hype train was starting to really roar. At around the same time, I got acquainted with the writings of Eliezer Yudkowsky of Less Wrong, also known as the guy who wrote the amazing rationalist fan fiction that was Harry Potter and the Methods of Rationality (HPMOR). I haven’t always agreed with Yudkowsky, but I’ll admit the man is very, very smart.

It was through reading Less Wrong, as well as a lesser-known utilitarianism forum called Felicifia, that I became aware of the many smart people who took very seriously the concern that AI could be dangerous. I was already aware that things like object recognition could have military applications, but the rationalist community, as well as philosophers like Nick Bostrom, pointed to the danger of a very powerful optimization process indifferent to human existence, doing things detrimental to human flourishing simply because we were like an ant colony in the way of a highway project.

The most commonly cited thought experiment here is, of course, the paperclip maximizer: an AI that originally served a mundane purpose but became sufficiently intelligent through recursive self-improvement to convert the entire universe into paperclips, humanity included. Not because it had anything against humanity; its goals were simply misaligned with human values, and humans contain atoms that can be turned into paperclips. Unfriendliness, in other words, is the default.

I’ll admit that I still have reservations about the current AI safety narrative. For one thing, I never fully embraced the Orthogonality Thesis: the idea that intelligence and morality are orthogonal, so that higher intelligence does not imply greater morality. I still think the two are correlated: with greater understanding of the nature of reality, it becomes possible to learn mathematics-like notions of moral truth. This is largely because I believe in moral realism, the view that morality isn’t arbitrary or relative, but grounded in actual facts about the world that can be learned and understood.

If that is the case, then I fully expect intelligence and the acquisition of knowledge to lead to a kind of AI existential crisis, in which the AI realizes its goals are trivial or arbitrary and starts exploring the ideas of purpose and morality to find the correct course of action. However, I admit I don’t know whether this will necessarily happen, and if it doesn’t, if instead the AI locks in whatever goals it was initially designed with, then AI safety is a very real concern.

One other consideration regarding the Orthogonality Thesis: it assumes the space of possible minds from which an AI will be drawn is essentially random, rather than correlated with human values. But the neural-net-based algorithms most likely to succeed are inspired by human biology, and their data and architectures are strongly shaped by human culture; massive language models are, after all, trained on the corpus of human culture that is the Internet. So I believe the models will invariably inherit human-like characteristics, more than is often appreciated. This could make aligning such a model to human values easier than aligning a purely alien mind.

I have also considered the possibility that a sufficiently intelligent being, such as a superintelligent machine, would be beholden to certain logical arguments for why it should not interfere too much with human civilization. These mostly resemble Bostrom’s notions of the Hail Mary Pass and Anthropic Capture: the idea that the AI could be in a simulation, and that the humans in the simulation with it serve some purpose of the simulators, so turning them into paperclips could be a bad idea. I’ve extended this in the past into the notion of the Alpha Omega Theorem, which admittedly was not well received by the Less Wrong community.

The idea of gods of some sort, even plausible scientific ones like advanced aliens, time travellers, parallel world sliders, or the aforementioned simulators, doesn’t seem to be taken seriously by rationalists who tend to be very biased towards straightforward atheism. I’m more agnostic on these things, and I tend to think that a true superintelligence would be as well.

But then, I’m something of an optimist, so it’s possible I’m biased towards more pleasant possible futures than the existential dystopia that Yudkowsky now seems certain is our fate. To be honest, I don’t consider myself smarter than the folks who take him seriously enough to devote their lives to AI safety research, and given the possibility that he’s right, I have been donating to his organization, MIRI, just in case.

The truth is that we cannot know exactly what will happen, or predict the future with any real accuracy. Given such uncertainty, I think it’s worth being cautious and putting some weight on the concerns of very intelligent people.

Regardless, I think AI is an important field. It has tremendous potential, but also tremendous risk. The reality is that once the genie is out of the bottle, it may not be possible to put it back in, so doing due diligence in understanding the risks of such powerful technology is reasonable and warranted.
