It is the nature of reality that things are complicated. People are complicated. The things we assume to be true may or may not be, and an honest person recognizes that the doubts are real. The uncertainty of truth means that no matter how strongly we strive for it, we can very much be wrong about many things. In fact, given that most matters have many possibilities, the base likelihood of getting things right is about 1/N, where N is the number of possibilities the matter can have. As possibilities increase, our likelihood of being correct diminishes.
Thus, humility as a default position is wise. We are, on average, less than 50% likely to have accurate beliefs about the world. Most of the things we believe at any given time are probably wrong, or at least, not the exact truth. In that sense, Socrates was right.
That being said, it remains important to take reasonable actions given our rational beliefs. It is only by exploring reality and testing our beliefs that we can become more accurate and exceed the base probabilities. This process is difficult and fraught with peril. Our general tendency is to seek to reinforce our biases, rather than to seek truths that challenge them. If we seek to understand, we must be willing to let go of our biases and face difficult realities.
The world is complex. Most people are struggling just to survive. They don’t have the luxury to ask questions about right and wrong. To ask them to see the error of their ways is often tantamount to asking them to starve. The problem is not people themselves, but the system that was formed by history. The system is not a conscious being. It is merely a set of artifices that people built in their desperation to survive in a world largely indifferent to their suffering and happiness. This structure now stands and allows most people to survive, and sometimes to thrive, but it is optimized for basic survival rather than fairness.
A fair world is desirable, but ultimately one that is extraordinarily difficult to create. It’s a mistake to think that people were disingenuous when they tried, in the past, to create a better world for all. It seems they tried and failed, not for lack of intention, but because the challenge is far greater than imagined. Society is a complex thing. People’s motivations are varied and innumerable. Humans make mistakes with the best of intentions.
To move forward requires taking a step in the right direction. But how do we know what direction to take? It is at best an educated guess, made with our best intuitions and thoughts. The truth is we can never be certain that what we do is best. The universe is like an imperfect information game: the unknowns mean that the move which seems right now will not always prove right in retrospect. We can only choose what seems like the best action at a given moment.
This uncertainty limits the power of all agents in the universe who lack the clarity of omniscience. It is thus an error to assign God-like powers to, for instance, an AGI. But more importantly, it means that we should be cautious of our own confidence. What we know is very little. Anyone who says otherwise should be suspect.
It’s something we grow up always assuming is real: this reality, this universe that we see and hear around us, always with us, ever present. But sometimes there are doubts.
There’s a thing in philosophy called the Simulation Argument. It posits that, given that our descendants will likely develop the technology to simulate reality someday, the odds are quite high that our apparent world is one of these simulations, rather than the original world. It’s a probabilistic argument, based on estimated odds of there being many such simulations.
A long time ago, I had an interesting experience. Back then, as a Christian, I wrestled with my faith and was at times mad at God for the apparent evil in this world. At one point, in a moment of anger, I took a pocket knife and made a gash in a world map on the wall of my bedroom. I then went on a camping trip, and overheard in the news that Russia had invaded Georgia. Upon returning, I found that the gash went straight through the border between Russia and Georgia. I’d made that gash exactly six days before the invasion.
Then there’s the memory I have of a “glitch in the Matrix”, so to speak. Many years ago, I was in a bad place mentally and emotionally, and I tried to open a second-floor window to get out of a house. It probably would have ended badly, were it not for a momentary change that caused the window, which had a crank to open, to suddenly become a solid frame with no crank or way to open. It happened for a split second, just long enough for me to panic and throw my body against the frame, making such a racket as to attract the attention of someone who could stop me and calm me down.
I still remember this incident. At the time I thought it was some intervention by God or time travellers/aliens/simulators or some other benevolent higher power. Obviously I have nothing except my memory of this. There’s no real reason for you to believe my testimony. But it’s one reason among many why I believe the world is not as it seems.
Consider for a moment the case of the total solar eclipse. It’s a convenient thing to have occur, because the 1919 eclipse expedition allowed Einstein’s General Theory of Relativity to be confirmed, by measuring the deflection of starlight passing near the sun, an effect only observable during a total eclipse. But total solar eclipses don’t have to be. They only happen because the sun is approximately 400 times the diameter of the moon and approximately 400 times as far from the Earth: exactly the right ratio of size and distance for total solar eclipses to occur. Furthermore, due to gradual changes in the moon’s orbit, this coincidence is only present for a cosmologically short time frame of a few hundred million years, one that happens to coincide with the development of human civilization.
Note that this coincidence is immune to the Anthropic Principle because it is not essential to human existence. It is merely a useful coincidence.
Another fun coincidence is the names of the arctic and antarctic. The arctic is named after the bear constellations of Ursa Major and Minor, which can be seen only from the northern hemisphere. Antarctic literally means opposite of arctic. Coincidentally, polar bears can be found in the arctic, but no species of bear is found in the antarctic.
There are probably many more interesting coincidences like this, little Easter eggs that have been left for us to notice.
The true nature of our reality is probably something beyond our comprehension. There are hints at it however, that make me wonder about the implications. So, I advise you to keep an open mind about the possible.
Note: The following is a blog post I wrote as part of a paid written work trial with Epoch. For probably obvious reasons, I didn’t end up getting the job, but they said it was okay to publish this.
Historically, one of the major reasons machine learning was able to take off in the past decade was the use of Graphics Processing Units (GPUs) to dramatically accelerate training and inference. In particular, Nvidia GPUs have been at the forefront of this trend, as most deep learning libraries such as TensorFlow and PyTorch initially relied quite heavily on implementations that made use of the CUDA framework. The CUDA ecosystem remains strong enough that Nvidia commands an 80% market share of data center GPUs, according to a report by Omdia (https://omdia.tech.informa.com/pr/2021-aug/nvidia-maintains-dominant-position-in-2020-market-for-ai-processors-for-cloud-and-data-center).
Given the importance of hardware acceleration in the timely training and inference of machine learning models, it might naively seem useful to look at the raw computing power of these devices in terms of FLOPS. However, due to the massively parallel nature of modern deep learning algorithms, it is relatively trivial to scale up model processing by simply adding additional devices, taking advantage of both data and model parallelism. Thus, raw computing power isn’t really the proper limit to consider.
What’s more appropriate is to instead look at the energy efficiency of these devices in terms of performance per watt. In the long run, energy constraints have the potential to be a bottleneck, as power generation requires substantial capital investment. Notably, data centers currently account for about 2% of U.S. electricity use (https://www.energy.gov/eere/buildings/data-centers-and-servers).
For the purposes of simplifying data collection and as a nod to the dominance of Nvidia, let’s look at the energy efficiency trends in Nvidia Tesla GPUs over the past decade. Tesla GPUs are chosen because Nvidia has a policy of not selling their other consumer grade GPUs for data center use.
The data for the following was collected from Wikipedia’s page on Nvidia GPUs (https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units), which summarizes information that is publicly available from Nvidia’s product datasheets on their website. A floating point precision of 32-bits (single precision) is used for determining which FLOPS figures to use.
A more thorough analysis would probably also look at Google TPUs and AMD’s lineup of GPUs, as well as Nvidia’s consumer grade GPUs. The analysis provided here can be seen more as a snapshot of the typical GPU most commonly used in today’s data centers.
Figure 1: The performance per watt of Nvidia Tesla GPUs from 2011 to 2022, in GigaFLOPS per Watt.
Notably, the trend is positive. While the wattages of individual cards have increased slightly over time, performance has increased faster. Interestingly, the efficiency of these cards exceeds that of the most energy efficient supercomputers on the Green500 list for the same year (https://www.top500.org/lists/green500/).
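To make the underlying arithmetic concrete, here is a minimal sketch of how the Figure 1 values are derived. The FP32 throughput and board power numbers below are approximate values recalled from public datasheets for a few PCIe Tesla cards, so treat the exact figures as illustrative rather than authoritative:

```python
# Approximate single-precision throughput (GFLOPS) and board power (W)
# for a few Nvidia Tesla data center GPUs (PCIe variants). Figures are
# rounded from public datasheets and are illustrative only.
tesla_specs = {
    "K20 (2012)":  {"gflops": 3520,  "watts": 225},
    "P100 (2016)": {"gflops": 9300,  "watts": 250},
    "V100 (2017)": {"gflops": 14000, "watts": 250},
    "A100 (2020)": {"gflops": 19500, "watts": 250},
}

def gflops_per_watt(spec):
    """Energy efficiency: peak GFLOPS per watt of board power."""
    return spec["gflops"] / spec["watts"]

for name, spec in tesla_specs.items():
    print(f"{name}: {gflops_per_watt(spec):.1f} GFLOPS/W")
```

Even with these rough numbers, the same qualitative pattern as Figure 1 appears: efficiency climbs generation over generation even as board power stays flat or rises.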
An important consideration in all this is that energy efficiency is believed to have a hard physical limit, known as the Landauer Limit (https://en.wikipedia.org/wiki/Landauer%27s_principle), which stems from the relationship between entropy and information processing. Although efforts have been made to develop reversible computation that could, in theory, get around this limit, it is not clear that such technology will ever be practical, as all proposed forms seem to trade off the energy savings against substantial costs in space and time complexity (https://arxiv.org/abs/1708.08480).
Space complexity costs additional memory storage and time complexity requires additional operations to perform the same effective calculation. Both in practice translate into energy costs, whether it be the matter required to store the additional data, or the opportunity cost in terms of wasted operations.
More generally, it can be argued that useful information processing is efficient because it compresses information, extracting signal from noise and filtering away irrelevant data. Neural networks, for instance, rely on neural units that take in many inputs and generate a single output value that is propagated forward. This efficient aggregation of information is what makes neural networks powerful. Reversible computation in some sense reverses this efficiency, making its practicality questionable.
Thus, it is perhaps useful to know how close we are to approaching the Landauer Limit with existing technology, and when to expect to reach it. The Landauer Limit works out to 87 TeraFLOPS per watt, assuming 32-bit floating point precision at room temperature.
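The Landauer bound itself is just kT·ln 2 joules per irreversible bit erasure. A quick sketch of the arithmetic follows; note that converting the per-bit bound into a FLOPS-per-watt figure requires an assumption about how many bit erasures a single 32-bit floating point operation entails, and that assumption drives the final number (the 32 used below is an illustrative choice, not an established figure):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
T = 300.0            # room temperature, K

# Minimum energy dissipated per irreversible bit erasure (Landauer bound)
e_bit = k_B * T * math.log(2)    # ~2.87e-21 J

# Bit erasures per second sustainable on a 1 watt budget
erasures_per_watt = 1.0 / e_bit  # ~3.5e20 per second

def max_flops_per_watt(bits_erased_per_flop):
    """Implied FLOPS/W ceiling for an assumed erasure cost per FLOP."""
    return erasures_per_watt / bits_erased_per_flop

print(f"Landauer energy per bit at 300 K: {e_bit:.3e} J")
print(f"Ceiling assuming 32 erasures per FLOP: {max_flops_per_watt(32):.3e} FLOPS/W")
```

The per-bit energy is uncontroversial physics; the per-FLOP conversion is where any headline FLOPS-per-watt limit gets its assumptions.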
Previous research to that end has proposed Koomey’s Law (https://en.wikipedia.org/wiki/Koomey%27s_law), which began as an expected doubling of energy efficiency every 1.57 years, but has since been revised down to once every 2.6 years. Figure 1 suggests that for Nvidia Tesla GPUs, it’s even slower.
Another interesting reason why energy efficiency may be relevant has to do with the real world benchmark of the human brain, which is believed to have evolved with energy efficiency as a critical constraint. Although the human brain is obviously not designed for general computation, we are able to roughly estimate the number of computations that the brain performs, and its related energy efficiency. Although the error bars on this calculation are significant, the human brain is estimated to perform at about 1 PetaFLOPS while using only 20 watts (https://www.openphilanthropy.org/research/new-report-on-how-much-computational-power-it-takes-to-match-the-human-brain/). This works out to approximately 50 TeraFLOPS per watt. This makes the human brain, strictly speaking, less powerful than our most powerful supercomputers, but more energy efficient than them by a significant margin.
Note that this is actually within an order of magnitude of the Landauer Limit. Note also that the human brain is roughly two and a half orders of magnitude more efficient than the most efficient Nvidia Tesla GPUs as of 2022.
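The brain comparison is simple enough to check directly. Taking the Open Philanthropy estimate of ~1 PetaFLOPS on ~20 watts, and an assumed round figure of ~100 GigaFLOPS per watt for a 2022 Tesla-class GPU (the round number is mine, chosen for illustration), the gap works out as follows:

```python
import math

brain_flops = 1e15   # ~1 PetaFLOPS (Open Philanthropy estimate)
brain_watts = 20.0   # typical resting power of the human brain

brain_eff = brain_flops / brain_watts  # FLOPS per watt

# Assumed round figure for a 2022 Tesla-class GPU, ~100 GFLOPS/W
gpu_eff = 100e9

ratio = brain_eff / gpu_eff
print(f"Brain efficiency: {brain_eff:.1e} FLOPS/W")
print(f"Brain vs GPU: {ratio:.0f}x, or {math.log10(ratio):.1f} orders of magnitude")
```

Under these assumptions the brain comes out around 500 times, or roughly two and a half orders of magnitude, more efficient.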
On a grander scope, the question of energy efficiency is also relevant to the question of the ideal long-term future. There is a scenario in Utilitarian moral philosophy known as the Utilitronium Shockwave, in which the universe is hypothetically converted into the densest possible computational matter, with happiness emulations run on this hardware to theoretically maximize happiness. This scenario is occasionally conjured up as a challenge against Utilitarian moral philosophy, but it would look very different if the most computationally efficient form of matter already existed in the form of the human brain. In that case, the ideal future would correspond to an extraordinarily vast number of humans living excellent lives. Thus, if the human brain is in effect at the Landauer Limit in terms of energy efficiency, and the Landauer Limit holds against efforts toward reversible computing, we can argue in favour of this desirable, human-filled future.
In reality, due to entropy, it is energy that ultimately constrains the number of sentient entities that can populate the universe, rather than space, which is much more vast and largely empty. So, energy efficiency would logically be much more critical than density of matter.
This also has implications for population ethics. Assuming that entropy cannot be reversed, and the cost of living and existing requires converting some amount of usable energy into entropy, then there is a hard limit on the number of human beings that can be born into the universe. Thus, more people born at this particular moment in time implies an equivalent reduction of possible people in the future. This creates a tradeoff. People born in the present have potentially vast value in terms of influencing the future, but they will likely live worse lives than those who are born into that probably better future.
Interesting philosophical implications aside, the shrinking gap between GPU efficiency and the human brain sets a potential timeline. Once this gap is bridged, computers will theoretically be as energy efficient as human brains, and it should then be possible to emulate a human mind on hardware such that you could essentially have a synthetic human that is as economical as a biological one. This is comparable to the Ems that the economist Robin Hanson describes in his book The Age of Em. The possibility of duplicating copies of human minds comes with its own economic and social considerations.
So, how far away is this point? Given the trend observed with GPU efficiency growth, it looks like a doubling occurs about every three years. Since an order of magnitude is log2(10) ≈ 3.3 doublings, that works out to roughly a factor of ten per decade, and two and a half orders of magnitude in about twenty-five years. As mentioned, two and a half orders of magnitude is the current distance between existing GPUs and the human brain. Thus, we can roughly anticipate this point to arrive around the middle of the century, and the Landauer Limit to be reached shortly thereafter.
Most AI safety timelines are much sooner than this, however, so it is likely that we will have to deal with aligning AGI before the potential boost that could come from having synthetic human minds, or the potential barrier of the Landauer Limit slowing down AI capabilities development.
In terms of future research, a logical next step would be to look at how quickly the overall power consumption of data centers is increasing, alongside the current growth rate of electricity production, to see to what extent they are sustainable and whether improvements in energy efficiency will be outpaced by demand. If so, that could slow the pace of machine learning research that relies on very large models trained with massive amounts of compute. This is in addition to other potential limits, such as the rate of data generation for large language models, which at this point depend on massive datasets comprising essentially the entire Internet.
Modern computation is not free. It requires available energy to be expended and converted to entropy. Barring radical new innovations like practical reversible computers, this has the potential to be a long-term limiting factor in the advancement of machine learning technologies that rely heavily on parallel processing accelerators like GPUs.
Because changes in the dosages of my medications can only happen once every month or so at most, a strategy for managing my mood and energy levels has been to supplement with the caffeine in coffee. My wife got us an espresso machine a while back, so I’m able to pull shots when needed.
Initially, my dosing schedule for espresso shots mostly assumed front loading: 2 to 3 shots in the morning, followed by single-shot top-ups at noon and in the late afternoon. This was based on the assumption that I wanted a consistent level of caffeine in the bloodstream, avoiding peaks and high variance, since the half-life of caffeine is about 5 hours. This sort of worked for a while, but I noticed that I still crashed pretty hard in the evenings.
Recently, after reading more, I noted that adenosine levels actually rise throughout the day, so fixing the caffeine level at a constant amount will probably be too much early in the day and too little later, assuming the goal is to offset rising sleepiness. Thus, a more practical dosing schedule is probably an even distribution: something like a double shot in the morning, followed by a double shot at noon. Initial experiments suggest that this works better and keeps me from crashing as much in the evening, although it is still early in my testing.
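Out of curiosity, the two schedules are easy to compare with a toy single-compartment model: each dose decays exponentially with a ~5 hour half-life, and the levels simply add. The ~65 mg of caffeine per espresso shot is an assumed ballpark figure, and real pharmacokinetics (absorption time, individual variation) are ignored:

```python
def caffeine_level(doses, hour, half_life=5.0):
    """Total caffeine (mg) in the bloodstream at `hour`, given a list of
    (dose_hour, mg) pairs, assuming simple exponential decay."""
    return sum(mg * 0.5 ** ((hour - t) / half_life)
               for t, mg in doses if t <= hour)

SHOT = 65.0  # assumed mg of caffeine per espresso shot

# Old schedule: 3 shots at 7am, single top-ups at noon and 4pm
front_loaded = [(7, 3 * SHOT), (12, SHOT), (16, SHOT)]
# New schedule: double shot at 7am, double shot at noon
even = [(7, 2 * SHOT), (12, 2 * SHOT)]

for hour in (9, 13, 17, 21):
    print(f"{hour:02d}:00  front-loaded: {caffeine_level(front_loaded, hour):5.1f} mg"
          f"   even: {caffeine_level(even, hour):5.1f} mg")
```

Under these assumptions the even schedule carries less caffeine into the evening despite similar daytime levels, which is at least consistent with the smaller evening crash.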
One of the more challenging things I’ve experienced in my life has been dealing with the complexities of mental illness and the struggle to live a normal life despite it. Despite my best efforts, I find myself infuriatingly inconsistent due to a mood disorder that means I’m occasionally overly energetic, and other times fatigued. In either state, I find it difficult to focus on being productive, either because I’m distracted by a rush of thoughts, or alternatively, too tired to do anything. The midway state between these two extremes is a thin region where I can be productive and effective.
A lot of people don’t really get the extent to which our moods and behaviours can be shaped by something as simple as a little blue pill. For me, the cocktail of medications that allows me to function is an added cost of living, and it comes with the danger that an adjustment can overcompensate and flip me into the opposite state from the one I was in before. It becomes rather infuriating how easily the balance can be broken, and how obviously I am not in control of my own mental condition.
It’s bothersome. I want to be effective, to be able to productively do the things that I want to do. But often, during periods of adjustment, I find myself struggling to do basic things. When things are working right, I can be quite productive, like my first two and a half years at Huawei were. But then things can go wrong, and I can find myself stuck in the mud, worried that I may never be able to function well again.
I can blame the illness for a lot of things. Lost friends, lost time, lost hope, a sidelined career, and so on. But at the same time I hesitate to. I hesitate to admit to the public that I have this illness, because of the severe stigma that is attached to it. And I don’t want it to be an excuse for my mistakes. But at the same time, it is the reason why I sometimes wasn’t myself, why I can be maddeningly inconsistent.
It becomes a struggle because, in part, I want to hide these facts from people, so they don’t look down on me, so they don’t decide I’m too much of a risk to employ, things like that. My parents always tell me to keep the fact a secret. It’s not something that other people understand, and it hurts my chances to get or keep a job. But at the same time, if I don’t explain why things are happening, do they expect me to be able to keep the job anyway?
We’re expected to be our best, day in, and day out, but for me, that’s impossible. It’s impossible for me to be 100% all the time, and moreover, there are days when I’ll just be useless. How am I supposed to work with this? What do people think I should do?
It’s just bothersome. The world expects us to be striving and achieving all the time. But I literally cannot be that way. Do I belong in this world? Or am I just too messed up to survive?
These are some thoughts I sometimes have. The kind of thoughts that on better days the medications take away. But sometimes they come back. And sometimes I’m trapped by my own mind in a seemingly hopeless situation. At least, hopeless for someone who wants to be effective and productive and to contribute meaningfully to the world.
So, that’s a small glimpse of the struggle. There’s a lot more that I still don’t think is wise to explain. Because most people don’t particularly understand. But hopefully, if you care to, this post helps you to understand a bit of my experience, and why I am the way that I am. Thank you for your time.
One thing I’ve learned from observing people and society is that the vast majority of folks are egoistic, or selfish. They tend to care about their own happiness and are at best indifferent to the happiness of others, unless they have some kind of relationship with a person, in which case they care about that person’s happiness insofar as keeping that person happy affects their own. This is the natural, neutral state of affairs. It is unnatural to care about other people’s happiness for their own sake, as ends in themselves. We call such unnatural behaviour “altruism”, and tend to glorify it in narratives while avoiding actually being that way in reality.
In an ideal world, all people would be altruistic. They would equally value their own happiness and the happiness of each other person because we are all persons deserving happiness. Instead, reality is mostly a world of selfishness. To me, the root of all evil is this egoism, this lack of concern for the well-being of others that is the norm in our society.
I say this knowing that I am a hypocrite. I say this as someone who tries to be altruistic at times, but is very inconsistent with the application of the principles that it logically entails. If I were a saint, I would have sold everything I didn’t need and donated at least half my gross income to charities that help the global poor. I would be vegan. I would probably not live in a nice house, own a car (even a hybrid), or busy myself living a pleasant life with my family.
Instead, I donate a small fraction of my gross income to charity and call it a day. I occasionally make the effort to help my friends and family when they are in obvious need. I still eat meat and play computer games and own a grand piano that I don’t need.
The reality is that altruism is hard. Doing the right thing for the right reasons requires sacrificing our selfish desires. Most people don’t even begin to bother. In their world view, acts of kindness and altruism are seen with suspicion, as having ulterior motives of virtue signalling or guilt tripping or something else. In such a world, we are not rewarded for doing good, but punished. The incentives favour egoism. That’s why the world runs on capitalism after all.
And so, the world is the way it is. People largely don’t do the right thing, and don’t even realize there is a right thing to do. Most of them don’t care. There are seven billion people in this world right now, and most likely, only a tiny handful of people care that you or I even exist, much less act consistently towards our well-being and happiness.
So, why am I bothering to explain this to you? Because I think we can do better. Not be perfect, but better. We can do more to try to care about others and make the effort to make the world a better place. I believe I do this with my modest donations to charity, and my acts of kindness towards friends and strangers alike. These are small victories for goodness and justice and should be celebrated, even if in the end we fall short of being saints.
In the end, the direction you go in is more important than the magnitude of the step you take. Many small steps in the right direction will get you to where you want to be eventually. Conversely, if your direction is wrong, then bigger steps aren’t always better.
In the interest of explaining further my considerations for having a career working on AI, I figure it makes sense to explain a few things.
When I was very young, I watched a black and white movie where a mad scientist somehow replaced a human character with a robot. At the time, I actually thought the human character had somehow been transformed into the robot, which was terrifying to me. This, to my childish mind, created an irrational fear of robots that made me avoid playing with overtly robot-like devices, at least while I was a toddler.
Eventually I grew out of that fear. When I was older and studying computer science at Queen’s University, I became interested in the concept of neural networks: the idea of taking inspiration from biology to inform the design of artificial intelligence systems. Back in those days, AI mostly meant Good Old Fashioned Artificial Intelligence (GOFAI), namely top-down approaches involving physical symbol systems, logical inference, and search algorithms that were highly mathematical, heavily engineered, and often brittle in practice. Bottom-up connectionist approaches like neural networks were seen, as late as 2009, as mere curiosities that would never have practical value.
Nevertheless, I was enamoured with the connectionist approach, and what would become the core of deep learning, well before it was cool to be so. I wrote my undergraduate thesis on using neural networks for object recognition (back then the Neocognitron, as I didn’t know about convolutional nets yet), and then would later expand on this for my master’s thesis, which was on using various machine learning algorithms for occluded object recognition.
So, I graduated at the right time in 2014 when the hype train was starting to really roar. At around the same time, I got acquainted with the writings of Eliezer Yudkowsky of Less Wrong, also known as the guy who wrote the amazing rationalist fan fiction that was Harry Potter and the Methods of Rationality (HPMOR). I haven’t always agreed with Yudkowsky, but I’ll admit the man is very, very smart.
It was through reading Less Wrong, as well as a lesser-known utilitarianism forum called Felicifia, that I became aware that many smart people took very seriously the concern that AI could be dangerous. I was already aware that stuff like object recognition could have military applications, but the rationalist community, as well as philosophers like Nick Bostrom, pointed to the danger of a very powerful optimization algorithm that was indifferent to human existence, choosing to do things detrimental to human flourishing just because we were like an ant colony in the way of a highway project.
The most commonly cited thought experiment of this is of course, the paperclip maximizer that originally served a mundane purpose, but became sufficiently intelligent through recursive self-improvement to convert the entire universe into paperclips, including humanity. Not because it had anything against humanity, just that its goals were misaligned with human values in that humans contain atoms that can be turned into paperclips, and thus, unfriendliness is the default.
I’ll admit that I still have reservations about the current AI safety narrative. For one thing, I never fully embraced the Orthogonality Thesis, the idea that intelligence and morality are orthogonal and that higher intelligence does not mean greater morality. I still think there is a correlation between the two: that with greater understanding of the nature of reality, it becomes possible to learn mathematics-like notions of moral truth. This is largely because I believe in moral realism, the view that morality isn’t arbitrary or relative, but based on actual facts about the world that can be learned and understood.
If that is the case, then I fully expect intelligence and the acquisition of knowledge to lead to a kind of AI existential crisis, in which the AI realizes its goals are trivial or arbitrary and starts to explore the ideas of purpose and morality to find the correct course of action. However, I will admit I don’t know whether this will necessarily happen, and if it doesn’t, if instead the AI locks itself into whatever goals it was initially designed with, then AI safety is a very real concern.
One other consideration regarding the Orthogonality Thesis is that it assumes the space of possible minds the AI will be drawn from is completely random, rather than correlated with human values. In practice, the neural net based algorithms most likely to succeed are inspired by human biology, and their data and architecture are strongly influenced by human culture. Massive language models are, after all, trained on a corpus of human culture that is the Internet. So, I believe the models will invariably inherit human-like characteristics more than is often appreciated. This could make aligning such a model to human values easier than aligning a purely alien mind.
I have also considered the possibility that a sufficiently intelligent being such as a superintelligent machine, would be beholden to certain logical arguments for why it should not interfere with human civilization too much. Mostly these resemble Bostrom’s notion of the Hail Mary Pass, or Anthropic Capture, the idea that the AI could be in a simulation, and that the humans in the simulation with it serve some purpose of the simulators and so, turning them into paperclips could be a bad idea. I’ve extended this in the past to the notion of the Alpha Omega Theorem, which admittedly was not well received by the Less Wrong community.
The idea of gods of some sort, even plausible scientific ones like advanced aliens, time travellers, parallel world sliders, or the aforementioned simulators, doesn’t seem to be taken seriously by rationalists who tend to be very biased towards straightforward atheism. I’m more agnostic on these things, and I tend to think that a true superintelligence would be as well.
But then, I’m something of an optimist, so it’s possible I’m biased towards more pleasant possible futures than the existential dystopia that Yudkowsky now seems certain is our fate. To be honest, I don’t consider myself smarter than the folks who take him seriously enough to devote their lives to AI safety research. And given the possibility that he’s right, I have been donating to his MIRI organization just in case.
The truth is that we cannot know exactly what will happen, or predict the future with any real accuracy. Given such uncertainty, I think it's worth being cautious and putting some weight on the concerns of very intelligent people.
Regardless, I think AI is an important field. It has tremendous potential, but also tremendous risk. The reality is that once the genie is out of the bottle, it may not be possible to put it back in, so doing due diligence in understanding the risks of such powerful technology is reasonable and warranted.
I know I earlier described the danger of AI capability research as a reason to leave the industry. After some reflection, however, I realize that not all work in the AI/ML industry is the same; not all of it advances AI capability per se. Working as a machine learning engineer at a lower-tier company, applying existing ML technology to solve various problems, is unlikely to contribute to building the AI that ends the world.
That being the case, I have occasionally wondered whether my decision to switch to the game industry was too hasty. My enthusiasm for gaming isn't as strong as my interest in AI/ML was, and so it has been surprisingly challenging to stay motivated in this field.
In particular, while I have a lot of what I think are neat game ideas, working as a game programmer generally doesn't involve them. It involves working on whatever game the leader of the team wants to make. When this matches one's interests, it can work out well, but it's quite possible to find oneself working on a game one has little interest in actually playing.
Making a game that you're not really invested in can still be fun, in the way that programming and seeing your creation come to life is fun, but it's not quite the same as building your dream game. In some sense, my game design hobby didn't translate well into actual work, where practicalities are often far more important than dreams.
So, I'm at something of a crossroads right now. I'm still at Twin Earth for a while longer, but there's a very good chance I'll be parting ways with them in a few months' time. The question becomes: do I continue to work in games, return to machine learning where I have most of my experience and credentials, or do something else?
In an ideal world, I’d be able to find a research engineer position working on the AI safety problem, but my survey of the field so far still suggests that the few positions that exist would require moving to San Francisco or London, which given my current situation would complicate things a lot. And honestly, I’d rather work remotely if it were at all possible.
Still, I do appreciate the chance I got to work in the game industry. At the very least, I now have a clearer idea of what I was missing out on before. Admittedly, my dip into games never reached the local indie community or anything like that, so I don't know how I might have interacted with that culture or scene.
Not sure where I'm going with this. Realistically, my strengths are still more geared towards AI/ML work, so that's probably my first choice in terms of career. On the other hand, Dreamyth was a thing once. I did at one time hold aspirations to make games. Given that I now actually know Unreal Engine, I could conceivably start making the games I want to make, even just as a side hobby.
I still don’t think I have the resources to start a studio. My wife is particularly against the idea of a startup. The reality is I should find a stable job that can allow my family to live comfortably.
These are ultimately the considerations I need to keep in mind.
Happy Birthday, and goodbye. May your soul live on in the next world. You who were the wind that never had a chance to take a first breath. This world wasn’t fair to you.
It doesn’t matter what we were going to name you. You can be anything, or anyone, now. I’m sorry we couldn’t save you. I’m sorry.
Little butterfly. Perhaps in another parallel world things would be different. You’d grow up and become a paladin, the things we wished for you. The dreams that are impossible now.
I will remember you. I will remember you. I will remember you. Happy Birthday, and goodbye.