I recently had a baby. There’s some debate in philosophical circles about whether or not it is right to have children. I thought I should briefly outline why I chose this path.
When I was a child, I think it was an unwritten assumption within my traditional Chinese Christian family that I would have kids. In undergrad however, I encountered David Benatar’s Better Never To Have Been, which exposed me to anti-natalist views for the first time. These often argued that hypothetical suffering was somehow worse or more real than hypothetical happiness. I didn’t really agree, but I admitted the arguments were interesting.
After that, I became a Utilitarian in my moral philosophy, and was exposed to the idea that adding a life worth living to the universe is a good thing.
Environmentalists and degrowthers often argue that there are too many people in the world already, that adding yet another person given the limited resources is unsustainable and dooming us to a future Malthusian nightmare. I admit that there are a lot of people in the world already, but I’m skeptical that we can’t find a way to use resources more efficiently, or develop technology to solve this the way we have in the past with hybrid rice and the Green Revolution.
Though, to be honest, my actual reasons for having a child are more mundane. My wife wanted to have the experience and have someone she can talk to when she’s old (actuarial mortality tables suggest I’ll probably die before her, after all). I ultimately let my wife decide whether or not we have kids, as she’s the one who had to endure the pregnancy.
I personally was split 60/40 on whether to have a child. My strongest argument for was actually a simple, almost Kantian one. If everyone has children, the human race will continue into a glorious future among the stars. If no one has children, the human race will die out, along with all of its potential. Thus, in general, it is better to have at least one child to contribute to the future potential of humankind.
At the same time, I was worried, given the possibility of things like AI Doom that I could be bringing a life into a world of future misery and discontent, and I also knew that parenthood could be exceedingly stressful for both of us, putting an end to our idyllic lifestyle. Ultimately, these concerns weren’t enough to stop us though.
My hope is that the life my wife and I created will be a happy and good one, and that I can perhaps teach some of my values to my child, so that they live on beyond my mortality. But these things are ultimately out of my hands in the long run, so they aren’t definitive reasons to go ahead, so much as wishes for my child.
If you accept the idea that there is no ethical consumption or production under capitalism, a serious question arises: Should you work?
What does it mean to work? Generally, the average person is a wage earner. They sell their labour to an employer in order to afford food to survive. To work thus means to engage with the system, to be a part of society and contribute something that someone somewhere wants done in exchange for the means of survival.
Implicit in this is the reality that there is a fundamental, basic cost to living. Someone, somewhere, is farming the food that you eat, and in a very roundabout way, you are, by participating in the economy, returning the favour. This is ignoring the whole issue of capitalism’s merits. At the end of the day, the economy is a system that feeds and clothes and provides shelter, however imperfectly and unfairly. Even if it is not necessarily the most just and perfect system, it nevertheless does provide for most people the amenities that allow a good life.
Thus, in an abstract sense, work is fair. It is fair that the time spent by people to provide food and clothing and shelter is paid back by your spending your time to earn a living, regardless of whatever form that takes. On a basic level, it’s at least minimally fair that you exchange your time and energy for other people’s time and energy. Capitalism may not be fair, but the basic idea of social production is right.
So, if you are able to, please work. Work because in an ideal society, work is your contribution to some common good. It is you adding to the overall utility by doing something that seems needed by someone enough that they’ll pay you for it. Even if in practice, the reality of the system is less than ideal, the fact is that on a basic level, work needs to be done by someone somewhere for people to live.
While you work, try to do so as morally as possible, by choosing, insofar as possible, professions that are productive and useful to society, and by making decisions that reflect your values rather than those of the bottom line. If you must participate in capitalism to survive, then at least try to be humane about it.
“If you want to be perfect, go, sell your possessions and give to the poor, and you will have treasure in heaven. Then come, follow me.” – Jesus
In 1972, the famous Utilitarian moral philosopher Peter Singer published an essay titled: “Famine, Affluence, and Morality” that argued that we have a moral duty to help those in poverty far across the world. In doing so, he echoed a sentiment that Jesus shared almost two millennia prior, yet which most people who call themselves Christians today seem relatively unconcerned with.
From a deeply moral perspective, we live in a world that is fundamentally flawed and unjust. The painful truth is that the vast majority of humans on this Earth live according to a kind of survivorship bias: the systems and beliefs that perpetuate themselves are not the ones that are right, but the ones that enable their adherents to survive long enough to procreate and instill them in the next generation, where the cycle continues.
For most people, life is hard enough that questioning whether the way things are is right is something of a privilege that they cannot afford. For others, this questioning requires a kind of soul searching that they shy away from because it would make them uncomfortable to even consider. It’s natural to imagine yourself the hero in your own story. To question this assumption is not easy.
But the reality is that almost all of us are in some sense complicit in the most senseless of crimes against humanity. When we participate in an economy to ensure we have food to eat, we are tacitly choosing to give permission to a system of relations that is fundamentally indifferent to the suffering of many. We compete with fellow human beings for jobs and benefit from their misery when we take one of only a limited number of spots in the workforce. We choose to allow those with disproportionate power to decide who gets to live a happier life. And those in power act to further increase their share of power, because to do anything else would lead to being outcompeted and their organization rendered extinct by the perverse incentives that dominate the system.
Given all this, what can one even begin to do about it? Most of us are not born into a position where we have the power to change the world. Our options are limited. To be moral, we would need to defy the very nature of existence. What can we do? If we sell everything we have and give to the poor, that still won’t change the nature of the world, even if it’s the most we could conceivably do.
What does it mean to defy destiny? What does it look like to try to achieve something that seems impossible?
What exists in opposition to this evil? What is good? What is right? What does it look like to live a pure and just life in a world filled with indifference and malice? What does it mean to take responsibility for one’s actions and the consequences of those actions?
Ultimately, it is not in our power to single-handedly change the world, but there are steps we can take to give voice to our values, to live according to what we believe to be right. This means making small choices about how we behave towards others. It means showing kindness and consideration in a world that demands cutthroat competition. It means taking actions that bring light into the world.
Even if we, by ourselves, cannot bring revolution, we can at least act according to the ideals we espouse. This can be as small as donating a modest amount to a charity in a far off land that corrects a small amount of injustice by giving the poorest among us a bednet that protects them from malaria. If approximately $5500 worth of such things can save a life, and minimum wage can earn you $32,000 a year, then modestly donating 10% of that to this charity saves about one life every two years. If you work for 40 years, you can save about 23 lives this way. Those lives matter. They will be etched into eternity, like all lives worth living. (Edit: Corrected some numbers.)
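As a rough sanity check, the arithmetic behind that estimate can be sketched in a few lines. The cost-per-life and income figures are the essay’s illustrative numbers, not precise charity evaluations:

```python
# Back-of-envelope for the donation example in the text.
cost_per_life = 5500     # approximate cost to save a life via bednets (cited estimate)
annual_income = 32000    # rough full-time minimum-wage income
donation_rate = 0.10     # donating 10% of gross income

annual_donation = annual_income * donation_rate   # $3,200 per year
lives_per_year = annual_donation / cost_per_life  # ~0.58, i.e. ~1 life every 2 years
years_worked = 40
total_lives = lives_per_year * years_worked       # ~23 lives over a career

print(f"{lives_per_year:.2f} lives/year, ~{total_lives:.0f} lives over {years_worked} years")
```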
Admittedly, to do this requires participating in the system. You could also choose not to participate. But to do so would abandon your responsibilities for the sake of a kind of moral purity. In the end, you can do more good by living an ethical life, leading by example, and showing that there are ways of living where you strive to move beyond selfish competition and seek to cooperate and build up the world.
This is the path of true defiance. It does not surrender one’s life to the evils of egoism, or abandon the world to the lost. Instead, it seeks to build something better through decisions that go against the grain, with the understanding that we are all living in mutual co-existence, and that our choices and decisions reflect who we are, our character as people.
We do not have to be perfect. It is enough to be good.
Anyone reading my writings probably knows that I subscribe roughly to the moral theory of Utilitarianism. To me, we should be trying to maximize the happiness of everyone. Every sentient being should be considered important enough to be weighed in our moral calculus of right and wrong. In theory, this means we should place equal weight on every human being on this Earth. In practice, however, there are considerations that complicate the picture.
Effective Altruism would argue that time and distance don’t matter, that you should help those who you can most effectively assist given limited resources. This usually leads to the recommendation of donating to charities in Africa for bednet or medication delivery as this is considered the most effective use of a given dollar of value. There is definitely merit to the argument that a dollar can go further in poverty-stricken Africa than elsewhere. However, I don’t think that’s the only consideration here.
Time and distance do matter to the extent that we as human beings have limited knowledge of things far away from us in time and space. With respect to donations to a distant country in dire need, there are reasonable uncertainties about the effectiveness of these donations, as many of the arguments in favour of them depend heavily on our trust in the analysis done by the charities working far away, which we cannot confirm or prove directly.
This uncertainty should function as a kind of discount rate on the value of the help we can give. A more nuanced and measured analysis thus suggests that we should donate some of our resources to those distant charities, but that we should also devote some of our resources to those closer to home whom we can directly see and assist and know that we are able to help. Our friends and family, with whom we have relationships that allow us to know their needs and wants and what will best help them, are obvious candidates for this kind of help.
Similarly, those in the distant future, while worth helping to an extent, should not completely absolve us of our responsibilities to those near to us in time, who we are much more certain we can directly help and affect in meaningful ways. The further away a possible being is in time, the more uncertain is their existence, after all.
This also means that we ourselves should value our own happiness and, being the best positioned to know how we ourselves can be happy, should take responsibility for our own happiness.
Thus, in practice, Utilitarianism, carefully considered, does not eliminate our social responsibilities to those around us, but rather reinforces these ties, as being important to understanding how best to make those around us happy.
Equal concern does not mean, in practice, equal duty. It means instead that we should expand our circle of concern to the entire universe, and that there is a balance of considerations that create responsibilities for us, magnified by our practical ability to know and help.
Those distant from us are still important. We should do what we reasonably can to help them. But those close to us put us in a position where we are uniquely responsible for what we know to be true.
In the end, it’s ultimately up to you to decide what matters to you, but may I suggest that you be open to helping both those close and far from you, whose needs you are aware of to varying degrees, and who deserve to be happy just like you.
Sometimes you’re not feeling well. Sometimes the world seems dark. The way the world is seems wrong somehow. This is normal. It reflects a fundamental flaw in the universe: it is impossible to always be satisfied with the reality we live in. The flaw comes from multiple subjects experiencing a shared reality.
If you were truly alone in the universe, it could be catered to your every whim. But as soon as there are two, it becomes possible for goals and desires to misalign. This is a structural problem. If you don’t want to be alone, you must accept that other beings have values that can differ from yours, and that they can act in ways contrary to your expectations.
The solution is, put simply, to find the common thread that allows us to cooperate rather than compete. The alternative is to end the existence of all other beings in the multiverse, which is not realistic nor moral. All of the world’s most pressing conflicts are a result of misalignment between subjects who experience reality from different angles of perception.
But the interesting thing is that there are Schelling points, focal points that divergent people can converge on to find common ground and at least partially align in values and interests. Of historical interest, the idea of God is one such point. Regardless of the actual existence of God, the fact of the matter is that the perspective of an all-knowing, all-benevolent, impartial observer is something that multiple religions and philosophies have converged on, allowing a sort of cooperation in the form of some agreement over the Will of God and the common ideas that emerge from considering it.
Another similar Schelling point is the Tit-For-Tat strategy for the Iterated Prisoner’s Dilemma game in Game Theory. The strategy opens with cooperation and then mirrors the other player: cooperating when cooperated with, defecting in retaliation for defection, and offering immediate and complete forgiveness once the other player returns to cooperation. Surprisingly, this extremely simple strategy wins tournaments and has echoes in various religions and philosophies as well. Morality is superrational.
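The strategy is simple enough to state in a few lines of code. Here is a minimal sketch of Tit-For-Tat in an iterated prisoner’s dilemma; the payoff values and the always-defect opponent are standard textbook choices, used here only for illustration:

```python
# Row player's payoffs: mutual cooperation 3, mutual defection 1,
# lone defector 5, lone cooperator 0 (the standard PD values).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_moves):
    """Cooperate first; afterwards mirror the opponent's last move."""
    return "C" if not opponent_moves else opponent_moves[-1]

def always_defect(opponent_moves):
    return "D"

def play(strat_a, strat_b, rounds=10):
    """Play the iterated game, returning each side's total score."""
    seen_by_a, seen_by_b = [], []  # each side's record of the *opponent's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(seen_by_a), strat_b(seen_by_b)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        seen_by_a.append(b)
        seen_by_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # loses only the first round: (9, 14)
```

Against itself, Tit-For-Tat locks into mutual cooperation; against a pure defector, it gives up only the opening round before retaliating.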
Note however that this strategy depends heavily on repeated interactions between players. If one player is in such a dominant position as to be able to kill the other player by defecting, the strategy is less effective. In practice, Tit-For-Tat works best against close to equally powerful individuals, or when those individuals are part of groups that can retaliate even if the individual dies.
In situations of relative darkness, when people or groups are alone and vulnerable to predators killing in secret, the cooperative strategies are weaker than the more competitive strategies. In situations of relative light, when people are strong enough to survive a first strike, or there are others able to see such first strikes and retaliate accordingly, the cooperative strategies win out.
Thus, early history, with its isolated pockets of humanity facing survival or annihilation on a regular basis, was a period of darkness. As the population grows and becomes more interconnected, the world increasingly transitions into a period of light. The future, with the stars and space where everything is visible to everyone, is dominated by the light.
In the long run, cooperative societies will defeat competitive ones. In the grand scheme of things, Alliances beat Empires. However, in order for this equilibrium to be reached, certain inevitable but not immediately apparent conditions must first be met. The reason why the world is so messed up, why it seems like competition beats cooperation right now, is that the critical mass required for there to be light has not yet been reached.
We are in the growing pains between stages of history. Darkness was dominant for so long that it continues to echo into our present. The Light is nascent. It is beginning to reshape the world, but it is still emerging from the shadows of the past. In the long run, though, the Light will rise and usher in the next age of life.
It’s something we tend to grow up always assuming is real. This reality, this universe that we see and hear around us, is always with us, ever present. But sometimes there are doubts.
There’s a thing in philosophy called the Simulation Argument. It posits that, given that our descendants will likely develop the technology to simulate reality someday, the odds are quite high that our apparent world is one of these simulations, rather than the original world. It’s a probabilistic argument, based on estimated odds of there being many such simulations.
A long time ago, I had an interesting experience. Back then, as a Christian, I wrestled with my faith and was at times mad at God for the apparent evil in this world. At one point, in a moment of anger, I took a pocket knife and made a gash in a world map on the wall of my bedroom. I then went on a camping trip, and overheard in the news that Russia had invaded Georgia. Upon returning, I found that the gash went straight through the border between Russia and Georgia. I’d made that gash exactly six days before the invasion.
Then there’s the memory I have of a “glitch in the Matrix”, so to speak. Many years ago, I was in a bad place mentally and emotionally, and I tried to open a second floor window to get out of a house that probably would have ended badly, were it not for a momentary change that caused the window, which had a crank to open, to suddenly become a solid frame with no crank or way to open. It happened for a split second. Just long enough for me to panic and throw my body against the frame, making such a racket as to attract the attention of someone who could stop me and calm me down.
I still remember this incident. At the time I thought it was some intervention by God or time travellers/aliens/simulators or some other benevolent higher power. Obviously I have nothing except my memory of this. There’s no real reason for you to believe my testimony. But it’s one reason among many why I believe the world is not as it seems.
Consider for a moment the case of the total solar eclipse. It’s a convenient thing to have occur, because it allowed Eddington’s 1919 expedition to confirm Einstein’s General Theory of Relativity by measuring the deflection of starlight passing near the sun, which is only observable during an eclipse. But total solar eclipses don’t have to be. They only happen because the sun is approximately 400 times the size of the moon and approximately 400 times as far from the Earth, exactly the right ratio of size and distance for total solar eclipses to occur. Furthermore, due to gradual changes in the moon’s orbit, this coincidence is only present for a cosmologically short time frame of a few hundred million years, one that happens to coincide with the development of human civilization.
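The ratio can be checked against published figures. These are approximate mean values; the moon’s distance in particular varies enough that real eclipses range from total to annular:

```python
# Approximate mean astronomical figures, in kilometres.
SUN_RADIUS = 696_340
MOON_RADIUS = 1_737
SUN_DISTANCE = 149_600_000   # mean Earth-sun distance
MOON_DISTANCE = 384_400      # mean Earth-moon distance

size_ratio = SUN_RADIUS / MOON_RADIUS          # ~401
distance_ratio = SUN_DISTANCE / MOON_DISTANCE  # ~389

print(f"size ratio ~{size_ratio:.0f}, distance ratio ~{distance_ratio:.0f}")
```

Both ratios land near 400, which is why the moon’s disk so nearly matches the sun’s in the sky.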
Note that this coincidence is immune to the Anthropic Principle because it is not essential to human existence. It is merely a useful coincidence.
Another fun coincidence is the names of the arctic and antarctic. The arctic is named after the bear constellations of Ursa Major and Minor, which can be seen only from the northern hemisphere. Antarctic literally means opposite of arctic. Coincidentally, polar bears can be found in the arctic, but no species of bear is found in the antarctic.
There are probably many more interesting coincidences like this, little Easter eggs that have been left for us to notice.
The true nature of our reality is probably something beyond our comprehension. There are hints at it however, that make me wonder about the implications. So, I advise you to keep an open mind about the possible.
Note: The following is a blog post I wrote as part of a paid written work trial with Epoch. For probably obvious reasons, I didn’t end up getting the job, but they said it was okay to publish this.
Historically, one of the major reasons machine learning was able to take off in the past decade was the use of Graphics Processing Units (GPUs) to dramatically accelerate training and inference. In particular, Nvidia GPUs have been at the forefront of this trend, as most deep learning libraries such as TensorFlow and PyTorch initially relied quite heavily on implementations using the CUDA framework. The CUDA ecosystem remains strong enough that Nvidia commands an 80% market share of data center GPUs, according to a report by Omdia (https://omdia.tech.informa.com/pr/2021-aug/nvidia-maintains-dominant-position-in-2020-market-for-ai-processors-for-cloud-and-data-center).
Given the importance of hardware acceleration in the timely training and inference of machine learning models, it might naively seem useful to look at the raw computing power of these devices in terms of FLOPS. However, due to the massively parallel nature of modern deep learning algorithms, it is relatively trivial to scale up model processing by simply adding additional devices, taking advantage of both data and model parallelism. Thus, raw computing power isn’t really the proper limit to consider.
What’s more appropriate is to instead look at the energy efficiency of these devices in terms of performance per watt. In the long run, energy constraints have the potential to be a bottleneck, as power generation requires substantial capital investment. Notably, data centers currently use up about 2% of the U.S. power generation capacity (https://www.energy.gov/eere/buildings/data-centers-and-servers).
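As an illustration of the metric, performance per watt can be computed directly from datasheet figures. The numbers below are approximate public FP32 specifications for one widely used data center card, used here only as an example:

```python
# Example: Tesla V100 (PCIe) datasheet figures, approximate.
fp32_tflops = 14.0   # single-precision throughput, TFLOPS
tdp_watts = 250      # thermal design power

gflops_per_watt = fp32_tflops * 1000 / tdp_watts
print(f"~{gflops_per_watt:.0f} GFLOPS per watt")  # ~56
```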
For the purposes of simplifying data collection and as a nod to the dominance of Nvidia, let’s look at the energy efficiency trends in Nvidia Tesla GPUs over the past decade. Tesla GPUs are chosen because Nvidia has a policy of not selling their other consumer grade GPUs for data center use.
The data for the following was collected from Wikipedia’s page on Nvidia GPUs (https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units), which summarizes information that is publicly available from Nvidia’s product datasheets on their website. A floating point precision of 32-bits (single precision) is used for determining which FLOPS figures to use.
A more thorough analysis would probably also look at Google TPUs and AMD’s lineup of GPUs, as well as Nvidia’s consumer grade GPUs. The analysis provided here can be seen more as a snapshot of the typical GPU most commonly used in today’s data centers.
Figure 1: The performance per watt of Nvidia Tesla GPUs from 2011 to 2022, in GigaFLOPS per Watt.
Notably, the trend is positive. While wattages of individual cards have increased slightly over time, performance has increased faster. Interestingly, the efficiency of these cards exceeds that of the most energy efficient supercomputers on the Green500 list for the same year (https://www.top500.org/lists/green500/).
An important consideration in all this is that energy efficiency is believed to have a possible hard physical limit, known as the Landauer Limit (https://en.wikipedia.org/wiki/Landauer%27s_principle), which depends on the nature of entropy and information processing. Efforts have been made to develop reversible computation that could, in theory, get around this limit, but it is not clear that such technology will ever be practical, as all proposed forms seem to trade off their energy savings against substantial costs in space and time complexity (https://arxiv.org/abs/1708.08480).
Space complexity costs additional memory storage and time complexity requires additional operations to perform the same effective calculation. Both in practice translate into energy costs, whether it be the matter required to store the additional data, or the opportunity cost in terms of wasted operations.
More generally, it can be argued that useful information processing is efficient because it compresses information, extracting signal from noise and filtering away irrelevant data. Neural networks, for instance, rely on neural units that take in many inputs and generate a single output value that is propagated forward. This efficient aggregation of information is what makes neural networks powerful. Reversible computation in some sense reverses this efficiency, making its practicality questionable.
Thus, it is perhaps useful to know how close existing technology is to the Landauer Limit, and when to expect to reach it. The Landauer Limit works out to 87 TeraFLOPS per watt, assuming 32-bit floating point precision at room temperature.
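The limit comes from Landauer’s principle: erasing one bit of information dissipates at least kT·ln 2 of energy. The per-bit figure at room temperature is a one-line calculation; note that converting it into a FLOPS-per-watt bound additionally requires an assumption about how many bits a 32-bit floating point operation effectively erases:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300              # room temperature, K

energy_per_bit = k_B * T * math.log(2)     # ~2.87e-21 J per bit erased
bit_erasures_per_watt = 1 / energy_per_bit # erasures per second at 1 W

print(f"{energy_per_bit:.2e} J per bit erased")
print(f"{bit_erasures_per_watt:.2e} bit erasures per second per watt")
```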
Previous research to that end has proposed Koomey’s Law (https://en.wikipedia.org/wiki/Koomey%27s_law), which began as an expected doubling of energy efficiency every 1.57 years, but has since been revised down to once every 2.6 years. Figure 1 suggests that for Nvidia Tesla GPUs, it’s even slower.
Another interesting reason why energy efficiency may be relevant has to do with the real world benchmark of the human brain, which is believed to have evolved with energy efficiency as a critical constraint. Although the human brain is obviously not designed for general computation, we are able to roughly estimate the number of computations that the brain performs, and its related energy efficiency. Although the error bars on this calculation are significant, the human brain is estimated to perform at about 1 PetaFLOPS while using only 20 watts (https://www.openphilanthropy.org/research/new-report-on-how-much-computational-power-it-takes-to-match-the-human-brain/). This works out to approximately 50 TeraFLOPS per watt. This makes the human brain less powerful strictly speaking than our most powerful supercomputers, but more energy efficient than them by a significant margin.
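Taking the cited estimate at face value, the brain’s efficiency figure is a one-line calculation:

```python
# Open Philanthropy's rough estimate of brain compute, with wide error bars.
brain_flops = 1e15   # ~1 PetaFLOPS
brain_watts = 20     # approximate power draw of the human brain

teraflops_per_watt = brain_flops / brain_watts / 1e12
print(f"~{teraflops_per_watt:.0f} TeraFLOPS per watt")  # ~50
```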
Note that this is actually within an order of magnitude of the Landauer Limit. Note also that the human brain is roughly two and a half orders of magnitude more efficient than the most efficient Nvidia Tesla GPUs as of 2022.
On a grander scope, the question of energy efficiency is also relevant to the question of the ideal long term future. There is a scenario in Utilitarian moral philosophy known as the Utilitronium Shockwave, where the universe is hypothetically converted into the most dense possible computational matter, with happiness emulations run on this hardware to theoretically maximize happiness. This scenario is occasionally conjured up as a challenge against Utilitarian moral philosophy, but it would look very different if the most computationally efficient form of matter already existed in the form of the human brain. In such a case, the ideal future would correspond with an extraordinarily vast number of humans living excellent lives. Thus, if the human brain is in effect at the Landauer Limit in terms of energy efficiency, and the Landauer Limit holds against efforts towards reversible computing, we can argue in favour of this desirable human-filled future.
In reality, due to entropy, it is energy that ultimately constrains the number of sentient entities that can populate the universe, rather than space, which is much more vast and largely empty. So, energy efficiency would logically be much more critical than density of matter.
This also has implications for population ethics. Assuming that entropy cannot be reversed, and the cost of living and existing requires converting some amount of usable energy into entropy, then there is a hard limit on the number of human beings that can be born into the universe. Thus, more people born at this particular moment in time implies an equivalent reduction of possible people in the future. This creates a tradeoff. People born in the present have potentially vast value in terms of influencing the future, but they will likely live worse lives than those who are born into that probably better future.
Interesting philosophical implications aside, the shrinking gap between GPU efficiency and the human brain sets a potential timeline. Once this gap in efficiency is bridged, it theoretically makes computers as energy efficient as human brains, and it should be possible at that point to emulate a human mind on hardware such that you could essentially have a synthetic human that is as economical as a biological human. This is comparable to the Ems that the economist Robin Hanson describes in his book, The Age of Em. The possibility of duplicating copies of human minds comes with its own economic and social considerations.
So, how far away is this point? Given the trend observed in GPU efficiency growth, a doubling occurs about every three years. An order of magnitude is about 3.3 doublings, or roughly ten years, so the two and a half orders of magnitude that currently separate existing GPUs from the human brain amount to about 8.3 doublings, or roughly twenty-five years. We can thus roughly anticipate reaching parity around the middle of the century, with the Landauer Limit being reached shortly thereafter.
Most AI safety timelines are much sooner than this however, so it is likely that we will have to deal with aligning AGI before the potential boost that could come from having synthetic human minds or the potential barrier of the Laudauer Limit slowing down AI capabilities development.
In terms of future research considerations, a logical next step would be to look at how quickly the overall power consumption of data centers is increasing and also the current growth rates of electricity production to see to what extent they are sustainable and whether improvements to energy efficiency will be outpaced by demand. If so, that could act to slow the pace of machine learning research that relies on very large models trained on massive amounts of compute. This is in addition to other potential limits, such as the rate of data generation for large language models, which depend on massive datasets of essentially the entire Internet at this point.
The nature of current modern computation is that it is not free. It requires available energy to be expended and converted to entropy. Barring radical new innovations like practical reversible computers, this has the potential to be a long-term limiting factor in the advancement of machine learning technologies that rely heavily on parallel processing accelerators like GPUs.
One thing I’ve learned from observing people and society is that the vast majority of folks are egoistic, or selfish. They tend to care about their own happiness and are at best indifferent to the happiness of others unless they have some kind of relationship with a person, in which case they care about that person’s happiness insofar as keeping them happy affects their own. This is the natural, neutral state of affairs. It is unnatural to care about other people’s happiness for their own sake, as ends in themselves. We call such unnatural behaviour “altruism”, and tend to glorify it in narratives but avoid actually being that way in reality.
In an ideal world, all people would be altruistic. They would equally value their own happiness and the happiness of each other person because we are all persons deserving happiness. Instead, reality is mostly a world of selfishness. To me, the root of all evil is this egoism, this lack of concern for the well-being of others that is the norm in our society.
I say this knowing that I am a hypocrite. I say this as someone who tries to be altruistic at times, but is very inconsistent with the application of the principles that it logically entails. If I were a saint, I would have sold everything I didn’t need and donated at least half my gross income to charities that help the global poor. I would be vegan. I would probably not live in a nice house and own a car (a hybrid at least) and be busy living a pleasant life with my family.
Instead, I donate a small fraction of my gross income to charity and call it a day. I occasionally make the effort to help my friends and family when they are in obvious need. I still eat meat and play computer games and own a grand piano that I don’t need.
The reality is that altruism is hard. Doing the right thing for the right reasons requires sacrificing our selfish desires. Most people don’t even begin to bother. In their world view, acts of kindness and altruism are seen with suspicion, as having ulterior motives of virtue signalling or guilt tripping or something else. In such a world, we are not rewarded for doing good, but punished. The incentives favour egoism. That’s why the world runs on capitalism after all.
And so, the world is the way it is. People largely don’t do the right thing, and don’t even realize there is a right thing to do. Most of them don’t care. There are seven billion people in this world right now, and most likely, only a tiny handful of people care that you or I even exist, much less act consistently towards our well-being and happiness.
So, why am I bothering to explain this to you? Because I think we can do better. Not be perfect, but better. We can do more to try to care about others and make the effort to make the world a better place. I believe I do this with my modest donations to charity, and my acts of kindness towards friends and strangers alike. These are small victories for goodness and justice and should be celebrated, even if in the end we fall short of being saints.
In the end, the direction you go in is more important than the magnitude of the step you take. Many small steps in the right direction will get you to where you want to be eventually. Conversely, if your direction is wrong, then bigger steps aren’t always better.