‘A single death is a tragedy; a million deaths is a statistic.’

These words are commonly attributed to Joseph Stalin, though there is no reliable evidence of this.

The line's persistence has less to do with its origin than with its accuracy. It captures a basic feature of how human judgement works: what can be seen and felt as a particular case registers as morally significant, while what exists at scale tends to recede into abstraction.

A dog falls ill. The diagnosis is serious but not immediately terminal. There are options: scans, specialist consultations, a course of treatment that might extend its life – months, perhaps longer if things go well. The costs escalate quickly: thousands become tens of thousands. The outcome is uncertain, but not hopeless. So the decision is made to proceed.

Set out differently, the decision takes on a different shape. The same resources could have funded vaccinations, supported a shelter, or provided basic care for many animals rather than intensive care for one. The choice is not between care and indifference, but between the identifiable case and the statistical many. What becomes apparent is that the former, somehow, carries more weight. What can be seen and named tends to outweigh what can only be counted.

This is a feature of how moral attention works. Responsibility attaches more easily to a single, visible case than to a dispersed set of outcomes. What is immediate and legible is treated as morally salient; what is diffuse and mediated is treated as background. The particular exerts a pull that the aggregate does not. That asymmetry – between what can be seen and what must be inferred – does not remain confined to private decisions.

Consider Bill Gates. There are two familiar ways of telling his story. The first concerns the building of Microsoft into one of the most influential firms of the late twentieth century. The second concerns the redirection of a substantial portion of that wealth through the Bill & Melinda Gates Foundation.

The second is easier to read morally. It is possible to point to specific programmes, to count vaccinations, to track reductions in disease burden, and to attribute outcomes to deliberate acts of giving. The structure is clear: a problem is identified, resources are deployed, a result follows. It fits comfortably within a familiar understanding of what it means to do good. It is visible, intentional, and narratively satisfying.

Harder to place

The first is harder to place in those terms. The development of widely used operating systems, the standardisation of software environments, and the spread of personal computing do not present themselves as discrete moral acts. Their effects are diffuse. They accumulate over time and across contexts. They are mediated through millions of users, firms, and institutions. The line between action and outcome is extended, and often obscured.

There are, of course, well-rehearsed criticisms of that period – questions of market dominance, of strategic behaviour that constrained competitors, of the effects of lock-in. Those concerns are real, but they are not the point here. What matters is the difference in how the two phases are perceived, and the criteria by which they are judged.

If the effects of the first phase are considered in aggregate, they are impossible to ignore. The spread of affordable computing altered the fundamental conditions under which economic and social activity takes place. Communication became faster and cheaper. Information could be stored, retrieved, and transmitted at scale. Coordination across distance became routine. Businesses reorganised around digital systems. Entire sectors emerged that depended on the existence of a broadly shared computing infrastructure.

These changes did not appear as a single, decisive improvement. They appeared as a series of incremental shifts: marginal gains in efficiency, small reductions in cost, gradual increases in speed. Each, on its own, might seem modest.

Taken together, they reshaped the baseline of what could be done, and how quickly it could be done. Their consequences extend into domains not immediately associated with computing. Healthcare systems rely on digital records and data analysis. Supply chains depend on software coordination. Educational materials are created, distributed, and accessed electronically. The effects are not contained within a single field, and they do not present themselves as outcomes that can be straightforwardly counted.

By contrast, the work of the foundation operates within those conditions. It targets specific problems – infectious diseases, gaps in educational provision, failures of access – and seeks to improve outcomes in identifiable ways. The scale is substantial, and the impact often measurable. Lives are saved, diseases are reduced, resources are delivered where they were previously absent. The form of the activity aligns with the form of the moral judgement: intention, intervention, outcome.

Particular deficiencies

The difference between these two forms of impact is not merely one of scale, but of structure. One reshapes the environment within which social and economic activity takes place. The other operates within that environment to address particular deficiencies. One produces effects that are diffuse, compounding, and difficult to attribute. The other produces effects that are targeted, bounded, and readily traceable.

At this point, the argument can be stated more directly. The distinction is not simply descriptive. It has consequences for how we think about wealth and for the standards we apply when judging it. The activities that attract the most moral attention are not necessarily those that have the largest effect, but those that produce visible acts of giving. We end up treating philanthropy as the benchmark of moral worth, while treating the processes that shape living standards, productivity, and opportunity as morally secondary or suspect by default. The result is a form of public judgement that tracks legibility rather than impact, and that rewards the appearance of doing good more reliably than the conditions that make large-scale improvements possible. This is why debates about wealth so often feel unserious: they are organised around visible gestures rather than underlying effects.

The demand placed on wealth is not simply that it generates value, but that it be seen to redistribute it. The question that comes to dominate is not ‘what has this activity done to the world?’ but ‘where is the giving?’ Philanthropy becomes the visible proof of moral worth, and in its absence, suspicion fills the gap. The underlying activities – those that shape productivity, expand capacity, and alter the conditions under which people live and work – are treated as morally ambiguous, even when their effects are substantial.

This does not imply that system-level activity is unambiguously beneficial. The point is not to replace one simplification with another.

Figures such as Elon Musk illustrate the complication. Through companies such as Tesla and SpaceX (not to mention Neuralink), he is engaged in activities that reshape industries and alter cost structures.

Electric vehicles have moved from niche to mainstream consideration. Launch costs have fallen dramatically, changing the economics of space access. These are system-level changes with wide-ranging implications.

At the same time, other areas of activity – particularly those connected to large-scale information platforms such as X (formerly Twitter) – have produced outcomes that are more contested. The effects on public discourse, information flow, and institutional trust are debated, and not easily resolved. The overall balance is not straightforward. What this shows is not that such activity is good or bad in any simple sense, but that its consequences are large, distributed, and difficult to evaluate using the same criteria that apply to targeted interventions. These are not moral acts in the philanthropic sense; they are structural acts with moral consequences.

Relevant distinction

At that level, the relevant distinction is not between doing good and doing harm in the philanthropic sense, but between targeted interventions with traceable outcomes and system-level activity whose consequences are difficult to predict and unevenly distributed. If the largest effects are associated with system-level activity, then it is also at that level that the greatest risks lie. These are not activities that produce more good in a straightforward way. They produce more consequences, full stop.

Thus, the appeal of targeted interventions lies in their clarity. A problem is defined, a response is designed, and an outcome can be observed. The moral vocabulary is well suited to this form: intention, action, result. There is a directness that makes evaluation possible, and that aligns with how moral judgements are typically formed.

The difficulty is that this clarity can mislead when taken as a guide to overall impact. The mechanisms that produce large-scale changes in living standards, health outcomes, and opportunities for advancement are often indirect. They involve the accumulation of small efficiencies, the spread of technologies, and the gradual reorganisation of economic and social relations. These processes do not lend themselves to narrative. They operate below the level at which moral attention is typically directed.

Returning to the initial example clarifies the point. The decision to devote substantial resources to the care of a single animal is intelligible because the object of concern is clear. It is possible to see what is being done, and for whom. The alternative – allocating those resources across a larger number of animals – is less compelling at the level of immediate experience, even if it may produce a greater aggregate benefit. The difference lies not in the presence or absence of concern, but in how that concern is structured.

Something similar occurs in the evaluation of wealth and its uses. Acts of giving are structured in a way that aligns with familiar moral categories. They are intentional, visible, and attributable. System-building activities are not. They are mediated through markets, institutions, and technologies. Their benefits are dispersed and often delayed. As a result, they tend to be assessed in terms that are not explicitly moral, even when their consequences are profound.

Misreading

The result is not simply a misreading of individual actors, but a distortion in how we understand progress itself. We become more attuned to acts of redistribution than to the conditions that make large-scale improvements possible. In doing so, we risk mistaking the visible signs of doing good for the sources of it.

What we most readily recognise as good is not necessarily what does the most good, but what allows us to see good being done. In a world shaped by large systems, that preference leaves us poorly equipped to recognise where the most consequential changes actually occur.


The views of the writer are not necessarily the views of the Daily Friend or the IRR.


Peter Swanepoel is a historian and writer affiliated with the University of Johannesburg’s History Department, where he works under the supervision of Professor Thembisa Waetjen. His research focuses on the politics and institutional cultures of South African cycling under apartheid. He is the co-author of The Daisy Spy Ring: How South African Intelligence Agents Infiltrated and Disrupted the SA Communist Party (Naledi, 2025) and is currently completing doctoral research with funding from the National Research Foundation. He also writes on politics, history, and society, with an emphasis on institutional analysis, historical context, and moral clarity.