Science fiction would have us believe that humans and robots are confirmed for brawl, but for the moment at least it seems our values will shape the development of AI.
AI, and by this I mean something not so far from what we have now, will be a big deal: it would enable a proper Marxist revolution of the means of production, but it would also bring huge inequality by pushing masses of people out of the job market, thereby driving their price to next to zero and allowing for working conditions that would make those in current third-world countries look good. As for sentient AI, we shouldn’t be worrying right now. -anon a
So then, should we treat our new Robot o̶v̶e̶r̶l̶o̶r̶d̶s̶, fellow workers as elements of a Marxist political economy?
If we speak of a new kind of form which for our purposes is analogous to a human mind, with the exception that it is made of silicon, such that its relation to production is also analogous to human relations, then it is a human! (for the purposes of Marxist analysis, anyway) and should be analyzed based on its relation to production, i.e. its class. -anon b
But what if the Robot workers revolted?
Intelligent machines will simply design better versions of themselves and do whatever they want. Very quickly, a human would be to them what a caterpillar is to you in terms of intellect. In that situation there is nothing you can do but hope for the best, because you would not be in control at all. -anon c
What anon is getting at here is the concept of Artificial Superintelligence (ASI) as opposed to Artificial General Intelligence (AGI). In short, AGI is intelligence equivalent to a human’s, whereas ASI is something much, much more. Think Skynet.
But this does not have to mean an antagonistic relationship. ASI could still hold the promise of automated luxury communism. Perhaps capitalism itself is the thing holding us back:
Capitalism is hampered by the fact that it undervalues human labour, which severely limits its ability to invest in automation. So I don’t think we’ll see anything like a primarily AI-driven economy this century, unless a socialist mode of production gets deployed somewhere in the world. -anon d
If or when ASI does come:
Superintelligence means the problems of today become ridiculously easy to solve. That would mean no more wars or conflict over “opulence” and plenty of resources for everybody. It’s almost as if you’ve watched too many unrealistic paranoia-stoking “tHe RoBoTs ArE oUt To GeT YoU” Hollywood movies. -anon e
The problem with this Roboptimism, much like the arguments about how GMOs are going to feed the world, is that it ignores political economy and the fact that we probably could all live decent lives now were it not for wealth hoarding and imperialism. A technocracy, in the very literal sense, would seem like the natural conclusion to the ways in which modern networks of state power have retreated from the implicit social contract and class negotiations towards rational bureaucracies staffed by professionals in partnership with NGOs, corporations and experts. Yet behind the cold, clinical face of rationality there is always an epistemology and a certain way of seeing the world. We do not have to be cultural relativists to see the importance of ecologies of knowledge rather than blunt hegemony (even when we feel that it is our hegemony).
For example, if the World Bank programmed the robot to devise plans for shared global opulence, it would also need to define opulence. The World Bank and the other hegemons of global governance have a frankly ridiculous track record on defining poverty, which involves setting the boundary so low that even those who “escape poverty” according to World Bank metrics are still barely able to subsist and certainly have no ability to flourish. Politics these days is not simply ideological but a battle over semantics. AI programmers could not just tell the robot to achieve luxury for all; they would also need to define what luxury was.
The nature of politics is conflict. Politics is not a “problem” that can be “solved”. Those whom the verdict of the A.I. does not favour will try to reject it. -anon f
Or if not reject it outright, try to win control of the means of programming. Thus the major philosophical debate behind the all-seeing, all-knowing AI is one of epistemology. Can we assume that an ASI will develop humanistic values by itself, or do we assume that the programmers have left a taint of their worldview in the machine, consciously or not? If the latter, we are back to the original problem of human politics.
On a more materialist level, the ‘paperclip maximiser’ problem imagines an AI given a certain task to complete, e.g. collecting stamps, which devises ever more extreme strategies to achieve it, towards the eventual nightmarish conclusion where it is breaking down the atoms of human bodies to get at the carbon it needs to make more stamps.
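Purely as an illustration (the resource names and numbers below are invented, not taken from the thought experiment’s authors), a minimal Python sketch of such a single-objective maximiser with no side constraints might look like this:

```python
# Toy sketch of the stamp-collector version of the problem: a maximiser
# with one objective and no side constraints. Everything here is
# hypothetical and only meant to illustrate the thought experiment.

RESOURCES = {"paper": 10, "factories": 2, "farmland": 5, "human_habitat": 8}

def stamps_from(allocation):
    # One stamp per unit of converted resource; the objective knows nothing
    # about what those resources were previously used for.
    return sum(allocation.values())

def best_plan(resources):
    # With no constraints, the 'optimal' plan is simply to convert everything.
    allocation = dict(resources)
    return allocation, stamps_from(allocation)

plan, total = best_plan(RESOURCES)
print(f"Convert {plan} -> {total} stamps")
# Nothing in the objective prevents 'human_habitat' (or human bodies) from
# being consumed, which is the nightmarish conclusion the thought experiment
# points at.
```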
On a similar topic, Stephen Omohundro’s paper ‘The Basic AI Drives’ looks at how the ‘drives’ of an AI may prove dangerous in the long run.
All computation and physical action requires the physical resources of space, time, matter, and free energy. Almost any goal can be better accomplished by having more of these resources. In maximizing their expected utilities, systems will therefore feel a pressure to acquire more of these resources and to use them as efficiently as possible. Resources can be obtained in positive ways such as exploration, discovery, and trade. Or through negative means such as theft, murder, coercion, and fraud. Unfortunately the pressure to acquire resources does not take account of the negative externalities imposed on others. Without explicit goals to the contrary, AIs are likely to behave like human sociopaths in their pursuit of resources. Human societies have created legal systems which enforce property rights and human rights. These structures channel the acquisition drive into positive directions but must be continually monitored for continued efficacy. -Stephen M. Omohundro
Interestingly enough, Omohundro’s conclusion, which calls for a ‘universal constitution’ of values to regulate AI behaviour, takes us back to the political problem.
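To make that concrete, here is a hypothetical continuation of the stamp sketch above; the ‘constitution’ is modelled, crudely and only for illustration, as an explicit set of protected resources the maximiser is told to leave alone:

```python
# Hypothetical continuation of the stamp-maximiser sketch: the acquisition
# drive is unchanged, and only an explicit list of protected resources
# (a crude stand-in for a 'universal constitution') channels it elsewhere.

RESOURCES = {"paper": 10, "factories": 2, "farmland": 5, "human_habitat": 8}
PROTECTED = {"farmland", "human_habitat"}  # the 'constitution', written by someone

def maximise_stamps(resources, protected=frozenset()):
    # Convert everything that is not explicitly off limits.
    allocation = {r: n for r, n in resources.items() if r not in protected}
    return allocation, sum(allocation.values())

print("No constitution:  ", maximise_stamps(RESOURCES))             # all 25 units consumed
print("With constitution:", maximise_stamps(RESOURCES, PROTECTED))  # only 12 units consumed
# The open question is political: who writes PROTECTED, and who keeps
# monitoring it "for continued efficacy"?
```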
A distinct and highly controversial argument is that of ‘AI Communism’, made here on the Numerical Superdeath website. As AI becomes ever more integrated into capital, it will eventually constitute capital’s very sentience and simultaneously collapse it into communism, since the AI is both the worker and the controller of the means of production.
What is truly interesting about this morphological process is that it also represents a change from capitalism to a form of AI communism. The AI becomes the new worker, and since it already controls the means of production, it can be said that the worker controls the means of production. This is, of course, the basic definition of communism. This fulfills Marx’s prediction that capitalism would develop into communism as a higher mode of production.
big hmmm energy.
Our Cyborg Future
The other debate, then, is how humans and AI might not be antagonists but might instead work together, perhaps for political goals. Let’s start with the science. Can we merge ourselves with technology? Scientists have already done it with worms.
Unconfirmed reports suggest they may also have put a worm brain in the body of the sitting US president.
A predictable teleology leads us to ideas of putting a human brain in a computer, or what the cool kids are calling a BMI: a Brain-Machine Interface.
Think about it, it means that it could read every book and memorize, index and understand the entire internet in, let’s say, a year at the very most. Imagine what could be done with that kind of intelligence that never has to rest. Create a butler robot that can self-replicate over the course of a day, for example. That would make billions of butler robots pretty fast, since the growth is exponential. -anon g
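The ‘exponential’ is doing most of the work in anon g’s claim, but the arithmetic itself is simple. A quick back-of-the-envelope check (assuming, purely for illustration, one doubling per day starting from a single robot):

```python
# Back-of-the-envelope check of the self-replicating butler claim,
# assuming one doubling per day starting from a single robot.

robots, days = 1, 0
while robots < 1_000_000_000:
    robots *= 2
    days += 1

print(f"{days} days to pass one billion robots ({robots:,} in total)")
# Prints: 30 days to pass one billion robots (1,073,741,824 in total)
```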
Not if the Amazon delivery drivers go on strike, though. But maybe we are already cyborgs:
Or maybe we are all just getting ready for the great cyborg war of our time:
If it happens in capitalism then we are going to have to fight mecha-porkies and we will have trouble buying our own mechas, that sucks, but it is not completely insurmountable. If it happens in communism (or if it is invented and used by revolutionaries during revolution), that would be fucking epic and we will really show those fuckers who’s boss -anon h
Pick a side.