Sunday, 4 July 2021

Algorithmic governmentality and prospects of emancipation

 

In memory of Alain Desrosières and of inspiring conversations. Our work bears the marks of his invaluable suggestions.

The new opportunities for statistical aggregation, analysis and correlation afforded by big data are taking us away from traditional statistical perspectives focused on the average man, towards “capturing” “social reality” as such, directly and immanently, from a perspective devoid of any relation to “the average” or “the norm” [1]. “A-normative objectivity”, or even “tele-objectivity” (Virilio, 2006: 4), the new regime of digital truth, is exemplified by multiple new automatic systems modelling “social reality” [2], both remotely and in real time, compounding the contextualization and automatic personalization of interactions surrounding security, health, administration, business, etc. [3] We here assess to what extent, and with what consequences, the “tele-objectivity” of these algorithmic uses of statistics allows those systems to become mirrors of the most immanent normativities [4] in society, informing all measurement or relation to the norm, all convention and evaluation, and to what extent it allows those systems to contribute to (re)producing and multiplying this immanent normativity (immanent in life itself, Canguilhem would say), albeit by obscuring social normativities, silencing them as far as possible because they cannot be translated digitally.

This independence from any pre-existing norm needs some clarification. By referring to the a-normativity of algorithmic governmentality, we are not claiming that its technical frameworks arise spontaneously from the digitized world, autonomously and independently of all human intentionality or technological “script”, nor that the security, marketing or entertainment applications (to mention but a few) which integrate these self-learning algorithmic systems are only responding to a demand from the human actors concerned [5]. The critique we develop regarding algorithmic governmentality neither overlooks nor invalidates the science and technology studies perspective; we merely focus on something other than the mechanisms of co-construction between technological systems and human actors. We simply argue here that datamining, used for profiling purposes (irrespective of the application) and following a correlational rationale to rebuild singular cases fragmented by coding, relates these singular cases not to a general norm, but only to a system of eminently evolving relations between various measurements that are not reducible to any average [6]. This emancipation from all forms of average stems essentially from the self-learning nature of these systems, and can be considered inherent to contemporary normative action.

From this point of view we can also say that algorithmic governmentality departs from the conventional origin of statistical information, as described by Alain Desrosières (1992: 132): “Statistical information does not fall from the sky like a pure reflection of a pre-existing ‘reality’. Quite the contrary, it can be seen as the provisional and fragile consecration of a series of conventions of equivalence between beings which multiple uncoordinated forces are constantly trying to differentiate and separate”. This conventional origin of statistical information means that “the tension between the fact that this information claims to be a reference of debate and the fact that it can however always be challenged and thus become the object of debate presents major challenges for thinking about the conditions of the possibility of a public space”. Because they are no longer rooted in any convention, the particular uses of statistics involved in datamining operations avoid this pitfall. However, as we shall see further on, that does not mean they generate public space: on the contrary, under cover of “personalizing” information, service and product offers, the algorithmic governmentality era is rather witnessing a colonization of public space by a hypertrophied private sphere. So much so that there are fears that new forms of information filtering will result in forms of informational immunization conducive to a radicalization of opinions and the disappearance of shared experience (Sunstein, 2009). And this is without even mentioning the trend towards the systematic capturing of any available human attention for the benefit of private interests (the attention economy), rather than to foster democratic debate and serve the general interest.

We start by describing the functioning of statistics for decision making (decisional statistics), understood in very generic terms as the automated extraction of relevant information from massive databases for forecasting or exclusion purposes (consumption, risks, customer loyalty development, the definition of new customer bases, etc.). In order to bring this to light, we break down this statistical practice into three stages which are in fact blurred (and are actually all the more effective because they are blurred). Each time, we show how individual subjects are in fact avoided, to the extent that this creates a sort of statistical “double” of both subjects and “reality”. Second, after examining this statistical “double” and indicating that at this stage it hinders any subjectification process, we show that algorithmic governmentality thus focuses not on individuals, on subjects, but on relations. Finally, based on this observation, we show how relations themselves are transformed, to the extent that they are paradoxically substantified and represent an extraction from the becoming, and therefore an obstacle to the individuation process – rather than being strongly embedded in that process. The becoming and the individuation processes are a matter of “disparation”, in other words processes of integration of disparities or differences into a coordinated system. However, more fundamentally still, they are a matter of “disparateness”: a heterogeneity of orders of magnitude and a multiplicity of regimes of existence constantly stifled by algorithmic governmentality through the closing in of (digital) reality on itself [7].

The three “stages” of algorithmic governmentality

The collection of big data and the constitution of data warehouses

The first stage consists of the collection and automated storage of unfiltered mass data – what can be called dataveillance – integral to big data. The data are available in massive quantities, from various sources. Governments collect them for the purposes of security, control, resource management, spending optimization, etc. Private companies collect large quantities of data for marketing and advertising purposes, to customize offers, to improve their stock management or their service offers – in short, to improve their sales efficiency and therefore their profits. Scientists collect data for knowledge acquisition and improvement purposes. Individuals themselves willingly share “their” data on social networks, blogs, “mailing lists”, etc. All these data are stored electronically in “data warehouses” with virtually unlimited storage capacities, potentially accessible at any time from any computer connected to the Internet, anywhere in the world. These data are collected and stored as much as possible by default, devoid of any prediction about the specific end uses of this collection, in other words the purposes the data will serve once correlated with other data. They consist of information that is abandoned rather than handed over, traces left behind rather than data shared; they do not seem to be “stolen”, and they also appear absolutely ordinary and scattered. Together, all these factors eliminate or at least conceal any end goal; they minimize the subject’s involvement, and therefore the consent which can be required for this information sharing, thus removing any form of intentionality.
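To make this first stage more concrete, here is a minimal, purely illustrative sketch in Python – the field names and the in-memory “warehouse” are our own assumptions, not a description of any actual system – of what capture “by default”, with no purpose declared at collection time, amounts to:

    import json
    import time
    import uuid

    def capture(event_type, payload, sink):
        """Store any interaction as a raw, decontextualized trace."""
        record = {
            "id": str(uuid.uuid4()),   # an opaque identifier, not a named subject
            "ts": time.time(),         # when the signal occurred, but not why
            "type": event_type,        # e.g. "click", "query", "gps_fix"
            "payload": payload,        # the bare signal, stripped of its context
            # deliberately no "purpose" field: end uses are decided later, if at all
        }
        sink.append(json.dumps(record))

    warehouse = []                     # stands in for a virtually unlimited data warehouse
    capture("click", {"element": "buy_button"}, warehouse)
    capture("gps_fix", {"lat": 50.85, "lon": 4.35}, warehouse)
    print(len(warehouse), "traces stored, no stated purpose")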

These data therefore seem to constitute a generalized digital behaviourism (Rouvroy, 2013a) insofar as they plainly express the multiple facets of reality, breaking it down fully, but in a perfectly segmented way, without any collective meaning other than that of unpacking reality. This seems to be the most novel phenomenon: be it to keep the trace of a purchase, of a trip, of the use of a word or of a language, each element is brought down to its most basic state, in other words both stripped of the context in which it arose and reduced to “data”. A piece of data is then just a signal cleansed of any inherent meaning – which is of course why we tolerate leaving traces, but it is also what seems to support their claim to perfect objectivity: data so heterogeneous, so unmotivated, so material and so free of subjectivity cannot lie! We should point out here that the very evolution of technological capacities reinforces this kind of objectivity of data escaping all subjectivity: our software programs are now able to recognize emotions and turn them into data, or to translate facial movements or skin tones into statistical data, for example to measure a product’s appeal, the (sub-)optimal layout of goods on a stall, or a passenger’s suspicious behaviour. What is interesting is that the main characteristic of such data is that they are perfectly innocuous, can remain anonymous and are non-controllable. It follows that we quite readily give them up, for they bear no meaning (at least as long as they are not correlated), are far less intrusive than a loyalty card, and do not seem to lie; in other words, they can be considered to be perfectly objective! This harmlessness and objectivity are both due to a sort of avoidance of subjectivity.

Data processing and knowledge production

The second stage is that of datamining as such, in other words the automated processing of these big data to identify subtle correlations between them. It seems crucial to note here that this is therefore a matter of knowledge production (statistical knowledge comprised of simple correlations) based on information that is unsorted and therefore perfectly heterogeneous. This knowledge production is automated, meaning that it requires minimal human intervention, and is uninformed by any pre-existing hypothesis (unlike traditional statistics used to substantiate a hypothesis), so again avoiding all forms of subjectivity. The purpose of what is called machine learning is ultimately to enable the production of hypotheses directly from the data themselves. Thus, we are once again faced with the idea of knowledge which could hold absolute objectivity, by being removed from all subjective intervention (all hypothesis formulation, all sorting between what is relevant and what is thought to be just “noise”, etc.). Norms seem to emerge directly from reality itself. These norms, this “knowledge”, are however “only” comprised of correlations [8]. This is not a problem per se if we remember that the very condition of a scientific ethos and of a political ethos is to preserve doubt, to remain wary of the sufficiency of correlations, to maintain the distinction between correlation and cause, to be wary of the self-performative “effects” of correlations (their retroactive capacity), to ensure that decisions producing legal effects on individuals, or significantly affecting them, are not made solely on the basis of automated data processing [9], and to consider that politics (particularly the concern for mutualizing risks) must fundamentally refuse to act on the sole basis of correlations. It seems important to remember this, given the trend towards a world that seems to be functioning increasingly as if it were itself made of correlations, as though it were enough to establish these to ensure that it ran smoothly [10].
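As an illustration only – the attribute names and figures below are invented for the example, and actual datamining systems are of course far more sophisticated – the following sketch shows the logic of hypothesis-free correlation mining: every pair of attributes is scanned, and whatever exceeds a threshold is retained as a “pattern”, without any prior model of which pairs should matter:

    from itertools import combinations
    from statistics import mean, pstdev

    def pearson(xs, ys):
        """Population Pearson correlation between two equally long series."""
        mx, my, sx, sy = mean(xs), mean(ys), pstdev(xs), pstdev(ys)
        if sx == 0 or sy == 0:
            return 0.0
        return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) * sx * sy)

    def mine_correlations(table, threshold=0.8):
        """Keep every attribute pair whose correlation exceeds the threshold."""
        return [(a, b, round(pearson(table[a], table[b]), 2))
                for a, b in combinations(table, 2)
                if abs(pearson(table[a], table[b])) >= threshold]

    # Invented columns: no hypothesis links them in advance.
    data = {
        "night_logins": [1, 4, 2, 7, 6, 3],
        "basket_value": [10, 42, 18, 75, 60, 30],
        "shoe_size":    [38, 41, 44, 39, 42, 40],
    }
    print(mine_correlations(data))   # the "knowledge" is whatever correlations emerge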

Action on behaviours

In order to properly understand what constitutes the algorithmic profiling discussed here, it is important to grasp the crucial difference between, on the one hand, information at the individual level, which more often than not is observable or perceptible by the individual concerned, and, on the other hand, the knowledge produced through profiling. Most of the time, this knowledge is not available to individuals and they cannot perceive it, but it is nevertheless applied to them in such a way as to infer knowledge or probabilistic predictions regarding their preferences, intentions and propensities which would otherwise not be evident (Van Otterloo, 2013).

The third stage consists in using this probabilistic statistical knowledge to anticipate individual behaviours and associate them with profiles defined on the basis of correlations discovered through datamining. This stage of applying the norm to individual behaviours, the most evident examples of which can be found in a great variety of spheres of human existence (obtaining credit, deciding on a surgical operation, pricing an insurance contract, suggesting targeted purchases on online shops, etc.), is less relevant here. We will simply note three things. First, predictive effectiveness is all the greater when it results from the aggregation of big data, in other words data that are “simply” capable of reflecting the diversity of reality itself [11]. Second, action based on the anticipation of individual behaviours could in the future be increasingly limited to an intervention on their environment, especially if the environment itself is reactive and intelligent – that is, if it collects data in real time through multiple sensors, and shares and processes them to constantly adapt to specific needs and dangers – which is already the case at least during the significant part of life that individuals spend online. Thus, once again, any form of direct constraint on individuals is avoided; instead, their disobedience or certain forms of marginality are rendered ever less probable at the level of their very environment. Third, the profile “linked” to an individual’s behaviour could itself be tailored perfectly efficiently, by multiplying the correlations used, to the point of seeming to avoid all discriminatory categories, and even of being able to take into account what is most specific to each individual, what is most distant from big numbers and averages. In short, this presents the possibility of a seemingly perfectly “democratic” normativity, devoid of any reference to general classes and categories – in fact, algorithms’ blindness to socially experienced categorizations (social, political, religious, ethnic, gendered, etc.) is the recurrent argument used by advocates of these algorithms replacing human evaluation (particularly in airports) (Zarsky, 2011). In their seemingly non-selective way of relating to the world, datamining and algorithmic profiling appear to take into consideration the entirety of each reality, right down to its most trivial and insignificant aspects, putting the whole world on a par – the businessman and the charwoman, the Sikh and the Icelander. The aim is no longer to exclude anything that does not fit the average, but to avoid the unpredictable, to make sure that everyone is truly themselves.
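Purely by way of illustration – the profiles and rules below are hypothetical, chosen by us rather than drawn from any existing system – a sketch of this third stage, in which fragments of traces are matched against profiles and the environment (what is offered or shown) is adjusted rather than any order being addressed to the person, could look like this:

    # Hypothetical profiles: each one is a combination of trace fragments
    # (correlations discovered beforehand), not a socially meaningful category.
    PROFILES = {
        "churn_risk":   {"fewer_logins", "support_complaint"},
        "high_spender": {"frequent_purchases", "premium_browsing"},
    }

    def infer_profiles(trace_fragments):
        """Attach every profile whose defining fragments are all present."""
        fragments = set(trace_fragments)
        return [name for name, needed in PROFILES.items() if needed <= fragments]

    def adapt_environment(profiles):
        """Adjust the realm of possibilities offered, not the person's will."""
        actions = []
        if "churn_risk" in profiles:
            actions.append("display retention discount")
        if "high_spender" in profiles:
            actions.append("surface premium offer first")
        return actions or ["default layout"]

    traces = ["fewer_logins", "support_complaint", "click"]
    print(adapt_environment(infer_profiles(traces)))   # ['display retention discount']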

Governance without a subject, but not without a target?

As noted above, the three stages described merge with one another, and their normative functioning is rendered especially powerful and processual by the fact that they mutually reinforce one another (further concealing end uses, further reducing any possibility of intentionality, adapting even more closely to our own reality, etc.). We thus use the term algorithmic governmentality to refer very broadly to a certain type of (a)normative or (a)political rationality founded on the automated collection, aggregation and analysis of big data so as to model, anticipate and pre-emptively affect possible behaviours. According to the general tenets of statistical thinking [12], the apparent shifts currently produced by the transition from statistical governance to algorithmic governance, which allegedly give meaning to the phenomenon of rarefaction of subjectification processes, are therefore as follows. First, there is an apparent individualization of statistics (with the evident antinomy thus expressed), claimed to be no longer conveyed (or no longer seeming to be conveyed) by references to the average man, ushering in the idea of one becoming one’s own profile, automatically attributed and evolving in real time. Second, there is growing concern about avoiding the danger of tyrannical statistics that might reduce statistical objects to cattle, by making sure that this statistical practice develops as though our consent were given, since it is because we are each unique that algorithmic governance claims to address each person through their profile. Rather than agreement or even consent, this is a matter of adhesion by default to a normativity as immanent as that of life itself. It is thus argued that inherent to contemporary statistical practice is the expression of individuals’ tacit adhesion. Hence a possible decline of subjectifying reflexivity, and the reduction of opportunities to challenge forms of “knowledge” production based on datamining and profiling. Algorithmic governmentality produces no subjectification; it circumvents and avoids reflexive human subjects, feeding on infra-individual data which are meaningless on their own, to build supra-individual models of behaviours or profiles without ever involving the individual, and without ever asking them to describe themselves, what they are or what they could become. The moment of reflexivity, critique and recalcitrance necessary for subjectification to form seems constantly to become more complicated or to be postponed (Rouvroy, 2011). Algorithmic governmentality, with its perfect adaptation in “real time”, its “virality” (the more it is used, the more the algorithmic system is refined and improves, since every interaction between the system and the world translates into a recording of digitized data, a correlative enrichment of the “statistical base”, and an improvement of the algorithms’ performance) and its plasticity, renders the very notion of “misfire” meaningless; in other words, a misfire cannot “jeopardize” the system, since it is immediately re-ingested to further refine behavioural models or profiles. Moreover, depending on the objective of the algorithmic systems’ application – for example fraud, crime or terrorism prevention – “false positives” will never be interpreted as “misfires”, since the system follows a screening rather than a diagnostic approach: the aim is to miss no true positives, irrespective of the rate of false positives.
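The screening logic evoked at the end of the previous paragraph can be illustrated with a deliberately simplified sketch (the risk scores are synthetic, invented for the example): the decision threshold is pushed down until virtually no true positive is missed, whatever the resulting rate of false positives – the opposite of a diagnostic attitude:

    def screening_threshold(scored_cases, target_recall=0.99):
        """scored_cases: list of (risk_score, is_actual_positive) pairs."""
        positive_scores = sorted(score for score, positive in scored_cases if positive)
        if not positive_scores:
            return 0.0
        # lower the threshold until the required share of true positives is flagged
        cut = int((1 - target_recall) * len(positive_scores))
        return positive_scores[cut]

    cases = [(0.95, True), (0.80, True), (0.40, True),
             (0.60, False), (0.35, False), (0.30, False), (0.10, False)]
    threshold = screening_threshold(cases)
    flagged = [case for case in cases if case[0] >= threshold]
    print(f"threshold={threshold}: {len(flagged)} of {len(cases)} cases flagged")
    # every true positive is caught; the false positive swept up with them is simply accepted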

Of course, it is not the project of individualized and soft anticipation of behaviours that is surprising or concerning, whatever its extent. It is nevertheless worth highlighting the paradox, from the outset, of relying on non-intentional “apparatuses”, in other words a-signifying machines, to minimize or eradicate uncertainty, thus relinquishing the ambition of giving meaning to events. In fact, these are no longer necessarily treated as events, since each one can just as well be broken down into a network of data re-aggregated with other data, independently of the events as they occur and are perceived as such by human beings. Algorithmic governmentality is therefore constantly “shuffling the cards”, moving away from any “historical” or “genealogical” perspective (Rouvroy, 2013b).

Increasingly, “power” grasps the subjects of algorithmic governmentality no longer through their physical body, nor through their moral conscience – the traditional holds of power in its legal discursive form [13] – but through multiple “profiles” assigned to them, often automatically, based on digital traces of their existence and their everyday journeys. Algorithmic governmentality is quite close to what Foucault already had in mind with his concept of security apparatuses:


“The regulator of a milieu, which involved not so much establishing limits and frontiers, or fixing locations, as, above all and essentially, making possible, guaranteeing, and ensuring circulations: the circulation of people, merchandise, and air, etcetera”.
(Foucault, 2009: 51)

The fact that power has a digital rather than a physical “grasp” in no way means that individuals are ontologically and existentially reducible to networks of data that can be recombined by apparatuses, nor that they are totally under the grip of these apparatuses. It simply means that, irrespective of their capacity for understanding, willpower and expression, “power” no longer approaches them on the basis of these capacities, but rather on that of their “profiles” (as a potential fraudster, a consumer, a potential terrorist, a student with high potential, etc.). Algorithmic governmentality further exacerbates the ambivalences of our time regarding the question of individualization. Our era is often considered to be that of the victory of the individual, in the sense that an individualization of services is observed, due to the possibility afforded by statistical practices of closely targeting the needs and dangers specific to each individual. At the same time, it is also seen as an era in which individuals are jeopardized, as their intimacy, privacy, autonomy and self-determination are threatened by those very practices. Some even write about the risks of pure desubjectification. Both hypotheses – that of the individual at the centre of everything, and that of desubjectification – are, in our opinion, equally wrong. Let us see why.

Is personalization really a form of individuation?

IBM presents “individualized” marketing – “smart marketing” – as a revolution that is turning marketing and advertising into “consumer-oriented services”, sounding the great return of the customer-king who, placed at the heart of companies’ concerns, no longer even has to conceive of or express his or her desires, which are commands. In the words of Éric Schmidt, CEO of Google: “we know roughly who you are, roughly what you care about, roughly who your friends are [in other words, we know your ‘school of fish’]; the technology will be so good it will be very hard for people to watch or consume something that has not in some sense been tailored for them” (in other words, a seemingly individualized prediction would be possible). In fact, this form of individualization looks more like a hyper-segmentation and hyper-plasticity of commercial offers than a comprehensive consideration of the needs and desires specific to each person. The aim, of course, is precisely not so much to tailor the offer to individuals’ spontaneous desires (assuming such a thing exists), as to adapt those desires to the offer by tailoring sales strategies (the way of presenting the product, of pricing it, etc.) to each person’s profile. Thus, “dynamic pricing” strategies, or the adaptation of certain goods’ or services’ price to each potential customer’s “willingness to pay”, appear to already be in place on certain airline ticket sales websites. This is not only individualization: it is indeed market segmentation. Here is a rather trivial example: you go onto the website of an airline whose name we shall not mention (call it Company Y) and look up the price of a flight from Brussels to Pisa, leaving in three days. Say that the price shown is €180. As this is a bit too expensive for you, you go onto another company’s website (Company Z), or you look elsewhere online, to find a cheaper ticket. Suppose that you do not find better. You return to Company Y’s website and – surprise, surprise – you realise that the ticket price has increased by €50 within less than half an hour, just the time it took you to do your research. This is simply because you have been attributed a “captive traveller” profile: based on your online browsing and your desired departure date, the website has detected that you really need this airplane ticket and that you will therefore be prepared to spend an extra €50 to get it, especially since you will have the impression that if you do not hurry up and buy it, the price will only increase. If, instead of reacting “logically” and buying the ticket as fast as possible, you change computer and IP address and visit the airline’s website once again, your ticket will cost you €180 instead of €230. Why? Because the vendor relies on your first reflex being to buy as soon as possible following the “alert” raised: the price is increasing, and fast. In this case, the consequences are relatively trivial. But this example clearly shows how, rather than scrupulously respecting each singular consumer’s individual desires, the approach automatically detects certain (purchase) propensities and the (in)elasticity of individual demand with respect to a price variation in order to trigger a purchase. The latter is then based on a reflex response to an alert stimulus, short-circuiting individual reflexivity and the formation of singular desire.
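The pricing mechanism in this example can be summed up in a few lines of illustrative code (the rule, the base fare and the €50 increment are simply taken from the example above; any real airline system is of course both undisclosed and far more complex):

    BASE_FARE = 180  # the fare shown on a first, "innocent" visit (in euros)

    def quote(searches_in_session, days_to_departure):
        """Return a fare adapted to the detected propensity to buy, not to any stated desire."""
        captive = searches_in_session >= 2 and days_to_departure <= 3
        return BASE_FARE + (50 if captive else 0)

    print(quote(searches_in_session=1, days_to_departure=3))  # first visit: 180
    print(quote(searches_in_session=3, days_to_departure=3))  # after comparison shopping: 230
    # a new session (different computer, different IP address) resets the count
    # and the fare falls back to 180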

The aim is therefore to prompt individuals to act without forming or formulating a desire. Algorithmic governance thus seems to signal the culmination of a dispersal of the spatial, temporal and linguistic conditions of subjectification and individuation. These are being replaced by objective, operational regulation of possible behaviours, based on “raw data” that carry no meaning on their own and whose statistical processing is primarily designed to accelerate flows – avoiding any form of “detour” or subjective “reflexive suspension” between “stimuli” and their “reflex responses”. The fact that what thus “flows” is a-signifying is of no importance [14]. Because digital signals “can be computed quantitatively irrespective of their possible meaning” (Eco, 1976: 20, cited by Genosko, 2008), everything happens as though meaning were no longer absolutely necessary, as though the universe were already – independently of any interpretation – saturated with meaning, as though it were therefore no longer necessary for people to connect to one another through meaningful language, nor through any symbolic, institutional or conventional transcription. It consequently seems that the apparatuses of algorithmic governmentality consecrate both the emancipation of signifiers from the signified (quantification, algorithmic recombinations of profiles) and the substitution of signifiers for the signified (production of reality within the world itself – the only reality that “counts” for algorithmic governmentality being digital reality) (Rouvroy, 2013b). This assignation of human action to a preconscious stage has everything to do with what Bernard Stiegler calls proletarianization:


“Historically, proletarianization was the loss of workers’ knowledge to machines, which absorbed this knowledge. Today, proletarianization is the standardization of behaviours through marketing and services, and the mechanization of minds through the externalization of knowledge in systems, such that these ‘minds’ no longer know anything about these information processing devices, of which they merely set the parameters. This is precisely what the electronic mathematization of financial decision making shows, and it affects everyone: employers, doctors, designers, intellectuals, leaders. More and more engineers take part in technical processes whose functioning they know nothing about, but which are ruining the world”.
(Stiegler, 2011)

Maurizio Lazzarato sums up quite well how a-signifying semiotics, exemplified by digital behaviourism, produce machinic enslavement rather than subjective alienation:


“If signifying semiotics have a function of subjective alienation, of ‘social subjection’, a-signifying semiotics have one of ‘machinic enslavement’. A-signifying semiotics synchronize and modulate the pre-individual and pre-verbal elements of subjectivity by causing the affects, perceptions, emotions, etc. to function like component parts, like the elements in a machine (machinic enslavement). We can all function like the input/output elements in semiotic machines, like simple television or Internet relays that facilitate or block the transmission of information, communication or affects. Unlike signifying semiotics, a-signifying semiotics recognize neither persons, nor roles, nor subjects. […] In the first case, the system speaks and generates speech; it indexes and folds the multiplicity of pre-signifying and symbolic semiotics over language, over linguistic chains, by giving priority to its representative functions. In the second case, however, the system does not generate discourse: it does not speak but it functions, setting things in motion by connecting directly to the ‘nervous system, the brain, the memory, etc.’ and activat[ing] the affective, transitivist, transindividual relations that are difficult to attribute to a subject, an individual, a me.”
(Lazzarato, 2006)

The paradoxes of personalization: an algorithmic governmentality without subjects but compatible with contemporary hyper-subjectification phenomena

However “impressive” it may be, the hypothesis of desubjectification, of “the jeopardization of the individual”, of the individual diluted in networks, is in no way self-evident. One could even say that social networks and the like produce “hyper-subjects” – probably because, for their users, they are full of signifying semiotics –, that many people have become obsessed with producing subjectivity, and that it has even become some individuals’ reason for living. It therefore seems too simplistic to us to claim that the transformations underway produce only desubjectification, on the grounds that they weaken the bastions of intimacy (even this is debatable: certain devices in the information society on the contrary reinforce individuals’ isolation, sparing them from interacting with others…) and of privacy, and that they perhaps affect the conditions of autonomy and free choice (how this happens remains to be seen: intelligent environments sparing us from constantly having to make choices in perfectly trivial areas of life can also free our minds, make us available for more interesting intellectual tasks, make us more altruistic, etc.). Yet laws protecting privacy and personal data, essentially motivated by the risks of personal, private or sensitive information being revealed, of inappropriate disclosures, of individuals losing control over “their” profiles and of infringements of the principles of individual autonomy and self-determination, have focused on erecting a series of essentially defensive and restrictive “barriers” around the individual.

Without considering this to be pointless, we would like to strongly emphasize this “algorithmic governance’s” indifference to individuals, insofar as it simply focuses on and controls our “statistical doubles”, in other words combinations of correlations, produced automatically from big data that are themselves constituted or collected “by default”. In short, what we are, “roughly”, to use Éric Schmidt’s term, is precisely no longer ourselves (singular beings) in any way. And that is precisely the problem – a problem which, as we shall see, is more the result of a rarefaction of subjectification processes and opportunities, of a difficulty in becoming subjects, than the product of a “de-subjectification” or jeopardization of the individual.

With this in mind, let us return to the question of the subject, or rather of the “avoidance” of the subject in the three-stage normative process described above. The first thing to point out is the difficulty of producing algorithmic subjects who conceive of or think about themselves as such. First of all, as we have seen, the subject’s consent is weak when they share information (these data can often be used while still remaining anonymous, though this could just as well no longer be the case, as the meaning of their anonymity has become relative). That is not to say that this information is “stolen”, which would allow the subject to oppose it, to stand as a subject resisting such theft. Rather, we are witnessing a considerable decline in the “deliberate” nature of information disclosures – most of the time trivial, insignificant, segmented and decontextualized information – of these “traces” whose subsequent trajectory and uses are, for the “subject”, unpredictable and uncontrollable, even if significant research investment is currently going into developing technical tools to allow computer service “users” to better control “their” data. Second, in terms of its processing, the main characteristic of the “knowledge” produced is that it appears to emerge directly from big data, without the hypothesis leading to this knowledge pre-existing it: the hypotheses themselves are “generated” from the data. Finally, the normative action deriving from these statistical processes will always be closer to action on – and therefore by – the environment than to action on individuals themselves. The latter’s action no longer arises in direct confrontation with an external norm – law, average, definition of normality; their realm of possibilities is directly organized within their environment.

For these three reasons, we argue that both the force and the danger of the generalization of the statistical practices that we are witnessing lie not in these practices’ individual nature, but on the contrary in their autonomy, or even their indifference, with regard to the individual. To put this as clearly as possible, our problem is not being stripped of what we considered our own, or being forced to give up information that would violate our privacy or our freedom. Far more fundamentally, it stems from the fact that our statistical doubles are too detached from us, that we have no “relationship” with them, even though contemporary normative actions are directed towards these statistical doubles in order to be effective. The confessional constructs the subject of the introspection which probes his or her soul, virtue, desires and deepest intentions, for through the process of confession “he who speaks promises to be what he affirms himself to be, precisely because he is just that” (Foucault, 2014: 16); the law produces subjects of law intent on their equality and the impartiality of procedures; and the average man once seemed too average compared with the singular subject likely to contradict this average. Algorithmic governance, however, neither produces nor provides an affordance for any active, consistent and reflexive statistical subject likely to lend it legitimacy or resist it [15]. That is precisely what we must now be attentive to, essentially through knowledge (even technical knowledge) and recognition of the discrepancy, the difference, between these statistical representations and what constitutes individuals in their individuation processes, with the moments of spontaneity, the events and the sidesteps from anticipated possibilities that prevail in these processes.

What seems harder to overcome, however, and therefore what seems to constitute a real break, is the appearance of possibilities of knowledge that no longer presuppose the expression of any hypothesis, thus signalling the disappearance of the idea of a project, at least in some social spaces [16]. The issue is the loss of the idea of the project, not so much as something applicable or verifiable, but rather as something that can shift, in other words precisely something that can experience misfires and on that basis make history by being constantly reworked and transformed. Yet even for an organism, even for life, for the organic as a place of normative activity, there are misfires, conflicts, monstrosity, limits and instances where limits are overcome, with the deviations and shifts this induces in life, as Canguilhem has shown. With algorithmic governance, there is a tendency to consider social life as organic life, while thinking of the latter as though adaptations therein were no longer the fruit of shifts or misfires, as though they could thus no longer produce any crisis or interruption and could no longer hold accountable or challenge subjects or norms themselves.

The field of action of this “power” is not situated in the present, but in the future. This form of governance essentially relates to what could become, to propensities rather than to actions taken, unlike criminal enforcement or civil liability, for example, which are concerned only with offences allegedly committed or being committed (in the case of flagrante delicto), or damage allegedly caused. More actively, algorithmic governance not only perceives possibility in the present moment, producing an “augmented reality”, an actuality with a “memory of the future”; it also gives substance to the dream of systematized serendipity. From this point of view, our reality has become the realm of possibility; our norms aim to anticipate possibility correctly and immanently, and the best way of doing that, of course, is to present us with a realm of possibility that corresponds to us and into which subjects then just need to slip. It is important to note the difference from legal discursive normativity: the latter was set out discursively and publicly, before any action on behaviours, which were therefore constrained by this normativity but retained the possibility of not obeying it, at the risk of sanction. Statistical normativity, by contrast, is never predefined and resists all discursivity. It is incessantly reconstructed on the basis of behaviours themselves, and paradoxically seems to make any form of disobedience impossible [17]. The result is that, if we keep to an individualist, liberal approach, a paradox emerges: action on behaviours, what we call “algorithmic governance”, appears to be both fundamentally harmless and perfectly objective, since it is founded on a reality pre-existing all manifestation of subjective understanding or desire, whether individual or collective. Yet at the same time, this reality appears to be made especially reliable and objective by the very fact that it disregards our understanding of reality, fuelling the dream of perfectly democratic governance. Faced with this “dream”, we should point out that our behaviours have never been so processed – observed, recorded, classified, evaluated – underpinned by codes of intelligibility and criteria that are completely opaque to human understanding, as they are now on this statistical basis. The innocuousness, the “passivity”, of algorithmic governance is thus only apparent: algorithmic governance “creates” a reality at least as much as it records it. It sparks consumption “needs” or desires, but in so doing it depoliticizes the criteria of access to certain places, goods or services; it devalues politics (since there is allegedly no more need to decide, to arbitrate in situations of uncertainty, since these are pre-emptively defused); it does away with institutions and with public debate; it replaces prevention with pre-emption alone; and so on [18].

Resituating this movement within a long-term perspective, this time without being lured by the prospect of pure novelty (which only makes sense in relation to the legal discursive model), we see that this algorithmic governance further entrenches the liberal ideal of an apparent disappearance of the very project of governance. As we have shown elsewhere (Berns, 2009), it seeks not to govern reality, but to govern on the basis of reality. The technological-political evolution described here reflects this trend [19], to the extent that the fact of not being governed (or not wanting to be governed) could now amount to not wanting oneself (and still without this meaning that our privacy has been violated).

Relations as targets of “power” in algorithmic governmentality?

Beyond this still moral and normative diagnosis, or perhaps to reinforce it, we now try to identify what purpose the avoidance of subjects serves. What is the object, or the target, of the three stages described, and of algorithmic governmentality more generally, if not individuals themselves? Or to put it differently, what is to be governed by preventing, or at least complicating, the very possibility of subjectification processes? Our hypothesis is that the object – which therefore does not manage to become a subject – of algorithmic governance is precisely relations: the data shared are relations [20] and only subsist as relations; the knowledge generated consists of relations of relations; and the normative actions that derive from it are actions on relations (or environments) referred to relations of relations. It is therefore as a governance of relations, in the very reality of its practices for organizing the realm of possibilities, that we now try to identify the potential novelty of this algorithmic governance.

We thus now transpose our twofold reflection (on glittering objectivity and on the productivity of algorithmic statistics) into Simondonian and Deleuzian/Guattarian terms. On the face of it, the productive tele-objectivity at play in datamining and algorithmic profiling practices seems to leave the realm of the subject and therefore potentially to allow for what Simondon calls a transindividual individuation process – which amounts to neither an I nor a we, but designates a process of co-individuation of the “I” and the “we” producing social reality, that is, associated environments in which meanings form. However, we wish to show that, on the contrary, it forecloses possibilities of such transindividual individuations by limiting individuation processes to the subjective monad.

We show, moreover, that the relinquishment of all forms of “scale”, of “standard” or of hierarchy, replaced by an immanent and eminently plastic normativity (Deleuze and Guattari, 2004), is not necessarily conducive to the emergence of new forms of life. We mean this in the sense of an emancipation described by Deleuze and Guattari as the plane of immanence overcoming the plane of organization, a tabula rasa of former hierarchies in which the normal man or the average man played a major role [21].

Transindividual and rhizomatic perspectives

The incentive to study algorithmic governmentality from a Simondonian perspective stems from the fact that this mode of governance seems to rely on and target, no longer subjects, but relations as pre-existing their terms; in other words, not just the social, intersubjective relations that build individuals, of which any individual would be considered the sum. Rather, it focuses on relations themselves, independently of any simple and linear individuation, unassignable to the individuals they link together: relations in the sense of a “relationality” also subsisting beyond the individuals they link together. Thus, in order to understand what is at stake here, should we shift, with Simondon, from a classical ontology or metaphysics of substance, focused on the individual and on states (within which relations are attributed to an individual), to an ontology of relations (whereby relations ontologically “take precedence” over the individuals they run through), or even to an ontogenesis concerned with the becoming and therefore with understanding the very movement of individuation? It is important to note from the outset that this hypothesis would distance us both from a certain “nominalist” individualism (which assumes the reality of individuals alone, from whom universals could at best be abstracted) and from a certain holistic “realism” which presupposes that collective essences, genres and classes pre-exist individuals, themselves completely subsumable into those collective essences. In short, conceiving of relations in a primary way, for their own sake, constitutively, would amount to breaking with the vertical movement taking us from the particular to the general, irrespective of its direction.

There is, moreover, a striking resemblance between the processes of production and continuous transformation of profiles generated automatically, in real time, purely inductively, through the automatic cross-referencing of heterogeneous data (datamining), and the metabolisms specific to Deleuze’s and Guattari’s rhizome:


“The rhizome is reducible neither to the One nor to the multiple. It is not the One that becomes Two or even directly three, four, five, etc. […] Unlike a structure, which is defined by a set of points and positions, with binary relations between the points and bi-univocal relationships between the positions, the rhizome is made only of lines: lines of segmentarity and stratification as its dimensions, and the line of flight or deterritorialization as the maximum dimension after which the multiplicity undergoes metamorphosis, changes in nature. These lines, or lineaments, should not be confused with lineages of the arborescent type, which are merely localizable linkages between points and positions. Unlike the tree, the rhizome is not the object of reproduction: neither external reproduction as image-tree nor internal reproduction as tree-structure. The rhizome is an antigenealogy. It is a short-term memory, or antimemory. The rhizome operates by variation, expansion, conquest, capture, offshoots. […] In contrast to centered (even polycentric) systems with hierarchical modes of communication and preestablished paths, the rhizome is an acentered, nonhierarchical, non-signifying system without a General and without an organizing memory or central automaton, defined solely by a circulation of states”.
(Deleuze and Guattari, 2004: 23)

The relationship between Simondon’s ontology of relations and the rhizome metaphor in the work of Deleuze and Guattari also stems from the fact that, in the latter’s description, a rhizome:


“has no beginning or end; it is always in the middle, between things, interbeing, intermezzo. The tree is filiation, but the rhizome is alliance, uniquely alliance. The tree imposes the verb ‘to be’, but the fabric of the rhizome is the conjunction, ‘and… and… and…’ This conjunction carries enough force to shake and uproot the verb ‘to be’. […] Between things does not designate a localizable relation going from one thing to the other and back again, but a perpendicular direction, a transversal movement that sweeps one and the other away, a stream without beginning or end that undermines its banks and picks up speed in the middle”.
(Deleuze and Guattari, 2004: 27-28)

We thus consider the extent to which, the conditions in which, and the reservations with which the emergence of emancipated forms of life can actually be aided by the appearance of seemingly harmonious [22] social tools, by the overcoming of the metaphysics of substance advocated by Simondon in order to grasp the becoming in the making in individuation processes, and by the plane of immanence overcoming the plane of organization, which Deleuze and Guattari celebrated as a source of emancipation [23].

Simondon’s thought on individuation appears to be the most accomplished attempt to conceive of relations and of individuals’ association with an environment [24], insofar as it jettisons the Aristotelian meaning of relations, which always presupposed their substance and therefore reduced them to their strictly logical tenor. By refusing this primacy of substance, thus shifting from a metaphysics of states to a metaphysics of their modifications or their becoming, Simondon, by contrast, gave ontological tenor to relations, so as to account for the very process of individuation. However, this means that relations, which hold the “rank of being”, always exceed or spill over from that which they connect, that they never amount merely to an inter-individual sociality and that, as far as possible, they are to be conceived of through the prism of their ontological primacy: “relations do not arise between two terms that are already individuals”; relation is “the internal resonance of an individuation system” (Simondon, 2005: 29) [25]. Moreover, it also means that the pre-individual field – within which individuation processes must be embedded if they are to be conceived of as processes, as developing while still keeping this pre-individual dimension preceding their movements of differentiation – is conceived of as potentially metastable; in other words, its equilibrium must be envisaged as vulnerable to internal, even minimal, change within the system. This non-stability of the pre-individual field is inherent to the possibility of the taking of form (in-formation) through differentiation. It is thus the very condition of a thinking that does not fall into the paralogism of always presupposing, and even individuating, the principle of that whose cause it is seeking. In other words, if there is becoming, it is solely to the extent that there are incompatibilities between orders of magnitude, dissymmetric realities.

From these operations or processes arise individuals and environments – individuals associated with environments (the individual is the “reality of a metastable relation”) – which are real, and all equally real. The individual as a relation, as relative to an environment, is real; that is, the relative is real, it is reality itself. From what we could call a subjectivist perspective, relations, and individuals as relations, are therefore in no way the expression of a measurement to which they would then be relative to the point of losing their reality: they are the reality of the becoming, just as the environment associated with an individual is anything but reduced to a measurement, in other words to the probability of the individual’s appearance [26].

Is it possible to assess the novelty of algorithmic governance in its attempt to govern through relations as we have described it, abiding by the requirements of Simondonian thinking? This does not consist in considering whether contemporary statistical reality is more Simondonian than other forms of reality; that would be absurd. Rather, the aim is to highlight and measure its potential novelties, and above all whether it genuinely offers the possibility of grasping the individual in, and even through, their relations, following Simondon’s extremely stringent requirements for founding an ontology of relations.

Paradoxically, by probabilizing the totality of reality (which as such seems to become the medium of statistical action) and by seemingly desubjectifying this probabilistic perspective (which no longer bothers with a founding hypothesis) – in short, by giving itself the possibility of governing behaviours without directly worrying about individuals, simply governing on the basis of a statistical expression of reality that might replace reality itself (the perspective of digital behaviourism) – algorithmic governance continues to absolutize the individual (even if the latter is considered in relative terms, as that which relations enable one to avoid) and at the same time to derealize him or her, insofar as he or she is merely relative to series of measures which themselves serve as reality, without their subjective nature being apparent. The relations on which algorithmic governance operates are measures which, by virtue of their very capacity to appear as the unmediated and unsubjective expression of reality, that is, by their apparent objectivity, render everything that arises in relation to them and through them all the more relative – and less real. That which arises is simply relative to a series of measures that serve as reality. In other words, by their ability to appear free of all subjectivity, relations and their measures render both reality and the individual him- or herself relative. Considered in the light of Simondonian thinking, however, this appears to be the fruit of an inversion. Whereas previously, according to the metaphysics of substance and of the individual, any grasp or measure of an individual’s environment always seemed insufficient because too subjective, thus preventing the individual’s reality from being attained in its individuation, this insufficiency (with the ontological difference that it revealed between the individual and their environment) is henceforth resolved by making the individual entirely relative to measures themselves considered to be rid of all subjectivity, even though they are only measures. Still taking advantage of this comparison between a governance practice and Simondonian thinking, we could even go so far as to say that, by focusing on relations, this practice is able to “monadologize” them, to transform them into states, even statuses, as if the relations were themselves individuals, thereby causing them to lose what Simondonian thinking was concerned with: the becoming at work in a metastable reality.

It is this monadologization of relations that we observe in considering that big data exist only as series of relations which split up reality, that the knowledge generated on this basis consists in linking up relations without any assumptions about reality itself, and that, by acting on relations after having referred them to relations of relations, the resulting normative actions exclude precisely the possibility of a metastable reality within which an individual becoming could be set. What Simondon’s writings proposed was to stop thinking the becoming on the basis of the already constituted individual taken as a given, insofar as doing so meant disregarding the experience of individuation itself, as it happens. And what was no longer to be disregarded (in order no longer to presuppose the individual) was precisely that “the possible does not contain the already actual before it emerges”, and therefore that “the individual, which arises, differs from the possible which led to its individuation” (Debaise, 2004: 20). The failure or deviation – which, we said, we feared would be expelled from a reality enhanced with possibility, a reality that seems already to include possibility, and which we might consider as inherent to the expression of constructions, projects and hypotheses – then appears precisely as that from which alone a relation can arise, understood as unassignable to that which it connects; that is, insofar as it connects precisely asymmetrical and partially incompatible or disparate realities from which new realities or significations will emerge.

“That which essentially defines a metastable system is the existence of a ‘disparation’, of at least two orders of magnitude, two distinct scales of reality, between which there is not yet interactive communication”, wrote Deleuze (2002) as a reader of Simondon. But this avoidance of failure or deviation works as a negation of this “disparateness”. Algorithmic governmentality presents a form of totalization, of withdrawal of the statistical “real” into itself, of reduction of potentiality to the probable, and of indistinctness between the dimensions of immanence (or consistency) and organization (or transcendence). It constitutes the digital representation of the world as an immune sphere of pure actuality (Lagrandé, 2011), pre-emptively expurgated of all forms of latent potentiality, of any “other” dimension, of all virtuality (Rouvroy, 2011). This “failure of failure” of the digital modelling of possibilities – through the pre-emption of possibilities, or through the automatic recording and enrolment of every “irregularity” in the processes of refinement of “models”, “patterns” or profiles (in the case of learning algorithmic systems) – removes from whatever could arise in the world, in its dissymmetry with reality (here, the statistical corpus), its power of irruption, of mise en crise [27].

42Remember that the status of the rhizomatic and cartographic approach that Deleuze and Guattari called schizo-analysis or micro-analysis was not so much descriptive as “strategic”. The rules for creating hypertexts, nomadology, the concepts of rhizome and immanence were controversial (Marchal, 2006); they conveyed strategic thinking aimed at structuring the social “differently” and at refusing a hierarchical model. Algorithmic governmentality, like rhizomatic strategy, gives itself a two-dimensional, horizontal topology with neither depth nor verticality, neither project nor projection [28], and is interested in neither the subject nor in individuals. All that counts are relations between data, which are merely infra-individual fragments, partial and impersonal reflections of daily existences that datamining makes it possible to correlate at a supra-individual level, but which indicate nothing greater than the individual – and hence no people. In the age of Big Data and algorithmic governmentality the rhizome metaphor seems to have taken on a purely descriptive or diagnostic status: we are currently faced with the “material” actualization, so to speak, of the rhizome. The metabolism of the “statistical body” – that body which interests algorithmic governmentality and which, unlike substantial, living, socially and physically tested bodies, possesses, beyond the mere agglomeration of elements, none of the consistency that signifies both that a body holds together and that it is susceptible to an event (Rouvroy and Berns, 2009, 2010) – is a singular reminder of the rhizomatic characteristics or principles put forward by Gilles Deleuze and Félix Guattari. Is this “embodiment” of the rhizomatic concept suited to forms of emancipated individuation?

43First, what about relations that are no longer “physically inhabited” by otherness? In algorithmic governmentality, every subject is itself a multitude, but it is multiple without otherness, fragmented into multiple profiles, all of which refer back to “oneself”, to one’s own propensities, presumed desires, opportunities and risks. Should a relationship – even a scene devoid of subjects – not always be “inhabited”, be it by “a missing people” (Deleuze, 1987, 1990), a planned people? Does the “relationship” not imply, at least, a collective consisting of more than one, insofar as this is the condition of dissymmetry?

44Second, what can we say about the emancipating nature of a transindividual or rhizomatic perspective when our desires are made to precede us? Does this chronological primacy of an offer personalized in relation to the subject’s unexpressed propensities not always determine and stabilize individuation processes from the pre-individual stage? Do these new uses of statistics – datamining and profiling – not reduce us to impotence in the face of the immanent norms spawned by algorithmic governance?

45Third, what about the emancipating nature of a transindividual or rhizomatic perspective when the relationship is no longer carried by any specific becoming (becoming a subject, becoming a people, etc.), that is, when it can no longer relate anything, since what this new way of governing by algorithms precisely targets, in the sense of what it seeks to exclude, is that which “might happen” and was not foreseen because it is the fruit of disparateness – in other words, the share of uncertainty, virtuality and radical potentiality that leaves human processes free to project themselves, to relate themselves, to become subjects, to become individualized along trajectories that are relatively and relationally open? We could say that, yes, the perspective is indeed “liberating” insofar as it sweeps away all former hierarchies (in the broadest sense, since the “normal person” or the “average person” occupies a place in such a hierarchy), but it is not emancipating within the framework of any becoming or any project. Hence there is a form of “liberation”, but one that does not imply liberty in the “strong” sense of the word. Does the regime of digital truth (or digital behaviourism) not threaten, today, to undermine the very underpinnings of emancipation by eliminating the notions of critique and of project (Rouvroy, 2013), and even of the common?

46Without having answered these questions, we wanted to show that, rather than reverting to personological approaches (exemplified by the possessive individualism of legal data protection systems), which would be as ineffective as they are unjustified, the essential issue – that which should be preserved as a resource preceding any “subject” or individuation, and constituting the latter – is “the common”, in the sense of the “in-between”, that place of co-appearance where beings address one another and speak about themselves to one another, with all their dissymmetries and “disparateness”. Our intention was to show that the existence of this “common” therefore relies not on homogenization, on a withdrawal of the real into itself, but, on the contrary, on the heterogeneity of orders of worth, on a multiplicity of regimes of existence, in short, on disparate scales of reality. In other words, the common requires and presupposes non-coincidence, because it is from non-coincidence that processes of individuation arise and it is non-coincidence that compels us to address one another. By contrast, the government of relations, based as it is on the elimination of any form of disparity, “monadologizes” relations, to the extent that the latter no longer relate anything nor express anything in common.

Notes

  • [1]
    Note that the average man theory developed by Quételet is a “social physics” theory, both “normative” and “descriptive”: “an individual who, within themselves, in a given era, sums up all the qualities of the average man, would represent all that is great, beautiful and good”, Quételet wrote. However, he added, “such an identity can hardly occur, and it is generally within humans’ reach to resemble this type of perfection only by a greater or lesser number of its dimensions” (Quételet, 1836: 289-290). It goes without saying that the average man, a standard and an ideal, is different from individuals, and represents none of them, from a perspective that can seem radically antinominalist.
  • [2]
    On this point see IBM’s “Big Data in Action” presentation: http://www-01.ibm.com/software/data/bigdata/industry.html.
  • [3]
    “Smarter marketing”, or individualized marketing based on consumers’ algorithmic profiling, is now presented as a revolution turning marketing and advertising into “services”, the added value of which is argued to be distributed fairly between companies (better sales performance) and consumers (who are offered products based on their individual profiles).
  • [4]
    Immanent norms are those that are not imposed externally but arise spontaneously, one could say, from life itself, from the world itself, independently of any qualification, evaluation or deliberation.
  • [5]
    Contrary to what is suggested by the organic metaphors used by IBM, in particular, to promote ubiquitous computing, autonomic computing and ambient intelligence as the next “natural” stages in the development of information technology, communication and networking, and as virtually natural elements of the evolution of the human species itself, we have shown the ideological components supporting the emergence of these “innovations”. Even as machines become increasingly “autonomous” and “intelligent”, they of course remain dependent on the initial design, intentions, scripts or scenarios on the basis of which they were conceived. From the time of the design (and irrespective of the forms they later take on), they convey visions of the world, conscious and subconscious expectations and projections of their designers (Rouvroy, 2011).
  • [6]
    On the distinction between models of correlation and models of regression, see Desrosières (1988).
  • [7]
    “Gilbert Simondon has shown […] that individuation presupposes a prior metastable state – in other words, the existence of a ‘disparateness’ such as at least two orders of magnitude or two scales of heterogeneous reality between which potentials are distributed. Such a pre-individual state nevertheless does not lack singularities: the distinctive or singular points are defined by the existence and distribution of potentials. An ‘objective’ problematic field thus appears, determined by the distance between two heterogeneous orders. Individuation emerges like the act of solving such a problem, or – what amounts to the same thing – like the actualisation of a potential and the establishing of communication between disparates” (Deleuze, 1968: 246).
  • [8]
    Here we can cite Chris Anderson, editor-in-chief of Wired, in his article The End of Theory: “This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves” (cited in Cardon, 2012).
  • [9]
    Note that the EU legal regime of personal data protection explicitly protects individuals against decisions made concerning them solely on the basis of automated data processing (see Article 15 of Directive 95/46/EC). But the guarantees offered by the EU directive only apply if the automated processing concerns personal data, in other words data regarding identified or identifiable persons. Yet algorithmic profiling can very well “function” with anonymous data.
  • [10]
    The race to claim the greatest objectivity precisely and very tangibly consists in forgetting political choice: the ideal of exact pricing tailored in real time, now within reach, constantly adapting to risks effectively incurred, whether in the insurance industry or that of transport, must be seen as a pure demutualization of risks which paradoxically annihilates the very idea of insurance or of the public service mission.
  • [11]
    We should question here the very nature of this effectiveness of the norm, which appears ever more solipsistic, in the sense that the success of normativity itself is the only thing at stake (Berns, 2011). As one of many examples, the still highly ideological, if not political, ideal of “evidence-based medicine”, with the statistical support it claims, no longer leaves room for imagining either the patient’s choice – even though the patient is taken into account down to their most specific characteristics – or even scientific evolution.
  • [12]
    See, amongst others, Berns (2009), Desrosières (2000, 2008, 2009), Ewald (1986), and Hacking (2006).
  • [13]
    As well as its disciplinary form – to use Foucauldian models of power. From this point of view, this constitutes the third model of power analysed by Foucault, that which considers security apparatuses from an essentially regulatory perspective. The evolution described here thus consists in establishing new breaks in this third model of power – the security apparatuses model. The principle of security apparatuses is that “what is involved is precisely not taking either the point of view of what is prevented or the point of view of what is obligatory, but standing back sufficiently so that one can grasp the point at which things are taking place, whether or not they are desirable. […] the law prohibits and discipline prescribes, and the essential function of security, without prohibiting or prescribing, […] is to respond to a reality in such a way that this response cancels out the reality to which it responds – nullifies it, or limits, checks, or regulates it. […] this regulation within the element of reality is fundamental in apparatuses of security” (Foucault, 2009: 69).
  • [14]
    On the contrary, even the fact that what “flows” is a-signifying is precisely what makes “machinic enslavement” possible: “There is a molecular machinic subconscious, which consists of coding systems, automatic systems, moulding systems, borrowing systems, etc., which involve neither semiotic chains, phenomena of subjectification of subject/object relationships nor conscience phenomena. They operate through what I call machinic enslavement phenomena, whereby functions and organs directly interact with machinic systems, semiotic systems. The example I always use is that of driving a car in a dreamlike state. Everything functions outside of consciousness; it’s all about reflexes, one’s mind is elsewhere, almost even asleep; and then there is a semiotic signal to wake up, which suddenly brings one back to a conscious state and reinjects signifying chains. There is therefore a machinic enslavement subconscious” (Guattari, 1980).
  • [15]
    Our analysis adopts a more nuanced stance regarding the trends and ruptures observable over the course of a long history of normative practices. Algorithmic governmentality could appear to involve certain mechanisms present before the generalization of the idea of the legal discursive norm, which would then appear far more as the exception than the rule in this long-term history. If we question the normative functioning of algorithmic governmentality, which ensures its legitimacy and establishes its power, it can in fact seem like there are far more similarities between the sinner subject who confesses and the possibility of the contemporary algorithmic subject, than between the latter and the “subject of law” constructed by the law, insofar as the algorithmic subject and the Christian subject both appear to be the fruit of a dialogue with oneself, aided by political, spiritual or technical mediation. For example, this can be observed in still rare experiments like the “Quantified Self” (see the article by A.-S. Pharabod, V. Nikolski and F. Granjon in this issue). Independently of the actual reach, value and representativeness of this type of experiment, it seems useful to note that the production and refinement of the “healthy” subject that it depicts, while certainly aided by technical or statistical mediation, presupposes a subject refining themselves more than it attests to a subject producing themselves. Moreover, it is based on a refusal of the generalized use of technical mediation, preferring a supposedly strictly individual re-appropriation. In other words, the reflexivity it demonstrates, with the subject’s awareness of the norm, seems to us precisely foreign to the non-relation that individuals can develop, at that stage, with their statistical double.
  • [16]
    Algorithmic governmentality is so devoid of projects that it perhaps presents a radical version of governance through objectivity, as understood by Laurent Thévenot (2012): “In governance through objectivity, legitimate authority is indeed displaced and distributed in things, making it difficult to grasp it and challenge it, since it prevails in the name of realism and loses its political visibility.”
  • [17]
    On this point, see Rouvroy (2011).
  • [18]
    As we have shown elsewhere, particularly in Rouvroy (2012).
  • [19]
    Just like other practices of contemporary governance, such as reporting or evaluation. See Berns (2011, 2012).
  • [20]
    The word “relation” – understood here in its most basic, least loaded sense – through which we qualify data, serves here only to attest to an operation which links a and b whilst being able to overlook what lies behind the terms thus linked. As we will show, the full force of algorithmic governmentality ultimately lies in its capacity to “monadologize” this relation, to the extent that this relation is precisely unable to grasp the becoming inherent to this relationality.
  • [21]
    The objective of the rhizomatic description of knowledge was not so much descriptive as “strategic”, legitimated by its utility for the exercise of resistance against a hierarchical model, the epistemological translation of an oppressive social structure.
  • [22]
    It is important to see that the target of our critique is neither the Simondonian theory of transindividual individuation nor the Deleuzo-Guattarian rhizomatic perspective, which algorithmic governmentality exemplifies only at surface level. Rather, our critique targets the apparent compatibility of algorithmic governmentality with these emancipatory theories and perspectives, when in fact we argue that algorithmic governmentality tends rather to prevent both transindividual individuation processes and openness to the new meanings conveyed by relations between “disparate” entities.
  • [23]
    The objective of the rhizomatic description of knowledge was not descriptive so much as “strategic”, legitimated by its utility for the exercise of resistance against a hierarchical model, the epistemological translation of an oppressive social structure.
  • [24]
    Although other attempts can be found, starting, for example, from the thinking of Spinoza (V. Morfino, 2010) or of Marx (E. Balibar, 1993).
  • [25]
    M. Combes’ valuable analysis (1999) was of great help to us.
  • [26]
    Simondon devoted many analyses to the danger of loss of reality inherent to a subjectivist and probabilistic conception of contemporary physics. See M. Combes (1999: 39).
  • [27]
    Once again, it is important to note here that crisis, that moment which requires decision making in uncertainty, is precisely the moment of the political: “Legitimate authority shifts and is distributed in things, making it difficult to grasp and challenge since it prevails in the name of realism and loses its political visibility. Critique is paralyzed as it seems overtaken and rendered obsolete. With the reference to objectivity, often coupled with the claim to information transparency, does this not impact on a major requirement of democratic deliberation?” (Thévenot, 2012).
  • [28]
    “The topology of the network is a pure surface which needs to be distinguished from the objective plane that Lacan used to describe the topology of the subject. While it is indeed a plane, a surface (exit the ‘psychology of depths’), it is the effect of a projection, and this differentiates it from the ‘pure’ surface of the network, which does not involve any projection” (Marchal, 2006).
Uploaded on Cairn-int.info on 18/10/2016
https://doi.org/10.3917/res.177.0163
