Monday 18 March 2024

Guest Blog: An Exploration of the Ways Agricultural Big Data is Assetized by Agriculture Technology Companies

By Sarah Marquis

Hackfort, S., Marquis, S., & Bronson, K. (2024). Harvesting value: Corporate strategies of data assetization in agriculture and their socio-ecological implications. Big Data & Society, 11(1).
https://doi.org/10.1177/20539517241234279

In the past decade, much attention has been paid to the ways that Big Tech companies like Google and Facebook leverage personal data to create value, both economic and otherwise. Meanwhile, as digital technologies become more ubiquitous in agriculture, we thought it necessary to interrogate how agricultural big data is being assetized in parallel.

In this paper, we ask the following question: how is agricultural data transformed into value by the most powerful agribusinesses and ag-tech firms? 

To answer our research question, we read financial records and annual reports and analyzed earnings calls to see how agricultural data was valued and discussed by multinational agribusinesses such as John Deere, Bayer, BASF, and Farmers Edge. We came to several conclusions. The first is that any attempt to systematically examine what agribusinesses do with agricultural data is impaired by legal mechanisms that obfuscate data practices, datasets, and algorithms: copyright, intellectual property law, trade secrecy law, and arbitration agreements all allow for proprietary technologies and a high degree of vagueness and opacity. This is a finding in and of itself; such obfuscation prevents critical analysis and the kind of oversight that the equitable governance of technology requires. Our second, broader argument is that data itself is very likely an asset for agricultural firms, which now uniformly include big data-based services in their portfolios. We outline three strategies that firms use (or are likely to use) to generate value from agricultural data:

1. Agribusinesses use agricultural big data to secure relationships in which users are dependent upon them.
2. Agribusinesses gain from practices of price-setting and data sharing.
3. Agricultural big data is used to develop new products and to target marketing materials to users.

The strategies we have identified have socio-ecological implications; they affect social justice, food sovereignty, and sustainability, the last of which does not always receive due attention in critical data studies (cf. Gabrys, 2016; Goldstein and Nost, 2022). Our results indicate the reproduction of asymmetrical power relations in the agri-food system favoring corporations and the continuation of long-standing dynamics of inequality. We can infer that the big data-based predictions agribusinesses sell to farmers are directed toward a productivist model of “surveillance agriculture” (Stone, 2022a) that reinforces existing patterns of unsustainable agro-industrial farming and renders other routes, such as agroecology, peasant farming, and organic farming, less legitimate and less feasible.

Wednesday 14 February 2024

Guest Blog: A Social Privacy Approach to Investigative Genetic Genealogy

by Nina de Groot

de Groot, N. F. (2023). Commercial genetic information and criminal investigations: The case for social privacy. Big Data & Society, 10(2). 
https://doi.org/10.1177/20539517231216957

In 2022, four college students from the University of Idaho were stabbed to death. Law enforcement found DNA at the crime scene that probably originated from the murder suspect, but the DNA profile did not match any profile in the FBI database of criminal offenders. According to the prosecution, the FBI then used another method: searching a public genetic genealogy database for relatives of the unknown suspect. Weeks later, most likely based on this lead, they were able to arrest a suspect. By building extensive family trees linking the unknown suspect to the DNA test consumers who are genetically related to them, investigators can eventually zero in on the suspect by mapping genetic relationships and examining birth and death certificates or social media profiles. This so-called ‘investigative genetic genealogy’ is increasingly used in both the US and Europe.

Investigative genetic genealogy may have significant potential when it comes to crime-solving. Yet the debate on investigative genetic genealogy tends to be reduced to a dichotomy between protecting privacy on the one hand and serving society by solving horrendous crimes on the other. As the CEO of a commercial genetic genealogy database has said: “You have a right to privacy. You also have the right not to be murdered or raped”. However, reducing the issue to this dichotomy is problematic.

In my paper ‘Commercial genetic information and criminal investigations: The case for social privacy’, I propose to consider investigative genetic genealogy through the lens of social privacy. Social privacy is a helpful way to look beyond this simplified dichotomous choice. For example, it is not only about the privacy of the individual DNA test consumer, but about the privacy of potentially thousands of genetic relatives. If one individual DNA test consumer gives consent to law enforcement use of their data, this person effectively does so on behalf of many close and distant relatives.

Perhaps more importantly, a social privacy approach allows for the consideration of broader socio-political concerns around investigative genetic genealogy. For example, it allows us to explore the complex issues that arise when commercial parties enter the scene of criminal investigations, including dependence on commercial actors. Commercial companies currently have a major say in deciding for which crimes law enforcement may (and may not) use their databases. Additionally, public forensic laboratories often do not have the technology to generate the type of DNA profile needed, which makes law enforcement dependent on private actors in this process. Furthermore, investigative genetic genealogy could have problematic consequences for the relationship between citizens and the state, as well as for the public nature of criminal investigations.

A social privacy approach can also help shed light on the implications for criminal investigations and the democratic process. Namely, if only a few percent of a population gives consent for law enforcement use of their DNA data, almost every individual within that population can be identified through distant relatives. Therefore, the decisions of only a fraction of the population can have far-reaching implications, reflecting a potentially harmful ‘tyranny of the minority’, a concept from digital data ethics. In that respect, insights on the interconnectedness of data from the social privacy debate around online digital data can be useful for exploring these issues in the context of investigative genetic genealogy.
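
To illustrate the ‘tyranny of the minority’ dynamic, here is a minimal back-of-envelope sketch with hypothetical numbers that are not taken from the paper: if each person has roughly R detectable relatives at third-cousin range or closer, and a fraction p of the population has uploaded DNA to a searchable database, then the chance that at least one of those relatives is in the database is 1 - (1 - p)^R, which approaches certainty even for small p.

```python
# Illustrative back-of-envelope model (hypothetical numbers, not from the paper):
# how a small opt-in fraction can make most of a population findable via relatives.

def p_findable(relatives: int, coverage: float) -> float:
    """Probability that at least one of a person's detectable relatives
    appears in a database covering a fraction `coverage` of the population."""
    return 1 - (1 - coverage) ** relatives

# Assume roughly 800 detectable relatives at third-cousin range or closer (illustrative).
for coverage in (0.005, 0.01, 0.02, 0.05):
    print(f"database coverage {coverage:.1%}: P(findable) ≈ {p_findable(800, coverage):.3f}")
```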

Friday 5 January 2024

by Zelly Martin, Martin J. Riedl, Samuel C. Woolley

Martin, Z. C., Riedl, M. J., & Woolley, S. C. (2023). How pro- and anti-abortion activists use encrypted messaging apps in post-Roe America. Big Data & Society, 10(2).
https://doi.org/10.1177/20539517231221736

How pro- and anti-abortion activists use encrypted messaging apps in post-Roe America

On June 24th, 2022, the United States Supreme Court ended nearly 50 years of federal protection of abortion. Immediately, we noticed that experts were recommending encrypted messaging apps as the solution to abortion-related data leakage for abortion-seekers, activists, and healthcare providers. Not two months later, a teenager and her mother in Nebraska were arrested for the teenager's abortion on the basis of data obtained from their Facebook Messenger conversations. It seemed, then, that encrypted messaging apps might provide security that unencrypted spaces (as Facebook Messenger was at the time) could not.

We thus set out to explore the utility of encrypted messaging apps as privacy-promoting spaces in a post-Roe America. We interviewed pro-abortion, anti-abortion, and encryption activists in U.S. states with varying levels of abortion restriction or protection. We found that while our pro-abortion interviewees often considered encryption a powerful tool for security, our anti-abortion interviewees largely rejected it on principle, believing it to signal inauthenticity or criminality, or simply finding it untrustworthy. Yet activists on both sides of the abortion issue used encrypted messaging apps for reasons other than security, including convenience and coordination.

Ultimately, we argue that although end-to-end encryption is a powerful security tool, it must be used in combination with other security practices to effectively resist patriarchal (and ubiquitous) surveillance by corporations and law enforcement in a post-Roe America.  

Monday 11 December 2023

Guest Blog: Christopher Till on Spotify's datafication of user health

by Christopher Till

Till, C. (2023). Spotify as a technology for integrating health, exercise and wellness practices into financialised capitalism. Big Data & Society, 10(2).
https://doi.org/10.1177/20539517231210278

Over the last few years I have noticed how Spotify, and similar music streaming services, have produced content and tailored services with the intention of helping us to improve our health and to exercise more. While looking into their innovations in this area (e.g. matching music tempo to running cadence, automatically generating playlists for particular activities) I began to wonder what business interests were driving this (aside from simply retaining subscribers). To explore this I analysed Spotify’s patent applications (to see what innovations they have in the works and how they plan to datafy and analyse user interactions), financial statements, industry interviews and press releases, amongst other materials.

I found that the failure of Spotify’s subscription service to turn a profit (as yet) or provide sufficient growth to please shareholders has led the company to make increasing attempts to financialise the practices of its users. Health, exercise and wellness practices are presented as a particularly fruitful area of exploitation which Spotify can datafy in such a way as to make them amenable to targeted advertising, and thereby tell a story to investors about the growth and future profitability of the company. Users’ everyday lives are thus increasingly seen as commercially useful sites to be mined for advertising insights, and health, exercise and wellness practices are integrated by Spotify into the financialised networks of digital capitalism.
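
As a purely illustrative aside (the sketch below is my own, not drawn from Spotify’s patents or the article), the tempo-to-cadence matching mentioned above can be thought of as filtering a catalogue for tracks whose beats per minute sit close to a runner’s steps per minute.

```python
# Illustrative sketch (not Spotify's implementation): select tracks whose tempo
# (beats per minute) roughly matches a runner's cadence (steps per minute).

tracks = [  # hypothetical catalogue entries
    {"title": "Track A", "bpm": 128},
    {"title": "Track B", "bpm": 171},
    {"title": "Track C", "bpm": 165},
    {"title": "Track D", "bpm": 90},
]

def match_cadence(tracks, cadence_spm, tolerance=5):
    """Return tracks whose BPM is within `tolerance` of the cadence,
    also allowing half-time matches (one beat per two steps)."""
    matches = []
    for t in tracks:
        if abs(t["bpm"] - cadence_spm) <= tolerance or abs(t["bpm"] * 2 - cadence_spm) <= tolerance:
            matches.append(t)
    return matches

print(match_cadence(tracks, cadence_spm=168))  # e.g. Track B and Track C
```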

Wednesday 29 November 2023

Bookcast: Luke Munn interviewed by Andrew Dougall

In this bookcast, Andrew Dougall interviews Luke Munn, Research Fellow in Digital Cultures & Societies at the University of Queensland, about his recent book 'Technical Territories: Data, Subjects, and Spaces in Infrastructural Asia' (2023).



Tuesday 31 October 2023

by Alexander Campolo and Katia Schwerzmann  

Campolo, A., & Schwerzmann, K. (2023). From rules to examples: Machine learning’s type of authority. Big Data & Society, 10(2).
https://doi.org/10.1177/20539517231188725  

The ethics of examples in machine learning 

Our article investigates a perceived transition from a rules-based programming paradigm in computing to one in which machine learning systems are said to learn from examples. Instead of specifying computational rules in a formal programming language, machine learning systems identify statistical structure in a dataset in order to accomplish tasks. Many studies show how data is constructed in various ways. We make a more specific argument: that in machine learning, data must be made exemplary—aggregated, formatted, and processed so that norms can emerge—to enable desired predictive or classificatory objectives.

We are most interested in the ethical and even political ramifications of this transition. How does being governed by examples, by machine learning's specific type of predictions and classifications, differ from the rule of computational rules? How, concretely, is authority exercised by machine learning techniques? If you would like answers to these questions, please read our article! 

A larger question is why we speak in terms of rules and examples in the first place. We are aware that these themes may strike readers as unfamiliar in light of existing critical research on algorithms and artificial intelligence. Many studies have, with good reason, focused on the discriminatory or unequal effects of machine learning systems. Other, more conceptual work posits some external standard (consciousness, language, intelligence, neoliberalism, etc.) and evaluates whether or not machine learning systems measure up to it, or whether they are mere "stochastic parrots," for example. We are sympathetic to both of these avenues, and they will continue to bear fruit.

Our article, however, begins from a different position: not from "outside" of machine learning, but from within it. We were first struck by repeated references to "learning from examples" made by machine learning researchers themselves. This community even has an informal historical understanding in which an overemphasis on highly specified formal rules led to failure in previous forms of AI, notably expert systems. Our first task, then, was to discern how examples work in machine learning. We argue that data is made exemplary, that is, capable of eliciting norms, through a set of technical practices that characterize machine learning, including labeling, feature engineering, and scaling.
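
To make these three practices concrete, here is a minimal, hypothetical sketch (our example here, not a case study from the article) of how raw records might be made exemplary in a standard machine learning workflow: labels attach a normative judgment to each record, feature engineering converts records into a fixed numerical format, and scaling puts features on a common statistical footing before a model fits a norm from the examples rather than from explicit rules.

```python
# Minimal, hypothetical sketch (not from the article) of the three practices
# named above: labeling, feature engineering, and scaling.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Labeling: each raw record is paired with a judgment that makes it exemplary
# of a category (here, an invented "approve"/"reject" decision).
records = [
    {"income": 42_000, "age": 23, "decision": "reject"},
    {"income": 88_000, "age": 45, "decision": "approve"},
    {"income": 61_000, "age": 37, "decision": "approve"},
    {"income": 30_000, "age": 52, "decision": "reject"},
]

# Feature engineering: selecting and transforming attributes into a fixed
# numerical format the learner can aggregate over.
X = np.array([[r["income"], r["age"], r["income"] / r["age"]] for r in records])
y = np.array([1 if r["decision"] == "approve" else 0 for r in records])

# Scaling: putting features on a common scale so that statistical structure,
# rather than units of measurement, shapes what the model treats as normal.
X_scaled = StandardScaler().fit_transform(X)

# No explicit rule is written down; the "norm" emerges as a decision boundary
# fitted to the labeled examples.
model = LogisticRegression().fit(X_scaled, y)
print(model.predict(X_scaled))
```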

Merely characterizing how these terms are used within the machine learning community risks reproducing its views. After beginning our study up close or from within, we then examine these practices "from afar" to identify their epistemological, ethical, and political implications. We theorize examples, in contrast to rules, through historical-conceptual comparison, and closely analyze several case studies under the headings of labeling, feature engineering, and scaling. Our article draws from both classics, such as the work of Max Weber, and contemporaries, such as Lorraine Daston's recent book Rules: A Short History of What We Live By.

This comparative approach situates machine learning within a constellation of concepts from social theory such as rationalization, calculation, and prediction. It connects machine learning to longer-running historical forces while also making its specific type of authority intelligible: how, precisely, we use it to govern ourselves and others. Comparing rules and examples brought a number of other philosophical oppositions to light—specification and emergence, prompts and commands, the implicit and the explicit, the general and the particular, is and ought... These indicate further lines of research both for ourselves and, hopefully, for our readers.