giannitribuzio1

6 - Digital Technology and Privacy Protection: Response Strategies to the Introduction of Self-Driving Cars

The right to be left alone and the ability to remain out of the public eye are defining characteristics of privacy, at least according to traditional understanding. However, the notion of privacy has become more nuanced with the Third and Fourth Industrial Revolutions, as people have moved much of their lives online and now require protection of more than just their physical privacy. New technologies emerge, and with them new privacy concerns around trust, transparency, the speed of technological development, and responsibility.


Privacy in the digital era


It is important to appreciate that loss of privacy is not caused by the technologies themselves. These technologies are simply tools that follow instructions to gather, analyze, or disseminate data. Rather, privacy violations occur when your personal information is misused, abused, or exploited by the individuals, businesses, or states that use that technology.


In fact, the very people who tend to complain about privacy violations are often the same subjects who willingly hand over their personal information to others. This is known as the privacy paradox. Where should responsibility for privacy lie? Should all stakeholders (data senders and receivers) be held liable for violations? What responsibility, if any, should be shifted onto the technology itself?


Contextual integrity and the 5 fair information principles (for privacy protection)

Contextual integrity is an approach to protecting privacy online. It is a benchmark theory of privacy that links the protection of personal information to the norms of personal information flow within specific contexts. In essence, by understanding how information is exchanged within a specific context, it becomes much easier to identify when those norms, and thus privacy, have been violated. This becomes more complex, however, when the norms around the flow of digital information are constantly changing.
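The core idea can be illustrated with a small sketch: a flow of personal information is judged against the transmission norms of the context it occurs in. The contexts, information types, and norms below are invented examples, not part of the theory's formal apparatus.

```python
# Sketch of contextual integrity: an information flow is acceptable only if
# the recipient is permitted by the norms of that context.
# All contexts and norms below are illustrative assumptions.

# Each context maps an information type to the recipients its norms permit.
CONTEXT_NORMS = {
    "healthcare": {"medical_record": {"doctor", "patient"}},
    "navigation": {"location": {"routing_service"}},
}

def flow_respects_norms(context, info_type, recipient):
    """Return True if sending `info_type` to `recipient` matches
    the transmission norms of `context`."""
    allowed = CONTEXT_NORMS.get(context, {}).get(info_type, set())
    return recipient in allowed

# A doctor receiving a medical record in a healthcare context is normal...
print(flow_respects_norms("healthcare", "medical_record", "doctor"))      # True
# ...while the same record flowing to an advertiser violates the norms.
print(flow_respects_norms("healthcare", "medical_record", "advertiser"))  # False
```

The same record can thus be a perfectly normal flow in one context and a privacy violation in another, which is exactly the point the theory makes.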


-Notice and awareness

Prior to collection, the data collector should disclose its information practices to the individual whose personal information is being collected: who the collector is, how it plans to use and share the data, how the data will be collected, and how it plans to maintain the confidentiality, integrity, and quality of the data.


-Choice and consent:

This principle states that consumers should be given a choice to share their personal information with a data collector willingly, as well as agency to decide how that data can be used by the collecting entity.


-Access and participation:

Access and participation give consumers the right to view any information collected about themselves, as well as contest any information that is incorrect or incomplete. In practice, this principle requires efficient mechanisms for data access, verification, and correction.


-Integrity and security:

This principle states that consumers have a right for their data to be accurate and secure. This requires that data collectors only gather data from reputable sources and discard data that is no longer relevant. Furthermore, data collectors are required to take the necessary steps to protect consumers from the loss, theft, disclosure, or unauthorized access of their data by third parties or malicious actors.


-Enforcement:

For these fair information principles to be effective, mechanisms need to be put in place to enforce their application, while also allowing consumers to seek remedial action if they are harmed by unfair information practices.
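How the consent and access principles above might be operationalised can be sketched with a tiny consent ledger. The class name, record structure, and purposes are illustrative assumptions, not a real compliance implementation.

```python
# Illustrative sketch of a consent ledger implementing the choice/consent
# and access/participation principles. All names are invented.

class ConsentLedger:
    def __init__(self):
        self._consents = {}  # consumer_id -> {purpose: consented?}
        self._data = {}      # consumer_id -> collected fields

    def record_consent(self, consumer_id, purpose, consented):
        """Choice and consent: store the consumer's decision per purpose."""
        self._consents.setdefault(consumer_id, {})[purpose] = consented

    def may_collect(self, consumer_id, purpose):
        """Collection is allowed only with explicit prior consent."""
        return self._consents.get(consumer_id, {}).get(purpose, False)

    def store(self, consumer_id, field, value, purpose):
        if not self.may_collect(consumer_id, purpose):
            raise PermissionError(f"no consent for purpose {purpose!r}")
        self._data.setdefault(consumer_id, {})[field] = value

    def access(self, consumer_id):
        """Access and participation: consumers can view their data..."""
        return dict(self._data.get(consumer_id, {}))

    def correct(self, consumer_id, field, value):
        """...and contest or correct incorrect entries."""
        self._data.setdefault(consumer_id, {})[field] = value
```

Enforcement then amounts to making sure every collection path goes through `store`, so that collecting without consent fails loudly instead of silently.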


Future privacy concerns

Today, not only have digital technologies disrupted conventional ideas around privacy, but the speed and frequency of emerging technologies has raised and continues to raise new privacy concerns. A tension exists between technological progress and the right to privacy, especially as the digital economy has come to rely on technologies that use automated, algorithmic, and AI-based systems to collect, analyze, and make inferences based on users’ personal data. This tension is further amplified by the speed and scale of technological change, which creates new privacy concerns that regulators struggle to keep pace with.


Surveillance

Profiling and discrimination

AI gives rise to issues of discrimination when making automated decisions, namely false positives (incorrectly targeting individuals) and false negatives (incorrectly overlooking or excluding individuals).
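These two error types can be made concrete with a short sketch computing false-positive and false-negative rates from a set of automated decisions. The labels and predictions below are invented purely for illustration.

```python
# False positive: predicted 1 when the truth is 0 (incorrectly targeted).
# False negative: predicted 0 when the truth is 1 (incorrectly overlooked).
# The data below is invented for illustration.

def error_rates(truth, predicted):
    """Return (false-positive rate, false-negative rate)."""
    fp = sum(1 for t, p in zip(truth, predicted) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(truth, predicted) if t == 1 and p == 0)
    negatives = sum(1 for t in truth if t == 0)
    positives = sum(1 for t in truth if t == 1)
    return fp / negatives, fn / positives

truth     = [0, 0, 0, 0, 1, 1, 1, 1]
predicted = [0, 1, 0, 0, 1, 1, 0, 1]
fpr, fnr = error_rates(truth, predicted)
print(fpr, fnr)  # 0.25 0.25
```

Computing these rates separately for different demographic groups is one simple way to surface the discriminatory patterns discussed here.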

Targeted Advertising

This raises a question: where is the line between the utility of targeted advertising and intrusion?

Algorithm Bias

Algorithm bias is the influence that humans have in encoding their own biases into the algorithms they create or into the data used to train those algorithms.

The promise of AI was that it would provide an objective method for making predictions or decisions. However, algorithms tend to replicate – and even amplify – human biases, potentially causing harm to individuals.



What are the implications/harms associated with the use of personal data analysis?


-Technology-driven privacy breaches: loss of opportunity for individuals, exacerbation of existing wealth inequalities, social detriment from relying on quantifiable behaviors rather than social interaction, and loss of liberty.

-Facial recognition technology: these technologies can be inaccurate. For example, black men are more likely to be misidentified than white men.

-Track-and-trace for public safety: this is another form of safety surveillance. Concerns center on the centralized storage of personal information by government departments, universities, and employers. This centralized control is vulnerable to cyberattacks, which might expose individuals' personal data to unauthorized entities.

-Quantified self: it has become popular for people to monitor, store, and view many aspects of their health and behaviour, such as their heart rate. While personal health tracking has risen in popularity, it has also become a central issue in privacy circles, with questions around who collects the data, for what purpose, and for whose benefit.

-Digital identity systems: a citizen's identity details, often including biometric data, are recorded digitally. Such systems promise efficient data sharing and integration for more effective service delivery and administration. However, there are also major concerns, including whether governments can provide adequate protection of personal data, as well as the potential for state overreach and abuse.


Response strategies to the introduction of self-driving cars


Given the privacy concerns around the use of self-driving technology, what response strategies could be implemented by citizen, state, and industry actors?


Civil society's response


As discussion of autonomous vehicles (AVs) increases within society, citizens become more concerned about the protection of their personal data and will develop creative and innovative ideas once they become aware that current law leaves them exposed. One possible response is to develop apps that allow them to control the flow of information and to authorize or deny access to their most sensitive data.

Privacy literacy will emerge spontaneously once driverless car models are advertised. Citizens will become active actors, exchanging knowledge and thus contributing to the public debate about the evolution of laws and individual behaviors.


When corporations and government are aligned in supporting a systemic technological transition toward AVs while a great part of the public remains unconvinced, skeptical, or opposed because of the privacy issues that come with it, privacy advocacy organizations represent public opinion's response to these threats. They can influence privacy policy by lobbying for increased privacy protection, mounting legal challenges, and disseminating information to the general public. They might decelerate the transition process in order to develop better regulation or to change the original technology design.


Individual activists will also emerge. Sensitive to the surveillance issues that AVs introduce, they will challenge the state-led security narrative (the security vs. privacy paradigm), placing political pressure on authorities, lobbying the media, and producing evidence of the pitfalls citizens might be exposed to.


Government's response


Governments could facilitate social acceptance of AVs by allowing and encouraging citizens to observe trials on public roads: seeing is believing, and people will want to be involved and to see the benefits. Australia's government has pursued this strategy of public engagement, building consensus with the public to address privacy risks.


A single self-driving car could generate as much as 100GB of data. Who owns this data? Where is it stored? For how long? Under which conditions? Governments should respond by developing a new privacy legislation framework that anticipates the arrival of automated vehicles on public roads, minimizes their bias, and produces international standards for access to data recordings.
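The scale of the storage question is easy to see with back-of-the-envelope arithmetic. The 100GB figure is the one cited above (assumed here to be per car per day); the fleet size and retention window are invented assumptions.

```python
# Rough storage estimate for AV data retention.
# 100 GB per car is the figure cited in the text, assumed per day;
# the fleet size and retention period are hypothetical.
GB_PER_CAR_PER_DAY = 100
FLEET_SIZE = 10_000      # hypothetical fleet
RETENTION_DAYS = 30      # hypothetical retention policy

total_gb = GB_PER_CAR_PER_DAY * FLEET_SIZE * RETENTION_DAYS
total_pb = total_gb / 1_000_000  # 1 PB = 1,000,000 GB (decimal units)
print(f"{total_pb} PB")  # 30.0 PB
```

Even a modest fleet with a short retention window reaches petabyte scale, which is why the ownership, storage, and access questions need answers before deployment, not after.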


Data protection law experts will be needed to properly assess the policy challenges of AV testing and deployment, focusing on the implications for data protection and individuals' privacy. They can provide valuable suggestions for policy making at the local, European, and international levels, identify potential areas of cooperation, and document divergent approaches.


States should also work with legal specialists with technological backgrounds to strengthen existing policy, as in the US, where all vehicles (AVs and conventional ones) must provide owners the ability to stop data collection (except for data essential to safety and post-incident investigations), and manufacturers are prohibited from using the collected data for marketing or advertising without the owners' consent.
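Such an opt-out rule, where collection stops for everything except safety-essential categories, can be sketched as a simple filter. The category names and the shape of the owner preference are invented for illustration.

```python
# Illustrative opt-out filter mirroring the US-style rule described above:
# owners can stop data collection, but safety-essential data is always
# retained. Category names are invented.
SAFETY_ESSENTIAL = {"crash_sensor", "airbag_deployment", "speed_at_incident"}

def collectable(categories, owner_opted_out):
    """Return the data categories the vehicle may collect,
    honouring the owner's opt-out except for safety data."""
    if owner_opted_out:
        return [c for c in categories if c in SAFETY_ESSENTIAL]
    return list(categories)

requested = ["location_history", "crash_sensor", "infotainment_usage"]
print(collectable(requested, owner_opted_out=True))   # ['crash_sensor']
print(collectable(requested, owner_opted_out=False))  # all three categories
```

The design choice here is that the safety exception is a fixed allow-list rather than something the collector can extend, which keeps the exception narrow by construction.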


Industry's response


Car manufacturers should be encouraged to build data privacy into their devices from the outset (privacy by design) rather than after the fact, and should consider conducting privacy risk assessments, minimizing the data they collect and retain, and testing their security measures before launching their products. Any self-learning algorithm needed for automated cars should be transparent in its functionality and should have been checked beforehand by an independent body, in order to reduce the risk of discriminatory automated decisions.
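Data minimization, one element of privacy by design, can be sketched as dropping every field not needed for a declared purpose at the point of collection. The purposes and field names below are invented assumptions.

```python
# Privacy-by-design sketch: collect only the fields a declared purpose
# actually requires and discard everything else at collection time.
# Purposes and field names are illustrative assumptions.
REQUIRED_FIELDS = {
    "routing": {"gps_position", "speed"},
    "diagnostics": {"engine_temp", "battery_level"},
}

def minimize(record, purpose):
    """Keep only the fields the purpose requires."""
    needed = REQUIRED_FIELDS.get(purpose, set())
    return {k: v for k, v in record.items() if k in needed}

raw = {"gps_position": (45.07, 7.69), "speed": 50,
       "cabin_audio": b"...", "driver_face_scan": b"..."}
print(minimize(raw, "routing"))
# {'gps_position': (45.07, 7.69), 'speed': 50}
```

Because the sensitive fields never enter storage, they cannot later be breached, subpoenaed, or repurposed, which is the core argument for doing minimization at the outset rather than after.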


AI reduces subjective human biases, but it can also replicate and deploy our biases at scale (by considering only the variables that improve predictability). Companies should take care to avoid flawed data sampling (groups that are over- or underrepresented in the training data) and cultural biases (an AV might be more inclined to save younger rather than older pedestrians if an accident cannot be avoided). Inappropriate calibration of trust in intelligent agents is a serious problem, and when combined with bias, the potential for harm greatly intensifies.
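Flawed sampling can be caught with a simple representation check that compares each group's share of the training data against its share of a reference population. All figures below are invented.

```python
# Illustrative check for over/underrepresentation in training data:
# compare each group's share of the sample with its share of the
# reference population. All figures are invented.
from collections import Counter

def representation_gaps(sample_groups, population_shares):
    """Return group -> (sample share - population share), rounded.
    Positive values mean overrepresentation, negative underrepresentation."""
    counts = Counter(sample_groups)
    n = len(sample_groups)
    return {g: round(counts.get(g, 0) / n - share, 3)
            for g, share in population_shares.items()}

sample = ["A"] * 80 + ["B"] * 20        # group B underrepresented
population = {"A": 0.6, "B": 0.4}
gaps = representation_gaps(sample, population)
print(gaps)  # {'A': 0.2, 'B': -0.2}
```

A check like this belongs before training, since a model fitted to a skewed sample will reproduce that skew in its decisions no matter how the rest of the pipeline is tuned.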


Manufacturers could also take advantage of other industry resources. For example, in the US, associations of automakers have already put in place consumer privacy protection principles, and they could share intelligence and best practices on applying customer privacy principles in the development of automobiles and their features.


Autonomous vehicle manufacturers should collaborate closely with the vendors who design and develop the parts of the vehicle, and both may wish to keep these concerns in mind during product development. Those organizing testing of these technologies must provide clear guidelines.





