TechEthos Holds Final Policy Event on Ethics for the Green and Digital Transition

On 14 November, the Horizon 2020-funded TechEthos project held its final policy event in Brussels to discuss the role of ethics in the green and digital transitions for an audience of researchers, policymakers, and the wider community.

Since 2020, an EU-funded consortium led by the Austrian Institute of Technology (AIT) has been exploring the societal and ethical impacts of new and emerging technologies. The TechEthos project aims to facilitate “ethics by design”, i.e., to bring ethical and societal values into the design and development of technology from the very beginning of the process.

The policy event, which marked the end of the project in 2023, was hosted by Barbara Thaler, Member of the European Parliament (MEP) and member of the Panel for the Future of Science and Technology (STOA). The event highlighted ongoing ethical debates, as well as current and expected EU policy developments such as the proposed AI Act, the implementation of the Digital Services Act and Digital Markets Act, the European Green Deal, and the European Commission’s proposal for a Carbon Removal Certification Framework.

In his opening statement, Mihalis Kritikos, Policy Analyst at the European Commission’s Directorate-General for Research and Innovation (DG RTD), stressed that ethically designed policies are essential for public acceptance of new technologies, and ultimately for a just digital and green transition.

His remarks were complemented by TechEthos Coordinator, Eva Buchinger, who provided an overview of how TechEthos addresses possible concerns from society related to new and emerging technologies using an approach that combines scanning, analysing and enhancing existing frameworks and policies. The key messages are condensed in a number of TechEthos policy briefs, which are available here.

Ethics for the digital transformation

In the first keynote of the day, Laura Weidinger, Senior Research Scientist at Google DeepMind, discussed different approaches to socio-technical safety evaluation of generative AI systems to explore how we can ensure that models like ChatGPT are safe to release into society. At an early developmental stage, it can be challenging to predict a technology’s capabilities, how it will be used, and its impact on the world.

Most of the current safety evaluations and mechanisms focus on fact-checking the direct capability of language models, i.e., whether the information they generate is accurate. However, the believability of such information, as well as its impact on society, remain largely understudied. Weidinger advocated for an ethics evaluation framework with a clear division of roles and responsibilities, where model-builders carry the main responsibility for capability testing, application developers for studying their use, and third-party stakeholders for looking at systemic impact.

A panel consisting of Laura Weidinger, Alina Kadlubsky (Open AR Cloud Europe), and Ivan Yamshchikov (CAIRO – the Center for Artificial Intelligence and Robotics), introduced by Alexei Grinbaum (CEA – the French Alternative Energies and Atomic Energy Commission & TechEthos partner), dived deeper into the ethical, social, and regulatory challenges of Digital Extended Reality and Natural Language Processing.

The panel reflected on key ethical issues related to emerging technologies, including transparency (should AI-generated content always be marked as such, and is this sufficient for users to process it accordingly?), accountability (how do we balance the responsibilities of users and developers, and can real-world values and regulations be translated into the metaverse?), and nudging/manipulation (what should be permitted, and when does it serve the benefit of society?).

Ethics for the green transition

In the afternoon, Behnam Taebi, Professor of Energy & Climate Ethics at Delft University of Technology, gave a keynote lecture on the governance and ethical challenges of emerging technologies for the green transition. Climate Engineering technologies continue to be controversial, and some researchers have even called for a complete ban on research in this area. This is largely due to substantial possible risks (e.g., ozone depletion, negative impacts on agriculture, and many as yet unknown risks), as well as regulatory complexity due to their international and intergenerational nature.

However, these technologies are increasingly being considered essential to cap global warming at 1.5 degrees, meaning that an ethically-informed future governance framework will be needed. Taebi emphasised that these technologies (and their potential and risks) continuously evolve, and so do public perception and moral beliefs. Therefore, a dynamic ethical assessment will be required to make regulatory frameworks fit for the future.

The lecture was followed by a panel discussion with Behnam Taebi, Dušan Chrenek (European Commission, DG Climate Action), and Matthias Honegger (Perspectives Climate Research), introduced by Dominic Lenzi (University of Twente & TechEthos partner), which provided further insights into the ethical, social, and regulatory challenges related to Climate Engineering.

The panel concluded that any technological opportunities that contribute to climate change mitigation should be explored. However, they emphasised that it is important to acknowledge that different technologies (e.g., carbon dioxide removal vs. solar radiation modification) have very different societal implications, hugely diverse risk profiles, and unique intellectual property challenges, and should therefore be subject to tailored ethical analyses and regulatory frameworks.

Highlights and outlook for the ethical governance of emerging technologies

In the final session, Maura Hiney, Chair of the ALLEA Permanent Working Group on Science & Ethics, placed TechEthos’s outcomes in the larger context of the recently revised ALLEA Code of Conduct for Research Integrity, and reiterated the requirement for suitable research integrity frameworks to guide researchers that work on emerging technologies.

To conclude the event, Eva Buchinger, Laurence Brooks (University of Sheffield), and Renate Klar (EUREC – European Network of Research Ethics Committees) shared their views and insights on the continuation and implementation of the work beyond the lifetime of the TechEthos project.

 

TechEthos Publishes Policy Briefs on Enhancing EU Law on Emerging Technologies

TechEthos publishes recommendations on enhancing EU legal frameworks for new technologies that could impact the planet, the digital world, and bodily integrity.

In February 2023, TechEthos released four policy briefs targeted at enhancing EU legal frameworks for emerging technologies in the three families of Climate Engineering (Carbon Dioxide Removal and Solar Radiation Modification), Extended Digital Reality, and Neurotechnologies. These policy briefs, co-authored by Julie Vinders and Ben Howkins from Trilateral Research, were developed based on the analysis of International and EU laws and policies governing these three technology families, published as a report in July 2022.

TechEthos Report on the Four Policy Briefs Published in Feb 2023

The findings of this report were debated in a series of policy consultations with relevant EU officials, particularly those working in relevant Directorate-General (DG) units and cabinets of the European Commission and involved in relevant legislative and policy development processes, held from December 2022 to February 2023. These consultations led to the identification of the regulatory priorities for the EU, set forth as recommendations in the policy briefs.

Some key highlights from the four policy briefs are shown below.

Enhancing EU legal frameworks for Carbon Dioxide Removal (CDR):

  • Clarify the role of CDR, a type of climate engineering technique that removes atmospheric carbon dioxide and stores it in geological, terrestrial and oceanic reservoirs, in meeting the EU’s legally binding target of net-zero by 2050
  • Carefully evaluate wider socio-economic implications of CDR, including but not limited to fundamental rights, biodiversity, international development, international trade, food production and food security, short- and long-term cost implications, and energy security
  • Devise robust sustainability requirements for CDR, particularly those in the context of the Sustainable Development Goals (SDGs)

Enhancing EU legal frameworks for Solar Radiation Modification (SRM):

  • Investigate whether further research into various types of SRM, a type of climate engineering technique that aims to reflect sunlight and heat back into space, should be conducted, and determine the conditions, if any, under which SRM research in general, and especially any open-air testing, could be conducted
  • Focus on both the large-scale SRM activities with the purpose of moderating the global climate system and the cumulative effect of small-scale SRM activities conducted for purposes other than the moderation of the global climate system
  • Collaborate internationally and evaluate existing international governance regimes

Enhancing EU legal frameworks for Digital Extended Reality (XR):

  • Include the protection of fundamental rights, such as the right to dignity, the right to autonomy, the right to non-discrimination, the right to privacy, and the right to freedom of expression, as a central consideration in assessing the risk factor of AI-enabled XR technologies
  • Recognise that the immersive and increasingly realistic nature of XR technologies, which include advanced computing systems that can change how people connect with each other and their surroundings through interactions with virtual environments, may exacerbate the risks and impacts of harmful online content consumed through XR, particularly by special category groups such as children
  • In addition to the Code of Practice on Disinformation, the EU should encourage the adoption of similar industry-led self-regulatory codes addressing issues associated with harm to XR users, including hate speech, online violence, (sexual) harassment, and mis- and disinformation

Enhancing EU legal frameworks for Neurotechnologies:

  • Monitor and assess the possible under-regulation of consumer and dual use neurotechnologies (devices and procedures used to access, monitor, investigate, assess, manipulate, and/or emulate the structure and function of the neural systems of natural persons)
  • Recognise and define putative neurorights, such as the “right to cognitive liberty”, prospectively, through the adoption of a Declaration on Neurorights and Principles, similar to the European Declaration on Digital Rights and Principles, and include them in human rights frameworks
  • Adjust and promote the more effective enforcement of existing legal frameworks

You can also find a report consolidating the recommendations in the four briefs here.

——

TechEthos is led by AIT Austrian Institute of Technology and will be carried out by a team of ten scientific institutions and six science engagement organisations from 13 European countries over a three-year period. ALLEA is a partner in the consortium of this project and will contribute to enhancing existing legal and ethical frameworks, ensuring that TechEthos outputs are in line with and may complement future updates to The European Code of Conduct for Research Integrity.

“Questions, Not Answers, Are Better Suited to Start a Reflection on Ethical Issues”

Technology has immense power to shape our world in a variety of spheres, from communication to education, work, health, transportation, climate, politics, and security. New and innovative technologies with such great potential for wide socio-cultural and economic impact (often referred to as “emerging technologies”) are thus often fraught with ethical questions, ranging from concerns about privacy breaches to manipulation, fairness, and the exacerbation of power gaps and exploitation. Because they could affect every aspect of our lives, it is important to acknowledge and address these ethical questions at the outset, as early as possible in the process of technological design and implementation.

In this relatively nascent field of emerging technologies and ethics, TechEthos (Ethics for Technologies with High Socio-Economic Impact), a Horizon 2020-funded project, published a report on the ethical issues that need to be considered for three technology families: Digital eXtended Reality, including the techniques of visually eXtended Reality (XR) and the techniques of Natural Language Processing (NLP), neurotechnologies, and climate engineering, including Carbon Dioxide Removal (CDR) and Solar Radiation Management (SRM).

Dr Laurynas Adomaitis, Tech Ethicist, CEA

In this Digital Salon interview, we speak with the lead author of the report, Dr Laurynas Adomaitis, Tech Ethics Researcher at Commissariat à l’Énergie Atomique et aux Énergies Alternatives (CEA), on the ethical dilemmas inherent to emerging technologies, how researchers can effectively use the tools in the report, and the role for policymakers and funding organisations in promoting the integration of ethics into every stage of technology research.

 

Question: Are the core ethical dilemmas in emerging technologies fundamentally similar to ethical considerations inherent to all research? How are they different?

Laurynas Adomaitis: Emerging technologies are often based in research, so there definitely is overlap between the core dilemmas we discuss in research ethics. For example, while looking at climate engineering, we discovered that one point of contention was whether research into Solar Radiation Management (reflecting/refracting solar energy back into space) is ethically justified. One of the arguments against it is that researching such techniques presents the world with a “plan B”, which may distract from climate change mitigation efforts.

We also found a lot of issues with consent in XR (extended reality) and neurotech, which cuts across research ethics. For example, there are ethical concerns with so-called “deadbots” – chatbots constructed based on conversational data from deceased individuals. How is consent possible for an application that did not exist when the person was conscious? Likewise, in neurotech we must be aware of changing people’s mental states. For example, sometimes a treatment is required before consent can be given, but then can it be revoked by the patient? Or, if a BCI (brain-computer interface) changes a person’s mental states, can it also change how they feel about consent?

 

“Each technology family has many issues and at least one beastly challenge to conquer.”

 

Q: Which of the three technology families did you find particularly fraught with ethical issues? Why?

LA: The three technology families – XR, neurotech, and climate engineering – are at very different stages of development. Many applications in XR are already in production and available to the public; neurotech is starting in medical tests but is mainly based on future promise, whereas climate engineering is only beginning to be explored with huge issues on the horizon.

Each technology family has many issues and at least one beastly challenge to conquer. For climate engineering, it’s irreversibility – can we make irrevocable changes to the planet? For neurotech, it’s autonomy – how can we enhance cognitive abilities while respecting independent and free thinking? For XR, it’s a set of particular issues, like nudging, manipulation, deep fakes, concerns about fairness, and others. I think it’s a wider array of issues for XR because it is already hitting the reality of implementation, where many practical problems arise. There are even skeptical researchers who think that virtual realities should not exist at all because of the moral corruption they may cause, especially with children. This fundamental issue still lingers, spurring the need for empirical studies.

 

Q: What were some overarching ethical themes common to all three technology families?

LA: There are cross-cutting issues that relate to uncertainty, novelty, power, and justice. But the most important aspect that kept reappearing was the narratives about new technologies found in lay reactions to them.

We used a framework to elucidate this in the report that was developed in the DEEPEN (Deepening ethical engagement and participation in emerging Nanotechnologies) project over 10 years ago. It worked very well in the context of our ethical analysis. Many concerns were along the lines of five tropes of lay reactions to novelty: “Be careful what you wish for”, based on the motifs of exact desire and too big a success; “Messing with Nature”, based on the motifs of irreversibility and power; “Opening Pandora’s box”, based on the motifs of irreversibility and control; “Kept in the dark”, based on the motifs of alienation and powerlessness; and “The rich get richer, the poor get poorer”, based on the motifs of injustice and exploitation. Although these reactions are natural, and sometimes justified, we had to keep asking ourselves whether they are the most pressing ones. It’s still astonishing that the same narratives apply across times and technologies.

 

“There are cross-cutting issues that relate to uncertainty, novelty, power, and justice. But the most important aspect that kept reappearing was the narratives about new technologies found in lay reactions to them.”

 

Source: TechEthos Report on the Analysis of Ethical Issues

 

Q: How can the research community best implement the tools/findings in this report?

LA: The report is structured in a hierarchical way, starting with some core dilemmas that are the foundation of reasoning, then there are applications and, finally, values and principles. The value sections are the most important for researchers and practitioners. They cover the key considerations, and each value section ends with a set of questions. We wrote these questions with a researcher in mind. What should one consider when trying to explore, design, and implement the technology? What are the checks and balances with respect to the value in question? We intended these questions to be operationalisable so they offer the best value for implementation.

 

Q: How can policymakers better support the integration of “ethics by design” in emerging technologies?

LA: Technology research should be in step with ethical research on the technologies. The time difference between the development in tech and ethical or policy research creates a divide, where we have to work retroactively, and it’s very inefficient. Imagine if carbon-intensive technology and industry were developed alongside climate preservation from the very beginning. Of course, there have been philosophers and ethicists, like Hans Jonas, as early as the 1970s calling for ecological activism and responsibility for future generations. But they were mavericks and pioneers, working with passion but without support. We should try to open up these perspectives and take them seriously at the policy level when the technologies are emerging.

 

“Technology research should be in step with ethical research on the technologies. The time difference between the development in tech and ethical or policy research creates a divide, where we have to work retroactively, and it’s very inefficient.”

 

Q: What role can funding organisations play in centering ethics in emergent tech?

LA: It’s a difficult question to answer since causality is very uncertain in provoking ethical reflection. Ethical reflection is, as we like to call it, opaque. It’s not always transparent when it happens or why. What will actually cause people – researchers and industry alike – to stop and reflect? In our report, we avoided guidelines or directives that would offer “solutions”. Instead, we focused on questions that should be asked. Questions are better suited for starting a reflection on ethical issues. For example, if you’re building a language model, how will it deal with sensitive historical topics? How will it represent ideology? Will it have equal representation for different cultures and languages?

There is no “one way” to address these challenges, but the questions are important and researchers should at least be aware of them. If the standards for dealing with them are not clear yet, I would prefer to see each research project find their own way of tackling them. That will lead to more original approaches and, if a working consensus is found, standardisation. But the central role played by the funding bodies could be to guide the researchers into the relevant questions and start the reflection. We intended our report to provide some instruction on that.

 


You can read our summary of the TechEthos report by Dr Adomaitis on the analysis of ethical issues in Digital eXtended Reality, neurotechnologies, and climate engineering here, and the full report here. 


New Report Explores the Ethics of Digital eXtended Reality, Neurotechnologies, and Climate Engineering

TechEthos project publishes two analyses of the ethics and laws applicable to the three technology families under study

In June 2022, TechEthos (Ethics for Technologies with High Socio-Economic Impact), a Horizon 2020-funded project, published a draft report on the ethical issues that need to be considered for the three technology families under study:

  • Digital eXtended Reality, including the techniques of visually eXtended Reality (XR) and the techniques of Natural Language Processing (NLP)
  • Neurotechnologies
  • Climate Engineering, including Carbon Dioxide Removal (CDR) and Solar Radiation Management (SRM)

The report, co-authored by tech ethicist Laurynas Adomaitis and physicist Alexei Grinbaum at the Commissariat à l’Énergie Atomique et aux Énergies Alternatives (CEA), along with Dominic Lenzi from the University of Twente (TU), is currently under review by the European Commission. It is based on literature studies, original research, expert consultation, and digital ethnographies.


Source: TechEthos Report on the Analysis of Ethical Issues

In addition to briefly describing the technologies in each family, the report identifies core ethical dilemmas, describes key applications and case studies, identifies ethical values and principles in line with the “ethics by design” methodology (the implementation of ethical, legal, and societal values and principles from the conception to implementation stages of technology design), provides operational checks and balances for each value/principle in the form of questions, and outlines mitigation strategies for each.

The 142-page report is structured into four chapters, comprising an introduction to technology ethics and cross-cutting issues in the three technology families, followed by a deep dive into each one. Some examples of the ethical issues unique to the different technologies include:

  • The impact of digital eXtended Reality on the values and principles of transparency, dignity, privacy, non-manipulation, and responsibility, as well as their relevance for the analysis of risk reduction, environmental impact, dual use and misuse, gender bias, and power and labour relations
  • The lack of human-like reasoning or understanding in NLP systems, spontaneous anthropomorphisation of chatbots, and the influence of artificial emotions on human users
  • The impact of neurotechnologies on the values and principles of autonomy, responsibility, privacy, risk reduction, and informed consent
  • The potential for less costly, but less effective climate engineering solutions to divert resources away from more sustainable, but more expensive initiatives
  • The potential for climate engineering to be more wasteful

Beyond the well-researched and in-depth analysis of the conceptual arguments, there are also helpful use cases and questions that stakeholders can ask when dealing with the ethics of the technologies in each family.

Analysis of international and EU law and policy applied to Digital eXtended Reality, Neurotechnologies, and Climate Engineering

In July 2022, following the analysis of the ethical dilemmas inherent to each technology family studied by TechEthos, a second draft report was published, which delved into the international and EU laws and policies for their relevance and applicability to Digital eXtended Reality, Neurotechnologies, and Climate Engineering. Although there is no dedicated EU or international law governing these three technology families, there do exist several legal frameworks that could be applied to them.

The report reviews these legal domains and related obligations at international and EU levels, identifies the potential implications for fundamental rights and the principles of democracy and the rule of law, and reflects on the issues and challenges existing legal frameworks face in addressing current and future implications of the technologies. The 242-page report covers human rights law, rules on state responsibility, environmental law, climate law, space law, law of the seas, and the law related to artificial intelligence (AI), digital services, and data governance, among others, as they apply to the three technology families.

The report was co-authored by Nicole Santiago, Ben Howkins, Julie Vinders, Rowena Rodrigues, and Zuzanna Warso from Trilateral Research (TRI), Michael Bernstein from the AIT Austrian Institute of Technology, and Gustavo Gonzalez and Andrea Porcari from the Associazione Italiana per la Ricerca Industriale (Airi). It aims to present an evidence base for the TechEthos project’s development of recommendations for policy and legal reform, and is currently being reviewed by the European Commission.

——


How to Integrate Ethics into the Design of Disruptive Technologies

Eva Buchinger – TechEthos coordinator, AIT

Bioengineering, virtual reality, autonomous systems and many other technologies enter into society and our daily lives with the potential to radically transform our work, health, environment, and even our privacy and personal interactions. To reconcile the needs of research and innovation with the concerns and aspirations of society, ethical and societal considerations should be built into research and development practices from the outset.

TechEthos is an EU-funded project that seeks to create ethics guidelines to deal with this type of new and emerging technologies with a high socio-economic impact. Eva Buchinger (Austrian Institute of Technology, AIT) is the lead coordinator of the project. In this interview, she presents the key concepts tackled by TechEthos and its expected impact. The project started in January 2021 and will run until the end of 2023. 

 

Question: What are the aims and rationale of the TechEthos project? 

Eva Buchinger: TechEthos aims to facilitate “ethics by design”, namely, to bring ethical and societal values into the design and development of new and emerging technologies from the very beginning of the process. The project will provide ethics guidelines for 3-4 selected technologies. To reconcile the needs of research and innovation and the concerns of society, the project will explore the awareness, acceptance and aspirations of academia, industry and the general public alike.

TechEthos aims to facilitate “ethics by design”.

Q.: What kind of technologies are you looking at and why? Can you give one example and describe why their ethics dimensions are so significant?

E. B.: We will be looking at new and emerging technologies with a high socio-economic impact and significant ethics dimensions. That is, part of our work will be identifying technologies that are socially, economically and ethically (potentially) disruptive.  

“Disruption” is thereby understood as a generic term, referring to a significant change, be it positive or negative. We will decide which high-impact technologies we will focus on in TechEthos at the end of the project’s first phase in July 2021. This decision will be informed by a horizon scanning process consisting of a meta-analysis combined with an expert-based impact assessment. We will consider a broad set of technologies ranging from bioengineering to cognitive technologies and smart materials.

As for now, TechEthos understands the “ethics dimension” as relating to fundamental principles such as human rights, privacy and autonomy as well as specific concerns related to health, environment and human interactions.

Q.: What kind of impact does the project expect to have for policy and the research community?

E. B.: TechEthos is explicitly designed to serve researchers from academia and industry, research ethics committees and research integrity bodies, and governance agents such as standardization bodies, regulators, and policymakers. This will be achieved by developing operational guidelines, codes, and other ethical tools, engaging in the process with the wide range of ethical codes and guidelines that currently exist for the target technologies. This will serve as the basis for constructive interpretation and guide the determination of how to enhance existing frameworks or supplement existing practices with new guidelines.

The goal is to create a set of principles that are action-oriented for the above-mentioned users. Given the wide range of possible technologies, it is impossible to fully anticipate how the various codes or guidelines will be constructed in advance. However, the methodology we are adopting is sufficiently flexible to accommodate a variety of scenarios. 

TechEthos is explicitly designed to serve researchers, ethics bodies, and policymakers.

Q.: Who is involved and why is this the best consortium to achieve the project’s aims? 

E. B.: The TechEthos consortium benefits from the diversity of its partners as well as approaches. The project consists of ten scientific partners and six science engagement organisations representing 14 countries from all over Europe. The project will additionally involve a broad range of stakeholders from academia, industry, policy, and civil society. These stakeholders will contribute through interactive formats such as interviews, surveys, workshops, scenario exercises and games, and exhibitions.

The scientific partners are universities (De Montfort University, Technische Universiteit Delft, Universiteit Twente); applied research institutions (Associazione per la Ricerca Industriale, Austrian Institute of Technology, CEA Commissariat à l’énergie atomique et aux énergies alternatives, Trilateral Research); and associations specialising in research ethics (ALLEA, the European Federation of Academies of Sciences and Humanities, and EUREC, the European Network of Research Ethics Committees Office).

The science engagement organisations are supervised by ECSITE (Association européenne des expositions scientifiques techniques et industrielles) and located in six European countries: Science Center Network Austria, iQLANDIA Science Popularization Centre, Bucharest Science Festival, Centre for the Promotion of Science, Parque de las Ciencias, and Vetenskap & Allmänhet Public & Science. All of them have outstanding expertise in dealing with the ethics of new and emerging technologies.

The well-balanced composition of the consortium, together with the project’s participative multi-stakeholder approach, provides an excellent basis to achieve TechEthos’s aims.

Q.: What have been the best and worst moments in coordinating a collaborative H2020 project so far?

E. B.: The best experience in coordinating such a diverse consortium is to know that we are working with the top specialists in the field to reach our highly ambitious goals. The greatest challenge may be the unavoidable moments of utmost tension before this wonderful diverse pool of expertise and excellence synergizes into an operational solution.
