Trust is fundamental in any issue involving health, especially when an “artificial” component is inserted into decision-making or operational processes. Incorrect diagnoses or privacy breaches by AI systems are examples of situations that can erode confidence in the relationship between healthcare providers and patients. The growing adoption of AI technologies in different areas of human activity, especially those related to human health and well-being, has led government agencies, international organizations, and the scientific community to raise several questions related to ethics and their implications for algorithms and human-computer interaction (HCI) [88]. In this sense, when using AI it is important to establish principles associated with values such as inclusion, transparency, privacy, responsibility, and reliability. One of the main challenges in building AI systems is to ensure that different users (e.g., managers, health professionals, patients, the general public) have their expectations regarding these values met when using such systems.
The development of AI technologies and applications within the thematic axes must satisfy a series of requirements that preserve human autonomy, security, impartiality, and end-to-end explainability (i.e., permeating all components of the solution) of AI decisions, reinforcing the user’s sense of confidence in the information presented and in the decisions suggested, grounded in the reliability demonstrated by the systems. Therefore, the general objective of this research line is to ensure that the AI strategies developed in the context of the thematic axes preserve the balance required by ethical implications and human values with regard to transparency, privacy and security, responsibility, reliability, impartiality, and explainability. This balance should be considered at all relevant levels, from data-acquisition devices and processes, through database integration methods and machine learning algorithms, to the interfaces through which users communicate with the systems. Usability and accessibility of interfaces are essential for all users to be able to understand and use the system, whether autonomous or not, and, above all, to trust the information presented. Special attention should be given to situations involving vulnerable groups, such as children, people with special needs or in adverse health conditions, groups historically disadvantaged or at risk of exclusion, or situations characterized by asymmetries of power or information.
The use of AI technologies amplifies the challenges in the area of HCI, demanding greater integration between people and systems and taking user interaction to the stage of collaboration between people and autonomous software systems [107]. This expansion brings the need to develop new design and evaluation techniques and methods [108], such as methods that incorporate and balance values for different stakeholders in the design of systems and algorithms, and interfaces that efficiently communicate to users the principles embodied in their behavior (e.g., privacy, impartiality, and transparency, among others with ethical implications) [109]. In addition to considering the profiles, needs, and values of different stakeholders, the design of AI systems for health must consider the conditions of our population, marked by huge differences in socioeconomic and cultural aspects as well as in access to services. Such distinguishing conditions require original and specific solutions that, in addition to the aforementioned aspects, also consider environmental, technological, and temporal factors.
Recent research raises challenges related to the development of metrics and indexes for ethics in AI algorithms [88]. Such metrics and indexes are a fundamental step toward a methodological framework for designing algorithms and tools consistent with predefined ethical specifications, as well as mechanisms for assessing confidence in systems developed from those specifications. In addition to several independent studies addressing aspects such as transparency and bias, recent initiatives seek to standardize a framework for the design of intelligent systems that prioritize privacy and security issues, and to monitor current efforts to integrate ethical issues into the development of AI solutions [88]. Regarding security and privacy, we note that many health organizations choose not to exploit their available data due to the lack of effective solutions. For example, hospitals often want to aggregate their confidential data in order to later train descriptive and predictive models [92]: data from different sources could be processed and rearranged to reveal a patient’s condition and thus improve diagnosis. However, initiatives such as this run into issues related to regulation and the confidentiality of patient data. In this sense, several solutions employing cryptographic protocols and efficient schemes for secure collaborative learning [110], including federated learning, have been proposed to solve the problem of aggregating confidential data [111].
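To illustrate the collaborative-learning direction, the sketch below implements federated averaging in its simplest form: each client (e.g., a hospital) trains on its own records and shares only model parameters with the aggregator, never raw data. The data, model, and parameter choices are hypothetical, and the cryptographic layer (secure aggregation) that the cited protocols add on top is omitted here.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on a
    linear least-squares model, using only that client's data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(weights, client_data, sizes):
    """Server step: aggregate client updates weighted by local
    dataset size (FedAvg). Raw records never leave the clients."""
    updates = [local_update(weights, X, y) for X, y in client_data]
    total = sum(sizes)
    return sum((n / total) * w for n, w in zip(sizes, updates))

# Synthetic example: two "hospitals" holding disjoint patient records
# generated from the same underlying model (hypothetical data).
rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0])
clients = []
for n in (60, 40):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(50):  # communication rounds
    w = federated_average(w, clients, [60, 40])
```

After the communication rounds, the global model approaches the weights that fit the pooled data, even though no client ever shared its records.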
This research track addresses the construction of a framework for AI strategies for health that contemplates the previously mentioned principles related to ethics and human values, always focusing on the stakeholder. These strategies run through, or interact with, all other lines of research in AI. Therefore, we intend to: (1) create a map of issues of relevance to the specific context of AI for health [112]; (2) develop experimental studies with public health data as well as data from private providers (e.g., UNIMED) to identify biases and imbalances of different natures (e.g., race, geographic region, socioeconomic aspects, gender, cultural aspects, language, religion, and previous diseases) [113]; (3) research the requirements for end-to-end transparency and explainability; (4) create mechanisms for maintaining data confidentiality and preserving the privacy of individuals in accordance with the General Law on Protection of Personal Data (LGPD, Brazilian Law No. 13,709/2018); (5) develop new visualization and interaction techniques that translate these ethical principles for different stakeholders, bringing confidence in the results generated; and (6) develop new types of interfaces that can adapt and evolve with individuals according to the patient’s phase of life and health conditions.
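As a minimal example of the imbalance screening envisioned in item (2), the sketch below computes each group’s share of a dataset for one sensitive attribute and flags groups falling far below a uniform share. The attribute names, threshold, and records are illustrative assumptions; a real audit would combine several fairness metrics with domain-expert review.

```python
from collections import Counter

def representation_report(records, attribute, threshold=0.5):
    """Compute each group's share of the dataset for the given
    attribute and flag groups whose share falls below `threshold`
    times a uniform share -- a crude first-pass imbalance signal."""
    counts = Counter(r[attribute] for r in records)
    n = sum(counts.values())
    uniform = 1 / len(counts)
    return {group: {"share": c / n,
                    "underrepresented": c / n < threshold * uniform}
            for group, c in counts.items()}

# Hypothetical toy records; field names and values are illustrative.
records = ([{"region": "Southeast"}] * 70 +
           [{"region": "Northeast"}] * 25 +
           [{"region": "North"}] * 5)
report = representation_report(records, "region")
```

Here the "North" group holds 5% of the records against a 33% uniform share, so it is flagged for review; such a flag is a starting point for investigation, not a verdict of bias.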
In general, our focus will be on finding solutions that address the tradeoffs among the various aspects and requirements already mentioned in order to better meet the needs of different stakeholders, adequately addressing different types of biases and imbalances and relying, whenever possible, on the direct participation of users, that is, the “human-in-the-loop”. It is noteworthy that some of these tradeoffs involve non-trivial challenges and possibly conflicting objectives. For example, building effective AI models calls for using as much data as possible from multiple sources while guaranteeing the privacy of individuals, using techniques such as federated learning, which preserves security and privacy over distributed databases.
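One standard building block for the privacy side of this tradeoff is differential privacy [111]. The sketch below shows the Laplace mechanism for a counting query: noise calibrated to the query’s sensitivity divided by the privacy budget epsilon. The query, data, and epsilon value are illustrative assumptions, not part of the proposal itself.

```python
import random

def dp_count(values, predicate, epsilon, sensitivity=1.0):
    """Answer a counting query with Laplace noise of scale
    sensitivity/epsilon, the standard mechanism for achieving
    epsilon-differential privacy on counts (any one individual
    changes the count by at most `sensitivity`)."""
    true_count = sum(1 for v in values if predicate(v))
    scale = sensitivity / epsilon
    # A Laplace sample is the difference of two iid exponentials.
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_count + noise

# Hypothetical query: how many patients are 65 or older?
ages = [34, 67, 45, 71, 52, 80, 29, 63]
noisy = dp_count(ages, lambda a: a >= 65, epsilon=0.5)
```

Smaller epsilon means stronger privacy but noisier answers, making the utility–privacy tradeoff explicit and tunable per release.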
88. Perrault R, Shoham Y, Brynjolfsson E, Clark J, Etchemendy J, Grosz B, et al. The AI Index 2019 Annual Report. AI Index Steering Committee, Human-Centered AI Institute, Stanford University, Stanford, CA; 2019.
92. Halevy A, Norvig P, Pereira F. The Unreasonable Effectiveness of Data. IEEE Intell Syst. 2009 Mar;24(2):8–12.
107. Farooq U, Grudin J. Human-computer integration. Interactions. 2016 Oct 26;23(6):26–32.
108. Stephanidis C, Salvendy G, Antona M, Chen JYC, Dong J, Duffy VG, et al. Seven HCI Grand Challenges. International Journal of Human–Computer Interaction. 2019 Aug 27;35(14):1229–69.
109. Xu W. Toward human-centered AI: a perspective from human-computer interaction. Interactions. 2019 Jun 26;26(4):42–6.
110. Nikolaenko V, Weinsberg U, Ioannidis S, Joye M, Boneh D, Taft N. Privacy-Preserving Ridge Regression on Hundreds of Millions of Records. In: 2013 IEEE Symposium on Security and Privacy. 2013. p. 334–48.
111. Dwork C. Differential Privacy: A Survey of Results. In: Theory and Applications of Models of Computation. Springer Berlin Heidelberg; 2008. p. 1–19.
112. Mittelstadt BD, Allo P, Taddeo M, Wachter S, Floridi L. The ethics of algorithms: Mapping the debate. Big Data & Society. 2016 Dec 1;3(2):2053951716679679.
113. Schiebinger L. Scientific research must take gender into account. Nature. 2014 Mar 6;507(7490):9.