AI auditing and impact assessment according to the UK Information Commissioner's Office

As the use of data and artificial intelligence systems becomes crucial to core services and businesses, it increasingly demands a complex, multi-stakeholder approach to governance. The Information Commissioner's Office's "Guidance on the AI auditing framework: draft guidance for consultation" is an advance in AI governance. The aim of this initiative is to produce guidance that covers both technical components of governance (e.g., system impact assessments) and non-technical ones (e.g., human oversight), and it represents a significant milestone in the movement towards standardising AI governance. This paper summarises and critically evaluates the ICO's effort, seeks to anticipate future debates, and presents some general recommendations.

https://link.springer.com/article/10.1007/s43681-021-00039-2

Why machines cannot be moral

The fact that real-world decisions made by artificial intelligences (AI) are often ethically loaded has led a number of authorities to advocate the development of "moral machines". I argue that the project of building "ethics" "into" machines presupposes a mistaken understanding of the nature of ethics. Drawing on the work of the Australian philosopher Raimond Gaita, I argue that ethical dilemmas are problems for particular people and not (only) problems for everyone who faces a similar situation. Moreover, the force of an ethical claim depends in part on the life history of the person making it. For these two reasons, machines could at best be designed to provide a shallow simulacrum of ethics, which would have limited utility in confronting the ethical and political dilemmas associated with AI.

https://link.springer.com/epdf/10.1007/s00146-020-01132-6?sharing_token=MCflXPvE6Zhx8XLZQ3v7wfe4RwlQNchNByi7wbcMAY77XkFDBtIjSQXBdOwfT-6VFxK6j_84KzP8NbhzYbH-z1xYL4D7A6mKeEQUvRg6Fcf5JzMhRnj106oUrimEEERgjDenm25MyCymVpdhWMi5WMVDU8pww714b-EX-lsm7Q0%3D

The ethics of facial recognition technology

This is a comprehensive presentation of the main ethical issues in the debates over facial recognition technology. After defining the basic terms (face detection, face characterization, face verification and face identification), the following topics are discussed: standards, measures and disproportionately distributed harms; erosion of trust; the ethical harms associated with perfect facial surveillance; alienation, dehumanization and loss of control; and the slippery-slope debate.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3762185

How states and platforms deal with Covid-19 disinformation: an exploratory study of 20 countries

The working papers of the Global Digital Human Rights Network have both an idealistic and a practice-oriented dimension. It is often difficult, but always unavoidable, for academia to reach the "real world". Scholars working on digital human rights have for some time realized that in the digital human rights domain theory matters less and technical solutions matter more. The Working Paper series, idealistic once again, tries to reverse this pattern. How pragmatic this goal is depends on the Network's ability to break, or at least challenge, the entrenchment of online companies as powerful actors in defining the image of human rights in the digital landscape. The current inaugural edition clearly shows how turbulent times accelerate the solidification of the novel "digital paradigm" in human rights protection. What in ordinary times would have taken decades can emerge as a major trend in a short period because of the pandemic crisis. This means the "normalization" of features that were previously considered contestable. For example, the absence of transparency and predictability as inherent features of private content governance was long tacitly accepted because the focus shifted from the process of content assessment to its outcome. But in difficult times, people expect answers and justification for the decisions that affect how they can communicate.

https://leibniz-hbi.de/uploads/media/default/cms/media/fi1c9mo_GDHRNet_Working%20Paper1.pdf

The ethics of algorithms: key problems and solutions

Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016 (Mittelstadt et al. 2016). Its goals are to contribute to the debate on identifying and analysing the ethical implications of algorithms, to provide an updated analysis of epistemic and normative concerns, and to offer actionable guidance for the governance of the design, development and deployment of algorithms.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3662302

Edited by Aniceto Pérez y Madrid, Specialist in Artificial Intelligence Ethics and Editor of Actualidad Deep Learning (@forodeeplearn).

The articles published here are included for their estimated relevance and do not necessarily express the views of the Editor of this Blog.

The Fiduciary Model of Privacy

This essay summarizes and restates the theory of information fiduciaries and the fiduciary model of privacy.

In the digital age, people are increasingly dependent on, and vulnerable to, the digital companies that collect data from them. Companies use this data to predict and control what people do and to sell third parties access to them. Because of the vulnerability and dependence created by information capitalism, the law should treat digital companies that collect and use end-user data as information fiduciaries. The fiduciary model is part of a broader trend in privacy law that views privacy in terms of relationships of loyalty and trust.

Information fiduciaries have three basic kinds of duties toward their end users: a duty of confidentiality, a duty of care, and a duty of loyalty. These fiduciary duties should also "run with the data": digital companies must ensure that anyone with whom they share the data, or who uses it, is equally trustworthy and legally bound by the same requirements of confidentiality, care, and loyalty.

The fiduciary model has important consequences for Fourth Amendment law. It limits the application of the third-party doctrine to those people and companies that are not our information fiduciaries. If we hand our data to an information fiduciary, by contrast, the government must obtain a warrant to access it, because we have a reasonable expectation that our fiduciary has a responsibility not to betray us. In this way, the fiduciary model helps preserve our security against the government as we hand over ever more information about ourselves to digital companies. It prevents our constitutional rights from continually contracting in the digital age.

The fiduciary model is fully consistent with the fiduciary duties that corporate managers owe to shareholders. Once implemented, however, the fiduciary model will transform existing business models and have systemic effects. Its central purpose is to give digital companies legal incentives to act in the interests of their end users, interests they often claim to respect but in practice do not.

The fiduciary model is not a substitute for competition-law reforms or antitrust regulation. Quite the contrary: reformers must proceed on multiple fronts. The power of digital companies arose from changes in many different areas of law during the Second Gilded Age, and addressing that power will also require reforms in many different areas. Attending only to privacy reform will therefore leave crucial problems of economic concentration unaddressed. But the converse is also true. Focusing only on antitrust and competition policies may fail to solve, or may even exacerbate, important threats to digital privacy.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3700087

Edited by Aniceto Pérez y Madrid, Specialist in Artificial Intelligence Ethics and Editor of Actualidad Deep Learning (@forodeeplearn).

The articles published here are included for their estimated relevance and do not necessarily express the views of the Editor of this Blog.

Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI

This interesting article analyses how European legislation and case law address discrimination.

The regulation consists of four directives: the Racial Equality Directive, the Gender Equality Directive, the Gender Access Directive, and the Employment Equality Directive. European directives are not directly applicable; they must be transposed by the Member States, which results in non-uniform regulation.

Discrimination may be direct, against a specific person, or indirect, against one group relative to another.

In discrimination claims, whether brought by a group or by an individual, a comparator is sought, together with a group against which the claimant alleges being disadvantaged. The article reviews several cases of the European Court of Justice. Courts normally focus on the disadvantaged group and on the comparator, and rarely attend to the advantaged group. Rulings rest on the application of common sense. Statistical indicators submitted as evidence are generally ignored, on the grounds that decisions are context-dependent.

The paper proposes an indicator called Conditional Demographic Disparity (CDD), which takes into account both the advantaged and the disadvantaged members of both the advantaged and the disadvantaged groups.
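As a rough illustration, here is a minimal Python sketch of how such an indicator can be computed, following the paper's idea: the difference between a group's share of the disadvantaged outcomes and its share of the advantaged outcomes, computed within each stratum of a legitimate explanatory attribute and then averaged, weighted by stratum size. The record layout, field names and toy data are illustrative assumptions, not taken from the paper.

```python
from collections import defaultdict

def demographic_disparity(records, group):
    """DD for `group`: its share of rejections minus its share of acceptances."""
    rejected = [r for r in records if not r["accepted"]]
    accepted = [r for r in records if r["accepted"]]
    if not rejected or not accepted:
        return 0.0  # disparity is undefined in a one-sided stratum; treat as neutral
    share_rejected = sum(r["group"] == group for r in rejected) / len(rejected)
    share_accepted = sum(r["group"] == group for r in accepted) / len(accepted)
    return share_rejected - share_accepted

def conditional_demographic_disparity(records, group, stratum_key):
    """CDD: demographic disparity computed per stratum of a legitimate
    explanatory attribute, then averaged weighted by stratum size."""
    strata = defaultdict(list)
    for r in records:
        strata[r[stratum_key]].append(r)
    n = len(records)
    return sum(len(rs) / n * demographic_disparity(rs, group)
               for rs in strata.values())

# Toy, entirely hypothetical loan decisions, stratified by income band.
records = [
    {"group": "A", "accepted": True,  "income_band": "high"},
    {"group": "A", "accepted": True,  "income_band": "high"},
    {"group": "B", "accepted": False, "income_band": "high"},
    {"group": "A", "accepted": False, "income_band": "low"},
    {"group": "B", "accepted": False, "income_band": "low"},
    {"group": "B", "accepted": True,  "income_band": "low"},
]
print(conditional_demographic_disparity(records, "B", "income_band"))  # 0.25
```

A positive value in this toy run means group "B" is over-represented among rejections even after conditioning on income band, which is the kind of summary evidence the authors argue courts could weigh alongside the contextual assessment.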

Statisticians and AI practitioners have studied the question of fairness extensively, and the judiciary has ignored that field. Conversely, the latter know well the complications of administering justice in discrimination cases, and the former do not. The article therefore advocates joint work between technologists and jurists to produce statistical information that, even if it is not an absolute numerical measure of the fairness of a situation, can be useful in allowing judges to decide in a better-informed way.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3547922

Adapted by Aniceto Pérez y Madrid, Philosopher of Technologies and Editor of Actualidad Deep Learning (@forodeeplearn)

The OECD's contribution to human-centred AI

"Management by algorithm" is already a reality in the workplace, and it can lead to highly intrusive practices. Software and hardware make it possible to build tools that control work performance through monitoring in ways unthinkable in the past, and to collect enormous amounts of data on workers' activity (Electronic Performance Monitoring, EPM), allowing a low-performing employee to be dismissed automatically without verification by a supervisor.

Intensified forms of monitoring of factory and office workers include email monitoring, phone tapping, tracking of computer content and usage time, video monitoring, and GPS tracking. On this basis, "People Analytics" is carried out.

EPM and People Analytics can be used legitimately to foster productivity or improve safety. Wearable devices can be used to improve health, but they can also enable serious intrusions into private life that violate privacy.

Moreover, the idea that AI will lead to more objective, bias-free management practices may be substantially wrong. An extensive literature describes how these algorithms can perpetuate discriminatory practices and the marginalization of vulnerable groups, especially when data collection is poor. Self-learning mechanisms, in which the software chooses its own criteria, can turn out even worse. The lack of transparency and the risk of dehumanizing work would be exacerbated.

The OECD guidance calls on governments to "take steps, including through social dialogue, to ensure a fair transition for workers as AI is deployed" and to "work with stakeholders to promote the responsible use of AI at work and to enhance the safety of workers and the quality of jobs".

Regulation is needed to govern the data collected on work performance and on workers' personal characteristics, including how such data are collected and processed, and to prohibit intrusive monitoring. Any decision affecting workers should be taken under human supervision.

In 2017 the EU advocated a "human-in-command" approach to AI. The deployment of AI must be responsible, safe and useful, where machines remain machines and people retain control over machines. This approach should be followed strictly where work is concerned. The right not to be subject to automated decision-making without human intervention is making its way into supranational regulation. Individuals should not be left alone to grapple with the intricacies of these technologies when they want to understand and contest their consequences and applications.

Governments have an essential role to play in regulation; tax incentives, for example, can be used. Workers, trade unions and managers need to be adequately trained in the challenges and opportunities of this technology.

https://poseidon01.ssrn.com/delivery.php?ID=622087009029070101099011072022002024022042010014033020088068074077091106091083109005011118030002007113008089081006076102118083001006043079004112097024114093068113104019076008081002086014064113072117121028104105093100120127095066123113007029093115096020&EXT=pdf

by Aniceto Pérez y Madrid, Philosopher of Technologies and Editor of Actualidad Deep Learning (@forodeeplearn)

Artificial Intelligence and Public Standards

This report seeks to articulate rules for AI operating in public service. It is the result of many meetings and debates, and it contains many interesting quotations from relevant figures. It includes a series of recommendations so that AI upholds the same (Nolan) principles as public service delivered by humans.

“Artificial Intelligence is one of the most transformative forces of our time, and is bound to alter the fabric of society.” European Commission, Independent High-Level Expert Group on AI

The Data Ethics Framework principles
 1. Start with clear user need and public benefit
 2. Be aware of relevant legislation and codes of practice
 3. Use data that is proportionate to the user need
 4. Understand the limitations of the data
 5. Ensure robust practices and work within your skillset
 6. Make your work transparent and be accountable
 7. Embed data use responsibly.

“When decision systems are introduced into public contexts such as criminal justice, it is important they are subject to the scrutiny expected in a democratic society. Algorithmic systems have been criticised on this front, as when developed in secretive circumstances or outsourced to private entities, they can be construed as rulemaking not subject to appropriate procedural safeguards or societal oversight.” Law Society Report, Algorithms in the Criminal Justice System

“States should engage in inclusive, interdisciplinary, informed and public debates to define what areas of public services profoundly affecting access to or exercise of human rights may not be appropriately determined, decided or optimised through algorithmic systems.” The Council of Europe’s draft Guidelines for States on actions to be taken vis-à-vis the human rights impacts of algorithmic systems

“We are not aware of any body with systematic knowledge of where automated decision-making tools are being used in the public sector.” Centre for Data Ethics and Innovation

“There is a serious lack of transparency and concomitant lack of accountability about how the police and other law enforcement agencies are already using these technologies.” Professor Karen Yeung, Interdisciplinary Professorial Fellow in Law, Ethics and Informatics, University of Birmingham Law School and School of Computer Science

“Transparency – and therefore accountability – over the way in which public money is spent remains a very grey area in the UK…People are convinced that the growth of technology in the public sector has hugely important ramifications, but are baffled as to what exactly is going on and who is doing it.” Dr Crofton Black, Government Data Systems: The Bureau Investigates, The Bureau of Investigative Journalism

“Much of the public simply don’t yet know enough about how AI or automation works, or where innovations might be used, to make an informed decision on whether they support or oppose them. This creates a vacuum of information, into which negative narratives about Britain’s future are just as likely to take root as positive ones.” Mark Kleinman, Professor of Public Policy and Director of Analysis at the Policy Institute, King’s College London

“When you have a non-human decisionmaker, can you always ascribe the outcome to a human? If you cannot then you have a gap where there is no legal liability. One could stretch existing laws around negligence and vicarious liability, but the more independently AI takes decisions, the harder it will be to tie decisions back to human beings.” Jacob Turner, Barrister and Author of Robot Rules: Regulating Artificial Intelligence

“Rather than focusing on the concept of humans-in-the-loop, we need to think carefully about the end-to-end process and ensure that we think about how AI and humans work together to deliver efficiencies and better results.” Sana Khareghani, Head, Office for AI

“If you are saying that there may be some decisions that need to be made so rapidly that the machine makes the decision (if it has been appropriately codified), there is still human accountability at the design stage and in the verification and validation of the AI system before it is put into use. This means you may not have an accountability gap as ultimately a human is still accountable at the design and testing stages.” Fiona Butcher, Fellow, Defence Science and Technology Laboratory, Ministry of Defence

“The fact that we cannot always explain how an AI system made a decision and whether that process was adequate challenges public servants’ ability to make decisions in an open and transparent manner.” Leverhulme Centre for the Future of Intelligence, University of Cambridge

“If you stick with a simpler model which is inherently interpretable, you are not going to sacrifice that much on accuracy but you are going to keep the benefits of understanding the variables you are using and understanding how the model works.” Dr Reuben Binns, Postdoctoral Research Fellow in AI, ICO

“I think something we need to be challenging ourselves on is whether the lack of transparency and the lack of explainability is a real necessity for the system or whether it is bad design…sometimes there is a challenge to be made of vendors and people who are building the system.” Simon McDougall, Executive Director, Technology Policy and Innovation, ICO 

“Claims about what is technically (im)possible should be treated with caution. Our engagement with industry to date suggests that, if a degree of explainability is made a priority from the outset by its commissioner, it can be built in.” Centre for Data Ethics and Innovation

“The incorporation of an AI tool into a decisionmaking process may come with the risk of creating ‘substantial’ or ‘genuine’ doubt as to why decisions were made and what conclusions were reached…consideration should be given to the circumstances in which reasons for an explanation of the output may be required.” Marion Oswald, Senior Fellow in Law and Director of the Centre for Information Rights, University of Winchester

“There is a very old adage in computer science that sums up many of the concerns around AI enabled public services: ‘Garbage in, garbage out.’ In other words, if you put poor, partial, flawed data into a computer it will mindlessly follow its programming and output poor, partial, flawed computations. AI is a statistical-inference technology that learns by example. This means if we allow AI systems to learn from ‘garbage’ examples, then we will end up with a statistical-inference model that is really good at producing ‘garbage’ inferences.” British Computer Society

“Decision-making, algorithmic or otherwise, can of course also be biased against characteristics which may not be protected in law, but which may be considered unfair, such as socio-economic background. In addition, the use of algorithms increases the chances of discrimination against characteristics that are not obvious or visible. For example, an algorithm might be effective at identifying people who lack financial literacy and use this to set interest rates or repayment terms.” Centre for Data Ethics and Innovation, Interim Report on Data Bias

“The statistics speak for themselves. We know that you are eight times more likely to be subject to stop and search in the UK if you are black. If you are building an algorithm on these statistics, that is a huge problem.” Sandra Wachter, Associate Professor and Senior Research Fellow, Oxford Internet Institute

“Some of our existing systems are designed in a way that makes it impossible to measure bias…One of the good things about machine learning technologies is that they have exposed some bias which has always been there.” Professor Helen Margetts, Professor of Society and the Internet at the University of Oxford and Director of the Public Policy Programme, The Alan Turing Institute

“Right now we are more likely to be replacing a human process with an AI process. All us humans are bringing a whole suitcase of preconceptions, prejudices and baggage along with us to that decision, some conscious and some unconscious. As we talk around bias in AI – and there is plenty of stuff to talk about – we have to keep in mind we are not moving from a beautiful neutral model.” Simon McDougall, Executive Director, Technology Policy and Innovation, ICO

“I think we have to start from the point of view that we are dealing with biased systems usually anyway. It is one of the hopes of artificial intelligence that it might be able to reduce bias in certain areas and, certainly, provide lots more ways of systematically thinking about measuring that bias.” Dr Jonathan Bright, Senior Research Fellow, Oxford Internet Institute

“There will be new jobs for humans to work out what machines are doing. And this is where it comes back to diversity – those humans in the loop must be diverse, so they can see the true range of possible impacts the machine is having.” Professor Dame Wendy Hall, Regius Professor of Computer Science, University of Southampton and co-author, UK government AI review

“What we might want to say is ‘it is unacceptable not to know the ways in which your system is biased, and you are then required to account for how you use and understand the results of that system in that context.’ You need to be able to provide a justification and that justification has to be subject to scrutiny and challenge.” Oliver Buckley, Executive Director, CDEI 

“A draft tool we have looked at (at West Midlands Police) had intelligence information built in as input factors, including things like the number of stop and search counts, and that raised red flags around what that could be a proxy for in that particular region.” Marion Oswald, Senior Fellow in Law and Director of the Centre for Information Rights, University of Winchester

“I’m not convinced that human cleansing of data adequately answers this problem. When we remove certain data points, how are we sure that we are making a dataset less biased? Whose rules are being used, why and who is saying that those rules are the right ones?” Sana Khareghani, Head, Office for AI

“A hallmark of good governance is the development of shared values, which become part of the organisation’s culture, underpinning policy and behaviour throughout the organisation, from the governing body to all staff.” The Independent Commission on Good Governance

“The guidelines and advice are the shared responsibility of the Office for AI in BEIS, and the Government Digital Service. The OAI is also responsible for promoting the development of AI technologies and industries, and so has a conflicting interest, and the GDS has wide responsibilities to support digitalization of central government. It seems unlikely that either organisation has the capacity or remit to ensure robust and consistent ethical supervision on broader questions of automated decision system adoption and use in public policy, including their use outside central government.” Dr Emma Carmel, Associate Professor, Social and Policy Sciences, University of Bath

“[It is] not adequate to employ technical legal arguments to ‘cobble together’ an ‘implicit’ lawful basis, given that power, scale and intrusiveness of these technologies create serious threats to the rights and freedoms of individuals, and to the collective foundations of our democratic freedoms.” Professor Karen Yeung, Interdisciplinary Professorial Fellow in Law, Ethics and Informatics, University of Birmingham Law School and School of Computer Science

“[H]uman involvement has to be active and not just a token gesture. The question is whether a human reviews the decision before it is applied and has discretion to alter it or whether they are simply applying the decision taken by the automated system.” What does the GDPR say about automated decision-making and profiling? ICO

“You need to be able to give an individual an explanation of a fully automated decision to enable their rights, to obtain meaningful information, express their point of view and contest the decision.” ICO Guidance, Why Explain AI, Project ExplAIn

“Although predictive policing is simply reproducing and magnifying the same patterns of discrimination that policing has historically reflected, filtering this decisionmaking process through complex software that few people understand lends unwarranted legitimacy to biased policing strategies that disproportionately focus on BAME and lower income communities.” Policing by Machine, Liberty

“In 2017, Durham Constabulary started to implement a Harm Assessment Risk Tool (HART), which utilised a complex machine learning algorithm to classify individuals according to their risk of committing violent or non-violent crimes in the future. This classification is created by examining an individual’s age, gender and postcode. This information is then used by the custody officer, so a human decision maker, to determine whether further action should be taken. In particular, whether an individual should access the Constabulary’s Checkpoint programme which is an “out of court” disposal programme. There is potential for numerous claims here. A direct age discrimination could be brought by individuals within certain age groups who were scored negatively. Similarly, direct sex discrimination claims could be brought by men, in so far as their gender leads to a lower score than comparable women. Finally, indirect race discrimination or direct race discrimination claims could be pursued on the basis that an individual’s postcode can be a proxy for certain racial groups. Only an indirect race discrimination claim would be susceptible to a justification defence in these circumstances.” AI Law Hub

“Public bodies must consider the Public Sector Equality Duty when they make decisions about how they fulfil their public functions and deliver their services. When moving towards automated decision making the PSED provides an opportunity for equality considerations to be built into decision-making processes as they are developed.” Rebecca Hilsenrath, Chief Executive, Equality and Human Rights Commission 

“People often say ‘Let’s have a new regulator. Let’s have a new, shiny one.’ Actually, there is a lot of expertise already in the regulators because they are having to deal with this kind of thing in markets which they are there to regulate. We ought to build on that and use the expertise we have got.” Professor Helen Margetts, Professor of Society and the Internet, University of Oxford and Director of the Public Policy Programme, The Alan Turing Institute

“The Cabinet Office should reinforce the message that the Seven Principles of Public Life apply to any organisation delivering public services. The Cabinet Office should ensure that ethical standards reflecting the Seven Principles of Public Life are addressed in contractual arrangements, with providers required to undertake that they have the structures and arrangements in place to support this. Commissioners of services should include a Statement of Intent as part of the commissioning process or alongside contracts where they are extended, setting out the ethical behaviours expected by government of the service providers.” Recommendations from the Committee’s 2014 and 2018 reports into providers of public services

“Ethical standards are definitely not part of the procurement process at this point in time.” Ian O’Gara, Accenture

“Assertions of commercial confidentiality should not be accepted as an insurmountable barrier to appropriate rights of access to the [algorithmic] tool and its workings for the public sector body, particularly where the tool’s implementation will impact fundamental rights. Government procurement contracts relating to AI and machine learning should not only include source code escrow provisions, but rights for the public sector party…as standard.” Marion Oswald, Senior Fellow in Law and Director of the Centre for Information Rights, University of Winchester

“Public servants must be incentivised in some way to carry out impact assessments and act upon their results, without being constrained from adopting beneficial innovation.” Centre for Data Ethics and Innovation

“The AIA provides designers with a measure to evaluate AI solutions from an ethical and human perspective, so that they are built in a responsible and transparent way. For example, the AIA can ensure economic interests are balanced against environmental sustainability. The AIA also includes ways to measure potential impacts to the public, and outlines appropriate courses of action, like behavioral monitoring and algorithm assessments.” Canadian Government Video on AIA

Will you have documented processes in place to test datasets against biases and other unexpected outcomes? This could include experience in applying frameworks, methods, guidelines or other assessment tools. Will you be developing a process to document how data quality issues were resolved during the design process? Will you be making this information publicly available? Will you undertake a Gender Based Analysis Plus of the data? Questions on data quality taken from Canada’s Algorithmic Impact Assessment 

Goal-Setting and Objective-Mapping: How are you defining the outcome (the target variable) that the system is optimising for? Is this a fair, reasonable, and widely acceptable definition? Does the target variable (or its measurable proxy) reflect a reasonable and justifiable translation of the project’s objective into the statistical frame? Is this translation justifiable given the general purpose of the project and the potential impacts that the outcomes of its implementation will have on the communities involved? Questions taken from the UK government guidance’s Stakeholder Impact Assessment

“We note the recommendation by the Law Society that a national register of automated decision making tools in use in criminal justice be established. Subject to appropriate exceptions, thresholds and safeguards, this would appear to support the Nolan Principles and would facilitate impact assessment of public sector ADMTs. Such a register may be appropriate in other parts of the public sector.” Centre for Data Ethics and Innovation

“You can imagine a scenario where things go wrong because the public sector has implemented some AI technology because it is shiny, cool and exciting rather than helpful.” Eddie Copeland, Director, London Office of Technology and Innovation (LOTI)

“Humans must be ultimately responsible for decisions made by any system…Good governance will require for each use case, a specific understanding of the appropriate division of responsibilities.” Centre for Data Ethics and Innovation

“The person [needs to have] both the agency and the knowledge necessary to make changes to the system’s behaviour and to intervene when it seems like something is going to go wrong.” Dr Brent Mittelstadt, Research Fellow and British Academy Postdoctoral Fellow, Oxford Internet Institute

“Another concern is when you have systems that continue to learn through interaction with the user. There is the potential for a user to either maliciously poison the training data or to be mischievous in the way that they train the system thereby influencing the way it develops in the future.” Fiona Butcher, Fellow, Defence Science and Technology Laboratory, Ministry of Defence

“It is unclear whether civil society organisations have the capacity to engage in meaningful oversight, particularly given the rapidity with which different systems are being deployed across the sector and across the world.” Law Society Report, Algorithms in the Criminal Justice System

“We use oversight bodies to assure ourselves that we have consent from the public because we know that the people who are most likely to be adversely affected by AI are less likely to come forward and present their views. We use oversight bodies, scrutiny panels and independent advisory groups to be representative of those communities.” Superintendent Chris Todd, West Midlands Police

Working with the right skills to assess AI: When identifying whether AI is the right solution, it’s important that you work with:
• specialists who have a good knowledge of your data and the problem you’re trying to solve, such as data scientists
• at least one domain knowledge expert who knows the environment where you will be deploying the AI model results.
Office for AI Guidance, Assessing if artificial intelligence is the right solution

“From the perspective of the judiciary or the courts, I think education is the starting point… we are going to have to do a lot of work to develop effective training, knowledge systems and skills systems, to enable judges as well as the Court Service staff to understand the implications of the operations of the systems.” John Sorabji, Principal Legal Adviser to the Lord Chief Justice and Master of the Rolls

https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/863657/AI_and_Public_Standards_Web_Version.PDF


by Aniceto Pérez y Madrid, Philosopher of Technologies and Editor of Actualidad Deep Learning (@forodeeplearn)
