SIENNA, SHERPA, PANELFIT: Setting future ethical standards for ICT, Big Data, SIS, AI & Robotics



Welcome to this first webinar of the SIENNA, SHERPA and PANELFIT projects: setting future ethical standards for ICT, big data, smart information systems, artificial intelligence and robotics. That's a mouthful. In this shared webinar we want to tell you about the three projects. These are currently the three largest European projects focused on the ethical and human rights aspects of information and communication technology, especially big data, artificial intelligence, and data processing and transfer.

My name is Philip Brey, I am the coordinator of the SIENNA project, and I will start off with a shared introduction to our three projects, also on behalf of the coordinators of SHERPA and PANELFIT. As you can see here, the coordinator of the SHERPA project is Professor Bernd Stahl in the UK, and in Spain we have the coordinators of PANELFIT, Iñigo de Miguel Beriain and Carlos María Romeo Casabona.

Moving to the next slide: these are three European projects funded under the European Union's Horizon 2020 programme. They aim to help improve existing ethical, human rights and legal frameworks for ICT development and research in Europe. They all started almost simultaneously, roughly in the same year, and all three run for three and a half years. Because of the overlap in focus, we have decided to collaborate.

So what do these acronyms actually mean? SIENNA stands for Stakeholder-Informed Ethics for New technologies with high socio-economic and human rights impacts; SHERPA stands for Shaping the Ethical dimensions of smart information systems, a European Perspective; and PANELFIT stands for Participatory Approaches to a New Ethical and Legal Framework for ICT. That doesn't tell you everything about the projects, but it's a start. We are all concerned with the ethical, legal and human rights aspects of information and communication technology, with different emphases and focus.

So why do we collaborate? We want to co-create our outputs with and for policymakers, stakeholders and end users; we think that if we co-create outputs, they will be better and more effective outputs. We also want to deliver complementary guidance for software developers, industry, policymakers, researchers and citizens; again, by developing our guidance together we can achieve more mass and focus. We also want to ensure that the expertise and experience we have can help improve existing policy and the policy-making process, and we want to build synergies for optimal communication and dissemination to maximise the impact of our projects. These are all good reasons to collaborate, we think.

Here you see a picture depicting the different focuses of our projects. We are all concerned with three topics: AI, big data and data sharing, and of course there is a lot of activity in these areas at the moment; everything nowadays seems to be about data and about AI in the ICT field. The SIENNA project, first of all, has perhaps the largest focus on artificial intelligence: it considers the full field of artificial intelligence as well as robotics, and we also look at human enhancement and genomics, so we don't only look at ICT, and we study the ethics and human rights implications. The SHERPA project has a focus on AI and big data, and especially the combination of the two: how big data systems and applications are becoming more and more intelligent, and what the ethical and human rights implications of that are. And then finally, PANELFIT has a big focus on data and data sharing.
It looks broadly at information and communication technology, mostly from a legal perspective, and it looks particularly at data commercialization, informed consent and cybersecurity. So you see there is overlap between these three sets of interests, and we want to exploit that overlap. This webinar marks the start of an active European collaboration on the ethics and governance of emerging digital technologies, and we hope you will enjoy our presentations. After the webinar you will receive an email with more information on how to connect with us and engage with us, because we cannot make these projects successful without your help; stakeholder engagement and interaction are very important to us. So we look forward to your input. Thank you very much.

Hello everybody. My name is Philip Brey, I am the coordinator of the SIENNA project and a professor in ethics at the University of Twente, and I want to tell you a bit about our SIENNA project and how it is focused on developing ethical guidelines for artificial intelligence. I am going to talk about our approach to the development of these guidelines. Of course, ethical guidelines for AI have already been developed by various parties, but we feel we will be the first project to do so on the basis of a thorough academic approach and extensive stakeholder consultation; we actually take three years to do this. So I will tell you about our approach and then about the ethical analysis we have done so far in working towards these guidelines.

What is our approach? First of all, our objective is to produce three sets of ethical guidelines. First, we want to develop policy guidelines for ethical AI, which will be similar to the guidelines recently developed by the High-Level Expert Group on AI, but we want to build on them and provide more detail and rigour. Secondly, we want to develop specific professional guidelines for AI scientists and engineers working on the development of artificial intelligence systems and software. And third, we want to develop guidelines on AI for research ethics committees.

We have an eight-step programme for the development of these guidelines, taking us two and a half years to complete, and we are clearly in the middle of it, working with stakeholders to make this happen. First, we recently completed a socio-economic impact assessment and foresight analysis of possible future developments and impacts of artificial intelligence over the next 20 years, because we want to have as much information as possible on potential future developments of AI: not just looking at AI as it exists now, but at what is likely, or at least plausible, to happen in the coming years, so that we are prepared for future developments. Secondly, we want a good analysis of the legal context of AI, looking at developing international norms, legal orders and national legislation, because we believe we shouldn't develop ethical guidelines and analyses in a vacuum; you want to consider legislation that is already out there or being developed. Third and fourth, we want to know what people think and feel about AI, because we want our guidelines to be as democratic as possible. For that reason we carried out a public opinion survey as well as consulting citizens in panels, five panels actually, in different countries. This is something we recently completed, and we are now at the reporting stage, where we will be writing reports to present our results to the world. We did a public opinion survey interviewing 11,000 people in 11 countries
about their acceptance of artificial intelligence: how do they feel about AI, and how do they think it should, or should not, be developed and used? Then we did the same in more depth in five EU citizen panels, where we got 50 people together, a cross-section of the country in which each panel was held, and asked citizens what they thought.

Currently we are also in the middle of an ethical analysis and evaluation of AI, based on these previous steps as well as on desk analysis and expert workshops; we are inviting experts, especially in ethics but also in the technologies themselves. We also carry out an extensive consultation of the existing academic literature on the ethical aspects of AI. Based on all that, we will this fall propose a new ethical framework for AI technology and, building on that, also examine existing ethical guidelines and codes to eventually develop ethical guidelines based on our ethical framework.

These ethical guidelines will be of three types, as mentioned before. First there will be ethical policy guidelines, on which we will collaborate with the Council of Europe, UNESCO and other advisory and regulatory bodies, especially, of course, in the European Union and its member states. We will also develop operational guidelines on AI for research ethics committees; here we will collaborate with EUREC, the European network of research ethics committees, as well as the national research ethics committee associations associated with it. And third, we will develop a code of responsible conduct for AI and robotics researchers, in collaboration with organizations such as the ACM, euRobotics, the Council of European Professional Informatics Societies and others.

As said before, we want to really engage stakeholders in this process and hear what they have to say, both as experts and as bearers of interests in the matter. So we engage a lot of engineers, computer scientists, social scientists and ethicists, and we involve people from industry, governments and civil society of all kinds, to hear their voices and to come to joint ideas and plans, as well as the more than 11,000 citizens whom we interview and engage.

Let me tell you something about our results so far. As I said, we are in the middle of this, but we have some results to share. First, we want to do an ethical analysis in which we recognize not just the general ethical issues in AI; there are all these guidance lists about fairness, transparency and so on, and we do that too, but for us that is only part of the work. We also want to look at ethical issues unique to particular AI systems and techniques; for example, we want to look at the specific ethical issues of smart big data systems, affective computing, social robotics and so on. And we want to look at ethical issues unique to particular application areas, such as defence, education, government and agriculture. Here we think we already distinguish ourselves from other projects.

If you look at the general ethical issues, as a preliminary analysis you can see that we looked at ethical values or principles that we think are important for AI to adhere to. Many of those will probably be familiar to you if you know the literature, for example from the High-Level Expert Group on AI or the European Group on Ethics. We are looking at autonomy and control, privacy, well-being, dual use, equality, responsibility, accountability, transparency, the issues of employment and democracy, as well as the general desirability of intelligent automation. For each of these ethical issues we do an extensive analysis,
looking at the existing literature, surveying arguments pro and con, and looking at implementation. Then we also look at the ethical issues of AI systems and techniques, where we distinguish a lot of different types of systems and techniques (what you see here is only a selection), and for each of those we look at specific ethical issues. For example, computer vision raises specific ethical issues that have to do with recognizing and tracking people, and its ethics are different again from the ethics of social robots, where it is about human-robot interaction, what can go wrong in it, and people's rights and well-being. And then third, we also look at application domains; these are the application domains we are focusing on. As you can imagine, the application of AI in defence raises to some extent different ethical issues than the application of AI in health care or in media and entertainment. In media and entertainment, for example, there is the threat to democracy from social media applications; in defence it is about killer robots and autonomous systems that can kill. So there are different ethical issues to be considered in those different domains. I could go on and on, but I think I am out of time. As you can see, there are a lot of ethical issues to be considered here, and you want to address them in a way that is going to be helpful for policymakers, developers, practitioners and users in the field. Thank you very much.

I'm Bernd Stahl, professor at De Montfort University, which is located in Leicester in the United Kingdom, and I am the coordinator of the SHERPA project. The SHERPA acronym stands for Shaping the Ethical dimensions of smart information systems, a European Perspective. What are we doing in the SHERPA project? We use the term smart information systems as the core term defining the range of activities of the project, and I therefore need to say a little bit about what we mean by smart information systems, because it sits slightly outside the current discourse.

What you see on the screen now is a representation of smart information systems. These are characterized by what we call technical drivers, and that is where we see artificial intelligence, in particular the types of artificial intelligence application that are very highly debated at the moment, so neural networks and machine learning, and these are based on big data: they need data to be trained. The combination of artificial intelligence and big data is at the core of what we call smart information systems. But these are never used in isolation; they are always linked to other types of technologies. They have what we call enabling technologies: technologies that either produce the data (and that may be any source of data, from social media data to neurostimulation data, you name it) or do something with the data, using the output of the artificial intelligence algorithms to do something in the world. Again, that covers a broad variety: it could be affective computing applications, it could be robotics, many things. That is what we mean by smart information systems, and we are interested in the ethics and human rights implications of these technologies.
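To make the pattern just described a little more concrete, here is a minimal, purely illustrative sketch of a smart information system pipeline: one enabling technology produces data, a trained model sits at the core, and another enabling technology acts on the model's output. This is not code from any of the three projects, and all names and numbers in it are hypothetical; a real system would use neural networks or other large-scale machine learning trained on big data, which is exactly where the ethical questions discussed in this webinar arise.

```python
"""Minimal, hypothetical sketch of a "smart information system" pipeline:
an enabling technology produces data, an AI/ML core learns from that data,
and another enabling technology acts on the model's output in the world.
Illustrative only; not code from SIENNA, SHERPA or PANELFIT."""

import random
from statistics import mean


def sensor_stream(n: int):
    """Enabling technology #1: produce data (stand-in for a sensor or social-media feed)."""
    for _ in range(n):
        value = random.gauss(0.0, 1.0)
        label = 1 if value > 0.5 else 0   # ground truth used only for training
        yield value, label


class ThresholdModel:
    """A deliberately tiny 'machine learning' core: it learns a decision threshold."""

    def __init__(self) -> None:
        self.threshold = 0.0

    def fit(self, samples) -> None:
        """Train on (value, label) pairs: place the threshold between the class means."""
        positives = [v for v, y in samples if y == 1]
        negatives = [v for v, y in samples if y == 0]
        if positives and negatives:
            self.threshold = (mean(positives) + mean(negatives)) / 2

    def predict(self, value: float) -> int:
        return 1 if value > self.threshold else 0


def actuator(decision: int) -> str:
    """Enabling technology #2: act on the model output in the world."""
    return "raise alert" if decision == 1 else "do nothing"


if __name__ == "__main__":
    training_data = list(sensor_stream(1000))   # stand-in for "big data"
    model = ThresholdModel()
    model.fit(training_data)

    new_reading = 0.8                            # fresh data from the same source
    action = actuator(model.predict(new_reading))
    print(f"learned threshold={model.threshold:.2f}, action={action}")
```

Even in a toy pipeline like this you can see where questions of bias, transparency and accountability enter: the training data determine the threshold, and the threshold determines what the system does to people.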
Now, a lot of the public debate around these technologies focuses on the perceived negative aspects. We will also talk about those, but before I go there I think it is important to underline that it is not just about negatives. These technologies also have a lot of benefits, and these benefits are very broad, but they have an important moral component to them, and therefore these technologies can also have ethical justifications. One of those is that global challenges, such as the ones formulated in the Sustainable Development Goals, may be promoted through the use of these technologies; on this slide you see advertising from last year's AI for Good Summit, where the core of the discussion was how AI can be made useful for sustainable development. There is a lot of emphasis on economic growth, and economic growth is seen not just as an economic benefit but, because of the wealth and well-being effects attached to it, also as a moral good. These systems also promise to provide better, personalized services, which can be useful for consumer goods but can also be very helpful for people with limited abilities. They can increase human capabilities in many different ways: they can, for example, compensate for disabilities, but they can also allow us to do things that human beings might have wanted to do but were not able to do. So there is a possibility of greater inclusion, of helping people to participate in their communities, including political participation, and overall there is the promise that these technologies can promote empowerment: they can help people achieve their potential better.

However, at the same time there are numerous ethical concerns. I will go through these very quickly, partly because Philip touched on them in the earlier presentation. The biggest one for me is data protection and privacy, but there are all sorts of other social, economic and other concerns that people raise. A key one is the question of changes in employment: it is an open question whether these technologies will lead to increased unemployment or maybe create more employment, but I think it is fairly obvious that there will be a shift in employment, so some people will benefit and others will not, and that raises all sorts of questions around policy. There are worries about concentrations of power and money: the big tech companies controlling the data, controlling the algorithms, and using these technologies to cement their market dominance further, and not just their market dominance but also their political dominance. There are worries around loss of control for human beings, whose range of options is increasingly structured by the technologies. Some people are worried about human enhancement, about changing the nature of what it means to be human. And we have specific concerns around things like transparency, where these technologies can introduce biases into decision-making that can lead to discrimination against individuals, but there are also issues around security, misuse and other human rights infringements. So there are lots of concerns.

What do we do? The project has started with an attempt to map these issues. The challenge we are facing here is that, while there is a huge amount of discourse around the ethics and human rights of AI and big data, there is very little empirical research across the board that shows what the implications of these technologies actually are. We have used this challenge to structure a number of activities. We have undertaken ten case studies of current smart information systems, which range across a number of application areas. In addition to the case studies, which describe the present, we have developed a set of future-looking scenarios. We have also looked at the technical side, particularly cyber threats:
how do AI and big data facilitate new cyber threats, and what solutions might there be? We are looking at the ethical impacts, very much along the lines of what Philip talked about earlier, and we also have a specific stream of activities looking at the human rights implications.

To give you a flavour of what we are doing, these are the topics of the case studies. We have done one on the Internet of Things; we have looked at how governments use these technologies; we have looked at smart cities, which are related to government; and we have looked at how scientific research, in particular the neurosciences, uses these technologies, but also how that may impact the future development of these technologies. These are probably all technologies or application areas that you would expect, but we also looked at areas like agriculture, where it may be less obvious in the public mind that artificial intelligence plays a role. We have looked at their use in insurance, at how energy and utilities companies use them, at the communications media, and at retail trade as well as manufacturing. The idea here was to pick a range of important application areas to see what the literature says about the ethical issues, but also what happens in practice: in all cases we have talked to people in organizations that use these technologies and tried to find out whether the representation of the ethical aspects in the literature corresponds to the experience of people who make actual use of them.

In addition to the case studies, which are all studies of currently existing projects and systems, we also wanted to understand how these things are going to play out in the future. We therefore developed a set of five scenarios, and these scenarios are set in the very near future, about five to seven years from now, so the mid-2020s. We have looked at what we can expect to happen in the areas of predictive policing, warfare, mimicking technologies (which you may be aware of under the name of deepfakes), education, and autonomous vehicles. This was an attempt to go beyond the current time horizon and extrapolate what we can expect in the future.

This allows us to come to an understanding of what these technologies are and how they are used in practice, but it is not good enough for us as a project simply to say what we think about them. It is very clear that these are broadly socially relevant technologies, and therefore we need to understand the perceptions of competing rights and interests among stakeholders more widely. In order to do that we have defined a number of activities: we have a stakeholder board, whose members you can see on the website if you are interested; we are doing a set of interviews with experts; we are about to launch a survey of a thousand-plus respondents; and we will also do a Delphi study with experts to explore the ways these issues can be addressed.

Understanding the technologies and understanding their ethical and human rights implications does not necessarily lead to solutions, so one challenge is to understand what the options are: what can we actually do to address them? That is what we do under this third challenge. We have defined something we call the SIS book, which is really a page on our website where we collect all our insights, the case studies and the scenarios, but also, very importantly, possibilities and options for addressing the issues.
Some of the options that we have defined, and that we will explore further, are the following. We are looking at guidelines for research and innovation and their oversight: what guidelines exist, what the gaps in those guidelines are, and how they can be improved. We are looking at regulatory options, regulation in a broad sense, from self-regulation all the way to top-down legislation, and we are exploring the possibility of standardization. We are looking at whether there are technical options and what those might be. And as part of the regulatory review we are also looking at the possibility of, and the need for, a regulator, and we are thinking about exploring what the terms of reference for such a regulator might look like.

The idea is that this challenge will give us a number of options. The next question is then which of those options are the most important ones, which are the ones that we think need to be promoted. We address this under challenge four, the testing and evaluation of the solutions, where we will go back to stakeholders with our possible options and try to identify which ones are the most significant. We will have stakeholder evaluation and validation, which will include a number of interviews, and we will run a set of focus groups, six focus groups across the EU with a range of stakeholders, in order to find out what the most important solutions are. The outcome of that will be a prioritization and the finalization of our recommendations, and on that we of course work very closely with SIENNA and PANELFIT to ensure that these recommendations are consistent and make sense in the light of the other projects.

The final challenge we face is how this actually feeds back into the world: how do we have an impact? We do a number of the things that you would expect any EU project to do, such as dissemination and communication, and this webinar is one aspect of that, but we also have two aspects that I think are not common in all EU projects. On the one hand, we have an artist on the consortium, so we are looking for novel ways of representing the challenges and the solutions that we encounter; and secondly, we have an explicit advocacy task, where the specific aim is to reach out to decision-makers, policymakers but also other decision-makers, in order to communicate our findings and our recommendations. So if you are interested in this, please have a look at the website and sign up to our stakeholder network, and with this picture of the members of the consortium I end my presentation.

Now it is my turn to present the PANELFIT project. Like the other two, ours is also a recent project, in the sense that the three projects started at more or less the same time, so I will start by introducing the ideas behind it. The main point is that we are at a moment where everything is changing in this field. It is changing because of a number of new regulations; the General Data Protection Regulation came into force not so long ago. At the same time, it is clear that we have some conflicts: in the European Union we want to protect our citizens and their human rights, but we are also pursuing the Digital Single Market initiative, and sometimes it is very difficult to reconcile the needs of the market with human rights. So the point is that we must encourage data
commercialization while ensuring the protection of individuals' personal information and supporting the implementation of sustainable security systems. Our approach is based on three main pillars: one is data commercialization; maybe the most important one is informed consent; and the third concerns security and cybersecurity issues.

We have maybe four objectives. The first one is to facilitate the implementation of the new regulation, not only the General Data Protection Regulation but all the new regulation on data protection and data sharing, by offering operational standards and practical guidance able to reduce the ethical and legal issues posed by ICT technologies while promoting innovation and market access. We also want to suggest concrete improvements to the current governance framework, we want to create mutual learning and support tools and promote networking among stakeholders and policymakers, and we want to increase the quantity and quality of the information available to professionals, journalists and lay people. We pay special attention to journalists because we consider them largely responsible for translating this information for the broader population: people usually get their information through the mass media rather than seeking out key information on specialized websites, so journalists play a key role.

So what are we going to do? We are trying to produce a set of outcomes that will serve as operational standards and practical guidance. There will be seven main outcomes; of course, not all of them share the same importance, and I think we should start with maybe the most important one: the guidelines on the ethical and legal issues of ICT research and innovation. This is intended to be a kind of handbook that should serve all those working in these areas as a one-stop shop for all the information needed to comply with the legal requirements and to do things as well as possible. It is mainly devoted to research ethics committees, data protection authorities, data protection officers and so on, but we hope that researchers working in these areas can also make use of it. It will contain a lot of information and also some interpretation of these laws that might be helpful: we will be gathering information produced by different sources that can help a researcher understand what he or she is supposed to do while performing the research. The main objective is to improve efficiency and careful, informed performance while reducing miscommunication and failure to comply with the regulation.

A second outcome is mainly directed at policymakers as end users: we would like to suggest some improvements to the current framework. This means that we will analyse all the materials that we have, try to identify critical issues that have no adequate answer despite the regulation, carry out an issues-and-gaps analysis of the regulation, and try to give some concrete suggestions to improve the current situation.

Another outcome is mainly focused on the way governance is organised at the present moment.
We will draw on the issues and gaps related to the governance of the ethical and legal aspects of privacy, data protection and cybersecurity in research and innovation, and we will try to find out whether the current systems are working well and whether we could improve them in any way, for example through different engagement choices. We will also create a repository of information for the exchange of ideas: those who are working on these issues, and who are willing to share them and to find answers to their questions, can ask to become part of this community, a space devoted to sharing information and planning common solutions.

Then we have some other outcomes. One is a code of conduct for responsible research and innovation, which is squarely focused on researchers and is meant to make it easier to take ethical considerations into account from the very beginning; it is work related to ethics by design, and we will try to help researchers see what they should do from the start of their work. There is also a handbook for journalists, which will be produced in collaboration with the journalistic community: we will try to find out the main misunderstandings that journalists working on these stories run into right now, and we will try to address them; that is the point of this outcome. And finally, there is a kind of citizens' information kit, a set of materials intended to inform people in a highly accessible way; there will be two different versions, including a special version adapted to the needs of vulnerable populations. The point is that we try to support people's right to be informed and to improve the quality and effectiveness of the interactions between the research community, policymakers, stakeholders and also vulnerable populations. Here you can find a kind of summary of our outcomes, where you can see the different users and the main characteristics of each of these main outcomes.

It is important to understand that we will try to produce these materials through a co-creation approach, based on the real need to involve all social actors. We will be continually engaging people in different activities; there are a lot of different activities to be performed, and once we produce the first versions of our outcomes we will organise consultations with different groups to check them and improve them where needed.

Like the other two projects, we are very interested in dissemination and communication, which are important to produce a high impact, and this is one of the main challenges we are focused on. We will pursue many different ways to achieve this kind of impact: of course we will produce academic papers, we will publish in journals and organise panels at academic conferences, there will be academic courses, and there will also be lots of face-to-face training courses, MOOCs and webinars. We also have a series of dissemination events planned that will help us raise awareness of the project and its outcomes.

Finally, these are the organisations involved in this project.
The consortium includes data protection agencies, and we also have representatives from NGOs, science associations and so on. I think that is the most important information I should give you, and I think I have kept well within my time. We look forward to receiving your feedback, and you will be very welcome to sign up if you want to receive more information. Thank you so much to everyone, and that's all from me.

Thank you all. We will end the webinar now, and we will let you know when it is available on YouTube for you to watch again. Thank you so much, goodbye.
