Congress holds hearing on use of persuasive technology on the internet — 06/25/2019



…of persuasive technologies on internet platforms. Each of our witnesses today has a great deal of expertise with respect to the use of artificial intelligence and algorithms, both broadly and in the narrower context of engagement and persuasion, and brings unique perspectives to these matters. Your participation in this important hearing is appreciated, particularly as this committee continues to work on crafting data privacy legislation.

I convened this hearing in part to inform legislation that I'm developing that would require internet platforms to give consumers the option to engage with the platform without having the experience shaped by algorithms driven by user-specific data. Internet platforms have transformed the way we communicate and interact, and they have made incredibly positive impacts on society in ways that are too numerous to count. The vast majority of content on these platforms is innocuous, and at its best it is entertaining, educational, and beneficial to the public. However, the powerful mechanisms behind these platforms, meant to enhance engagement, also have the ability, or at least the potential, to influence the thoughts and behaviors of literally billions of people. That's one reason why there's widespread unease about the power of these platforms, and why it is important for the public to better understand how these platforms use artificial intelligence and opaque algorithms to make inferences from the reams of data about us that affect behavior and influence outcomes. Without safeguards, such as real transparency, there is a risk that some internet platforms will seek to optimize engagement to benefit their own interests, and not necessarily the consumer's interests.

In 2013, former Google executive chairman Eric Schmidt wrote that modern technology platforms, and I quote, "are even more powerful than most people realize, and our future will be profoundly altered by their adoption and successfulness in societies everywhere," end quote. Since that time, algorithms and artificial intelligence have rapidly become an important part of our lives, largely without us even realizing it. As online content continues to grow, large technology companies rely increasingly on AI-powered automation to select and display content that will optimize engagement. Unfortunately, the use of artificial intelligence and algorithms to optimize engagement can have an unintended, and possibly even dangerous, downside. In April, Bloomberg reported that YouTube has spent years chasing engagement while ignoring internal calls to address toxic videos, such as vaccination conspiracies and disturbing content aimed at children. Earlier this month, the New York Times reported that YouTube's automated recommendation system was found to be automatically playing a video of children playing in their backyard pool to other users who had watched sexually themed content. That is truly troubling, and it indicates the real risks in a system that relies on algorithms and artificial intelligence to optimize for engagement. And these are not isolated examples. For instance, some have suggested that the so-called filter bubble created by social media platforms like Facebook may contribute to our political polarization by encapsulating users within their own comfort zones, or echo chambers.

Congress has a role to play in ensuring companies have the freedom to innovate, but in a way that keeps consumers' interests and well-being at the forefront of their progress.
While there must be a healthy dose of personal responsibility when users participate in seemingly free online services, companies should also provide greater transparency about how exactly the content we see is being filtered. Consumers should have the option to engage with a platform without being manipulated by algorithms powered by their own personal data, especially if those algorithms are opaque to the average user. We are convening this hearing in part to examine whether algorithmic explanation and transparency are policy options that Congress should be considering. Ultimately, my hope is that at this hearing today we are able to better understand how internet platforms use algorithms, artificial intelligence, and machine learning to influence outcomes.

We have a very distinguished panel before us today. We are joined by Tristan Harris, the co-founder of the Center for Humane Technology; Ms. Maggie Stanphill, the director of Google's user experience; Dr. Stephen Wolfram, founder of Wolfram Research; and Ms. Rashida Richardson, the director of policy research at the AI Now Institute. Thank you all again for participating on this important topic, and I want to recognize Senator Schatz for any opening remarks that he may have.

Thank you, Mr. Chairman. Social media and other internet platforms make their money by keeping users engaged, and so they've hired the greatest engineering and tech minds to get users to stay longer inside of their apps and on their websites. They've discovered that one way to keep us all hooked is to use algorithms that feed us a constant stream of increasingly more extreme and inflammatory content, and this content is pushed out with very little transparency or oversight by humans. This setup, and also basic human psychology, makes us vulnerable to lies, hoaxes, and misinformation. A Wall Street Journal investigation last year found that YouTube's recommendation engine often leads users to conspiracy theories, partisan viewpoints, and misleading videos, even when users aren't seeking out that kind of content. And we saw that YouTube's algorithms were recommending videos of children after users watched sexualized content that did not involve children. And this isn't just a YouTube problem. We saw all of the biggest platforms struggle to contain the spread of videos of the Christchurch massacre and its anti-Muslim propaganda. The shooting was live-streamed on Facebook, and over a million copies were uploaded across platforms; many people reported seeing it on autoplay on their social media feeds and not realizing what it was. And just last month we saw a fake video of the Speaker of the House go viral.

I want to thank the chairman for holding this hearing, because as these examples illustrate, something is really wrong here, and I think it's this: Silicon Valley has a premise. It's that society would be better, more efficient, smarter, more frictionless, if we would just eliminate steps that include human judgment. But if YouTube, Facebook, or Twitter employees, rather than computers, were making the recommendations, would they have recommended these awful videos in the first place? Now, I'm not saying that employees need to make every little decision, but companies are letting algorithms run wild while using humans to clean up the mess. Algorithms are amoral. Companies designed them to optimize for engagement as their highest priority, and by doing so they eliminated human judgment as part of their business models. As algorithms take on an increasingly important role, we need for them to be more transparent, and companies need to be more accountable for the outcomes that they produce.
Imagine a world where pharmaceutical companies were not responsible for the long-term impacts of their medicine and we couldn't test their efficacy, or where engineers were not responsible for the safety of the structures they design and we couldn't review the blueprints. We are missing that kind of accountability for internet platform companies right now. All we have are representative sample sets, data scraping, and anecdotal evidence. These are useful tools, but they are inadequate for the rigorous, systemic studies that we need about the societal effects of algorithms. These are conversations worth having because of the significant influence that algorithms have on people's daily lives, and this is a policy issue that will only grow more important as technology continues to advance. And so thank you, Mr. Chairman, for holding this hearing, and I look forward to hearing from our experts.

Thank you, Senator Schatz. We do, as I said, have a great panel to hear from today, and we're going to start on my left and your right with Mr. Tristan Harris, who is co-founder and executive director of the Center for Humane Technology; Ms. Maggie Stanphill, as I mentioned, who is the user experience director at Google Inc.; Dr. Stephen Wolfram, who is the founder and chief executive officer of Wolfram Research; and Ms. Rashida Richardson, director of policy research at the AI Now Institute. So if you would confine your oral remarks to as close to five minutes as possible, it'll give us an opportunity to maximize the chance for members to ask questions. But thank you all for being here; we look forward to hearing from you. Mr. Harris.

Thank you, Senator Thune and Senator Schatz. Everything you said is sad to me, because it's happening not by accident but by design, because the business model is to keep people engaged. In other words, this hearing is about persuasive technology, and persuasion is about an invisible asymmetry of power. When I was a kid, I was a magician, and magic teaches you that you can have asymmetric power without the other person realizing it; you can masquerade as having an equal relationship while actually holding asymmetric power. You say, pick a card, any card, while meanwhile you know exactly how to get that person to pick the card that you want. And essentially what we're experiencing with technology is an increasing asymmetry of power that's been masquerading itself as an equal or contractual relationship, where the responsibility is on us.

So let's walk through why that's happening. In the race for attention, because there's only so much attention, companies have to get more of it by being more and more aggressive; I call it the race to the bottom of the brainstem. It starts with techniques like pull-to-refresh: you pull to refresh your newsfeed, and that operates like a slot machine. It has the same kind of addictive qualities that keep people in Las Vegas hooked to the slot machine. Other examples are removing stopping cues: if I take the bottom out of this glass and I keep refilling the water or the wine, you won't know when to stop drinking. That's what happens with infinitely scrolling feeds; we naturally remove the stopping cues, and this is what keeps people scrolling. But the race for attention has to get more and more aggressive, and so it's not enough just to predict your behavior; we have to predict how to keep you hooked in a different way. And so it crawled deeper down the brainstem into our social validation. That was the introduction of likes and followers: how many followers do I have?
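A minimal sketch of the two mechanics Harris describes here, written as hypothetical Python; none of this is any platform's real code, and the function names and the payout probability are invented for illustration:

```python
import random

def fetch_new_items(n=10):
    # Stand-in for a content backend.
    return [f"item-{random.randint(0, 10**6)}" for _ in range(n)]

def pull_to_refresh(feed):
    """Slot-machine dynamics: the refresh gesture sometimes pays out
    fresh content and sometimes pays nothing. The unpredictable
    (variable-ratio) reward schedule is what makes the gesture compulsive."""
    if random.random() < 0.4:          # sometimes you "win"
        feed[:0] = fetch_new_items(3)  # a burst of new items on top
    return feed

def next_page(feed):
    """Infinite scroll: there is never a last page, so the natural
    stopping cue (the bottom of the glass) has been removed."""
    feed.extend(fetch_new_items())
    return feed

feed = fetch_new_items()
for _ in range(5):
    pull_to_refresh(feed)
    next_page(feed)
print(len(feed), "items and counting")
```

The design choice to notice is that neither function ever returns "you're done"; ending the session is left entirely to the user's willpower.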
And it was much cheaper, instead of getting your attention, to get you addicted to getting attention from other people. And this has created the kind of mass narcissism and mass cultural thing that's happening with young people especially today: after two decades in decline, rates of self-harm among 10-to-14-year-old girls have actually shot up a hundred and seventy percent in the last eight years, and this has been very characteristically linked to social media.

And in the race for attention, it's not enough just to get people addicted to attention. The race has to migrate to AI: who can build a better predictive model of your behavior? To give an example with YouTube: there you are, you're about to hit play on a YouTube video, and you hit play, and then you think you're going to watch this one video, and then you wake up two hours later and say, oh my god, what just happened? And the answer is because you had a supercomputer pointed at your brain. At the moment you hit play, it wakes up an avatar, a voodoo-doll-like version of you, inside of a Google server, and that avatar is based on all the clicks and likes and everything you've ever made; those are like your hair clippings and toenail clippings and nail filings that make the avatar look and act more and more like you, so that inside of a Google server they can simulate more and more possibilities: if I prick you with this video, if I prick you with this video, how long would you stay? And the business model is simply what maximizes watch time. This leads to the kind of algorithmic extremism that you've pointed out, and this is what's caused 70 percent of YouTube's traffic to now be driven by recommendations, not by human choice but by the machines.
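A toy sketch of the selection loop Harris is describing: a model trained on a user's history scores each candidate video by predicted watch time, and the recommender simply picks the maximum. This is illustrative Python, not YouTube's actual system; the scoring function is a crude stand-in for the learned behavioral model (the "avatar"):

```python
def predicted_watch_minutes(user_history, video):
    # Stand-in for the learned model of you: here, just topic overlap
    # with what you've watched before, scaled by the video's length.
    overlap = len(set(video["topics"]) & set(user_history))
    return video["base_minutes"] * (1 + overlap)

def recommend(user_history, candidates):
    # The objective is engagement and nothing else.
    return max(candidates,
               key=lambda v: predicted_watch_minutes(user_history, v))

history = ["politics", "conspiracy"]
candidates = [
    {"title": "calm explainer",  "topics": ["science"],
     "base_minutes": 8},
    {"title": "outrage special", "topics": ["politics", "conspiracy"],
     "base_minutes": 6},
]
print(recommend(history, candidates)["title"])  # -> outrage special
```

Even this toy version shows the drift Harris warns about: whatever the user has already leaned toward is exactly what the argmax amplifies.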
And it's a race between Facebook's voodoo doll, where you flick your finger and they predict what to show you next, and Google's voodoo doll. These are abstract metaphors that apply to the whole tech industry, where it's a race between who can better predict your behavior. Facebook has something called loyalty prediction, where they can actually predict to an advertiser when you're about to become disloyal to a brand. So if you're a mother and you take Pampers diapers, they can tell Pampers, hey, this user is about to become disloyal to this brand. In other words, they can predict things about us that we don't know about ourselves, and that's a new level of asymmetric power. And we have a name for this asymmetric relationship, which is a fiduciary relationship, or a duty-of-care relationship, the same standard we apply to doctors, to priests, to lawyers. Imagine a world in which priests only make their money by selling access to the confession booth to someone else, except in this case Facebook listens to two billion people's confessions, has a supercomputer next to them, and is calculating and predicting confessions you're going to make before you know you're going to make them. And that's what's causing all this havoc.

So I'd love to talk about more of these things later. I just want to say that this affects everyone, even if you don't use these products: you still send your kids to a school where other people believing anti-vaccine conspiracy theories has an impact on your life, and other people vote in your election. And when Marc Andreessen said in 2011 that, quote, "software is going to eat the world" (Marc Andreessen was the founder of Netscape), what he meant by that was that software can do every part of society more efficiently than non-software, right, because it's just adding efficiencies. And so we're going to allow software to eat up our elections, we're going to allow it to eat up our media, our taxis, our transportation. And the problem was that software was eating the world without taking responsibility for it. We used to have rules and standards around Saturday morning cartoons, and when YouTube gobbles up that part of society, it just takes away all of those protections. I just want to finish up by saying that Mr. Rogers, Fred Rogers, testified before this committee 50 years ago, concerned about the animated bombardment that we were showing children. I think he would be horrified today about what we're doing now, and at that time he was able to talk to the committee, and that committee made a different choice. So I'm hoping we can talk more about that today. Thank you.

Thank you, Mr. Harris. Ms. Stanphill.

Chairman Thune, Ranking Member Schatz, members of the committee, thank you for inviting me to testify today on Google's efforts to improve the digital well-being of our users. I appreciate the opportunity to outline our programs and discuss our research in this space. My name is Maggie Stanphill. I'm a user experience director, and I lead the cross-Google digital well-being initiative. The digital well-being initiative is a top company goal, and we focus on providing users with insights about their individual tech habits and the tools to support an intentional relationship with technology.

At Google, we've heard from many of our users all over the world that technology is a key contributor to their sense of well-being: it connects them to those they care about, it provides information and resources that build their sense of safety and security, and this access has democratized information and provided services for billions of users around the world. For most people, their interaction with technology is positive, and they are able to make healthy choices about screen time and overall use. But as technology becomes increasingly prevalent in our day-to-day lives, for some people it can distract from the things that matter most. We believe technology should play a useful role in people's lives, and we've committed to helping people strike a balance that feels right for them. This is why our CEO, Sundar Pichai, first announced the digital well-being initiative with several new features across Android, Family Link, YouTube, and Gmail, all of these to help people better understand their tech usage and focus on what matters most. In 2019, we applied what we learned from users and experts and introduced a number of new features to support our digital well-being initiative.

I'd like to go into more depth about the products and tools we've developed for our users. On Android, the latest version of our mobile operating system, we added key capabilities to help users strike a better balance with technology, focused on raising awareness of tech usage and providing controls to help them oversee their tech use. This includes a dashboard that shows information about their time on devices. It includes app timers, so people can set time limits on specific apps. It includes a do-not-disturb function to silence phone calls and texts, as well as those visual interruptions that pop up. And we've introduced a new wind-down feature that automatically puts the user's display into night light mode, which reduces blue light, and grayscale, which removes color and ultimately the temptation to scroll. Finally, we have a new setting called focus mode, which allows pausing specific apps and notifications that users might find distracting.
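A minimal sketch of how gating features like the app timers and focus mode Ms. Stanphill describes might work; the names, data structures, and thresholds below are hypothetical, not Android's actual implementation:

```python
from datetime import timedelta

app_timers = {"videoapp": timedelta(minutes=30)}  # user-set daily limits
usage_today = {"videoapp": timedelta(minutes=29)}
focus_paused = {"chatapp"}                        # apps paused by focus mode

def may_open(app):
    if app in focus_paused:
        return False                              # blocked until focus mode ends
    limit = app_timers.get(app)
    if limit is not None and usage_today.get(app, timedelta()) >= limit:
        return False                              # daily timer exhausted
    return True

print(may_open("videoapp"))  # True: one minute left today
print(may_open("chatapp"))   # False: paused by focus mode
```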
On YouTube, we have similarly launched a series of updates to help our users define their own sense of well-being. This includes time-watched profiles, take-a-break reminders, the ability to disable audible notifications, and the option to combine all YouTube app notifications into one notification. We've also listened to the feedback about the YouTube recommendation system. Over the past year we've made a number of improvements to these recommendations, raising up content from authoritative sources when people are coming to YouTube for news, as well as reducing recommendations of content that comes close to violating our policies or spreads harmful misinformation.

When it comes to children, we believe the bar is even higher. That's why we've created Family Link, to help parents stay in the loop as their child explores on Android; on Android Q, parents will be able to supervise screen time limits and bedtimes and remotely lock their child's device. Similarly, YouTube Kids was designed with the goal of ensuring that parents have control over the content their children watch. In order to keep the videos in the YouTube Kids app family-friendly, we use a mix of filters, user feedback, and moderators. We also offer parents the option to take full control over what their children watch by hand-selecting the content that appears in their app.

We're also actively conducting our own research and engaging in important expert partnerships with independent researchers to build a better understanding of the many personal impacts of digital technology. We believe this knowledge can help shape new solutions and ultimately drive the entire industry toward creating products that support a better sense of well-being. To make sure we are evolving the strategy, we have launched a longitudinal study to better understand the effectiveness of our digital well-being tools. We believe this is just the beginning. As technology becomes more integrated into people's daily lives, we have a responsibility to ensure our products support digital well-being. We are committed to investing more in optimizing our products and focusing on quality experiences. Thank you for the opportunity to outline our efforts in this space; I'm happy to answer any questions you might have.

Thank you, Ms. Stanphill. Mr. Wolfram.
Well, thanks for inviting me here today. I have to say that this is pretty far from my usual kind of venue, but I have spent my life working on the science and technology of computation and AI, and perhaps some of what I know can be helpful here today.

So first of all, here's a way I think one can kind of frame the issue. Many of the most successful internet companies, like Google and Facebook and Twitter, are what one can call automated content selection businesses: they ingest lots of content, and then they essentially use AI to select what to actually show to their users. How does that AI work? How can one tell if it's doing the right thing? People often assume that computers just run algorithms that someone sat down and wrote, but modern AI systems don't work that way. Instead, lots of the programs they use are actually constructed automatically, usually by learning from some massive number of examples, and if you go look inside those programs, there's usually embarrassingly little that we humans can understand in there. And here's the real problem: it's sort of a fact of basic science that if you insist on explainability, then you can't get the full power of a computational system or an AI.

So if you can't open up the AI and understand what it's doing, how about sort of putting external constraints on it? Can you, like, write a contract that says what the AI is allowed to do? Well, partly actually through my own work, we're starting to be able to formulate computational contracts: contracts that are written not in legalese, but in a precise, executable computational language suitable for an AI to follow. But what should the contract say? I mean, what's the right answer for what should be at the top of someone's newsfeed, or what exactly should be the algorithmic rule for balance or diversity of content? As AIs start to run more and more of our world, we're going to have to develop a whole network of kind of AI laws, and it's going to be super important to get this right, probably starting off by agreeing on sort of the right AI constitution. But it's going to be a hard thing, kind of making computational how people want the world to work.

Right now that's still in the future. So what can we do about people's concerns now about automated content selection? I have to say that I don't see a purely technical solution, but I didn't want to come here and say that everything is impossible, especially since I personally like to spend my life solving, quote, impossible problems. In fact, I think that if we want to do it, we actually can use technology to set up kind of a market-based solution. I've got a couple of concrete suggestions about how to do that, both based on giving users a choice about who to trust for the final content they see. One of the suggestions introduces what I call final ranking providers; the other introduces constraint providers. In both cases, these are third-party providers who basically insert their own little AIs into the pipeline of delivering content to users, and the point is that users can choose which of these providers they want to trust. The idea is to leverage everything that the big automated content selection businesses have, but to essentially add a new market layer, so users get to know that they're picking a particular way that content is selected for them. It also means that you get to avoid kind of all-or-nothing banning of content, you don't have kind of a single point of failure for spreading bad content, and you open up a new market, potentially delivering even higher value for users.
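A toy sketch of the "final ranking provider" idea Wolfram has just outlined: the platform still does the heavy lifting of ingesting and preparing candidate content, but the final ordering is delegated to a third-party ranker the user has chosen. The provider names and scoring fields are invented for illustration:

```python
def platform_candidates(user):
    # The automated content selection business prepares candidates
    # (ingestion, moderation, monetization) exactly as it does today.
    return [
        {"title": "local news",      "engagement": 0.3, "diversity": 0.9},
        {"title": "outrage bait",    "engagement": 0.9, "diversity": 0.1},
        {"title": "science feature", "engagement": 0.5, "diversity": 0.8},
    ]

# Third-party final rankers: the new market layer users choose among.
RANKERS = {
    "max-engagement":  lambda items: sorted(items, key=lambda i: -i["engagement"]),
    "diverse-sources": lambda items: sorted(items, key=lambda i: -i["diversity"]),
}

def feed_for(user, chosen_ranker):
    return RANKERS[chosen_ranker](platform_candidates(user))

print([i["title"] for i in feed_for("alice", "diverse-sources")])
```

The point of the layer is visible in the last line: swapping the ranker name changes the feed, and that swap is the user's decision rather than the platform's.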
Of course, for better or worse, unless you decide to force certain content or diversity of content, which you could, people can live kind of in their own content bubbles, though importantly they get to choose those themselves. Well, there are lots of technical details about everything I'm saying, as well as some deep science about what's possible and what's not, and I've tried to explain a little bit more about that in my written testimony. I'm happy to try and answer whatever questions I can here. Thank you.

Thank you, Mr. Wolfram. Ms. Richardson.

Chairman Thune, Ranking Member Schatz, and members of the subcommittee, thank you for inviting me to speak today. My name is Rashida Richardson, and I'm the director of policy research at the AI Now Institute at New York University, which is the first university research institute dedicated to understanding the social implications of artificial intelligence. Part of my role includes researching the increasing reliance on AI and algorithmic systems, and crafting policy and legal recommendations to address and mitigate the problems we identify in our research.

The use of data-driven technologies like recommendation algorithms, predictive analytics, and inferential systems is rapidly expanding in both consumer and government sectors. They determine where our children go to school, whether someone will receive Medicaid benefits, who is sent to jail before trial, which news articles we see, and which job seekers are offered an interview. Thus they have a profound impact on our lives and require immediate attention and action by Congress. Though these technologies affect every American, they are primarily developed and deployed by a few powerful companies, and are therefore shaped by those companies' incentives, values, and interests. These companies have demonstrated limited insight into whether their products will harm consumers, and even less experience in mitigating those harms. So while most technology companies promise that their products will lead to broad societal benefits, there's little evidence to support these claims, and in fact mounting evidence points to the contrary. For example, IBM's Watson supercomputer was designed to improve patient outcomes, but recently internal IBM documents showed it actually provided unsafe and erroneous cancer treatment recommendations. This is just one of numerous examples that have come to light in the last year showing the difference between the marketing used to sell these technologies and the stark reality of how these technologies ultimately perform.

While many powerful industries pose potential harms to consumers with new products, the industry producing algorithmic and AI systems poses three particular risks that current laws and incentive structures fail to adequately address. The first risk is that AI systems are based on compiled data that reflect historical and existing social and economic conditions. This data is neither neutral nor objective; thus AI systems tend to reflect and amplify cultural biases, value judgments, and social inequities. Meanwhile, most existing laws and regulations struggle to account for or adequately remedy these disparate outcomes, as they tend to focus on individual acts of discrimination and less on systemic bias, or bias encoded in the development process. The second risk is that many AI systems and internet platforms are optimization systems that prioritize technology companies' monetary interests, resulting in products designed to keep users engaged while often ignoring social costs, like how the product may affect non-users, environments, and markets.
A non-AI example of this logic and model is a slot machine, while a recent AI-based example is the navigation app Waze, which was subject to public scrutiny following many incidents across the U.S. where the application redirected highway traffic through residential neighborhoods unequipped for the influx of vehicles, which increased accidents and risks to pedestrians. The third risk is that most of these technologies are black boxes, both technologically and legally. Technologically, they're black boxes because most of the internal workings are hidden away inside the companies. Legally, technology companies obstruct accountability efforts through claims of proprietary or trade secret legal protections, even though there is no evidence that legitimate inspection, auditing, or oversight poses any competitive risk.

Controversies regarding emerging technologies are becoming increasingly common, and they show the harm caused by technologies optimized for narrow goals like engagement, speed, and profit at the expense of social and ethical considerations like safety and accuracy. We are at a critical moment where Congress is in a position to act on some of the most pressing issues, and by doing so pave the way for a technological future that is safe, accountable, and equitable. With these concerns in mind, I offer the following recommendations, which are detailed in my written statement: require technology companies to waive trade secrecy and other legal claims that hinder oversight and accountability mechanisms; require public disclosure of technologies that are involved in any decision about consumers, by name and vendor; empower consumer protection agencies to apply truth-in-advertising laws; revive the congressional Office of Technology Assessment to perform premarket review and postmarket monitoring of technologies; enhance whistleblower protections for technology company employees who identify unethical and unlawful uses of AI or algorithms; require any transparency or accountability mechanisms to include detailed reporting of the full-stack supply chain; and require companies to perform and publish algorithmic impact assessments prior to public use of products and services. Thank you.

Thank you, Ms. Richardson. Let me start with you, Mr. Harris.
As we go about crafting consumer data privacy legislation in this committee, we know that internet platforms like Google and Facebook have vast quantities of data about each user. What can these companies predict about users based on that data?

Thank you for the question. I think there's an important connection to make between privacy and persuasion that often isn't linked, so maybe it's helpful to link that here. With Cambridge Analytica, that was an event in which, based on your Facebook likes, based on a hundred and fifty of your Facebook likes, I could predict your political personality, and then I could do things with that. The reason I described in my opening statement that this is about an increasing asymmetry of power is that without any of your data, I can predict increasing features about you using AI. There's a paper recently showing that with 80 percent accuracy I can predict the same Big Five personality traits that Cambridge Analytica got from you, without any of your data: all I have to do is look at your mouse movements and click patterns. In other words, it's the end of the poker face; your behavior is your signature. We can know your political personality based on tweet text alone; we can actually know your political affiliation with about 80 percent accuracy. Computers can probably calculate that you're homosexual before you might know that you're homosexual. They can predict with 95 percent accuracy that you're going to quit your job, according to an IBM study. They can predict that you're pregnant. They can predict the microexpressions on your face better than a human being can; microexpressions are your soft reactions to things, barely visible to another person, but computers can detect them. And as you keep going, you realize that you can start to deepfake things: you can actually generate a new synthetic piece of media, a new synthetic face or synthetic message, that is perfectly tuned to these characteristics.

The reason why I opened my statement by saying that what this is all about is a growing asymmetry of power between technology and the limits of the human mind: my favorite sociobiologist, E.O. Wilson, said the fundamental problem of humanity is that we have Paleolithic, ancient emotions, we have medieval institutions, and we have godlike technology. So we're chimpanzees with nukes, and our Paleolithic brains are limited against the increasing exponential power of technology at predicting things about us. The reason why it's so important to migrate this relationship from being extractive, to get things out of you, to being a fiduciary, is that you can't have asymmetric power that is specifically designed to extract things from you, just like you can't have lawyers or doctors whose entire business model is to take everything they learn and sell it to someone else. Except in this case, the level of things that we can predict about you is far greater than all of those fields combined, when you add up all the data into a more and more accurate voodoo doll of each of us. And there are two billion voodoo dolls, by the way; YouTube and Facebook each have more than two billion users, one for about one out of every four people on Earth.
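A toy illustration of the kind of inference Harris describes: predicting a trait from behavioral signals alone. Everything here is synthetic, including the features and whatever accuracy it prints; only the shape of the idea (behavior in, trait out, no volunteered personal data needed) mirrors the studies he cites:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Hypothetical per-session features: mean cursor speed, pause count,
# click rate, movement jitter.
X = rng.normal(size=(n, 4))
# Pretend the hidden trait leaks into two of those behavioral features.
y = (0.8 * X[:, 0] - 0.6 * X[:, 2] + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(X[:800], y[:800])
print("held-out accuracy:", model.score(X[800:], y[800:]))
```

The unsettling part is in the setup, not the model: the user never volunteered the label, yet ordinary interaction data was enough to recover it.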
Ms. Stanphill, in your prepared testimony you note that companies like Google have a responsibility to ensure that their products support users' digital well-being. Does Google use persuasive technology, meaning technology that is designed to change people's attitudes and behaviors? And if so, how do you use it, and do you believe that persuasive technology supports a user's digital well-being?

Thank you, Senator. No, we do not use persuasive technology at Google. In fact, our foremost principles are built around transparency, security, and control of our users' data. Those are the principles through which we design products at Google.

Dr. Wolfram, in your prepared testimony you write that it's impossible to expect a useful form of general explainability for automated content selection systems. If this is the case, what should policymakers require or expect of internet platforms with respect to algorithmic explanation or transparency?

I don't think that explaining how algorithms work is a great direction. The basic issue is, if the algorithm is doing something really interesting, then you're not going to be able to explain it, because if you could explain it, it would be like saying you can jump ahead and say what it's going to do without letting it just do what it's going to do. So it's kind of a scientific issue: if you're going to have something that is explainable, then it isn't getting to use the sort of full power of computation to do what it does. So my own view, which is sort of disappointing for me as a technologist, is that you actually have to put humans in the loop. In a sense, the thing to understand about AI is that we can automate many things about how things get done; what we don't get to automate is the goals of what we want to do. The goals of what we want to do are not something that is definable as an automatic thing; the goals are something that humans have to come up with. And so I think the most promising direction is to think about breaking kind of the AI pipeline, and figuring out where you can put into that AI pipeline the right level of kind of human input. My own feeling is the most promising possibility is to leave the great value that's been produced by the current automated content selection companies, in ingesting large amounts of data, being able to monetize large amounts of content, et cetera, but to insert a way for users to be able to choose who they trust about what finally shows up in their newsfeed, or in their search results, or whatever else. And I think that there are technological ways to make that kind of insertion that will, if anything, add to the richness of potential experience for users, and possibly even the financial returns for the market.

And very quickly, Ms. Richardson: what are your views about whether algorithmic explanation or algorithmic transparency are appropriate policy responses? And kind of respond to Dr. Wolfram's points.
I think they're a good interim step, in that transparency is almost necessary to understand what these technologies are doing and to assess the benefits and risks. But I don't think transparency, or even explainability, is an end goal, because I still think you're going to need some level of legal regulation to impose liability, so that bad or negligent actors act in a proper manner, but also to incentivize companies to do the right thing or apply due diligence. Because in a lot of the cases that I cited in my written testimony, there are sort of public relations disasters that happen on the back end, and many of them could have been assessed or anticipated during the development process, but companies aren't incentivized to do that. So in some ways transparency and explainability can give both legislators and the public more insight into the choices that companies are making, to assess whether or not liability should be attached or different regulatory enforcement needs to be pursued.

Thank you. Senator Schatz.

Thank you, Chairman, and thank you to the testifiers. First, a yes-or-no question: do we need more human supervision of algorithms on online platforms? Mr. Harris?

Yes.

Yes.

Yes, although I would put some footnotes.

Sure.

Yes, with footnotes.

So I want to follow up on what Dr. Wolfram said in terms of the unexplainability of these algorithms and the lack of transparency that is sort of built into what they are, foundationally. And the reason I think that's an important point that you're making is that you need a human circuit breaker at some point, to say, no, I choose not to be fed things by an algorithm, I choose to jump off of this platform. That's one aspect of humans acting as a circuit breaker. I'm a little more interested in the human employee, either at the line level or at the supervisory level, who takes some responsibility for how these algorithms evolve over time. And Ms. Richardson, I want you to maybe speak to that question, because it seems to me, as policymakers, that's where the sweet spot is: to find an incentive or a requirement where these companies will not allow these algorithms to run essentially unsupervised, not even understood by the highest echelon of the company except in their output. So Ms. Richardson, can you help me to flesh out what that would look like, in terms of enabling human supervision?

So I think it's important to understand some of the points about the power asymmetry that Mr. Harris mentioned,
because I definitely do think we need a human in the loop, but we also need to be cognizant of who actually has power in those dynamics, and that you don't necessarily want a frontline employee taking full liability for a decision, or for a system, that they had no input into designing, or even into their current position in using it. So I think it needs to go all the way up, in that if you're thinking about liability or responsibility in any form, it needs to attach to those who are actually making decisions about the goals, the designs, and ultimately the implementation and use of these technologies, and then figuring out what are the right pressure points or incentive dynamics to encourage companies, or those making those decisions, to make the right choice that benefits society.

Yeah, I think that's right. I think that none of this ends up amounting to much unless the executive level of these companies feels legal and financial responsibility to supervise these algorithms. Ms. Stanphill, I was a little confused by one thing you said: did you say Google doesn't use persuasive technology?

That is correct, sir.

Mr. Harris, is that true?

Um, it's complicated. Persuasion is happening all throughout the ecosystem. In my mind, by the way, this is less about accusing one company, Google or Facebook; it's about understanding that every company...

I get that, but she's here, and she just said that they don't use persuasive technology, and I'm trying to figure out: are you talking about just the Google suite of products, you're not talking about YouTube, or are you saying that in the whole Alphabet pantheon of companies you don't use persuasive technology? Because either I misunderstand your company or I misunderstand the definition of persuasive technology. Can you help me to understand what's going on here?

Sure. With respect to my response, Senator, it is related to the fact that dark patterns and persuasive technology are not core to how we design our products at Google, which are built around transparency.

Are you talking about YouTube or the whole family of companies?

The whole family of companies, including YouTube.

You don't want to clarify that a little further?

We build our products with privacy, security, and control for the users. That is what we build for, and ultimately this builds a lifelong relationship with the user, which is primary. That's our trust.

I don't know what any of that meant. Ms. Richardson, can you help me?

I think part of the challenge, as Mr. Harris mentioned, is how you're defining persuasive, in that both of us mentioned that a lot of these systems and internet platforms are a form of optimization system, which is optimizing for certain goals, and there you could say that is a persuasive technology which is not accounting for certain social risks. But I think there's a business incentive to take a more narrow view of that definition. I can't speak for Google because I don't work for them, but I think the reason you're confused is that you may need to clarify definitions of what is actually persuasive in the way that you're asking the question, versus what Google is suggesting doesn't have persuasive characteristics in their technologies.

Thank you.

Thank you, Senator Schatz. Senator Fischer.

Thank you, Mr. Chairman.
Mr. Harris, you know, I've introduced the DETOUR Act with Senator Warner to curb some of the manipulative user interfaces. We want to be able to increase transparency, especially when it comes to behavioral experimentation online, and obviously we want to make sure children are not targeted with some of the dark patterns that are out there. In your perspective, how do dark patterns thwart user autonomy online?

Yeah. So persuasion is so invisible and so subtle that oftentimes you're criticized when you use this language; when you say we're crawling down the brainstem, people think that you're overreacting. But it's a design choice. For my background, I studied at a lab called the Persuasive Technology Lab at Stanford that taught engineering students essentially about this whole field, and my friends in the class were the founders of Instagram. And Instagram as a product invented, well, copied from Twitter actually, the technique of what you could call the dark pattern of the number of followers that you have, to get people to follow each other; so there's a follow button on each profile. And that doesn't seem so dark, and that's what's so insidious about it: you're just giving people a way to follow each other's behavior, but what it is actually doing is attempting to cause you to come back every day, because now you want to see, do I have more followers now than I did yesterday?

And how are these platforms getting our personal information? How much choice do we really have? I appreciated the doctor's comment that the goals of what we want to do as humans are something we have to get involved in; but then your introductory comments are basically, I think, telling us that everything about us is already known, so it wouldn't be really hard to manipulate what our goals even want to be at this point.

Right, and the goal is to subvert your goal. So I'll give you an example. If you say, I want to delete my Facebook account, and you hit delete, it puts up a screen that says, are you sure you want to delete your Facebook account? The following friends will miss you, and it puts up the faces of certain friends. Now, am I asking to know which friends will miss me? No. Does Facebook ask those friends whether they're going to miss me if I leave? No. They're calculating which of the five faces would be most likely to get you to cancel and not delete your Facebook account. So that's a subtle and invisible dark pattern that's just meant to sway behavior. I think another example, going to your opening question, is when you consent to giving your data to Facebook, or your location. Oftentimes there will be a big blue button, where they have a hundred engineers behind the screen split-testing all the different colors and variations and arrangements of where that button should be, and then a very, very, very small grey link that people don't even know is there. And so what we're calling a free human choice is a manipulated choice. Again, it's just like a magician: they're saying pick a card, any card, but in fact there's an asymmetry of power.
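A toy sketch of the deletion-screen dark pattern Harris just walked through: the five faces shown are chosen by predicted pull on the user, not by anything the user asked about, and the visual weight of the two paths is deliberately unequal. The scoring function is invented for illustration:

```python
def predicted_cancel_probability(user, friend):
    # Stand-in for a learned model of this friend's emotional pull on the user.
    return friend["interaction_score"]

def deletion_screen(user, friends):
    faces = sorted(friends,
                   key=lambda f: -predicted_cancel_probability(user, f))[:5]
    return {
        "message": "Are you sure? The following friends will miss you.",
        "faces": [f["name"] for f in faces],
        "keep_account_button": "big, blue, prominent",  # the path they want
        "delete_link": "small, grey, easy to miss",     # the path you chose
    }

friends = [{"name": f"friend{i}", "interaction_score": i} for i in range(8)]
print(deletion_screen("you", friends)["faces"])
```

The optimization target is retention; the user's stated goal of deleting the account appears nowhere in the objective.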
When you're on the internet and you're trying to look something up, and you have this thing pop up on your screen, it is so irritating, and you have to hit okay to get out of it, because you don't see the other choice on the screen; as you said, it's very light, it's gray. But now I know that if I hit okay, this is going to go on and on, and whoever it is is going to get more and more information about me. They're really invading my privacy, but I can't get rid of this screen otherwise.

Unless you turn off your computer and start over, right. There are all sorts of ways to do this. If I'm a persuader and I really want you to hit okay on my dialog so I can get your data, I'll wait until the day that you're in a rush to get the address to that place you're looking for, and that's the day that I'll put up the dialog that says, hey, are you going to give me your information? And of course you're going to say, fine, yes, whatever, because in persuasion there's something called hot states and cold states. When you're in a hot state, an immediate, impulsive state, it's very easy to persuade someone, versus when they're in a cold, calm, and reflective state. And technology can actually either manufacture or wait until you're in those hot states.

So how do we protect ourselves and our privacy, and what role does the federal government have to play in this, besides getting our bill passed?

I mean, at the end of the day, the reason why we go back to the business model is that it is about alignment of interests. You don't want a system of asymmetric power that is designed to manipulate people, and you're always going to have that insofar as the business model is one of manipulation, as opposed to regenerative, meaning you have a subscription-style relationship. So I would say Netflix probably has many fewer dark patterns, because it's in a subscription relationship with its users. When Facebook says, you know, how else could we give people this free service, well, it's like a priest whose entire business model is to manipulate people saying, well, how else can I serve so many people?

Yeah, right. So how do we keep our kids safe?

There's so much to that. I think what we need is a mass public awareness campaign, so people understand what's going on. One thing I have learned is that if you tell people, this is bad for you, they won't listen; if you tell people, this is how you're being manipulated, they listen, because no one wants to feel manipulated.

Thank you.

Thank you, Senator Fischer. Senator Blumenthal.
Thanks, Mr. Chairman, and thank you to all of you for being here today. You know, I was struck by what Senator Schatz said in his opening statement: algorithms are not only running wild, but they are running wild in secrecy. They are cloaked in secrecy, in many respects, from the people who are supposed to know about them. Ms. Richardson referred to the black box here; that black box is one of our greatest challenges today. And I think that we are at a time when algorithms, AI, and the exploding use of them is almost comparable to the time of the beginnings of atomic energy in this country. We now have an Atomic Energy Commission; nobody can build nuclear bombs in their backyard, because of the dangers of nuclear fission and fusion, which is comparable, I think, to what we have here: systems that are in many respects beyond our human control, and affecting our lives in very direct, extraordinarily consequential terms, beyond the control of the user, and maybe the builder.

So on the issue of persuasive technology, I find, Ms. Stanphill, your contention that Google does not build systems with the idea of persuasive technology in mind somewhat difficult to believe, because I think Google tries to keep people glued to its screens. At the very least, that persuasive technology is operative; it's part of your business model: keep the eyeballs. It may not be persuasive technology to convert them to the far left or the far right, though some of the content may do it, but at the very least the technology is designed to promote usage. YouTube's recommendation system has a notorious history of pushing dangerous messages and content, promoting radicalization, disinformation, and conspiracy theories. Earlier this month, Senator Blackburn and I wrote to YouTube on reports that its recommendation system was promoting videos that sexualized children, effectively acting as a shepherd for pedophiles across its platform. Now, you say in your remarks that you've made changes to reduce the recommendations of content that, quote, violates our policies or spreads harmful misinformation, and according to your account, the number of views from recommendations for these videos has dropped by over 50 percent in the United States. I take those numbers as you provided them. Can you tell me what specific steps you have taken to end your recommendation system's practice of promoting content that sexualizes children?

Thank you, Senator. We take our responsibility to support child safety online extremely seriously. These changes are in effect, and as you stated, they have had a significant impact.

But what specifically?

Resulting in actually changing which content appears in the recommendations: this is now classified as borderline content, which includes misinformation and child exploitation content.

You know, I'm running out of time; I have so many questions, but I would like each of the witnesses to respond to the recommendations that Ms. Richardson has made, which I think are extraordinarily promising and important. I'm not going to have time to ask you about them here, but I would like the witnesses to respond in writing, if you would, please. And second, let me just observe, on the topic of human supervision: I think that human supervision has to be also independent supervision. On the topic of arms control, we have a situation here where we need some kind of independent supervision, some kind of oversight, and yes, regulation. I know it's a dirty word these days in some circles, but protection will require intervention from some independent source here. I don't think "trust me" can work anymore.
Thank you, Mr. Chairman.

Thank you, Senator Blumenthal. Senator Blackburn.

Thank you, Mr. Chairman, and thank you to our witnesses. We appreciate that you are here, and I enjoyed visiting with you for a few minutes before the hearing began. Mr. Wolfram, I want to pick up where we had discussed, in your testimony, computational irreducibility, and look at that for just a moment. As we talked about this, does it make algorithmic transparency sound increasingly elusive? And would you consider that moving toward that transparency is a worthy goal, or should we be asking another question?

Yeah, I think, you know, there are different meanings to transparency. If you are asking, tell me why the algorithm did this versus that, that's really hard, and if we really want to be able to answer that, we're not going to be able to have algorithms that do anything very powerful, because in a sense, by being able to say this is why it did that, we might as well just follow the path that we used to explain it, rather than have it do what it needed to do itself. So on transparency, what we can do is try to get a pragmatic result; what we can't do is go inside, open up the hood, and say why did this happen. And the other problem is knowing what you want to have happen. Like, you say this algorithm is bad, this algorithm gives bad recommendations; what do you mean by bad recommendations? You have to be able to define something that says, oh, the thing is biased in this way, the thing has produced content we didn't like. You have to be able to give a way to define those bad things.

All right. Ms. Richardson, I can see you're making notes and want to weigh in on this, but you also talked about compiled data and encoded bias in getting the algorithm to yield a certain result. So let's say you build this algorithm, and you build this box to contain this data set, or to make certain that it is moving in this direction. Then, as that algorithm self-replicates and moves forward, does it move further in that direction, or does data inform it and pull it in a separate direction, if you're building it to get it to yield a specific result?

So it depends on what type of technical system we're talking about, but to unpack what I was saying: the problem with a lot of these systems is that they're based on data sets which reflect all of our current conditions, which also means any imbalances in our conditions. One of the examples that I gave in my written testimony referenced Amazon's hiring algorithm, which was found to have gender-disparate outcomes, and that's because it was learning from prior hiring practices. There are also examples of other similar hiring algorithms, one of which found that if you had the name Jared and you played lacrosse, you had a better chance of getting a job interview. And there, it's not that the correlation between your name being Jared and playing lacrosse means that you're necessarily a better employee than anyone else; it's simply looking at patterns in the underlying data. But it doesn't necessarily mean that the patterns the system is seeing actually reflect reality, or, in some cases it does, and it's not necessarily how we want to view reality, and instead shows the skews that we have in society.
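A toy sketch of the failure mode Ms. Richardson describes: train a model on past hiring decisions and it reproduces the skew baked into those decisions. The data are synthetic, and the "Jared"/"lacrosse" features stand in for arbitrary proxies present in a real historical record:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
is_jared = rng.random(n) < 0.1
plays_lacrosse = rng.random(n) < 0.2
skill = rng.normal(size=n)

# Historical interviews skewed toward the proxy group regardless of
# skill, so the bias is baked into the training labels themselves.
interviewed = (skill + 2.0 * (is_jared & plays_lacrosse)) > 0.5

X = np.column_stack([is_jared, plays_lacrosse, skill])
model = LogisticRegression().fit(X, interviewed)
print("learned weights (jared, lacrosse, skill):", model.coef_.round(2))
```

The learned weights on the two irrelevant proxies come out positive, so the model hands future Jareds who play lacrosse a boost that has nothing to do with merit, which is exactly the pattern-versus-reality gap she points to.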
Okay. Mr. Wolfram, you mentioned in your testimony there could be a single content platform but a variety of final ranking providers. Are you suggesting that it would be wise to prohibit companies from using cross-business data flows?

I'm not sure how that relates. I mean, the thing that I think is the case is that it is not necessary to have the final ranking of content done by the same entity. There's a lot of work that has to be done to get content ready to be finally ranked, for a newsfeed or for search results and so on; that's a lot of heavy lifting. But the choice, which is made often separately for each user, about how to finally rank content, I don't think has to be made by the same entity. And I think if you break that apart, you kind of change the balance between what is controllable by users and what is not. I don't think it's realistic to... I would like to say that one of the questions is, you know, a data set implies certain things, we don't like what it implies, and so on. One of the challenges is to define what we actually want, and one of the things that's happening here is that because these are AI systems, computational systems, we have to define much more precisely what we want than we've had to do before. And so it's necessary to kind of write these computational rules, and that's a tough thing to do, and it's something which cannot be done by the computer; it can't even necessarily be done from prior data. It's something people like you have to decide what to do about.

Thank you, Mr. Chairman. I would like unanimous consent to enter the letter that Senator Blumenthal and I sent earlier this month. And thank you; I know my time has expired, but I will just simply say, to Ms. Stanphill, that the evasiveness in answering Senator Blumenthal's question about what they are doing is inadequate when you look at the safety of children online. Just to say that you're changing the content that appears in the recommended list is inadequate. And Mr. Harris, I will submit a question to you about what we can look at on platforms for combating some of this bad behavior.

Thank you, Senator Blackburn. Senator Peters.

Thank you, Mr. Chairman, and thank you to our witnesses; fascinating discussion. I'd like to address an issue I think is of profound importance to our democratic republic, and that's the fact that in order to have a vibrant democracy, you need to have an exchange of ideas and an open platform. Certainly part of the promise of the internet, as it was first conceived, was that we'd have this incredible universal commons where a wide range of ideas would be discussed and debated; it would be robust. And yet it seems as if we're not getting that; we're actually getting more and more siloed. Dr. Wolfram, you mentioned how people can make choices and they could live in a bubble, but at least it would be their bubble that they get to live in. But that's what we're seeing throughout our society: as polarization increases, more and more folks are reverting to tribal-type behavior. Mr. Harris, you talked about our medieval institutions and Stone Age minds; tribalism was alive and well in the past, and we're seeing advances in technology in a lot of ways bring us back into that kind of tribal behavior. So my question is, to what extent is this technology actually accelerating that, and is there a way out? Yes, Mr. Harris.
Thank you — I love this question. There's a tendency to think that this is just human nature, that people are polarized and this is just playing out, that the technology is holding up a mirror to society. But what it's really doing is acting as an amplifier for the worst parts of us. In the race to the bottom of the brain stem to get attention, take an example like Twitter: it's calculating what is the thing it can show you that will get the most engagement, and it turns out that outrage gets the most engagement. One study found that for every word of moral outrage you add to a tweet, your retweet rate increases by 17 percent. In other words, the polarization of our society is actually part of the business model.

Another example is that shorter, briefer things work better in the attention economy than long, complex, nuanced ideas that take a long time to talk about, and that's why you get 140 characters dominating our social discourse. But reality, and the most important topics to us, are increasingly complex, while we can only say increasingly simple things about them. That automatically creates polarization, because you can't say something simple about something complicated and have everybody agree with you; people will by definition misinterpret you and hate you for it. And it's never been easier to retweet that and generate a mob that will come after you, which has created a callout culture, chilling effects, and a whole set of other downstream effects on polarization, all amplified by the fact that these platforms are rewarded for giving you the most sensational stuff.

One last example, on YouTube. I know there are people here concerned about equal representation of the left and the right in media — let's say we get that perfectly right. As recently as a month ago on YouTube, if you mapped the top fifteen most frequently mentioned verbs or keywords in the recommended videos, they were "hates," "debunks," "obliterates," "destroys" — in other words, "Jordan Peterson DESTROYS social justice warrior," that kind of thing. That is the background radiation we're dosing two billion people with. You can hire content moderators in English and start to handle the problem, as Ms. Stanphill said, but two billion people in hundreds of languages are using these products. How many engineers at YouTube speak the 22 languages of India, where there's an election coming up? So that's some context on that.
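A minimal sketch of the ranking logic Mr. Harris describes might look like the following. The outrage lexicon, the example posts, and the decision to apply the 17 percent figure per word are illustrative assumptions drawn from his description, not any platform's actual code:

    # Sketch of an engagement-maximizing ranker. If outrage reliably predicts
    # engagement (Harris cites ~17% more retweets per moral-outrage word),
    # a ranker optimizing pure engagement surfaces the angriest post.
    # Lexicon and posts are invented.

    OUTRAGE_WORDS = {"destroys", "obliterates", "debunks", "hates", "disgrace"}

    def predicted_engagement(post: str, base_rate: float = 1.0) -> float:
        words = post.lower().split()
        outrage_count = sum(w.strip(".,!?") in OUTRAGE_WORDS for w in words)
        return base_rate * (1.17 ** outrage_count)  # +17% per outrage word

    posts = [
        "New study offers a nuanced look at local zoning policy",
        "Senator DESTROYS opponent, obliterates their disgrace of a plan!",
    ]

    # Rank purely by predicted engagement: the outraged post wins every time.
    for post in sorted(posts, key=predicted_engagement, reverse=True):
        print(round(predicted_engagement(post), 2), post)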
Fascinating — there's a lot of context there, and I'm running out of time, but I took particular note in your testimony when you talked about how technology will eat up elections; you were referencing, I think, another writer on that issue. In the remaining brief time I have: what's your biggest concern about the 2020 elections and how technology may eat up this coming election?

That comment was another example of a protection we used to have that technology took away. We used to have equal-price campaign ads, so that it cost the same amount on Tuesday night at 7:00 p.m. for any candidate to run an election ad. When Facebook gobbles up that part of the media, it just takes away those protections; there's now no equal pricing. In terms of what I'm worried about, I'm mostly worried about the fact that none of these problems have been solved. The business model hasn't changed, and the reason you see a Christchurch event happen and the video just show up everywhere is that, fundamentally, there's no easy way for these platforms to address this problem, because the problem is their business model. I do think there are some small interventions, like fast lanes for researchers — accelerated access for people who are spotting disinformation. But the real problem, another example of software eating the world, is that instead of NATO or the Department of Defense protecting us in a global information war, we have a handful of ten or fifteen security engineers at Facebook and Twitter, and they were woefully unprepared, especially in the last election, and I'm worried that they still might be.

Right. Thank you. Thank you, Senator Peters. Senator Johnson.

Mr. Chairman — Mr. Harris, I agree when you say that the best line of defense for individuals is exposure: people need to understand that they are being manipulated. A lot of this hearing has been about manipulation by algorithms and artificial intelligence; I want to talk about manipulation by human intervention, human bias. We don't allow, or we certainly put restrictions through the FCC on, an individual's ownership of TV stations, radio stations, and newspapers, because we don't want a monopoly of content in a community — much less Facebook and Google, which reach billions of people, hundreds of millions of Americans. So I had staff go to the Politics account on Instagram — and by the way, I have a video of this, so I'd like to enter it into the record. They hit follow, and this is the list they were given, in the exact order. I had asked the witnesses to see if there's a conservative in here, and how many. Here's the list: Elizabeth Warren, Kamala Harris, the New York Times, the Huffington Post, Bernie Sanders, CNN Politics, New York Times Opinion, NPR, the Economist, Nancy Pelosi, The Daily Show, the Washington Post, covering poetess, NBC, the Wall Street Journal, Pete Buttigieg, Time, the New Yorker, Reuters, the Southern Poverty Law Center, Kirsten Gillibrand, the Guardian, BBC News, the ACLU, Hillary Clinton, Joe Biden, Beto O'Rourke, Real Time with Bill Maher, C-SPAN, SNL, Pete Souza, the UN, the Guardian, HuffPost Women, the Late Show with Stephen Colbert, MoveOn.org, Washington Post Opinion, USA Today, the New Yorker, the Women's March, Late Night with Seth Meyers, The Hill, CBS, Justin Trudeau — and it goes on. These are five conservative staff members. If there really were algorithms shuffling toward content they might actually want, or would agree with, you'd expect to see maybe Fox News, Breitbart, Newsmax; you might even see a really big name like Donald Trump. And there wasn't. So my question is: who's producing that list? Is that Instagram? Is that the Politics account? How is that being generated? I have a hard time believing that's generated, or being manipulated, by an algorithm or by AI.

I don't know. I'd be really curious to know what the click pattern was — in other words, you open up an Instagram account, it's blank, and you're saying that it gives suggestions for whom to follow? I honestly have no idea how Instagram ranks those things, but I'd be very curious to know what the original clicks were that produced that list.
So can anybody else explain that? I don't believe that's AI trying to give a conservative staff member content they may want to read. This, to me, looks like Instagram — if they're actually the ones producing that list — trying to push a political bias. Mr. Wolfram, you seem to want to weigh in.

The thing that will happen is, if there's no other information, it will tend to be just where there is the most content, or what the most people on the platform in general have clicked. So it may simply be a statement, in that particular case — and I'm really speculating — that the users of that platform tend to like those things.

You'd have to assume, then, that the vast majority of users of Instagram are liberal progressives; that might be evidence of it. Ms. Stanphill, what would your understanding be?

Thank you, Senator. I can't speak for Twitter; I can speak to Google's stance generally with respect to AI, which is that we build products for everyone, so we've got systems in place to ensure no bias is introduced.

But you won't deny the fact that there are plenty of instances of content being pulled off of conservative websites, which then have to repair the damage of that, correct? What's happening here?

Thank you, Senator. I want to quickly remind everyone that I am a user experience director and I work on digital well-being, which is a program to ensure that users have a balanced relationship with tech. So I really wish I could comment, but I don't know much about what's happening there.

Because, again, I think conservatives have a legitimate concern that content is being pushed from a liberal, progressive standpoint to the vast majority of users of these social sites. Ms. Richardson?

There has been some research on this, and it showed that when you look at engagement levels there are no partisan disparities; in fact, it's equal. So I agree with Dr. Wolfram that what you may have seen was just what was trending. Even in the list you mentioned — the Southern Poverty Law Center was simply trending because their executive director was fired, so that may just be a result of the news, not the organization itself. It's also important to understand that research has shown that when there is any type of disparity along partisan lines, it usually involves the veracity of the underlying content, and that's more of a content moderation issue than a matter of what you're shown.

Okay. Anyway, I'd like to get that video entered into the record, and we'll keep looking into this.

Without objection. And to the Senator from Wisconsin's point: I think if you google yourself, you'll find most of the things that pop up right away are going to be from news organizations that tend to be to the left. I've had that experience as well, and it seems that if that actually were based on a neutral algorithm or some other form of artificial intelligence, then — since you're the user, and since they know your habits and patterns — you might see something pop up from Fox News or the Wall Street Journal instead of the New York Times. That, to me, has always been hard to explain.

Well, let's work together to try to get that explanation, because it's a valid concern. Senator Tester.
Thanks. Thank you, Mr. Chairman, and thank all the folks who testified here today. Ms. Stanphill, does YouTube have access to personal data on a user's Gmail account?

Thank you, Senator. I am an expert in digital well-being at Google, so I'm sorry, I don't know that in depth, and I don't want to get out of my depth, but I can take that back and get you an answer.

Okay. So when it comes to Google search history, you wouldn't know that either?

I'm sorry, Senator, I'm not an expert in search, and I don't want to get out of my depth, but I can take it back.

Okay, all right, so let me see if I can ask a question that you can answer. Do you know if YouTube uses personal data in shaping recommendations?

Thank you, Senator. I can tell you that YouTube has done a lot of work to ensure that they're improving recommendations. I do not know about privacy and data, because that is not core to digital well-being; I focus on helping provide users with balanced technology usage. In YouTube that includes a time-watched profile, and it includes a reminder where, if you set a time limit, you'll get a reminder. Ultimately, we give folks the power to control their usage.

I understand what you're saying. What I'm concerned about is that — and it doesn't matter whether you're talking about Google or Facebook or Twitter — whoever it is has access to personal information, which I believe they do. Mr. Harris, do you think they do?

I wish I knew the exact answer. The general premise is that the more personal information these companies have access to, the better the recommendations they can provide — that's usually the talking point. And given the business model — because they're competing for who can better predict what will keep your attention — yes, they would use as much information as they can. Usually the way they get around this is by giving you an option to opt out, but of course the default is usually to opt in, and that's, I think, what's leading to what you're talking about.
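The default Mr. Harris describes is easy to picture in code. A hypothetical settings object like the one below — every field name is invented for illustration — ships with personalization switched on, so the data flows unless the user goes looking for the switch:

    # Sketch of "opt-out" personalization defaults. Because the fields default
    # to True, a user who never opens the settings screen shares everything.
    # Names and structure are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class PrivacySettings:
        use_watch_history: bool = True   # on unless the user finds and flips it
        use_search_history: bool = True
        personalized_ads: bool = True

    new_user = PrivacySettings()  # the default experience: fully opted in
    cautious_user = PrivacySettings(use_watch_history=False,
                                    use_search_history=False,
                                    personalized_ads=False)
    print(new_user, cautious_user, sep="\n")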
So, I am 62 years old, getting older every minute the longer this conversation goes on, but I will tell you that it never ceases to amaze me that my grandkids — the oldest is about fifteen or sixteen, and it goes down to about eight — when we're on the farm, are absolutely glued to this. Absolutely glued to it, to the point where if I want to get any work out of them, I have to threaten them, because they're riveted. So, Ms. Stanphill, when you're in your leadership meetings, do you actually talk about the addictive nature of this? Because it's as addictive as a cigarette, or more. Do you talk about the addictive nature? Do you talk about what you can do to stop it? I will tell you that I'm probably going to be dead and gone — and probably thankful for it — when all this comes to fruition, because this scares me to death. Senator Johnson talked about the conservative websites; you guys could literally sit down at your board meeting, I believe, and determine who's going to be the next President of the United States. I personally believe you have that capacity. Now, I could be wrong, and I hope I'm wrong. Do any of the other folks here — I'll go with Ms. Richardson — do you see it the same way, or am I overreacting to a situation I don't know enough about?

I think your concerns are real, in that the business model most of these companies use, and most of the optimization systems, are built to keep us engaged — and to keep us engaged with provocative material that can skew in the directions you're concerned about.

I don't know your history, but do you think the boards of directors of any of these companies actually sit down and talk about the impacts I'm concerned about, or are they talking about how to keep doing what they've been doing to maximize their profit margin?

I don't think they're talking about the risks you're concerned about, and I don't even think that's happening at the product development level, in part because a lot of teams are siloed. So I doubt these conversations are happening in a holistic way that would address your concerns.

Which is good — I don't want to get into a fistfight on this panel. Ms. Stanphill, do the conversations you have — since you couldn't answer the previous questions — indicate that she's right, that the conversations are siloed? Is that correct?

No, that's not correct, sir.

Then why can't you answer my questions?

I can answer the question with respect to how we think about digital well-being at Google: it's actually a goal we work on across the company, so I have the novel duty of connecting those dots. But we are doing that, and we have incentive to make sure we make progress.

Okay. Well, I just want to thank you all for being here, and hopefully you all leave friends, because I know that certain senators — myself included — just tried to pit you against one another. That's not intentional. I think this is really serious. I have exactly the opposite opinion from Senator Johnson's, in that I think there's a lot of driving to the conservative side — so it shows you that when humans get involved in this, we're going to screw it up. But by the same token, there need to be those circuit breakers, as Senator Schatz talked about. Thank you very, very much.

Thank you to the old geezer from Montana. Senator Rosen.

Thank you all for being here today. I have so many questions, as a former software developer and systems analyst, and I see this as three issues and one question. Issue one: there's a combination happening of machine learning, artificial intelligence, and quantum computing all coming together; it exponentially increases the capacity of predictive analytics; it grows on itself; this is what it's meant to do.
Issue two: the monetization, the data brokering, of these analytics, and the bias in all areas with regard to the monetization of this data. And issue three, as was discussed earlier: where does the ultimate liability lie — with the scientists who craft the algorithm, with the computer that potentiates the data and the algorithm, or with the company or persons who monetize the end use of the data, for whatever means? So, three big issues — there are many more — but on its face, my question today is about transparency. In so many sectors we require transparency; we're used to it every day. Think about this for potential harm: every day you go to the grocery store, the market, a convenience store, and in the food industry we have required nutrition labeling on every single item; it clearly discloses the nutrition content. We even have calorie counts on menus now — oh my, maybe I won't have that Alfredo; you'll go for the salad. We've accepted this; all of our companies have done it; there basically isn't any food that doesn't have a label. So, to empower consumers, how do you think we could address some of this transparency that maybe, at the end of the day, we're all talking about, with regard to these algorithms and the data — what happens to it, how we deal with it? It's overwhelming.

I think with respect to things like nutrition labels, we have the advantage that we're using 150-year-old science to say what the chemistry contained in a food is. Things like computation and AI are a bit of a different kind of science, and they have this feature, this phenomenon of computational irreducibility, where it's not possible to just give a quick summary of what the effect of a computation is going to be.

But I know, having written algorithms myself — I have an expected outcome, I have a goal in there. You talk about no goals; there is a goal, whether you meet it or not, whether you exceed it or not, whether you fail or not. There is a goal when you write an algorithm for somebody who's asking you for this data.

The confusing thing is that the practice of software development has changed, and it's changed with machine learning and AI.

They can create their own goals.

With machine learning, it's not quite that it has its own goals. It's rather that — when I started using computers a ridiculously long time ago, you would write a small program and you would know what every line of code was supposed to do. With modern machine learning, you don't.

But you still should have some ability to control the outcome.

My feeling is that, yes, you can put constraints on the outcome. The question is how you describe those constraints, and you essentially have to have something like a program to describe them. Let's say you want to say we want to have balanced treatment —

So let's take it out of technology and just talk about transparency in a way we can all understand. Can we put it in English terms? We're going to take your data — your well-being, how you use it, do you sleep or don't you sleep, how many hours a day, think about your Fitbit — who is it going to? Can we bring it down to English-language parameters that people understand?

I think some parts of it you could. The part that you cannot is when you say we're going to make this give unbiased treatment of, let's say, political directions or something —

I'm not even talking about unbiased in political directions. There's going to be bias in age and sex and race and so on; there's inherent bias in everything.
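One way to read Senator Rosen's nutrition-label idea is as a small, human-readable disclosure attached to each algorithmic product. The sketch below is purely illustrative — every field is invented — and, as Dr. Wolfram cautions, a label like this can state inputs and goals but cannot fully explain an irreducible computation:

    # Sketch of an "algorithmic nutrition label": a plain-language disclosure
    # of what a system consumes and what it optimizes for. Fields are
    # hypothetical; the label states goals and inputs, not every decision.
    ALGORITHM_LABEL = {
        "product": "video recommendations",
        "optimizes_for": "predicted watch time",
        "data_used": ["watch history", "search history", "device location"],
        "data_shared_with": ["advertising systems"],
        "user_controls": ["pause watch history", "turn off autoplay"],
    }

    def print_label(label: dict) -> None:
        for field, value in label.items():
            value_text = ", ".join(value) if isinstance(value, list) else value
            print(f"{field.replace('_', ' ').title()}: {value_text}")

    print_label(ALGORITHM_LABEL)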
Given that, you can still have other conversations. My feeling is that rather than labeling — rather than having a nutrition-label-like thing that says what this algorithm is doing — the better strategy is to give some third party the ability to be the brand that finally decides what you see, just as with newspapers: you can decide to see your news through the Wall Street Journal or through the New York Times or whatever.

And who is ultimately liable if people get hurt by the monetization of this data, or the data brokering of some of it?

That's a good question. I think it will help to break apart the underlying platform. Something like Facebook, for example — you kind of have to use it; there's a network effect. It's not the case that you can say, let's break Facebook into a thousand different Facebooks and you can pick which one you want to use; that's not really an option. But what you can do is say: when a news feed is being delivered, is everybody seeing a news feed with the same set of values, the same brand, or not? I think the realistic thing is to have separate providers for that final news feed. That's a possible direction — there are a few other possibilities — and so your label, in a sense, says: this is the such-and-such-branded news feed. People then get a sense of whether that's the one they like, whether it's doing something reasonable, and if it's not, they'll simply, as a market matter, reject it. That's my thought.

I think I'm way over my time. We could all have a big conversation here; I'll submit more questions for the record. Thank you.

Thank you, Senator Rosen. And my apologies to the Senator from New Mexico — I missed that you were up before the Senator from Nevada — but Senator Udall is recognized.

Thank you, Mr. Chairman, and thank you to the panel on this very, very important topic.
Mr. Harris, I'm particularly concerned about the radicalizing effect that algorithms can have on young children; it's been mentioned here today in several questions, and I'd like to drill down a little deeper. Children can inadvertently stumble on extremist material in a number of ways: by searching for terms they don't know are loaded with subtext, by clicking on shocking content designed to catch the eye, or by getting unsolicited recommendations for content designed to engage their attention and maximize their viewing time. It's a story told over and over by parents who don't understand how their children have suddenly become engaged with the alt-right, white nationalist groups, or other extremist organizations. Can you provide more detail on how young people are uniquely impacted by these persuasive technologies, and the consequences if we don't address this issue promptly and effectively?

Thank you, Senator. Yes, this is one of the issues that most concerns me. As Senator Schatz mentioned at the beginning, there's evidence — even in the last month, keeping in mind that these issues have been reported on for years now — of a pattern identified on YouTube in which young girls who had taken videos of themselves dancing in front of cameras were linked, in usage patterns, to other videos like that, which went further and further into that realm. That was just identified by YouTube's supercomputer as a pattern: this is a kind of pathway that tends to be highly engaging.

The way I tend to describe this: imagine a spectrum on YouTube. On my left side there's the calm, Walter Cronkite section of YouTube; on the right-hand side there's crazy town — UFOs, conspiracy theories, Bigfoot, whatever. You could drop a human being anywhere: on the calm side or in crazy town. But if I'm YouTube and I want you to watch more, which direction from there am I going to send you? I'm never going to send you to the calm section; I'm always going to send you toward crazy town. So now imagine two billion people, like an ant colony of humanity, and the playing field is tilted toward the crazy stuff.

Some specific examples: a year ago, a teen girl who looked at a dieting video on YouTube would be recommended anorexia videos, because that was the more extreme thing to show to the voodoo doll that looked like a teen girl — the next thing to show is anorexia. If you looked at a NASA moon landing video, it would show flat-earth conspiracy theories, which were recommended hundreds of millions of times before recently being taken down. I wrote down another example: 50 percent of white nationalists in a Bellingcat study said it was YouTube that had red-pilled them — "red-pilling" being the term for the opening of the mind. The best predictor of whether you'll believe in a conspiracy theory is whether I can get you to believe in one conspiracy theory, because one conspiracy opens up the mind, makes you doubt and question things, and gets you really paranoid. And the problem is that YouTube is doing this en masse; it has created sort of two billion personalized Truman Shows, and each channel has that radicalizing direction.
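Mr. Harris's "crazy town" dynamic can be simulated in a few lines. In the toy model below — the scores, the step size, and the greedy "slightly more extreme wins" rule are all invented for illustration — a recommender that always prefers the marginally more engaging, marginally more extreme next item walks a viewer steadily up the spectrum:

    # Toy simulation of recommendation drift toward "crazy town." Items have
    # an integer extremeness score from 0 (calm) to 100 (crazy town); assume,
    # per Harris, that predicted engagement rises with extremeness. A greedy
    # recommender then ratchets every session toward the extreme end.
    catalog = list(range(101))  # extremeness levels 0 .. 100

    def recommend_next(current: int) -> int:
        # Candidates "near" the current video; the most extreme nearby one
        # wins, because (by assumption) it is predicted most engaging.
        nearby = [x for x in catalog if abs(x - current) <= 10]
        return max(nearby)

    position = 20  # the viewer starts on fairly calm content
    for step in range(8):
        position = recommend_next(position)
    print(f"extremeness after 8 recommendations: {position}")  # -> 100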
And if you think about it from an accountability perspective: back when we had Janet Jackson on one side of the TV screen at the Super Bowl and 60 million Americans on the other, we had a five-second TV delay and a bunch of humans in the loop, for a reason. What happens when you have two billion Truman Shows, two billion possible Janet Jacksons, and two billion people on the other end? It's a digital Frankenstein that's really hard to control, and that, I think, is the way we need to see it; from there, we can talk about how to regulate it.

Ms. Stanphill, you've heard him just describe what Google does with young people. What responsibility does Google have if its algorithms are recommending harmful videos to a child or a young adult who otherwise would not have viewed them?

Thank you, Senator. Unfortunately, the research and information cited by Mr. Harris is not accurate; it doesn't reflect current policies or the current algorithm. The team, in an effort to make sure these advancements are made, has taken such content out of recommendations, which limits the views by more than fifty percent.

So are you saying you don't have any responsibility? Because clearly young people are being directed toward this kind of material; there's no doubt about it.

Thank you, Senator. YouTube is doing everything it can to ensure child safety online and works with a number of organizations to do so, and it will continue to do so.

Do you agree with that, Mr. Harris?

I don't, because I know the researchers — who are unpaid, who stay up till 3:00 in the morning trying to scrape the data sets to show what these actual results are — and it's only through huge amounts of public pressure that, incrementally, bit by bit, issue by issue, they've tackled pieces of it. If they were truly acting responsibly, they would be doing so preemptively, without the unpaid researchers staying up till 3:00 in the morning doing that work.

Thank you, Mr. Chairman.

Thank you, Senator Udall. Senator Sullivan.

Thank you, Mr. Chairman, and I appreciate the witnesses being here today — very important issues that we're all struggling with. Let me ask Ms. Stanphill:
I had the opportunity to engage in a couple of rounds of questions with Mr. Zuckerberg from Facebook when he was here, and one of the questions I asked — which I think we're all trying to struggle with — is this issue of what you are. When I say "you," I mean Google or Facebook. There's this notion that you're a tech company, but some of us think you might be the world's biggest publisher: I think about 140 million people get their news from Facebook, and when you combine Google and Facebook, somewhere north of 80 percent of Americans get their news from you. So what are you? Are you a publisher, or are you a tech company, and are you responsible for your content? Mark Zuckerberg did say he was responsible for their content, but at the same time he said they're a tech company, not a publisher — and as you know, whether you are one or the other is really the critical, almost threshold, issue in terms of how, and to what degree, you would be regulated by federal law. So which one are you?

Thank you, Senator. As I might remind everybody, I am a user experience director for Google, and I support our digital well-being initiative. With that said, I know we're a tech company; that's the extent to which I know this definition you're speaking of.

Do you feel you're responsible for the content that comes from Google on your websites, when people do searches?

Thank you, Senator. As I mentioned, this is a bit out of my area of expertise as the digital well-being expert; I would defer to my colleagues to answer that specific question.

Well, maybe we can take those questions for the record. Does anyone else have a thought on a pretty important threshold question?

Thank you, Senator. The issue here is that Section 230 of the Communications Decency Act has made it so that the platforms are not responsible for any content that is on them, which freed them up to create what we've created today. The problem is — is YouTube a publisher? Well, they're not generating the content; they're not paying journalists. But they are recommending things, and I think we need a new class in between. The New York Times is responsible if it says something that defames someone and reaches a hundred million or so people; YouTube recommends flat-earth conspiracy theories hundreds of millions of times. And if you consider that 70 percent of YouTube's traffic is driven by recommendations — meaning driven by what an algorithm is choosing to put in front of the eyeballs of a person — then if you were to backwards-derive a model, it would be: with great power comes no responsibility.

Let me follow up on that — two quick things, because I want to make sure I don't run out of time; it's a good line of questioning. When I asked Mr. Zuckerberg, he actually said they were responsible for their content, in a hearing like this one, and that actually starts to get close to being a publisher, from my perspective. I don't know what Google's answer is, or others', but I think it's an important question.
And Mr. Harris, you just mentioned something that raises what I think is a really important question. I don't know if some of you saw Tim Cook's commencement speech at Stanford a couple of weeks ago — I happened to be there and saw it, and thought it was quite interesting. He was talking about all the great innovations from Silicon Valley, but then he said, quote, "Lately it seems this industry is becoming better known for a less noble innovation: the belief that you can claim credit without accepting responsibility." Then he talked about a lot of the challenges, and he said, "It feels a bit crazy that anyone should have to say this, but if you've built a chaos factory, you can't dodge responsibility for the chaos. Taking responsibility means having the courage to think things through." So I'm going to open this up as kind of a final question, and maybe we start with you, Mr. Harris: what do you think he was getting at? It was a little generalized, but he obviously put a lot of thought into that speech — this notion of building things, creating things, and then saying, oh, I'm not responsible for that. What's he getting at? And then I'll open it up to the other witnesses. I thought it was a good speech, but I'd like your views on it.

I think it's exactly what everyone's been saying on this panel: these things have become digital Frankensteins that are terraforming the world in their image, whether it's the mental health of children or our politics and our political discourse, without taking responsibility for taking over the public square.

And who do you think is responsible?

I think we have to have the platforms be responsible. When they take over election advertising, they're responsible for protecting elections; when they take over the mental health of kids, or Saturday morning, they're responsible for protecting Saturday morning.

Anyone else have a view on the quotes I gave from Tim Cook's speech?

I think one of the questions is: what do you want to have happen? When you say something bad is happening — it's giving the wrong recommendations — by what definition of wrong? Who is deciding? Who is the moral arbiter? If I were running one of these automated content selection companies — my company does something different — I would not want to be the moral arbiter for the world, which is what's effectively having to happen when decisions are being made about what content will and will not be delivered. My feeling is the right thing is to break that apart — to have a more market-based approach, where third parties are responsible for that final decision about what content is delivered to which users. The platforms can then do what they do very well, which is the large-scale engineering and large-scale monetization of content, but somebody else — a third party that users can choose — gets to decide the final ranking of content shown to particular users, so users can give brand allegiance to the content providers they want, and not to the others.
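Dr. Wolfram's separation of platform and final ranking has a natural software shape. The sketch below is one hypothetical reading of his proposal — the interface names and the two example providers are invented — showing a platform that assembles candidate items while interchangeable, user-chosen third-party rankers decide the final order:

    # Sketch of "third-party final-ranking providers": the platform gathers
    # candidates; the user's chosen, branded ranker orders them. Interfaces
    # and providers here are hypothetical.
    from typing import Callable, Dict, List

    Item = Dict[str, object]  # e.g. {"title", "engagement", "timestamp"}
    Ranker = Callable[[List[Item]], List[Item]]

    def engagement_ranker(items: List[Item]) -> List[Item]:
        return sorted(items, key=lambda i: i["engagement"], reverse=True)

    def chronological_ranker(items: List[Item]) -> List[Item]:
        return sorted(items, key=lambda i: i["timestamp"], reverse=True)

    RANKING_PROVIDERS: Dict[str, Ranker] = {
        "MaxEngagement Inc.": engagement_ranker,
        "PlainChronology Co.": chronological_ranker,
    }

    def build_feed(candidates: List[Item], provider: str) -> List[Item]:
        # The platform does the heavy lifting of gathering candidates;
        # the user's chosen provider makes the final editorial call.
        return RANKING_PROVIDERS[provider](candidates)

    candidates = [
        {"title": "calm explainer", "engagement": 10, "timestamp": 2},
        {"title": "outrage bait",   "engagement": 99, "timestamp": 1},
    ]
    print([i["title"] for i in build_feed(candidates, "PlainChronology Co.")])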
Thank you, Mr. Chairman. Thank you, Senator Sullivan. Senator Markey.

Thank you, Mr. Chairman, very much. YouTube is far and away the top website for kids today; research shows that a whopping 80 percent of six- to twelve-year-olds use YouTube on a daily basis. But when kids go on YouTube, far too often they encounter inappropriate and disturbing video clips that no child should ever see. In some instances, when kids click to view cartoons and characters from their favorite games, they find themselves watching material promoting self-harm and even suicide; in other cases, kids have opened videos featuring beloved Disney princesses and all of a sudden seen a sexually explicit scene. Videos like this shouldn't be accessible to children at all, let alone systematically served to them. Mr. Harris, can you explain how, once a child consumes one inappropriate YouTube video, the website's algorithms begin to prompt the child to watch more harmful content of that sort?

Yes, thank you, Senator. If you watch a video about a topic — let's say it's a cartoon character, the Hulk or something like that — YouTube picks up a pattern that maybe Hulk videos are interesting to you. The problem is that there's a dark market of people — whom you're referencing from that famous long article — who generate content based on the most-viewed videos. They'll look at the thumbnails and say, oh, there's a Hulk in that video, there's a Spider-Man in that video, and then they have machines manufacture generated content and upload it to YouTube, tagged in such a way that it gets recommended near those content items. YouTube is trying to maximize traffic for each of these publishers, so when these machines upload the content, it tries to dose them with some views — well, maybe this video is really good — and it ends up gathering millions and millions of views, because kids quote-unquote "like" them. The key thing going on here, as I said in my opening statement, is an asymmetry of power being masked as an equal relationship: technology companies claim "we're giving you what you want," as opposed to a twelve-year-old, who just keeps getting fed the next video, the next video, the next video.

Correct. And there's no way that can be a good thing for our country over a long period of time, especially when you realize the asymmetry: YouTube is pointing a supercomputer at that child's brain.
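The gaming Mr. Harris describes depends on recommenders that match videos by shallow signals like tags. A toy sketch — with invented tags, titles, and ranking rule — shows how mass-produced uploads that copy a popular video's tags slide into its "related" slots:

    # Toy tag-overlap recommender, and how copied tags game it. A naive
    # system that ranks "related" videos by shared tags cannot tell a real
    # cartoon from machine-generated imitations reusing the same tags.
    popular = {"title": "Official Hulk cartoon",
               "tags": {"hulk", "cartoon", "kids"}}

    uploads = [
        {"title": "Nature documentary",        "tags": {"wildlife", "ocean"}},
        {"title": "Auto-generated Hulk #4812", "tags": {"hulk", "cartoon", "kids"}},
        {"title": "Auto-generated Hulk #4813", "tags": {"hulk", "cartoon", "kids"}},
    ]

    def related_videos(seed, candidates):
        # Rank candidates by how many tags they share with the seed video.
        return sorted(candidates,
                      key=lambda v: len(v["tags"] & seed["tags"]),
                      reverse=True)

    for video in related_videos(popular, uploads):
        print(video["title"])  # tag-copying uploads outrank everything else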
So clearly the way these websites are designed can pose serious harm to children, and that's why in the coming weeks I will be introducing the Kids Internet Design and Safety Act — the KIDS Act. Specifically, my bill will combat the amplification of inappropriate and harmful content online; the design features, like autoplay, that coerce children and create bad habits; and the commercialization and marketing that manipulate kids and push them into consumer culture. So, to each of today's witnesses: will you commit to working with me to enact strong rules that tackle the design features and underlying issues that make the internet unsafe for kids? Mr. Harris?

Yes.

Yes — it's a terrific goal, but it's not particularly my expertise.

Okay. Yes. Okay, thank you. Ms. Stanphill, recent reporting suggests that YouTube is considering significant changes to its platform, including ending autoplay for children's videos, so that when one video ends, another doesn't immediately begin, hooking the child into long viewing sessions. I call for an end to autoplay for kids. Can you confirm to this committee that YouTube is getting rid of that feature?

Thank you, Senator. I cannot confirm that as a representative from digital well-being, but I can get back to you.

I think it's very important that that happen, whether voluntarily or through federal legislation, to make sure the internet is a healthier place for kids. And Senators Blumenthal and Schatz and myself, Senator Collins, and Senator Bennet are working on a bipartisan Children and Media Research Advancement Act that will commission a five-year, 95-million-dollar research initiative at the National Institutes of Health to investigate the impact of tech on kids. It will produce research to shed light on the cognitive, physical, and socio-emotional impacts of technology on kids. I look forward to working on that legislation with everyone at this table, so that we can design legislation and ultimately a program. I know that Google has endorsed the CAMRA Act; Ms. Stanphill, can you speak to this?

Yes, thank you, Senator. I can speak to the fact that we have endorsed the CAMRA Act and look forward to working with you on further regulation.

Okay. Same for you, Mr. Harris?

We've also endorsed it, with the Center for Humane Technology.

Thank you. I just think we're late as a nation to this subject, but I don't think we have an option: we have to make sure there are enforceable protections for the children of this country. Thank you, Mr. Chairman.
Thank you, Senator Markey. Senator Young.

I thank our panel for being here. I thought I'd ask a question about concerns that many have — and I expect those concerns will grow — about AI becoming a black box, where it's unclear exactly how certain platforms make decisions. In recent years, deep learning has proved very powerful at solving problems and has been widely deployed for tasks like image captioning, voice recognition, and language translation. As the technology advances, there is great hope for AI to diagnose deadly diseases, calculate multi-million-dollar trading decisions, and implement successful autonomous innovations in transportation and other sectors. Nonetheless, the intellectual power of AI has received public scrutiny and has become unsettling for some futurists. Eventually, society might cross a threshold at which using AI requires a leap of faith. In other words, AI might become, as they say, a black box, where it is impossible to tell how an AI that has internalized massive amounts of data is making its decisions through its neural network — and, by extension, impossible to tell how those decisions impact the psyche, the perceptions, the understanding, and perhaps even the behavior of an individual. In early April, the European Union released final ethics guidelines calling for what it terms "trustworthy AI." The guidelines aren't meant or intended to interfere with policies or regulations, but instead offer a loose framework for stakeholders to implement the recommendations. One of the key guidelines relates to transparency and the ability of AI systems to explain their capabilities, limitations, and decision-making. However, if the improvement of AI requires, for example, more complexity, imposing transparency requirements would be equivalent to a prohibition on innovation. I will open this question to the entire panel, but my hope is that Dr. Wolfram — I'm sorry, sir — can begin.
Can you tell this committee the best ways for Congress to collaborate with the tech industry to ensure AI-system accountability without hindering innovation? And specifically, should Congress implement industry requirements or guidelines for best practices?

It's a complicated issue, and I think it varies from industry to industry. In the case of what we're talking about here — automated content selection on the internet — I think the right thing to do is to insert a level of human control into what is being delivered, not in the sense of taking apart the details of an AI algorithm, but by making the structure of the industry such that some human choice is injected into what's being delivered to people. The bigger story is that we need to understand how we're going to make laws that can be specified in computational form and applied to AIs. We're used to writing laws in English, basically: write down some words, then have people discuss whether they're following those words or not. When it comes to computational systems, that won't work — things are happening too quickly and too often. You need something where you're specifying computationally what you want to have happen, and then the system can perfectly well be set up to automatically follow those computational rules, or computational laws. The challenge is to create those computational rules, and that's something we're just not yet experienced with. We're starting to see computational contracts as a practical thing in the world of blockchain and so on, but we don't yet know how to specify some of the things we want to specify as rules for how our systems work. We don't yet know how to do that computationally.
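One hypothetical reading of a "computational law" in Dr. Wolfram's sense is a rule written as an executable predicate that can be checked automatically against every feed a system produces, rather than English prose argued over after the fact. The specific rule below — a cap on how much of a feed may come from one source — is invented purely to illustrate the form:

    # Sketch of a "computational law": a legal-style rule expressed as code
    # that can be checked automatically on every delivered feed. The rule
    # (no source may supply more than 40% of a feed) is an invented example.
    from collections import Counter
    from typing import Dict, List

    def satisfies_source_diversity(feed: List[Dict], max_share: float = 0.4) -> bool:
        if not feed:
            return True
        counts = Counter(item["source"] for item in feed)
        return max(counts.values()) / len(feed) <= max_share

    feed = [
        {"title": "a", "source": "outlet-1"},
        {"title": "b", "source": "outlet-1"},
        {"title": "c", "source": "outlet-1"},
        {"title": "d", "source": "outlet-2"},
    ]
    # 3 of 4 items come from one source: the feed violates the rule.
    print(satisfies_source_diversity(feed))  # False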
Are you familiar with the EU's approach to developing ethics guidelines for trustworthy AI?

I'm not familiar with those, no.

Are any of the other panelists? Okay — well, then perhaps that's a model we could look at, or perhaps it would be ill-advised; stakeholders who may be watching or listening to these proceedings can tell me. Do others have thoughts?

In my written comments, I outlined a number of transparency mechanisms that could help address some of your concerns. One recommendation specifically — the last one — is that companies create an algorithmic impact assessment. That framework, which we initially wrote for government use, can actually be applied in the private sector, and we built it by learning from different assessments. In the U.S., we use environmental impact assessments, which allow for a robust conversation about development projects and their impact on the environment; in the EU, which is one of the reference points we used, they have a data protection impact assessment, which is done both in government and in the private sector. The difference here — and why I think it's important for Congress to take action — is that what we're suggesting is something that's actually public, so we can have a discourse about whether a technological tool has a net benefit for society, or is something too risky that shouldn't be available.
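An algorithmic impact assessment of the kind Ms. Richardson describes is, at bottom, a structured public document. The sketch below imagines one as a data structure; every field name is invented for illustration, loosely echoing the environmental and data-protection assessments she cites:

    # Sketch of an algorithmic impact assessment as a structured, publishable
    # record. Field names are hypothetical, modeled loosely on environmental
    # and data-protection impact assessments.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class AlgorithmicImpactAssessment:
        system_name: str
        purpose: str
        data_sources: List[str]
        known_skews: List[str]                # imbalances in the training data
        affected_groups: List[str]
        mitigations: List[str]
        public_comment_period_days: int = 60  # public review is the point

    aia = AlgorithmicImpactAssessment(
        system_name="resume screener",
        purpose="rank job applicants",
        data_sources=["ten years of past hiring decisions"],
        known_skews=["historical hires skew male"],
        affected_groups=["job applicants"],
        mitigations=["remove gendered proxy features", "periodic disparity audits"],
    )
    print(aia)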
I'll be attentive to your proposal. Do you mind if we open a dialogue with you if we have any questions about it?

Yes, thank you.

Any other thoughts? It's okay if you don't. Okay — it sounds like we have a lot of work to do, with industry and with other stakeholders, to make sure that we don't act impulsively but also don't neglect this area of public policy. Thank you.

Thank you, Senator Young. Senator Cruz.

Ms. Stanphill, a lot of Americans have concerns that big tech media companies — Google in particular — are engaged in political censorship and bias. As you know, Google enjoys a special immunity from liability under Section 230 of the Communications Decency Act; the predicate for that immunity was that Google and other big tech media companies would be neutral public fora. Does Google consider itself a neutral public forum?

Thank you, Senator. Yes, it does.

Okay. Are you familiar with a report that was released yesterday from Project Veritas that included a whistleblower from within Google, videos of a senior executive at Google, and documents that are purportedly internal PowerPoint documents from Google?

Yes, I heard about that report in industry news.

Have you seen the report?

No, I have not.

So you didn't review the report to prepare for this hearing?

It's been a busy day, and I have a day job, which is digital well-being at Google, so I'm trying to keep up.

Sorry that this hearing is impinging on your day job.

It's a great opportunity, thank you.

Well, one of the things in that report — and I would recommend that people interested in political bias at Google watch the entire report and judge for themselves — is a secretly recorded video of a woman, Jen Gennai. As I understand it, Jen Gennai is the head of, quote, "Responsible Innovation" for Google. Are you familiar with Ms. Gennai?

I work in user experience, and I believe that group is one we worked with on the AI principles, but it's a big company and I don't work directly with Jen.

Do you know her?

No, I do not know Jen.

As I understand it, she is shown in the video saying — and this is a quote — "Elizabeth Warren is saying that we should break up Google. And, like, I love her, but she's very misguided. Like, that will not make it better; it will make it worse, because all these smaller companies who don't have the same resources that we do will be charged with preventing the next Trump situation. Like, a small company cannot do that." Do you think it's Google's job to, quote, "prevent the next Trump situation"?

Thank you, Senator. I don't agree with that, no, sir.

A different individual — a whistleblower identified simply as an insider at Google with knowledge of the algorithm — is quoted in the same report as saying Google, quote, "is bent on never letting somebody like Donald Trump come to power again." Do you think it's Google's job to make sure, quote, somebody like Donald Trump never comes to power again?

No, sir, I don't think that is Google's job. We build for everyone, including every single religious belief, every single demographic, every single region, and certainly every political affiliation.

Well, I have to say that certainly does not appear to be the case of the senior executives at Google. Do you know of a single one who voted for Donald Trump?

Thank you, Senator. I'm a user experience director, and I work on Google digital well-being, and I can tell you we have diverse views.

Do you know of anyone who voted for Trump?

I definitely know of people who voted for Trump.

Of the senior executives at Google?

I don't talk politics with my workmates.

Is that a no?

Sorry — is that a no to what?

Do you know of any senior executive at the company, even a single senior executive, who voted for Donald Trump?

As the digital well-being expert, I don't think this is in my purview to comment on.

So you don't know. All right. I can tell you what the public records show: in 2016, Google employees gave the Hillary Clinton campaign $1.315 million. That's a lot of money. Care to venture how much they gave to the Trump campaign?

I would have no idea, sir.

Well, the nice thing is, it's a round number: zero dollars — not a penny, according to the public reports. Let's talk about one of the PowerPoints that was leaked. The Project Veritas report has Google internally saying, "I propose we make machine learning intentionally human-centered and intervene for fairness." Is this document accurate?

Thank you, sir. I don't know about this document, so I don't know.

Okay. I'm going to ask you to respond to the committee in writing afterwards as to whether this PowerPoint and the other documents included in the Veritas report are accurate. I recognize that your lawyers may want to write an explanation — you're welcome to write all the explanation you want — but I also want a simple, clear answer: is this an accurate document that was generated by Google? Do you agree with the sentiment expressed in this document?

No, sir, I do not.

Let me read you another. This report indicates that Google, according to this whistleblower, deliberately shifts its recommendations so that if someone is searching for conservative commentators, instead of recommending other conservative commentators, it recommends organizations like CNN or MSNBC or left-leaning political outlets. Is that occurring?

Thank you, sir. I can't comment on search algorithms or recommendations, given my purview as the digital well-being lead, but I can take that back to my team.

Is it part of digital well-being for search recommendations to reflect where the user wants to go, rather than deliberately shifting where they want to go?

Thank you, sir. As user experience professionals, we focus on delivering on user goals, so we try to get out of the way and get them on to the task at hand.

A final question. One of the documents that was leaked explains what Google is doing, and it has a series of steps: training data are collected and classified; algorithms are programmed; media are filtered, ranked, aggregated, and generated; and it ends with "people (like us) are programmed." Does Google view its job as programming people with search results?

Thank you, Senator. I can't speak for the entire company, but I can tell you that we make sure we put our users first in our design.
Well, I think these documents raise very serious questions about political bias at the company.

Thank you, Senator Cruz. Senator Schatz — anything to wrap up with?

Just a quick statement and then a question. I don't want the working of the refs to be left unresponded to, and I won't go into great detail, except to say that there are members of Congress who use the working of the refs to terrify Google and Facebook and Twitter executives so that they don't take action in taking down extreme content, false content, polarizing content, contrary to their own rules of engagement. And I don't want the fact that the Democratic side of the aisle is trying to engage in good faith on this public policy matter, and not work the refs, to send the message to the leadership of these companies that they have to respond to this bad-faith accusation every time we have any conversation about tech policy. My final question for you — and this will be the last time I leap to your defense — Ms. Stanphill: did you say privacy and data are not core to digital well-being?

Thank you, sir. I might have misstated how that's phrased. What I meant to say is that there is a team that focuses day in, day out on privacy, security, and control as it relates to user data; it's a bit outside of my area.

So you're speaking sort of bureaucratically — and I don't mean that as a pejorative — about the way the company is organized. I'm asking: aren't privacy and data core to digital well-being?

I see — sorry, I didn't understand that point, Senator. In retrospect, what I believe is that it is inherent in our digital well-being principles that we focus on the user, and that requires that we focus on privacy, security, and control of their data.

Thank you.

Thank you, Senator Schatz — and to be fair, I think both sides work the refs. But let me ask a follow-on question. I appreciate Senator Blackburn's earlier line of questioning, which may highlight some of the limits on transparency. As we started with our opening statements today, we're trying to look at ways that, in this new world, we can provide a level of transparency — you said it's going to be very difficult in terms of explainability of AI — to help users understand a little better, and to provide them the information they need to make educated decisions about how they interact with platform services. So the question is: might it make sense to let users effectively flip a switch, to see the difference between a filtered, algorithm-based presentation and an unfiltered presentation?

There are already, for example, search services that aggregate user searches and feed them en masse to search engines like Bing, so that you're effectively seeing the results of a generic search, independent of specific information about you. That works okay for some things; there are things for which it doesn't work well. But this idea that you flip a switch is probably not going to have great results, because there will unfortunately be great motivation, in the case where the switch is flipped to withhold user information, to give bad results; I'm not sure how you would motivate giving good results in that case. Also, when you think about that switch, you can think about a whole array of other kinds of switches, and pretty soon it gets pretty confusing for users to decide which switches to flip for what: do they give location information but not this information, that information but not the other?
My own feeling is that the most promising direction is to let some third party be inserted, one that will develop a brand — there might be twenty of these third parties; it might be like newspapers, where people can pick whether they want news from this place, that place, or another place — and to have more of a market situation, where you rely on the trust you have in that third party to determine what you see, rather than saying the user will have precise, detailed control. Much as I would like to see more users engaged in computational thinking and understanding what's happening inside computational systems, I don't think this is a case where that's going to work in practice.

I think the issue with the flip-the-switch hypothetical is that users need to be aware of the trade-offs, and currently so many users are used to the conveniences of existing platforms. There is a privacy-preserving search platform called DuckDuckGo, which doesn't take your information and still gives you search results; but if you're used to seeing the most immediately relevant result at the top, DuckDuckGo, even though it's privacy-preserving, may not be the choice all users would make, because they're not hyper-aware of the trade-offs of giving that information to the provider. So while I understand the reason you're giving that metaphor, it's important for users to understand both the practices of a platform and the trade-offs: if they want a more privacy-preserving service, what are they losing or gaining from that?

The issue is also that users, as has already been mentioned, will quote-unquote "prefer" the summarized feed that's algorithmically filtered down for them, because it saves them time and energy. Even Jack Dorsey at Twitter has said that when you show people the reverse-chronological feed versus the algorithmic one, the algorithmic one just saves people time and is more relevant, so even if there's a switch, most people will quote-unquote prefer the algorithmic feed. I think they have to be aware of the trade-offs, and we have to have a notion of what "fair" really means there. What I'm most concerned about is that this is still fairness with respect to an increasingly fragmented truth, which debases the information environment that a democracy depends on — a shared truth, a shared narrative. But I'd invite others to comment on that issue.

I think the challenge is that when you want to have a single shared truth, the question is who gets to decide what that truth is. Is that decided within a single company, implemented using AI algorithms, or is it decided in some more market-oriented way, by a collection of companies? I think it makes more sense, in the American way of doing things, to imagine that it's decided by a whole selection of companies, rather than being burnt into some platform that has become universal through network effects.
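For what the chairman's "flip a switch" might mean in practice, here is a minimal sketch: one feed service with a user-controlled flag choosing between an engagement-ranked view and a plain reverse-chronological view. The structure and data are hypothetical, and as the witnesses note, offering the switch says nothing about whether users would actually flip it:

    # Sketch of the "flip a switch" idea: the same feed, served either ranked
    # by predicted engagement or in plain reverse-chronological order.
    # Hypothetical structure; invented example data.
    from typing import Dict, List

    def get_feed(posts: List[Dict], algorithmic: bool = True) -> List[Dict]:
        if algorithmic:
            # filtered view: engagement-optimized, shaped by user-specific data
            return sorted(posts, key=lambda p: p["predicted_engagement"],
                          reverse=True)
        # unfiltered view: newest first, no personalization
        return sorted(posts, key=lambda p: p["posted_at"], reverse=True)

    posts = [
        {"text": "quiet local news", "posted_at": 3, "predicted_engagement": 0.2},
        {"text": "viral outrage",    "posted_at": 1, "predicted_engagement": 0.9},
    ]
    print([p["text"] for p in get_feed(posts, algorithmic=True)])
    print([p["text"] for p in get_feed(posts, algorithmic=False)])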
All right. Well, thank you all very much. This is a very complicated subject, but one that I think your testimony and responses have helped shed some light on, and it will certainly shape our thinking in terms of how we proceed; there is definitely a lot of food for thought here. So thank you very much for your time and for your input today. We'll leave the hearing record open for a couple of weeks, and we'll ask senators who have questions for the record to submit them, and we would ask all of you, if you can, to get your responses back as quickly as possible so that we can include them in the final hearing record. I think, with that, we are adjourned.

10 Comments

  1. Hi Bye said:

    https://www.nationalimmigrationproject.org/PDFs/community/2018_23Oct_whos-behind-ice.pdf

    June 27, 2019
  2. Tara Chan said:

    These cheapskates dodging questions with their shallow lies. They will never admit the truth. Declass and prosecute these swine

    June 27, 2019
  3. cam shaft said:

You'd think someone in the Senate would log into YT or goog and just show them what they see? The idiot from goog is telling less than the truth! Does AI run goog?

    June 27, 2019
  4. lon from appleton said:

As Julian Assange once was heard to say after being asked a long and puzzling question: "Thanks for that epic word salad." The same thought occurs to me listening to the presenters. However, this should not be missed. We need to know what's behind the curtain.

    June 27, 2019
  5. PerceptionDeception said:

    Down memory lane https://thehill.com/policy/technology/277251-report-highlights-hundreds-of-meetings-between-white-house-and-google

    June 27, 2019
  6. havenization said:

Which of these congressmen/women will be meeting with $GoogleLobbiest$ later for a damage-control spirit cooking, after the Project Veritas expose? I wonder which demon they will summon for the task? Maybe they will drop a bomb on Iran to distract! A lot of Google/Facebook dollars in politicians' pockets.

    June 27, 2019
  7. Racingpappy101 said:

    Google needs to change the term "Their Users" to "Their Sheeples" because they are controlling what they want those sheeples to become

    June 27, 2019
  8. Holden Caulfield said:

    Why was this video doctored during Ted Cruz questioning?

    June 27, 2019
  9. Brian Beaupre said:

Google: the new Russia of 2020 — only Google election interference is designed to help Democrats only

    June 27, 2019
  10. PerceptionDeception said:

    GOOG. HA no tHANKS. Project Veritas Timed it all perfectly.. just for TODAY, now go EXPERIENCE REALITY

    June 27, 2019
