Fostering Responsible Innovation – A Conversation with Google's CEO Sundar Pichai



Dear Mr. Pichai, dear Professor Markl, welcome, and thank you all for coming to today's event, "Fostering Responsible Innovation – A Conversation with Google CEO Sundar Pichai." The event is of great significance for the university, which is taking a leading role in the field of research on artificial intelligence. Berlin in general, and TU Berlin in particular, is a very attractive location for artificial intelligence: there are more than 180,000 students in the Berlin area, thirty-four to thirty-five thousand of them at TU Berlin, and of those, about five thousand study computer science in one way or another at TU Berlin alone. There are several centers here that work on artificial intelligence: the Berlin Big Data Center at TU Berlin, of which Volker Markl is the director; the Berlin Center for Machine Learning, with Klaus-Robert Müller as director; and the Cluster of Excellence "Science of Intelligence," whose director is Oliver Brock from computer science. The scientists involved there jointly explore the fundamental laws and principles underlying different forms of intelligence, be it artificial, individual, or collective. We are pleased to welcome guests from all over the world, above all in order to give students the opportunity to get up-to-date insights into developments outside of the scientific landscape. The students here were able to submit questions in advance, and about 300 of you did so; the topics will be integrated into the conversation between Sundar Pichai and Volker Markl in a few minutes.

Just a few words on Sundar's bio: Sundar received a bachelor's degree from the Indian Institute of Technology, holds a master's degree from Stanford University, and earned an MBA from the Wharton School. As Google's chief executive officer, Sundar is responsible for Google's product development and technology strategy as well as the company's day-to-day operations, which I think must be a challenging task. Sundar joined Google in 2004 and helped lead the development of Google Toolbar and Google Chrome, key consumer products which are now being used by over a billion people. In 2014 he took over product, engineering, and research efforts for all of Google's products and platforms; this includes Search, Maps, communications, Google Play, commerce, and ads, as well as the suite of cloud-based solutions for consumers, businesses, and education known as Google Apps and Cloud Platform. Sundar became Google's CEO in August of 2015. I wish you all a pleasant and exciting day today. Mr. Pichai, the floor is yours.
[Applause]

Good morning. Thank you, President Thomsen and Professor Markl, for inviting me here today; it's an honor to be here. I'll be honest with you: I love my work as CEO, but some of my favorite days are when I'm surrounded by engineers and students, so this is a very good day for me. Like all of you, I went to a technical university; I did so in India. Growing up in India, the culture of knowledge was highly valued and learning was encouraged, so university was a natural second home to me, and I have many, many fond memories — none of which actually included a lecture. So I'll keep my remarks pretty brief.

Google has a long history in Germany. We opened the first German office 18 years ago — that's before Chrome, Android, and YouTube, and actually three years before I joined Google in 2004. That means that from our earliest days, German engineering and creativity have helped shape how we approach our mission: to organize the world's information and to make it universally accessible and useful. Google continues to invest heavily in Europe. We employ over 16,000 people across 39 cities and 25 countries in our EMEA region — Europe, the Middle East, and Africa. In Europe we have invested more than 4.3 billion euros in five data centers since 2007, generating more than 5.4 billion euros in GDP over the last decade. Between 2014 and 2017, our creator, publisher, and app partners in the EMEA region earned more than twenty-two billion dollars using our AdSense, YouTube, and Google Play platforms. After this talk I'll head over to help open our newest office here in Berlin, across from Museum Island. The new space can house about 300 Googlers, and they work across our teams, including policy, sales, cloud, YouTube, and our latest hub for digital training and education. Berlin is also home to several Google AI researchers who are working to make our products even more helpful to everyone. We have three other offices in Germany, including our headquarters in Hamburg, our engineering center in Munich, and our office in Frankfurt; many of our important security and privacy products were developed by German engineers. It's a great time to be an engineer, especially here in Berlin, a capital of culture and media with a fast-growing startup scene. So I look forward to talking more about the opportunities. Thank you.

[Applause]

I would like to personally extend a warm welcome to Sundar Pichai, the CEO of Google, to the Technische Universität Berlin, one of the leading science universities in Germany and Europe. Over the course of this morning I would like to discuss your views on a variety of topics, including AI, technology, and ethics, the educational path to AI, you, Google, research, and of course the competitive landscape.
As President Thomsen already mentioned, I have received more than 300 questions from students, which I will also relate to in the discussion. But first to you: you have been at Google now for roughly 15 years and you have led many different products — I remember Google Gears from some time ago, the Toolbar, and of course, most importantly, Chrome. During that tenure you've been able to witness the rise of this new wave of AI technology and also its use in Google products. So what I was wondering is: what has surprised you most about AI technologies and their applications at Google during your tenure?

First of all, it's great to be here. To your point: at Google we had always done some form of what today we call machine learning, early on in our products — when we launched Gmail, how we went about spam reduction, or how we did ranking in Search. So we had some notion of machine learning. I would say what surprised me was that I didn't quite expect the computational breakthroughs which we started having later, which made the whole technique so much more effective than any of us expected. Today we use it widely across all our products. If you look at Search, we now use machine learning quite a bit to rank. Last year — I don't know if you have tried using Gmail — when you type, there is a feature called Smart Compose, so we can now predict a few things you might potentially type, and all of that is powered by machine learning. So it's been this kind of cross-cutting technology which applies across all our products — the camera technology, what you can do based on software, and so on. It's pretty pervasive across everything we do.

In that respect, given this new wave of AI technology, what do you think we can realistically expect in the next five years? Are there areas where the potential is overrated? And where should we perhaps not use AI?

I would say I feel it both ways: AI is overhyped, and it is also underhyped. We tend to overestimate technology in the near term and underestimate it in the long term. There's no doubt that AI is one of the most profound things we will ever work on as humanity, but we are in very early days. I get frustrated by all the things I want to be able to do, or ask the teams to do, and the limitations we have in our systems — but the progress is fast. It's good that it's early, because we can actually make sure we develop it safely and in a beneficial way. I do think there's a lot of hype, but in some ways I view that as a positive thing: it attracts people to the field. I think it's really important that the current generation embraces it, and it's going to be a big part of the future.

In that respect, one of the big challenges in AI — or in the current world — that I see is the trustworthiness of information, and AI technologies can obviously help with that, through fact-checking and around filter bubbles. I recently attended a play in New York, "The Lifespan of a Fact," and there was also recently the case of a journalist in Germany and the news magazine Der Spiegel who was faking and fabricating stories. So fact-checking seems to play an important role. What role can AI technology play there, and what role does Google see for itself in that context?
It's core to our mission: we want to give people highly accurate and trusted information. Last year — take 2018 — we handled trillions of searches, and even if you get it correct to "ninety-nine point" many nines, that may still amount to a lot of mistakes. One good thing about Search is that everyone sees what everyone else sees, so when we make a mistake, the world notices it. Trying to make sure the information is accurate is, I feel, core to what Google does. It's definitely challenging, and I worry about the future, because technology always has a dual purpose: the same technology with which we can do our job better is what bad actors can use. So we have to worry about fake videos and fake audio — what we call deepfakes — and those are the kinds of things we're beginning to do early research on. Today we do a good job of understanding objective facts; that's the easier part. Where it gets harder is when the world is divided — there are many subjective areas in which you have to decide what is the better thing to portray, and those are definitely more difficult.
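As a rough back-of-the-envelope illustration of the scale described above — the figures below are hypothetical placeholders, not Google's actual numbers — even an extremely high accuracy rate still leaves a large absolute number of imperfect results when applied to trillions of queries:

```python
# Hypothetical illustration: very high accuracy still leaves many
# absolute mistakes at the scale of trillions of queries per year.
queries_per_year = 2_000_000_000_000   # assumed: "trillions of searches"
accuracy = 0.999999                    # assumed: "many nines" of accuracy
mistakes = queries_per_year * (1 - accuracy)
print(f"Imperfect results per year: {mistakes:,.0f}")  # -> 2,000,000
```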
Interesting. In the questions we received from the students — just to read it out to you — there are several topics, and we tried to cluster them: about 50 of them cover the future of AI, there were a lot of questions about AI and ethics, which we will get into, questions about Google products, and several questions about the life of a CEO, which I think are of particular interest to the students here: what is your work like? One question I thought was quite interesting was: how do you manage your daily work, and how do you stay up to date with all the latest developments and the requirements of your role? What's the secret?

I'm not sure I've fully figured it out — it's hard. At Google there are a lot of talented people, so I rely on very good people. When you run a large company, it's very easy — if you don't fight it — for the company to run you, rather than you running the company. You can easily get into a situation where your schedule gets fully booked and you're doing what everyone else in the company wants you to do. So a lot of what I think about is how to resist that and how to carve out time to make sure I do what I think is important. Every year I try to write down the ten things I want to accomplish that year, or on a quarterly basis maybe the three things I want to get done, and if I make progress on one or two of those things, I think that's very good. I try hard to make sure I have time for product and technical reviews, and to set my schedule myself rather than have the system set it for me. It's pretty hard.

And what role do technology and engineering still play for you, compared to the non-engineering parts of the job?

As a global company which impacts billions of users around the world, engaging with the external world is naturally an important part of what we do. Tech has definitely gotten much larger; it's impacting society at scale, and with that comes scrutiny, which I think is good — society should scrutinize it. So that does take up more of my time, but I've tried to carve out time to make sure I spend time with the product teams and with engineers, to actually help build our products.

In that respect, another question was: can you share an example of any of your decisions — in your life, your career, or as a CEO — or any assumptions you made which turned out to be wrong? Something you would prefer to rectify if you got a second chance?

Good question. By the way, I'm happy to take live questions too, if people have them. When you're in the technology world, things are constantly changing — roughly every ten years the world changes: in the '80s we had the PC industry, in the mid-'90s the internet, in the mid-2000s the smartphone came, and we are now in the age of AI. So things change every ten years and you're constantly trying to predict the future. There are things you try to time, and sometimes you get it right and sometimes you get it wrong. An example of where we got it wrong: VR and AR have been an area where we are all trying to decide at what pace to invest and how to predict when it will move forward. I think a lot of us rushed into it, and while we were doing it, it was clear that in the first wave the consumer experience was nowhere close to being ready. We all invested in it, and maybe we didn't ship great products the first time around — that's the kind of thing where you learn that you timed it wrong. An example of timing I think we got right: you were talking about machine learning — we were one of the earliest companies to invest in custom machine learning hardware, what we now call TPUs. I remember getting a request — I don't know, 450 million euros or something, this was many years ago — from people saying we want to build custom chips for machine learning, and you have to decide whether it's the right thing to do. I think we invested early and got that one right. But I'm constantly trying to predict the next wave of technology and get it right, and it's like downhill skiing: you make a mistake, and the next thing you know, you've hurt yourself.

On a related note: as you said, you have been designing TPUs, new processors. An important aspect of Google seems to be that Google owns the entire stack, from infrastructure, hardware, and software up to the applications. And even though on the one hand you make a lot of your revenue in advertising, at the core you are an IT company. So how crucial is this holistic approach — software-hardware co-design, if you wish — for the company and for your overall strategy and success?

I think it plays an important part. If you go back all the way to when Google started, we built our own Linux servers, we built our own data centers. That kind of vertical integration allows you to iterate much faster and to have true insights. I think the reason we could develop custom machine learning hardware is that we were writing the machine learning algorithms on top of it, so we had that iterative loop. But we do believe in opening it up to everyone, and while we work across the stack, as a company we really care about democratizing access to AI.
It's super important that AI works for everyone — be it TensorFlow, or the fact that we even provide our TPUs for research, or that we expose it through our cloud offering. Everything we do inside, we want to make available as APIs and platforms to the external world.

You mentioned TensorFlow as an example. What role — for this democratization, but also, if I shift to another topic, for research and collaborations, maybe with universities — do open-source technology and open innovation play for Google?

I spend a lot of my time working on open-source technologies — Chrome, Android, everything is open — and Google wouldn't be here without Linux and what Linux enabled early on. I think it's up to us to make sure the next generation has that same foundation; the next set of innovation will come from everywhere. One of the great things about what's happened over the last ten years is that you see innovation happening not just in the U.S. but around the world, including in places like Berlin, and making sure we make that available is key to how we think about it. Any time you build things as a platform, you leverage the innovation that happens on top of it — think about what GPS and smartphones enabled: giving everyone a smartphone with GPS completely changed the game in terms of what people can do. So we've always taken that to heart, and we're constantly thinking about which technologies we put out as open source.

In this context: you are now opening a research office in Berlin, and you have a history of collaborating with universities. In this increasingly complex research space — where you have a scale of budget and computing power at Google that most universities don't have — you are aiming at partnerships with universities. So what do you expect from universities, and from a university like TU Berlin, with respect to collaboration and research on the one hand, but also with respect to students?

For me, when I look at the pace at which we are progressing, and at how almost all fields in the future will be impacted by AI, I think it's really important — regardless of whether you're in computer science or not — to make sure you have some exposure to what machine learning and AI are. So at a high level I would encourage all of you to understand it; it could be a course on AI and ethics, it doesn't matter — just gain exposure to the technology, because how we shape the way AI rolls out is super important. In terms of research, we have collaborated with many universities throughout Google's history; we have deep partnerships in Germany, we have several major efforts on AI here, and we collaborate with universities as well. We do want students to use compute resources, apply them to problems, give us new insights, and share the research with the world — that's what we are hoping to achieve.
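Since TensorFlow comes up above as an example of the open tooling Google makes available, here is a minimal sketch of the kind of model a student could train with it. The tiny synthetic dataset and the one-layer architecture are arbitrary choices for illustration, not anything Google-specific:

```python
import numpy as np
import tensorflow as tf  # the open-source ML framework mentioned above

# Tiny synthetic dataset (illustrative only): noisy samples of y = 2x + 1.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(256, 1)).astype("float32")
y = (2.0 * x + 1.0 + rng.normal(scale=0.05, size=(256, 1))).astype("float32")

# A minimal Keras model: a single dense unit fits the linear relationship.
model = tf.keras.Sequential([tf.keras.Input(shape=(1,)), tf.keras.layers.Dense(1)])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1), loss="mse")
model.fit(x, y, epochs=100, verbose=0)

print(model.get_weights())  # weight approaches [[2.0]], bias approaches [1.0]
```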
You already touched on the topic of ethics with respect to courses. Many companies are currently establishing ethical guidelines for the development of AI-based products, and there are AI councils everywhere — in Germany we have one at the university, and one in the department. So how can companies make use of AI in a responsible way, and how can those ethical guidelines be taken into account? What does it take, from your perspective?

It's an emerging area. We have published a pretty comprehensive set of AI principles; we wanted to start the conversation. Our principles are around how you develop AI to be beneficial to society, how you do rigorous testing for safety, how you incorporate privacy design principles, how you avoid bias in the systems, and how you keep it accountable to humans at the end of the day. These are the kinds of principles we're beginning to think about, and we wanted to publish them — it's our contract for how we will develop AI — and we're encouraging everyone to articulate theirs. It's early days. Normally, as humanity, we develop technology and then try to harness it, and we always see bad effects from it; but for something as important as AI, the negative effects could be profound. So talking about it early on — maybe even slowing down development to make sure you're developing it in a safe way — is, I think, super important. This is an area where you need global frameworks, maybe no different from climate change, where, like with the Paris Agreement, I think we need more global frameworks for the ethics and safety of AI. It needs governments, NGOs, universities, and businesses coming together, and I think that's really important.

There is of course exactly this challenge. For the German Academy of Sciences I was recently on a discussion panel, giving a presentation where we had to discuss freedom and responsibility — two aspects of science and research. So in that context the question is: should AI be regulated? On the one hand there should be freedom; on the other hand there is responsibility. You already alluded to NGOs, governments, and businesses — what role should they play in setting a framework and setting policies in this fast-moving space?

I definitely think governments have a strong role to play, and I do think regulation will be an important part of it. But at this stage I would approach it with a lot of collaboration and a lot of learning — you're laying the foundation for it. You want smart regulation, and you don't want to slow down innovation. One of the challenges for a technology like AI is that I feel the good people need to work on it as fast as the people who may use it for bad purposes, so you want to have a balance there. It is going to progress globally, and I think we should make sure it actually happens everywhere, not in an unequal way — that there aren't haves and have-nots when it comes to the development of AI. All of those are important principles. The trick with regulating something at an early stage is that you may get it wrong, or you may slow it down, so I would probably encourage open dialogue, a lot of conversations, building the expertise to understand AI better even within government, and taking it step by step.

Now, switching to another topic area, because we have lots of students here and we have received lots of questions about education and the career path to AI.
The first thing that lots of students were interested in — here we are at the university — is what you foresee would need to change in education and training. What should perhaps be done differently? And maybe you could also comment, from the perspective of Google and your perspective as a CEO, on what the new desired skill sets for future workers should be.

One of the biggest things I think we need to change about education is what we think of as continuous reskilling and continuous retraining. Technology has changed the nature of work; gone are the days when people could learn something and just assume that alone would carry them for 45 years. So continuous reskilling and retraining becomes important, especially as you work across segments of the population. For the people here, as I said earlier, I would expose yourself to these topics. If you're a computer scientist, I would be excited at this moment — I would embrace it. Hopefully AI will take away some of the burden of coding and programming over time and help you abstract things away; you will be more productive and you'll be able to tackle more, and deeper, problems. So I think it's an opportunity if you're a computer scientist.

That's exactly one of the next questions: how would AI potentially change the future of work? You already alluded to it — we may have an increased amount of automation, and some jobs may go away. What will the impact be? As you said, computer scientists might be able to focus more on abstractions, and AI may help us to program automatically. Will we reach a point — on a discussion panel some time ago I said that in the future we may even replace management with AI — so what future do you foresee there, and what does it mean for students today: what should they study or focus on in order not to end up with irrelevant skills, and to have the right ones?

It's a question I get asked a lot, but here is the general direction in which I think it will happen. For example, there are radiologists who ask me about this. We have published papers on how AI can better diagnose certain diseases: you can look at pathology samples and use machine learning to predict whether tissue is cancerous or not. In general, the way I feel is that it's not that it will replace the professionals; it will help radiologists handle some of the burdensome work using machines, and they will maybe have more time to spend with patients and can work on more difficult cases, and so on. So a lot of the change for skilled workers will make them more productive — if you're a computer scientist, you'll be more productive and able to work on deeper problems. That's how I think most of the transition will happen. I do think automation is a concern. As a society we have always had to deal with automation, which causes displacement but also ends up creating new jobs, and the trends aren't always linear: when ATMs came, people predicted they would displace all bank tellers, but banks actually opened more branches for a long time, and so they hired more tellers. Newer jobs which don't exist today will come up. So it's a more complicated thing, but it's definitely disruptive, which is why all of us need to work together on it. For the people in this room, though, I would be excited about the change. Across my travels, one of the important things I've realized is that, given how much technology will drive the future economy, it's important to be positive about it; there are countries which embrace technology, and I think it's really important to do that — that's what helps you navigate it. Technology is going to evolve whether we like it or not, so being positive about it, understanding that it can make the world better, and working to make sure it does make the world better is our opportunity.
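Pichai's pathology example above — using machine learning to predict whether tissue is cancerous — can be illustrated with a small, publicly available dataset. The sketch below uses scikit-learn's bundled Wisconsin breast-cancer data and a plain logistic regression, purely as a toy stand-in for the far more sophisticated models he refers to:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-in for the pathology example: features describing cell nuclei,
# labels marking malignant vs. benign samples (Wisconsin breast-cancer data).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5000)  # simple linear classifier
clf.fit(X_train, y_train)

print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")  # typically ~0.95
```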
A couple of years ago I had the opportunity to discuss here with Eric Schmidt as well, and we also talked about skills. If I paraphrase it in a provocative way, Eric Schmidt essentially said about skills: don't really study marketing. What you need to study — we came up with this concept of the T-shaped student: you have to have certain skills in depth, the stem of the T, like computer science and technology, math, AI, machine learning, and then of course you need some breadth. But during your university studies it is very important to focus on the depth, so that you have a solid depth, while the breadth will come over time — you learn some of it on the job later on, when you have already been working, or you go for an MBA, like you did. So my question for you in that respect is: what would you advise students with respect to which study programs they should pursue? If we're talking about high school graduates, which direction should they study? And for bachelor's or master's students, what are good areas and topics to focus on when choosing a career path and an academic program?

When I was in university I literally had no clue what I wanted to do, so I would encourage most of you to make sure that, whatever it is, you're following something you're genuinely interested in. But I'm a big fan of cross-disciplinary thinking, so the more you can expose yourself to thinking in other disciplines, the better: if you're in engineering, I would do courses in humanities and psychology, and vice versa. With the caveat that you want to be deep in something, I think reaching out, expanding your interests, and being cross-disciplinary is something which will always help you over time.

Very good. Then some questions on the competitive landscape. Managing such a large corporation is obviously a challenging undertaking, in particular in the hyper-competitive climate we're in. So where do you see the biggest challenges — are they from China, are they from new startups, are they from government regulation? Where do you see the biggest challenges for Google?
If you look at any large company, I genuinely think the challenge is more from within: how do you not get used to being comfortable doing what you've always done well, and how do you constantly push yourself to innovate? More than anything else, that is the hardest thing a company deals with, and it's what I worry about most. We've always viewed ourselves as a company which works on technology in a deep way and applies it to problems across the board, and we see more opportunities than ever before. So I've never felt we are constrained by what the competition does; it's more about whether we are working on the most important things — things which will really impact people at scale — and whether we are doing them well. There are also inherent questions when you're a big company: you are obviously part of the fabric of society, and out of that comes a sense of responsibility. So compared to being a younger company, we think a lot about how to make sure that what we are doing is responsible, helpful to society, and helpful to people, and that increasingly influences how we approach things.

Since you said it might be okay to have the audience ask some questions, I would like to give the audience the opportunity now to ask a few questions that we have not yet covered. We obviously tried to include many of the 300 questions, but in the interest of time that is not possible. So if there are any, we would be happy to hear them now. Yes, please — and could you speak a bit louder, please? Thank you.

You were speaking a few times about Google making its technology publicly available for everyone, and yet yesterday Google was fined something like 40 million in France, I think, for not keeping to the European data protection guidelines. What is your comment on that?

First of all, I think the GDPR was an important piece of regulation. Privacy is an important human right, and I'm glad Europe led the way in articulating a comprehensive set of regulation, which we are very supportive of, and we worked very hard over 18 months to comply. You have to understand that whenever a new, big, comprehensive regulation passes, for a company which has been operating for 20 years across many products, you are trying hard to comply with everything. So we are trying to understand exactly where in the ruling we need to make changes, but we're committed to doing so. When you see something that comprehensive, complying is simply something we are very committed to, and we have more work left to do there. But at a higher level, the principle behind the GDPR is very important; we are encouraging other countries to follow, and to the extent there is a more comprehensive global privacy regulation, I think it serves us all well.

You've already talked a bit about the dangers of machine learning, for example people faking videos and things like that. But what is your view on the more long-term risks of artificial intelligence? Some thinkers, like Elon Musk, worry that maybe within this century AI might be developed that could be superior to humans, with all the profound implications that would have. I would be very interested in your view on that, and also in how you see Google's role in that development.

It's a good question.
I share, I would say, the concerns there, and I think it's important to be serious about them. AI is definitely one of the most profound things we're going to work on, and it's important to understand that it is going to progress — it's beyond one company, beyond one country; there is going to be forward progress of the technology. Which is why I think doing it in a globally cooperative way where possible, being public about the work you do, and committing to safety principles and ethical principles will all end up mattering. The good news I can tell you, since we actually work on AI at Google, is that it's really early: our AI systems — as Professor Markl can tell you — can't do many things a third grader can do. So we are definitely in the early stages, and we need to work hard to make sure AI progresses in a way that helps humanity; that's a deep principle we need to strive for. So I would take the concern seriously, but I would take it constructively — I wouldn't get paralyzed by it. The last thing you can do is stop working on it; that's probably the worst way to deal with it. Take the concern seriously, don't get paralyzed, and make thoughtful progress. We have navigated many difficult technologies before, and I think this is no different — it's probably more challenging than most things — but I would worry about climate change equally today; in fact, maybe I would worry about it a bit more than the scenario you're talking about.

I think there was a question there, and then one over here.

First of all, good morning, Mr. Pichai.
I recently saw a video of you testifying before the US Senate, and — I don't want to sound arrogant — there were some really stupid questions. What I'd like to know is: as future engineers and, sort of, entrepreneurs, what would our role be in educating the politicians? I'd like to think it was the fear of the unknown that made them ask those questions — that's why I'm asking. What was your experience like?

A few things. Dealing with policymakers globally: in a representative democracy, politicians are representing their constituents, so in some ways it's important to understand that they are reflecting the real concerns of people as they experience technology — their actual concerns and their fears about it. I'm actually very optimistic. Throughout my dealings I find, more than ever before, that there's a whole new generation of policymakers who are very savvy. For example, in Europe I have met people, be it in the European Parliament, who are very understanding of important issues like privacy or AI. So there's a new generation which really knows what it is talking about, and I think that's increasing. From our standpoint, we view our role as engaging very constructively and educating where we can. I think universities will end up playing a big role — hopefully you're training future policymakers and politicians here as well. It's a change which will happen gradually, but I think we'll make progress.

Good morning. I have a question about Google's culture. Google is pushing a very culturally inclusive kind of culture, but a person like James Damore offered a different perspective and got fired on the spot for it. So I'm wondering whether this is a good approach for the future. Google, as a big company, is able to push societal aspects because it is such an important and big tech giant, yet it is not democratically organized; so we have a big company like Google which can push an agenda without democratic influence. What is your view on people like James Damore who offer a different perspective and then get fired?

As a company, there are two things. With the set of products we build, we stand for freedom of expression; we give people various platforms to express themselves, and that's part of our core mission. Internally, within the company, we take that to heart more than any other company I've known — somebody told me there are a million groups that have been created at Google over the past years, so people can express themselves in many, many different ways. But it's also important to understand that, at some point, you're running a company, you're running a business, and you want a culture in which everyone is respected and people feel they can participate fully in the workplace. Part of that is having to make decisions about where you draw the line. No different from any organization, you have codes of conduct, and when people violate those codes of conduct, we make decisions. But we are very careful as a company to foster a culture of open debate and transparency, and we strive hard to do that.
We don't penalize people for political beliefs or religious beliefs or anything like that, but we do have a code of conduct, and I think it's important that as a company we uphold it as well. So it's a balance.

There were some questions here. This is maybe a bit of a personal question: I'm sure you have a lot of responsibilities as a CEO, and you're required to perform and to keep up a certain level of productivity. So is there a certain morning routine that you enjoy which helps you keep up with that?

I'm a late-night person, so mornings are definitely not my strength — coffee helps. I still, even today, read a physical newspaper. If you ask about routine, I always try to start my day reading a newspaper; it helps me understand what's going on in a more tangible way, in a way where I can feel the real world and what's happening. So that's part of how I wake up, with my cup of coffee and the newspaper, and I still do that before I get started. Beyond that, nothing very insightful, I think.

I think we have time for two or three more questions — there's somebody here in the front, and then maybe somebody here and somebody in this area.

Thank you very much. We already talked a little bit about the political landscape, and worldwide we can see that there's a move to the right, a springing up of nationalist movements — in Europe we have that, and in America we have President Trump. I'm wondering: do you think there is a danger to the technology world from these kinds of nationalist and right-wing movements all over the world?

Speaking for the company — and not about anything specifically related to this — I would just say I've always felt, more than ever before, that giving people accurate, relevant, and trusted information is more important than ever. I take democratic processes seriously, and I think we need to respect and understand why things are happening — and respect that. I've always felt that way; I grew up in a democracy, and when you see representative outcomes, I think it's important to respect them. From Google's side, we think of ourselves as a global company, but we want to be local in the places we operate in — we feel that responsibility. So, for example, in our large markets, over time we want to hire more people locally and invest more in those countries; I think that's the right thing to do. From a product standpoint, we want to really double down on what we set out to do, which is to provide information, and to work harder than ever to make sure we are providing accurate information to our users. That's how I think about it at a high level.

Then I think we had this lady here in the front.

Hi — I've had a question for a while; it's actually a personal question, I suppose. You travel a lot and you meet a lot of people every day, of different nationalities and different backgrounds, and I'm wondering: what has surprised you about someone else? What kind of traits make you say "wow"? Because I suppose there are a lot of smart people around you — you are among smart people — but what made you say, gosh, I want to meet that person?
Well, two things make me go "wow." One is that I can never multitask — I can only do one thing at a time — and especially in the younger generation, even my daughter: I see her and I don't know how she does it. She has loud music on, she's doing her homework, and I could be talking on the phone, but she is somehow listening to what I am saying as well. I have no idea how people do that. Even at work there are people who can be talking while sending emails, and I always find that a superpower; I can only do one thing at a time. So maybe that's the thing that impresses me the most. The second is that I don't know how to nap. Whenever I run into people who know how to nap, I'm always blown away. I came from the U.S. to Germany and I'm jet-lagged, but I don't know how to nap in the middle of the day, while some of my co-workers can just kick back — I would love to be able to do that.

You mentioned before that when you were at university you were not quite sure what you wanted to do, and now you are the CEO of Google. Can you briefly talk about your journey from there to here — what worked and what didn't?

I can see where the question comes from. Growing up, I didn't have much access to technology — I never had access to computers for most of my childhood — but every piece of technology I did get access to made a big difference in our lives. We waited about five years to get our first telephone, but when the telephone came to our house it changed my life overnight. I've talked about it before: before that, if I needed to get blood test results for my mom, I would take a two-hour trip to the hospital, and they would tell me, sorry, it's not ready, come back the next day; once the phone came, I could call and find out whether it was ready or not. So with every single piece of technology I got, I saw how it changed our lives, and it left me with a very positive view of how technology can improve people's lives. I always wanted to be a part of that — to work on something which could impact the lives of billions of people — and that was the guiding thing for me. That's what led me to Google, and that's how I ended up doing what I'm doing today.

Maybe one very last question: Google of course has generated a lot of press and made a great impact with AlphaGo. So what's the next thing after AlphaGo?

It's a remarkable achievement by our DeepMind team, which worked on it, and to me it's inspirational to watch a system like that. People who play Go or chess at an advanced level would say the system has an intuition to it, or that it can make moves most humans have never made before, so it's exciting to see that facet of it. My sense is they will tackle more complex tasks, maybe something with more real-world characteristics and limited information. I'm sure the team is constantly looking for newer challenges, and I can't wait to see what they do next.
Excellent. With that, I think we conclude the fireside chat. We would now ask Sundar Pichai to sign our guest book, and Christian Thomsen, I think, will come to the stage as well for that and for a photo. We thank the audience for the questions — but most of all, let us again thank Mr. Pichai for this very informative discussion and for many insights, both professional and personal. Thank you. [Applause]

2 Comments

  1. falafel dürüm said:

    He should be ashamed to be the CEO of a privacy-violating company that tracks you everywhere, even in physical stores, and censors the internet as well as manipulates you by manipulating search terms and advertising and sells userdata to the NSA. The TU should better invite people like Linus Torvalds and not rich people from Google and Microsoft; but since using Microsoft's spyware at Universities, especially the "Technical" (*lol*) university, is normal, keep doing it!

    June 29, 2019
  2. Raja Mukherjee said:

    Seamlessly the session got repeated, Is this AI at work? Sundar, as usual, is magnificent in his humility

    June 29, 2019
