Computing the Future: Setting New Directions (Part 1)



Good afternoon, and welcome to our next session of the day, Computing the Future: Setting New Directions. My name is Cindy Barnhart, and I am MIT's Chancellor and the Ford Foundation Professor of Engineering. Being a chancellor can entail different things at different schools; here at MIT it's about all things students. So it is through this lens, that of students, that I view the future and the directions we take at MIT, and why I and so many others at MIT are so excited about the Stephen A. Schwarzman College of Computing: a catalyst for inventing the future and evolving MIT's brand of research and education. Our students will benefit in many ways. For one, there will be increased opportunities to create and engage in multidisciplinary education and research, to learn through new integrated curricula, and to pursue flexible pathways and degree programs. No doubt our students will reimagine and reinvent their experiences at MIT and beyond in ways we can't fully fathom right now, and that opportunity for invention is just one reason we see game-changing possibilities for the new college.

Speaking of game-changing, we are grateful and thrilled to be hosting and hearing from David Siegel, Diane Greene, Drew Houston, and John E. Kelly in this session. They are each luminaries in their fields and also, thankfully, steadfast supporters of MIT's mission. They will offer us their own unique experiences, perspectives, and insights on our session's theme, while serving as powerful examples to our students of the kind of positive impact they too can have on the world one day. So without further delay, it is my pleasure to welcome to the stage David Siegel, co-chairman of Two Sigma Investments and founding advisor of MIT's Quest for Intelligence.

Thank you for the opportunity to speak here today as we think together about the future of computing. Though I'm extremely optimistic that the best is yet to come, my talk today will focus on being realistic. I apologize, but unlike what has become common in the tech industry today, there will be no hype in what I'm about to say. Well, maybe a little bit of hype. As many of you know, I've spent pretty much my entire life working with algorithms, and I've focused on the field of artificial intelligence. The research and progress in these areas in my lifetime has been astounding. We've moved, very importantly, from a world where computers assist people to one where computers are routinely making decisions for people, and this is a big difference, a massive shift. In my mind, of course, this will make the world a better place. But, and I want to be realistic, we are not there yet; there is a fear that we could move in a direction that is far from an algorithmic utopia. We need to build a world that benefits from and understands how these technologies work and, most importantly, where our new technologies fall short. I see a lot of misinformation out there, and I worry that it's distracting people from the real technological problems that we face and, importantly, the research that we must do. I see parallels between the world today and that of the remarkably entertaining movie Dr. Strangelove.
If you've never seen it, you should go watch it later. And no, I'm not Peter Sellers; I'm glad you got the joke. It parodies the bizarre thinking of the Cold War era. The plot centers on the dangers of an unstoppable algorithmic doomsday machine. The threat of nuclear war back then was real, but paranoia about the Red Menace was borderline hysterical, and some of the hype around our increasingly algorithmic, AI-driven world and the dangers that might exist reminds me of this paranoia. I've learned to love algorithms over the years, and my experiences with them have been very good, but not always perfect. Important basic research is still needed to take our algorithmic world to the next level safely and sanely. As algorithms are increasingly called upon to make decisions, we must all pause and reflect on their current strengths and weaknesses. In this talk I'd like to highlight four fundamental research areas that I feel need substantial progress over the coming years to avoid a Strangelove-like future. They are: managing complexity and reliability; security; data integrity and bias; and algorithmic explainability. I believe the way professors and students at this new college approach these research areas, and others, will have huge implications for the world we live in tomorrow.

Years ago, algorithms were pretty simple. It was easy to look at ten lines of Fortran code, as I did as a kid, and understand what they were doing. Today we rely on algorithms that run on complicated computer systems that could have 10 million or 50 million or more lines of code, and managing the complexity of these systems has become a real engineering challenge. It's one thing if they are performing something simple and harmless, but it's a different situation when they perform more crucial tasks, like driving your car. Managing their ever-increasing complexity while maintaining reliability requires fundamental advances in software engineering. Microsoft is full of top engineers, and it's a terrific company; still, they famously have trouble rolling out updates to Windows. And let's face it, Windows is not that complicated compared to some of the big systems we're developing today. We are building software that exceeds the complexity of anything humans have ever built before. That's a remarkable thought. We have to continue to learn how to manage this increasing complexity better.

Related to this challenge is security. As our dependence on decision-making algorithms grows, we need to make sure our systems are more secure than ever. People didn't pay much attention to security when the internet was first invented, and we all know that we're continuing to pay the price for that today, despite some important advances. The problem will become more acute as decision-making algorithms proliferate. In any society you have bad actors, and we have to assume that people will try even harder to exploit security flaws in increasingly critical software. So let's learn from our experience with the internet and make security a top priority.

Data integrity issues are the third area in which I think basic research has to be prioritized. This is in the news quite a bit, since data is such an integral part of machine learning algorithms, but substantial unsolved problems in data privacy, data ownership, and data bias remain. Without solutions, the algorithms that drive our world are at high risk of becoming data-compromised. Concerns about data use can also hinder the development of algorithms and their benefits. For example, if there really were
a way to guarantee the privacy and protection of any kind of data, let's take healthcare data, people would be more willing to contribute their valuable information to medical research projects. Consider this: a few years ago, the Centers for Medicare and Medicaid Services began to withhold some patient records from research datasets because they contained information on substance misuse. The Centers cited privacy concerns and took the data out. But substance misuse is associated with more than 60,000 deaths a year in America, so that omission was a big loss for researchers and, by extension, patients. We are missing the opportunity to solve pressing problems because of the lack of accessible data.

Moving to my next concern, which is related: questions of data bias, which are becoming more and more important. Frankly, most datasets have some kind of bias, and any machine learning algorithm built from such data, you guessed it, is probably biased. But I would point out that there is some confusion between data bias and ethics. Ethics and AI is a very hot topic, and when people talk about ethics in the context of AI, sometimes I think what they really mean is data bias. I won't digress too deeply into the topic of ethics in AI; that would take up the rest of the afternoon, and in fact part of the afternoon is dedicated to a discussion of this area. I'd just like to remind people that, in my view, and maybe this is a little controversial, AI is really just a tool. Algorithms are tools, and tools can be used for good things or bad things. Perhaps it gets a bit trickier when your tool is making important decisions automatically for you, and then it may be reasonable to ask: is this an ethical application? But I think the argument ultimately applies more to deciding whether the tool is being used appropriately than whether the tool itself is appropriate. There's another problem, too, which is that debating ethics is a societal discussion as old as humanity itself. Last I checked, there is no right answer to most ethical questions, and to me that means there is no universal way to create ethical AI. Maybe this is controversial, I'm not sure, but discussions about ethics should really, in my view, primarily be about how the algorithms are used, not about the algorithms themselves.

But let's get back to data bias. Data is the material that makes up AI tools, and biased data makes bad tools. It's like having a hammer that's unevenly balanced and likely to hurt someone or break something: you don't want a biased hammer. So research is needed to ensure that the datasets with which machine learning and other AI research is being done do not have biases that undermine their effective use.

Finally, I'd like to talk about explainability, or interpretability, which remains a central problem in machine learning: why did a model do that? Machine learning algorithms can't provide an explanation for their outputs, or at least not an explanation in the way we would normally mean it, an explanation that humans can understand. The parameters an algorithm relies on are numerous and complex, and there's usually no way to intuitively explain their thought process. This is obviously a problem not just for researchers, because it makes it hard to improve your algorithms, but also a big issue for society as we increasingly rely on these algorithms. I believe we have to be able to understand the world we're interacting with in order to be comfortable living in it. People will only tolerate AI's inability to explain its decisions up to a point. We don't
complain or ask too many questions when decisions are in our favor, like when a loan application is approved; you're not going to debate the loan officer about why it was approved. Imagine, though, if someone is denied a heart transplant and the AI that made that decision is asked why. It can't just say: well, I solved millions of equations, and they concluded no heart for you, sorry. When an opaque algorithmic decision works against us, we want to know why, and if the answers aren't clear because of the interpretability problem, we won't stand for it. I think that's totally fair. Related to this is the issue that you can't yet reason with an algorithm. People reason with each other all the time; that is, in fact, related to a unique ability we have, one-shot learning, the ability to learn from a single example. When presented with a single compelling example, we can change our minds. This is a critical part of human reasoning, but not yet something we have perfected in any kind of AI or machine learning scenario.

Okay, let's take stock of my talk. I've shared some important research problems that need to be addressed to manage our transition to an increasingly algorithmic, decision-making world. In an earlier era of computing, we typically were writing software to solve problems that clearly had a right answer. Years ago, one of the first programs I wrote used the Newton-Raphson method to compute the square root of a number. How many people here have done that exercise? A lot. It was a fun project, and like most coding until more recently, you could tell if your program worked, because there was a right answer that everyone agreed upon. Today we are increasingly asking computers to do things where there isn't necessarily a right answer, or at least not an answer that everyone agrees with. I encourage the research community to focus on the critical points I've mentioned today. Data bias can make algorithms subtly subjective, and ethics vary by individual and culture, complicating matters further. Complexity can introduce critical bugs that are hard to find with traditional testing approaches. The lack of explainability will reduce our confidence that the answers being produced are believable or even correct. One big advantage of a human-driven world is that it has proven to be very robust, and one reason it's robust is that each of us goes about our decision-making differently. Imagine if critical decisions were made increasingly by a single or a small number of implementations of an algorithm; that robustness could be lost. This is particularly challenging when the decisions being made don't have an obviously correct answer. It's probably okay for the world to rely on one provably correct square root subroutine, but maybe not for complex and critical problems. We have to be careful not to turn over decisions to algorithms and AI that are beyond their current abilities. Dr. Strangelove learned that lesson the hard way. Thank you very much.
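The square-root exercise Siegel mentions is easy to reconstruct; here is a minimal Python sketch (an editorial illustration, not his original Fortran program):

```python
def newton_sqrt(a, tol=1e-12):
    """Approximate sqrt(a) by Newton-Raphson iteration on f(x) = x**2 - a."""
    if a < 0:
        raise ValueError("square root of a negative number")
    x = a if a > 1 else 1.0          # any positive starting guess converges
    while abs(x * x - a) > tol * max(a, 1.0):
        x = 0.5 * (x + a / x)        # Newton step: x - f(x)/f'(x)
    return x

print(newton_sqrt(2.0))  # about 1.4142135623730951
```

It is exactly the kind of program he describes: there is one right answer, and everyone agrees on it.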
That was great.

Hi there, I'm Diane Greene, and I am so honored and pleased to be here with you today celebrating the Schwarzman College of Computing. I believe, and I think everybody here does, that compute is going to be key to future research breakthroughs across science and engineering, and the Schwarzman College of Computing positions MIT at the forefront, readying it to contribute and lead. It's core to MIT's success that it provides faculty and students the labs and lab equipment to make their breakthrough discoveries, things like the new nano building or scanning electron microscopes. But now we have a new tool that's completely general-purpose, one that pretty much every lab is going to use, which is compute and advanced AI. They also need data, and if you will, data is kind of like what the great libraries full of books used to be. So we need these three things for our researchers: access to data, self-learning AI, and a lot of compute to do their groundbreaking research. Today the public clouds have the leading and largest compute capabilities, and industry also has access to most of the data. Compute is super expensive, although we keep driving the cost down, and that issue needs addressing. I believe we can do this by evolving our models for how industry, academia, and government collaborate, and we need to solve the funding issues.

Computers have been important to research since the first ENIAC, but we're now at a turning point in their criticality, because instead of being constrained by a human's ability to code up an algorithm, we now do algorithm development by combining data with the algorithm and having it learn, finding patterns faster and across more data than any human is capable of. It can actually surpass what a human can do at finding these patterns. Of course, with this AI, and particularly with compute-intensive technologies such as neural-net deep learning, the rule is: the more compute and the more data, the better the accuracy of the results. Just to give you an illustration of the power of AI in science, an experiment was run by the late physicist Shoucheng Zhang and his research group in which they used machine learning to rediscover the periodic table, and they were able to do it in just a few hours. This, perhaps one of chemistry's greatest achievements, originally took scientists years of trial and error, and I think it shows that AI will undoubtedly be instrumental in future Nobel-prize-worthy discoveries.

This is a pretty different situation from the '60s and '70s, when DARPA funded the ARPANET to connect the universities and researchers created the Internet, which then went into the civilian sector. The large Internet companies invented a disruptive and highly profitable model that has let them transcend the need for government-funded research: they've been able to fund their own compute, data management, and AI, and as a result they have the most advanced compute technologies. So industry is now ahead of the academic institutions in compute capabilities, in access to some of the biggest data sets, and in the ability to assemble large numbers of the world's leading AI researchers. This is problematic, because it's not industry's mission to do basic research or to collaborate with outsiders, although they often do. A friend at Cornell,
PhD student Maithra Raghu, recently ran a few queries for me on the origin of papers in the top two AI conferences over the last three years, ranking them by number of citations. They showed that up to 60% of the top 5% of papers ranked by citation came out of industry, whereas of all the papers overall, twenty-five percent came from industry. Alphabet's AI research lab DeepMind recently won a protein-folding contest, and a computational biologist said DeepMind's success just kind of came out of the blue. He said their work was tremendously impressive, but he noted that they did have the best machine learning tools and the deepest pockets for compute.

Industry's cloud data centers are super big: single buildings that could fit a whole football stadium inside, hospital-clean, with rows and rows of compute racks, and what you see increasingly is huge numbers of specialized AI compute clusters. They need this because the speed and real-estate efficiency of computer chips are no longer doubling every year as Moore's law said, so the speed improvements now come from workload-specialized custom chips and architectures. This underscores the validity of a new axiom: no chip, no AI. China understands this; I read recently that they've invested some 65 billion dollars over the last two years to jumpstart their semiconductor industry capabilities. The cloud companies are spending billions of capex dollars every year to expand their compute, storage, and network capabilities, and the researchers who work at these companies have more access to compute, and to certain proprietary data sets, than anyone else. They're also able to pay a multiple of university salaries. Two weeks ago I was in Germany, and a professor told me that her university's top AI graduates and scientists were constantly being recruited away, either to America or to China. Last week I was told by two different people, independently, that a Chinese university near Hong Kong was given 800 million dollars just to recruit AI talent.

Another interesting thing: as the rest of the world starts to adopt AI, it's even possible that we won't be able to keep up with the demand. Look at the internet companies, or at OpenAI, a non-profit AI think tank working on artificial general intelligence. They recently issued a report showing that the largest AI training runs over the last few years have been growing exponentially, doubling every three and a half months since 2012. That's pretty fast growth. In a sense, capitalism is doing its job, and perhaps we could just leave well enough alone. The industry AI labs are open-sourcing their work, they're publishing their research, and in general they really are working to make the world a better place. But that won't solve for the sort of long-term research that leads to the giant breakthroughs. Where will that continue to come from? That's why we're here today, because we believe that's precisely where MIT and the Schwarzman College of Computing will play an important part. As an aside, EECS long-term research is starting to be disadvantaged in particular because the cloud companies run these mega operations across more and more specialized workloads and gain unique knowledge from them; for example, they know how many of each type of machine instruction gets executed for the different workloads, and they can experiment with new configurations in every new data center.
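That doubling rate compounds dramatically; a quick back-of-the-envelope check (an illustration added here, not a figure from the talk) of what it implies per year:

```python
# Growth implied by OpenAI's observed 3.5-month doubling time for
# the compute used in the largest AI training runs.
doublings_per_year = 12 / 3.5
print(f"~{2 ** doublings_per_year:.0f}x more compute per year")  # ~11x per year
```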
I'll just quickly mention that access to large amounts of data also brings challenges. First, it needs to be cleaned in order to be useful. This is super labor-intensive for graduate students, while industry is using large amounts of compute to develop and run automated techniques. The data is also siloed and difficult to join with other datasets, particularly if they're spread across different clouds, since those don't seamlessly work together. Then valid privacy concerns are driving regulators to legislate the use of data, sometimes in counterproductive ways, along the lines of "those bits can't cross my geopolitical boundary." This advantages the countries with the biggest populations and the fewest regulations. A recent article in Nature Medicine described a China-based study showing that deep learning could be used to diagnose common childhood diseases with high accuracy, at the high end of what doctors could do. This was a collaboration of American and Chinese researchers, and their six-hundred-thousand-patient data set was from Guangzhou, China. It would be difficult to get that much data and use it similarly in the United States because of our regulations about patient data. And industry is understandably careful about sharing data that could violate users' rights; we can't yet credibly guarantee the privacy of users' data. So it's good that the Schwarzman College of Computing will take a seat in policy discussions, do scholarly work there, and help shape regulations.

As an aside, I was here this morning listening to the ethics discussions, and I just wanted to mention that technology is always pounced on by aggressive, opportunistic people. My first startup was a low-bandwidth streaming video company, and guess who the first users were? It was all for pornography, and we had to decide: no, we don't want pornography using our technology. We did decide that, and it didn't cost us much at the time. Then when I founded VMware, the virtualization company, some of the first users were hackers using the sandboxing of the virtual machines to plan their hacking attacks.

Looking at the data problem in order to solve it: I had a really interesting discussion with Keith Winstein, an MIT alum and Stanford professor. He did a lot of work to simulate data, but he could not replicate the same results between his simulated data and the real data. So he went to incredible effort to collect his own data, in this case streaming video. He still had only a tiny fraction of what YouTube, Netflix, and the others had, but he did it anyhow, because in talking to the companies he found they were open to sharing with him, but they didn't have all the data that his research needed. These are still early days, data is growing incredibly quickly, and there's an enormous opportunity for the Schwarzman College of Computing to be part of the solution. To give a sense of how much more data will be coming, think about edge-to-cloud connected-device estimates. AMD CEO and MIT alumna Dr.
Lisa Su recently said at CES that two billion sensors are in use today, estimated to grow to 35 billion by 2020. Anecdotally, she also said: just for fun, I really love building chips; we're so excited about technology; we can help turn the impossible into the possible, which is of course a classic MIT phrase.

So, in summary: compute, data, and AI are now invaluable to research scientists and engineers, and the industry labs have been able to attract the largest collections of top AI experts and provide them with the most compute and the best salaries. For companies and our economy to continue to benefit from the commercialization of breakthrough discoveries, we need to encourage more industry-MIT collaboration, and it's in our best interest to have government fund these efforts. It's common for students and professors to do an internship or take a sabbatical in industry, which is really valuable, but the focus doesn't tend to be on long-term research or radical new ideas for how to approach problems, and the IP stays at the company. If we could also have top industry researchers and engineers take sabbaticals and invent at MIT, then we'll benefit from industry's unique knowledge and assets, and when a breakthrough does occur, the industry researchers will be in a position to bring it back to their companies and accelerate the commercialization. Finally, we need to find a way for government to properly fund the immense compute needs that faculty and students have. I wonder: could the cloud vendors receive a tax break for donating compute to universities? Should the government and universities collaborate on a giant cloud? That would be super expensive. It's urgent that academic researchers and their students have access to generous amounts of compute on a par with industry. This means developing the mechanisms to fund compute and salaries, and solving the IP and privacy issues, so that we can bring industry collaborators and their resources to academics. It's needed to help the extraordinary minds at MIT continue doing long-term fundamental research with every advantage that's available. It turns out that compute and advanced AI are now critical to the research process, and thus they're integral, as we know, to the mission of MIT and the Schwarzman College of Computing. So thank you very much; I'm excited to see what the coming years bring to MIT.

All right, good afternoon, everyone. I am always so happy to be back at MIT; it's always a highlight of my year, and what an amazing occasion to be here. It really makes me proud to see MIT continuing to lead from the front and making ambitious investments like the one in the college. So we're going to talk about computing in 2030, and specifically something that I call the human-machine partnership. Humans have a pretty unique ability, a superpower. Let me get the clicker going. We have a superpower: we are able to invent technology and tools that make our life easier, and we offload a lot of our work to machines. We've offloaded a lot of literal heavy lifting to machines, culminating in things like the Industrial Revolution, and more recently we've started to offload a lot of our intellectual work to machines. Back in the 1600s, a computer was a person, not a thing. Now, of course, that's changed, and computation, something that used to be done only by humans, is now almost entirely done by machines. That's a really important trend that's
going to continue. We've created this virtuous cycle where we invent machines, we offload more work, and we use some of our free time to invent even better machines, and a lot of good things happen: our productivity goes up, we raise our standard of living, and we're able to tackle bigger and bigger challenges as a society. All of us here are beneficiaries of that compounding growth. And with every turn of the cycle we also find new things to do with our time, and I'm not just talking about things like being an Instagram influencer or a drone operator, which are real jobs now. I mean things like the very concept of knowledge work, or the concept of going to an office from 9:00 to 5:00; that's a relatively recent phenomenon. In fact, the term knowledge work was only invented about 60 years ago. Even in my own lifetime as a millennial, we've seen a lot of profound change, from the birth of the PC to the rise of the modern Internet to cloud and mobile in the last 10 or 15 years. Technology is continuously transforming our lives and the way we work, and our productivity is always accelerating. Except for the fact that it kind of isn't. This was surprising to me, because I thought, we have all this technology, but around 10 years ago the rate of productivity growth, at least in the U.S., fell pretty dramatically below historical levels. Now, I'm not a labor economist, and probably some of you in the room are, so I won't get too far ahead of myself here, but I want to talk about a couple of things that can't be helping.

I really started getting interested in this a few years ago, because my company was doing super well, but I had this feeling that my life was kind of on autopilot, and that the experience of work had turned into this grind, this kind of treadmill: wake up, try to get a workout in, go to the office, meetings all day, come home, try to clean out my inbox, pass out, repeat. Now, to be sure, these are first-world problems, and I'm very lucky to be in that situation in the first place, but at the same time I thought: is this it? It was frustrating, because I definitely felt busy, and I was working hard, but I didn't feel productive. And when I looked into it, it turned out I wasn't that productive, and maybe none of us are. When you shine a light on where we spend our time at work, and McKinsey did this a few years ago, it turns out that we spend the majority of our time, actually over 60% of it, on tasks like finding information, email, and coordinating people. I like to call these things work about work. To be fair, not all of that 60% is waste, but it does mean that we only spend 40% of our time doing the jobs we were actually hired to do. So I thought about that a little more, and what that's really saying is: take Monday, Tuesday, and Wednesday and just light them on fire, this week and every week of your working life, times a huge chunk of the planet. Or in other words, for those of you who are students, with your whole life in front of you and a 40- or 50-year career about to begin: unless something changes, you have a decade, maybe two decades, of email to look forward to. And that's not all. As you can see, all the different apps we're using now have taken our attention and shattered it into a million little pieces. It doesn't make a lot of sense. How many of you would say that you were hired for your
minds, that your job involves using your brain at work? Okay. Now, how many of you feel that you actually have the time and space to actually think, that you finish every day and say, man, I just crushed it, I was in a flow state, I was super focused? If you have any tips for the rest of us... not a lot of hands, right? And no one is immune to this. If Einstein were alive today, he'd spend his first couple of hours archiving LinkedIn invitations and Groupons, and then he'd get down to work, and just as he was about to have some brilliant flash of insight, his phone would buzz with a Trump tweet or a Slack notification. Would we still understand relativity? Maybe there are ten other breakthroughs that would have happened by now if we weren't in this situation; we'll never know. And it's not just inefficient; it's also making us really unhappy. Two-thirds of people report being disengaged at work, another half report being consistently exhausted at work, and those numbers have only been going up over the last 20 years. This is tragic, because work can be such a fulfilling experience, and everybody deserves that.

Okay, so something needs to change here, but what should we do about it? The first thing we need is a change in mindset. If you think about the knowledge economy, it's basically one ginormous exchange of dollars for brain power, and every company can tell you down to the penny where the payroll dollars go. But on the other side, where does the brain power go? We are totally blind. We're not even measuring it, let alone spending it well. And this is insane, because every single project that we've talked about today, and every single challenge that we have as a society, depends on being able to harness our brainpower, on being able to do knowledge work well, and we have a long way to go there. We need to start treating our brain power like the precious fuel that it is; it's the fuel for human progress. It's things like this that were the reasons we evolved our mission at Dropbox to put a dent in this problem. We believe that knowledge work has gone totally off the rails and needs some fundamental reimagination, so we decided our mission is to design a more enlightened way of working. We'll talk Dropbox another time, but to borrow the words of a wise professor from a couple of hours ago, my advice to you is: download Dropbox.

But anyway, what should we do about this? There are a few tactical things: we need to create the conditions for people to be able to use their minds at work. First, I remember visiting my dad at work when I was a kid. He spent most of his career at Draper Lab down the road, and when I went into his office, a lot of things were the same: he had a desk, he had a PC, he had a phone. But a lot of things were different: he got five emails a day, not 500, and he had a door that could close, and he could turn his phone off. You could see how someone like that could actually use their mind at work. Now, I'm not saying we should go back to 1995, but we definitely need a much calmer and more focused working environment. That's not going to be enough, though. When you think about how we improve knowledge work, I think machines can help a lot, and I think following this recipe of offloading a lot of our busy work
to machines is a good one to follow. There's a lot of busy work here, but maybe if we could find a way to have machines do more of the heavy lifting, we would be freed up and things would start going in the right direction. How do we do that? Humans and machines have complementary abilities. Computers have a kind of mechanical intelligence: they're really good at following instructions, really good at following recipes, and they don't make mistakes, so there are a lot of benefits. But they have a ton of limitations too, and as we've heard a little bit today, a three-year-old human can run circles around our most powerful supercomputer when it comes to basic things like carrying on a conversation or just knowing basic facts about the world. But that's starting to change, and that's a big part of why all of us are here today: there's been this amazing renaissance in AI and machine learning in the last 10 years, and machines have been getting all these new skills. They can see, they can hear and speak, and to some extent they can read and understand things, certainly to a much greater degree than in the past. These are just a few of the headlines from the last month alone, and some seem a little ridiculous: AI that spews fake news, AI that plays StarCraft; what are we really looking at here? But under the hood there's some pretty amazing stuff happening.

Take the StarCraft example. Many of you have probably heard about AI dominating humans at board games like chess and Go, but what's really exciting from the AI perspective is that these AIs are teaching themselves to play these games; they're not running some pre-programmed strategy from a human. It also turns out that a game like StarCraft is a much harder nut to crack than some of those games, and the reason is that what makes real-time strategy games like StarCraft hard and what makes real life hard have a lot in common: a lot of uncertainty, a lot of complexity, a lot of ambiguity, and things happening in real time, so decisions are coming at you faster than you can deal with them. I love StarCraft, I've been playing it since I was 14, and I'm also CEO of a 2,500-person company, and the parallels between those two things are actually pretty surprising. I don't think this is going to happen in the next 10 minutes, but in the future we're going to look back on AI being able to figure out how to coordinate and plan and juggle these things as a really big deal, one that can help save us a lot of time.

When you add all this up and unpack what it means to actually do some of these things from an AI perspective, it means that computers are starting to get these new skills: they're able to synthesize and summarize real text, they're able to read articles and answer natural-language questions about them, and they're able to start doing things like organizing, coordinating, and planning, and these are things that occupy a lot of our time. So it's pretty exciting. When you look at a lot of the questions we have at work, the things we need to get done, machine learning can help a lot. Take a question like, which of these projects might run behind schedule, or which of these emails do I need to respond to; a data scientist would tell you that's a classification problem, and computers are really good at taking all this data, filtering out the noise, figuring out what's relevant, and making accurate predictions, and they're getting a lot better. Or something like, hey, organize my email into topics; a data scientist would say, oh, that's a clustering problem.
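To make the framing concrete, here is a toy sketch of the kind of email triage Houston describes, with invented example data; a production system would be far more involved:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: emails labeled 1 if they needed a reply, 0 otherwise.
emails = [
    "Can you review the Q3 budget by Friday?",
    "Weekly newsletter: 10 productivity tips",
    "Are you free to meet tomorrow at 2pm?",
    "Your receipt from the coffee shop",
]
needs_reply = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a standard text-classification baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, needs_reply)

print(model.predict(["Could you send me the slides before the review?"]))
```

The "organize my email into topics" version would swap the classifier for an unsupervised method such as KMeans over the same TF-IDF vectors, which is the clustering problem he mentions.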
Importantly, AI doesn't have to be perfect to be really useful; just ask anyone who's used Google Translate, and there are a ton of other examples. More and more, what we have to look forward to is our machine brains taking a first pass at everything and cutting through a lot of the clutter so that our human brains don't have to, and hopefully one day that super obnoxious 15,000-unread badge will finally go away. You can start to see how some of these skills might be applied to things like finding information, email, and coordinating people: the work about work that occupies the majority of our time will start being massively assisted by machines. Frankly, when you think about machines being able to organize, prioritize, plan, and coordinate, we're going to see that these are increasingly machine tasks in disguise. In the same way that computation went from being a human thing to a machine thing, we're going to see a lot of other verbs move in that direction too. It's going to be a gradient: some tasks will be fully automated and some will be more assisted, but one way or another the machines are going to take a lot of the heavy lifting off our plate, and we're going to be freed up.

This is important, because the world needs humans. Computers don't have feelings or dreams or a soul or imagination; we've got a lot of unique abilities that computers won't be able to match anytime soon, no matter how much of an AI optimist you might be. So what we should be asking for is the best of both worlds. It won't be long before we all have an AI copilot that's able not only to take the drudgery out of our work but to spend all of its cycles making sure that all of our cycles are used as effectively as possible. This complementarity, this idea of augmentation, is important, because so much of the narrative in AI is about an existential, zero-sum battle against the robots: first they're going to take all our jobs, and second they're going to wake up and kill all of us. I think at least part of that narrative is correct, and we should not kid ourselves. Not about the apocalypse, but things like automation are going to displace a lot of jobs and create real problems. This is going to be one of our biggest challenges as a society. Fortunately, we've got a lot of smart people already making progress in some of these areas, and it's one of the reasons I'm so excited about the college. For example, I've really enjoyed spending time with Zeynep Ton, who is at Sloan, and she has shown through her Good Jobs initiative that companies like Costco that invest in their employees, including entry-level employees, and provide career paths and training, are actually more profitable and more competitive than companies that don't make those investments and treat their employees like cogs in a machine. So we're going to need a lot more of that kind of thinking.

But what excites me most about computing in 2030 is that we're on the eve of a new generation of our partnership with machines, one where we can combine the unique superpowers of the human brain and the silicon brain, and we'll be able to redesign knowledge work so that machines take a lot of
the busy work, so that people can be freed up to do the people stuff, and we can spend our working days on the subset of our work that's really fulfilling and meaningful. As we do that, we'll be so much better equipped to tackle the many challenges we face. My hope is that in 2030 we'll look back on now as the beginning of a revolution that freed our minds the way the Industrial Revolution freed our hands, and my last hope is that it happens right here, that the new College of Computing is the place where that revolution is born. Thank you.

Good afternoon. I'm John Kelly, IBM executive vice president, but more importantly this afternoon I am a technologist who has spent nearly four decades in information technology. I'm also a person who works in the most advanced labs at IBM and, around the world, in universities, so I get to understand the past as well as see the future. I want to congratulate Rafael and the entire MIT community for this new college; it couldn't be more important to the United States and the world than right now. And to Steve and the Schwarzman family: I want to thank you not only for the magnitude of the gift but, I would argue, for the timing of the gift, which is what's really critical, because we are not on some continuum with just another technology rolling forward. We are at an inflection point. We are at the beginning of something we're just barely beginning to understand. We've talked a little bit about it this morning, but let me show you my perspective on this, and let me put it quickly in historical context.

The first era of computing was all about mechanical switches and doing arithmetic, and that underlying technology evolved into vacuum tubes, and eventually we ran out of power and we ran out of humans to make those machines do what we wanted them to do. So we went to a new technology, transistors. We started building integrated circuits, we started to integrate at an exponential rate, which is Moore's law, and we started to increase the density of memory and storage in these computers, and all of a sudden, in the '50s, we realized: hey, we can program those computers. We can put a program into that machine; the storage is big enough and the processing fast enough that we can teach the machine to act and do things that we do. The last 50 or 60 years of computing has been the programmable era of computing: everything from the largest systems IBM builds to your laptop or your phone is a programmable system, and that computer cannot do a single thing we haven't programmed it to do. We're now entering an entirely new era of computing, and this is not just artificial intelligence and machine learning appearing all of a sudden. We are now moving to technologies and machines that do not require programming; they learn on their own, which is the artificial intelligence, and the underlying technology is moving fast, as I'll show you. We can't even begin to understand what's going to happen in this third era. I pick 2011 as the start only because that's the IBM Watson artificial intelligence machine, but we're only eight years into what will be a minimum of 50 or 60 years, and I will argue more than that. So the timing, Steve, of this gift, at the beginning of a new era of computing, couldn't be better. It reminds me of the early days at IBM when we invented the first programming languages, the Fortrans of the world, and we went to the universities and said, we need a new curriculum around computer science, and they said
what's that? We need to invent programming. What's that? Well, we need colleges and universities now to major in this area.

Let me point out what's also different now in the underlying technology. Computing systems have been designed basically for that programmable era. We started in the '50s with very simple programming, we advanced those technologies with Moore's law, we began to do heavy-duty multiprocessing and massively parallel processing, and we threw everything we could at this roadmap of computing to eke out computing power in the programmable era. While we were doing that, we were working on artificial intelligence, but we were using the programmable systems to do it. In IBM's AI history, the first games of checkers, and the first games of chess, Deep Blue, which beat Garry Kasparov, ran on a traditional computer with a little bit of AI specialty in it. The famous Watson Jeopardy! system was fundamentally a classical computer. We operated AI separately from programmable computing. That time has ended. We are now co-designing artificial intelligence into our systems simultaneously, and the best example I can give you of that new convergence, and of where computing is going now, is the Summit system and the Sierra system, the two largest computers in the world, which we built for the United States Department of Energy for science and for defense. Those systems not only perform the largest traditional modeling computations, at 200 petaflops, which, since not everybody here is an MIT computer scientist, means 200 billion calculations a million times a second. As impressive as that is, Summit is an even more powerful AI system than a compute system. In fact, it had been on the floor at Oak Ridge for only about 30 days when a team from Oak Ridge, with some people from Google, proved that that machine set a world record in AI learning, and that's because there are as many or more AI custom accelerators in that system as there are general-purpose processors. So for the first time the two worlds have come together, and every system that IBM designs going forward will be optimized for artificial intelligence. It's a different world. Keep that system in mind, because I'll come back to it in a minute. The other thing I should tell you about that system: not only is it big, about three tennis courts in size, still smaller than a cloud but more powerful than a cloud in terms of both compute and AI, but it consumes 13 million watts of power. Thirteen megawatts. For reference, what's between your ears runs on about 20 watts. So with 13 million watts of power we're still barely approaching what a human can do with 20 watts, which means we're doing something wrong.

Realizing that we had to push the roadmaps of AI and computing together, last fall we came up to MIT and Rafael and said, from an IBM standpoint, let's try to advance the research here. We formed a new partnership. We're putting in hundreds of millions of dollars, and we're putting our best talent from IBM, co-located with the best faculty and students here at MIT, to try to advance the technologies beyond Summit and Sierra, all the way from the underlying physics of lower-power artificial-intelligence processors, to how we secure these systems, to how we share the responsibility for the systems we're going to be creating.
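Kelly's unit conversion checks out: 200 billion calculations performed a million times each second is 2 x 10^17 operations per second, which is 200 petaflops. A one-line check:

```python
ops_per_second = 200e9 * 1e6               # 200 billion calculations, a million times a second
print(ops_per_second / 1e15, "petaflops")  # 200.0 petaflops
```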
As new as this Watson AI Lab is, I can tell you we are ecstatic with its success, and in our view the synergy between this lab and, Steve, your college is going to be off the charts. We're just thrilled, and I thank all of the tremendous faculty who have gravitated to it; the research, the papers, and the ideas that are coming out of it are stunning.

As I said, everything we've built in the last 60 or so years has ridden this Moore's Law curve. Rafael and I worked for years and decades on the technology behind this, as did many of my colleagues from the industry. Moore's law was basically an observation that we doubled the number of transistors, and the performance, every 18 months or so, and that served us well as we shrunk the devices. But fundamentally that law rests on the premise that if we keep shrinking, things will keep getting faster, and that is no longer happening; and the energy budget of riding that law, as I told you, is 13 megawatts. We're reaching the physical limits of shrinking and of Moore's Law, and at the same time we can't get that much power and energy in and out of a box; it's becoming physically impossible. With that being the case, what's beyond Moore's law? How do we decouple ourselves from this concept of building ever-denser transistors that are either a 1 or a 0? Can we compute in a different way than just ones and zeroes, in a way where the computer's capacity and capability scale beyond Moore's Law, and faster?

Well, we're fortunate, because in May of 1981 a group of scientists from IBM and MIT got together, held a conference on something called quantum mechanics, and started to discuss the theory of using quantum mechanics for computing, way ahead of its time. Richard Feynman is in that picture, by the way, in the back right; the guy taking the picture is an IBM Fellow, Charlie Bennett. The fundamental principles of quantum computing came out of that conference. We and many others in the industry spent the next two to three decades trying to build a quantum computer. Ten or 15 years ago we had quantum computers that looked like this: basically a physics research project. Instead of transistors we had built quantum bits, or qubits; we had built two of them, taken them down to almost absolute zero, and gotten them, for the quantum physicists in the room, to entangle, and we believed we could compute, for the first time ever, using quantum theory at the bottom of that research project. Fast forward just a few years, to early last year: this is the world's largest publicly available quantum computer. It's 20 qubits, and on certain problems it has roughly the performance of that enormous Summit and Sierra system I showed you. It's a prototype, though, of a commercial system. A couple of months ago we announced this: a commercial, fully viable quantum computer, reliable today at twenty qubits, and we've announced we have fifty qubits. At fifty qubits, again, you cross over what's possible with a traditional computer. So that's the new roadmap. The exciting thing about scaling a quantum computer in qubits, and I don't have time to get into it, is that it is a new exponential; in fact, when we reach a hundred qubits in that system, I can prove to you that it will do calculations that exceed the number of atoms on the planet if they were all transistors. So we are about to enter a whole new regime.
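The new exponential Kelly refers to is the size of the quantum state space: an n-qubit state takes on the order of 2^n complex amplitudes to describe classically, so capability grows exponentially with qubit count. A small illustration of that scaling:

```python
# A classical description of an n-qubit state requires 2**n complex amplitudes.
for n in (20, 50, 100):
    print(f"{n} qubits -> 2**{n} = {2 ** n:.3e} amplitudes")
# 20 qubits -> ~1.0e6, 50 qubits -> ~1.1e15, 100 qubits -> ~1.3e30
```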
It was thought up until very recently that there were only a few problems those computers could do, if anybody could ever build one. So we built one, and now we look at what kinds of problems we can solve with that kind of machine. In computing as we know it today, there's a set of easy problems, the programmable things, the math that we all understand, and there's a set of really hard problems that classical computers struggle with. As an example, if you want to model from first principles how two or three atoms come together into a molecule and reach the lowest energy state, which will be a stable molecule, that's a tough calculation for a classical computer, and we have to make all kinds of approximations. The quantum computer can do many of those kinds of problems, can solve many of the hard problems classical computers can't, and in fact can do a whole class of things that classical computers will never, ever be able to do. So when you look at this, you say: wow. Things like factoring numbers, which is the basis of all encryption on the planet, are really hard for classical computers; these quantum systems do it in a snap. Things like simulating quantum physics itself: what better way to do it than on a quantum computer? Things like finding new materials and new drugs with large molecules, dozens or hundreds of atoms, not two or three, can only be done on a quantum computer. And optimization routines, whether it's sorting data or modeling the financial systems of the planet, can only be done at this scale with these kinds of systems. Most excitingly, we have demonstrated that many artificial intelligence algorithms will run on this system. So now we're in a world where we've brought AI and compute together with Summit and Sierra, we're progressing those technologies to take down power and improve performance, and we're introducing quantum computing into the world. I hope now you believe that this is not a small increment or a typical point in time; we are on the verge of some incredible things.

I will end where many of the other speakers did. I can prove to you, in my labs and in what we're doing with MIT, that everything I told you is going to happen in the computers. The issues will be all around the human factors. There are things we take for granted when we communicate human to human; human to machine, we must know that the machine is fair, we must be able to explain what it is doing, it must be secure, and it must have ethics. I will argue from experience: in the past, when I built big computers, I could at first just build them and ship them, and if one had a reliability problem, we'd go fix it. We learned over time that we had to build quality and reliability into those systems from day one of development. If I had time, I could prove to you that we must design these attributes into these big systems and not assume we can do it later or paper it on top. We can build systems with transparency and ethical behavior in their guts, and, Steve, I was thrilled that you included this whole domain of ethics and fairness and policy in this new school. I would argue the physics is fun, but this is where the rubber really meets the road. And so I want to congratulate MIT: Rafael, Anantha, Marty, the whole team at MIT, for this wonderful time. With the birth of this new college, I thought it was appropriate for us to give the college a new gift, and so today I'm announcing that we are building, in our IBM manufacturing plants, five racks of Summit that we will deliver to MIT
and to the new college as a platform for the college. Thank you very much.

Okay, hi, I'm Regina Barzilay. There was some change in the program, so no Antonio Torralba, and I'm going to talk about how AI changes the way we diagnose and treat diseases. I would say this is kind of a funny topic, and the reason it's funny is that today, when you open a newspaper, you read about all the amazing things AI delivers, breakthroughs all the time, but when you go to the hospital, none of us actually sees this AI. This is not surprising: studies have demonstrated that only 5% of US healthcare providers say they're using AI to help their patients, and even there the definition of AI is pretty fluid. So it's very important to me, as a patient, to make sure that whatever we are developing in this space is actually deployable in hospitals and can be used to help patients. Exactly a year ago I was standing here, talking at a Quest for Intelligence lunch, showing a system that we had just developed with MGH which can read mammograms, and I'm happy to tell you that the system has now been in production for 13 months and has read 40,000 mammograms. Here you can see how it's done. This is a traditional reading room, found in every hospital, where after your image is taken, a person sits and reads it. Now each of these images is first read by the machine, and this is my collaborator, Dr. Connie Lehman, who looks at the machine's prediction and then signs the report. It was great to see that it's working, helping, and doing what humans are supposed to be doing, but the exciting question is: can machines actually do what humans cannot do? Now I will show you an example of something like that. What you can see here are two images of women who, at the time they had a mammogram, did not have cancer. One of these women went on to develop cancer; the other did not. Today, none of you, and none of their radiologists, can say which of these women will develop cancer in two years. But what we know from biology is that cancer doesn't grow from today to tomorrow; it's a very long process that makes a lot of changes in tissue. So the logical question is: can you train the machine on images for which we know the outcome two years out, or five years out, to say what is to come? Because the machine can see hundreds of thousands of images, and because it has the capacity to really distinguish these details, it was able to do this task pretty well. This week we are actually launching this new risk model at MGH, and what we've seen is that if this model places you in the top 20%, you have a very non-trivial chance of developing breast cancer, and now many physicians at MGH are thinking about how to design procedures that would help these women. The good news for breast cancer is that you can actually do something to prevent it, and you can imagine what this will do for other diseases, like pancreatic cancer.

But diagnosing illness is just one of the issues; the second big question is how we actually cure the disease, and there are lots and lots of diseases for which we don't have a cure. You can see that investment in pharma continues to grow, the drugs become more and more expensive, and a lot of drugs fail, even in late stages. So the question is: what can we do? Can we use technology to help us design drugs faster? This is really a big question, because if you think about it, when you are designing a drug, even if
But diagnosing is just one of the issues; the second big question is how we actually cure the disease. There are lots and lots of diseases for which we don't have a cure. You can see that investment in pharma continues to grow, drugs become more and more expensive, and a lot of drugs fail, even in late stages. So the question is: what can we do? Can we use technology to help us design drugs faster? This is really a big question, because when you are designing a drug, even a small molecule, which is the easier case, it's a combinatorial space, a huge, huge space, and you are searching this huge space just to find the molecule with the right profile.

So how is it actually done today? There has been a lot of advancement in manufacturing, so you can do high-throughput screening: you can check lots and lots of molecules and see if they are toxic or not, if they are potent, and so on. But obviously you are bounded by how many molecules you can test; maybe you can do ten thousand, a hundred thousand, but you cannot do hundreds of millions of molecules. And that's exactly where our models come in. These models are trained, given the molecular graph, to predict various properties that people care about, and we have demonstrated that they can do this pretty well across both chemical and biological properties. But what is more interesting is to see what these models do when they try to interpret molecules: they translate them into some continuous, smooth space, and the geography of that space relates to how good the properties of the molecule are. This opens the door to trying something really new, which goes to the heart of chemistry: you can start with your molecule, translate it into this nice continuous space, optimize it in the continuous space, and then generate a new molecule. That's exactly what we are currently doing. This work is very recent, just half a year old, and our hope is that when we achieve the required capacity we can totally change the process of drug design. And as I told you, I really care to make sure that what we are doing can actually be deployed and can make a difference, so we have a consortium of seven pharmaceutical companies that not only help us with funding but also take our tools, implement them in practice, and give us feedback, and jointly with them we are working on drug development. This is very exciting.
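The optimize-in-continuous-space loop she describes can be sketched compactly. The encoder, decoder, and property predictor below are untrained stand-ins (real systems are graph networks trained on large molecular datasets), so this shows only the shape of the idea: embed a molecule, hill-climb the predicted property in latent space, decode a new candidate.

```python
import torch
import torch.nn as nn

latent_dim, feat_dim = 32, 128  # illustrative sizes

# Untrained stand-ins for learned models of molecules.
encoder = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, feat_dim))
property_head = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))

mol_features = torch.randn(1, feat_dim)  # placeholder for a featurized molecule

# Embed the starting molecule, then optimize its latent code directly.
z = encoder(mol_features).detach().requires_grad_(True)
opt = torch.optim.Adam([z], lr=0.05)

for _ in range(100):
    opt.zero_grad()
    loss = -property_head(z).sum()  # gradient *ascent* on the predicted property
    loss.backward()
    opt.step()

new_candidate = decoder(z)  # decode the improved point back to molecule space
```

The design point is that the latent space is smooth where the discrete space of molecular graphs is not, so gradient methods become usable.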
Let me just finish my talk by showing you some diseases. I'm sure all of you know the names of these diseases. For none of them do we have a cure today, and almost every one of us knows a person who was diagnosed with one of them. To me, the real question about AI and healthcare is whether, ten years from now, maybe five years from now, when we are celebrating the birthday of the college, we can actually cross some of these diseases off the list and find the cure. Thank you. [Applause]

Hello everyone. I'm sure all of you know what wearables are, and probably many of you are wearing them, so I'm going to tell you now about the move from wearables to invisibles. I work on radio signals. When I started as a professor at MIT I worked a lot on improving Wi-Fi, improving cellular, connecting many, many people to the internet, and so on. But about five years ago I started thinking that these radio signals are way more powerful than the way we use them today. Just think with me: radio signals propagate in space, they traverse walls and obstacles, and they reflect off the human body, because our bodies are full of water, and some of these minute reflections come back to us through walls and obstacles. Now, if I have a smart device that can interpret these reflections, maybe I can start seeing through walls. That was wild when we started, but somehow I convinced my students: let's try.

So let me show you some of our early experiments. This is our early device; we put it in the office adjacent to this one, behind the wall, and we monitor this person. The red dot that you see on the side of the screen is where the device thinks this person is standing right now. Let me play this video for you: as he moves, you can see the red dot move with him, and it's all purely based on the reflections off his body, through the wall. We can track him pretty accurately. He has no sensor on his body, no cell phone, nothing; he might be oblivious to being tracked from behind the wall.

Okay, so this was from a few years ago, and over the past few years I've been working on this with my students here at MIT, improving the technology further. I want to show you some of our most recent results. When you look at this, you see the red dot and you know he's at that particular location, but what the dot doesn't tell you is whether he is standing or sitting, what he is doing in there. When he walks, you see the red dot slide with him, but you don't know whether he took a step with his right foot or his left foot. So let me show you our most recent result, from just a few months ago. Now the big frame is what the wireless device sees from behind the wall, and the small frame is the camera inside the room. As you can see, we are getting the full skeletons of the people from behind the wall. Let me play this: you see, when he sits, the device knows that he's sitting; it has his full skeleton; it knows how people are moving, and there are multiple people moving. And remember, all of this uses radio signals reflecting off the human body, from behind the wall, without any wearables.

Now how do we do this? How can we make this work? There are two pieces: advanced radios, and machine learning. The radio is like the ear of our device; it has to be very, very good and very sensitive to sense these minute reflections from behind the wall. But what is more important, of course, is the brain, and that is the machine learning technology. You hear a lot about machine learning and neural networks; they operate on images, on text, on audio. What we are doing here is making neural networks operate on radio signals, not just to get as good as a human, because none of us can see through walls, but to do something that we cannot do today.
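The physics underneath that red dot is compact: a reflection arriving after round-trip delay t has traveled to the body and back, so the range is c·t/2. The sketch below is back-of-the-envelope only; real devices of this kind measure delay indirectly (for example with FMCW radios) and add antenna arrays plus learned models on top to get full pose.

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_delay(round_trip_s: float) -> float:
    """Distance to a reflector, given the echo's round-trip time."""
    return C * round_trip_s / 2.0

# An echo arriving 40 nanoseconds after transmission puts the person
# roughly six meters away; the wall attenuates the echo's amplitude
# but does not change this arithmetic.
print(range_from_delay(40e-9))  # ~ 6.0 m
```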
Okay, so what else can we monitor using these wireless signals, without any wearables? Sleep stages. Perhaps you know that when we go to sleep our brainwaves change and we enter different stages: awake, light sleep, deep sleep, and rapid eye movement, or REM. In the U.S., one in every three people has sleep problems, so being able to understand sleep and improve sleep is very important. Actually, sleep stages are not just important for sleep; they matter for a variety of diseases. Just to give you an example: depression. Did you know that one of the signs of depression is that REM, this rapid eye movement stage, happens too early, very early in the night? That is one of the signs that can show up in depression. So imagine if you could monitor sleep stages every night, at home, very easily; then perhaps we could tell when someone is falling into depression, even before they realize it. Unfortunately, today, if you want to monitor sleep stages, you send your patient to the sleep lab, where they put electrodes on their head. It's not a happy experience; you might do it for one night, three nights maximum, but you don't want to live this way every night. So let me show you what we can do. This is our device: it transmits a very low-power wireless signal, it analyzes the reflections using machine learning, and it spits out the sleep stages throughout the night. It knows when this person is in REM, which is the stage in which we dream.

What else can we do? Here is this guy sitting, like you, and reading. We can get his breathing from these signals: his inhales, his exhales. We asked him to hold his breath, and you can see the signal stays at a steady level, because he is neither inhaling nor exhaling. Now I want to zoom in on these signals further. This is the same breathing signal: these are the inhales, these are the exhales. The first time we saw these signals we thought, oh, there is some noise on it, these blips. But it turned out this is not noise: if you zoom in, these are his heartbeats, beat by beat. And again, remember, without any wearables, purely by analyzing the wireless signals in the environment.
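One way to see how breathing and heartbeat coexist in the same reflection is in the frequency domain: chest motion modulates the signal slowly (breathing, roughly 0.2 to 0.3 Hz) with a small fast ripple on top (the heartbeat "blips", roughly 1 to 1.5 Hz). The sketch below runs on synthetic data, not the group's device output.

```python
import numpy as np

fs = 50.0                                   # sample rate (Hz), illustrative
t = np.arange(0, 60, 1 / fs)                # one minute of "reflection" signal
signal = (np.sin(2 * np.pi * 0.25 * t)            # breathing: 15 breaths/min
          + 0.05 * np.sin(2 * np.pi * 1.2 * t)    # heartbeat ripple: 72 bpm
          + 0.01 * np.random.randn(t.size))       # measurement noise

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

breath_band = (freqs > 0.1) & (freqs < 0.7)  # plausible breathing range
heart_band = (freqs > 0.8) & (freqs < 2.0)   # plausible heart-rate range
breaths = 60 * freqs[breath_band][np.argmax(spectrum[breath_band])]
bpm = 60 * freqs[heart_band][np.argmax(spectrum[heart_band])]
print(f"{breaths:.0f} breaths/min, {bpm:.0f} bpm")  # -> 15 breaths/min, 72 bpm
```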
So what applications does this technology have? Of course there are many, but the applications I am interested in are in healthcare. When we started working on this we got a lot of emails and contacts from doctors saying: can you monitor my patient at home? And you can think about it: doctors discharge patients from hospitals, and once a patient goes home they have no idea how he is breathing, what his heart rate is, what his vital signs are, whether he is moving, whether he is in bed; they don't know what's going on. Imagine discharging every patient with a device like this, so that you can continue the monitoring at home. This is also very important for our aging population. We know that for older people chronic diseases are a very serious problem, but we also know that many hospitalizations for chronic diseases are avoidable if you can detect the problem early on. This includes heart problems, pulmonary problems, things related to UTIs and kidney problems, Alzheimer's, even depression. Today, if you want to monitor a patient at home, what do you have? Something like this: if you want to watch their breathing, you put a nasal probe or a chest band on them; for people who have Parkinson's, you ask them to wear accelerometer sensors on their limbs and move like that; for sleep, I told you, you have all of those sensors on the head. We are changing this image to a smart, Wi-Fi-like box that sits in the background of the home and monitors breathing, heartbeat, sleep, falls, gait, mobility, interaction with caregivers, all of that purely by analyzing the surrounding wireless signals, without asking the patient to wear a single sensor, or to keep diaries, or to change anything about their usual schedule.

We have deployed more than 200 devices so far in the homes of patients in different disease areas: we are working with doctors in Parkinson's disease, in Alzheimer's disease, in pulmonary diseases, and in depression, and we are working together with our doctor colleagues, with pharmaceutical companies, and with the healthcare system to try to bring these technologies and these advancements to healthcare. Thank you. [Applause]

4 Comments

  1. vallab said:

    Why doesn't someone invent an AI text bot that enables speakers to give open public talks without tethering them to look down at the text?

    May 22, 2019
  2. flemlion13 said:

    Thank god for the part with Drew Houston (starts around 33m), the only time you're able to stay awake.
    Are the courses at MIT just as sleep-inducing as this?

    May 22, 2019
  3. Harrison Taylor said:

    Dr Phil shaved his mustache?

    May 22, 2019
  4. DeadShot_77 Fogs said:

    I want to study at Massachusetts University.

    May 22, 2019
