APL Forum: Biotechnology for the Nation – BioFuture Team Debate: BioUtopia or BioDystopia?

We will now begin the debate portion of our afternoon, which I'm very excited about. It will be moderated by Jim Miller, who is a senior fellow at APL and president of Adaptive Strategies. Previously he served as the Under Secretary of Defense for Policy and as the principal civilian adviser to the Secretary of Defense on strategy, policy, and operations. He is a well-respected expert in nuclear deterrence, missile defense, cyber policy, space policy, and cyber warfare. Excuse me. Please welcome Jim Miller and our debate teams.

Thank you, Ally. Thank you very much.
Welcome. We have a tremendous panel here today, and in the interest of time I'll introduce them only briefly; I will introduce each of our panelists as he or she is about to engage in their portion of the debate. Our format today will be a debate, and the question being debated is whether the US should adopt a gung-ho approach to bioengineering for national advantage or a go-slow approach to bioengineering for national advantage, where that national advantage could be military, could be economic, could be intelligence collection, across the board. Because that's such a broad topic, we've broken it into three more chewable bites, and each of our pairs of debaters will take on one of those bites. We've asked each of the participants to give a clear statement, either gung-ho or go-slow. They all have more nuanced views than that; we'll come to those more nuanced views after we get through our initial round.

So let me start by introducing our first two: Dr. Geoff Ling and Dr. Gigi Gronvall. Sorry, Jeff will spar off first. Jeff will advocate a gung-ho approach on using biotech, including genetic engineering, in order to create quote-unquote superhumans: to advance the capacity of humans in various dimensions. That could be extending lifespans, it could be making our future population, or segments thereof, more intelligent, creating super soldiers, and so forth. Jeff is a double doctor, MD/PhD, a retired US colonel who headed DARPA's biotech office from 2014 to 2016, and currently a professor of neurology and CEO of Sungkyu LLC. Gigi Gronvall will take the other side of the argument and advocate a go-slow approach on genetic engineering and biotech for human improvement, and particularly for so-called superhumans. Gigi is a PhD immunologist, a senior scholar and assistant professor at Johns Hopkins, and author of a 2016 book, Synthetic Biology: Safety, Security, and Promise.

So here's how we'll roll. Jeff, you'll have three minutes to make the gung-ho case on bioengineering for humans. Gigi, you'll have five minutes to make the go-slow case, including responding to Jeff's points. Jeff, you'll then have two minutes to make a counter-argument and a final closing argument as well. And as you all know, you'll have a warning when you're at the one-minute mark for each of your sessions. As we get underway, we'll start there; we'll introduce the other debaters as we get to their part of the debate. Jeff, over to you. You have three minutes to make the affirmative case.

Gung-ho, absolutely, no question about it.
We're on the precipice of remarkable opportunity. You heard from the prior panels, and I'm not going to use up my three minutes enumerating the vast opportunity space, both from a biological and a technological standpoint. But first, let's ask ourselves: what is a superhuman? If we look at the comic books, we'd be impervious to bullets, we would fly, we would run way, way super fast, and we would not get sick. Those are mostly the elements when you look at popular culture. They've left out brilliance, of course; they've left out brilliance, if you notice that. But what is a superhuman if we look at it in those contexts? Think about where we've come from. Just think: not 40,000 years ago, man first started to walk on earth, and look what has happened to this point. That's a very short time period, isn't it? It took millions of years for dinosaurs to evolve through their different iterations, but man has moved dramatically over 40,000 years, and I would argue that in the last three thousand years it has particularly been an exponential rise, and in the last hundred years it's taken on an extraordinary climb.

But what were the things that made this human superhuman compared to 40,000 years ago, or even three thousand years ago, or even a hundred years ago? Number one is longevity. Our lifespan at the dawn of 1900 was about 45 years; it's now 80. And you heard from the prior panel and prior participants that their expectation is that the generation being born now will likely live to 120, and that's absolutely true. That's already superhuman. Second, look at the things that the new generations are developing. These are all because we've empowered ourselves, through education number one, and through education we've adopted technologies: antibiotics, computers, all these things. These are one-offs, but if you think about surgery, if you think about medicine and what it's done for each of us, I would argue that we're already on that pathway. Yes, the new opportunities are even more dramatic, but those are things to be embraced; they're not things to be afraid of. Thank you.
Gigi, your argument for go-slow, please.

Okay. So I am not usually on the go-slow side of an argument; I'm usually on the put-your-foot-on-the-gas side. But I am going to urge some caution in this case. I am a scientist. I believe you have to do research, you have to be able to prove things in an objective way. But when it comes to potentially controversial areas of research that have dual-use qualities, this movie has played out a few times before, and one thing that's constant is that things are not going to go the way that you think. You have to be ready for surprises, and you have to be able to take what short view you can into the near future, to see what safety and security elements can be addressed as you're doing the research.

So the first thing I want to communicate to you is that we need to be careful not to be overconfident about what kinds of soldiers, and what qualities, the military will need in the future. Different skills could be required than what we might think of right now, and it might be that some of the things we would like to enhance will not be as useful as other qualities, and that some of the things we might want to enhance or play around with, including memory or the ability to be fearless, have negative sides as well, and we need to address those as we're developing them. Also, even straightforward public health advances have the potential to be misused when you create a monoculture of your population, and that's something that needs to be addressed. Even the most positive step, like ensuring that a group of people are all resistant to a disease, makes that group of people a monoculture, and that could be exploited for negative purposes.

It's also really important to make sure that you give scientists, as they are working on this, the tools to look into and address safety concerns and security concerns as they come up. Self-governance is not just a nice thing to have for science; it is a necessity, because it's only the scientists working in this field who understand how it could be misused in the very near-term future. So they need to be given tools, and they need to be encouraged to think outside the box and not have tunnel vision.

I also want to underline that no new technology is going to help us implement what we already know, and there are a lot of ways that we already know how to improve performance that we're not taking advantage of. We know that eating well and sleeping well are going to improve your mind and your ability to make decisions, and we don't do enough to address that. We know that to get the best population, the smartest group of people, you have to increase the diversity of the pool of people you're drawing from, and we know that we don't do that enough, and that the military also has a diversity problem, and a gender diversity problem. So using this power of statistics to increase the pool of people that you want to have, and making sure that you retain those people, will go a long way toward performance enhancement in a group way.

So in closing: all these advances we were talking about are really exciting. We should absolutely keep going, but we need to examine safety and security along the way, and as long as we're interested in increasing the capabilities of our population, we need to use the power of statistics and make sure that we implement what we already know.

Thanks, Gigi. Thank you very much. Now, Jeff, back to you. You have two minutes for adding additional points, making what rebuttals you feel you can to Gigi's points, and a closing argument.

You know, I cannot disagree with Gigi. She has articulated beautifully the
points that have to be considered, but not a single one of them says stop. She already admitted that she, too, is just like me: for pushing forward as fast as we can, but doing it while giving due consideration to the things she spoke of. She's absolutely correct; those things, from diversity all the way to the unexpected, untoward effects, all have to be considered. But not a single one of them tells you don't go fast.

And there's another reason to go fast. It's because in our country we can have a debate like this in the open, as we are, in a very diverse group; just look at this stage, it's a very diverse group. That's very unusual when we look at our peer competitors. Our current competitors are doing things that we would never think to do. For example, eugenics: we don't ever think about doing eugenic experiments here, but I can actually promise you that these are things being considered elsewhere. You heard Dr. Wolk talk about the biological threat: we chose to shut our program down, while the Soviet Union was banging right along. So there is a moral obligation, I believe, from our standpoint, to go fast, because if we don't get there first, somebody else will, and they may not share the concerns that Gigi so beautifully articulated that we have.

Look, what is coming is coming. We can debate it all we want, but it is coming. The question is: do we get there first, or do we get there second? And I would argue that when the US had the power to use nuclear weapons, it did not use them, except for the time it was backed against the wall during World War Two. Since then we were dominant, and we never used them. We could have gotten ahead in the bioweapons space, but we didn't, and it's because we believe in this country in the moral obligations that we have to humanity, not just the United States. And so I argue that we have to go fast, we have to get there first, because only we can trust ourselves to ask the tough questions that Gigi posed, because they're not going to ask them elsewhere.

Jeff
and Gigi, thank you both for your superhuman debate on this point, and a lot of not just energy but a lot of insight as well. We will come back to some of these issues, but we're going to move on to a second topic now. The second topic is bioengineering, including genetic engineering, to create so-called superhuman helpers. An example of that would be more effective canaries in a coal mine: super-sensitive animals and plants. We actually saw some of these examples in earlier presentations today: animals that could be genetically engineered to be more intelligent, more capable. And if you think about the role that, for example, some marine mammals, porpoises in particular, played in undersea detection of ordnance and so forth, you could imagine taking that to the next level. And there would be some ethical implications, of course.

On this topic we have Dr. Patrick Boyle and Dr. Peter Carr. Patrick has a PhD from Harvard Medical School and since 2012 has had the pretty awesome title of organism designer at Ginkgo Bioworks, Incorporated. Patrick will advocate the go-fast, gung-ho approach. Peter Carr will advocate go-slow on genetic engineering for superhuman helpers. Peter has a PhD in biochemistry and molecular biophysics from Columbia and is senior staff at MIT Lincoln Laboratory, where he leads the synthetic biology research program. Again, we're going to roll the same way: three, five, two. Patrick, you're up.

Great. I'll just say before we start that I'm sure Pete
actually probably agrees with me on the go-fast approach, but we argue about this stuff all the time, so I'm glad we get to do this in front of everyone else. So I'm going to advocate for the gung-ho approach, and I'm actually going to argue that, from a biodefense perspective, we're in danger of taking the go-slow approach by default, which I think is a mistake. I think Diane mentioned some of the applications of this field in the previous panel. I want to highlight a little bit of the history there, to compare and contrast the history of developing these technologies within the defense space and outside of it, because I think a lot of this community has a picture of technologies like the internet being developed by ARPA and then offering a great amount of good to the world, having been developed within a defense community that develops and understands the technology. Biology is almost coming from the other direction: a lot of the killer apps, as it were (which is a bad term in biology), the great applications of biology, are actually coming to the commercial space before they come to the defense space.

So an example that Diane brought up in the last panel was spider silk. I own a spider silk hat made by Bolt Threads; I wear it every winter. That's a product that I was able to buy off the internet as soon as it was available. But where is the incorporation of that type of technology into the DoD roadmap? In fact, just a couple of months ago there was an announcement that an aerospace company has started to make investments in spider silk composites in partnership with companies in that space, and it wasn't an American company: it was actually Airbus making this investment. So again, there's a lot of evidence that we're really in danger of falling behind here by taking a go-slow approach.

Moving more toward the human-helpers aspect, another set of examples that I'm much more personally involved in are some of the collaborations that Ginkgo has in the space of engineering microbes. For example, there's the product that Synlogic is making to address metabolic disease. Imagine the ability to not only take metabolic diseases off the table, but to enable warfighters to enhance their performance by having improved microbes in their gut that either improve their ability to utilize whatever food is locally available, or even just let them perform better. There's a lot of low-hanging fruit in being able to engineer microbes in a way that doesn't carry a lot of the same ethical implications as engineering humans, which we just heard about, and it's a technology that is not only near-term but is actually entering clinical trials and going to market in the commercial space right now. Another example is in agriculture, where we're partnered with Bayer Crop Science to engineer nitrogen-fixing microbes for non-legume plants. That's an application that is hard, maybe even DARPA-hard, but if you solve it, you can actually reduce about three percent of the world's greenhouse gases. So again, I think there are a lot of really great applications happening in the commercial space, and with startups, the world's largest corporations, and other governments committed to this, I think it would be dangerous for us to take a go-slow approach right now.
Patrick, thank you. Peter, you have five minutes. You can go fast and go slow, thank you, but make the case for going slow on bioengineering human helpers.

For all practical purposes, please consider me solidly in the go-slow camp, regardless of what may have set me up for it. In particular, that does not mean the no-go camp: it means going slow, and very thoughtfully. Gigi made some compelling points in there, and in particular I'd like to call out the way that, not Patrick in this case, but very frequently, this kind of dialogue is characterized by two things. One is hyperbole, going hand in hand with oversimplification, and I'd like to push back against each of those, to the extent that's possible in now less than five minutes. I'd like us all, collectively, to seek to move a little bit away from hyperbole and back toward reality, and I have my own opinions as to what that means. When it comes to oversimplification, I'm really a person who loves being in the weeds; thankfully for all of your sakes, that's limited by our timeframe, but I will try to touch briefly on an important detail or two that I think are meaningful.

So I have three overarching general points, which are a sampling of the larger number of, I think, strong arguments for going slow when it comes to engineering our living helpers, and I've been thinking more in the realm of animals and plants specifically, not that microbes are off-limits. One of them is an ethical conundrum that is only going to increase as we seek, especially, to enhance functionality in animals, to make them more useful, more capable, better helpers to us. We have a number of ethical challenges, which include the consideration of weaponization of animals, including how that then runs afoul of the Biological Weapons Convention and other important treaties. The potential for enhancements to the social, emotional, and cognitive capabilities of animals cannot be taken lightly: the very things that would make an animal much more helpful to us, say on the battlefield or in detection, would also put those animals more at risk. And in a sense we'd be seeking to make them more like us, which isn't necessarily intrinsically or automatically a bad thing, but it's definitely a go-slow kind of thing, to be taken thoughtfully. Consider, as you make an animal more intelligent: how do you think about consent under experimentation when it comes to those kinds of capabilities? I don't think it's off the table to imagine those sorts of things, particularly in the sort of environment we're conversing in.

Another argument for going slow, and Gigi touched on this as well, is that we tend to use technology to avoid harder truths and harder but more fundamental problems. As we seek to high-tech our way out of a problem, let's say the challenges of food availability and distribution, it becomes an easy pass to not deal with the fact that we actually can make enough food on this planet now; we avoid those hard human solutions in favor of flashy technological solutions.

The third category, where I'd like to add a little detail, is the area of unintended consequences, which was touched upon this morning, and thankfully, when it comes to meaningful details, we don't actually need to be too imaginative to get back to reality. The imaginative, future-looking movies and the great sci-fi books that were referenced this morning have a lot of value for imagining the possible, but we can also look at the reality around us, some of which is staggering and weird and amazing, especially in biology, for the plausible, for what's around us. In particular, I don't need to invoke Jurassic Park's "life finds a way", or a more recent one, Rampage, which starts off with CRISPR in space (there's the obligatory CRISPR reference), to get at some of the concerns and the things that can go wrong, in particular regarding control systems, safety features, and kill switches. Those break. John Glass acknowledged that those break as well, and that there's a need for redundant control features. Nevertheless, that's one area where we are surprised, over and over, by the different ways that our control features, and more generally our control of biology, our control of life, break down. We're perpetually surprised; we should not be surprised that we end up being surprised.

In particular, I'll give a couple of examples of things that surprised us, when we thought we knew everything, or at least the salient details, about a system, and it turned out that we didn't, with radical consequences. We used to think we knew how inheritance works: genetic inheritance, which was straightforward in a lot of ways. But then epigenetics came forth, and all of a sudden there was something new on the table. A long time ago, it was surprising to think that viruses could cause cancer. A more recent surprise has been the emergence of the fact that cancer itself, in some species, can be an infectious agent. I'm not talking about viruses; I'm talking about cancerous cells themselves, which can infect another individual of the same species. The reason I bring these up is that these are examples of failure modes that would be surprising, that we might not have thought of when we design a control system. Sexual reproduction, and control over fertile males or fertile females, is another one. A really staggering example, one that surprises me, is the fact that there are now crayfish that evolved in recent decades, literally the last few decades, from a sexually reproducing species to an asexually reproducing species: the marbled crayfish, now an invasive species in Europe. I'll close out there, just recognizing that there are so many ways we are going to be surprised, and that we have to factor the room for those surprises into our risk assessments and other expectations of the future.

Peter, thank you. Patrick, you've got two minutes to
counter the excellent points made.

Well, I agree with you, Pete, so thank you for raising those points. I hope, especially after the talks this morning, that we all assume that any move-fast approach is going to come along with considering the ethical, social, and legal implications of our work. However, I would say that none of the work we're doing happens in a vacuum, and one of the things we need to think about is competition. So I would ask this group whether we're prepared to be fighting this generation's space race, and I'd say we're actually fighting a war on two fronts. One is competition with others: private investment in synthetic biology in the United States is around two billion dollars this year, about 1.5 billion the year before, which is great in terms of private investment. But if you look at what the Chinese government is doing, they're likely making investments at least on that scale, and we know much less about what's happening over there. We're also fighting this war in terms of making sure that we can become better stewards of the environment than we have been to date. So yes, there are implications to the misuse of biotechnology, particularly for non-human animals, or animal helpers, as we're calling them. At the same time, we've already done a lot of damage to the environment, so how can we actually leverage this technology to do better than we have in the past? I think that's a real opportunity that we have. And again, timing is everything: we don't have a lot of time to think about this, others are moving forward, and there are a lot of problems out there that we've already created and need to solve. So I agree with Pete wholeheartedly that we should consider doing this smartly, but at the same time we don't have the luxury of waiting for the best solution to emerge.

Patrick, Peter, thank you very much. We'll
return to some of both the specific issues and the common themes between these two debates after we get through our next debate, which I think in some sense may seem too broad for us, but has particularly strong implications that cut back into both trying to create human superhumans and trying to create superhuman helpers. Our third topic is whether to be gung-ho or to go slow on bioengineering the environment, and Dr. Sarah Carter and Dr. Jason Matheny will spar off on this topic. Sarah will advocate a gung-ho approach to using biotech, including genetic engineering. Near-term, that could be bioremediation; longer-term, it could be terraforming, whether of the world we're on now or a world we want to live in in the future. This is science fiction today, but it has important elements that could come to fruition not that far down the line. Sarah received her PhD in neuroscience from UC San Francisco, did time, if I can say that as someone who spent time in the administration as well, serving her country at the White House Office of Science and Technology Policy, and is now principal of Science Policy Consulting LLC. Dr. Jason Matheny will advocate a go-slow approach on this topic of genetic engineering of the environment. Jason was, until about a month ago, director of IARPA, the Intelligence Advanced Research Projects Activity. He has an MBA and a master's in public health, and, most impressively to me (I didn't know this until recently), a bachelor's from the University of Chicago in art history. He has incubated a number of firms in the private sector as well. The format is the same: three, five, two. Sarah, you're up.

All right. Well, there are so many
opportunities for bioengineering the environment. The way I've thought about it is that these would be advanced biotechnologies to make the environment more suitable to people and to help us further our own goals. We've heard about some of these things earlier today. There are all kinds of sensors we could have: plants and microbes that tell us different things. We could go a step further and actually put barcodes, microbial signatures, in different places, or on specific people or objects or vessels, to be able to see where they have gone and where they're going, and to track them in different ways. But we could also engineer the environment itself. For example, gene drives can help us walk into an environment and make sure that the vectors no longer transmit their disease, so that the mosquito no longer carries malaria. That would be a really major advance. You mentioned bioremediation already: engineering microbes to break down contaminants and make the soil uncontaminated. As part of that, we can make the soil better for plants; I think Patrick mentioned the nitrogen fixing for better plants. People have also talked about things like engineering corals. Right now we're just talking about engineering corals to make them more resilient to climate change and things like that, but if we can understand how all of those different things work, and if we can push them even further, we could get to something more like terraforming: taking the environment and moving it in a direction, and into a place, that maximizes its benefits for us.

One thing I do want to emphasize, another thing that Patrick brought up, is US government funding in particular. There's a really good reason to be gung-ho about the US government in particular pushing these things: it keeps the US government at the table about how these things should be pursued, and that makes a really big difference, as so many of these things are being pursued commercially, by private funding, or even in other countries. If the US government falls behind on that, we'll be in a much worse spot. There's another reason to be gung-ho about it now, which is that there's a very iterative nature to development right now. We have a very limited understanding of microbial ecology and things like that, and if we go now with these under-developed, less scary types of technologies, we can learn from that. We can see how they interact, and we can learn a lot about the microbial ecology and the other interactions out in natural ecosystems, which will inform all of the more interesting and more powerful technologies into the future.

Thank you very much. Jason?
Well, Sarah was already somebody I really admired, and she's made a really thoughtful argument for this, so I'm going to focus on just geoengineering as an application, because it's the only case that I think I can win and get the Tesla Model 3 that Jim promised us. I think geoengineering is in some ways the most extreme of the versions of engineering that Sarah mentioned, and so the easiest for me to criticize, and that's why I'm picking it. It's also the one that I prepared notes for, and so it's the least amount of work.

So I'm going to make the argument that I think we should be pursuing geoengineering research carefully, because we might need a plan B in case our other mitigation efforts fail, but that there are four arguments for going slow. The first argument is humility: we really don't know what we're doing. Think, for example, about re-engineering phytoplankton, which have already undergone selection pressures over three billion years and are immersed in complex food webs. There's enormous potential to massively screw up organisms that produce most of our oxygen, and that would be bad, not only because of widespread collateral damage but also because of extinction-level events, which we'd like to avoid.

The second reason for going slow is that it's really unclear how domestic laws and international treaties would proscribe certain kinds of geoengineering efforts. The 1970 Clean Air Act prohibits actions that risk permanent damage to the atmosphere, which could include, for example, changes to phytoplankton that are permanent. The UN Environmental Modification Convention, ENMOD, which we ratified in 1980, prohibits what's seen as hostile damage to the environment, without defining what hostile is. And then there are the Convention on Biological Diversity and the UN Convention on the Law of the Sea, which we're not signatories to, but in which we really have a self-interest in finding some international compliance, in part in order to urge restraint in what could otherwise be unilateral action on the part of state actors that are up against the wall in addressing, say, climate change: low-lying countries that would have a high temptation to act unilaterally, possibly in reckless ways.

A third reason to go slow is that going quickly is likely to be self-defeating: even a minor mishap in, say, an open-ocean engineering effort is likely to cause an enormous chilling effect on publicly funded research, so going quickly could mean not going at all. And a fourth reason to go slow is that geoengineering, as well as some of the other applications that Sarah mentioned, for example introducing permanent changes to some insect populations, can be weaponized, and there is a proliferation risk. So we should really avoid developing technologies that allow fairly cheap, quick, and low-observable interventions that permanently modify the environment. Instead, we should take the same approach we took to uranium enrichment, in which you want the technology to exist for yourself, but you want it to remain really expensive, really complex, really hard, and really easy to observe from space. So for those same reasons, let's keep it difficult.

Jason, thank you.
Sarah all right I think those are all great points and as you can imagine I
agree with many of them. But the way I would put it is that if we are going to be gung-ho on the technology, we have to be equally gung-ho about addressing the concerns, doing the outreach and engagement, especially in these international forums. For our first generation of biotechnologies, genetically engineered crops, which are extremely simplistic in the context that we're talking about, we've sort of agreed to disagree with the rest of the world: there are some people that do it our way and some people that don't. But we're not going to have that luxury if and when we move forward with some of these technologies, and so it's going to require a lot of work, and it's going to be very iterative. The same way that we have to move forward slowly with the technologies, and learn how they interact with the ecosystems, we'll have to do the same with those international discussions, with that engagement, with the talks with regulators, and with environmental assessments and risk mitigation and all of those
things. And I would say, too, that if you do that engagement properly, it can be immunizing against the types of backlash that you were talking about; you can really move forward and not get all the worst of that backlash. Instead of a no-go, you may have to go a little more slowly, but it's not going to wipe it out completely. So I think that if you do that
engagement, and you recognize that there are valid concerns, and that especially when dealing with a complex environment there's a lot of uncertainty, I think that you can move forward in those contexts; but you do have to be very gung-ho about that engagement, about the international discussions on those topics. Okay, thank you very much. This has been a great discussion. I said
at the outset that each of these individuals had more nuanced views than could be captured by either gung-ho or go slow; they made that clear in their presentations. I'm going to take the prerogative of the chair to ask a few questions, and then we'll have audience questions after that. First I'm going to try to get them to dig in a little bit on their original positions, starting back with Jeff and Gigi. Jeff, what is the nightmare scenario if we don't go fast? Gigi, what is the nightmare scenario if we do go, particularly too fast? As we think about human engineering, if you're trying to create more capable, quote-unquote superhumans, what's your single nightmare scenario?
Disparity. Disparity. A superhuman is only super if they're super over somebody else. If everybody can run, you know, a four-minute mile, they're equal; they're not superhuman. But if everybody's running six minutes and this person is running four minutes, they're super. So it's disparity; in my mind, the biggest nightmare, in my opinion, is disparity, because once you have a disparity situation, that means that one group can dominate another one. Just to be clear, that's your nightmare scenario for go faster? Not go fast, but we're not going fast; what do you want me to do? I guess I'm going to intervene: so if you go fast, will you have disparities? If we go, we will have disparities; they're intending to create disparity; you'll have huge disparities, because the people who have the most money are going to get it first, right? Okay, I think I understand the answer; you surprised me with the way in which you answered. Well, let me go to Gigi and then see if we get a dialogue. What I would worry about, and I don't think it's my all-time
nightmare scenario, but I worry that there are going to be things we could have done along the way that we're going to miss out on if we don't have a thoughtful approach going forward, and if we don't keep asking the developers: are you considering this, are you considering this? And have an open discussion, because otherwise there are going to be missed opportunities where we could have done things better, and maybe reduced some of these disparities, which I think are already a huge problem. Okay, thank you. Now, Patrick: nightmare scenario if we do not go fast, if we don't follow your advice? Yeah, I think, along with Jeff, I would assume that if we don't go fast, others will, and I think we've seen this in many different technologies: these technologies really reflect the values of their creators and the motivation behind those technologists, in terms of the problems
they're trying to solve, and what they weren't thinking about. You know, something that keeps me up at night is thinking about how computer technology developed; if we could have a time machine that went back to the 1960s and talked about cybersecurity and the implications of it, maybe that technology would have been developed differently. And I'm afraid that if we as a nation are not involved in moving quickly in this space, those norms and standards and implications will be set for us. Great.
Peter? So, true to how I set out, I'm going to avoid the imaginative nightmare scenario; my imagination can take us to plenty of places. But the concrete, serious risk that I see is severe damage to ecosystems, in particular put in the context of much more damage than we are already doing to ecosystems. That can come from microbial, plant, or animal sources if we run the risk of creating invasive species, which we've already had a problem with through our normal human efforts, even before we started doing more serious engineering. Let me stay on this topic for a second: so if we
think about both what the United States does and supports, through funding and through its governmental activities, and what other nations and non-state actors do, and you can cut across all the topics that we have here, does that change your perspective? In other words, is there an argument to go fast on the R&D aspect, to understand these technologies at least, so we can think through implications and countermeasures as well? Fair enough. Yeah, I'm not so much go slow as go carefully and thoughtfully, at the best speed that you can while being careful and thoughtful. So the R&D side is really fundamental, to be able to evaluate the risks more clearly, including giving yourself a buffer for how you think about those risks. Okay, great, thank you. Sarah, your nightmare scenario?
All right, well, I want to agree with some of the things that people have said: if we don't do it, other people will, and then we'll need to deal with that without having the experience of having done it and knowing more about it. But in addition, especially for some of these first steps that I mentioned for the environment, there are risks to not doing it. You know, malaria is out there transmitting disease every day, and that's a measurable risk, so if you can do something to stop it, you will have a net gain even if you go no further. The same is true for bioremediation and for engineering resistance to climate change into coral. These are real, tangible benefits right now that we could be taking advantage of, and we're not. Maybe the environment is a unique case, in that this is what has driven a lot of the first investments in these things: addressing actual, real problems with real consequences. Thank you. Jason, your nightmare scenario? So it
seems like there are lots of clever mechanisms that we've been able to figure out over the last few years that nature didn't arrive at despite billions of years of investigation and pressure, and it seems like some of those could end up actually creating extremely vulnerable or fragile systems that could be exploited by bad actors. The kind of scenario I worry about is if you've got, say, a dictator who's backed into a corner and really has nothing to lose by threatening, you know, shutting down oxygen production or nitrogen fixation or other processes that we depend on. It would be really good for the recipes for doing that not to be published. Those are great comments. I want to put
forward three propositions about what, if you will, an integrated approach to going smart would be, because each of you, in your own way, is saying that in some ways you want to go fast and in many ways we want to be careful. We can call that the go smart approach, which is hard to argue against as a bumper sticker; but if you then try to unpack it, I want to put forward three propositions that I've heard in the conversation, and we'll go in the
opposite order this time. Jason, we'll start with you; I want you to either contradict one of my three, or add an additional one, or modify one. Okay. Proposition one is: go fast on research and development, particularly on the research side, to build the base of understanding, and wargame it and analyze it; it's not just what's done in the laboratory, it's thinking through what the possible consequences are, as we've heard about in the conference today. So think of that as the research and development side. Number two: go slow on those things that are irreversible, particularly when they are existential; you can refine that a little bit. And number three: go fast, using the research and development as a base, on things like indicators and warning of what states and non-state actors may be doing; go fast on developing countermeasures, understanding that sometimes there can be ambiguity about whether you're working on offense or defense, if you will; but go fast on exploiting that research knowledge for the defensive and protective side, and also to create a basis for public education.
So, Jason: kill one of those, or modify it, or add a new one. I'll modify the one that says go fast on R&D: I would say go fast on R&D for things that are defense dominant; on things that are offense dominant, go slow and keep it private. Sarah? Yeah, I would just modify number
three, go fast on indicators and countermeasures, the defense side, to make sure that the countermeasures include countering degradation of the environment. You know, some of the measures that we talked about, like bioremediation, to me seem like a no-brainer: something you can do and work on in a controlled way that has tangible benefits and is not going to be perceived as, or be, offensive in any way. Okay, great, thank you.
So I would modify go fast on R&D in order to especially emphasize not just application-oriented R&D but foundational research to understand the living world around us, which has been going on for a long time but which is easy to lose sight of in terms of government investment dollars. And I would argue that there need to be more government investment dollars in basic biology research, including, in spite of my own molecular preferences, at the non-molecular level, at the organismal and ecosystem level, around the world, so that we can better understand the nature of the living world and then be a little less surprised by it when it isn't as engineerable as we think it might be. Thank you.
Good. Patrick? I would add one point, which would be to be as transparent as possible. Biology crosses borders, and we've been talking a lot about competition, but there's also a tremendous opportunity for more international cooperation, both in terms of understanding what people's motivations and values are, and in terms of understanding what technologies are out there, to try and prevent the type of surprise that could cause panic. So I think, as much as we can, being transparent about our motivations and what we're actually doing could help prevent some of those downside scenarios.
Okay, great. Thank you, Patrick. Gigi? I would say to go as fast as possible on things that could actually improve people's lives or save lives, because right now a lot of these technologies could actually be saving a lot of people on the planet, and they're not; we're not using them, we're not pushing them as far as we can. I'm going to be unfair to you, Gigi, and ask: if one of those allowed doubling of IQ for 10% of the population, would it fit in that category? For the thousands of people who are dying of malaria, I don't think it makes a difference. Okay, thank you. I thought that might be your answer, but I wanted you to express it. Jeff? I wouldn't change the way you articulated those three points; I think
they're actually quite good. The only thing that I would add is something that Peter talked about, which is: respect the unexpected. I mean, that is always going on, and certainly at the basic, fundamental science level we don't. Look, I have a PhD in pharmacology, and I will tell you, as a basic scientist, and I had a lab for a long time, we don't think about that enough. CRISPR was just out there, and it is a double-edged sword, just like everything else. I think that one of the things that we have not really, as a society, talked about is engagement at the fundamental science level with the ethicists. We just assumed that because they're academics, they behave that way; but I would remind you that it was Craig Venter, a basic scientist, who created the first synthetic cell, and he did it because he just did it. Dolly the sheep was done in basic science laboratories. These are all things that were done for the sake of science, but they really didn't give much thought to the ethics of it; they just said, this is good science, so do it. So I would not change the three things that you talked about, but what I would add, sitting where I am now, an old, burned-out, ugly Colonel who ran the BTO before my good friend, the handsome Justin Sanchez, did, is that it's really remarkable that we in the DoD think about these problems, but if I go to the average university, they don't think about them at all. They talk about it over beer, but they really don't think about it at all. Okay, thank you.
We now have time for some audience questions. A question for the panel: in the go fast but with accountability argument, who should be part of that accountability and oversight? Should it be scientists and government only? What about press and citizens? On the environment side, I think there's no way you can keep it government and scientists only; with the environment, everybody is a stakeholder, and I think there's no way around that. I would add to that: even a lot of the demand signal for a lot of this development is coming from citizens, right, or consumers, or warfighters. I think not enough attention is paid to how these applications can actually positively impact people, and coming from a basic science perspective and moving towards applied science, that's often hard. This is one of the fun parts of iGEM, I think: working with teams and trying to figure out how they are actually impacting the end user. I think oftentimes we overlook the needs of those folks when considering what we should do. Gigi, you want to add?
Just to say, I mean, the conversation has to be broader, but scientists have to be included, because so often these are really technical discussions, and when you abstract them with language that makes them more simple, it often confuses the issue. So it's important to have it be a group conversation, but the technical expertise has to be there. Okay, thank you. I'm very much in favor of bringing in multiple stakeholders, including representatives from the public. Where I struggle, and it's a question back to all of us because I don't have the answer, is what's the most effective mechanism for bringing them in. The public we talk about is a big cloud, you know; we're not going to pick a person off the street, necessarily. How do we find the right mechanisms to engage with the public, and not simply preach at the public or tell them what the right answer is? I don't have that answer, but I'd like to hear ideas.
I'll take a swing at, not a complete answer, but just to suggest that events like this, where individuals go back and talk to their colleagues, including in the press, as well as to the people in their community, about these issues, are a good starting point, like the old traditional town meeting. Next question: many of the debaters expressed concern about unintended consequences. Given that the technological applications we pursue will often surprise us, how can we ensure we are prepared for the consequences if we can't be sure we envision them? Good
question. I mean, I think the number one thing that we talk a lot about at Ginkgo is just: respect the technology. It's a question of understanding what is the envelope of biological functions that you're exploring when you're developing something, and again, to the previous comment, figuring out who are the stakeholders who get impacted by what you're doing, both on the positive side if you're right, but also on the negative side if you're not. But I think fundamentally it has to come from a respect for the technology and what nature has already shown us it can do. I think red teaming is incredibly valuable. You know, in the
military we do red teaming all the time, but we don't really do it for some of the science and technology on a formalized basis, and we should, whereas in the military, in the DoD, there's a very formal approach to red teaming. We should do that, because that is in fact how you find out these unexpected things. And we have to do it in a very broad way, in that it's not just the usual scientists alone, but also scientists from other disciplines; it's going to be folks that you wouldn't even normally interact with, somebody from prison, for example. I'm not joking, because if you think about it, that's in fact where these unexpected things come up; you morally or ethically just can't think that way. I mean, you could not possibly take a knife and go stab somebody in the chest, but there are people who do, and those are the kind of people you have to engage in the conversation in a red team, and I'm being deadly serious about this, if you really want to find out what
you're not thinking about. Also, on the positive side, sometimes when you have a problem that seems like a really difficult technological problem from your disciplinary perspective, if it has a broader airing, it is sometimes solved by somebody from a different perspective; when it's not so much a security risk, it helps to give it that broader audience. Great, thank you, good answers. Any more questions from the audience? Oh, I've got
more; let's go ahead, let's have another. What do you see as the single most promising enhancer of the quality of life for all that also has tremendous potential downside or risk? I can take this one. On the one hand, because human cognition is the driver for all the innovations around us that have improved human welfare over the last few centuries, even if you could enhance only 10% of the human population to all be Richard Feynmans and Marie Curies, that would offer enormous benefits to all of humanity. On the other hand, it would also introduce new disparities. Looks like the panel agrees with that answer across the board. Yes; yeah, I think disparity is the thing. Our current economy almost favors that, right? So we have to think about how that will play into this technology.
Okay, another one: if our adversaries are already moving ahead in human adaptation, where is the regulated, controlled experimentation occurring now, so the US doesn't fall behind while it is politically vacillating between the plant eaters and the meat eaters? The one thing that the US has that's an advantage is that we are a democratized society, which means that kids raised in this society are taught to challenge conventional wisdom, and so on and so forth. That's actually part of your PhD program, as an example. But the whole idea is that that allows for the imagination and the innovation; that's really the way the US stays ahead. The US stays ahead because we embrace innovation, we embrace radical ideas, we embrace novelty; those are things we embrace. It's not
entirely true if you go to other countries, and in particular peer adversaries. In fact, I was at a really remarkable conference recently with the chairman of a company, I can't remember his name, but he was really remarkable, and he actually got up there and said: if you try to beat us based upon, you know, industry and that sort of thing, and he's talking about China, that will not happen; what will beat us is our own government. That was a really remarkable statement on his part, and it really speaks to the societal differences. And I think, in fact, to answer your question, the way that we stay ahead is to embrace more of what we are: the diversity of who we are. Embrace the radical ideas, embrace the difference. In our society we embrace the differences of each other, which is an awesome thing; it's truly, truly awesome, and that's, I think, in fact what we have to continue to encourage. And we could go
through a political discussion, which I'm going to avoid completely, but I'm just saying that we should embrace those things that really are the qualities that keep the US ahead. It's a great call to action, and it's fitting. I'm reminded of Winston Churchill's comment about America: he said the Americans always get it right, after they've tried everything else. But that sense of experimentation, that willingness to give it a go, particularly if you can do it in a way that doesn't put great risks on the population or on our future as we learn, is a tremendous approach. I want to offer a last word; we'll come right down the line, and Jeff, we'll start with you. The only requirement is you're going to say the single most important point that people should take away from this conversation, and you have to do it in one breath. Opportunity. Thank you, Jeff. Embrace diversity, and get your flu shot.
Did that yesterday. Sorry, we're not talking about the future; we're talking about biotechnology, we're talking about the present. Thank you. Peter? Foundationally, we need to embrace a greater degree of logical consistency and understanding of the fundamentals of risk, one that crosses not just the biological lines but really our societal lines for how we think about risk, probability, and other factors. Thank you, Peter.
Sarah? I think it's okay to be gung-ho about some of these technologies, but we have to be equally gung-ho about addressing some of these other issues and making sure that we're doing it carefully and with a lot of engagement and input. Thank you. Jason, you have the last word. Pre-mortem: think about what could go wrong before you do it. Thank you, that's a good one. For those who don't know, there's a nice piece, and I'll pull up the person's name in a moment; it's about 25 years old now, but it says to apply a post-mortem methodology but do it before you undertake the operation. I didn't have to do a pre-mortem, but I'll do a quick post-mortem: terrific job, panel. Thank you very much.
