How to Build a Human (2016)
60 min
They both sound like something
that a person could have written.
That's right. We are really heading
towards a kind of tipping point,
where things are going to accelerate
beyond anything we've seen before.
This is just really historic.
It's not just about muscle power
any more. It's about brainpower.
Machines are moving into cognitive
capability and that of course
is the thing that really
sets people apart.
That's the reason that
most people today still have jobs,
whereas horses have been
put out of work.
It's because we have
this ability to think, to learn,
to figure out how to do new things
and to solve problems,
but increasingly the machines
are pushing into that area
and that's going to have huge
implications for the future.
Are any jobs safe?
Right now it's really hard to
build robots that can approach
human ability in dexterity
and mobility,
so a lot of skilled
trade type jobs,
electricians and plumbers
and that type of thing,
are probably going to be
relatively safe,
but that's thinking over
the next 10, 20, maybe 30 years.
Once you go beyond that, really,
nothing is off the table.
But the biggest danger
may not be losing our jobs.
Professor Stephen Hawking
recently warned that the creation of
powerful AI will be either
the best or the worst thing
ever to happen to humanity.
Hawking was recently joined by
Tesla founder Elon Musk and other
leading figures in an open letter
highlighting the potential
dangers of unchecked AI.
One of the most vocal
was Professor Nick Bostrom.
Developments in the last few years
in machine learning have just been
more rapid than people expected
with these deep learning
algorithms and so forth.
How far away are we from achieving
human level artificial intelligence?
The median opinion - by which year
do you think there's a 50% chance? -
is 2040 or 2050...
Within our lifetime? Yeah.
Within the lifetime of
a lot of people alive today.
The concern there is you are
building this very, very powerful
intelligence and you want
to be really sure then
that this goal that it has
is the same as your goal,
that it incorporates human values
in it,
because if it's not a goal
that you're happy with,
then you might see the world
transform into something that
maximises the AI's goal but leaves
no room for you and your values.
This idea of autonomous and
dangerous AIs is a recurring theme
in the world of science fiction.
Are you ever going to let me out?
Yes.
Nick thinks that super intelligent
machines could one day
inhabit the real world and use
their power to negative effect
if we don't put
the right safeguards in place.
AIs could take their instructions to
logical but unanticipated extremes.
Ava, I said stop!
The concern is not
that these AIs would resent us
or resent being exploited by us
or that they would hate us
or something,
but that they would be
indifferent to us,
so if you think, maybe
you have some big department store
and it wants to build
a new parking place.
Maybe there was an ant colony there
before, right? So it got paved over.
It's not because we hate the ants,
it's just because
they didn't factor into our goal.
I see. And we didn't care.
Similarly, if you had a machine that
wants to optimise the universe
to maximise the realisation
of some goal,
in realising this goal, we would
just be kind of stamped out...
Collateral damage... in the same way
that... Yeah, collateral damage.
The way in which AIs can be diverted
from what their architects intended
played out earlier this year when
Microsoft introduced Tay to Twitter.
The chatbot was designed to act like
an American teenager
to attract a younger audience.
It worked by absorbing the language
of other Twitter users, but
Tay was hijacked by Internet trolls,
who gave it a very different
set of values.
The original intention was corrupted
and Tay was unable to work out
which views were acceptable
and which weren't.
Within a day, Tay became
a Hitler-loving, racist sex pest.
This shows what can happen to AI
if it falls into the wrong hands
but could a future Tay be far worse?
Once you have a kind of
super-intelligent genie
that's out of the bottle,
it's very hard to put it back in again.
You don't want to have a
super-intelligent adversary that is
working at cross purposes with you,
that might then resist
your attempts to shut it down.
It's much better to get it right
on the first attempt, not to
build a super-intelligent evil genie
in the first place, right?
You want to have, if you're going to
have a super-intelligent genie,
you want it to be... You want it to
be on your side. Yeah, exactly.
In Cornwall, our genie is about
to be let out of its bottle,
and I want to know
whose side it's on.
The robot's upstairs here.
Bear in mind this is
not your final skin,
so let's have a look inside.
Let's just let her peep out.
Oh, my goodness.
That's so weird!
See...
It's quite warm.
Just feel it, though. It's weird.
What do you think of the eyes?
Oh, my God!
SHE LAUGHS:
It's so strange.
Because she's not quite right,
but she...
You know, I can recognise...
...that the nose is...
Well, I mean, yeah. It's my nose.
Try asking her something.
OK. What have you been up to today?
We've been busy filming
season two of Humans since April.
And it's been very exciting.
Not bad!
So we can have a kind of guess at
the sort of things people might say,
saying...
I can't resist cheese on toast.
What did you have for breakfast?
I had a toasted cheese sandwich.
Is that because
you can't resist cheese on toast?
I like the taste of cheese.
THEY LAUGH:
Is that true?
Do you really like cheese on toast?
I love cheese, yeah. Ah!
So, you know,
there's a little personality trait
we might have got right. Maybe not
as much as she likes cheese. Right.
It's the facial expressions.
They're not quite in sync
with what she's saying.
Basically, there's
a slight software bug... Raining.
But don't worry too much...
GEMMA LAUGHS:
She's confused! Very strange input.
Yeah, she has, but, you know,
we've tried this,
so it's very, very new,
and what you'll find is things
will progress very, very quickly.
Facial tics aside,
every day the robot is getting closer
to me in how it looks, sounds and thinks,
so we've come up with an idea
to put it to the test.
Hey, Gemma. Hi.
Robot Gemma, do not fail.
We're building an artificially
intelligent robot that
looks like me, talks like
me and thinks like me.
And today, I'm going to meet her
in her finished form.
Last time I saw her,
she needed quite a bit of work.
So I'm hoping that today we'll have
more of a finished product.
Yeah, I don't know,
I'm quite nervous.
Oh, no.
Really strange.
It is spooky, isn't it?
SHE GASPS:
It's really quite uncanny.