Yudkowsky and MIRI
July 27th, 2017
airisk, ea |
They brought up the example of "So you want to be a seed AI programmer", saying that it was clearly written by a crank. And, honestly, I initially thought it was someone trying to parody him. Here are some bits that kind of give the flavor:
First, there are tasks that can be easily modularized away from deep AI issues; any decent True Hacker should be able to understand what is needed and do it. Depending on how many such tasks there are, there may be a limited number of slots for nongeniuses. Expect the competition for these slots to be very tight. ... [T]he primary prerequisite will be programming ability, experience, and sustained reliable output. We will probably, but not definitely, end up working in Java. [1] Advance knowledge of some of the basics of cognitive science, as described below, may also prove very helpful. Mostly, we'll just be looking for the best True Hackers we can find.
Or:
I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone's reckless youth against them - just because you acquired a doctorate in AI doesn't mean you should be permanently disqualified.
Or:
Much of what I have written above is for the express purpose of scaring people away. Not that it's false; it's true to the best of my knowledge. But much of it is also obvious to anyone with a sharp sense of Singularity ethics. The people who will end up being hired didn't need to read this whole page; for them a hint was enough to fill in the rest of the pattern.
Now, this is from 2003, when he was 24, which was a while ago. [2] On the other hand, it's much easier to evaluate than his more recent work. For example, they had a similarly negative reaction to his 2007 "Levels of Organization in General Intelligence", but I'm much less knowledgeable there.
Should I be considering this in evaluating current MIRI?
[1] This was after trying to develop a new programming language to create AI in, Flare:
Flare is really good. There are concepts in Flare that have never been seen before. We expect to be able to solve problems in Flare that cannot realistically be solved in any other language. We expect that people who learn to read Flare will think about programming differently and solve problems in new ways, even if they never write a single line of Flare. We think annotative programming is the next step beyond object orientation, just as object orientation was the step beyond procedural programming, and procedural programming was the step beyond assembly language.
— Goals of Flare
[2] I wrote to Eliezer asking whether he thought it was reasonable at the time: was it more like "a scientist looking back on a 2003 paper and saying 'not what I'd say now, conclusions aren't great, science moves on'", or more like something he would retract? Eliezer skimmed it and said it was more the first one.