#54   06-16-2016, 04:28 AM
lpetrich
Member

Join Date: Jul 2004
Location: Lebanon, OR, USA
Gender: Male
Posts: DXXIII

Re: Neural Networks (aka Inceptionism)

lisarea, that's likely correct. But a problem with a bottom-up approach is that it can be difficult to interpret what its parameter values mean.

The history of AI is rather interesting.
History of artificial intelligence - Wikipedia
Timeline of artificial intelligence - Wikipedia
Progress in artificial intelligence - Wikipedia
Applications of artificial intelligence - Wikipedia

AI was speculated on for centuries, and I think that the culmination of such speculations was Alan Turing's classic 1950 paper, "Computing Machinery and Intelligence".

In it, he proposed the Turing Test: in present-day terms, whether it is possible to write a chatbot whose responses are indistinguishable from a human interlocutor's.

Something like this:

Quote:
Q: Please write me a sonnet on the subject of the Forth Bridge.
A: Count me out on this one. I never could write poetry.
Q: Add 34957 to 70764.
A: (Pause about 30 seconds and then give as answer) 105621.
Q: Do you play chess?
A: Yes.
Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?
A: (After a pause of 15 seconds) R-R8 mate.
The addition result is incorrect: it's really 105721. Alan Turing was imagining simulating human thought processes, erroneous result and all.

The chess notation is old descriptive notation, which has gone out of style; in it, each player counts the ranks from their own side of the board. In modern algebraic notation, the exchange reads: I have my king at e1 and no other pieces. You have your king at e3 and your rook at h8, and no other pieces. What move do you make? The answer: Rh1 checkmate.
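As a sanity check, here is a small sketch that double-checks both of Turing's sample answers. It assumes the third-party python-chess package (pip install chess); that library is my choice for illustration, not anything from Turing's paper.

Code:
# Double-check Turing's two sample answers.
# Assumes the third-party python-chess package: pip install chess
import chess

# The arithmetic question: the machine's answer of 105621 is off by 100.
print(34957 + 70764)  # 105721

# The chess question, translated to algebraic notation:
# White king on e1; Black king on e3 and Black rook on h8; Black to move.
board = chess.Board("7r/8/8/8/8/4k3/8/4K3 b - - 0 1")
board.push_san("Rh1")            # Black plays the rook to h1
print(board.is_checkmate())      # True: d1 and f1 are covered by the rook,
                                 # and d2, e2, f2 by the black king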

Alan Turing also considered several counterarguments to the possibility of human-level AI, like a theological one about souls, the "heads in the sand" objection, Gödel's theorem, the informality of behavior, etc.

-

Not long after he wrote that paper, actual AI programming started: in 1951, the first programs that could play checkers and chess were written for early computers.

In 1956, a famous conference on AI took place at Dartmouth College in New Hampshire. Its proposal included this assertion: "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it".

AI had some successes over the next 18 years, like solving high-school-algebra word problems and understanding what goes on in "microworlds", like sets of blocks that can be moved around and stacked.

Some AI advocates back then were optimistic enough to expect human-level AI within a few decades.

-

But in 1974 came the first "AI winter": funding agencies became reluctant to finance much work in AI because the research was not delivering anything close to its more optimistic promises. Part of the problem was that those making the predictions had grossly underestimated how difficult AI would be. Another part was the enormous amount of computing power needed for applications like artificial vision, and another was the huge quantity of knowledge that is "common sense" for us. Both requirements were far beyond the computers of the 1950s and 1960s, though they are much less of an obstacle nowadays.

-

Then in 1980, AI funding came back, with such successes as expert systems and artificial neural networks.

But then, in 1987, came another AI winter and another funding drought.

Then in 1993, AI came back again, and it has continued to the present without another AI winter.

There are at least two differences:
  • AI has proven to be much more useful than it was in earlier decades.
  • AI researchers often don't call it AI.