Neural Networks (aka Inceptionism)

#26 · BrotherMan (A Very Gentle Bort) · 07-16-2015, 07:17 PM

:prettycolors:

:dead:

#28 · lisarea (Solitary, poor, nasty, brutish, and short) · 07-17-2015, 05:34 PM

Dogs and juggernauts. Juggernauts with dogs' legs.

#29 · Ensign Steve (California Sober) · 07-18-2015, 01:22 AM

They're doing it with music, too! :ipod:

No more Dogs! Deep Dreaming on MIT Places CNN - YouTube

RNN Against the Machine

#30 · BrotherMan (A Very Gentle Bort) · 07-18-2015, 03:07 AM

:freakout:

WHAT IF WE'RE ALL ALREADY PART OF A GOOGLE DEEP DREAM AND NOW WE'RE GETTING INCEPTIONATED.

:wheelchairpanic:

#31 · JoeP (Solipsist) · 07-18-2015, 10:25 PM

Have we had this? It starts with edge enhancement, progresses to the familiar dogslugs, eyeshoggoths, toy cars, etc., and gets really freaky with human-looking stuff later on.


#32 · JoeP (Solipsist) · 07-18-2015, 10:38 PM

Now. Srs bzns: read this. The Unreasonable Effectiveness of Recurrent Neural Networks

Quote:
... I'm training RNNs all the time and I've witnessed their power and robustness many times, and yet their magical outputs still find ways of amusing me. This post is about sharing some of that magic with you.
More specifically, Long Short-Term Memory (LSTM) networks.

Behold:

Quote:
PANDARUS:
Alas, I think he shall be come approached and the day
When little srain would be attain'd into being never fed,
And who is but a chain and subjects of his death,
I should not sleep.

Second Senator:
They are away this miseries, produced upon my soul,
Breaking and strongly should be buried, when I perish
The earth and thoughts of many states.

DUKE VINCENTIO:
Well, your wit is in the care of side and that.

Second Lord:
They would be ruled after this chamber, and
my fair nues begun out of the fact, to be conveyed,
Whose noble souls I'll have the heart of the wars.

Clown:
Come, sir, I will make did behold your worship.

VIOLA:
I'll drink it.
It can make up Wikipedia articles:

Quote:
Naturalism and decision for the majority of Arab countries' capitalide was grounded
by the Irish language by [[John Clair]], [[An Imperial Japanese Revolt]], associated
with Guangzham's sovereignty. His generals were the powerful ruler of the Portugal
in the [[Protestant Immineners]], which could be said to be directly in Cantonese
Communication, which followed a ceremony and set inspired prison, training. The
emperor travelled back to [[Antioch, Perth, October 25|21]] to note, the Kingdom
of Costa Rica, unsuccessful fashioned the [[Thrales]], [[Cynth's Dajoard]], known
in western [[Scotland]], near Italy to the conquest of India with the conflict.
Copyright was the succession of independence in the slop of Syrian influence that
was a famous German movement based on a more popular servicious, non-doctrinal
and sexual power post. Many governments recognize the military housing of the
[[Civil Liberalization and Infantry Resolution 265 National Party in Hungary]],
that is sympathetic to be to the [[Punjab Resolution]]
(PJS)[http://www.humah.yahoo.com/guardian.
cfm/7754800786d17551963s89.htm Official economics Adjoint for the Nazism, Montgomery
was swear to advance to the resources for those Socialism's rule,
was starting to signing a major tripad of aid exile.]]
and XML, and Linux source code, and LaTeX:

Quote:
\begin{proof}
We may assume that $\mathcal{I}$ is an abelian sheaf on $\mathcal{C}$.
\item Given a morphism $\Delta : \mathcal{F} \to \mathcal{I}$
is an injective and let $\mathfrak q$ be an abelian sheaf on $.
Let $\mathcal{F}$ be a fibered complex. Let $\mathcal{F}$ be a category.
\begin{enumerate}
\item \hyperref[setain-construction-phantom]{Lemma}
\label{lemma-characterize-quasi-finite}
Let $\mathcal{F}$ be an abelian quasi-coherent sheaf on $\mathcal{C}$.
Let $\mathcal{F}$ be a coherent $\mathcal{O}_X$-module. Then
$\mathcal{F}$ is an abelian catenary over $\mathcal{C}$.
\item The following are equivalent
\begin{enumerate}
\item $\mathcal{F}$ is an $\mathcal{O}_X$-module.
\end{lemma}
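
If you want to see the moving parts, here's a minimal sketch of the sample-one-character-at-a-time loop the post describes. Plain numpy, random untrained weights, so it prints gibberish rather than Shakespeare; this is just the mechanics, not Karpathy's code.

Code:
# Minimal character-level LSTM sampler (numpy only, untrained weights).
# Feed one character in, get a distribution over the next character out,
# sample from it, feed that back in, repeat.
import numpy as np

rng = np.random.default_rng(0)
vocab = list("abcdefghijklmnopqrstuvwxyz ")
V, H = len(vocab), 64  # vocabulary size, hidden size

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One weight matrix per LSTM gate, each acting on [input; hidden].
Wf, Wi, Wo, Wg = (rng.standard_normal((H, V + H)) * 0.1 for _ in range(4))
Why = rng.standard_normal((V, H)) * 0.1  # hidden-to-output projection

def lstm_step(x, h, c):
    z = np.concatenate([x, h])
    f = sigmoid(Wf @ z)      # forget gate: what to drop from memory
    i = sigmoid(Wi @ z)      # input gate: what to write to memory
    o = sigmoid(Wo @ z)      # output gate: what to expose
    g = np.tanh(Wg @ z)      # candidate cell state
    c = f * c + i * g        # update the long short-term memory
    h = o * np.tanh(c)       # new hidden state
    return h, c

h, c = np.zeros(H), np.zeros(H)
idx, out = 0, []
for _ in range(80):
    x = np.zeros(V); x[idx] = 1.0        # one-hot encode current character
    h, c = lstm_step(x, h, c)
    p = np.exp(Why @ h); p /= p.sum()    # softmax over the next character
    idx = rng.choice(V, p=p)             # sample the next character
    out.append(vocab[idx])
print("".join(out))

Training (which is the part that makes it write pseudo-Shakespeare) is just nudging those weight matrices by gradient descent so the predicted distribution matches the actual next character in the corpus.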

#33 · S.Vashti (nominalistic existential pragmaticist) · 08-15-2015, 01:45 PM

This stuff just does not amaze me. It's pretty much exactly what goes on in my head when I'm "zoning out" or going to sleep, but it's there all the time. My head is normally filled with faces and heads, unless I'm thinking about something specific; they are always there, morphing into this face or that: animal, person, cartoon, "alien", etc. It's just basic background brain activity, is my guess. I sort of think of it as a strong pattern-recognition resting state.

#34 · Ari (I read some of your foolish scree, then just skimmed the rest.) · 08-15-2015, 05:31 PM

Oh come on, it has to amaze you at least a little!
But yeah, you're right: certain people have different thresholds, and it's not uncommon for people to have a lowered threshold after certain drugs, in a condition known as HPPD (hallucinogen persisting perception disorder), where the more benign versions are literally visual static. Other drugs seem to raise the threshold. The interesting thing is that this threshold seems to be at least partly responsible for how intensely, or 'where', you have the sensory experience.

#35 · S.Vashti (nominalistic existential pragmaticist) · 08-15-2015, 05:52 PM

I think it struck me as so very familiar, it lost its ability to amaze me. To me, these were just instantly recognized as something I do as I live. Sort of like, I don't know, electric brain static, as you say, but with the "overhead light" of "me" making it into faces. I don't see it as hallucinatory; it goes away immediately once I concentrate on thinking about stuff, but I've become aware over the years that it's always there in the background. Seems to me that this is likely a normal result of "training" neural networks, only this time it's visualized for people who don't think visually.

#36 · Dragar (Now in six dimensions!) · 08-20-2015, 10:59 AM

Neural networks were (as you might guess from the name) originally meant to mimic the brain, in the hope that they might help us understand its behaviour.

They were pretty unsuccessful in that regard, and fell out of favour. They've come back as machine learning tools thanks to some recent mathematical advances.

#37 · Ensign Steve (California Sober) · 08-20-2015, 04:35 PM

Whether someone with a human brain sees these types of visions all the time or just with the assistance of drugs, how is it not amazing that we can get a computer to "see" them? That's the amazing part, to me anyway.

#38 · lisarea (Solitary, poor, nasty, brutish, and short) · 08-20-2015, 05:54 PM

Yes, that is the amazing part to me, too.

The parallels between the human brain and technology have always kind of freaked me out, but until pretty recently, they could be attributed to humans, consciously or subconsciously, creating technologies in their own image. They were pretty much limited, prescriptive* models.

But this goes beyond that. It is not based on explicit rules, but on the same type of pattern recognition that our brains do, without direct instructions and predefined patterns. Machines are learning the way we do. (Disclaimer: I no longer actually understand how AIs work, with the modern whoosits and the flurgenblurgen.)

* Bort: No. Stop noticing things right this minute. You are making me nervous, and you already made fun of me for this anyway.

#39 · S.Vashti (nominalistic existential pragmaticist) · 08-20-2015, 06:13 PM

I was reading stuff proposing this at least 20 years ago; how is this amazing? Our bodies are chemical machines, with consciousness as a product of the feedback of electrochemical reactions. To do this for an AI, I would simply guess that they needed enough computing power in a small enough space to accomplish the same thing. I'm fond of the concept of emergent properties being the foundation of consciousness, à la Daniel Dennett in his 1991 book Consciousness Explained, because really, biochemistry is just a subset of chemistry, which is a subset of physics, which is where AI intersects with human consciousness.

AI is going to happen; it will just take more work on our part to create the appropriate non-biological mimics. And then it won't be "artificial", it will simply be non-biological. We are all p-zombies.

P.S. This means that I don't think AIs will be more rational than humans or other animals. I think they will not be able to be rational in the way we think computers are rational, because the foundation of consciousness will rest on non-linear, randomized elements, and an error at the start of a thought will be just as detrimental to AIs as it is to humans.

#40 · lisarea (Solitary, poor, nasty, brutish, and short) · 08-20-2015, 08:19 PM

Twenty years ago, functioning AIs* (as far as I know) were still being developed primarily through humans discovering and articulating rules, and outlining relatively crude heuristic models for problem areas.

It's been the goal for much longer than that to have systems that actually mimic human learning processes, but much of the focus was on identifying and describing those processes ourselves. Machine intelligence was largely limited to identifying limited correlations and applying articulated rules.

It's gone well beyond that now, as machines are discovering those rules on their own, to the point that they have outpaced humans' ability to describe them. That is a huge tipping point, and it's (again, as far as I know) very recent.

Generally, the things we think of as 'rational' are mostly just the things we can articulate the rules to. That doesn't mean the things we classify as irrational are any less rule-based; it just means we don't consciously know what the rules are. And it is that type of irrational decision that machines are making now, which is the original, central definition of the technological singularity.

#41 · Dragar (Now in six dimensions!) · 08-21-2015, 11:05 AM

You should be a bit careful with this stuff, though. Brains are not desktop computers.

I mean this in a few ways.

First, brains aren't binary; even their lowest-level components aren't binary. That means that having a Turing-complete system doesn't make it easy to mimic a brain.

Second, brains operate in a completely different fashion to modern computers. A computer is a few (maybe a few hundred at its biggest) remarkably powerful processing units, each capable of huge amounts of computation. In contrast, brains contain massive numbers - many orders of magnitude more - of utterly crap processing units that don't even process things the way processors do.

The neural network stuff is cool, but I think it's easy to anthropomorphise. We might end up with an AI one day from all this stuff, but it won't be the same sort of intelligence we associate with humans without a big shift in how we carry out computation, and nobody is sure how to make that shift.

#42 · Ensign Steve (California Sober) · 08-21-2015, 07:54 PM

Here, let me say some stuff I think I know about the specific AI from the paper in the OP. I may be talking out my ass a little, because I don't have a PhD. You guys should probably read all these papers and fact-check me.

Quote:
Originally Posted by lisarea
The parallels between the human brain and technology have always kind of freaked me out, but until pretty recently, they could be attributed to humans, consciously or subconsciously, creating technologies in their own image. They were pretty much limited, prescriptive* models.

But this goes beyond that. It is not based on explicit rules, but on the same type of pattern recognition that our brains do, without direct instructions and predefined patterns. Machines are learning the way we do. (Disclaimer: I no longer actually understand how AIs work, with the modern whoosits and the flurgenblurgen.)
Recurrent neural networks haven't changed much since 1993, so I'm sure you are totally up to speed on that. Mostly I think the big difference between then and now is that as we get more computing power, RNNs become faster and cheaper. (:bunnythrust:)

Neural Turing Machines, like the one in the OP, take an RNN and train it to mimic a classical computer (given specific input, return desired output). This is where the "inception" and "deep" words come from. It's a simulated computer inside a real computer. So instead of human people specifying an explicit set of rules, the real computer feeds different random sets of rules to the simulated computer, and uses a reinforcement signal and stochastic gradient descent* to learn which rules are the best. So the big breakthrough here is that the simulated computer has access to memory in the real computer, so it can learn from previous iterations and remember things.

Of course from there, the next obvious step was to make a million of these simulated computers and have them all work massively in parallel on GPUs.** That way they can share memory and learn from each other.

What could possibly go wrong? :chin:

Quote:
Originally Posted by Dragar
You should be a bit careful with this stuff, though. Brains are not desktop computers.

I mean this in a few ways.

First, brains aren't binary; even their lowest-level components aren't binary. That means that having a Turing-complete system doesn't make it easy to mimic a brain.

Second, brains operate in a completely different fashion to modern computers. A computer is a few (maybe a few hundred at its biggest) remarkably powerful processing units, each capable of huge amounts of computation. In contrast, brains contain massive numbers - many orders of magnitude more - of utterly crap processing units that don't even process things the way processors do.

The neural network stuff is cool, but I think it's easy to anthropomorphise. We might end up with an AI one day from all this stuff, but it won't be the same sort of intelligence we associate with humans without a big shift in how we carry out computation, and nobody is sure how to make that shift.
NTMs aren't desktop computers either. GPUs, unlike CPUs, contain massive numbers of crap processing units (not billions, like a brain, but more than a few hundred -- thousands). Besides, isn't there a bit of a binary aspect to a neuron? It either fires or it doesn't; the analog part is the action potential. I don't think a neuron can half-fire (correct me if I'm wrong).

Anyway, that's all irrelevant. There isn't a one-to-one relationship from processing unit to neuron. Rather, virtual neurons are implemented programmatically. They're not any more binary than a Pixar movie is binary. They use a sigmoid function to mimic a neuron's action potential threshold. Okay, yes, a sigmoid function is continuous and a computer has to use a discrete approximation, but that's an error of only about 10^-38 at 32 bits, which is where most GPUs were last time I checked. Once we get them up to 64 bits, the error drops to about 10^-308.
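
For concreteness, here's a quick numpy peek at the float limits in question -- a sketch, not anybody's production code. "tiny" is the smallest positive normal number a float can hold, which is where 10^-38 and 10^-308 come from; the rounding step near 1.0 is "eps".

Code:
# Inspect float32 vs float64 limits, and run a sigmoid in both.
import numpy as np

def sigmoid(x):
    # the squashing function used for virtual neurons
    return 1.0 / (1.0 + np.exp(-x))

for dtype in (np.float32, np.float64):
    info = np.finfo(dtype)
    print(dtype.__name__, "tiny:", info.tiny, "eps:", info.eps)
# float32: tiny ~1.2e-38, eps ~1.2e-07
# float64: tiny ~2.2e-308, eps ~2.2e-16

print(sigmoid(np.float32(3.7)))  # ~0.976 at 32 bits
print(sigmoid(np.float64(3.7)))  # ~0.976 at 64 bits, more digits retained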

The most practical difference I see between virtual and biological neurons is that virtual neurons have a negligible reset time. I think the advantage goes to the machines there.


* Traditional machine-learning method. Sort of like trial and error with a hot-cold signal to tell it whether it's getting close.

** That's close to my heart cuz that's my speciality. :vibes:
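
Footnote * in code, roughly. A toy sketch, nothing to do with the actual paper: guess a parameter, get a noisy hot/cold signal from one random example at a time, step downhill, repeat.

Code:
# Toy stochastic gradient descent: recover the slope of y = 3x from samples.
import random

random.seed(0)
data = [(x, 3.0 * x) for x in range(1, 11)]  # true slope is 3.0

w = 0.0      # initial guess
lr = 0.001   # learning rate: how big a step to take on each signal
for _ in range(2000):
    x, y = random.choice(data)   # "stochastic": look at one example at a time
    err = w * x - y              # the hot-cold signal: how wrong are we here?
    w -= lr * (2 * err * x)      # gradient of (w*x - y)**2; step downhill
print(round(w, 3))               # ~3.0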

#43 · Ari (I read some of your foolish scree, then just skimmed the rest.) · 08-21-2015, 08:50 PM

Quote:
Originally Posted by Dragar
First, brains aren't binary; even their lowest-level components aren't binary.
There are some parts that are binary. As ES mentioned, a neuron either fires or it doesn't. There are a lot of analog bits both before and after, though. There's also decent evidence that brains have clocks similar to a CPU's.

ETA: Here's a great little graph,

As you can see, prolonged stimulus increases the frequency of firing but not the intensity.


#44 · JoeP (Solipsist) · 08-21-2015, 11:48 PM

We need The Lone Ranger in this thread.

#45 · Dingfod (A fellow sophisticate) · 08-22-2015, 11:59 AM

Too bad you can't summon him by tagging his name like in Facebook, or with a hashtag.

#TheLoneRanger

#46 · Ari (I read some of your foolish scree, then just skimmed the rest.) · 08-22-2015, 05:35 PM

To expand on that, the brain is a lot of analog gain controls combined with occasional binary filters. Neurons are able to perform weighted-value algebra.

At the end of an action potential a neuron releases neurotransmitters into the gap between it and connected neurons. These transmitters float across the gap and connect to gates on the other side like puzzle pieces; the three basic actions are to increase or decrease the polarization of the neuron, or to block other chemicals from doing the same. All of these actions are pretty analog, in that we are talking about many binding sites on a single receiving connection, and different chemicals have different levels of interaction with the binding site. Temporary or permanent removal of these binding sites allows the brain to adjust how much the next neuron listens to the signal.

The neuron collects and counts these now-weighted pulses from many other neurons; if enough depolarization occurs to push the neuron past its threshold, it fires, causing an electrical impulse to travel down the fibers and activate the release of neurotransmitters. The neuron then quickly re-polarizes to fire again, often hyperpolarizing for a brief moment, making it less likely to fire for a few milliseconds after a pulse. This allows constant stimulus to make it fire repeatedly, and excessive constant stimulus to increase the frequency of firing. Hypothetically there's a limit to how long a single neuron can sustain this repeated firing, but experiments have brought some of that into question.

All of this makes sense if you think about what the neuron was originally designed for, way back in the days of the jelly: converting sensory input to "Should I move this to my mouth? Yes/No" without constantly wasting energy on false triggers. You might notice that while it does algebra, it's a fuzzy version of it, as getting close is good enough.
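
In code, the weighted-value algebra plus binary filter looks something like this toy McCulloch-Pitts-style neuron. Made-up numbers, nothing anatomical:

Code:
# Toy neuron: analog weighted inputs, binary all-or-nothing output.
# Positive weights are excitatory synapses, negative ones inhibitory.

def fires(inputs, weights, threshold=0.5):
    # weighted sum of incoming pulses ("weighted-value algebra")
    depolarization = sum(x * w for x, w in zip(inputs, weights))
    return depolarization >= threshold   # past threshold -> fire

pulses  = [1.0, 1.0, 1.0]    # three upstream neurons firing
weights = [0.4, 0.5, -0.6]   # two excitatory synapses, one inhibitory

print(fires(pulses, weights))          # False: inhibition keeps it under
print(fires(pulses[:2], weights[:2]))  # True: 0.9 clears the 0.5 threshold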

#47 · The Lone Ranger (Jin, Gi, Rei, Ko, Chi, Shin, Tei) · 08-22-2015, 06:48 PM

An individual neuron is definitely binary insofar as it either generates an action potential or it doesn't.


An individual neuron has a resting membrane potential of -70 mV and -- typically -- a threshold of -55 mV. If -- and only if -- the neuron's membrane potential is pushed up to -55 mV, it generates an action potential. (Some neurons are more sensitive than others.) The AP of a neuron is thus "all or nothing," as physiologists describe it -- either an AP is produced or it is not, and the strength of the AP does not vary with the intensity or duration of the stimulus. (The same is true of an individual skeletal muscle cell -- it either contracts or it doesn't.)

Under normal circumstances, the neuron will not generate another AP until it has had time to repolarize. This means that even with constant or very rapid stimulation, there is a maximum frequency at which any given neuron can cycle through depolarization and repolarization.

It typically takes about 1 millisecond for a neuron to depolarize and generate an AP, then completely repolarize. Thus, theoretically, the fastest firing frequency of a neuron is about 1,000 times per second. Only a very few neurons can fire that fast, however. Most can apparently fire at a frequency of only 100-200 times per second, at best. (Some of the neurons in the auditory system can fire 1,000 times per second, which allows us to detect very small differences in when sound waves reach each ear.)



An individual neuron in the brain can synapse with 10,000 or more other neurons. Most neurons communicate with each other by releasing neurotransmitters, of course. Some neurotransmitters are excitatory and push the receiving neuron closer to its threshold, making it more likely to generate an AP. Other neurotransmitters are inhibitory and cause the membrane of the receiving cell to hyperpolarize, pushing it further from its threshold and making it less likely to depolarize.

So, an individual neuron can theoretically be receiving input from something like 10,000 sources simultaneously. Still, it either generates an AP or it doesn't -- there's no in-between, and the strength of the AP does not vary.

Whether or not it generates an AP depends on the summation of all those inputs. If the sum of all the excitatory and inhibitory inputs is sufficient to push the neuron's membrane potential to -55 mV, then it will generate an AP.

The impulses sum in both time and space. For example, suppose a neuron receives an excitatory impulse from another neuron -- and the impulse is not quite enough to push the receiving neuron's membrane potential to -55 mV. Maybe it will push it to -60 mV, for example.

In that case, what you get is a graded potential. That portion of the membrane partially depolarizes, and a graded potential travels down the length of the neuron, rapidly losing intensity as it does. So the graded potential travels only a short distance and the neuron rapidly returns to its resting state, with no AP generated.

Now imagine that at the same time, the opposite side of the neuron receives an identical excitatory stimulus. Will the neuron generate an AP? Almost certainly not. Remember, the graded potential is only a very localized depolarization of the neuron's membrane, and neither stimulus is sufficient to push the neuron's membrane to its threshold.


But if both of those stimulatory inputs occur at essentially the same place, since each of them causes the local membrane potential to change by +10 mV, their cumulative effect will push the membrane potential to -50 mV. And thus, an AP occurs.


A graded potential is a localized change in the membrane potential of a neuron. Since the stimulus triggers the opening of only a relatively small number of sodium channels, the change in membrane potential is not sufficient to trigger the opening of voltage-gated channels. (That is, the threshold is not reached.) And so, there is no Action Potential. The graded potential travels only a short distance before dying out.


Timing matters, as well.

Say an excitatory input is received, pushing the local membrane potential to -60 mV. There will be a graded potential, of course, but that dies out almost immediately, and in 0.001 seconds or so the neuron will have completely repolarized. So if an identical excitatory input is received, say, 0.002 seconds after the first, there's no AP, because the neuron has had time to completely repolarize before the second excitatory input is received.

But if the second excitatory input is received before the neuron has had time to repolarize, then their cumulative effect will be sufficient to push the neuron to its threshold and cause it to generate an AP.



Summation in time and space. Note that Neuron A and Neuron B are releasing excitatory transmitters and so push the local membrane of Neuron D closer to its threshold. Neuron C is releasing inhibitory neurotransmitters and pushes the local membrane of D further from its threshold. Separately, neither A nor B can push D's membrane potential to -55 mV, and so cannot force an Action Potential to occur.

Together, A and B can push D's membrane potential to -55 mV and trigger an AP. But if the input from B is received long enough after that from A, then D's membrane has completely repolarized before the input from B occurs, and so there is no AP. Even if the inputs from A and B are received simultaneously, if they are far enough apart, the areas of local partial membrane depolarization on D do not overlap, and so no AP results.

Since Neuron C is releasing inhibitory neurotransmitters, it can prevent D from generating an Action Potential even if it's simultaneously receiving inputs from A and B.



In other words, without getting into the details of how gated ion channels work (which would take a bit of time), a neuron is often likened to a bunch of dominos. An input can be excitatory and thus makes an individual domino more likely to fall over -- or inhibitory, stabilizing the domino and making it less likely to fall over.

An excitatory input makes the domino wobble a bit, but if it isn't strong enough, it won't destabilize the domino enough to make it fall over. But another such input while the domino is still wobbling might be sufficient to cause the domino to topple.

If the domino does topple, then it knocks over the next domino. And that domino knocks over the next, and so on and so on. And pretty soon, all the dominos are knocked over.

The action potential of a neuron is similar. If you can push any part of the neuron's membrane to its threshold, then an action potential is generated, and it races down the length of the neuron.



If any part of a neuron's membrane can be pushed to its threshold, this triggers the opening of nearby voltage-gated ion channels, which open in sequence. As the ion channels open in sequence, a wave of depolarization moves down the length of the neuron. This is the Action Potential.



Okay, so an action potential is definitely binary. Either the neuron generates one or it doesn't, and under normal circumstances, the AP can not vary in intensity.

On the other hand, whether or not the neuron generates an AP at any given moment is due to inputs -- some excitatory and some inhibitory, and of varying intensities -- from thousands of different sources.



I said "under normal circumstances," by the way. Right after a neuron generates an AP, there is a brief absolute refractory period during which no stimulus --no matter how intense -- will cause the neuron to generate a second AP. This sets a limit on the frequency with which any given neuron can fire.

After the absolute refractory period, the neuron enters a relative refractory period. This occurs because a neuron actually hyperpolarizes as it recovers from generating an AP. During the relative refractory period, since the neuron's membrane is briefly hyperpolarized, it can generate an AP, but it takes a larger-than-normal stimulus to make it do so. Even so, the AP itself is all-or-nothing.


When a neuron is pushed to its threshold, it rapidly depolarizes, generating an Action Potential. Following the AP, during the absolute refractory period, the neuron is depolarized and thus no stimulus can make it generate a second AP. During the relative refractory period, the neuron is hyperpolarized. It can generate an AP during this time, but it takes a larger-than-normal stimulus to force it to its threshold.
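
For the programmers in the thread: the behavior described above is essentially the classic "leaky integrate-and-fire" toy model. Here is a sketch using the numbers from this post -- a caricature of the physiology, not a simulation of it:

Code:
# Leaky integrate-and-fire neuron: rest -70 mV, threshold -55 mV,
# ~1 ms absolute refractory period, graded potentials that die out.

REST, THRESHOLD, RESET = -70.0, -55.0, -75.0  # mV; reset is hyperpolarized
LEAK = 0.5            # fraction of the gap back to rest recovered per 1-ms tick
REFRACTORY_TICKS = 1  # absolute refractory period

def simulate(inputs_mv):
    """inputs_mv: net synaptic input (mV) per 1-ms tick.
    Positive = excitatory, negative = inhibitory. Returns spike times."""
    v, refractory, spikes = REST, 0, []
    for t, inp in enumerate(inputs_mv):
        if refractory > 0:               # no stimulus can trigger an AP now
            refractory -= 1
            v = RESET
            continue
        v += LEAK * (REST - v)           # graded potential decays toward rest
        v += inp                         # summation of incoming inputs
        if v >= THRESHOLD:               # all-or-nothing action potential
            spikes.append(t)
            refractory = REFRACTORY_TICKS
    return spikes

# A single +10 mV nudge decays away; two in quick succession sum in time
# and reach the threshold.
print(simulate([10, 0, 0, 0]))   # [] -- graded potential only
print(simulate([10, 10, 0, 0]))  # [1] -- temporal summation triggers an AP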


Hope this is of some help!

-- Michael

#48 · Dingfod (A fellow sophisticate) · 08-22-2015, 06:52 PM

God was sure sloppy when he didn't hardwire us.

#49 · Ari (I read some of your foolish scree, then just skimmed the rest.) · 08-22-2015, 08:45 PM

One thing that's frustrating about many diagrams is that they often leave out the spines, showing only the synapse or only the whole neuron. The spines are especially important here, since they are one of the big differences between neurons and computer chips.


Most diagrams make it appear that the dendrite reaches out and forms a single synapse with each axon, but in fact spines form on the dendrite, each ending in a synapse. This means that for some connections, especially strong ones, the gap can look much more like a zipper, with the dendrite and axon parallel to each other and many spines forming multiple synapses between the two neurons; weaker connections might have fewer spines connecting them. The real magic comes from the ease with which this allows neurons to form and break connections, modifying the network on the fly. Spines grow and shrink all the time, and along with up- and down-regulation of their neurotransmitters this allows the brain to make and break connections temporarily without expending excess energy or building whole new neurons. Over a short time (minutes to hours) a portion of a network can be readjusted, brought online, or ignored without making any serious changes in the core design.

(While we are talking about the brain side of things, a major pet peeve of mine is when articles call a neurotransmitter the ____ chemical. Like serotonin the happy chemical, or oxytocin the bonding chemical. As should be obvious, all these chemicals do is wander across a gap and help to activate or deactivate the next neuron; that's it. They may be common in specific networks (serotonin could be called the sight chemical, given how heavily it's involved in vision processing), but it's the network that's causing the feeling, not the neurotransmitter. Serotonin is a great example: injected, it can cause burning pain (activating certain pain receptors) and constriction of blood vessels, neither of which sounds very happy to me.)
